Eric03 committed
Commit a8d43fb · verified · 1 Parent(s): 93d003a

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50)
  1. 2103.15573/main_diagram/main_diagram.drawio +0 -0
  2. 2103.15573/paper_text/intro_method.md +75 -0
  3. 2109.09166/main_diagram/main_diagram.drawio +1 -0
  4. 2109.09166/main_diagram/main_diagram.pdf +0 -0
  5. 2109.09166/paper_text/intro_method.md +118 -0
  6. 2112.01853/main_diagram/main_diagram.drawio +1 -0
  7. 2112.01853/main_diagram/main_diagram.pdf +0 -0
  8. 2112.01853/paper_text/intro_method.md +98 -0
  9. 2203.17234/main_diagram/main_diagram.drawio +0 -0
  10. 2203.17234/paper_text/intro_method.md +94 -0
  11. 2205.08096/main_diagram/main_diagram.drawio +1 -0
  12. 2205.08096/main_diagram/main_diagram.pdf +0 -0
  13. 2205.08096/paper_text/intro_method.md +76 -0
  14. 2205.09853/main_diagram/main_diagram.drawio +1 -0
  15. 2205.09853/paper_text/intro_method.md +96 -0
  16. 2205.15624/main_diagram/main_diagram.drawio +0 -0
  17. 2205.15624/main_diagram/main_diagram.pdf +0 -0
  18. 2205.15624/paper_text/intro_method.md +87 -0
  19. 2206.04762/main_diagram/main_diagram.drawio +0 -0
  20. 2206.04762/paper_text/intro_method.md +108 -0
  21. 2207.12141/main_diagram/main_diagram.drawio +0 -0
  22. 2207.12141/paper_text/intro_method.md +129 -0
  23. 2212.08108/main_diagram/main_diagram.drawio +1 -0
  24. 2212.08108/paper_text/intro_method.md +144 -0
  25. 2301.10902/main_diagram/main_diagram.drawio +1 -0
  26. 2301.10902/main_diagram/main_diagram.pdf +0 -0
  27. 2301.10902/paper_text/intro_method.md +149 -0
  28. 2303.01384/main_diagram/main_diagram.drawio +1 -0
  29. 2303.01384/main_diagram/main_diagram.pdf +0 -0
  30. 2303.01384/paper_text/intro_method.md +144 -0
  31. 2303.06121/main_diagram/main_diagram.drawio +0 -0
  32. 2303.06121/paper_text/intro_method.md +76 -0
  33. 2304.02560/main_diagram/main_diagram.drawio +1 -0
  34. 2304.02560/main_diagram/main_diagram.pdf +0 -0
  35. 2304.02560/paper_text/intro_method.md +99 -0
  36. 2305.03088/paper_text/intro_method.md +48 -0
  37. 2305.05873/main_diagram/main_diagram.drawio +1 -0
  38. 2305.05873/main_diagram/main_diagram.pdf +0 -0
  39. 2305.05873/paper_text/intro_method.md +146 -0
  40. 2305.07247/main_diagram/main_diagram.drawio +1 -0
  41. 2305.07247/main_diagram/main_diagram.pdf +0 -0
  42. 2305.07247/paper_text/intro_method.md +259 -0
  43. 2305.09062/main_diagram/main_diagram.drawio +0 -0
  44. 2305.09062/paper_text/intro_method.md +37 -0
  45. 2305.15913/main_diagram/main_diagram.drawio +0 -0
  46. 2305.15913/paper_text/intro_method.md +155 -0
  47. 2306.02845/main_diagram/main_diagram.drawio +0 -0
  48. 2306.02845/paper_text/intro_method.md +92 -0
  49. 2307.09312/main_diagram/main_diagram.drawio +1 -0
  50. 2307.09312/main_diagram/main_diagram.pdf +0 -0
2103.15573/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2103.15573/paper_text/intro_method.md ADDED
@@ -0,0 +1,75 @@
+ # Introduction
+
+ Finding correspondences across images is one of the fundamental problems in computer vision and it has been studied for decades. With the rapid development of digital human technology, building dense correspondences between human images has been found to be particularly useful for many applications, such as non-rigid tracking and reconstruction [15, 14, 45, 16], neural rendering [60], and appearance transfer [75, 70]. Traditional approaches in computer vision extract image features on local keypoints and generate correspondences between points with similar descriptors after performing a nearest neighbor search, *e.g.*, SIFT [37]. More recently, deep learning methods [33, 72, 54, 17] replaced hand-crafted components with fully end-to-end pipelines. Despite their effectiveness on many tasks, these methods often deliver sub-optimal results when performing dense correspondence search on humans, due to the high variation in human poses and camera viewpoints and the visual similarity between body parts. As a result, the existing methods either produce sparse matches, *e.g.*, skeleton joints [10], or dense but imprecise correspondences [19].
+
+ <sup>\*</sup>Work done while the author was an intern at Google.
+
+ <sup>1</sup>Project webpage: <https://feitongt.github.io/HumanGPS/>
+
+ In this paper, we propose a deep learning method to learn a Geodesic PreServing (GPS) feature from RGB images, which leads to accurate dense correspondences between human images through nearest neighbor search (see Figure 1). Unlike previous methods using a triplet loss [54, 23], *i.e.*, hard binary decisions, we advocate that the feature distance between pixels should be inversely correlated to their likelihood of being correspondences, which can be intuitively measured by the geodesic distance on the 3D surface of the human scan (Figure 2). For example, two pixels with zero geodesic distance project to the same point on the 3D surface and are thus a match, while the probability of being a correspondence decreases as the pixels move apart and the geodesic distance grows. While the geodesic preserving property has been studied in 3D shape analysis [55, 29, 42], e.g., shape matching, we are the first to extend it to dense matching in image space, which encourages the feature space to be strongly correlated with an underlying 3D human model, and empirically leads to accurate, smooth, and robust results.
+
+ To generate geodesic distances on the 3D surface for supervision, we leverage 3D assets such as RenderPeople [3] and data acquired with The Relightables [20]. These high-quality 3D models can be rigged and allow us to generate pairs of rendered images of the same subject under different camera viewpoints and body poses, together with geodesic distances between any locations on the surface. In order to enforce soft, efficient, and differentiable constraints, we propose novel single-view and cross-view dense geodesic losses, where features are pushed apart from each other with a weight proportional to their geodesic distance.
+
+ We observe that the GPS features not only encode local image content, but also carry a strong semantic meaning. Indeed, even without any explicit semantic annotation or supervision, we find that our features automatically differentiate semantically different locations on the human surface and remain robust even in ambiguous regions of the human body (*e.g.*, left hand vs. right hand, torso vs. back). Moreover, we show that the learned features are consistent across different subjects, *i.e.*, the same semantic points on other persons still map to similar features, without any inter-subject correspondence data provided during training.
+
+ In summary, we propose to learn an embedding that significantly improves the quality of dense correspondences between human images. The core idea is to use the geodesic distance on the 3D surface as an effective supervision signal and combine it with novel loss functions to learn a discriminative feature. The learned embeddings are effective for dense correspondence search, and they show remarkable intra- and inter-subject robustness without the need for any cross-subject annotation or supervision. We show that our approach achieves state-of-the-art performance on both intra- and inter-subject correspondences and that the proposed framework can be used to boost many crucial computer vision tasks that rely on robust and accurate dense correspondences, such as optical flow, human dense pose regression [19], dynamic fusion [45], and image-based morphing [31, 32].
+
+ ![](_page_1_Picture_5.jpeg)
+
+ Figure 2. Core idea: we learn a mapping from RGB pixels to a feature space that preserves geodesic properties of the underlying 3D surface. The 3D geometry is only used in the training phase.
+
+ # Method
+
+ In this section, we introduce our deep learning method for dense human correspondences from RGB images (Figure 3). The key component of our method is a feature extractor, which is trained to produce a Geodesic PreServing (GPS) feature for each pixel, where the distance between descriptors reflects the geodesic distance on the surface of the human scan. We first explain the GPS feature in detail and then introduce our novel loss functions that exploit the geodesic signal as supervision for effective training. This enhances the discriminative power of the feature descriptors and reduces ambiguity in regions with similar textures.
+
+ Our algorithm starts with an image $\mathbf{I} \in \mathbb{R}^{H \times W \times 3}$ of height $H$ and width $W$, where we first run an off-the-shelf segmentation algorithm [12] to detect the person. Then, our feature extractor takes this image as input and maps it into a high-dimensional feature map of the same spatial resolution, $F \in \mathbb{R}^{H \times W \times C}$, where $C = 16$ in our experiments. The dense correspondences between two images $\mathbf{I}_1, \mathbf{I}_2$ can be built by searching for the nearest neighbor in the feature space, *i.e.*, $\operatorname{corr}(\mathbf{p}) = \arg\min_{\mathbf{q} \in \mathbf{I}_2} d(\mathbf{p}, \mathbf{q}), \forall \mathbf{p} \in \mathbf{I}_1$, where $d$ is a distance function defined in the feature space, and $\operatorname{corr}(\mathbf{p})$ is the correspondence for the pixel $\mathbf{p}$ from $\mathbf{I}_1$ to $\mathbf{I}_2$. In our approach, we constrain the feature of each pixel to be a unit vector, $\|F_{\mathbf{I}}(\mathbf{p})\|_2 = 1, \forall \mathbf{p} \in \mathbf{I}$, and use the cosine distance $d(\mathbf{p}, \mathbf{q}) = 1 - F_{\mathbf{I}_1}(\mathbf{p}) \cdot F_{\mathbf{I}_2}(\mathbf{q})$.
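+
+ As an illustration, here is a minimal NumPy sketch of this nearest-neighbor search; the function name, array shapes, and layout are illustrative assumptions, not code from the paper:

```python
import numpy as np

def dense_correspondences(feat1: np.ndarray, feat2: np.ndarray) -> np.ndarray:
    """feat1, feat2: (H, W, C) feature maps with L2-normalized pixel features.
    Returns an (H, W, 2) array of (row, col) matches into image 2."""
    H, W, C = feat1.shape
    f1 = feat1.reshape(-1, C)   # queries from I_1
    f2 = feat2.reshape(-1, C)   # candidates from I_2
    # With unit vectors, minimizing the cosine distance
    # d(p, q) = 1 - F1(p) . F2(q) equals maximizing the dot product.
    sim = f1 @ f2.T             # (H*W, H*W) similarity matrix
    idx = sim.argmax(axis=1)    # nearest neighbor per query pixel
    return np.stack([idx // W, idx % W], axis=1).reshape(H, W, 2)
```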
+
+ Since images are 2D projections of the 3D world, ideally $F$ should be aware of the underlying 3D geometry of the human surface and be able to measure the likelihood of two pixels being a correspondence. We find that the geodesic distance on the 3D surface is a good supervision signal and thus should be preserved in the feature space, *i.e.*, $d(\mathbf{p}, \mathbf{q}) \propto g(\mathbf{p}, \mathbf{q}), \forall \mathbf{p}, \mathbf{q} \in (\mathbf{I}_1, \mathbf{I}_2)$, where $g(\mathbf{p}, \mathbf{q})$ is the geodesic distance between the projections of the two pixels $\mathbf{p}, \mathbf{q}$ onto 3D locations on the human surface (Figure 2).
+
+ **Network Architecture.** In principle, any network architecture that produces features at the same spatial resolution as the input image could be used as the backbone of our feature extractor. For simplicity, we utilize a typical 7-level U-Net [51] with skip connections. To improve capacity without significantly increasing the model size, we add residual blocks inspired by [77]. More details can be found in the supplementary material.
+
+ Our model uses a pair of images as a single training example. These images capture the same subject under different camera viewpoints and body poses. Both images are fed into the network and converted into feature maps. Multiple loss terms are then combined to compute the final loss function that is minimized during the training phase.
+
+ **Consistency Loss.** The first loss term minimizes the feature distance between ground-truth corresponding pixels: $L_c = \sum_{\mathbf{p}} d(\mathbf{p}, \operatorname{corr}(\mathbf{p}))$. Note, however, that training with only $L_c$ leads to a degenerate solution where all pixels are mapped to the same feature.
+
+ **Sparse Ordinal Geodesic Loss.** To prevent this degenerate solution, previous methods use a triplet loss [54, 24] to increase the distance between non-matching pixels, *e.g.*, $d(\mathbf{p}, \operatorname{corr}(\mathbf{p})) \ll d(\mathbf{p}, \mathbf{q}), \forall \mathbf{q} \neq \operatorname{corr}(\mathbf{p})$. While the general idea is sound and works reasonably well in practice, this loss function penalizes all non-matching pixels equally without capturing their relative affinity, which leads to non-smooth and imprecise correspondences.
+
+ ![](_page_3_Figure_0.jpeg)
+
+ Figure 3. Learning Human Geodesic PreServing Features. We train a neural network to extract features from RGB images. The learned embedding reflects the geodesic distance among pixels projected on the 3D surface of the human and can be used to build accurate dense correspondences. We train our feature extractor with a combination of consistency loss, sparse ordinal geodesic loss, and dense intra/cross-view geodesic loss; see text for details.
+
+ An effective measurement capturing the desired behavior is the geodesic distance on the 3D surface. This distance should be 0 for corresponding pixels and gradually increase as two pixels move apart. To enforce a similar behavior in the feature space, we extend the triplet loss by randomly sampling a reference point $\mathbf{p}_r$ and two target points $\mathbf{p}_{t_1}$, $\mathbf{p}_{t_2}$, and defining a sparse ordinal geodesic loss:
+
+ $$L_s = \log(1 + \exp(s \cdot (d(\mathbf{p}_r, \mathbf{p}_{t_1}) - d(\mathbf{p}_r, \mathbf{p}_{t_2})))), \quad (1)$$
+
+ where $s = \operatorname{sgn}(g(\mathbf{p}_r, \mathbf{p}_{t_2}) - g(\mathbf{p}_r, \mathbf{p}_{t_1}))$. This term encourages the ordering of two pixels with respect to a reference pixel in feature space to be the same as that measured by the geodesic distance on the surface; as a result, a pair of points physically far apart on the surface tends to have a larger distance in feature space.
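+
+ A small PyTorch-style sketch of Eq. (1), under the assumption that the feature and geodesic distances have already been gathered for the sampled triplets (all names are illustrative):

```python
import torch
import torch.nn.functional as F

def sparse_ordinal_geodesic_loss(d_r_t1, d_r_t2, g_r_t1, g_r_t2):
    """d_*: feature distances d(p_r, p_t1), d(p_r, p_t2) per sample.
    g_*: geodesic distances g(p_r, p_t1), g(p_r, p_t2) per sample."""
    # s = sgn(g(p_r, p_t2) - g(p_r, p_t1)): positive when p_t1 is
    # geodesically closer to the reference than p_t2.
    s = torch.sign(g_r_t2 - g_r_t1)
    # log(1 + exp(x)) is the softplus; it penalizes feature-distance
    # orderings that disagree with the geodesic ordering.
    return F.softplus(s * (d_r_t1 - d_r_t2)).mean()
```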
+
+ **Dense Geodesic Loss.** $L_s$ penalizes the ordering of a randomly selected pair of pixels, which, however, does not produce an optimal GPS feature. One possible reason is the complexity of ordering all the pixels, a harder task than the binary classification addressed by the original triplet loss. In theory, we could extend $L_s$ to order all the pixels in the image, which unfortunately is non-trivial to run efficiently during training.
+
+ Instead, we relax the ordinal loss and define a dense version of the geodesic loss between one randomly picked pixel $\mathbf{p}_r$ and all the pixels $\mathbf{p}_t$ in the image:
+
+ $$L_d = \sum_{\mathbf{p}_t \in \mathbf{I}} \log \left( 1 + \exp(g(\mathbf{p}_r, \mathbf{p}_t) - d(\mathbf{p}_r, \mathbf{p}_t)) \right). \quad (2)$$
+
+ This loss, again, pushes features of non-matching pixels apart, depending on the geodesic distance. It does not explicitly penalize a wrong ordering, but it is effective for training since all the pixels are involved in the loss function and contribute to the back-propagation.
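+
+ Continuing the sketch above, Eq. (2) for one reference pixel could look like the following (again with illustrative names):

```python
import torch.nn.functional as F

def dense_geodesic_loss(d_r, g_r):
    """d_r: feature distances from p_r to every pixel, shape (H*W,).
    g_r: geodesic distances from p_r's surface point, shape (H*W,)."""
    # Pushes the feature distance to grow with the geodesic distance;
    # pixels far away on the surface contribute the largest penalties.
    return F.softplus(g_r - d_r).sum()
```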
+
+ **Cross-view Dense Geodesic Loss.** The features learned with the aforementioned loss terms produce overall accurate correspondences, but are susceptible to visually similar body parts. For example, in Figure 6 (top row), the feature always matches the wrong hand, since it does not correctly capture the semantic body part in the presence of large motion. To mitigate this issue, we extend $L_d$ and define it between pairs of images: given a pixel $\mathbf{p}_1$ on $\mathbf{I}_1$ and pixels $\mathbf{p}_2$ on $\mathbf{I}_2$,
+
+ $$L_{cd} = \sum_{\mathbf{p}_2 \in \mathbf{I}_2} \log \left( 1 + \exp(g(\operatorname{corr}(\mathbf{p}_1), \mathbf{p}_2) - d(\mathbf{p}_1, \mathbf{p}_2)) \right). \quad (3)$$
+
+ At its core, the intuition behind this loss term is very similar to $L_d$, except that it is cross-image and provides the network with training data of high variability due to viewpoint and pose changes. We also tried adding a cross-view sparse ordinal geodesic loss but found that it did not improve results.
+
+ ![](_page_4_Figure_0.jpeg)
+
+ Figure 4. Dense correspondences (visualized as optical flow) built via nearest neighbor search and the predicted visibility masks. Our results are more accurate, smooth, and free from obvious mistakes when compared to other methods. On the right, we show the visibility probability map obtained via the distance to the nearest neighbor. Note that our feature successfully captures occluded pixels (*i.e.*, dark pixels) in many challenging cases. The method is effective for both intra-subject (rows 1-3) and inter-subject (row 4) correspondences.
+
+ **Total Loss.** The total loss function is a weighted sum of the terms detailed above: $L_t = w_c L_c + w_s L_s + w_d L_d + w_{cd} L_{cd}$. The weights are set to 1.0, 3.0, 5.0, and 3.0 for $w_c, w_s, w_d, w_{cd}$, respectively; each weight is chosen empirically such that the magnitudes of the gradients from the loss terms are roughly comparable. To encourage robustness across different scales, we compute this loss at each intermediate level of the decoder, down-weighted by $\frac{1}{8}$. As demonstrated in our ablation studies, this increases the overall accuracy of the correspondences.
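+
+ The combination could look like the following sketch; treating the finest level at full weight and every intermediate level at $\frac{1}{8}$ is an assumption for illustration:

```python
def total_loss(per_level_losses):
    """per_level_losses: list of dicts with keys 'c', 's', 'd', 'cd',
    one dict per decoder level, finest level first (assumed)."""
    w = {'c': 1.0, 's': 3.0, 'd': 5.0, 'cd': 3.0}
    total = 0.0
    for level, losses in enumerate(per_level_losses):
        level_loss = sum(w[k] * losses[k] for k in w)
        # Intermediate decoder levels are down-weighted by 1/8.
        total += level_loss if level == 0 else level_loss / 8.0
    return total
```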
+
+ Our whole system is implemented in TensorFlow 2.0 [4]. The model is trained with a batch size of 4 using the ADAM optimizer. The learning rate is initialized at $1\times 10^{-4}$ and decays to 70% of its value every 200K iterations. The whole training takes 1.6 million iterations to converge.
2109.09166/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2021-03-15T19:43:53.186Z" agent="5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36" etag="Cgpb8sfT-eNz0uOvAw-d" version="14.4.8" type="google"><diagram id="VXzkNqfG8osel0M0zl1h" name="Page-1">7V1tc5u4Fv41ntm9M9boBQR8zFvbve3eeps7t9tPGWzLDg0GL8Zx0l9/JZBsQMI2Djhu6kwmNhIcCc6jR+ccHZQeuZo9vU/8+f2f8ZiFPQzHTz1y3cPYpZj/FQXPeYFto7xgmgTjvKhQcBv8YLIQytJlMGaL0olpHIdpMC8XjuIoYqO0VOYnSbwqnzaJw3Krc3/KtILbkR/qpV+DcXovb8uGm/IPLJjeq5YRlDUzX50sRSzu/XG8youyc8hNj1wlcZzm32ZPVywUz049l1zQu5paJTV9Vl0N49EDE7WoRy5V9xIWpfuI+2uKPy/HX7//8TyIvv34chcM/v2xj6TyHv1wKRsZB/408Wda403bI/eX8x/s4/Xwg/396hv6vBo+3vQx1eSyMVeHPIyT9D6expEf3mxKL5N4GY2z+4b8aHPOpziey4fxnaXps8SWv0xjXnSfzkJZy8JhppdHlqQB1zwvWqRJ/MCu4jBOsm4QSihzsPG5bn1yovsFfcln8Z7FM5Ymz/yEhIV+GjyW0eZL0E7X560vHcQBbxhDOb5s4gJke5ufXIAcbHwUAQtuKl27LH8RL5MRkyKLyqq24tjbWnHw1lZSP5myVGvlIkn858Jpc3HCYsutWvxW6T73V3sbBgnmvksJ/EveS3VUUN2mKEN2E5Q72qiaJ2wcjNIgjowD4JM/ZGEZtH4YTCP+fcRxyDhILxV6L2TFLBiP8/HBFsEPf5jJEyNEPmcu3L7s2dfZCEr9rG1y3fegEeRbh6tomj2Z6Fg2W2K8EvzlVRBg6EjYvHBE9DFApKTjPrLWaldy4slkwdKKrlvRLjpzWEONEegB5FQGNgIHchWxqCbNIVVpLXESQVu6XttDw0XrHnbKPOitMA86SeZBwIGlH1zhIQqUoXQEHnJPnoe+L2dz1Z8ojtgJUpPtAOx0bV4Rz9rWynHMK0L5rbovMa9MEl7BvELeWyE59xRJ7pXNK2ydPK2dGIdZHgTQbsu8srGjSevMvLLhlq7X9tBw0XHMK2y/EeZRg+y0mIdUrWZhUCmSbJV55sM7ehfbT18nE5J8+BH/854FfWITTb2aUjdMI3S5ug9Sdjv3R6J2lfjzsqoncZSWCMSxPCLKgzBU5dIuGvuL+7VYjXqGeMhGpDn1aPqt1SPyXOCh8qCCGCjErzahUcuVAdD7QlhUBgON2i7pqalS8HqSLCqFhql8vLxmkj9+XvrPUsRaL/8bzBgnHfgftuJ/v8QzP9pU8m9T8XnL+OPG8KMSxnuXy8urX674d/4sCMWj/MDCRyZGuKyQkxAimqYlFvxkJM+xoRktmRjZN9HenCX8njmdiC4F0ZQXW52iBRLgViiYG4eQaK5RATg2BZDq0FFjvnXoELgLOkrVJThtdK4gIyr6i0wjF/wEROZPOp4+z9Ngxuk6nwpqMJUXD5NqySmAj9PMhXUJL29KCOTDrww3WA+3IlC5tPGEUeZ1CUMs5gerDENCgLAQ1I+lE5jrAoIMFGat2a4DKOo+S09Mwhbr2Tc95/Jjz7m+458p/+xh3gj1Z0Kt0XAxNyPH4SbA1XDCP1fiGv6bCfqXJgkOjDVW9puLXcwzhtwN0vKJFZBypaVlJFYNGGF3LySuGllCuuVjIEUjl5Y9g86QSCjWVh7gTiQixzCVkq6mUqKWXM98+Eb5kHhUm5ZPkw8RquFDA6/pKLFvPjYiy0H5ipz6zuy1iSBAp7z8W1m8sCGwt5p2Jh6zu+MxbADPmcfeDo/ZLgdcIapciZefKKXp0QMAwJlninpFEDionme4Yk0LIQXFkmPGHojyLXZb7VjMWtcsTP31nNTcYNeFGGz3yklnM77VidATK/cbaqmshGBo7WKe9bR3JJNej0gfYkcZ4Lu/SWVE5BlUheQ6WAZVQ+MKoaNaVyrovZP0+k0xs5sCNZE6AfbP9NcZUhGhwIP18/Me9Gcdlf6wacWmMf01BvKg5tIz9WkBWlgGVFPqg0elPssEp64cSxNFCQC9L7Par+1rXr+7oTed+prIqczH1N4BSnxUH8Qy+SAdYrJCfbDLIrkGe8tCJlMojtr8uaiFor3pqO7EX4TJ1lGzQs5OTcpOS8xGLQvY1dx1AiykT7Oc81SsozTTeuuzOyA2k+uaA4C34POa0b2fiFyfIo0t00nfrZ9Mc9PeQH+jXBOC+pLp8DeYexfq4/cc3RlHTiRYLrKHscGLqq9waKEib1TURHEyy5IIVd1KPlVRacFc7TBkKUdNf6FgU71SWI59aQ2KurVBqOoCPgwiKRWqvmQ1aeJHiwmXpaTK5Gu4ipNxucX1hUN/9DDNxla/8rSwsLPFg8pM+fwLkc9sHCzmoS+fVxCFgWppEsZ+WmleV5swdR6kjydsaHLtJ9NZEN09/MZN68Vydpeb8mkQjplawRG/l8Idfairvf5ds6L28fp2cMkRh64NLLc+Ld0BjlWIk1LDiDYYKbQ7I8XbQtIHpuTW59/WpdWxpyD9WzQIXNuTx9+yYxtTeXz9JHuUHTwXDgaKxg9zivL82n28i67zgZGHtFQ+h2orH/smBGOkZwYaxNVkBLeWnKneM6/JCzf6tYckgke8nzmIqOWpghxFnuOqgg2MsqPn4lEVSM2B3BB6XQOKulBDAKegQwFFRZZCcZlnp+TOsdUgjJRPVcUpq2Hm1mCHiF8ypFmYWTudWPkAK8XicQV75YAU0edV0wJkd/FNanL+qywYjS/EPhhCX6G/WASj9uZRuziLbp1B26asYt7wlhjgC5nNsR399V4XqF42ZTbH2Utc53RW71m9KGU+S2c4HRf9oGz5LQ770ax7U/Y8JjuY5+jZ81RPWT7ns1TyLbEDaCXr99RyWByqqTG3MebnBaw9hqrllhewym8bEIRMr0gZl606VLG+60NRxWfNGscutwSht482TcmtHSrT9NJSQZnnbJtdOYYe4A5rTbZN3YA1pdh0qGPT20ANzfpa433ny/LFXTm48WRyEQ593/REzHrkWdWARVHxzSNgVIt/2Lq4lt6JN3S+0Fpb77ebgemq4day5/BHNF+KqzG/EA7iBfvp/YjXeOuWA6PiNhCEgWNIf7eP7Cu4LcQpDie0QswWElKO2UK8juo2idm+fY7kYx14hWQ+WEUWAmKv08o02nznIwpwYamqmgCGgGXpfWh756Pttyp6UbpVcgymNZl55xFzyiPGpjbwsMa/tACdyi5J+2+/6gGM6kcJh6+lobftrXj0uxMN02MPDGX6nMNO9a4rQcClBT4rh
5wJdE0+znHDTgpKZzVu8VQJoLjWU4XOCWhxV0L0gf7A52WaOwQDP/HzWaSQW5j/5SXk7C68ZK2TEkBI7axiucDWlxk8A77a8BuM25ARfSvllydwbJKALLWYKZOAOE9uW8I8bKG0agcZjTZe+C4IVRfbNn9eaNdgKvKFCjiprmhw84NotY2DKDtawVtbaW/J1AxEp30gHgCnnxI6lNYrVewJhje1zqEBuB3N2Nub6Ro8+kbMZxY7/i6olqXlXrTDXCbJr8tWp5Ft+xNChGgubksMZRL9uqxknW2rE4AcxRTY9Tu/t0NQOxp5Va6yzpbVwcAhuFanLdHWjlaOxmCPV6H9Py9+/vTn+O/Vl2QSDW6f+3oEqZXYg3zpFL39mIJhd+898d0kdRE5EBRikZ1u/MsPN/9PL8fZ5p8Skpv/Aw==</diagram></mxfile>
2109.09166/main_diagram/main_diagram.pdf ADDED
Binary file (38.1 kB).
 
2109.09166/paper_text/intro_method.md ADDED
@@ -0,0 +1,118 @@
+ # Introduction
+
+ Dance represents a special genre of human activity. Our goal in this paper is to develop algorithms that understand dance videos. We combine estimation of body movements with their feasibility as part of a dance. This enables interpretation of dance videos using not only the constraints posed by the data but also those posed by domain knowledge.
+
+ A variety of proposed methods have also focused on dance videos [\[1–](#page-8-0)[6\]](#page-8-1). Most of these rely on Kinect sensors to obtain depth information [\[1,](#page-8-0) [2\]](#page-8-2). [\[3\]](#page-8-3) classifies Indian dances by extracting patches centered at the body's joint locations and using an LSTM network for classification. [\[4\]](#page-8-4) proposes to perform Laban Movement Analysis (in terms of the dance domain constructs of Body, Effort, Shape and Space) to describe human motion from a pose sequence. [\[5\]](#page-8-5) compares the effects of using three different representations - raw images, optical flow, and multi-person pose data - on their proposed dance dataset, showing that visual information alone is not sufficient to classify motion-heavy categories. There are several approaches to action recognition that first estimate poses [\[7–](#page-8-6)[9\]](#page-8-7). [\[7\]](#page-8-6) creates a coaching system for personalized athletic training based on pose correctness. [\[8\]](#page-8-8) improves action recognition performance by improving pose estimation accuracy using additional spatial and temporal constraints. However, [\[7,](#page-8-6) [8\]](#page-8-8) both estimate only 2D poses, leading to ambiguity when the movements are along the viewing direction. [\[9\]](#page-8-7) estimates both 2D and 3D poses as well as image features to predict actions from all three. [\[7–](#page-8-6)[9\]](#page-8-7) limit their representation for action recognition to pose sequences without including any higher-level semantics that may define the action. Moreover, these methods also require pose annotations in training videos. [\[6\]](#page-8-1) embeds RGB and optical-flow values into a single two-in-one stream network for more efficient dance genre classification. In addition to features such as pose and optical flow used in these works, in this paper we use dance domain representations to tune feature analysis to dance instead of keeping it generic.
+
+ When people dance, they follow a carefully choreographed sequence of 3D *movements*, where each movement is hierarchically composed of simpler movements, ending in *basic movements*. Each basic movement is composed of a sequence of poses representing a specific dance pattern. For brevity, in what follows, we refer to basic movements simply as movements. We identify movements of 16 main *body parts* $e \in E$, illustrated in Figure [3,](#page-4-0) following Labanotation [\[10\]](#page-8-9), a well-known notation system used to record and archive human motion.
+
+ <sup>\*</sup>The support of the Office of Naval Research under grant N00014-20-1-2444 and the USDA National Institute of Food and Agriculture under grant 2020-67021-32799/1024178 is gratefully acknowledged.
+
+ <span id="page-1-1"></span>![](_page_1_Figure_0.jpeg)
+
+ Figure 1. Overview of the model architecture. Given a sequence of video frames $\{I_t\}_{t=0}^{T-1}$, the model analyzes the content in a hierarchical manner, from the low levels (pose estimation & tracking) to the cognitive levels (movement and dance genre recognition). The input sequence $\{I_t\}_{t=0}^{T-1}$ forms the first (bottom) level. At the second level, our algorithm simultaneously estimates the 2D pose $\hat{p}_t^i$ and 3D pose $\hat{P}_t^i$ of each dancer $i$ ($i = 0, \dots, N-1$) at each frame, as well as the camera projection parameters. Our algorithm works under occlusions, e.g., among dancers. At the third level, each dance movement $\hat{y}_t^e$ of each body part $e \in E$ (defined over a sequence of frames) is recognized, and its location, given by, e.g., its starting frame $t$ and length, is estimated based on the poses estimated for the previous frames. At the fourth level, the dance genre $\hat{g}$ is recognized based on the movements $\{\hat{y}_t^e\}_{e \in E}$ of all body parts.
+
+ In Table [1](#page-2-0) we list the basic movements $y^e \in Y^e$ for each body part $e \in E$, again following [10], defined in terms of homogeneity of motion, direction, and level, which are frequently used to describe dance in the dance domain. Our dance recognition model adopts this hierarchy used by dance experts, which starts with the 3D pose sequence of the dancer, combines subsequences of joint displacements into dance movements, and finally infers the dance genre from the sequences of the movements of the joints. To help the model segment the pose sequence into basic movements, we manually annotate the starting and ending positions of such movements for each body part for a subset of videos in the UID dataset. Our framework takes a raw dance video sequence $\{I_t\}_{t=0}^{T-1}$ as input, estimates poses $\hat{p}_t$ for each frame $I_t$, recognizes the movement $\hat{y}_t^e$ (over multiple frames) of each body part $e$ based on its past pose sequence, and then predicts the dance genre $\hat{g}$ from the movement sequence. Experiments show that our hierarchical feature analysis is an effective way to recognize dance and that our method outperforms the state of the art on F-score.
+
+ The main contributions of this paper are as follows:
+
+ - We propose the first dance video understanding framework that analyzes the videos hierarchically, from the bottom level of video frames, through the middle level of human poses, to the highest level of movements and associated dance genres.
+ - Our algorithm tracks and outputs the 2D pose of each dancer in each frame in the presence of occlusions among dancers.
+ - We propose an unsupervised 3D pose estimation algorithm that starts with the estimated 2D pose sequence, and simultaneously and iteratively updates 2D poses, 3D poses and 3D-to-2D projection parameters using a single camera, without using ground truth for these poses or parameters. Our 3D pose network achieves state-of-the-art performance by incorporating kinematic constraints of a 34-DOF human skeletal model and temporal smoothness of motion.
+ - We have curated a large dance video dataset, containing pose ground truths for each video frame as well as for each movement, which we will share with the community for further exploration.
+
+ # Method
+
+ Figure 1 describes the components of our approach to dance video recognition and the hierarchy they form. Our approach can be summarized in the following steps. Step 1: For each input frame $I_t$, the model estimates the 2D pose $p_t^i$ of each dancer $i$ appearing in $I_t$. The model tracks approximate locations of the dancers $\{i\}_{i=0}^{N-1}$ throughout the video via their bounding boxes $\{B_t^i\}_{t=0}^{T-1}$. Step 2: At each frame, the model provides an estimate $\hat{p}_t^i$ of the 2D pose $p_t^i$ of the dancer associated with each tracked box $B_t^i$ (Section 2.1). Step 3: The model then estimates 3D poses $\hat{P}_t^i$ from the estimated 2D ones $\hat{p}_t^i$, by using an unsupervised 3D pose estimation method (Section 2.2). Step 4: The model uses an LSTM network to recognize the movement $\{\hat{y}_t^e\}_{t=0}^{T-1}$ of each body part $e \in E$ (e.g., head, torso, etc.) from the trajectories $\{\{\hat{P}_t^j\}_{j\in J_e}\}_{t=0}^{T-1}$ of all the joints $j\in J_e$ connected to the body part $e$, where $J_e\subset E$ (Section 2.3). We represent any given state of a dance as a set of body part configurations and the entire dance as a sequence of such sets. Step 5: For recognition, we first concatenate the movements $\{\{\hat{y}_t^e\}_{e\in E}\}_{t=0}^{T-1}$ of all body parts and input them to an LSTM network to recognize the dance genre $\hat{g}$ (Section 2.4). The rest of this section introduces the components of this hierarchy.
+
+ <span id="page-2-4"></span>
+
+ | **Body Part** | Examples of Movement Labels | # Labels |
+ |---------------|-----------------------------|----------|
+ | Head | Head Turning Up; Head Turning Down; Head Turning Left; Head Turning Right; Head Circling | 7 |
+ | Neck | Neck Moving Left; Neck Moving Right; Neck Circling; Head Keeping Still; Unknown | 5 |
+ | Left Shoulder | Left Shoulder Moving Upward; Left Shoulder Moving Downward; Left Shoulder Circling | 5 |
+ | Left Lower Arm | Left Arm Moving Upward; Left Arm Moving Downward; Left Arm Moving Left | 11 |
+ | Left Upper Arm | Left Arm Moving Upward; Left Arm Moving Downward; Left Arm Moving Left | 11 |
+ | Torso | Torso Bending; Torso Unbending; Torso Turning Left; Torso Turning Right; Torso Swing; Somersault | 10 |
+ | Hips | Hips Waving; Hips Figure 8; Hips Circling; Hip Moving Up; Hip Moving Down; Hips Keeping Still | 10 |
+ | Left Lower Leg | Left Leg Moving Upward; Left Leg Moving Downward; Left Leg Moving Left | 15 |
+ | Left Upper Leg | Left Leg Moving Upward; Left Leg Moving Downward; Left Leg Moving Left | 15 |
+ | Left Foot | Left Foot Extension; Left Foot Flexion; Left Foot Relaxed; Unknown | 4 |
+
+ Table 1. Selected examples of movement labels for each body part. To save space, only the movements of the left body parts are shown; the movements of the right body parts mirror the left ones. There are 16 body parts and 154 movement labels in total.
+
+ <span id="page-2-2"></span><span id="page-2-3"></span>To estimate a 2D (or 3D) pose, we estimate the 2D (or 3D) coordinates of each body joint. Classical pose estimation methods such as the pictorial structures framework and deformable part models largely rely on hand-designed features to determine body joint locations. Recently, deep learning-based approaches have achieved a major breakthrough in solving the problems in multi-person pose estimation (e.g., how to group keypoints for different people). They can be divided into top-down [11, 12] and bottom-up [13–15] approaches. The former employ detectors to first locate person instances and then their individual joints; the latter first estimate all joint locations within the image and then assign the joints to the associated person. Although these methods provide superior pose estimates, they have two major shortcomings critical to our task. Firstly, most pose estimation methods cannot track a dancer through the video when multiple dancers are present, because they perform pose estimation on individual images, ignoring temporal information. Besides, these methods are mostly trained on large datasets in which the dance parts are very small, with a single person, limited pose variety, and clean backgrounds, and therefore cannot guarantee accuracy on real-world dance videos. The method we propose can track selected dancers, detect estimation errors, and correct them automatically.
+
+ Object Tracking: As explained in Algorithm 1, our tracking algorithm is built upon the LDES tracker [16]. Since occlusion between dancers is a serious problem, our algorithm centrally addresses it. The three stages of our algorithm are: (1) Use the LDES tracker to track each dancer $i$ when the dancer has no overlap with other dancers, while maintaining a color histogram $h_t^i$ and a bounding box $B_t^i = (x_t^i, y_t^i, w_t^i, l_t^i)$ for the dancer. (2) Detect the occurrence of overlap by detecting failure of the tracker, as indicated by a significant difference between the directions of motion before and after the overlap. (3) Predict the time and the location of the dancer when the overlap may be expected to end, from the location and velocity observed just before the beginning of the overlap. Since multiple dancers may be detected in the vicinity of the predicted location in the predicted frame, select the one that provides the best histogram match, and update $h_t^i$ and $B_t^i$ accordingly.
+
+ **Tracking Based 2D Pose Estimation:** As explained in Algorithm 2, we obtain the initial 2D poses using the OpenPose method [15]. After we obtain the bounding box $B_t^i$ for each dancer $i$ at the end of an overlap, the box $B_t^i$ may overlap with multiple boxes simultaneously, indicating multiple 2D pose estimation results. We select the pose $\hat{p}_t^i$ whose histogram is most similar to that of the pose $\hat{p}_{t-1}^i$ seen in the previous frame.
+
```
Algorithm 3: 3D pose and projection initialization
Input: a sequence of 2D poses {p_t}_{t=0}^{N-1} of a dancer
Output: a sequence of 3D poses {\hat{P}_t}_{t=0}^{N-1} of the dancer
Set the temporal window size to 2\Delta
Denote the total number of segments as s = floor(N / (2\Delta))
for t = \Delta to N - \Delta do
    for k = 0 to K - 1 do
        Try a new seed for DH parameters \Lambda^k and perspective projection parameters \omega^k
        for i = t - \Delta to t + \Delta do
            Generate 3D pose \hat{P}_i^k = G(\Lambda^k)
            Estimate 2D pose \hat{p}_i^k = \Psi(\hat{P}_i^k; \omega^k)
            Compute error e_i^k = ||\hat{p}_i^k - p_i||_2^2
            Optimize \Lambda^{*k}, \omega^{*k}
        end
    end
    Select the 3D pose corresponding to seed k^* = argmin_k \sum_{i=t-\Delta}^{t+\Delta} e_i^k as the initialized pose
end
```
+
```
Algorithm 4: 3D pose estimation
Input: a sequence of video frames {I_t}_{t=0}^{T-1}, 2D poses {p_t}_{t=0}^{T-1}, and initial 3D poses {P^*_t}_{t=0}^{T-1} of a dancer
Output: a sequence of estimated 3D poses {\hat{P}_t}_{t=0}^{T-1} of the dancer
while a new frame I_t is available do
    Estimate 3D pose \hat{P}_t
    Project to 2D pose \hat{p}_t
    Compute loss L = \alpha(||\hat{p}_t - \hat{p}_{t-1}||_2^2 + \beta||\hat{P}_t - \hat{P}_{t-1}||_2^2) + ||\hat{p}_t - p_t||_2^2 + ||\hat{P}_t - P^*_t||_2^2
    Update \omega^{2D} and \omega^{3D}
end
```
+
+ <span id="page-3-2"></span>Towards our objective of using dance representations close to those used by experts, we need to use 3D, instead of 2D, pose sequences. Similarly, for recognition using the language of dance experts, we need to extract descriptors of 3D movements from the 2D pose sequences, which constitutes our method's next stage. Computationally, too, 3D poses contain more information than 2D poses, and thus lead to more accurate dance recognition. However, predicting 3D poses from 2D poses is an ill-posed problem, like other 2D-to-3D problems. The state-of-the-art methods [17–19] use a two-step pipeline to solve it: first detect 2D poses from video frames, and then predict 3D poses by learning the correspondences of 2D and 3D key points. [20] provides a simple yet effective baseline showing that the 2D-to-3D task can be solved with a remarkably low error rate. [21] learns a mapping from a distribution of 2D poses to a distribution of 3D poses using an adversarial training approach. However, [20, 21] estimate 3D poses from 2D poses estimated from individual 2D frames, which ignores temporal continuity information. [22, 23] use temporal correspondences of 2D keypoints to both learn the joint angles and predict the joint locations. They compute a loss in terms of the distance between these key points and those back-projected using the estimated 3D pose. They enforce such geometric consistency to progressively refine the estimates of 3D poses. However, these methods are based on the assumption that the input 2D poses are accurate. [23] proposes a 2D pose correction module which uses a temporal CNN to refine the initial 2D inputs. However, this assumes that ground-truth 2D poses are available to train the correction module. These assumptions are often restrictive in practice, and do not hold for our dance videos, which are collected from the internet. [24] relates detected 2D poses across frames based on tracking-by-detection and then recovers 3D pose in a Bayesian framework. However, their MAP estimation is not robust if the video is long or the background changes dramatically. [25] proposes a method to cope with occlusion: it first infers 3D locations of the visible body joints and then reconstructs the occluded joint locations using learned pose priors and a kinematic skeletal model. [26] fits a parametric human model (SMPL) to observed image key points and segments, along with some additional constraints. However, [25, 26] require 3D pose labels and/or shape to supervise the training, which are not available for our "in the wild" video dataset. [27, 28] estimate 3D pose from in-the-wild images without 3D pose annotations, but they require either additional 2D pose datasets or a multi-view setting. To avoid these requirements and the need for ground-truth 2D poses, and to improve computational robustness, we propose an algorithm that integrates 3D pose estimation with 2D pose correction, which can be trained to converge on both estimates simultaneously while also estimating the camera projection parameters consistently.
+
+ <span id="page-4-2"></span>![](_page_4_Figure_0.jpeg)
+
+ Figure 2. Overview of the proposed 3D pose estimation method. Given a sequence of video frames $\{I_t\}_{t=0}^{T-1}$, the dancers are tracked by our tracking algorithm in Algorithm 1, and each of their 2D poses $\{p_t^i\}_{i=0}^{N-1}$ is estimated by our tracking-based 2D pose estimation algorithm in Algorithm 2. Then, based on the 2D poses $\{p_t^i\}_{i=0}^{N-1}$, we initialize their 3D poses and camera perspective projection parameters, $P_t^*$ and $\omega^{*2D}$, as shown in Fig. 2 (top) and Algorithm 3. Finally, a neural network is trained to estimate the 3D poses $\{\hat{P}_t\}_{t=0}^{T-1}$, incorporating kinematic constraints and spatiotemporal smoothness of motion, as described in Algorithm 4.
+
+ We use the Denavit-Hartenberg (DH) parameters $\Lambda^k = \{\Theta^k, d^k, a^k, \alpha^k\}$ to represent the 3D pose. A 3D pose $\hat{P}_i^k$ is generated by passing $\Lambda^k$ to the 34-DOF kinematic model $G$ as follows:
+
+ $$\hat{P}_i^k = (J_0, J_1, ..., J_{24}) \tag{1}$$
+
+ $$J_j = G(\Theta, d, a, \alpha) = \mathcal{T}_{\Theta} \mathcal{T}_d \mathcal{T}_a \mathcal{T}_{\alpha} J_{j-1} \tag{2}$$
+
+ where
+
+ $$\mathcal{T}_{\Theta}\mathcal{T}_{d}\mathcal{T}_{a}\mathcal{T}_{\alpha} = \begin{bmatrix} \cos\Theta & -\sin\Theta\cos\alpha & \sin\Theta\sin\alpha & a\cos\Theta\\ \sin\Theta & \cos\Theta\cos\alpha & -\cos\Theta\sin\alpha & a\sin\Theta\\ 0 & \sin\alpha & \cos\alpha & d\\ 0 & 0 & 0 & 1 \end{bmatrix}$$
+
+ <span id="page-5-4"></span>where $\mathcal{T}_{\Theta}$, $\mathcal{T}_d$, $\mathcal{T}_a$, and $\mathcal{T}_{\alpha}$ are transformation matrices, and $J_j$ is the 3D location of joint $j$.
+
+ <span id="page-4-1"></span>![](_page_4_Figure_8.jpeg)
+
+ <span id="page-4-0"></span>Figure 3. Our 34-DOF digital dancer model. The values of the DH parameters $\Lambda = \{\Theta, d, a, \alpha\}$ of this model are listed in Table 7 in the Appendix. The bounds of the joint rotation offset angles $\theta$ and bone lengths $b$ are defined in Table 8 in the Appendix.
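+
+ For concreteness, a small NumPy sketch of the DH transformation above (a utility sketch, not code from the paper):

```python
import numpy as np

def dh_transform(theta: float, d: float, a: float, alpha: float) -> np.ndarray:
    """4x4 Denavit-Hartenberg transformation built from the parameters
    (Theta, d, a, alpha), matching the matrix above."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Joint j's 3D location follows from the parent joint via Eq. (2):
# J_j = dh_transform(theta, d, a, alpha) @ J_{j-1}  (homogeneous coordinates).
```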
+
+ We initialize the desired estimates of the 3D pose $P_t$ and the 3D-to-2D projection parameters $\omega^{2D}$ with multiple randomly selected seed pairs $\{\Lambda^k, \omega^k\}$ (to sample the search space), as explained in Figure 2 (top) and Algorithm 3. $\omega^k = \{f^k, c^k\}$ are the perspective projection parameters. At frame $t$, we sample $K$ seeds of the DH parameters to generate 3D poses $\{\hat{P}_i^k\}_{i=t-\Delta}^{t+\Delta}$ in a sliding window of size $2\Delta$ centered at $t$, together with the 3D-to-2D projection parameters $\omega^{2D}$. By comparing the reconstructed 2D pose $\hat{p}_i = \Psi(\hat{P}_i^k; \omega^{2D})$ projected from the generated 3D pose with the input 2D pose $p_i$ estimated in Section 2.1, we optimize the DH parameters $\Lambda^k$ generating the 3D pose $\hat{P}_i^k$ while enforcing: (a) constraints that govern the joint rotation offset angles $\theta^k$, (b) consistency with the known bone lengths $b^k$, and (c) temporal smoothness of both the 2D and 3D poses. This is achieved by training with a loss function consisting of two parts: (1) temporal smoothness of both the 2D and 3D poses, $\alpha(||\hat{p}_t - \hat{p}_{t-1}||_2^2 + \beta||\hat{P}_t - \hat{P}_{t-1}||_2^2)$; and (2) preservation of the 3D-to-2D projection (imaging) property, $||\Psi(\hat{P}_t;\omega^{2D}) - p_t||_2^2$. The coefficients $\alpha$ and $\beta$ are chosen to be inversely proportional to the error: the larger the error, the smaller the weight of the window. We also enforce constancy of the 3D-to-2D projection parameters by smoothing them over a time window. At each time step $t$, we update the 3D pose $\hat{P}_t$ and the projection parameter $\omega^{3D}$. From among the solutions obtained using the different seeds, the pair $\{\hat{P}_t^*, \omega^{*2D}\}$ corresponding to the seed with the least error is selected.
+
+ As shown in Figure 2 (bottom), after obtaining the initial 3D poses $P_t^*$ and the 3D-to-2D projection parameters $\omega^{*2D}$ from the 3D Pose Initialization block, we train temporal convolutional networks to learn the mapping from the input 2D poses $\{\hat{p}_t\}$ to the 3D ones $\{\hat{P}_t\}$. We use [17] as our baseline network. During training, in addition to the consistency between 2D and 3D poses at all times, we again enforce temporal smoothness of motion with the loss function defined as follows:
+
+ <span id="page-5-2"></span>
+ $$\mathcal{L} = ||\hat{p}_t - p_t||_2^2 + ||\hat{p}_t - p_{t-1}||_2^2 + ||\hat{P}_t - P_t^*||_2^2 + ||\hat{P}_t - \hat{P}_{t-1}||_2^2 \tag{3}$$
+
+ where $\hat{p}_t = \Psi(\hat{P}_t; \omega^{*2D})$. See details in Algorithm 4.
+
+ To further improve accuracy when limited labeled 3D ground-truth pose data are available, we introduce a semi-supervised training version of the proposed pose estimation method. A supervised loss is trained using the available labeled ground-truth 3D poses $P_t$ as targets, and the loss in Equation (3) is applied to the remaining unlabeled data. Here, the predicted 3D poses $\hat{P}_t$ are projected back to 2D joint coordinates for consistency with the 2D input $p_t$. Similar to the training strategy in [17], we jointly optimize the supervised component with our unsupervised component during training, with the labeled data occupying the first half of a batch and the unlabeled data occupying the second half.
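+
+ One way such a half-labeled batch could be assembled (an illustrative sketch; the batching details and names are assumptions):

```python
import torch

def semi_supervised_step(model, p2d_labeled, P3d_gt, p2d_unlabeled, unsup_loss_fn):
    """Labeled data occupies the first half of the batch,
    unlabeled data the second half (as described above)."""
    p2d = torch.cat([p2d_labeled, p2d_unlabeled], dim=0)
    P3d = model(p2d)
    n = p2d_labeled.shape[0]
    # Supervised loss on the labeled half: direct 3D targets.
    sup = (P3d[:n] - P3d_gt).pow(2).mean()
    # Unsupervised loss on the unlabeled half: the Eq. (3) terms,
    # passed in as a callable.
    return sup + unsup_loss_fn(P3d[n:], p2d_unlabeled)
```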
+
+ For each body part $e$, we train an LSTM-based model to recognize its (basic) movement. During training, the input is a sequence of 3D poses $\{\{\hat{P}_t^j\}_{j\in J_e}\}_{t=0}^{T-1}$ of all the joints $j\in J_e$ connected to the body part $e$, and the output is a sequence of predicted movement labels $\{\hat{y}_t^e\}_{t=0}^{T-1}$ of $e$. Since this is a multi-label classification problem, meaning the poses $\{\hat{P}_t^j\}_{j\in J_e}$ connected to the body part $e$ may map to multiple movement labels $\hat{y}_t^e$ of $e$ at the same time, we use the Binary Cross Entropy (BCE) loss between the predicted movements $\{\hat{y}_t^e\}_{t=0}^{T-1}$ and the target movement labels $\{y_t^e\}_{t=0}^{T-1}$. This loss is minimized during training to obtain the optimal model. During testing, the trained model of each $e\in E$ takes a sequence of 3D poses $\{\{\hat{P}_t^j\}_{j\in J_e}\}_{t=0}^{T-1}$ of all the joints connected to $e$ as input, and predicts the movement $\{\hat{y}_t^e\}_{t=0}^{T-1}$ of $e$.
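+
+ A minimal PyTorch sketch of such a per-body-part movement recognizer; the class name, dimensions, and defaults are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MovementRecognizer(nn.Module):
    """Per-time-step multi-label movement classifier for one body part."""
    def __init__(self, n_joints: int, n_labels: int, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3 * n_joints, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_labels)

    def forward(self, joints):  # joints: (B, T, 3 * n_joints) 3D trajectories
        out, _ = self.lstm(joints)
        return self.head(out)   # per-frame logits, (B, T, n_labels)

# Multi-label targets -> BCE with logits, as described above.
criterion = nn.BCEWithLogitsLoss()
```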
+
+ Analogous to the approach in Section 2.3, we train an LSTM model that takes a sequence of movement labels $\{\{\hat{y}_t^e\}_{e\in E}\}_{t=0}^{T-1}$ of all the body parts $e\in E$ as input. We use the output of the last time step of the last layer as the prediction of the dance genre $\hat{g}$. For the loss function, we use the cross entropy between the predicted dance genre $\hat{g}$ and the target dance genre $g$. We describe the movement and dance genre recognition in detail in Algorithm 5 and Algorithm 6 in the supplementary document.
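+
+ The genre head differs from the sketch above mainly in using only the last time step and a single-label cross-entropy loss (again, names are illustrative):

```python
import torch
import torch.nn as nn

class GenreRecognizer(nn.Module):
    """Video-level dance genre classifier over concatenated movement labels."""
    def __init__(self, n_movements: int, n_genres: int, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_movements, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_genres)

    def forward(self, movements):  # movements: (B, T, n_movements)
        out, _ = self.lstm(movements)
        return self.head(out[:, -1])  # logits from the last time step

criterion = nn.CrossEntropyLoss()  # single genre label per video
```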
2112.01853/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2021-05-13T00:21:06.807Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36" etag="4KNJLH4yh4O7EvYzCztK" version="14.6.12" type="google"><diagram id="Icy7kJoeI3QuD2QXpKWb" name="Reward">7V1bl5s4Ev41Ptl9MAcJJMFjujuXmZPMyZn0ZHb2JQcbbDPBxgM43b2/foW5GIQA2UYY3G4/tBFX6/uqVKoqShPtfv38IbC2q8++7XgTqNrPE+1hAiHQEKL/4paXpIUALWlYBq6dHnRo+Or+z0kb1bR159pOWDow8n0vcrflxrm/2TjzqNRmBYH/VD5s4Xvlu26tZXpH9dDwdW55TuWwP107WiWtBiSH9o+Ou1xldwbYTPasrezg9BLhyrL9p8K9tHcT7T7w/Sj5tn6+d7y487J+SR7ofc3e/MECZxOJnPA7/vvXd8Fb+7dfHl++/UaMub6xpkBNf8hPy9ulPzl93Ogl64PA321sJ76MOtHunlZu5HzdWvN47xNFnbatorVHtwD9GkaB/yPvK422LFzPu/c9P9hfTVug+JMfWdiD93/xGf4mKrQnf7S9+pPTXvjpBJHzXGhKu+CD46+dKHihh6R7oZninPIR4nT76YAuybiwKiALUdpopYxa5tc+dDr9kvb7ERgY4BgIQAcQ2OrccSAPAoQRIE43XY00VO5qUu3qXD8UuxqY0rpavc6u1lS1tashj9WddPXuK/rn7vP08Vv45Hz65ffNx18scwr19q6e74KfeU87G/ttrLDp5tyzwtCdN3W1vj+DPuh/8tPpxl+pjtpvPDyXtl6yrWc3ik9SFYz1dDs+DyiqitPtw6nxRvHML07g0g5ygrStFrzQ3wVzp6GDAE4OdOyl04hxAUPEgTBrCxzPityf5XGLB2t6hy++Sx/5IK2wTKEKM5IflJ5VHGeYC+mmoUBDpyM91ImOYTbgZ9Q0gUIAItAgiKoIYpDybSIrWDpR5TaUGtZL4bBtfEDY8HMQIxGq2vjUCPPHhYMIJE9wEIgcodPVEQBGu5DE/PiabvpBtPKX/sby3h1a78rj8+GYT76/TaXjbyeKXlLrytpFvrBoqSXRUlCLcNlWuMolOt74YkVUWDb7FqiCTiQmoUhy4L219J+j6fpBn5l/us8P5NvcnmbGY6tonSkzmCENAEiIzCfwp7FDJBlviGO8LRZwPucNZzaeYYS7Gs7KlgPPcNA4qlDaYIaGKqbpWAYK41gqpPWj2ImiLSS3XDVHzKrc8s0z0SFRWG7PQn242nkUsBscdc0/kAwK9iGi3jj7bR04OUBwjxsWDrzBjU4q9OB7lHypwuJ57jZ02sc5K9wmzqKF+xyDxJExZuBzgI0cwhv4TEw0q6uBTysbFCaqDHw6Z9zTZY17QGC+PDJZIIKyYA5KFqpeuul0eoX8NwfGf3h1/OfYYvyfDgYlACYHCOxFqbO4hAj+Z+dnO6bhvk/fxnNPdft82Em/LZP/yZAyIXf0Knc/JuQhG1ySy9OnTe6QHX9tMqejgcmcgONwZDKXyVK70GmDErrsua992EFgYCKg9aPtHl+ftkNkWFDDIVrYnXmAT3FUnOE1bvQutLshjK6VL99rTJgBVwOmYhb/yheU7EOGHVhWiK9rkqvMsoYvvufO4/7bbW0rcvZXdrYFzTNjT23TRlTmIx6PM92x8TcOo2jSJstzlzG95pROcSzvLtYg7tzy3qY71q5te3V6rixunjVzvDtr/mO5b2fu1IHOMpnQnK5W/eEGR2nBDpQWN74yxFnROTrrFDV1hkIyBBWS2XkY6yzUeUYJizovhh/SHxodEdpvVPdDiZYzuS1IZWRNNFpuMB6PyoW6U/9cUHmTrb1BiO7jDLYooqbi4/ddwVDkCPqnWAMy9p+wgg0cOpJYs/31YsFKI/v04uhugh7S4SbVAhCcoHAb2cxq4TyVMH2kSTFbj6edVUUl2Zw1BTFTkGeSbMqkTuTuqOwS/mIROlJoIRBYfjWyrrEOkhNFHbHOzX4lnZfoGUv04yAEG+BWwU72+oHtBKeJfEbqIYq8VBGvSf0dYqDz1PD26eZYJoatzrlspjQQc0wkNjfG5B/MeGj6TP7hd7RAfvZwHdL8nwSrnOcfqA+L87yZ58Gl+abBTB274xKb5RlHj45LPhQi08GRSYUuKhUXy1NrfO5rD9MQNDARGGJa6JkiIJxi3bmz/DwoeBPWriNmb15ngoABBiZ2Aq8sjk3sOC5h/oEDm4NUp5BXOfIYZGAi0Ec61JvXmSFgasPCWiRD4NIvUhJScNUARYVGo7Pm5Bcpz3vbqx93sWHqCgCVcH72ihhhgv3M64/CcSPMvGVZukv8kiUgh5csTVy+S0cvWbKBafqrFR03PrfJZLhDxJ5y9quWfCnqIKUK1mvMD/GJ6D7crWPV+WPPaao50bs4fLb/dr+01msraYmNyHNMyqPcejVO9cSfR/+4GqGLMZOZrQGOIw/w3kk3OtCkXAeuxpuvSUl1+Wy5G3r0759u2S3t4y1s9ffKSm7hv60okH8c98W2aqBQ88SwdZ6BYsCZho8zUI6OStW88V6SNwKqPQmwrK7UBTznSVdWeqUZmtP7qrUzIFQILH5k9Q0Up1k13kLMmapWZJV2QkQ1T4NN1QzTNXSrgFu6i+EzU76F1BS1I4VwHEZtWChq8U8TgEaTBY3Iix2SZo8FuHgFpyoDZuBHdA7gx21xgFgGTNleoGBkqvkn826lWtwwFKwVPxydrmCdqMgwoIagqQMexjXHdI+xiEv81WGsUwlDo4ZVJB/s1cFKNMXQD5Kb5dOMFGIRr/qrgxhoWFFhYXQctxiLZHy9PoxVXTE1YOSfcY/AvNDACDA2pGIMiQJVHeUfXIIYaAzEVZf/oDBGAiGBIWIsV46RoUDzADHrfhwbxiJZh68OYwBVRSeFyerIQRZwAr0+kA1T0YwCimTcGJ/03tzjyt0wzv+sfrjK8ycl7p0K6KAB4ea3Je33rpfduiEhf5PVbOcnwDQyojU1X+ckZDYqy2NCwAQokFN8GhtKwRTMjUHx+C9UMZ3t1w1DUxMqRUMEMbQTjf9CHSm4+KDMbYBqKoWpaT457f5lIz4gIr6+wZG+ieZFcRgI6eGASA90BRWdyJhhPVKKpM/chEez3tAVvfCgsHQXRBSz4SayKS/i+hwc5Uem509I9bmUnu+D8fG0thS76ZfxJ70lfGP8MYzP3Dk3xmeGTY353xfnRdzkg+P8MAwbNEZrvsWwMZACSypYBulbfoVsyotEDW6UP4/yI7LlEd1dUvRXSHmRIMqN8udRfjy2fC+Mv6wtj0VCSjfGn8X4QdnyLUoeN+drdmTMQ6AYF1TzWCTGdiN9U0bYuNR8C+l1Qynmc5lACulb7iKb8/DGeemcH5E1D4BSTD9WTS
mcb7mLbM6PMgQ7Ms6PyGnTC+ehqWhNP0M2528RWOmcH5NB/yr0/ChDsMPgvGhR8hHZNlgBpUgRlkH5lpvIZvwoQ7DDYLzouiAjsmwwUaREXavXlc3rW5hVPq/Ho8mJ2kOUteUmshl/i7LKZ/x4fJG9MP7SU9RblFU65Uc0Re2F8heeoZLBhlmFad2YWjmIGep47PU2BSzMeYIVUiR1OYEBA+WCc1Ryi7LK5/x4LJvOOD9k0yYbvG6cl8f5EZk2/XD+0rZNa4nVhZUinxXQfLRW/toqFtRUk1qbhWNAY71NgZKdJq9kZ7YY7ftv+6eF6tq3HS+UWa+zWPWtrk5cl5U7BQWztqhiVZCyN7DjinG8Miicwn9anNVYL03nVeo8KdxThuhQxBoVtF62h1++uruFZ6sQiWrJrP9btSQ5QUtKrI49ZVb30gBTfFRYFTLXYasGd1TiGmhlTa6nz9tVsWo+Zjzv95GFio36YtXFtV2tCXmIK1G7UxCXodbiStXb0C00HVWZuqoD5Wq0o2tLaBgq5cmRjjkFh/N4Rz8lh+uXcwi31oYdiLrmQFjmQDPgeXP5yVi121HREs4qhJ0WWa7nCWD0i2BZ6i6Kl/NXDa9f4ahXjtwYkhX11IbGkPpRo1+GwLubHsmLkuloYCzhxRkvw5LqGvO3Aaiw3ufA1IvJC10MhTg32mS109Sh0YbnCZU8l6nMZK5vHqMbWIGDm8iYJyWd85xmXXhX+L6d3J3D9e3YVrjKV1KLN75YEcV7s2+hVDzLaaNznDbclYqEFxjux2kD9LJSyZeWOdZpo4HS6me4XDFdZ3kpyYcD0gVNpfpwTJ5z8ki9h3l6793mpxv4m7Wzv4gbayNrvi/MGXdf6hgPsqOt+CCH6hXaYdhax7poMwu3++upXTaFkbOtNguuGUlp/ileZ+VUlRs4tNfSRWpiKU4ZEY8B1Figg4HIKmjiWvnopZniAJBpIk0jmBjIJNniPxkhVaSoqhYXtoU6He4IU6xRwK0Ky6KUa5DsEv5iETpSwj7mSSnoEwEvfLOmblrGUlGBVh4raOe2jRbxlvCqlO3RSW1QKlxn/O4Vg09YhTOOd1lKW2fHnLRD5Spt6Y7367VINdNgV7mB4OIWKVDrp60dQRqtnMiawHv6dUMHBOv7oe08gC+x3GMHRIDsFNSosoC3grJEDnQwB5Ww+Ov1rv3KmZ32uvxrDQ06WAOY8Gjwx9a2omISiboL3c2S/vccK9gkXy1v6QdutFrH+62NfaqxfD16om5WWWBI3taToqgPsmUM6To9iXYih1EfX7ZOML2tHyxAI5MZbiDHDQZAg2XdPY1ABzYHd+5f9Xmm7m8ZlkaXLGB84SfMuYXNjczxUoBf46DP5h11iH59Noco+k1KIYz2Y82YdEAHKAPW2Q05Uo56nVnAeilPYkzn4Rw4T1ZgiwaurgZozKpz3jLmsoD+C6hf3/+xg+7b4Nfwv/bn+ePzbMp7feU4mIEmpswDNhlPrSj2RhIEK38924Xt84LmlaFiBW3Mnfm8wiK6Z2YgHR2w5vhGK/ALJ1lBzkxRVrCSC3X9PFEYav1IqBOgP1wKaNtyjAUXaDw3nNmiG6DZXKmLA82LU0qW6Swt6uqlmk15ujjY9ZE4mWCLZjddFfZs1tLFsW/Lmu0L+wsi34+KZxOPLo78+aEciPjI596dpOFf4N/jd9N0wQAm07VHtz+XAPWJrl0TAN4IsEecVf6k6orplQEdOGIEGaDdGMCbvAPO4ry9MqADT6woBfQbBfYUYKM6JH/hro0ESBYJeNN6pv8LGUppv5YyQQu4VJNCNa4Vluc0TSrVFGpzmg4ZTJMT3iNmUo1ac1SbJkvF/Kb6GfSlspmQUSZZnuV+bDYTZl/iEsxmOiHhiE/N+pB0Fz7kLDF0fD7k8zwQqIwqLzO+Kx8y3Qz8uOsPtKC/avU5zgSgjf8H</diagram></mxfile>
2112.01853/main_diagram/main_diagram.pdf ADDED
Binary file (69.8 kB).
 
2112.01853/paper_text/intro_method.md ADDED
@@ -0,0 +1,98 @@
+ # Introduction
+
+ The current success of deep reinforcement learning relies on the ability to use gradient-based optimization for policy and value learning [27, 34]. Approaches such as *policy gradient* (PG) methods have achieved remarkable results in various domains including games [26, 33, 39, 8], robotics [16, 30], and even natural language processing [43]. However, the excellent performance of PG methods is heavily dependent on tuning the algorithms' hyperparameters [6, 42]. Applying a PG method to new environments often requires different hyperparameter settings and thus retuning [10]. The large number of hyperparameters severely prohibits machine learning practitioners from fully utilizing PG methods in different reinforcement learning environments.
+
+ As a result, there is a huge demand for automating hyperparameter selection for policy gradient algorithms, and it remains a critical part of the Automated Machine Learning (AutoML) movement [13]. Automatic hyperparameter tuning has been well explored for supervised learning. Simple methods such as grid search and random search are effective although computationally expensive [2, 18]. Other, more complex methods such as Bayesian Optimization (BO [35]) and Evolutionary Algorithms (EA [7]) can efficiently search for optimal hyperparameters. Yet, they still need multiple training runs, have difficulty scaling to high-dimensional settings [32], or require extensive parallel computation [14]. Recent attempts introduce online hyperparameter scheduling that jointly optimizes the hyperparameters and parameters in a single run, overcoming the local optimality of training with fixed hyperparameters and showing great potential for supervised and reinforcement learning [14, 40, 29, 28].
+
+ Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+ However, one loophole remains. These approaches do not model the *context of training* in the optimization process, and the problem is often treated as a stateless bandit or greedy optimization [29, 28]. Ignoring the context prevents the use of episodic experiences that can be critical in optimization and planning. As an example, we humans often rely on past outcomes of our actions and their contexts to optimize decisions (e.g., we may use past experiences of traffic to avoid returning home from work at 5pm). Episodic memory plays a major role in human brains, facilitating recreation of the past and supporting decision making via recall of episodic events [38]. We are motivated to use such a mechanism in training: for instance, the hyperparameters that helped overcome a past local optimum in the loss surface can be reused when the learning algorithm falls into a similar local optimum. This is equivalent to optimizing hyperparameters based on training contexts. Patterns of bad or good training states previously explored can be reused, and we refer to this process as selecting hyperparameters. To implement this mechanism we use episodic memory. Compared to other learning methods, the use of episodic memory is non-parametric, fast, and sample-efficient, and quickly directs agents towards good behaviors [23, 17, 3].
+
+ This problem of formulating methods that take the training context into consideration and use it as episodic experience in optimizing hyperparameters remains unsolved. The first challenge is to effectively represent the training context of PG algorithms, which often involve a large number of neural network parameters. The second challenge is sample-efficiency. Current performant hyperparameter searches [14, 28] often necessitate parallel interactions with the environments, which is expensive and not always feasible in real-world applications. Ideally, hyperparameter search methods should not require observations beyond those the PG algorithms already collect. Moreover, the search must be solved as efficiently as possible to allow efficient training of PG algorithms.
+
+ ![](_page_1_Figure_0.jpeg)
+
+ Figure 1: Hyper-RL structure. The hyper-state (green circle) is captured from the PG models' parameters and gradients at every Hyper-RL step (1). Given the hyper-states, the hyper-agent takes hyper-actions, choosing hyperparameters for the PG method to update the models (2). The update lasts U steps. After the last update step (3), the RL agent starts the environment phase with the current policy, collecting an empirical return G after T environment steps (4). G is used as the hyper-reward for the last policy update step (blue diamond) (5). The other update steps (red diamonds) are assigned the hyper-reward G.
+
+ We address both these issues with a novel solution, namely Episodic Policy Gradient Training (EPGT), a PG training scheme that allows on-the-fly hyperparameter optimization based on episodic experiences. The idea is to formulate hyperparameter scheduling as a Markov Decision Process (MDP), dubbed the Hyper-RL. In the Hyper-RL, an agent (the hyper-agent) acts to optimize hyperparameters for the PG algorithm that optimizes the policy for the agent of the main RL problem (the RL agent). The two agents operate alternately: the hyper-agent acts to reconfigure the PG algorithm with different hyperparameters, which ultimately changes the policy of the RL agent (update phase); the RL agent then acts to collect returns (environment phase), which serve as the rewards for the hyper-agent. To build the Hyper-RL, we propose mechanisms to model its state, action and reward. In particular, we model the training context as the state of the Hyper-RL by using neural networks to compress the parameters and gradients of the PG models (policy/value networks) into low-dimensional state vectors. The action in the Hyper-RL corresponds to the choice of hyperparameters, and the reward is derived from the RL agent's reward.
22
+
23
+ We propose to solve the Hyper-RL through episodic memory. As an episodic memory provides a direct binding from experiences (state-action) to final outcomes (returns), it enables fast utilization of past experiences and accelerates the search for a near-optimal policy [23]. Unlike other memory forms that augment RL agents with a stronger working memory to cope with partial observations [11, 21] or contextual changes within an episode [22], episodic memory persists across the agent's lifetime to maintain a global value estimation. In our case, the memory estimates the value of a state-action pair in the Hyper-RL by nearest neighbor memory lookup [31]. To store learning experience, we use a novel weighted-average nearest neighbor writing rule that quickly propagates values inside the memory by updating multiple memory slots per memory write. Our episodic memory is designed to cope with the noisy and sparse rewards of the Hyper-RL.
+
+ Our key contribution is to provide a new formulation for online hyperparameter search that leverages the context of previous training experiences, and to demonstrate that episodic memory is a feasible way to solve it. This is also the first time episodic memory has been designed for hyperparameter optimization. Our rich set of experiments shows that EPGT works well with various PG methods and diverse hyperparameter types, achieving higher rewards without a significant increase in computing resources. Our solution has desirable properties: it (i) is computationally cheap and runs once without parallel computation, (ii) is flexible enough to handle many hyperparameters and PG methods, and (iii) shows consistent and significant performance gains across environments and PG methods.
+
+ # Method
+
+ In this paper, we address the problem of online hyperparameter search. We argue that in order to choose good values, hyperparameter search (HS) methods should be aware of past training states. This intuition suggests that we should treat the HS problem as a standard MDP. Put in the context of HS for RL, our HS algorithm becomes a Hyper-RL algorithm running alongside the main RL algorithm. In the Hyper-RL, the hyper-agent makes a decision at each policy update step to configure the PG algorithm with suitable hyperparameters $\psi$ . The ultimate goal of the Hyper-RL is the same as the main RL's: to maximize the return of the RL agent.
+
+ To construct the Hyper-RL, we define its state $s^{\psi}$ , action $a^{\psi}$ and reward $r^{\psi}$ . Hereafter, we refer to them as hyper-state, hyper-action and hyper-reward to avoid confusion with the main RL's s, a and r. Fig. 1 illustrates the operation of the Hyper-RL. In the update phase, the Hyper-RL runs for U steps. At each step, taking the hyper-state captured from the PG models' parameters and gradients, the hyper-agent outputs hyper-actions, producing hyperparameters for the PG algorithm to update the policy/value networks accordingly. After the last update (blue diamond), the resulting policy is used by the RL agent to perform the environment phase, collecting returns after T environment interactions. The returns are used in the PG methods, and utilized as the hyper-reward for the last policy update step. Below we detail the hyper-action, hyper-reward and hyper-state.
+
+ **Hyper-action** A hyper-action $a^{\psi}$ defines the values for the hyperparameters $\psi$ of interest. For simplicity, we assume the hyper-action is discrete by quantizing the range of each hyperparameter into B discrete values. A hyper-action $a^{\psi}$ selects a set of discrete values, each of which is assigned to a hyperparameter (see Appendix A.2 for details).
+
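+ For concreteness, here is a minimal Python sketch of this discretization. The hyperparameter names, ranges and grid values are illustrative placeholders, not the paper's settings:
+
+ ```python
+ import itertools
+
+ # Hypothetical hyperparameter grids, each quantized into B = 4 discrete values.
+ grids = {
+     "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
+     "clip_ratio":    [0.1, 0.2, 0.3, 0.4],
+ }
+
+ # Each joint assignment of one value per hyperparameter is one hyper-action.
+ actions = list(itertools.product(*grids.values()))
+
+ def action_to_psi(action_id):
+     """Map a discrete hyper-action index to concrete hyperparameter values."""
+     return dict(zip(grids.keys(), actions[action_id]))
+
+ print(len(actions))        # 16 hyper-actions (B^2 for two hyperparameters)
+ print(action_to_psi(5))    # {'learning_rate': 0.0003, 'clip_ratio': 0.2}
+ ```
+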
+ **Hyper-reward** The hyper-reward $\mathbf{r}^{\psi}$ is computed based on the empirical return that the RL agent collects in the environment phase after the hyperparameters are selected and used to update the policy. The return is $G = \mathbb{E}_{s_{t:t+T},a_{t:t+T}}\left[\sum_{k=0}^{T}\gamma^kr_{t+k}\right]$ where t and T are the environment step and the learning horizon, respectively. Since there can be U consecutive policy update steps in the update phase, the last update step receives hyper-reward G while the others get zero hyper-reward, making the Hyper-RL, in general, a sparse-reward problem. That is,
+
+ $$\mathbf{r}_{i}^{\psi} = \begin{cases} G & \text{if } i = \mathbf{U} \\ 0 & \text{otherwise} \end{cases} \tag{1}$$
+
+ To define the objective for the Hyper-RL, we treat the update phase as a learning episode. Each learning episode can last for a multiple of U update steps, and for each step i in the episode, we aim to maximize the hyper-return $G_i^{\psi}=\sum_{j\geq i}^{\mathrm{U}n}\mathbf{r}_{j}^{\psi}$ where $n\in\mathbb{N}^{+}$ . In this paper, n is simply set to 1 and thus $G_i^{\psi} = G$ .
+
+ **Hyper-state** A hyper-state $s^{\psi}$ should capture the current training state, which may include the status of the trained model, the loss function, or the amount of parameter update. We would fully capture $s^{\psi}$ if we knew exactly the loss surface and the current value of the optimized parameters, which would enable perfect hyperparameter choices. This, however, is infeasible in practice, so we only model observable features of the hyper-state space. The Hyper-RL is then partially observable and noisy. In the following, we propose a method to represent the hyper-state efficiently.
+
+ **Algorithm 1** Episodic Policy Gradient Training (EPGT)
+
+ **Require:** A parametric policy function $\pi_{\theta}$ of the main RL algorithm $PG_{\psi}(\pi_{\theta}, G)$ , where $\psi$ is the set of hyperparameters for training $\pi_{\theta}$ and G the empirical return collected by the function $Agent(\pi_{\theta})$ .
+
+ - 1: Initialize the episodic memory $M = \emptyset$
+ - 2: **for** episode = 1, 2, ... **do** {loop over learning episodes}
+ - 3: Initialize a buffer $D = \emptyset$ {storing hyper-states, actions, and rewards within a learning episode}
+ - 4: **for** $i = 1, \dots, \mathbf{U}$ **do** {loop over policy updates}
+ - 5: Compute $\phi(\mathbf{s}_i^{\psi})$ . Select $\mathbf{a}_i^{\psi}$ by $\epsilon$ -greedy with $\mathbb{Q}\left(\mathbf{s}_i^{\psi},\mathbf{a}^{\psi}\right) = \text{M.read}\left(\phi\left(\mathbf{s}_i^{\psi}\right),\mathbf{a}^{\psi}\right)$ (Eq. 3)
+ - 6: Convert $\mathbf{a}_i^{\psi}$ to the hyperparameter values $\psi_i$ and update $\theta \leftarrow PG_{\psi_i}(\pi_{\theta}, G)$
+ - 7: Compute $\mathbf{r}_i^{\psi}$ (Eq. 1). Add $(\phi(\mathbf{s}_i^{\psi}), \mathbf{a}_i^{\psi}, \mathbf{r}_i^{\psi})$ to D
+ - 8: **if** $i == \mathbf{U}$ **then** $G = Agent(\pi_{\theta})$
+ - 9: **end for**
+ - 10: Update the episodic memory with M.update(D) (Eq. 4)
+ - 11: **end for**
+
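+ To make the control flow concrete, the following Python skeleton mirrors Algorithm 1. All callables (`get_state`, `pg_update`, `agent_rollout`, `action_to_psi`) and the `memory` object are injected placeholders standing in for the components described in this section; this is a sketch of the loop, not the authors' implementation. Here the environment phase is run before the reward is recorded, so that the last update step can be assigned the fresh return G:
+
+ ```python
+ import random
+
+ def epgt(get_state, pg_update, agent_rollout, action_to_psi, memory,
+          n_episodes, U, eps=0.1):
+     """Skeleton of Episodic Policy Gradient Training (Algorithm 1)."""
+     G = 0.0                                    # return of the previous environment phase
+     for _ in range(n_episodes):                # loop over learning episodes
+         D = []                                 # buffer of (embedding, action, reward)
+         for i in range(1, U + 1):              # loop over policy updates
+             s = get_state()                    # hyper-state embedding phi(s_i)
+             if random.random() < eps:          # epsilon-greedy hyper-action
+                 a = random.randrange(memory.n_actions)
+             else:
+                 a = max(range(memory.n_actions), key=lambda k: memory.read(s, k))
+             pg_update(action_to_psi(a), G)     # reconfigure the PG method, update policy
+             if i == U:
+                 G = agent_rollout()            # environment phase with the new policy
+             D.append((s, a, G if i == U else 0.0))  # hyper-reward per Eq. 1
+         memory.update(D)                       # propagate hyper-returns (Eq. 4)
+ ```
+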
+ **Hyper-state representation** Our hypothesis is that one signature feature of the hyper-state is the current value of the optimized parameters $\theta$ and the derivatives of the PG method's objective function w.r.t. $\theta$ . We maintain a list of the last $N_{order}$ first-order derivatives, $\{\nabla_{\theta n}\}_{n=1}^{N_{order}}$ , which preserves information about higher-order derivatives (e.g., a second-order derivative can be estimated by the difference between two consecutive first-order derivatives). Let us denote the parameters and their derivatives, often in tensor form, as $\theta = \{W_m^0\}_{m=1}^M$ and $\nabla_{\theta n} = \{W_m^n\}_{m=1}^M$ where M is the number of layers in the policy/value network. $\{\theta, \nabla_{\theta n}\}$ can be denoted jointly as $\{W_m^n\}_{n=0,m=1}^{N_{order},M}$ , or $\{W_m^n\}$ for short (see Appendix B.4 for the dimensions of $W_m^n$ ).
+
+ Merely using $\{W_m^n\}$ to represent the learning state is still challenging, since the number of parameters is enormous, as is often the case for recent PG methods. To make the hyper-state tractable, we propose to use linear transformations to map the tensors to lower-dimensional features and concatenate them to create the state vector $\mathbf{s}^{\psi} = [s_m^n]_{n=0,m=1}^{N_{order},M}$ . Here, $s_m^n$ is the feature of $W_m^n$ , computed as
+
+ $$s_m^n = \text{vec}\left(W_m^n C_m^n\right) \tag{2}$$
+
+ where $C_m^n \in \mathbb{R}^{d^{nm} \times d}$ is the transformation matrix, $d^{nm}$ the last dimension of $W_m^n$ $(d^{nm} \gg d)$ , and $\text{vec}(\cdot)$ the vectorization operator, which flattens the input tensor. To make our representation robust, we propose to learn the transformation $C_m^n$ as described in the next section.
+
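+ A minimal NumPy sketch of Eq. 2 with fixed random projections may clarify the shapes involved; the tensor sizes and d are illustrative, and the paper learns $C_m^n$ rather than fixing it:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ d = 8  # target feature size per tensor (d << d^{nm})
+
+ def project(W, C):
+     """Eq. 2: multiply a parameter/gradient tensor by C_m^n and flatten."""
+     return (W @ C).ravel()
+
+ # Hypothetical tensors: one layer's weights and its latest gradient.
+ tensors = [rng.normal(size=(64, 128)), rng.normal(size=(64, 128))]
+ Cs = [rng.normal(size=(W.shape[-1], d)) / np.sqrt(W.shape[-1]) for W in tensors]
+
+ # Concatenate the per-tensor features into the hyper-state vector s_psi.
+ s_psi = np.concatenate([project(W, C) for W, C in zip(tensors, Cs)])
+ print(s_psi.shape)  # (1024,) = 2 tensors * 64 rows * 8 features
+ ```
+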
+ **Learning to represent hyper-state and memory key** We map $s^{\psi}$ to its embedding by using a feed-forward neural network $\phi$ , resulting in the state embedding $\phi(s^{\psi}) \in \mathbb{R}^h$ . $\phi(s^{\psi})$ will later be stored as the key of the episodic memory. We could just use random $\phi$ and $C_m^n$ for simplicity. However, to encourage $\phi(s^{\psi})$ to store meaningful information about $s^{\psi}$ , we propose to reconstruct $s^{\psi}$ from $\phi(s^{\psi})$ via a decoder network $\omega$ and minimize the reconstruction error $\mathcal{L}_{rec} = \|\omega\left(\phi\left(\mathbf{s}^{\psi}\right)\right) - \mathbf{s}^{\psi}\|_{2}^{2}$ . Similar to [3], we employ latent-variable probabilistic models such as a VAE to learn $C_m^n$ and update the encoder-decoder networks. Thanks to the $C_m^n$ projection to a lower-dimensional space, the hyper-state distribution becomes simpler and amenable to VAE reconstruction. Notably, the VAE is trained online, jointly with the RL agent and the episodic memory (more details in Appendix A.3).
+
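+ The following sketch shows the reconstruction objective in its simplest form, using a plain linear autoencoder instead of the VAE, with illustrative dimensions:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(1)
+ D_in, h = 1024, 32                   # hyper-state size, embedding size
+ W_enc = rng.normal(size=(D_in, h)) * 0.01
+ W_dec = rng.normal(size=(h, D_in)) * 0.01
+
+ def phi(s):
+     """Encoder: hyper-state vector -> embedding, used as the memory key."""
+     return np.tanh(s @ W_enc)
+
+ def rec_loss(s):
+     """L_rec = ||omega(phi(s)) - s||_2^2 with a linear decoder omega."""
+     return float(np.sum((phi(s) @ W_dec - s) ** 2))
+
+ s_psi = rng.normal(size=D_in)
+ print(rec_loss(s_psi))               # minimized jointly with training
+ ```
+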
+ Theoretically, given the hyper-state, hyper-action and hyper-reward clearly defined in the previous section, we could use any RL algorithm to solve the Hyper-RL problem. However, in practice, the hyper-reward is usually sparse and the number of steps of the Hyper-RL is much smaller than that of the main RL algorithm ( $U \ll T$ ). This means that parametric methods (e.g., DQN), which require a huge number of update steps, are not suitable for learning a good approximation of the Hyper-RL's Q-value function $Q(s_i^{\psi}, a_i^{\psi})$ .
+
+ To quickly estimate $\mathbb{Q}\left(\mathbf{s}_{i}^{\psi},\mathbf{a}_{i}^{\psi}\right)$ , we maintain an episodic memory that persists across learning episodes and stores the outcomes of selecting hyperparameters from a given hyper-state.
+
+ ![](_page_3_Figure_0.jpeg)
+
+ Figure 2: Performance on (a) MountainCarContinuous (log scale) and (b) BipedalWalker over environment steps. In each plot, the average return is on the left, with mean and std. over 10 runs. On the right is the smoothed learning rate $\alpha$ (averaged over a window of 100 steps) found by the baselines (first 3 runs).
+
+ We hypothesize that the training process involves hyper-states that share similarities, which makes episodic recall using KNN memory lookup suitable. Concretely, the episodic memory M binds the learning experience $\left(\phi\left(\mathbf{s}_{i}^{\psi}\right), \mathbf{a}_{i}^{\psi}\right)$ —the key, where $\phi$ is an embedding function—to the approximated expected hyper-return $\mathbf{G}_{i}^{\psi}$ —the value. We index the memory using the key $\left(\phi\left(\mathbf{s}_{i}^{\psi}\right), \mathbf{a}_{i}^{\psi}\right)$ to access the value, i.e., M $\left[\phi\left(\mathbf{s}_{i}^{\psi}\right), \mathbf{a}_{i}^{\psi}\right] = \mathbf{G}_{i}^{\psi}$ . Computing and updating Q $\left(\mathbf{s}_{i}^{\psi}, \mathbf{a}_{i}^{\psi}\right)$ corresponds to two memory operators: read and update. The read $\left(\phi\left(\mathbf{s}_{i}^{\psi}\right), \mathbf{a}_{i}^{\psi}\right)$ takes the hyper-state embedding plus the hyper-action and returns the hyper-state-action value Q $\left(\mathbf{s}_{i}^{\psi}, \mathbf{a}_{i}^{\psi}\right)$ . The update (D) takes a buffer D containing observations $\left(\mathbf{s}_{i}^{\psi}, \mathbf{a}_{i}^{\psi}, \mathbf{r}_{i}^{\psi}\right)_{i=1}^{\mathbf{U}}$ and updates the content of the memory M. The details of the two operators are as follows.
+
+ **Memory reading** Similarly to [31], we estimate the state-action value of any $\mathbf{s}_{i}^{\psi} - \mathbf{a}_{i}^{\psi}$ pair by:
+
+ $$\mathbf{Q}\left(\mathbf{s}_{i}^{\psi},\mathbf{a}_{i}^{\psi}\right) = \operatorname{read}\left(\mathbf{s}_{i}^{\psi},\mathbf{a}_{i}^{\psi}\right) = \frac{\sum_{k=1}^{|\mathcal{N}(i)|} Sim\left(i,k\right) \mathbf{M}\left[\phi\left(\mathbf{s}_{k}^{\psi}\right),\mathbf{a}_{i}^{\psi}\right]}{\sum_{k=1}^{|\mathcal{N}(i)|} Sim\left(i,k\right)} \tag{3}$$
+
+ where $\mathcal{N}(i)$ denotes the neighbor set of the embedding $\phi\left(\mathbf{s}_{i}^{\psi}\right)$ in M and $\phi\left(\mathbf{s}_{k}^{\psi}\right)$ the k-th nearest neighbor. $\mathcal{N}(i)$ includes $\phi\left(\mathbf{s}_{i}^{\psi}\right)$ if it exists in M. $Sim\left(i,k\right)$ is a kernel measuring the similarity between $\phi\left(\mathbf{s}_{k}^{\psi}\right)$ and $\phi\left(\mathbf{s}_{i}^{\psi}\right)$ .
+
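+ A runnable sketch of this kNN read is below. The inverse-distance kernel used for $Sim(i,k)$ and the flat list storage are simplifying assumptions:
+
+ ```python
+ import numpy as np
+
+ class EpisodicMemory:
+     def __init__(self, n_actions, K=5):
+         self.keys = []                 # stored embeddings phi(s)
+         self.values = []               # values[j][a]: return estimate for action a
+         self.n_actions, self.K = n_actions, K
+
+     def _neighbors(self, key):
+         """K nearest stored keys and their similarity kernel Sim(i, k)."""
+         d = np.array([np.linalg.norm(key - k) for k in self.keys])
+         idx = np.argsort(d)[: self.K]
+         return idx, 1.0 / (d[idx] + 1e-3)
+
+     def read(self, key, a):
+         """Eq. 3: similarity-weighted average of stored values for action a."""
+         if not self.keys:
+             return 0.0
+         idx, sim = self._neighbors(key)
+         vals = np.array([self.values[j][a] for j in idx])
+         return float(np.sum(sim * vals) / np.sum(sim))
+ ```
+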
+ **Memory update** To cope with noisy observations from the Hyper-RL, we propose to use a weighted average to write the hyper-return to the memory slots. Unlike the max writing rule [3], which always stores the best return, our writing propagates the average return inside the memory, which helps cancel out the noise of the Hyper-RL. In particular, for each observed transition in a learning episode (stored in the buffer D), we compute the hyper-return $\mathsf{G}_i^{\psi}$ . The hyper-return is then used to update the memory such that the action values of $\phi\left(\mathsf{s}_i^{\psi}\right)$ 's neighbors are adjusted towards $\mathsf{G}_i^{\psi}$ at speeds relative to their distances [19]:
+
+ $$\mathbf{M}\left[\phi\left(\mathbf{s}_{k}^{\psi}\right),\mathbf{a}_{i}^{\psi}\right]\leftarrow\mathbf{M}\left[\phi\left(\mathbf{s}_{k}^{\psi}\right),\mathbf{a}_{i}^{\psi}\right]+\beta\frac{\Delta_{ik}Sim\left(i,k\right)}{\sum_{k=1}^{|\mathcal{N}(i)|}Sim\left(i,k\right)}\tag{4}$$
+
+ where $\phi\left(\mathbf{s}_{k}^{\psi}\right)$ is the k-th nearest neighbor of $\phi\left(\mathbf{s}_{i}^{\psi}\right)$ in $\mathcal{N}(i),\,\Delta_{ik}=\mathbf{G}_{i}^{\psi}-\mathbf{M}\left[\phi\left(\mathbf{s}_{k}^{\psi}\right),\mathbf{a}_{i}^{\psi}\right],\,$ and $0<\beta<1$ is the writing rate. If the key $\left(\phi\left(\mathbf{s}_{i}^{\psi}\right),\mathbf{a}_{i}^{\psi}\right)$ is not in M, we also add $\left(\phi\left(\mathbf{s}_{i}^{\psi}\right),\mathbf{a}_{i}^{\psi},\mathbf{G}_{i}^{\psi}\right)$ to the memory. When the number of stored tuples exceeds the memory capacity $N_{mem}$ , the earliest added tuple is removed.
+
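+ Continuing the sketch above, a possible rendering of this writing rule; the unconditional insertion and the FIFO eviction details are simplified:
+
+ ```python
+ import numpy as np
+
+ def update(self, D, beta=0.5, n_mem=10_000):
+     """Eq. 4: move the neighbors' values towards the hyper-return G_i."""
+     G = 0.0
+     for key, a, r in reversed(D):      # accumulate G_i = sum_{j >= i} r_j backwards
+         G += r
+         if self.keys:
+             idx, sim = self._neighbors(key)
+             w = sim / np.sum(sim)
+             for j, wj in zip(idx, w):  # several slots adjusted per write
+                 self.values[j][a] += beta * wj * (G - self.values[j][a])
+         self.keys.append(np.array(key))     # insert the new experience as well
+         self.values.append([0.0] * self.n_actions)
+         self.values[-1][a] = G
+         if len(self.keys) > n_mem:          # evict the earliest tuple at capacity
+             self.keys.pop(0)
+             self.values.pop(0)
+
+ EpisodicMemory.update = update             # attach to the class sketched above
+ ```
+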
+ Under this formulation, $\mathbb{M}\left[\phi\left(\mathbf{s}_{i}^{\psi}\right),\mathbf{a}_{i}^{\psi}\right]$ is an approximation of the expected hyper-return collected by taking the hyper-action $\mathbf{a}_{i}^{\psi}$ at the hyper-state $\mathbf{s}_{i}^{\psi}$ (see Appendix C for a proof). As we update several neighbors per write, the hyper-return propagates faster inside the episodic memory, which helps to handle the sparsity of the Hyper-RL. Unless stated otherwise, we use the same neighbor size $|\mathcal{N}(i)|$ for both the reading and writing processes, denoted as K for short.
+
+ **Integration with PG methods** Our episodic control mechanism can be used to estimate the hyper-state-action value of the Hyper-RL. The hyper-agent uses that value to select the hyper-action through an $\epsilon$ -greedy policy and schedule the hyperparameters of PG methods. Algo. 1, Episodic Policy Gradient Training (EPGT), depicts the use of our episodic control with a generic PG method.
2203.17234/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2203.17234/paper_text/intro_method.md ADDED
@@ -0,0 +1,94 @@

+ # Introduction
+
+ 3D object pose estimation has significantly improved over the past decade in terms of both robustness and accuracy [\[18,](#page-8-0) [30,](#page-9-0) [34,](#page-9-1) [20,](#page-8-1) [44\]](#page-9-2). In particular, the robustness to partial occlusions has greatly increased [\[28,](#page-9-3) [17,](#page-8-2) [24\]](#page-8-3), and the need for large amounts of real annotated training images has been relaxed thanks to domain transfer [\[1\]](#page-8-4), domain randomization [\[36,](#page-9-4) [19,](#page-8-5) [31\]](#page-9-5), and self-supervised learning [\[33\]](#page-9-6) techniques that leverage synthetic images for training.
+
+ Nevertheless, the use of image-based 3D object pose estimation remains limited in the industry, despite its huge potential for robotics and augmented reality. Scalable industrial applications would, for example, require the ability to handle arbitrary, previously-unseen objects without retraining and with access only to the objects' CAD models, thus saving both training and data capture time.
+
+ ![](_page_0_Figure_9.jpeg)
+
+ Figure 1: Our method can estimate the 3D pose of new objects in query images by matching them with templates created from their 3D models. These new objects can be very different from the training ones, and can be partially occluded in the query images.
+
+ While a few works have already tackled this challenging task [\[31,](#page-9-5) [29,](#page-9-7) [39,](#page-9-8) [2\]](#page-8-6), most of them impose additional constraints by assuming that the new objects belong to a known category [\[38\]](#page-9-9), remain similar to the training ones as in the T-LESS dataset [\[31\]](#page-9-5), or have prominent corners [\[29\]](#page-9-7).
+
+ <span id="page-1-0"></span>By contrast, template-based approaches [\[39,](#page-9-8) [2\]](#page-8-6) offer the promise of generalizing to arbitrary new objects by learning an image embedding used to match the input image to a series of templates generated from their CAD models. Unfortunately, their use with new objects has been demonstrated only anecdotally, and we show in our experiments that these methods struggle in this challenging scenario, particularly in the presence of occlusions. We indeed notice that the global representations used in [\[39,](#page-9-8) [2\]](#page-8-6) to compare the input image to the CAD-generated templates have two limitations. First, they generalize poorly to new objects in the presence of a cluttered background, and result in inaccurate pose estimation even for uniform backgrounds. Furthermore, they are ill-suited to handle occlusions.
+
+ These observations motivate us to keep the 2D structure of the images for a template-based approach. More precisely, given a small set of training objects, we learn local features that can be used to reliably match real images and synthetic templates. Relying on local features allows us to discard the background: while the object's mask in the input image is not available at run-time, we can use the template's mask, thus solving the first limitation of global representations. Note that using the template's mask to instead remove the background in the real image before computing the image's global representation would require us to recompute the input image representation for each template, which would result in very slow matching.
+
+ As will be shown by our experiments, using local features also results in much more accurate poses. This can be explained by the fact that we do not use pooling operations, which remove critical information about the poses, especially for new objects. Yet another advantage is that our method can be made robust to partial occlusions: we introduce a measure that evaluates the similarity between two images while explicitly taking into account the object's mask in the template and the possible occlusions in the query image.
+
+ We demonstrate the benefits of our approach on the LINEMOD [\[12\]](#page-8-7), Occlusion-LINEMOD [\[3\]](#page-8-8), and T-LESS [\[14\]](#page-8-9) datasets. It consistently outperforms previous works [\[39,](#page-9-8) [2,](#page-8-6) [32,](#page-9-10) [31\]](#page-9-5) on new objects by a large margin. In summary, our contributions are:
+
+ - A failure-case analysis of previous template-based methods when tested on new objects;
+ - A method that can predict the pose of new objects from their CAD models, without training on these objects or restricting them to be similar to the training ones;
+ - A method robust to occlusions, even in the challenging scenario where objects are both new and occluded.
+
+ # Method
+
+ Our goal is to recognize new objects in color images and predict their 3D poses. We do this by matching the color image of the object with a set of templates. A template is a rendered image of a 3D model in some 3D pose. For each new object, the template set contains many templates, rendered from different views sampled around its 3D model. As the templates are annotated with the object's identity and pose, the method returns the identity and pose of the template most similar to the input image.
+
+ The challenge then is to measure the similarity between templates and input images. This should be done reliably even though no real images of the new objects have been seen beforehand, the objects can be partially occluded, the lighting differs between the templates and the real images, and the object's background is cluttered in the real images.
+
+ <span id="page-2-1"></span>![](_page_2_Figure_8.jpeg)
32
+
33
+ Figure 2: Understanding the influence of background on different image representations, with T-SNE visualizations of the image representations learned by [\[2\]](#page-8-6) (first row) and by our method (second row) for real images of LINEMOD objects. For a given column, all the plots have the same scale for comparison.
34
+
35
+ In this work, motivated by the better repeatability and robustness to occlusions of local representations compared to global ones, we measure the similarity between an input image and a template based on local image features extracted using a deep model. We train this model using pairs made of a real image and a synthetic image from a small set of training objects. Note that these training objects can be very different in appearance from the new objects.
+
+ We start this section with an analysis of the limits of global representations in Section [3.1.](#page-2-0) We then detail in Section [3.2](#page-3-0) our training procedure. It relies on a similarity measure that compares the local features of real images and synthetic templates. At run-time, we use an extended version of this similarity function that explicitly estimates which local features in the input image are occluded and discards them. We discuss this in Section [3.3.](#page-4-0) Finally, we detail how we generate the templates in Section [3.4.](#page-4-1)
+
+ In each training iteration, we sample N positive pairs, where pair i is composed of a real image $\mathbf{q}_i$ depicting a training object and of a synthetic template $\mathbf{t}_i$ of the same object in a similar 3D pose. Following [39], we deem two viewpoints similar if the angle between them is less than 5 degrees. All pairs composed of a real image and a synthetic image of different objects or of dissimilar poses (more than 5 degrees apart) are defined as negative pairs.
+
+ **Triplet loss.** [39] proposed a metric learning approach based on the intuition that, in the learnt embedding space, the feature descriptors of positive pairs should be closer together than those of negative pairs. To learn this property, [39] used a training loss $\mathcal{L} = \mathcal{L}_{triplet} + \mathcal{L}_{pair}$ where:
+
+ • $\mathcal{L}_{triplet}$ is the triplet term, which allows the network to learn features such that the distance $\Delta_{+}^{(i)}$ in the learned embedding space between the images of a positive pair is lower than the distance $\Delta_{-}^{(i)}$ between those of a negative pair, within the limits of the margin m. This triplet term is defined as
+
+ $$\mathcal{L}_{triplet} = \sum_{i=1}^{N} \max \left( 0, 1 - \frac{\Delta_{-}^{(i)}}{\Delta_{+}^{(i)} + m} \right) \quad (1)$$
+
+ • $\mathcal{L}_{pair} = \sum_{i=1}^{N} \Delta_{+}^{(i)}$ is the pairwise term, which minimises the distance between two images of identical poses but different viewing conditions.
+
+ [2] extended this work by proposing a triplet loss that focuses only on learning object-discriminative features, while using a pairwise loss to learn an embedding space that reflects pose differences.
+
+ While these two losses work well, we experimentally show that the now-standard contrastive loss InfoNCE [25] is the simplest and most effective choice.
+
+ <span id="page-4-6"></span>**InfoNCE loss.** For each real image $\mathbf{q}_i$ , we also create N-1 negative pairs by combining it with the synthetic templates $\mathbf{t}_k$ of the other pairs in the current batch, with $1 \le k \le N, k \neq i$ . Altogether, this yields N positive pairs and $(N-1) \times N$ negative pairs for each batch. We train our model to maximize the agreement between the representations of samples in positive pairs, while minimizing that of negative pairs, with the InfoNCE loss function [25]:
+
+ <span id="page-4-4"></span>
62
+ $$\mathcal{L} = -\sum_{i=1}^{N} \log \frac{\exp\left(\sin(\overline{\mathbf{q}}_{i}, \overline{\mathbf{t}}_{i})/\tau\right)}{\sum_{k=1}^{N} 1_{[k \neq i]} \exp\left(\sin(\overline{\mathbf{q}}_{i}, \overline{\mathbf{t}}_{k})/\tau\right)}, \quad (2)$$
63
+
64
+ where $\operatorname{sim}(\overline{\mathbf{q}}, \overline{\mathbf{t}})$ measures the similarity between the local image features $\overline{\mathbf{q}}$ and $\overline{\mathbf{t}}$ computed by the deep model for a real image $\mathbf{q}$ and a template $\mathbf{t}$ , and $\tau=0.1$ is a temperature parameter. As shown in Figure 3, $\overline{\mathbf{q}}$ and $\overline{\mathbf{t}}$ retain a grid structure and are 3-tensors. In practice, their dimensions depend on the size of the input image, ranging from $25 \times 25 \times C$ to $28 \times 28 \times C$ , with C=16.
+
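+ A NumPy sketch of this loss over one batch is given below; `sim_fn` stands for the local-feature similarity defined in the next paragraph, and the batch construction is illustrative:
+
+ ```python
+ import numpy as np
+
+ def info_nce(q_feats, t_feats, sim_fn, tau=0.1):
+     """Eq. (2): (q_i, t_i) are positives; the other templates are negatives."""
+     N = len(q_feats)
+     S = np.array([[sim_fn(q_feats[i], t_feats[k]) for k in range(N)]
+                   for i in range(N)]) / tau        # N x N similarity matrix
+     loss = 0.0
+     for i in range(N):
+         e = np.exp(S[i])
+         loss -= np.log(e[i] / (np.sum(e) - e[i]))  # denominator excludes k == i
+     return loss
+ ```
+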
+ **Local feature similarity.** While previous works on contrastive learning [25, 35, 23, 4, 10, 7, 5, 6] focused mostly on image classification and defined the similarity metric sim(.,.) using a global representation of the two images, we found such a representation to only classify well either known objects or images with a clean background, as discussed in Section 3.1.1. To effectively handle new objects and complex backgrounds, we use a metric based on a pairwise comparison of the local features in $\overline{\bf q}$ and $\overline{\bf t}$ . Specifically, we define
+
+ $$\operatorname{sim}(\overline{\mathbf{q}}, \overline{\mathbf{t}}) = \frac{1}{|\mathcal{M}|} \sum_{l} \mathcal{M}^{(l)} \mathcal{S}\left(\overline{\mathbf{q}}^{(l)}, \overline{\mathbf{t}}^{(l)}\right) , \qquad (3)$$
+
+ where $\mathcal{S}$ is a local similarity metric, $\mathcal{M}$ is a 2D binary visibility mask for template $\mathbf{t}$ , and index l indicates a 2D grid location. $\overline{\mathbf{q}}^{(l)}$ and $\overline{\mathbf{t}}^{(l)}$ are thus local features of dimension C. Considering the template mask allows us to discard the background in the real image. Note that the mask does not account for possible occlusions in the real image, as it corresponds to the object's silhouette in the template. Occlusions will be considered in the next subsection. As a local similarity metric $\mathcal{S}$ , we use the cosine similarity
+
+ $$\mathcal{S}\left(\overline{\mathbf{q}}^{(l)}, \overline{\mathbf{t}}^{(l)}\right) = \frac{\overline{\mathbf{q}}^{(l)}}{||\overline{\mathbf{q}}^{(l)}||} \cdot \frac{\overline{\mathbf{t}}^{(l)}}{||\overline{\mathbf{t}}^{(l)}||}. \tag{4}$$
+
+ We empirically observed that measuring the similarity as the negative L1 or L2 distance between the two features yields the same performance as the cosine similarity.
+
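+ A NumPy sketch of Eqs. (3)-(4) on a feature grid; the grid size, C, and the random features are illustrative:
+
+ ```python
+ import numpy as np
+
+ def cosine(a, b, eps=1e-8):
+     """Eq. (4): cosine similarity between two local feature vectors."""
+     return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
+
+ def sim(q, t, mask):
+     """Eq. (3): average local cosine similarity over the template mask M."""
+     H, W, _ = q.shape
+     total = sum(cosine(q[y, x], t[y, x])
+                 for y in range(H) for x in range(W) if mask[y, x])
+     return total / mask.sum()
+
+ rng = np.random.default_rng(0)
+ q = rng.normal(size=(25, 25, 16))      # local features of the query image
+ t = rng.normal(size=(25, 25, 16))      # local features of the template
+ mask = np.ones((25, 25), dtype=bool)   # template's object silhouette mask M
+ print(sim(q, t, mask))
+ ```
+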
+ At run-time, given a real query image $\mathbf{q}$ , we retrieve the most similar template in a template set. To be robust to occlusions that can occur in the query image, we modify $\operatorname{sim}(\overline{\mathbf{q}}, \overline{\mathbf{t}})$ as:
+
+ <span id="page-4-3"></span>
+ $$\operatorname{sim}^{*}(\overline{\mathbf{q}}, \overline{\mathbf{t}}) = \frac{1}{|\mathcal{M}|} \sum_{l} \mathcal{M}^{(l)} \mathcal{O}^{(l)} \mathcal{S}\left(\overline{\mathbf{q}}^{(l)}, \overline{\mathbf{t}}^{(l)}\right) , \quad (5)$$
+
+ <span id="page-4-2"></span>![](_page_4_Figure_11.jpeg)
+
+ Figure 4: **Illustration of feature similarity** when not using the occlusion mask $\mathcal{O}$ (second row) and when using it. As discussed in Section 3.3, using $\mathcal{O}$ allows "turning off" the possibly occluded local features in the similarity score.
+
+ where $\mathcal{O}^{(l)}=1_{\mathcal{S}(\overline{\mathbf{q}}^{(l)},\overline{\mathbf{t}}^{(l)})>\delta}$ , with $\delta$ a threshold applied to the cosine similarity to "turn off" the occluded local features, as shown in Figure 4. In practice, we set this threshold to $\delta=0.2$ through an ablation study. Note that Eq. (5) can be written with the element-wise product $\odot$ and computed efficiently as:
+
+ <span id="page-4-5"></span>
88
+ $$\mathrm{sim}^*(\overline{\mathbf{q}}, \overline{\mathbf{t}}) = \frac{1}{|\mathcal{M}|} (\mathcal{M} \odot \mathcal{O} \odot \mathcal{S}) \ . \tag{6}$$
89
+
90
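+ A vectorized sketch of Eqs. (5)-(6) together with the retrieval step; it reuses the feature-grid shapes above and is illustrative, not the authors' code:
+
+ ```python
+ import numpy as np
+
+ def sim_star(q, t, mask, delta=0.2, eps=1e-8):
+     """Eq. (6): masked similarity with occluded locations turned off."""
+     qn = q / (np.linalg.norm(q, axis=-1, keepdims=True) + eps)
+     tn = t / (np.linalg.norm(t, axis=-1, keepdims=True) + eps)
+     S = np.sum(qn * tn, axis=-1)       # per-location cosine similarity (Eq. 4)
+     O = S > delta                      # occlusion mask O (Eq. 5)
+     return float(np.sum(mask * O * S) / mask.sum())
+
+ def retrieve(q, templates, masks):
+     """Index of the most similar template for a query feature grid q."""
+     scores = [sim_star(q, t, m) for t, m in zip(templates, masks)]
+     return int(np.argmax(scores))
+ ```
+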
+ On LINEMOD [12] and Occlusion-LINEMOD [3] datasets, we follow the protocol of [39] to sample the synthetic templates. More precisely, the viewpoints are defined by starting with a regular icosahedron and recursively subdividing each triangle into 4 smaller triangles. After applying this subdivision two times and removing the lower half-sphere, we end up with 301 templates per object.
+
+ On T-LESS [14], we follow the protocol of [31] by using a densely subdivided regular icosahedron with 2'562 viewpoints and 36 in-plane rotations for each rendered image. Altogether, this yields 92'232 templates per object. Besides, we also show our results with a coarser subdivision with 602 viewpoints, which results in 21'672 templates per object.
+
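+ As a quick sanity check on these counts (illustrative arithmetic, not the generation code; vertex counts follow the standard 1-to-4 icosphere subdivision):
+
+ ```python
+ def icosphere_vertices(subdivisions):
+     """Vertices of an icosahedron after repeated 1-to-4 triangle subdivision."""
+     v, e, f = 12, 30, 20               # icosahedron: vertices, edges, faces
+     for _ in range(subdivisions):
+         v, e, f = v + e, 2 * e + 3 * f, 4 * f
+     return v
+
+ assert icosphere_vertices(4) == 2562   # dense setting
+ print(2562 * 36)                       # 92'232 templates per object
+ print(602 * 36)                        # 21'672 templates per object (coarse)
+ ```
+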
+ We use BlenderProc [9] to generate templates with realistic rendering for both settings.
2205.08096/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
+ <mxfile host="Electron" modified="2023-03-10T04:36:05.120Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/20.8.16 Chrome/106.0.5249.199 Electron/21.4.0 Safari/537.36" etag="CqQPrtg4ER7jduIUS4xJ" version="20.8.16" type="device" pages="2"><diagram name="Page-1" id="6kdxHjvASe4cJ4S7GAvV">7ZVRT4MwEMc/DY9LKNVtPjqcM9E9LIvR+LJUeoPG0i5dEean95AyqGiiiXvzid7vjuv1/28goHFeLQzbZUvNQQZRyKuAXgVRRKaE4KMmh4aMKW1AagR3RR1YizdwMHS0EBz2XqHVWlqx82GilYLEeowZo0u/bKulv+uOpTAA64TJIX0Q3GYNnUaTjt+ASLN2ZzK+aDI5a4vdSfYZ47rsIToPaGy0ts0qr2KQtXitLs17199kj4MZUPYnL5jHpFxt4oLcsQVhm2K1nD+NXJdXJgt3YDesPbQKGF0oDnWTMKCzMhMW1juW1NkSPUeW2VxiRHA5HKrdAYyFqofckAvQOVhzwBKXHZGxU8xdmXMXlp3+dOpY1tP+qDRznqfH3p0suHDK/EKlaKDSLRgFcvTM9qhMFC5ZkgkFuLpXEphRQqUDIfH81ldrb41+gVhLbZAojR3obCuk/ISYFKnCMEFVAfmsVlPgJb10iVxwXm/zpT2+gX/h0Jlv0GRo0NHDvkH0VP7Qf388f6jvD4lOZxCG3SfsI9f7EdD5Ow==</diagram><diagram id="4t-Rl67331zgDQyjhTOh" name="Page-2">7LzHkuTIsiT6NWf5RMDJMsADPBDgO3AS4Bz4+oFndZ/bmdkj8tYjN6UqCQLEibmaqpk5/oOy7SFO0VBqfZo1/0Gg9PgPyv0HQWgKu7+DA+efAwSK/jlQTFX65xD8Pwfe1ZX9dRD66+hapdn87cSl75ulGr4fTPquy5Ll27Fomvr9+2l533x/6hAV2a8D7yRqfh/1qnQp/xylEPJ/jktZVZR/Pxkm6D+ftNHfJ//Vk7mM0n7/xyGU/w/KTn2//PmtPdisAWP397j8uU74v3z634ZNWbf8/7nAaskgN+CX1ya+Oac5Rfri//fX7GxRs/7V4b8au5x/j8Dd7gH8WrVfQ8Vs2bRU9wCpUZw1Zj9XS9V39+dxvyx9e5/QgA+YKPkUU792Kds3/XR/nmZ5tDbLP+7waKoCXLn0w300moc/E5hXR3a3mfl64OPvo9DfR8CtoiX6D/r48yciDF3xH4StXMawdkgRi/5xf+lvp+Sd4v7tAn8+HfYR3D854TW35v0Lk3oN94KZ6QWlXIwcW9Loe+DpjSU2VyIeZeZ+Xdl/fO+1ORvnMLZgE8sb4sYnV0KJ1jccrD+VtBN65bAETXgJI2xEedEsOfsanI633bPDJLhmPJ/dK8HaE//xsh6JhYY7ocRln+mE7+p9hc75PVWCXSUJ2SQ7s7zE94Y8qnpoBkE7lDcMq9Mnwwypa4wxnJS+r3blvmJDSOL+YVp0ipuV0OgrGrwQh04aClsbNMoejQ5MAhFc1O2oNhI3NId5URetuPgErRsHs4xWTqZ5xYCNU7Fpm1Qs2tpda9Z47+Q/CCOHAVQMYzmVsqokJC3tUoMNzRR7HKITOexm/mFdo5McJX8GQdUwVvF6CZh/P5WCsNvEhLqCTxdhXmpPsao28G+Lt+fZviBFKPlaPlheGj9qeREoZk75JsceuglhuSQatiHLwD6uKu1D00XoAWOYInqI5GhzrmPdzRtLeg3P3ZxlTS7YcqX6GZHae7jRDKU0QtrKuwGiplEMxKaDf02naZDWKHQYP92fvBJf8UbK2UPm3O/5Fo/G45vWdqCHAqznUTyYBx89Hg74gwHfzIKP//m3VfC3yWl//f28/6XD4/H67+f2133AF/t18Osj+fHX5/dTigf/3+MXWzz2f/xd/rnkWcr7ANr3aPa30ADL9OT7I6FymLf7/FBfpzOyxQtO9pBm9JUFsuCH9Cakp9OcVsv6YdCyb8ayOPYxjWypMIUjOC6HW1JjPV5a9TSqqXrwWsHIeqGeBXsOTNLYLD9EUCizkMLdRsrQF+p4CZFMGbG20bgSUXcfBf8XYpgxvPEpWE9104V9tOV4h2HAIPD3F2ji/XX3+n+P/e+x/5ePEbzHzDGCOTYC2bneeyjxmXXTFMJkqoY3rqcCUS2OM55V5i5ncNgUrAqyB/PWpMSndlgeqkRBPzUvahofxQ0g7P0FcOD+Al5J482x80a4mzWx1AgqaMXO9fXPvQwD/SCyyZfURcbpS9oj+jIFBRuwrWEA4N0+Q6mKP/cpiv/ee3+czOG+Sc/i3+X5lGslkSEyic0xHppG+YR+CPsK3esIFaRIQWgI9GMMuN7EpPB2HrYgJf1rUosgSvij6Mye5imaaO/G1Z48lB7FAf4iHJ/qOVP75UKo8jJmGeMw2rtPEiLIsvzNhc+FNQx6khxHePwYaaO4HvZ97kUVucC8hlx6NfKrvuZGYygNn7Zn6l9xqV+vUtuSHYFf9wN5xFS0J7ZTxpNQyo7qDu1iynYxqsR9Wd9nki3s/eVBEiEOCufOgsYmN8WqJTReBciz/fSh64jUb7jkSin7gRPEQfLgGFOnO/XtufWvn/dbX+oVLfnd6I9lApfs0k9h6RvtsYgQ2hGy5d20C4v4MMTz2rY+rnSfpbnAffubn0PxK8Yh+CHtz++zZ77znZEiPGTmhV8KtBz0FKGkofPqKEoFCMdC2qdGYr4fffs8ZjC1vYuXT50w+av4Zl26IBWcgyajJQXDfepNevMJeFA0aKT1/tmFYizLJblqq4ojI+InP+5QSz3DD+XIrXzcUz72oepIuLkAY57UFjI8nu2Ttr7gRtyeOLZ18RRa+3frHvWseLzb9mP2JELJeNy8IluCJpfaLMNkN1HqnRDx+ADcFcdm730439ZgcTu/Sx8p3xotM+hT/Dx3xjPqKFGouw8N4Ttcvzy/2DLCaCK17FRry9ds9ec/2nF3zJG3x8C39tEH/kV6y6LwU1ZL0XQ+N9IwJrmVMoo9NYF2B9HvLgvfgKFF99gJWTze34Ul+nbHQmOKh4eV4v3k/UD1+IKCM50GUhvapqYv5u7eYHVhYNWAOnXVFXFK8X3GL8t4+lCBxvc97nNGhDClvnEWkfavcRMwk9pOm4utRE6sC89Q86c1itVNVxpND/KTbItDIbaMXCIKjvSBhHkaZTFPPY1MHgbKqp1Moig02J/jt1kqCsgJHvwwjHU2JB6P4w2Va8NIuuGk3q2CtOE99JFc+k9AEKp3EI4a0XQQ8X7dao1FIY13vmNo/6CEenl89YrZRznAEuNPF+/vaKT3l/c5ycrs85qIYw0f1p6oc/QwbaX/4Io8rc/UxVsBe/7CC6a4wqqQa9mwPsaB
rdLHwNj7rlWUn8eZLPytnoRyRJ4mIJX84tbX/TO8ZN1wuHNijRlHBug3m9FuUgc1RW5sAReML9+s7T0mtvtapa4wrL/vqlBPjNuBgbhkmti2r9wQ3PcmIPChfwjp5gs0+3L4X/6EKd7iZ2awcnLc+qwrmX1v+SS2rWNYLoa7IY1RSXg/a9u7/XY4MfHTK/USJWwjnHJYx7mFEwqLC8nnVHkkymrDleR8ZxGJdjspv+U+XqDuF/16SZ3ihm/np1fjZuZGhw29ufXnHkjMobQMHg7gdNS8GkXn/sX2D6WfBIHH+7gxqF75F+9VFD+OMQ91X3Ymivx6J+Pyw+qPxTC9FMy60iiEqnIPqcn3lMzVo9qzrx4qXPgYkOzwagSddPTnEzSl598iI8KZUAWH8L0nInVjVA3gYyNn8/W9XT00PhgxdKQ3hgjFPz09C0yxDIoi6zYMIEn2xP7Zn8c+FlYruW58UkH1deyxv1u1Fvi3eRK+YF804duwFqes1dNP8/GW4OiJ0xnyx+gFP7tq5/HGHrXmqFEofGcaLMNbfsHO55Oa9PPLSp7CM1DUkYX8JV0cX5eUCNsEhMfQd5Lr4wrn5nJrPnHB8PeWcVeGL0UkJgA52sxUKmOldQMpakd2HqyDJxAB/bFDgrX2lb314aLzZ14PESD/o8Fzx4XYSSSi6/ACa0bVNyy4btIv1ikfR+Swe2JCEoJkD1wDEJpnPhpPlHEhTApxuJ/8a7x41n0USx+E8zQ1ArVRvSDfqNVEwuD49vVnySfzGU/k5jcvXMXNDTGh0yQvTr8mhlCNbUe2ifTIJQRAiZMFC9Fw/pAj48sOuAdVOAmMwO4SNrCGobt6SVeoSziDWxlqt5ZxbeMocWXpRaTCxMq8F/L91MKq5y5m3mOZ96lKNNF9dwu3ukbvn0Cmiyhwoh1rMFhucX/YSCk8YvpBDZ6bzkD47iQxG0JhTc1EeaZrC+w9/94RxYffPCr4XjICcjAw7B0vOWJej4x5rmlcQs4u9hLB8GVBTHpr/+Vb4ddIDZqPe9btNQTjz+CEsc6r0VSx2BbNptVL+RhLf6PIyfBFkTRM/5q/cCAQXjIgQEMsHs/gnn2hiKoPJl/SGYUVIkz0lYx41Sa87KRho7PwoczBVdVB3wk57aV6Dxa3rX7m3m49QT82RBl34OYct/EPAnMbwcE836cDKuDY5cD0IS+9bf8oMt6aihIMUqO/2iovUHS5L2sXgquG2Q0TYrNsMGaZ5iZVMTuR1Xb16LjJOa7uVnNmEN6IOsWQ2JFg+CONRpajzHMJpw/6xp0MJpHTpCeI6KcNIbT/rsa28KlXXaz2ZzrP8hppxovbAwD8y2dLrL6EOBhI4ZGMUNduLk7ZdTM7SS6qbJ6QaNmapzFOkVlch2XchHxUBZ/2fEXdwVoIbgPnsd3pVGntn+eFQWLe2pF+iOO2hh+BGjvmQY9yNa+8dCzGszocS2hjRQs3ItuX5SopOcPvXjp0TWuzt1aTqKSru4hb45c2sJFsnCp1BzwCMl8nqSrIs8LI8eA1s2x1g+VG89zGge7jR435uXjQeV1Z5icvQeimuq+rqtn1YeOD4xXBzANxHy5b4FVfTYgSfQRIeUlmtzLJeVII3xRg5uYCDRTwT2DAsTSB1BkFeGcq7kGR3Ygv5GT18wmY4QRWgz7ZQcl0qQSbi7SPuUIn/T0/ybhxOA7o447ut7sV2JRdOTEFK2pxcB1Zyc1Lkv9BXq64kg+CDgiEukeYRBPrnvV5Dupo8nUpQDupuLi3sudMqYCORu7nQ3SPNx+m6AjAxk98zIKiCXvPoS9EM4jxPNWGcZyesnufxdEQEtJj3yLq7TsdWEhLhwF3DeCXpLfBZKQ3AuAxeTxhWFXYcCof2PvdumnzeB5g6plt+9jm66ZbsEC9YQFtZ0O+yM0lj1RzmqNN5Mb++MGFiQb5zDruvjWsHm+jr4nqcTOVTcGPAjDxqjuS9ELfB7VJ6QtV1meFm3giXp1LI19hO/Igq9mKcmmDPpS7EkkNmzEBw49kCEsJ8zW4E8w/zsNp8xpewjxKRmKr4173IDVIx0m7P/bd2Vmzc6J0yuwuA1GWx4Hm6KkVIHRDXTJGgXlSQce1V2xbEPOXf8mEjVWbFKk4/2Bpa62WcVEVAncuIxPPalWqwwVwRBdbvB2sSR5PFMJhlda2Wb54VHExa+X9yNCcPnltkowZnTUYjnyulWwAZUC/DJJgDReFLyutPI4AFuX3jt2NovchtBBwXI/ALkrZSCYEEpE/bwxhnq8ypUIJ6XHb1ngBWOiEaypJRXIlBz4KiOfVCbAHPKvJk5Z7Gg6rGzcIHlUi+/5wQYhOIPZ8UfE9vmWaA4t+1WLeSI2AYGuNgCNMjYIFk33kwqUH5cKsOPjYyUb9YarMBbhinJVBJwrVlzNagtEVAmqg+skEs0+tQ9KorU52yCOrxCib8tsYt0vfnOisKQDpbRAMyJCAeC6XIQiwMzBxfV/Zwp/nAFs+yXuhctufoO7s8Iez5u1NOjVvjD/v5jIJbez0FW6rbnwrfxgRx0aMeQHJyayKGhkgJDz64wWFi4si7l6BofRvhE4bFbP8V58Qw3lbtHovTUTccAuz0lc+SQ32sSWVHDjqtWAyJAOBGBUEVuZxS0yLcKOy5xJ6ddhqKMPZgT7e4t0FYRUOJHfCq7KamKrXtqw25KTrGfJ590joYNW2h9rbXsX6qY2+6TdGKonY0e4kl3DecJzxFEI7g7bQ0CiDnw4TmzpUqxWmHLNergqKk4gheW5nR8TlK6zz8QrOuy3dPbQ2oHmNV/A5QUOqqsvy10pRcEIfYNinMVggx3WKleF4lZdiKjWOPJPuuK/KrEv/AgVH0sUszTSkXWXIzF0aQM8WB9Mtl6VTa7plslxTcdQD3YFvjSssNppeA3N53X7raYedtZmjiXirisaTk9cgb5HQrh+/yC95RGuTcmMFUhIOEtJWmhDWzltS9+6/ePJLo56b4MbJJzbgrR4rxuBlyy7RpNgRvx0lpoiQnE3FlvDhezkKb8igzwPM7GwaPX5jqZs3vHuzfPxwLyportsZ+3XEjn+pq2WBH6QbRT2GpQUKWv5Q6ZX3NpI1P0ey4bAvPIIxFZx58KULjtjiZg89d/tJINb+IBAyzP7GSTOdr1LN36PEkAiEB/s6azJm4Soc3BhJ2R16+0a+Jqy0i4Uw6RXTkVWlAe2Or4IYw36t1ioK4cQQGyRz1HI8EXO1RHeA/dzK7seTACLUD0HlxLHvizEajIQaOXzqPI/F9AG3CfBfJsgpgCQGYK9bftxLJrmpee9veV62tbumPtaZOoDipFVDUgEhF//MVqnIyaPqNz9yl681mabNaCFx4Z8v5W9/tb2jfeTC/sSvvToAylD8vVrPpMibBMnbNLhsAVBtJwUf8gBh2+nkE+vGu8+B9Xdf4CQjE6xumA8Klvd5GVFtbB/CfZXb+mdu2GDZgccW8a9n9Oq8ipOguSz9zu13eN4/UbyG1q8kUrWlH+otoKHmgRtuRYoLHMKGXtK79EcqUTi/xhFINfhWAh6
UZWD16xtfwKn5kW9ZQjEAsW6/IsXMJOJIQnXZ9md+cyR6DqdNbhpO+NrVgFaNnyrYgnSHXB+LxbrjMh2Gv2jyEwksxs95fMFQEO5QL0U5aDu86s7u55ZQfbqyIwDk7YCFhE3IN6X9W/sSpYlVTQhviYLWiS0slyhgZeqJWCgCsxZsYktcCbiKmmTSNj1vAbqFwNkJ0lucMjHoNoFrkVa5pvH9QfvbhpYTeLuaj9AbrLFWdG1YCAVJ4LGP74eeLajYuJSfErCil6kcmNuBhj8r9HDfYOwV4Cz5sYHetHtCDzbrG6kXyEUd84bemxfNQ4S7ycwe18k6oWNuJIeTX5+opaj8uhfoGpqe29d4Dvs8i0YYdpsKVu0dWIbL9gpqvU31uZHZewgZIAq5repnB/1keYCS15G0EUqW+6DcTLvK0+0zxTLHusn03P/4a1G5sR4QP21emtX/7MCI+PhGxHvdVMoxdgocOqZSPOWc6MjVg702CswSW4U6E70b3aoKcsSz89zxnBNRdsK4rc/eaU50UEgLcuVzZAvcCNHYfAUGgzNspuVlHqhkXPHTRh5hcXvuZzOBBdjT/OH7LTsKyxceeARc1lWZ8QXgt1HulgLQNPDhKiZYez1LGerrdKK638b0K5DyXmxjvPojgv0KpfkvUI0gZMBq3ZP1EpDJgjWcdKzLpLsZee+LFYUsQjZ6EzDEUGrpiqb6eFXDvvlbsWvizmzItuV0iFMgZgdL4M6BG4v+c7Bw99nlcQjrMbh/bNL72TH5GzAMmvgYz+yeCcW0fNBJRv0c2R+6zSxoPVeAoUSLMHieRtrh40LnTib63hZMye04bqjGAfvjwYl2Fa4yrWq1nPu97PRJKmn1vVJqyS4QusgS2xLrhcEvtHyNZ84Ii3+ztBNWLQu1e/RWJ+N2xaz9h4NHx8JtFHovc8sIREjudNPC57BjF1Q3nNuIeaffSKA+g2XEtPWl2oIbeFUB9BLz+lCc5YEosS8YxkV7UDClyVaUuFOytrgAVZ5KPbq5pqdSI/ANNZbTgpYvOfSaHffLmZBNdVGv1e+qBttuHjUOEoPcPE1Gm/fx8Z7pYZ3Kzf4+XbQFL4P3u01UYekvfmNtgqP1cVk/DwgjgkKne1b0GMf47JyR1cB5m3jr5kmHprDsEO9qupiZv2495aB8R8Veeym2jnTjnJf6ziM3J7dvlrzaW3xfNr+OZFX2w6M3AHLRZvQq61zwUO3Wn6gSWHMP4MSf1GvrLuQ91QrFrsbSWn7NpEo8HmR7vRGPbcq5U2Eylw/bvlkM2rFzZQTrMGWzMGXd5vlVk9cpFmI+miKecAL8UCbLAbx+Kegvsn1tIt1r4unsCmqhdRkCLsgIYIpCJIzcPuQOYJwc2lcwCmjDslblkncK6i3TJGMOwM83HE3OHw4K2xFEfEWfHxLBHatUAaG8jCbrwkeSCxhe+NJA5zAev4E6YuYPKTEuM5fFn9i5xduUkrhlLD9G0eYA/HlQh3eDqxILG5B60J1dFZoHAbkLWJ9c/gpFf5OqU9eF07+7ybg2stT5MAjdsOQDQ9SZslRVpThcrCukgflttfKs22u543QrmPvjuCUHjZMPWKzsTrY2nvSvsVgnikStSgIkblSXRnfwAl8sR0JmHPfmOm2py9yw9Mrn1EaShSKT0nfjyG/6FPN30jY9Sr9qKkVw/4gSp3+f1Tm/Z8jZSJ6MtgjS3u2NanP7+NP7sOe/PHWjPFbhXcdUQtvZO3fdHg0j+rUyj6/4wSdnDite0HTTs0A5q8xRcAd3PQr0frw1qK/fUjoM3ON5uYlbgXGKGeBWrIh2Mze45dkAXMsQeAJKfUjZ4ZY0+8PzQVyymHR8nJ+0iOiopS7dYEWL96WVAI/P0xGsBMolsxJ1zdT9uHEjHuMRcPEpvFAzqt0SOAEyhq0/rjvIhA9MpjYeegscN95tT/NEfakND0IChHViyo/t7CasNS0AqAYi2FrNmp8J3F/GPDNXwqkyhnj1Ews5X3GmUeCLDTwou5C70yn+lWa7/9P7uB83De4UnbCWzGA+X7FwiwGI7NComfT2GYzl50VTa3MrJzWSMHZeiN7i9S6zMXh6Nv0FGFPQtFvjcS2Zj4+9udnNPI3EskNiKZk07vX5tbn5e9N8QchHc0LAWMPci9Lsnhyn2Uey7Hril9UL/uIRnXSK8/AaczjT2AMNEy7Qc88vEwBBQgy/zQ9O9662jvdo3Ud0PULdEFkqQBIxSlziDW6OeZTTMde3F7ktN1Z3gpB+dRtW0pzW3/2Gkq/FAq1HRhQX4ZHuc5Ugu4tswhmH2/9mXcXCzbgppNQMYGea9UBK0r0/BsEIl2ISk9VzBoghvKghN3xBJW89ROUmOB1bbkWbuYbxxfva057u+/RbZSROVlUtcE0P7/k5vQ0/bMEO+ua+nKAuQvCpSzji9CsNKR+DoYiZ9nrCYCatdUBetZsHpFUJPnoRR6ZwVf1xiXNEnOf7c4g4eCCvAmDaFgHOwrDE4WSDa6CewyMGHufcO+FM0aYPeSCi3mAsFA436G5wkqir3vuknI8JM0xanj0PEbNlymS08jUfZqsEJuEYMx9bo7h+ZbE2JBwdkdIghhTiy4cnaXuHEsfuQ6oeidITuJIalAr1tnTdMIQTD608qC4yTxGxrQ3y2QjrbKF+9Z2hzX1uEvNQ1ILQiUPwFQHZ5C/hTvcpzFS88XfGbYJGXrh1vB0TpftOduEs6W2+HUmKaoFnV2ja9IlYUOMQSXCa6GONLkHkr67vmUB4iLcsgwX7nfS3xq1DajDHZb89C2TvqMNsNGZrqKlguH0p0j7XZwp438enp6+E9VOVww/+EeYPQmhQZGyKwpNf0dTmicM7ItrSZNK6uA1jdWqLS3xq4EQpigaqY6Yk4WpDViDfFf3MOOA9JFTolu72kXlpkeZzbQNAmLltJQBOBJBtQmeRQCcsKmish/fKYGP44nHLgb/ac5+0fsUj0c6PcTRNplufgxWXg8WyzK+J1IGO8yrd4x1rmjiMhBIMOFS1baPL2ELNcjPYvHD6MtfHfzNq2Ucqbie5HqIRqSTmHm96xEFqyUsDMopOEGwA4WbGbUgbKxMoFzwxStUrzs42Sbmzoh7buBlPX6LpVyaPIDnBJ2l4dhS59kPVaG6S8+rEY43eT1OW67A4UKhN76j7gnMpb5mg66nnCHBqyge56iE3/4hus31Fe89cUIs+74/EieIml2KCoHZC4IkMylQiFXLaaW7ekeodjSrkm6pBuQWzXYGz8j6UNQdtNnBLyfLVjUp7WLdNKUqzJjAWJa5xAg/Xq2XMJ9MybgWVByBcisMNNpO4uze2j+7h8p7z03KQo8uUt95oxCPf8yNI3MwLXRIAkNmhgECo/MGAUT0+O1gxHZCCBSFAnscyZkT0NFjB68EihihLtb75842gaX0SF34CC0x8h/BykeykZCCbBzVEFQivGuMjxpvQXnmURWH4WBe5plOFaBYXs/IOuifNm/YvHbtQIrZuJ3eWmcCUIOJvC+M0RM3ygY5tmcQzxbuFkm1Z+tihoce3CN
zwL9uhkf6V3iocPbU8uuiyFzbZ106EQs6RiVcQ2UbLiHWNYCOyr4KKEktiaxViN97oJVL9bAGYE8/RUUWBj54YiSoZbnGKD50bpoXhTNrlekL9vfwK/u9MpM5vrNpJBrlyNLgpXW8NN0WNO8PQIk1HvthHHH2lwXywtl+FNMQfVkU2P56JipjQxNnnpDtRyHugHBye1IiDgLtSvhUUevTONNELIT31rohiLK+VOlUagQy7QyE105hl7nG1NMv9M48JyTPbIuOVm6tTEZRIrrJi921H/RXfx/mDg8/CpVhnYIufmUypqMWiNgmR6E43Nr4McoqgW4nMYDGN+3NCdVrIdJT9DBzDx6SZ7LrxumameNbqxbxpNAJeQib9dfdqSiCRrBcCEdSg6EwWsIRu3hOqhJ12ISoO12ACH82QoCZo4fF2D8BUfEu1HidEi7SA7PQ/M8AvjnqaygrQQ+Yqee9vqSRpkzNGG394KNIrnyErYPlRXtQDwnn2FPd/XC1TIHymaQVw1VLswz624o3nboVNbScBD/SMY9SDxpJnI0EPv18LWnk+WGffpNJD2Nc/8qsKaxeA74IpTrNUbmWK2t9YGR34gb/oV/G9nmd+PT/aE6IXYw++57RVnT9ZJkqxIwyon5UXM6i8CHZqMbDgWy78ZWr8W2SjFLrC5/69EkCslsdjMYMXtRrP4J85Z+5lPvh3awQpYAnB81sNwVMS2YF76Gb8mlvte/ads0xQZZMlmWM6z+9VAyrPyhx7X2bP4uNn7YlWFF6XaZ5n/qg26KfHQ7ZFzVTtVfyW7ee5C+Q02y7VkLf2o26sn14P1UZ4U60r8VsOH+J2kKdvJddE2B+Z8KMYXztnI5Aq1z/rUmzswQ9dJzUczH7LoCvs8ahee31fRjLF96qEGbKfT2FAa6Gpj++VD5Gy3xZz1NwJkWXxvXKB2q1nIA+orX/qH9UhkQI9WIguuRPDym8VD/qDelhKgN9qjS6K77VLSaB8tMdtVuwZ7MX36iweMDpGCf/XHv/fscdJQ9IZYDEaS+GijqiCRRGvEp8sJ6zYGJQIxJVSWnUB7frMtdRS8KfDvrda7tvX3dnLEmCoA4JaY2JYoqArv9ScowG5y2nVH6c2tIYrjiqb/tVKj+3d/Yy+RD19+8CNnD09GOaflZfCIyIfrqQ50hJWW4PvoBBfeKfG7dphIrlU44XMyyePCZhUclBSoVp4mA6MuH8fM7lYX7tgXSkQqFYw4WkggH1EN2iDnMh6s3Z/YjCRSj3/1uE4crHs80e1Dvu4brlkakDuMnsvxvQHAS7h8A1lsiwjN5TTWuz9AYKLv+q3qoeFMVXMEqIMCxVKCprHx8xN2Tc2up9svkWm4rL+R93Si0V5xSuk97CGmUdAlJZcjZ4k8iPOgvwZCLiraLcZF/bZhCDDKRouCXPJJJXizzqvnqWEe5UROV348BfJaiO0DV8oCpTgNOxCzrnzTEi34LE0z/DsWfMbvzGft/hpFk6GftivOQiFsD1XoGo+G+GC5Kz58UECRHOW/XLCjTiKaWrPeKI1YCrRJtWOp7mkkCa3PvGK773lXu4uJ0XWrVgeLz4QcrK7ifMXqdOIrgU+kzGfpomVR9iCfPCl0MSaCGo9SJpxuOJt5x6qnD+qjs23ALAnBYTxCtxbvOqfMR9VoCI92BPOZHeRilAtMIT50LzqRhKSIKz4xE2qUXPbsxxVXOtAKDleYvnVsulF1+94sRPAb0GBX2pcBxbpxf6nDoZ5PW25h6Lo2N6RDFm3qFDzILPiUo8JdVxIN2xC7Z1jXZM/x0kwiR6Pz7O4BvP8RFQ/ih1OgUHjnjZDPbxWIkGW9p11IAX2msw1eWSAAzHoQj48/vxvDvZ6SeVh5lqQzb1S6iu9yeckZMhbAEqltg/ySkk6GcTqHtzaaWXNpcm2wlYDPT+sLv3wBfSTv33ShXLbs+s/Pmkg7KhyKc/ua+4zIew4znc7kz+s+TDtLjOIpYzIaAp7FI8vZCem5P0MzbGb5dTLSY6jY13CjEgj3G/YzZes9FBpi93fnyPC36hz89WG+7xjL86MY2u9WQpMEl3eOSI28SlclP84rHjreEc+caPZ6Ael/PBM9FN7LKEO4We7iBUJYeOUjebIW5j1CU0QThOiaJbhnLIOVw0zNsHvp4ZMoX1DEhwgiWR3opabQ3o4CeP51qKlgOT2imMvwBklwFBrSOZ2RANThHWRy3yvDIzU12PBGTdH63zkQcBNcYcZetmtiE9dgY2CoJKs9vGPBAO3dJYmD43u81UCoRBvADypKS7hNuuwWTHfvQ6b327sAcq6mQIIiBQYLgcty2ElR4pEcZULElg94dnnAmGBm+IvnCIUEGqRRahPvntoQRPW51fEvtJItZk+uu7ldTqmY/cVbDNJ9k3TLN2qxmFmPQlVQkeKuComlP+r8nt8KBfnSjBY4Di21KsBQspXXsrHrVCxw5mU7UwqUBJEhvHIFj+YELrfHtC01z7P0d259L7L0E8Fc989HiMC5iNOQggnO8uTCbaXlGBn7bnr74s+0Bh3PbCWBL7fzca/IBD8FZRNamMHeG4gi82HZptUAirzAuc7wxAs9WZdX5UT5tNeHz/8Prj6ngManHCEH+p71T9Tw4AfgggA9poL7SdemQCv7g+PY6LL13fe5vLb6/G8iIIqjW9VoYBJUU9XlgAPA1rI2IefNbC1Yj2ElOISHEYwnhTmYLCFnHadf/C7XuOggI47g7ydz+CF8ZOnp/VBzkn/0Crqbp78OiwnIKlerAZqkppnMqFVV/TPXzW3Lju83BsK9XZYXZsoV4E5wP4MZarGf9ltcT994R8um+yTGodVsfWmk561IWz3UnPnu22/Knw19mSUgWCy2PRIUHSNuZvkrLfP6pwF/Xg1UmL9rJonffb9P6/toE6TRPWJ1usYRLUJYhN+t46vuFubzJLLjfOA5q/b9RMc8eyeZ424zdme3HwIMKgkFA4HVRT8RSpi4EMC82u/DtcrYD4OdsGdJ5QrdujUgkmBGIwcWS4e2jgsKddlxoe9OcOjYTGR/Vm/DPBeduWLO7wV97C9FpB+lKuRmoaqfmtDQqRNlExkmSYBx7ros/9VBa3d3z2s1UwR5EqfzQeFo3naHpdG05YhNPKJlnDG9Z2J0lasoMXuU+XPuuyXyLUg1EROmikBiCNJZb4b0h7BlIpP+J3z5BEiinp2rQM/fs1svfvjWjxtHuQu5qNNXOFck156jduAztk+5MdNNwbsE9b7j0ruGXJHQ66b5wbKQ7nW584Gcc1aeY9bvG+XHFrJ2Bk25k4Gpiqk6B8OqUhYXBOs8Xs8hbf0ut4z4AMb4duWLykH7nXKhPXsc8v85glyH+T7PByHvEeqzx9dm8MRFe8VSCMzG6iEc+ci5Rj+55ybg9Qzix+Y5xCQ5BsDfBHx1aeOv+coJcgKx0Ds1CHPT/UESfM3FR+v22p1d7GwWAmwAVE0Gil+1rD/Wd3+o2aM8HPE0lmimLRwGHaAdLHLH3bKgjtDXylvGxUQXQE40AztL5vK3+HLeql1fW1Ic
qLLFyFEOsXc4TC6BKNzhc6DVSv5Kq1OZaT+vW+J29/jW+QyZI6HvIdIdxUnNc+gNiLtUeJIT65XlJDYfx2hZ8/Li3/CVpg3ueQKgOMhdKxUsdRY7FXqmh3GkNk0AlVTv9a82fPj5yjIxVxLYl5oONSSwPBoWlQaqqt+7k4IZLCAJEDNBZIkjRSEicezx9pJVVj9k/yo238pj7V49+9W+0qhxuLM7L/6EH/1ofQS+/iqTPoANM+Az+WxshG8LfSOY1fMU7tkC5bFn3e4mR2YnQeKwKkbLiLLNSaWwbjhdIc9h88KzGVNBhMxrcI9ubEY/xvCLozXK70YesiM+7GYBwuBRLduxyUVzGnvV+HT29Dm53oEnuRkxlIJNNKC0Cv0hvbcbisvH4ZYf+Z5rtDPrzkvwEByj/j0fPvQYnPLK+XsYDlHimx8AuEIZFhcQjKvo0Rc2p9Pbv/avcgUtyK9PTSYhsOU30uSgM7Kl/qkWgwy325TQ874FavmufHCIj76ymFCPyOPXxsfFGZ+1IbfTGG48FaepA3/ubEV7AS6aQY7RZArQBn9JkC20V7ebVU8fvoRjZUtPTD3R75dWgUeLJB6UyIGYV+cdkSpzv9Cdy1ybI8T5SD05QyprmiGsdS0DhMzxNe/eCq+KuKb++AK29RWd72W54p0h8vTWG5n+2Qcr1+7PQOZeSA3KzwdQQX1K8SlkPpkJIt4ov0+Nrrwq1WPgD+rYOdOzxBPe+lnBNYzMYES27d87+feKtDzx0v/8GE/elQrf9bQHxUoX55fZXKsxmVnYPsygkS/ruS5AUioqgPsmaVOSWDgHEpVauHzEBlRWuMRKLC36E3wp6b8tNwnxzx7Y3dFgMjzk3cpIYnNuflKwahDpThuRrmeT5+b4UavvPdPwEAH3P/lmW5mfi/ai2t7fULFIVwWn0r0Pu0oAokThwEV/nmhqhtcfO3T9+gzA8zMDi75Gm5q4qQ+tJPR23j8REphLKoiF866seZRr8U6KgGlzpaG9/3O7SA3hHJBI5uacF7fd1Y9gasQiLoSphtQbl+5rWqFI4rodZeK+LMD7wdsjaH0sy93C0ux13rpaEKDAxsWSpD6717jgqwsadl/kveGpxoQqmW/9qyWWxEN6zOY3Q5m6hzztmOr7VMIX/Y9o0BPFPbRgmJmRKyR5Xlf/nNvscL6r7p4fab847kZ+cRM9I2IHtEmORaEQy5Xq5RUdbBrF/XL07cvNbVFVlS6KAk3tmvIDtv3r3oDmKuZn/tKQaADPC1szBPGBc1ZGNz/wJ1Ne8b7tXUzkvt8wAb755cNXQx7o59wzJ3y2hvaWIOoikylevumyk8tnbLKD5tlchZ+W7tZ2Zd0kTCFue8Bd7FYDoSf+8Y4i9+1RkLyJ65GnIKLXtR0aYsEcJUXP8ZcrLIX4/qBGpqegANBdcRCgo3wp+O7cEVD3T+n4PM8c0B/4ZxoMH1tu+r1E4kqTfFnIaw+4VRSOEi/M0SH6aGb8Xz4SgUF8MGJqj1hosN5hEuLeklCjw2vbWQUJEu7fXoGz/4Hd9C0L1TY+uep10/8vblXv0XDCKp6nCVEgo3TIhzcOqy6BMjZh+er1dxvtfzC9uQXmn0e1Y1+D88eIt+ctgqhPmulhKEajRJRWm5nVjy2bM/10ds6U1Tfd+JxD+4pnjfPZzU8hrzVzwYDbWlXyt/NrTY2bI7ms85uZtjPbf+rNyolNzxcdB/ygX/qKN565X0ComW9zw1VGsO8KPscxnIvf++aZx/3xXhN7r1yYuH6XgYQiLTUmySMU2WfjFoBGbynx9Nmi0b7iOzzJypoN/exWxbpVOkDn+D6RhWkeQ0TcSJp2hW5aNKFyResvyurv2sW5qX3Av7iZ/kMPv4qaxUJSfR57Scg6FiAx695wMjD8TLquknRz525twi7AGofJQ2oKr/zxDMnVNiivPhGYYmWXZ126dsrLcCCwI4FharQISIBSxGcnztiH8/bKdvlILKEbELSfHhlm5P2yQeD2pPY5UpUAzYbf9UezojCH/5Xn5VFkCTcpc+i/uVTywAwpGPKrNx9WvIlZOmtnsZOiFBFIm41ls4IT50NTX1VlGHF9Xukgic/8mWxHXD6NkcOm455OXtpzMlaWHVusU/zSk2MtR//5m/LAv6IS9LDl55vVTUlg94E8xRUqgmR4UV5yeQWsDRI7oCsWJ0pm5GZ/fd8BxgnW3Tl7tGaengrlHCVES/KhsC/V+TRd4bSDyCi2U+N9/Gxc1D/RaVS8vCAX43Gu3ku1UFTnaqt7dC/WgZdtgGH9EMDpYegYL/2sr7UhD8bgmkdJ146AkqcFRCvre1ty/ll7SN/FiTjBSX+XLLQjKjTovUf0Sjgc96Frcg1kHXaPGF7vh7RSFtnGB38L5X+KPh3Fd8UY8BD5KWamhQNGa2/4VDmf76tAUQE+22uD+fzoMVWwtqjocX3v+xCVkqnz3ZfFFHDDGpT4BH8c7xukX64U/kixN/8gwmL97DS7sACbiQGt7GyEmFQ3a339/Z0j5cToaO6C3BA5Poabvva+nHDPw9603ep/tVSk7p1RVlMpRkHLdqjo3vUo+jU9vu45ca/MRgO2Hg4x+z4PEP/FRpb3zs+1hLDbdwKMiYxJcQ3d8ewOhYEzXPZETfY4LvKhySKlTnOlTYDjSRQr6uS9fO87LSsO/saQb3efri3ot0nvxnbEdFurSwNrN7/9FgpaA9eFKXqQgf80FwVFAasR4zg+HOgmCRDr8mNr+IZ5cRR9asy/YqqdFB9AwRLkaAyD8pGCmCG6GcSzoB6UaD8r/eF4sFOviT4oDpFQz64RzbKDM6skKAFG4iIW/uMa9ob39UnD2QYKwAf/OYcWqC5R6fN+FQO2OhXFNETWejlXAuCbvU6GgQv4yDeCWyc14+tldPjfSrT2VGRR/3ItNjEa2QVErvHhqhY4/i8W2YGFUUu+xZOUG3BU24yjQU11uWn52LsoHWAkPWCvG6Nr3TG1z7iHzmv1dEct5Vukrt4D6Vftgkd17PwvZkFQQSGtF7LAOKZnKfTJIa2hoPCISEjPgdLtZM41Tm1jncWq8Aw39kWW1Bfmor130SX0tRDY1AeygtaW80JFF7kLvr1bqsRLkbFtZxNUYKebERI+WD8i/ycrtBrEwxehYAvH+yCEhbaFSX+J0N5Ci3r3iPOWTc/KUn+TPavQh0/O1xVDhbytcKb4hPzW6QO7yQiuif6rsn0g8ov3IHVn+8CqS2pEMN65Fov2Y1zOaJYrugM6g1FqqqvQsK6/qzqfkRbEbZS0EiAd5swgmXHj2ynZvTCOBVsgj0dt6J6zY0BwQ18mtG85LT3bdwg+ohmxSehIH77f9ivqH/lD0HoeBVMrrEcUumO5pidvMyu7aK7tI4uzEAYlshtyr5AsLf72rFwHT+UsBP8idMB5k17hy27KEk9N/x1+OvVerGdKHIVYZ7d6oKzGbDhNGdSGQjwiIoTO3aT0JHmdNREpIu569iozluoIinL6WqrWgV8k/dKdGbn4LR5NOWdyvxnAzZ/FapFw69MhjVDOpnX
nE9kS2n57t9YURWQyvebap08CirimlybjC4rA4rEXfx9wdJbqD/eoFvvvM2NbRW8UHnNafJVb6vNbwE/YUh99HOcKp+3p0u45m3LCrmlRfMoZJcIAVMe7qvW4ZPKC7+C4+XKBz0p3U7CuKF4oH7Mz7lcuLX8DseqteJlEJRF+GnAM4zCQZ8vH3WPvbFI6k2bwntNV82xujyCq8UHOSdbUSRAa4aDuMw5qkud8J9zVJbaHnp73vg1+rZAn+Brk/oBWyB7BZtSQZBtKsPpmo64fVXzkFS7ojqFTSfPS+PTERSXYU6cA681zpS9OCkHg2AYFTcxhoQRhbP+XOCGLmFvsaxEK0nzywEG1N1qEQy/BAhNxPxiRH33eWwyx3jmm606N4fVSEcgYjpMVz8emx4siqlam0dF9R+7ojfvX95+8lKoj6biI+/S8htY7ZNE3V2+JwW8TkPqgnR0M+mg5MxfIyU0JKx//fU+liNkhNd3H40XHfQu1DqEqtaKD9X6qnhyODgA/q/pBcDbh1kTJ2vVt9oYbIDHQ9zfxK9z5lSBspTyD0uHnZ8I10HsLNei44uUR7lQHG8jOk4WvtOYSeQDwOzGWDgLvChFsEzHHFNsrPJ2PHpn0sEQTHkLQdLPOzP9Au58KeGbHhKqrMGrBIZErMNkJ5NpG/VnHCKUpMjB+2s6vva0GNJY0fRsqhlbThmN5aDEWDblzejZYKD9pFNEazpv6GOKk/+H92Ie2O6KxI2/AQxKtc7b4lFFJExamHT5pnkLSPK42WnH8Xv7tGdr8NJRaqqMvAgltTywC5db9rigdC5HLWgAaShcubbYGrIhr0KsFIUuJj6skObb7Y5+vePKfOsvxrl7nH3wLDnLpvegApiak+PlkV3yZfJd+vbqbIrwjgxnLnFaxIoJoWYkr7sEnL/R5mIGw3HSrRzh6IpZFwFCrVv8evf1MRd6a1pqHiMA7f5nvQ8TfMUteeMDcmr+pYC9NC71iWPVA6DK5LzEx8e9CtHqqXlKNd2ekArh6nbyan9IsumaY4cgWMcBPy++ExgVrR3UjVioPGo99vnORQ1NAIxp/NhAfwkqJgbU2MCoNHcHjKc3nj3dIbwdINvR2iJSPoXAlZX0uR4KdT47I1tlz/QsV4c8i/4Hw4PkhL1h9PFFSkGdasoSMQETQyVIWCctB+E0wAG48uXT+Ok5u/HLT4lfhMhhhouDTL5KBtrLa6UAKvzoq99vi1FfHohqn1Itb6sji7/fJKNyQs6htftvEVW+/MMULAmHTYn5Ukk3h68FqTMXA2wmvwEbeMfpvSiv/jej1Bmr1/pJM0nvq97Q4c7ynxFh5uyvlRrjxgJxHOZsf44ZuFsM+tByFA8qKgcWVGiEt5hc4Ku+TsN36NcmnK8h3yL+91tzps9NFZw9ROpha5HZ1ptMA4GBWhSW9pmc6s+nXZZ3j7EUvrkzlj+gAoicXd81cOCyPqYtkehsnzRpSNZnJ9Ph9tZv69B+KRWRYb/uY5IS1FyZAeKPBd5A4D250qCcMKmY+0Kh1LNLu915IBJFQufpyb/jA/qfaEeyN595rwUX3hEnsnx3lGWm8XJHXKYjcS2SVsbpMOD/7mD/R68eu9zw4OUJQvEln+cvBHr6CyKqMSmbumbrPfQvOvoILSNIgq50nZaMTH208xLSv6K3Lyz6lYn5CmDL221VTR95prjJLFRdYQ4HNmod5r0qMOHX+/+Y4uNwEZP56gvqQRSd1KbZm4kTip/Zjfv/ru/jp3zxhoztobwi5RU7EBZe9pG6YvorvnA36yk8kJu7VUO8B55HgMKeebBWqHalXdnTSy1pp43lX1cyLxAnAIkQwn5SKionjgTm8qDm0wGRsY7YEmiPY8sCsayBnN/P4l8i8nxVkkwSqAZUWD7mH/N5+MpErB4/Wi56uXiOKMnhD9ZZ5CNJdoL1b/Fz7jE99WhHlmRcOsTN971dkNzeguUIRCX6WqHFBwDc19tKJDqH5IAk9evZGF6Lu8eR2/byszKLfRQPgBMs1bGaK54jkry10d9wwnPsdHXd/Egbi3Ku1eP6X1kUcvfG9enMT4MYvzJvIKKstkFvxXmyBdlhu1wXSsOv95ZJ1FfeKzf8T6C52PEWleF4p+eKRAF3ryuUxLEPZmsH9jOCQXgEO3pUWR3v3PnaElqOF5gjXX3eXlRzsAP4h1KVbplqXFimRU75L1H2Qk1vdGSrhrbwEJsv8HaRlPuXeAL2GpzstYWJmgHeY/37vdbH8+romHolty+qNCcLqXGZcgKH2WTKGFXBvmD2R+UqxrouiH94kwFdR2E8P4eduMrZnVVNgSRPvCri4duCOg9fEPvjLWAeJo78UOFLGNtCPg94i5DH0IDh6OclJ3jMy8RJ/5d8zFPIirGwW45F0wVDXCVdJrzTnICa51L6nS9b/8qXVV5ul1r9j3xZTTxx8MIRUO2Tija3IC3m0LLE/8oTPgJe+cpBoNtsN07tfyXxz+Q8zaNS6vULGYUQkEFWGkGBu/sqPVr++Y5BMArFDqI8zBV5bXxLSUgIU10wWrfxqvcLlPAJROZ7Vb/W6Oi6/pmODuT8itQwX7EiMK2Prx0M4ezlry2+ieeFvrMovBXP8BOTHw+5BpiMHSo34jICZbHbbtwQiaBuv9z3/yvKeQDl5kjL3sIggNeGM2PRbfUZI40F/XtWSdxZEFR8F8ML7EhBQj9cwqPpEYcWde9f2vbk/k9739HkqNZt+WveHG+GAgkQ3rsZVnjvf31zVLdfv5tZET3rQcdXURlVSSaI4/Zea9tH69i3erOOHbHSRWiRYdivtioJ4v3v+LT/9mACr6PKWWh9zMsh0aFXpQM/6rv5IRGbJXAF+e35ZD/0S6cgxMX81cJ67VtWYEibfsI1OKOHT4tU5U3t0KA2GeP3ThBBjYV+99PhUW6fqQrhdnWFI51D4+svLkfgi5td/JysCgbAye1Q/VdEJBDvt6D2jAwulgxzZR2YibmiUGUgCbvsiMuP4sLnRDEhKI+nqJQO+N5sH90Zt6aWLuo5VoA8tV38MzYAvPPi1AmDaU4yqo05Y1DobPoFz31JJLgJq+q9BOzQWI7JuBI4suWkVdjQJbNvlD/PK7ovTmIsoaEeLufSB+Urh3GCnYBnO122K6+Vuv3+azRU90WPk7gH01HWimOfNdGkM7z7gZSDgNDzCKE2rH7UBv1U0AIx7oI/ubL+ygWgZm8crlORZ0wSG/RH2VGsp6HSveF3KoGHvyCxwrCIOmFf3+oc5VnBYcdQhLZfPuxXedlFDrYGtxB7Rz93P8BO56fdG8VxAK1LqUMrF8iJEOPq7pngrd+fpuwmL/WvcDxhrKFE8YQjSTOd39UkJXYxVEisZCoPi5wTD8fmmN/SrpeZh2B0iqyO0Y3yQIjIKha1D/43rvXf44wsWTykR1gsz/04O9Lq5/z81vTbhevrZ5FzqcYNnws/WRkyP7WCXmi91vOhgWK5kxfjic5OvkSN7zfZm6I9je9wgMXe9wv/tnaf32gILvSdBYUm6QpGIVENdN24Sn19q5oA8F2QwV9
OhPHKGOXTKRs3Eemn7+R1QX1Vn0CUxqeqN+Q4/4YwXh92h5qPujyiTjbnEX1YEIFRY8h1wB4lDpZvYE7xW049nE90IzJFjEZRMcSTmFK3ZDpEP6gThEhBneipLZLsvHlr2N/6szdqx/K+7kDIX4f88TWDIBIA/TH27j27WagJW1MQOyGK0ucXY2E/24s1GS1IvRqZcN6TjgHpNa45Kkz6jVQFU+3v0dZkfpJEZ5fJAmeWPhj7z7NvEW/7mTB8IMcBkEeZKqFlJzSDuQhF6/zyBdoPcWultvOUZGzX3TjsbymqYNClhej/xnB8U+nrmaH8Fsgrf+OUq4kUF4gqkpNpS5d2zL11uEJp9DyqoWT8RgFb/R27dMXjFNQdJ/TjdPM0rrvilsTiwzxBcHf/LKw5INjfCLVw3s2jBj5yIB+oUYezeYALm7I0rsN0UcLWkdrnm8vP6F/rKd9C2H3zhfh8HvNI+wHydc31fcoJNICl8vKh5um0sryMwJZ3aZX7HYXy7KXnM2NYA1gI/MG1OXHuxRKI5p4scijC52F46ZJPFIhUpcX7eP56Bn8rs0+jbNJOwnbxmMOrfGD9xbGgshiwIc3ejh6OJyGHm7vnuf1tTQBztm9saDW4h3sXZx62LOnhvZYeJJVLf8ggj/pJ1cl5M7mTefxF4oB4F/WQguzhf+1A/pOz5kFtYsrc+Pn9Lb1XIYBg2KXuX8P60H964kH0g4fdQktYddrGCuWdH+TZYUXP/UWj9jLHdI9CQYi6cfMx3lPq/sC8FLFP/ZffZ3rpelisc1JGTlsYqkj5cV12fz9Goka/AXf8ZF4c85gULerhBLtasEijMXfKTimCFO/Db3bg7i4IHPkMSvfZRAvHpJ/Vs99PtuUOJXh+5t9+eXZ5PAZdNsqfPvsK6j8P0WapjIsD8ae+ST7uOxDEkHj9jGLhMu7GCkTiVXsVFD9P0fR9JiZ3P99RZWUQ47z5hYOwwa/YNy4rXZmUFTQuTPTZ5ssGLeUEatV/Mjc9y/XWSi0hpp5TVw2CoYZtrz4B9qIw/URwTyP65oL1OomGEa5QzjdoFpyZwv96f4aj14A5otNBdXGmB2bfG7rxQ4rEY6Yni6ggP9CqymUPb25HHeQ7M/DK3ewkaWUYsTff9HypCB1cdFAc9a3iZ7xqq/PJw2tXsNpseuPUDoO8lIxHE0Qi3TgA4fQjCjy75R33uOmbq5jN23aoKQHyjHsRb6Tw87r+oR8lZgP5ZGO1OSZkgQFN6IicS3ZtYwapT/FyEElAXFqgxZ1N3KeXrkAraJOseQAfqsUg/LQqb0X+4SZxzD+OByGyd5pxM/BubjpFNq8Es7sCQQQ5bsA16cpywTdCDB5a2L/YEXMT+aDercZ+xi6dgeLjHL1OmzpvWFi5Mz1Oi/QYtAf0Y5e9GVC3VwyAka8CieG8piOns/m9tsG9fZLwUyxKnqGSZNUJNgDS4VUfgOxxBspfWPHztIJx3Vsy6tpZp9USbbZlLEjkvO7T21AI/VGhGFic+SmM4A439kNxNSReFKFwm/xe67NZn+HP2uHPntvvncaPGi2rUcVosj07wE/RZ7on4AFtrhXvhkYeEjN5zxm258SOmJXr82UqoqyVnNsvpKUPPMhI7Gp9OTTFODLSJLYTmJ5Q1dQgGEyK1tTh2a/ufC+2vN041KQGL9BubWNgL5kjf9kSRfrtPFjgfkGfJJ9m+dhhgWyN9Z+CD95I8mWyu0RJD4noR84bNwtf4IOB4wRlqJpw7seStkWMQuLsaiw810DZJaYkSAhrigeLcSywam9rTFoBMKdiI2P0aXkOPsKHYSfQS8L2iZuVAdCjuCupOOEEnmZmWygdoOkT4yZDSK0cZ09tJ1DAOhr1iZMVAcdVxbce/jZJD8klH9UyQTYNTJ6PeQQjaEWpoY2NeRPzyNsvYOOMEP+UxsM+YLOGQ9xIXUFhcU1sMn0AZkkEZGWAWgQja5jKJ2qW5OmA85YvmYTrOhLSVe1O+zzxsvwUOBPRTc+ZCPLxuSZKMxhPxbdzYmWJPfzv6wcP7CxiQITThGSISwJ9yBjdyUgtyb2uGBCAfN9QKQKBpI8ksWCdJqlCY8IHcIhSQ/nHow78A6UZbbUZ6Xwnndh1OK5A0i7TTtTvyGvzHUhAFp7iMtWuch9cS9YbAYZN4P4Cz5wkV9PJaTqSnzGzuSU8FJCBs0bLYlc5WrnxsnxINT/rHey6eDkXTWRyRewKcT/z2I6OI8jbLFaw31wF5Np+s42SHqat0m4EpS5RywWt1ZgR/YwR+S0Dri9F/ljDEgMgjZK/lGYvp2yWZpT5jD9tSJHcfzOkUgwYNvqXx+GKnwN3hniYoVAf7rYX2htxYcNzvJ1wFFCslDI5WMY9NRnuVxr0zWSpf2krgLjumcO/rkMzWIhUzr4pBcD4VUVCwonhAkHa0xoFLyHK5UW6T8Cjgf2XYz+QWsE+JSKDbf7w4kOGnBhNrGIphHn9aRsd7SQDd65sRLzxbeePiVgRmRo9wJ729a0/thLfgb/p1tY5ewaz/QaB5OIQLJuaLluvBPpPK575Bp0Q+ufmERDPJnmHkREZjV2E3DtCw73c+pHZdV361U8CVcc/tBTzcHYPK6IzesObax7fRK5wvpwbHZwcqSCxScJCVgc1yoV2VxE/c6XVF+u/+QHfHqtLwSUzxGWIAHPgmDNup/MS980ru2Yf0GxbTH9lpPIgI5UhaPoJqmwibBsuPOlRrffvTO9bquwsWqve+4p+dzvpQO71E8Sl4HXw0+YdMU7xiWazdX+iE47RvjKXoa9fdoM3BDY0HEzK+JNfSmz6eDS6VpG/bX3m59YMzHh4pvMzptMBefFSlgvoT2skx1Yg16tKy5n9/ECbSi/WCutQiLr+iurieJCfz8dcxP3E4bhx7wVlwhX7FwJ0JJDMeSrICL1/orwToDU8P4NfmQp877zvlX6Fi/vvlQFVNWrQheXM51P9+R6qMRq7av1n1f6zav9Ztf+s2n9W7T+r9v/Jqk2Kt2qEp/oGw8UOzs+D2yjzNBWqctP5IROPn7bpT4zZ77cwbA5IO+queEhuillOL7y90jTP576a59A8foyshzxw15Qo5HrF3Wv2YgS5vEcxoBMvldi4LEOrOP9mftKf8j1hq2LE8Eh7UMdMrnGzhGE4mYelSJIAKm+y4HpWcqPIf8c+QA8Qi8zSCW538BhSWjomJAES4vjAyzO/eVFmzh3AVsrehEfGEEdThF/cs7655zULWaEuZ3OFeHWOmHWPFlA8QAw+BJiIb1ATMCUcZnv2fx17l+gk2ha23eh0G31Sl6LNHU5rXwvN+Mn8zk3poA+oq6PtKe7LxjeGxTdatPbbKJNxhLoe2t8y+aFrFRK56QHFJN5E17mZsFM+TUA0+6M6D/MB1Xmy+wW26hlzF8VCwCMrYP+350Mqxbyo4epTseRB8RgdR+ZDS6F/24Wf5vMNYhFidFiPkFA3YOghPRGYdOMYVIbjAJ2MeXVa944K0qYHPN+bdf3eDghPqBmZhKxnBf+emQJ6VWsg2Y+s4OuczGCYJU
Ljc3Bp6n/2S/mRv9ZrNTtfIc1AOaBgFZTH2Zo4NrLx/ffdqTVeioX25W4qykk2/129p98ZL6hqC7ZzJlikcJfgzlGjT/QnvpG/DPP5wwbZluHjsSikSdv6/rbObCwhXom/NzSR/cLQX/WmgGUyRCvt6UeNnPJ0bBDNNXhVNYEtG4nr+NPvcjMfejcQaJRpSiar0PEmUbtfSx4FV0txjuV+2XB7DmP8HSYdhrfa/9OzjZsvI+fnd31VFcy2f+k5Vr0nloaL0XqBCMnNzU5gfZSZMqf8qqyeGW39rQoC8zkelgaHw7NexJGQ5l041kP2bO6Xx+31qClOcEmYrdmvr2XR39Th9NJv1p8/gCuUpfOk6oCJLVr6plCwwG7V2ZH/VlvhV2zYf679v7k2qkmg7T2KYoiAvUbGi/29Ct1JCgOln3HVPwIqepb+7NZHjkhrP6JSaPmuUDcq30fVZhMjjYPalBQSy+7UlwTw7/OTN3xr736wSwCOUAhIu7DCkYbvX20XZS6hs04dWj2TfyUa8EID29TCg50ZFuK+b+J0+BbnHyXMnonBnQ3iGBdshmnLu3JourqAkHJca8EKPJHcewIhtJQwn1QBZP4gXXEar6FISI0Z8PxCIC6Xv04JBgYxO+eS0W7iuceL2DBbYmeJSLSjP40hQHy+AslZoIX9DEnA4lPhy3xNWPFRHt+S2g+5mxVHPqsSFO9gOAJPx61yHNQaciiQqzjqyCJx77ei4qlGkwHBMDj2lTCUcyqv0DwqYP/oYM0WZd+c88lLcDiElAItLlUi/RHf7c0ngIptdTLbQHxJ4lNJ4fJQQ2pa38KNc9zQon0UcnYkR9o33NYjV7agH/nbnwPY+0TcfkwAQXFwIX37Z3FB4XMN1W9xfd8or61ethGGPN0Yq771g3rZAYYojI6Ri20LUIbtMQdrMS+zVCHkt6zKJs1dQ3T5guu4WgHzO7ASrsq3947iNvLc62UQFDgnCVpP1RmHojUt+8ZpPlgmMJZkfHYbaT/BsL4pN3375g7vEheOdkDERY76OMjxb4EmFGaQuw3Mquvm6SxX8jfKpKRO8herpXXK0Jn1sAN/ocZkUMoupwc6rzk3dJu6CBGbMWI37vbHPEdE7viIYhQ2JS9KrbQgbalXsjgzbPNi+SL3F5n2By2vckcNzOIbzjwjasYAkeonLhvBA/8CoZpXuxaBbTgsUFOgTktieORZvv+JS9vmsekq7hiB8wDKDsJ8eGvEtpvDmvKalnQMM0bWpKB+EFr1CIjnnJ8pLmMSRcqTFL3wD1GJpW6U9WMDiTR27fZftF3V7Lag08vxkkwv7UqNlTYObSBtzwgTTG5jLx+AHgoC6l8V3PfjFdY2HsID7hqtopN1r5sZ2vLNgmbz55O1mews6wAKhoK9jvv8Fks4wYwBEnG59/iItO1ZdghqYjNj7qKWS9j4p4yJTXlv9SqSlA4dW5T00PZu0/yjL/Nil33CR2B56USnBI6dh6Zxk/5ZuJhDN57Tm+VAydkTg5DP7kNkpXIu6dVLA3VbIh1+LW30Ix7hgnkFw7b69arKYlHnPhktNgv5Jwogg3uM3Uxuf3qrMhfdyRuMH1Uit8CCX2dP4PHeis+UD2vib848hTe87HrPzwKBFahGJr5B61I6gZYz59SWaEiHqa9Xy7iMFRAsasNToFPP4dEaCmDYrvnauq8p7zdN1mXf7heMLDIC1meO7NBEBcY9VIjzji/6fCo8FrqdqaXXphHXxu/+fpEGwgLqBSQS6YsQ+t5C3F9BXZVxo4FgGY6LAv+G6x6OH5R2EhfT/fK9AC1uPCT/m7GARiEMFvrt8q0vJWFP+IQ/ZcWuJrwDoxZpsuJ1aOEqNvkTtAYDXskN1T0UmT6i8spF7fl6c3jmf49cEs4DXp/gpMdCFFktUPiDAjlAqDPbqQuH0fHh248DdyfSh1ylFER+Y48wpFGP+RRTcwFJS6aqJENboDxN32vnz0MVNvjDGtePXOlOpfcQjSIsWB5e/9Pth+ohsHEbSMBFDD20w6Pq8dkY5CLrCod0GHCmnVkifjSB0vfpcLsDzKn6LRCoFttRIQ31bJAFxTapm75uD+6V15u3uuEzjXH2RebeN7Yoc5U1hdFN7qNTN8iCplu6DoqE7V9w8fDIY50XYFt2bvFFKavzvizgGCAqDvfh403ka4peNLY234CKbzY7y5z+9iJ3XzhyCZEm6EiGrXy/p33qnvmSrBM0XCp97+FwQ5XULNwwJw817coJ+2Z5XaALljW44TlM402KjiDt+E0lDvwcu1HHSk9uYHKJ4SI2Wx1yNjjGiM/Z5LtJ4ZZBAziKr8CLsGj0TAM3zYs1aXLOw83fakDSBhZU8mGIKk/qAZcpX6SehY+D7M/71o7TBnX2OlrNQvqLI4sLpcqOFmkGeFOAriYHcF4cihxihH72eCg/VH3a504zd3VxbCDVgWJPJCzWX13dIIbMW9UgokBMyjux4R80KAoanVKZi8lXZ4Y549cFkVuyGNP7SE4B/X2AeIhH5mLqRq4eN1FBckh+gzL4zSknqjxAWGiOFUVKcSyc0q4Im2kjLs1RL9IGfWa+PcEBI+j1KQwshR5jc3bI84BMRe/cb4VxMCIO4XfDl8TQTKVvwoO/Gd9qi68mSzH1OEJ8/RiEDF7/oGFLQsUsRbYUlIcGioE7CvPmIsInt8cSph0UFsyqpsVmUjWoJuLt2yEVk+yLjb200A8Js+baDFb7oypulIp7pU/ZiBe5EtClnvg+FxP0JG73ZOEp6AJkuIuL3XLP90B8JKuf9rskK1gvMFQU3IV+7m8I+FZaD4gd4Ibk0GUIHVm66eUFtNwTuEcF5SS3itkc/kQ/wA2EQlcAnHHjf/3vxtRct9FAbvokfSaOSFmC1IT2db+LV/TZFFQy8/Q156bZQNhqY4x7XHO0vkUbb+y6KMfez9HM6Yx+TDhkujhasYas6YwwgROC8ijyXjOYhiOdWrjHm+dZ/Qj+yG4hFiA5cdZ/XJWBS3gSIYCeXtzn2HL+WAC0m7IOjeTX4kOtt8KblspzB5FnTQ8YDmHOuu3FmRTc9vlncZ6F+tpPeQKbluruaV3daE9kK1u6UdMyHeqGQgMAJobrdtAd+iZ2buvGVVD6mb/KbLDNDraEREawH4nuDyVX6aDm4FWp95ZytCeHOcVr26GOQA4XP9w0xdqOPuYXn5lqm4ff6gjSXnZzIlrugDivhLri3GZDGUl0LKtUA4ci+NJvBII4CQsKg4HV1HSYpofZNZPB6LBvCPNBQF8ll4cqrNSnYqB1J/GO4z96yPKeVdheq50OXesnnTITHW3OIN663giIMPNQgJ8Yk9N7CYSWQeK6kslVpEMoga1xGFVVoVXTOslAjngWujVVLOvcs0KLTKWxLVj4M0mZs1w2qljYoYtmUy63B3q5KSJJqo0BMBiRXOizG0RuxJN1j/h6Nd8qU+tZri/LBPmDjG+eUerDMkmNxhZvuFsZimMXOIu4iAtBUrWvwuI1EbntGZ2+cCcJcN9SstIcmsc/gogji7AAm0ojgDxfgGdfk
J+AD2w8eX7I8vUoMSF1x3PAO6gqPSIgRJvJYdt+2i5BkTHOyTJp3afxv76JoJz0fq4OygO59wqklwbtcEfztoqry4wMTqym701VFNnh6NGmbsH8qYRT0WLo01rLloBZ7WavGWDF7BSH1WxaL+wwZvOuOOs1roQbD475642SYBImwVxhDzbpvgCK3JqGQsU0k9QSP/XidTQisPUZEj7txoSrJm+OTuQ36Il5IN6I6z4XkLXo6x4LHptjfmktBQT5p38fYDg7C12a+PhAZvtcWdBLLzya1BnAY98yVV3d9Yx8+ZveLOC+AQt9T3qLTQPTg/g6zDSqAPa2UavBqVikWXzeRiBJpqLug+0+FGSWCxgUgayGYYb1jhA2Ra8rnuh3XY4mlJfcT6NkNxHaoudTwicNwoIb2yJjDhVHI0mjs9qbAaJHkxfspRjsh3N4mfL1tP63vhHdqt400lsLcJ59RgqOgi7T6mFn7GlSjzUd6wEbn858kdkflf9MIO9z6i7w+W7GYAyETWIV4pvK6PfFS0tuFVN8u2k6pw/JNRA5r2mU9ybKXc6be7dhFfeG++/smV9ZQpDKYPETXSs9mHoVPeDEj7kJF494DMRKpOPMybVUaFLizNLzjJd5RqsleAJ094RuWaKVAFKlmOfkrXBA7JaDYO+VW8YD2Zj9areVdOjK7VfWOatnmZAQFHy26KQ2KYZLGgOifSrr+6fENF3qWq5bghX0eeY2imsQcnmC0iIkHtkVWUOkNcV0boAgv9zLrKCCuQZMjhP5vCMUdmAtM7QkDCJH4aCPDubpPB8J4Dy9l2LjHJrLI5LgJgnTHs35lBIlJXGPOCYJvTepXmajPqdxmz5z/y1AAKbaJ7J4AJKD5kOJnLVFp+Yp0se8XIGf90CpB51nCy8QyxF0rMCPggdJluOo6wAWdx7WTQKnmaGzEH6S3Fy2MQlOB1bappmCk9lF8dj3eddFJZ4lETq6fUPuCp6NNle7w0cBW3YB4ZuCVBEx+V42Kz8CaE3bex/WNidhEokSDlrxXKF82+n+T33P7gTBnwRQgj2VEF8TBZrdSqvBUWAVRVR/JeYqxcj0QzDw7qwO/e34rWk16KnHwDSK03UruDmz2cfcJtyZLxsfuOS3lnfL7hgJ3zBbfIeXnauj7rahTS/wlF7qjOpcg9Qr5xc2muFeJtSHRQODtkf61Gf0XZiKKVnRmTfp6h2uzu4JTcd6T8flpJurFfnypBydgiCxKhVtBGo6KPOdwkQsxGqxpMLdJcicPl5RdS8ztiYN5nt2buOPj4Jjz1p+BysjDBn2ahbM9KQFcuN12Ebm7MDoUL6y7AHMGpTag0Lnnboc0wKEQhBtj4GFOiujOQEabowjy0NAUqXsQWyQtGD/v/wOnXu5YSmQNPV62H/aDnOPzszPgtO6JRhI7/Tw5JZ3lYkKAqtGORkz69XXmvHeM9eQaBpYarAXTmzoTIZRJCVUiGoEfoQE248vmYp938iGWv0sj4lS1WZHcRlozsjcJAVu1FxATuP8gzNWSWcjohoendJ8XJ/YQHOsFpPtajZrkjYljLANF74R2lbGfYOeCjgkPDBtDTamDfVypCLSEi2bXFjHN1Gat8mNVZ+FVJH5mXemNFdgO9E9dzNeYKfKn6s95PBNwQHwcAM6MxEgV2BhZRMTAbYa5pWXKb1YKD/73IKzi+AnzqRvaSveLGlcg/tNoWhMVcqDnSR/5UicC4KFjnkJBuW/l2NLWTqe3uZAbAuIh7qiJqij1X9ARrg/zBS+ceBodJ8NdCZjjq6dn3PPt5I4gakCpZMZHZsVHMwT3XLfE6KibkjXn3el2v+I5068ihg33kQZ71hbEnFoGyB6kRzn9cheaMEAUGA1oaRTpY35UhWCWMblWzHFkXES63WR0QHqZNBcRNE01sKUzFvNXarVuQfBr/SQTa1iX/HajVYKXzs14d/+LwX3Ji8f5cIPVPoO+wJGkMFz7SU35vcVLcPSr20Gc+CzKLh3aRFe8BU5n5VWWqJkP7WIecV/2gZzHM26MoHXjzObr5PYigSpGxJ/+JmiS/HOBB9H2h6lO8YknunnBdqPdliqXmG/Yv5rSEa1vniduHl1PxW+iGtc44EpCsZWZ9kwazX4gp/KLSZIwkxn/OwyrA70hujEpYMdMjclFy/qGFRFZPpXlb9zRT6fGV+aH8RfWTN80PSIZzYdVxiq7hvZPOkIGzcJnmgZP1SAiogPxcxp7xye6MOrPCFwYt1cG/ObpVhOe7LtID3ATvw23cuH9kDJb2mjkOvBznEXu8tpYPWAjuiCOMTfdXojkYOdxlukvImI7aNXeREv5zUPo7Nfu1WyKnxCp8+AF5f5t8T3JmwNlsqYYw/Pl+/Y4Xsyzx4rB05DAfOSWPae0aGJXq8m9AbzDUQKMEAQDNgeWbomSErTBA7R21VgCY4ieQuMzbnS2mp3LedZorfcjOfsdXz4j5UNNG96lGVTGs0cg+DcBL7znzHFq2gPE8fNLRaZaJbFWICZa59eVhfcxCj+oDwt8GwE7J4VlI/BQvOykMFFtveLyUNoF6j9G62QHsuNp2RwDbC7cB/BVLQ5KCGK7dArs8zcufnu56W+GUPDTuQt9MT97M8TC/clX9R7XJDMjZLKlWGB44/u1OABMSJGRjKd50h1qhr/7WdGsRr6rTjezCivDqIUIXktN+oFZhvkEOpnn6ZMDaGvxKBHXcsKfnmciwFOa5Hhgsq8rgfDDU/jmQwmzqyWEpVAcFu6es8j0HOwbbQKAH2pXvT1voVom0n18EK4CzLDAIEbv9YHq0ExQFyXc4JziG09HaMz6wUw7kKz23bol4c15JIYArVuGInh5Etzg30SSXPfHyEC4I14tc+9HMlZxifpgkWz2b40RrmZRV5cYB6ZUuXUQX/M1S5d9OJPdPYM4kg04nsO+uCPXY+jrEgGshL9TGO8QjCosLGGNJ+y3XTpVputMBqlVNfEH+r54rs2OQybf2AvWVCMN9KXa/8c24FsZcc/byaQJXMW4+1HXg28L8b2pgG6KTUbc+LAxtm6lxpG+IOorekWczXSb2cwribcbtzzLVJcFg7PFh72jKPMoReA0Xp3j9D52pvIFxdbuJi8yT2FQH0lphBmdXnpZUlUk2FFZE/njAMk3gh3vLltCK/f7/AWngWkPx54VSRCtpAW+4luCmkB/BPFr5tubMs2fDxKOMUUetpq/hhcJYEwUdUmeRzJmxmLexOiqVjdAqUr1o9ArsyX2ajL4MVnJm/+UxVU67ipNfHCcbAN3oFhpJahcJ5KnATrIq+UIXMDCPwyuuUQkX67Y+wQqzWUD6A6OZLllcv6Ps3Ts7btpOFXE4oo/eYDB5sSvO88U7KaSxjkD3AMCvPowy9TAuJipDZNQFLc3YQbhaOR6lVND4Z9GiFgEfO2ga1Mc2PnV1Fl6wjkT/DNka3OLDdST9MBJrlkReePIvHC8sLHx6qln0cMrWOVlh7OKjifYTrxSBWBfHbISxXRJJ1WXkgtclOzyqefqMXkdQgqL38QEaNihhbIHTDlFj+Iy/tk
bY+sZjwVsDq+BJg3I+d5PTjpZpDlPJ+CFH7AwF7qc4sugnhkNTpM0domqG29NXMZuthUN63Usw7drYjoqQ4kiugjRbub/06tHK1ZGhzpd/h5t8Hj3SqPB5vCerv35EpIHOWHF1w8i8EyaLxLjZetPH59RYVgBPMbYlVhkqfUkXinxjku2zwMedCNcmv5X1+B9Hihow2p76fpfjZ8QK3BSfUaJaQghDKffmPnX27bP/GNBl/3rrYfJQ3zdbWfdZGPdsLSj+tUlvpNDniaEgI+i+378evrKZ4Prnyuhs8ZN2QiVOfMU/o+jY0YbM1NHRXor2PkS+3DhzHlxjfwEWWS9jzYDdClt+Z/vefHxsz1MUnSVtCPOI6Wz+P+w4img7+mWvx8Pv+FPr9/b3lyn1doy6YlO8AVBIAvcGmIpqxb/selW1WhbHvwWd9my3QDIeifGzAU+nPL+ed7Gvvn+71Ml+LPNQom/lwrsvJT/PPYW4/9uRjNfy58/vvZwBL+5xPBRjzYrGn++9up75f/8TPQWKhQ+jQDv/G/AA==</diagram></mxfile>
2205.08096/main_diagram/main_diagram.pdf ADDED
Binary file (8.17 kB). View file
 
2205.08096/paper_text/intro_method.md ADDED
@@ -0,0 +1,76 @@
1
+ # Introduction
2
+
3
+ Machine learning (ML) models are being widely deployed for various applications across different organizations. These models are often trained with large-scale user data. Modern data regulatory frameworks such as the European Union's GDPR (Voigt and Von dem Bussche 2017) and the California Consumer Privacy Act (CCPA) (Goldman 2020) provide citizens with the *right to be forgotten*, which mandates deletion-upon-request of user data. The regulations also require that user consent be obtained prior to data collection. This consent for the use of an individual's data in these ML models may be withdrawn at any point in time. Thus,
4
+
5
+ Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
6
+
7
+ a request for data deletion can be made to the ML model owner. The owner company (of the ML model) is legally obligated to remove the models/algorithms derived from using that particular data. As the ML models usually memorize the training samples (Feldman 2020; Carlini et al. 2019), the company either needs to retrain the model from scratch by excluding the requested data or somehow erase the user's information completely from the ML model parameters. The algorithms supporting such information removal are known as *machine unlearning* methods. Machine unlearning also offers a framework to prove data removal from the updated ML model.
8
+
9
+ The unlearning methods can be practically applied in the following ways: (i) forgetting a single class or multiple classes of data (Tarun et al. 2021), (ii) forgetting a cohort of data from a single class (Golatkar, Achille, and Soatto 2020a,b), (iii) forgetting a random subset of data from multiple classes (Golatkar et al. 2021). In this paper, we investigate the utility of a teacher-student framework with knowledge distillation to develop a robust unlearning method that supports all three modes, i.e., single/multiple class-level, sub-class-level, and random subset-level unlearning. Another important question we raise is how well the unlearned model has generalized the forgetting. Recent studies suggest that unlearning methods may lead to privacy leakage in the models (Chen et al. 2021). Therefore, it is important to validate whether the unlearned models are susceptible to privacy attacks such as membership inference attacks. Moreover, the trade-off between the amount of unlearning and privacy exposure should also be investigated for better decision-making on the part of the model owner. We propose a new metric to evaluate the generalization ability of the unlearning method.
10
+
11
+ The existing unlearning methods for deep networks put several constraints over the training procedure. For example, (Golatkar et al. 2021) train an additional mixed-linear model along with the actual model which is used in their unlearning method. Similarly, (Golatkar, Achille, and Soatto 2020a,b) strictly require SGD to be used in optimization during model training. These restrictions and the need for other prior information make these methods less practical for real-world applications. We present a method that does not require any prior information about the training procedure. We
12
+
13
+ <sup>\*</sup>These authors contributed equally.
14
+
15
+ <sup>&</sup>lt;sup>†</sup>Work performed while at the School of Computing, National University of Singapore
16
+
17
+ <sup>&</sup>lt;sup>‡</sup>Corresponding author
18
+
19
+ do not train any extra models to assist in the unlearning. Furthermore, we aim to keep the unlearning process efficient and fast in comparison to the high computational costs of the existing methods.
20
+
21
+ We make the following key contributions:
22
+
23
+ - We present a teacher-student framework consisting of competent and incompetent teachers. The selective knowledge transfer to the student results in the unlearned model. The method works for both single-class and multiple-class unlearning, and it also works effectively for multiple-class random-subset forgetting.
27
+ - We propose a new retrained-model-free evaluation metric called the zero retrain forgetting (ZRF) metric to robustly evaluate the unlearning method. It also helps in assessing how well the unlearned model generalizes on the forget data.
28
+ - Our method works on different modalities of deep networks such as CNNs, vision transformers, and LSTMs. Unlike the existing methods, our method does not put any constraints on the training procedure. We also demonstrate the wide applicability of our method by conducting experiments in different domains of multimedia applications, including image classification, human activity recognition, and epileptic seizure detection.
29
+
30
+ # Method
31
+
32
+ Let the complete (multimedia) dataset be $D_c$ = $\{(x_i,y_i)\}_{i=1}^n$ with $n$ samples, where $x_i$ is the $i^{th}$ sample and $y_i$ is the corresponding class label. The set of samples to forget is denoted as $D_f$. In class-level unlearning, $D_f$ corresponds to all the data samples present in a single class or in multiple classes. In random-subset unlearning, $D_f$ may consist of a random subset of data samples from either a single class or multiple classes. The information exclusive to these data points needs to be removed from the model. The set of remaining samples to be retained is denoted by $D_r$; the information about these samples is to be kept unchanged in the model. $D_f$ and $D_r$ together represent the whole training set and are mutually exclusive, i.e., $D_r \cup D_f = D_c$ and $D_r \cap D_f = \emptyset$. Each data point is assigned an unlearning label $l_u$, which is 1 if the sample belongs to $D_f$ and 0 if it belongs to $D_r$. The subset used for unlearning is $\{(x_i, l_{u_i})\}_{i=1}^p$, where $p$ is the total number of samples and $l_{u_i}$ is the unlearning label corresponding to each sample $x_i$.
33
+
34
+ The model trained from scratch without observing the forget samples is called the retrained model or the gold model in this paper. In the proposed teacher-student framework, the competent teacher is the fully trained model, i.e., the original model. The competent teacher has observed and learned from the complete data $D_c$. Let $T_s(x;\theta)$ denote the competent/smart teacher with parameters $\theta$; it takes $x$ as input and outputs the probabilities $t_s$. The incompetent teacher is a randomly initialized model. Let $T_d(x;\phi)$ be the incompetent/dumb teacher with parameters $\phi$ and output probabilities $t_d$. The student $S(x;\theta)$ is a model initialized with parameters $\theta$, i.e., the same as the competent teacher; it returns the output probabilities $s$. Note that the student is initialized with all the information present in the original model $(\theta)$. The incompetent teacher is used to remove the requested information (about the forget data $D_f$) from this model. The Kullback-Leibler (KL) divergence (Kullback and Leibler 1951) is used as a measure of similarity between two probability distributions. For two distributions $p(x)$ and $q(x)$, the KL-divergence is defined by $\mathcal{KL}(p(x)\,\|\,q(x)) = \sum_{x} p(x)\log\left(p(x)/q(x)\right)$.
35
+
36
+ We aim to remove the information about the requested data points by using two teachers (competent and incompetent) and one student. The student is initialized with knowledge about the complete data, i.e., the parameters of the fully trained model. The idea is to *selectively remove the information* about the forget samples from this model. At the same time, the information pertaining to the retain set should not be disturbed. Thus, the unlearning objective is to remove the information about $D_f$ while retaining the information about $D_r$. We achieve this by using a pair of teachers, competent/smart ($T_s$) and incompetent/dumb ($T_d$), to manipulate the student ($S$) as depicted in Figure 1. The bad knowledge about $D_f$ from the incompetent teacher $T_d$ is passed on to the student, which helps the student forget the $D_f$ samples. Such an approach induces random knowledge about the forget set in the student instead of driving its prediction accuracy on those samples to zero. This serves as a protection against the risk of information exposure about the samples to forget. The bad (random) inputs from $T_d$ may invariably corrupt some of the information about the retain set $D_r$ in the student. Therefore, we also selectively borrow correct knowledge related to $D_r$ from the competent teacher $T_s$. In this manner, the incompetent and competent teachers help the student forget and retain the corresponding information, respectively.
37
+
38
+ For a student $S$, incompetent/dumb teacher $T_d$, and competent/smart teacher $T_s$, we define the KL-divergence between $T_d$ and $S$ in Eq. 1.
39
+
40
+ $$\mathcal{KL}(T_d(x)\,\|\,S(x)) = \sum_i t_d^{(i)} \log(t_d^{(i)}/s^{(i)})$$
41
+ (1)
42
+
43
+ where $i$ indexes the data classes. Similarly, the KL-divergence between the fully trained competent teacher $T_s$ and the student $S$ is given in Eq. 2.
44
+
45
+ $$\mathcal{KL}(T_s(x)\,\|\,S(x)) = \sum_i t_s^{(i)} \log(t_s^{(i)}/s^{(i)})$$
46
+ (2)
47
+
48
+ The unlearning objective can be formulated as in Eq. 3.
49
+
50
+ $$L(x, l_u) = (1 - l_u)\cdot\mathcal{KL}(T_s(x)\,\|\,S(x)) + l_u\cdot\mathcal{KL}(T_d(x)\,\|\,S(x))$$
52
+ (3)
53
+
54
+ where $l_u$ is the unlearning label and $x$ is a data sample. The data samples used by the proposed unlearning method consist of all the samples from $D_f$ and a small subset of the samples of $D_r$. The student is then trained to optimize the loss function $L$ over all these samples. The intuition behind optimizing $L$ is that we selectively transfer bad knowledge about the forget data $D_f$ from $T_d$ by minimizing the KL-divergence between $S$ and $T_d$, while the accurate knowledge corresponding to $D_r$ is fed from $T_s$ by minimizing the KL-divergence between $S$ and $T_s$. The student learns to mimic $T_d$ for $D_f$, thus removing information exclusively pertaining to those samples while retaining all the generic information that can be obtained from other samples of the same class.
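+
+ To make the selective knowledge transfer concrete, the following is a minimal PyTorch sketch of the loss in Eq. 3 (not the authors' released code); the softmax temperature `T` and the batch layout are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def unlearning_loss(student_logits, smart_logits, dumb_logits, l_u, T=1.0):
+     # l_u is 1 for forget samples and 0 for retain samples (shape: [batch])
+     log_s = F.log_softmax(student_logits / T, dim=1)
+     t_s = F.softmax(smart_logits / T, dim=1)   # competent teacher probs
+     t_d = F.softmax(dumb_logits / T, dim=1)    # incompetent teacher probs
+     # F.kl_div(log_s, p) computes KL(p || softmax(student)) elementwise
+     kl_s = F.kl_div(log_s, t_s, reduction="none").sum(dim=1)
+     kl_d = F.kl_div(log_s, t_d, reduction="none").sum(dim=1)
+     return ((1.0 - l_u) * kl_s + l_u * kl_d).mean()
+ ```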
55
+
56
+ ![](_page_2_Picture_10.jpeg)
57
+
58
+ Figure 1: The proposed competent and incompetent teachers based framework for unlearning
59
+
60
+ The effectiveness of an unlearning method is evaluated using several metrics in the literature. Some frequently used metrics are 'accuracy on the forget set and retain set' (Golatkar, Achille, and Soatto 2020a; Tarun et al. 2021; Golatkar et al. 2021; Chundawat et al. 2023), relearn time (Tarun et al. 2021), membership inference attacks (Golatkar et al. 2021; Graves, Nagisetty, and Ganesh 2021), activation distance (Golatkar, Achille, and Soatto 2020a; Golatkar et al. 2021), Anamnesis Index (Chundawat et al. 2023), and layer-wise distance (Tarun et al. 2021). Excluding the forget and retain set accuracy, all of the remaining metrics in the literature *require a retrained model*, i.e., training a model from scratch without using the forget set. These metrics can only be interpreted with reference to such a retrained model. Such dependency on the retrained model for unlearning evaluation leads to higher time and computational costs. Simply measuring the performance on $D_f$ and $D_r$ does not reveal whether the information is actually removed from the network weights, so it is not a comprehensive measure of unlearning.
61
+
62
+ We propose a novel 'Zero Retrain Forgetting Metric' (ZRF) to enable evaluation of unlearning methods *free from dependence on the retrained model*. It measures the randomness in the model's predictions by comparing them with those of the incompetent teacher $T_d$. We calculate the Jensen–Shannon (JS) divergence (Lin 1991) between an unlearned model $M$ and the incompetent teacher $T_d$ as below.
63
+
64
+ $$\mathcal{JS}(M(x), T_d(x)) = \tfrac{1}{2}\,\mathcal{KL}(M(x)\,\|\,m) + \tfrac{1}{2}\,\mathcal{KL}(T_d(x)\,\|\,m)$$
65
+ (4)
66
+
67
+ where $m = \frac{M(x)+T_d(x)}{2}$. The ZRF metric is defined as
68
+
69
+ $$\mathcal{ZRF} = 1 - \frac{1}{n_f} \sum_{i=0}^{n_f} \mathcal{JS}(M(x_i), T_d(x_i))$$
70
+ (5)
71
+
72
+ where $x_i$ is the $i^{th}$ sample from $D_f$, with $n_f$ samples in total. The ZRF compares the output distribution for the forget set in the unlearned model with the output of a randomly initialized model, which is our incompetent teacher in most cases. The ZRF score lies between 0 and 1. The score will be close to 1 if the model's behaviour is completely random for the forget samples, and close to 0 if the model shows some specific pattern.
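+
+ As a concrete illustration, here is a minimal sketch of how the ZRF score of Eq. 5 could be computed, assuming softmax models and a standard data loader; all names are illustrative, not the authors' code.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def js_div(p, q, eps=1e-12):
+     # Jensen-Shannon divergence between batches of probability vectors
+     m = 0.5 * (p + q)
+     kl_pm = (p * ((p + eps) / (m + eps)).log()).sum(dim=1)
+     kl_qm = (q * ((q + eps) / (m + eps)).log()).sum(dim=1)
+     return 0.5 * kl_pm + 0.5 * kl_qm
+
+ @torch.no_grad()
+ def zrf(model, dumb_teacher, forget_loader, device="cpu"):
+     scores = []
+     for x, _ in forget_loader:
+         x = x.to(device)
+         p = F.softmax(model(x), dim=1)
+         q = F.softmax(dumb_teacher(x), dim=1)
+         scores.append(js_div(p, q))
+     return 1.0 - torch.cat(scores).mean().item()  # Eq. 5
+ ```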
73
+
74
+ What is an ideal ZRF score? Suppose there is a class *aeroplanes* that contains images of *Boeing aircraft* along with other aircraft models in the training set. If we unlearn *Boeing aircraft*, we do not expect the model to now classify them as *animals*, *vegetables* or any other totally unrelated class. We still expect most of these unlearned images to be classified as aeroplanes. This comes from the intuition that the model must have been designed and trained with generalization in mind. An unlearning method that makes the performance much worse than the generalization error for *aeroplanes* is not actually unlearning; it is just teaching the model to be consistently incorrect when it sees a *Boeing aeroplane*. The ZRF score will be 0 when the model almost always classifies a *Boeing aircraft* as an *animal* or some other totally different class. The ZRF will be 1 if the model assigns the same random probability to every class for *Boeing aircraft* images. Neither extreme (∼ 0 or ∼ 1) is a desirable outcome. We expect the unlearned model to have a generalization performance similar to that of a model trained without the *Boeing aircraft*; it will have somewhat random predicted logits, since the *Boeing aircraft* class was not overfitted during training.
75
+
76
+ An ideal value of the ZRF score depends on the model, the dataset, and the forget set. Ideally, the optimal ZRF value is the one a model trained without the forget set would have. But in practical scenarios we do not have access to the retrained model, so a good proxy for the ideal ZRF value is the ZRF value obtained on a test set. The test set, by definition, is a set about which the model has never learned anything specific; equivalently, it is a *set* that the model has unlearned perfectly.
2205.09853/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2022-05-19T19:37:02.195Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.64 Safari/537.36" etag="tnSJ74WO51ewtBia3Hs7" version="18.0.6" type="google"><diagram id="C5RBs43oDa-KdzZeNtuy" name="Page-1">7T1bd5u4ur/Ga8082AtJXB+btOl0Lt3dTed07/OyF7FxwsQx3rbTpvNwfvtBGGGQPkBgSeDWyWpjYxBY3/0+IddPL2+34ebhj2QRrSbYWrxMyOsJxn7gpP/TA98OBxzHPxy438aLwyF0PHAb/x3lB6386HO8iHaVE/dJstrHm+rBebJeR/N95Vi43SZfq6ctk1X1rpvwPr+jdTxwOw9XkXDa53ixf8iPIjc4fvBLFN8/5Lf2sXf44ClkJ+cL7x7CRfK1dIi8mZDrbZLsD6+eXq6jFd07ti+f3337vPr90X376z93/w3/vPrt0/v/mR4Wu+lySfEVttF633vpL850/ftft+9//fU/f+CpjTz3l3iK7MPaX8LVc75h+Zfdf2M7uE2e14uIrmJNyNXXh3gf3W7COf30a4oy6bGH/dMqfYfSl8tkvb8Jn+IVRZffw32SH7tNnrfZJQ/7fQp57JBX6X/pw9L/6Am72X2S3K+icBPvZvPkKftgvstOvVkeVkxfFmvGq9V1skq22TOSpUN/0+O7/TZ5jEqfuNlP/hSl44ef9Ljk5uZA+BJt99FLCbXyzX4bJU/Rfps+o5V/ShgF5ISDbGeW09LXEiK6+VkPJRzExGanhjn63xfrH0GcvsihDEM8fvjt3c5+s7rbb998e/vx8a93UTT1ve8M4nVwrccEbRAvAJyDHLvBzBZAji0I5Ng+HeAgiQe+YoAfNpYxU5SD5jP7MpieHc4fnrfRW3rpa+944NX6Prul7YgAfePSX7r+Y7Sfs6UX4e4hezD6ZhXeRaurcP54nz0wu3idrCOTiEjXPMg5hEQEvPHob/79bvM9Dp+zS5MUmZarTIh8iXfxHf3sap38Tr9XvvmLeJuKwThZZ19sS/f4KpeNYXZ+thPhKl3lFROPGVTW4eZT8iGJM5ymK22ibZyiUESfi64Z0s3/cDx49Vd8fwAHPXsXP21W8TKeh/nN2e7/Y7ncRXTNqZsfOYA1e/t3fP93eF+cgiG6s1/TX41051jBzOMozw7kWC3xTie7q+mNf/MevY4/+esPH959/DZFiyl2L2R3Ibvvmeyw7wKyDYs0Fli6aEzUZV4/b+P1PTUytmG8zl5yRJd+u32VssJVfE+3fp5uUAYhugcpRFav8g+e4sUiw5lttIv/zvEhgzWFe/atnKuJQzebYtwux1IBKoxiShRYIiIeuSGighQdbcSW7xnWpi/ZWECgANKNFOAPqBo5nXRh1M6jO3DKt+E2fErWCwD4rUiTQn/hRP7C1kjdiHi8NmtBBoyLAKEa6AJYJ112VABb+vNoPtcIMM8dH7w8SwBPtLiPGGehcja5T9bh6s3x6NXx6O9JsskB9Ve033/L9zyX6iUwRi/x/l+UJ8+c/N2/cw5NX79+Kb/5xt6s02/4r/Kbfx9XoG+Pl2Xv2HWySMGx66vsN0OWcLvPdIgjrmTHbuLVqrjjgp0xX4W7XTw/HMxPEXEuvYGV/VAhVdEqu6HbjsmROojaOQWmD3wfMfTBL/FvL174+PHzn/9+ebl9+9sjngbkcCKFdyP6bqNVqvt8qbrrIEzMLk23JfxWOiGXwMeVD6rYUUlh5i8jCkzKWN16PvItjgoOT1BztWdxRMiuLx74sHH5dRx5FbtyAsWJDLFMcTm+VVHkBIJDZXIriO8HI7ihSAwhXwuNSaB1anQbRWvmtewkSE7H8u8VY2F5yYgZJl8DWA5zNHukgsTrJEiIe5ogYddroDiQuTB/tGGK64GYphxQR25wZACFmtnADfoTDgwYLCkemHdEt3RAPkcZlhymdqVA7jYo4OKd/GMRp+n8KgGqohoUSLh9DdBRmwRBVQmSn3EUH2OgK9RCV0deYQduRQ+1rGaOQd+UPLI9qLSd+AJDxBfYs4D4BAX5T5VIfJRuRlD6qa5/YDbaZAhBQ+B+HxnSl7ebopH+MgR0nBNbFo2xGTTmeDsfMaiRIOIy3izwkOswbCfcqrPAto4/jlFiwKLv8kO4ow+/TA2EKIV8+qXdFY1N3KUv7vcZ2A8HKMbQE8Mcqu5/n2nW0VVhWxRH2JUbdunkgG/8iulh4S5325r7HumX3YZ+MD3EOVLEthDavBwu4x7jLoNNirn5X5L/fcj/fq19zFO8unWuV8EguvMd28nCf+V4J33zIdynsuEQkMOWPVpS75KdwmlpE5nYi60g9tIoIEvkcPO8f95GJxHEp/hw4fsoxSzrY/IUrkWkXI6LNpZD0kYRRxJow8d3xHV/DNpAjjUq4iBEII6P0e4h3EQC8MVwtmzMWT7sDaFUVU0bIgmku9beJfoZ8I58Cwk4AeZuNug+p+GEI+DEdbKeh/tonf77wfFCGx5YfBQcMX9kCx4oyCuDPQxMj7wYVcMZVTCBBpJGlfrQ6GkYRdweGGU6aIgqGNQtBKMRh/SH4aVxDUz3R8wHM3D4xObj6shptOT5822nS/iEcJY/u9pU8ISIxs37JN7RDZs/bzMcOFo5bvhEBef6brcpGxz1tkWOhZl1UYouihbGS5NFs3tWcQu2WswO7Ev3jAHLKrutMduKPA5hVEUoNas8iDsErkdCVo00PhnXwZuAbT7nPgCqm0gRxjRiN7GFL3bTQHYTIZjTl90ASkIkgOGkoOIN9jShgcMQto8rGk/gk0adR1H6h7H4HRBNMx8rB3NHiGwqFRPYA2tJxOfdDl5zkJsEfJ6IZyDKbTO+UHJFCGqNNekdvGjWgRTpOeIyVrOaUuMt1qi9MOVlEM+whBIzIs/wPWWCjH+ySrbsDava1yn2+Pgl4D9GlmFVSKzspyZIVGdvxHLGQHF5Y8Clckqj3SBLm51py66hrcn1zeTKaaHp3YZ+DYm7kLq71Owy5SzZ2u0Rp7L9FNLvvIq+ZK05BJNLwd70jKTtW/YRsgG/V365DJYevVTkl8QlAVmMi19qMwA4zwsUScNGGaEj+mIa4C5diA22GGloUWHKyjPZ4sTxCB8dQViAN4GiI74uaJNhbb1L2nIdYGQzJx09+f6C6eT4fGhPX6kW6JpvCeSpLtWyJvqjLhVleyjfloRrZCjctk3hNrY53ObFq+ZkR0f0EOTWh2E1tt7J0E13vSiKqnUHW+gvIPqJfUB1INoURdFi/pDsYtqXJaQXR0930WJx6C5SMl8HQd6oHXnvapG3LfQho9KqCoIY7SBnWj0WUJw1nWxDcV2hEKDhyc31adbQUO2fRK/kob5Fp7Xj89YO289KByQRnkgXPFni2cXa6Seu9NsYAso
EnOVbo4j1WUpzUosrFrC8mu/PjnsEY2EeQPcrs8wDqMC4CIP+8MQDC4NAtLlOg6awqwv3znUgrXC5xFmB1Ti5fAeoEovPaoKa1BUZs1W46jJM/OAi5XmfJu/i+fQQr3W7efwcE1rdPL5yN89JLb8RGURNHNwZCLTYKbKzKzlWRzdh1x4JmjBMu14a+CmfO1ZC4yrLI9ioYsm+dUlwvaNehU2yyiqbrBSydEDElrbTpdtI4XsWeoogQYsWT9okmCNKMNealcveLSIItACQZ6xHgHo95WyM1rEXMGkXg3DcSrr2xJSVHQg4bzt4FgA4r7glkRMEM8zfGaS2WrPedbutoCej0xd939fJ+suFy9ZwWdcXMM4K2rgsWC6sq7d1S+8d5dHlWWCXmzCh9iZMR+Ya2BX2OqUdnMhIeGy/ZHyQD9e36M++mpBDf3KXqlZ9kyVrDdWEUSz7swXCQgHHu2U9psBahCWDGdJs2aNXXDKVQOG8wIZjiI9ZKULU72O0ixfPWTDybpXMH7unNY6Ia+vizQTzJUjIhSrTgBxEbaFlhIZx4Yxcf62kOUgy0T56vJhBoXJiSi8uO2xDZ7FsCAWcuac5QQjhZv3kYgVqpaKBKaJA9HaSwIY0D8LcEkxV4OM0mlWFYkuqtcucvP+hBbvtCWPekFhe4NtQeYE+0T6Mc/17ZGT9mUyjzSLBY5RX/542Nad52IF6010/CrUktGIey4wNTehoqKuWgxjAURgnnKFQtPnBL9KqTlq5fHazaIQ6JtObCy5Xzm/eRot4vk9hgq11U3VwbVqzgiLS5grkrqXDveuAT2gc0Dn526p9qDeTK789txvuTgC1WzLcsWCYfksSFRVnz04IZ55A3ZagdDMVdbXwXPHTRjKhnkqLipFMZvJPfFFZKYYUd1RWDvKUTz6RmO9UUuzaav4kao5Uaz8sRnUu7XmKsmUW82sZQcOfzzU91BPKLXa1PBU5yqQ7FfPfff/BaVeh5tfINNSjuYXxBoeXFkHDi2ZkzYhXZQye6JdCzjFXxEjjC+SL0cfbD68+vXsvIMGP1QxRFyI4GAsBZ+LM5PrG67P7gNyeXB5c8MCIrm4TUVfXhQLwuKfTW8b/9fy0yfVat6fmPtTIYLOtJ0qbpFpdDrCoLjeO9xqLs5A9uDj66incPdZrSQ096WQ8PCpmBT1V1NXjbK3a7m7SfSPL6uNPKeVbKTtNv83PF6aslinnTNgTxTBLBjPDg/sMWbjwYLM8uJ2zKh/dAXsOiikzrDWVW11Bd3YO8y5C09n68OwePvPmyALd3YZYgnQDzzrh8VTm6UDHzu7BgZMfadkkduCmohdpNDZpxD61qp5BQDqxQ6qlE+jMH2SmlLEyKYOJDpBjv78Manbwmk6Js1lzza6Z+MJCx6EQ6rPrQPxGYuOjP6fvI7F7icje5NnWNkq5eHiXLUVxIQ8npOs6VxOHpo9QGtjlukoHzgkRijbnheVWYIVYz8KWvAVthUzATJchmFXHiJ1QLT52hqfYWWHLJjYRPSnvneeTsRAjE8hdxo3ZNj+2w+WoQZpTCgNAAmdWriq0OENAM98kYlKySHyrVbzZ1eltZU6629ACfvJ6Gb9EQlJGQ8pffaCbC0jdZD+dmLY2PurzLVihKiRAx0MK3HAwI5XoDzTSzOVzYr6qGSmgFo6ZkRLCWTad5jZia2Yz8rYsNnI5X8oJZgQFpR91PNYoU7VF58oQhGiuS0U/UwogaMh9aVKrKWKpg81dbbetiGBb9TbTbM8sXZyXslEc16VCFG15WAo5FntK2KICoaBvDwgeZxxsq00XkGnbNTTrQ22sD9SRdLG6xhhyqzsKG6pZ9r1Z4CHXYcK/OmfIpw1HTtcNMKrGXdxgVl6UL5LWzBCZC9jUYPZ+DtKKTO8WFjRW5iVYA/rntjeKuPHUPSO/mrooOykESHbmewow3V911y3MzYbHeefd2ifjz1c8I7Wx+vRSICfdp4XYBvu0wEAzPJLpUnlroPK2sZ52gM5Xp2GoqIRf2EpLlwhimewSAYOtuf+Tjm58Xpm1zCwUyLKX/NKxdYf4rhvxgQ1xDDUCqIlpudYMlywqfsC1M7NwyTDiPEXSPfrSu1jlTtQc5Xr2LLDqb6N5QiOyTk/aNK0enOPERkU2nWIVIZCNvAzXm6PxuS9u1JzHCKNaMBtlNIwjFUiu7VwXimuSRF8/bw8TCvfbMF5nL+XqQEeb+MRbIBCDgqb6aWNc+Z5hXfjqkEDoc2aLs4WQpSkLC3QHi96M1rzmZLt5CNe7A8LiDOcsimXTOBWPdMvocdYJhX6z5Ctw8iKaJ9uQDuGc7h/i+eM62uVnpei9j1OzBzw3f5zW8/L+u/x5p2WrL8J9OD1Q7HZeOe2Ah/R2N5M6DMzwr8C+HPc6ds0pfa3t/R1N6L6mm53/+fnwmHWJ54cPvoTbOEz/psQe0sz/fNfXyfaJbWblvHm4qTvla46h9EM75/XWKqJ+iGmKPnPKpoQrM0ilXGy9W6ZH2ed5Qq31NdkuqtcWmJTe7u4xTq+lCxw4zTSnm/J5LSDWC0N9g4plig3K3RUKc7wWHYCN4ssQupQlnFwH0VbXB9PL/02OU3trNrJn6cp4MOXPHpjSbRd/mngSlSTdwX4aRqIDX/vp05QWrNx8OrydzWYZzOlrdDg8RsSceK9/7relsmfDml6b4i+t/a3Cu2h1Fc4f7zPLl+lii2gZPq+OT6BBUbJsQVFyXVFR8gBFScUURlBRMqHb71J+uyrp9i2Ye1SFSvJVRN6UfrJFos3u566YdzEmxmpMINGYsETrF+kq6QBpBHJImK+F7ukxG8JJhifqeks0ca3WtARHT1Zj1wwA1+EyFbskGTs+nz4gl/HQw6sM7rXbXULcFdJ1yll2WVtZquGUX/zMcXijBunFH3DW/oAf3Oxv185cYaERWYBtvgLpbynaONL2jLx7QKc34+IB604KdGstE8a0cgeUCjO5rqOJpLGjzJ7WF1cgfBwMA5YAgaxlBZZATYSdnGwKKIuwN+fXXyLsp1gdcDa0K2l2sGR/8xH2Gqy1B8HaCwYOhoEaRvOdiILOeBjnJTVptGirvozktB4EkPPjkptU0clMFXnCZX+nsxV1bSq/p+F2PGPR6s5lbSrbizcHU6xgNzSUZsShYyoWNvTl89PqdUQp+Vs7pzgQ/OecooTgz3XylPJsbN1S9yS2/rgF4LVN9pmHMH2fTavXxRGI7/F9Y0QjDShdUBDPBBUNaII5B5FO9SU10WKo05k0BwdifinzXjiRv7A1ggoRHlQ0LV6sYWNCr1JpEmiCl0SR/kjhtfTn0XyuEV6B4P+AwQWIWxXgghmw2JjwOt3CmLKarKSrmNKjfXTbT5sJvlr+PDEySMaU1BXTAOpnxOnCO88V8U7AOsQIVfVsGJBLNNe16K1zbXaz1Vt9NezF1m2BwRPJZCPmZLDiEBDw4y1wHgfgtbdbcHkhhFDP2jmJpTTXx7HECd3YdDTtMHGrTiQLt9h3YEXnOHCtzGRgJ58hpHRcZ2b71WZTDrIMNR5t/OpVvejLab
qsuVJlTsUoWiXoUjEc4vBpfrZrz+wyAEW/EjSESkXSHwxRfIFoB4i6Fho9RIeJgvXQLUyhiWT/geGGC7SKHDYkvr3BJTYjm1zLF+jAQTP/9F5vjifyzNRUgyhMcQMrJziOAi2+kwXdue7ZXYK7rXBykysYW5ob1GlQAD3XLlPMlCoppI1sNGl7vcMU6pqJDEuaJBVRgevzROQ3E5EsfVIctxDhyZzppxbtBm3W3AHm6c7DlcwY1ZGVv5iUidpUXuwK7Nv32hQk26SChMVw1j+Wy905jZ35rjAmsEaPMYGAMa/mIrqcmY3ER4O0AdgXdUUctAEYayoPrZHNgxpN1cGfuN/kTzz6zC2gW2WRyqOwj5xivc0GYgrwiZ5qBe8knAbmWdx+ePXp3fvT+JbATxbuneu4nKyigePlEmeB43MXUMSfOWVeVR1/goNUrxa4l1GPj2PI3lMaNzSb1NU4d0BmdoFipqK+f21NNMoWMhkdb8alKWo20GwxmeJtioyb97R86jx1qNzy1ZjEhISsGJDPQE00VPCZN3/eoldv9pvN5/XLl+lvweKXDw9T1BxY1F1f0X9QkUFtgQ1jbdUW3HEVzziiF+WM40AG6JPnq4Q1P2jRAljFinobBg3SFndwCX+KVSEzlQg20ppttP5MBWS82DWjLWCbT50lWFZbEBezAm4pn19KXUuPGqt+0EbRF6t+xFa9i89TTiOWmjwGnPb64bR3wekLTgPPffFUafVUEd8a2lPlilbGxRXQ2RUAA1KXKwAGpBhDPZuIWInRB8NAlAABMRsAKRgD0wVSr7nni4asoW5ZdkPrBCOX/Z6k7GfTS8ci+8FKu4vfqd7vVOUbdiAp07X5nVxxPvjrbbJJns9EHAwARMxLdJtA7B/q/qsiBQL0MbFw8Xk0TxJk+Gj5smzmtnq+3CsJOuUTtHqrgpw+Q87amjev/SI9qc5sey/iQ4rzYIuf7gsLELDboC4BwgyXpuYItM3BptuuzJP1Oprv8xTMSaFMd+DTdnWaPW0LJO4Vq4mr7JUudYlxiTPp1HST/UxO67PZjDTt0BwMVqJbS4DVaNvmnLrpzCNVISAxy1NPyxxQwSHN9q0mBedsFBVTGUmpslAvghRXdIn3wj5p0WWank+rKuOJLrUxs/biuC5dBdlCgxYfAfJXTxs+GEQSbZwumZd9My+PBX2aky9taV+ZwSxNrxy4EcZn9U3CAPI/A2Ex3WkYVnN7m+81MamHw7gxAckq077ZBCT17ZJqhK+Qf2QLSUOyqO/y4oPY3oy1LDeG+oNma1z6QQ87qh6LcgYeVT+udtDsuS+a6KTw+3CsBJlqBw2ylUBMoxAANIjHzLO5jfItoJWnLi8M3HT5vMwqfR6zAmlG4TGDG01L2FcDeczUb/KgHjL44UVS+Rgd9puHgtgZQbZ9gTxeQ0CtqkZDqRu6BA3XOwixBL62dD1dHuygT0zY+GSBbvqqLMfm0n+LrgmKVM16gVBFcdUqqSWrkuqx+Dp7cINqvgvJ8x/kZsqiAFcSYjlmm1qEs3LDsp4dywjhg6rID5oW1lxvG5htI92lLupMiLbGV1lHsf1ptLH5fCuN2sqHX9UhuF9CZ9+3OSpyetJN4AlD0Pm11JEKuIO+mD4yZuNAu6WLg2q7YWjKiUk7V0wOvSgkqnnbgNoIY2CtnI7R6ci0EZyz6tpYMn9+nkAhp71g3wPv1p3R2gKjJXweqWZGG0jkeP3IjHZghyKyZOBz4bRny2kZA23ltIxQR8Zpu9l9It/FPTmnI1RqaeScNaQJ6agy49xLU6ZwzZSpj9EuXjxnE63uVsn8sXYKe6v770wao+ri58Rp7BHnQknWgaaCTdCoZInDZxSIHunI0P6cGQQMG67ZOrLKVa4Dn9huSMY4+4FUOjFKjLxBreeiVnB0YWLqHx1dnFgipj4idNYXJz6izYgDxRJlMOMOFHfZ5fFFihEea80W6wvPWDBboYypAbBTKmZ6wt58LOzUh3BHbQxzM2Q3EyPTY2sGBws87c53bGdYla0FqztxPh/Q47WNiEV4kGTqs80OVdjJqzk5u4OF0xUd2yOGQNNZ0LvjDGZDND53iTfePO+ft5FR7rgcgjsunMhf2BB39PEdcd3z5o5FtxJRqdfFLRvx/UyU+gYbVRVIAoezvIaN8CLcJ8Xs0s7yR2hnOeDglZp0MkusA0CSxXTiYkITIGAx3bEFImFHj4g/KnF6qGq1yZc62LZt0IdVA0+JYgfmF4mfQkouZQjyu7in7LQ4+nt4F60+JLs47xxxl+z3KRsiVyv6wVU4f7zPGHO1s2r6k56S3ewVQxALwpb8eXL+R7lfyvtu5ou1PYvnyXoZpyx/e2B8N4twH6Z/6PEd/Zus2UuHjtnMPpg+xevnHTWYZpv1vSA1Kkz5nnI19sFjtJ8/jJZb105z40jlmv5qJQCXQ39X1PVE3FfQRAhuX9Yc31LevdJovzLQ8uwvbYPF7fw/vr3613Tj/+1//ee36f/GUyQrbYvAj3kjsvHBL61o+8xmBIJWWKBjXbMYQUoepmK6J13LjpY3WsIPkgmbhVGmb3jKkxZlumsCEPL5QSRYcRMecJdsUSG+MJMT+loTW4y/6OpqDWKzREXpKHoLEiZVW2wH9vzqB4C5vXeqaeNP2Klv1V0YyqiSKUr+3mzkZgwZRV4A/Igd7N/zaLnYYdPH13Kxf5KAJpZiAC+bdKofiYU0IsR4IYVHy0CUb/H4+AWSYN+j0NZsC+pEqUtfg/eqv2bbuPdnpbDB30RCk/0huG2BIuNltzJJ5KNV2E7d9BEy4P4Z47qYymCYeV6pIxq5yKjSwWGlTcLQGLfWdi7Z4PD2D1NbZsZlP7VmlmVX/Pa+21LBlr75EG3jdHcpBZry5jd6r7TnxvjOzC63PKo2iSGkZ3Ml5JByKyVWxlgsWyTR9B4dcEJSTSNoykmt1wKBjD2CAEsf/SEEoYMowVBpmNGIJCT5vxv+ZiYkCe+rqfb6jj0rF2dbXOaK5c4CsXZb8SwSPqBJ7+qVGWZDg4MerAne8EFG4PwognpYHG+WvzYmvRNVW6gHeTO5/oZaRHDjpl9EsBIRbFs2A/FQIpg0mxjnnd6nfhwBTBWAzdCowZpP8INhb5+T1NI38E1NqaFyFDIp9jq3kWLNkwphpTqLrHGbzsR3qL/bnVXtZEiwmCuup+gMZigd24qqki4aWUPPnkmK2QORzSF38bgkTHNNvXLtgoMSaoHSGGeIngZ/W7lX8jT4n1eKn3Z+DYxOG5ZhB+fDnxXSWOMEcBMzKatI4FiesvrU9O02oe0YjqenhPTwR7KI6Bn/Dw==</diagram></mxfile>
2205.09853/paper_text/intro_method.md ADDED
@@ -0,0 +1,96 @@
1
+ # Introduction
2
+
3
+ Predicting what one may visually perceive in the future is closely linked to the dynamics of objects and people. As such, this kind of prediction relates to many crucial human decision-making tasks ranging from making dinner to driving a car. If video models could generate full-fledged videos in pixel-level detail with plausible futures, agents could use them to make better decisions, especially safety-critical ones. Consider, for example, the task of driving a car in a tight situation at high speed. Having an accurate model of the future could mean the difference between damaging a car or something worse. We can obtain some intuitions about this scenario by examining the predictions of our model in Figure [1](#fig:teaser){reference-type="ref" reference="fig:teaser"}, where we condition on two frames and predict 28 frames into the future for a car driving around a corner. We can see that this is enough time for two different painted arrows to pass under the car. If one zooms in, one can inspect the relative positions of the arrow and the Mercedes hood ornament in the real versus predicted frames. Pixel-level models of trajectories, pedestrians, potholes, and debris on the road could one day improve the safety of vehicles.
4
+
5
+ <figure id="fig:teaser" data-latex-placement="h">
6
+ <p><img src="arrow-turn.png" alt="image" /> <img src="arrow-later.png" alt="image" /></p>
7
+ <figcaption>Our approach generates high quality frames many steps into the future: Given two conditioning frames from the Cityscapes <span class="citation" data-cites="cordts2016cityscapes"></span> validation set (top left), we show 7 predicted future frames in row 2 below, then skip to frames 20-28, autoregressively predicted in row 4. Ground truth frames are shown in rows 1 and 3. Notice the initial large arrow advancing and passing under the car. In frame 20 (the far left of the 3rd and 4th row), the initially small and barely visible second arrow in the background of the conditioning frames has advanced into the foreground. Result generated by our <strong>MCVD</strong> concat model variant. Note that some Cityscapes videos contain brightness changes, which may explain the brightness change in this sample. </figcaption>
8
+ </figure>
9
+
10
+ Although beneficial to decision making, video generation is an incredibly challenging problem; not only must high-quality frames be generated, but the changes over time must be plausible and ideally drawn from an accurate and potentially complex distribution over probable futures. Looking far into the future is exceptionally hard given the exponential increase in possible futures. Generating video from scratch or unconditionally further compounds the problem because even the structure of the first frame must be synthesized. Also related to video generation are the simpler tasks of a) video prediction, predicting the future given the past, and b) interpolation, predicting the in-between given past and future. Yet, both problems remain challenging. Specialized tools exist to solve the various video tasks, but they rarely solve more than one task at a time.
11
+
12
+ Given the monumental task of general video generation, current approaches are still very limited despite the fact that many state-of-the-art methods have hundreds of millions of parameters [@wu2021greedy; @weissenborn2019scaling; @villegas2019high; @babaeizadeh2021fitvid]. While industrial research is capable of looking at even larger models, current methods frequently underfit the data, leading to blurry videos, especially in the longer-term future, and recent work has examined ways to improve parameter efficiency [@babaeizadeh2021fitvid]. Our objective here is to devise a video generation approach that generates high-quality, time-consistent videos within our computation budget ($\leq$ 4 GPUs) and training times ($\leq$ two weeks). Fortunately, diffusion models for image synthesis have demonstrated wide success, which strongly motivated our use of this approach. Our qualitative results in [1](#fig:teaser){reference-type="ref+label" reference="fig:teaser"} also indicate that our particular approach does quite well at synthesizing frames in the longer-term future (i.e., frame 29 in the bottom right corner).
13
+
14
+ One family of diffusion models might be characterized as Denoising Diffusion Probabilistic Models (DDPMs) [@sohl2015deep; @ho2020denoising; @dhariwal2021diffusion], while another as Score-based Generative Models (SGMs) [@song2019generative; @li2019learning; @song2020improved; @jolicoeur2020adversarial]. However, these approaches have effectively merged into a field we shall refer to as score-based diffusion models, which work by defining a stochastic process from data to noise and then reversing that process to go from noise to data. Their main benefits are that they generate very 1) high-quality and 2) diverse data samples. One of their drawbacks is that solving the reverse process is relatively slow, but there are ways to improve speed [@song2020ddim; @jolicoeur2021gotta; @salimans2022progressive; @liu2022pseudo; @xiao2021tackling]. Given their massive success and attractive properties, we focus here on developing our framework using score-based diffusion models for video prediction, generation, and interpolation.
15
+
16
+ Our work makes the following contributions:
17
+
18
+ 1. A conditional video diffusion approach for video prediction and interpolation that yields SOTA results.
19
+
20
+ 2. A conditioning procedure based on masking past and/or future frames in a blockwise manner giving a single model the ability to solve multiple video tasks: future/past prediction, unconditional generation, and interpolation.
21
+
22
+ 3. A sliding window *blockwise autoregressive* conditioning procedure to allow fast and coherent long-term generation ([2](#fig:autoregression){reference-type="ref+label" reference="fig:autoregression"}).
23
+
24
+ 4. A convolutional U-net neural architecture integrating recent developments with a conditional normalization technique we call SPAce-TIme-Adaptive Normalization (SPATIN) ([3](#fig:our_net){reference-type="ref+label" reference="fig:our_net"}).
25
+
26
+ By conditioning on blocks of frames in the past and optionally blocks of frames even further in the future, we are able to better ensure that temporal dynamics are transferred across blocks of samples, i.e. our networks can learn *implicit* models of spatio-temporal dynamics to inform frame generation. Unlike many other approaches, we do not have explicit model components for spatio-temporal derivatives or optical flow or recurrent blocks.
27
+
28
+ Let $\mathbf{x}_0 \in \mathbb{R}^d$ be a sample from the data distribution $p_{\text{data}}$. A sample $\mathbf{x}_0$ can be corrupted from $t=0$ to $t=T$ through the Forward Diffusion Process (FDP) with the following transition kernel: $$\begin{equation}
29
+ q_t(\mathbf{x}_t | \mathbf{x}_{t-1}) = \mathcal{N}(\mathbf{x}_t;\sqrt{1-\beta_t}\mathbf{x}_{t-1},\beta_t \mathbf{I}),
30
+ \end{equation}$$ where $\beta_t \in (0,1)$ is the variance of the Gaussian noise added at step $t$.
31
+
32
+ Furthermore, $\mathbf{x}_t$ can be sampled directly from $\mathbf{x}_0$ using the following accumulated kernel: $$\begin{align}
33
+ q_t(\mathbf{x}_t|\mathbf{x}_0) = \mathcal{N}(\mathbf{x}_t; \sqrt{\bar\alpha_t}\mathbf{x}_0, (1-\bar\alpha_t)\mathbf{I}) \label{eq:q_marginal_arbitrary_t} \implies \mathbf{x}_t = \sqrt{\bar\alpha_t}\mathbf{x}_0 + \sqrt{1 - \bar\alpha_t}\,\boldsymbol{\epsilon}
34
+ \end{align}$$ where $\bar\alpha_t = \prod_{s=1}^t (1 - \beta_s)$, and $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$.
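+
+ As an illustration, the accumulated kernel lets us draw $\mathbf{x}_t$ directly in one step. The sketch below assumes a linear $\beta$ schedule, which is an illustrative choice rather than the paper's.
+
+ ```python
+ import torch
+
+ betas = torch.linspace(1e-4, 2e-2, 1000)       # illustrative schedule
+ alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product of (1 - beta_s)
+
+ def q_sample(x0, t):
+     # x0: (B, C, H, W); t: (B,) integer timesteps
+     eps = torch.randn_like(x0)                 # eps ~ N(0, I)
+     a = alpha_bar[t].view(-1, 1, 1, 1)
+     return a.sqrt() * x0 + (1.0 - a).sqrt() * eps, eps
+ ```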
35
+
36
+ Generating new samples can be done by reversing the FDP and solving the Reverse Diffusion Process (RDP) starting from Gaussian noise $\mathbf{x}_T$. It can be shown (@song2020score [@ho2020denoising]) that the RDP can be computed using the following transition kernel: $$\begin{align*}
37
+ &p_t(\mathbf{x}_{t-1}|\mathbf{x}_t,\mathbf{x}_0) = \mathcal{N}(\mathbf{x}_{t-1}; \tilde{\boldsymbol{\mu}}_t(\mathbf{x}_t, \mathbf{x}_0), \tilde\beta_t \mathbf{I}), \\
38
+ \text{where}\quad \tilde{\boldsymbol{\mu}}_t(\mathbf{x}_t, \mathbf{x}_0) &= \frac{\sqrt{\bar\alpha_{t-1}}\beta_t }{1-\bar\alpha_t}\mathbf{x}_0 + \frac{\sqrt{\alpha_t}(1- \bar\alpha_{t-1})}{1-\bar\alpha_t} \mathbf{x}_t \quad \text{and} \quad
39
+ \tilde\beta_t = \frac{1-\bar\alpha_{t-1}}{1-\bar\alpha_t}\beta_t \numberthis \label{eq:RDP}
42
+ \end{align*}$$
43
+
44
+ Since $\mathbf{x}_0$ given $\mathbf{x}_t$ is unknown, it can be estimated using eq. ([\[eq:q_marginal_arbitrary_t\]](#eq:q_marginal_arbitrary_t){reference-type="ref" reference="eq:q_marginal_arbitrary_t"}): $\hat{\mathbf{x}}_0 = \big(\mathbf{x}_t - \sqrt{1 - \bar\alpha_t}\,\boldsymbol{\epsilon}\big)/\sqrt{\bar\alpha_t}$, where $\boldsymbol{\epsilon}_\theta(\mathbf{x}_t|t)$ estimates $\boldsymbol{\epsilon}$ using a time-conditional neural network parameterized by $\theta$. This allows us to reverse the process from noise to data. The loss function of the neural network is: $$\begin{align}
45
+ L(\theta) = \mathbb{E}_{t, \mathbf{x}_0 \sim p_{\text{data}}, {\boldsymbol{\epsilon}}\sim \mathcal{N}(\mathbf{0}, \mathbf{I})}\!\left[ \left\| {\boldsymbol{\epsilon}}- {\boldsymbol{\epsilon}}_\theta(\sqrt{\bar\alpha_t} \mathbf{x}_0 + \sqrt{1-\bar\alpha_t}{\boldsymbol{\epsilon}}\mid t) \right\|_2^2\right] \label{eq:training_objective_simple}
46
+ \end{align}$$
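+
+ A minimal training-step sketch of this objective, reusing `q_sample` and `alpha_bar` from the sketch above; the `eps_model(xt, t)` signature is an assumption, not the paper's interface.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def denoising_loss(eps_model, x0):
+     # Sample a timestep and noise, corrupt x0, and regress the noise
+     t = torch.randint(0, len(alpha_bar), (x0.shape[0],), device=x0.device)
+     xt, eps = q_sample(x0, t)
+     return F.mse_loss(eps_model(xt, t), eps)
+ ```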
47
+
48
+ Note that estimating $\epsilon$ is equivalent to estimating a scaled version of the score function (i.e., the gradient of the log density) of the noisy data: $$\begin{align}
49
+ &\nabla_{\mathbf{x}_t} \log q_t (\mathbf{x}_t \mid \mathbf{x}_0) = -\frac{1}{1 - \bar\alpha_t}(\mathbf{x}_t - \sqrt{\bar\alpha_t} \mathbf{x}_0) = -\frac{1}{\sqrt{1 - \bar\alpha_t}}\boldsymbol{\epsilon}
50
+ \end{align}$$
51
+
52
+ Thus, data generation through denoising depends on the score-function, and can be seen as noise-conditional score-based generation.
53
+
54
+ Score-based diffusion models can be straightforwardly adapted to video by considering the joint distribution of multiple consecutive frames. While this is sufficient for unconditional video generation, other tasks such as video interpolation and prediction remain unsolved. A conditional video prediction model can be approximately derived from the unconditional model using imputation [@song2020score]; indeed, the contemporary work of [@ho2022VDM] attempts to use this technique; however, their approach is based on an approximate conditional model.
55
+
56
+ We first propose to directly model the conditional distribution of video frames in the immediate future given past frames. Assume we have $p$ past frames $\mathbf{p} = \left\{ \mathbf{p}^i \right\}_{i=1}^{p}$ and $k$ current frames in the immediate future $\mathbf{x}_0 = \left\{ \mathbf{x}_0^i \right\}_{i=1}^{k}$. We condition the above diffusion models on the past frames to predict the current frames: $$\begin{align}
57
+ \label{eqn:vidpred}
58
+ L_{\text{vidpred}}(\theta) = \mathbb{E}_{t, [\mathbf{p}, \mathbf{x}_0] \sim p_{\text{data}}, {\boldsymbol{\epsilon}}\sim \mathcal{N}(\mathbf{0}, \mathbf{I})}\!\left[\left\| {\boldsymbol{\epsilon}}- {\boldsymbol{\epsilon}}_\theta(\sqrt{\bar\alpha_t} \mathbf{x}_0 + \sqrt{1-\bar\alpha_t}{\boldsymbol{\epsilon}}\mid \mathbf{p}, t)\right\|^2\right]
59
+ \end{align}$$
60
+
61
+ Given a model trained as above, video prediction for subsequent time steps can be achieved by blockwise autoregressively predicting current video frames conditioned on previously predicted frames (see [2](#fig:autoregression){reference-type="ref+label" reference="fig:autoregression"}). We use variants of the network shown in [3](#fig:our_net){reference-type="ref+label" reference="fig:our_net"} to model ${\boldsymbol{\epsilon}}_\theta$ in [\[eqn:vidpred\]](#eqn:vidpred){reference-type="ref+label" reference="eqn:vidpred"} here, and for [\[eqn:vidgen\]](#eqn:vidgen){reference-type="ref+label" reference="eqn:vidgen"} and [\[eqn:vidgeneral\]](#eqn:vidgeneral){reference-type="ref+label" reference="eqn:vidgeneral"} below.
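+
+ A sketch of this sliding-window blockwise autoregressive loop; `sample_block` stands in for a full reverse-diffusion sampler, and the tensor shapes and names are illustrative.
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def blockwise_autoregressive(sample_block, past, n_blocks):
+     # past: (B, p, C, H, W); sample_block(cond) runs the reverse diffusion
+     # and returns a block of k current frames: (B, k, C, H, W)
+     p = past.shape[1]
+     frames = list(past.unbind(dim=1))
+     for _ in range(n_blocks):
+         cond = torch.stack(frames[-p:], dim=1)  # sliding conditioning window
+         block = sample_block(cond)
+         frames.extend(block.unbind(dim=1))
+     return torch.stack(frames, dim=1)           # (B, p + n_blocks*k, C, H, W)
+ ```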
62
+
63
+ <figure id="fig:autoregression" data-latex-placement="ht">
64
+ <div class="minipage">
65
+ <embed src="figs/autoregressive_v2_2.pdf" />
66
+ </div>
67
+ <div class="minipage">
68
+ <div class="picture">
69
+ <p>(150,150) (-25, 140)<span>Real Past</span> (27,140)<span><span class="math inline"><em>t</em> = 1</span></span> (58,140)<span><span class="math inline"><em>t</em> = 2</span></span> (89,140)<span><span class="math inline"><em>t</em> = 3</span></span> (120,140)<span><span class="math inline"><em>t</em> = 4</span></span> (151,140)<span><span class="math inline"><em>t</em> = 5</span></span> (-40,75) <span><img src="drive-by-1-5.png" style="width:100.0%" alt="image" /></span> (-40,0) <span><img src="drive-by-4-10.png" style="width:100.0%" alt="image" /></span> (-35, 65)<span><span class="math inline"><em>t</em> = 6</span></span> (-4, 65)<span><span class="math inline"><em>t</em> = 7</span></span> (27,65)<span><span class="math inline"><em>t</em> = 8</span></span> (58,65)<span><span class="math inline"><em>t</em> = 9</span></span> (89,65)<span><span class="math inline"><em>t</em> = 10</span></span> (120,65)<span><span class="math inline"><em>t</em> = 11</span></span> (151,65)<span><span class="math inline"><em>t</em> = 12</span></span> (-28, 88)<span>Prediction <span class="math inline">→</span></span></p>
70
+ </div>
71
+ </div>
72
+ <figcaption>(Above) Blockwise autoregressive prediction with our model. (Right) shows this strategy where the top row and third row are ground truth, and the second and fourth rows show the blockwise autoregressively generated frames using our approach. </figcaption>
73
+ </figure>
74
+
75
+ Our approach above allows video prediction, but not unconditional video generation. As a second approach, we extend the same framework to video generation by masking (zeroing out) the past frames with probability $p_{\text{mask}}=1/2$ using a binary mask $m_p$. The network thus learns to predict the added noise without any past frames for context. Doing so means that we can perform conditional as well as unconditional frame generation, i.e., video prediction and generation with the same network. This leads to the following loss ($\mathcal{B}$ is the Bernoulli distribution): $$\begin{align}
76
+ \label{eqn:vidgen}
77
+ L_{\text{vidgen}}(\theta) = \mathbb{E}_{t, [\mathbf{p}, \mathbf{x}_0] \sim p_{\text{data}}, {\boldsymbol{\epsilon}}\sim \mathcal{N}(\mathbf{0}, \mathbf{I}), m_p \sim \mathcal{B}(p_{\text{mask}})}\!\left[ \left\| {\boldsymbol{\epsilon}}- {\boldsymbol{\epsilon}}_\theta(\sqrt{\bar\alpha_t} \mathbf{x}_0 + \sqrt{1-\bar\alpha_t}{\boldsymbol{\epsilon}}\mid m_p\mathbf{p}, t) \right\|^2\right]
78
+ \end{align}$$ We hypothesize that this dropout-like [@srivastava14a] approach will also serve as a form of regularization, improving the model's ability to perform predictions conditioned on the past. We see positive evidence of this effect in our experiments -- see the MCVD past-mask model variants in [\[tab:bair_pred,tab:SMMNIST_ablation\]](#tab:bair_pred,tab:SMMNIST_ablation){reference-type="ref+Label" reference="tab:bair_pred,tab:SMMNIST_ablation"} versus without past-masking. Note that random masking is used only during training.
79
+
80
+ We now have a design for video prediction and generation, but it still cannot perform video interpolation or past prediction from the future. As a third and final approach, we show how to build a general model for solving all four video tasks. Assume we have $p$ past frames, $k$ current frames, and $f$ future frames $\mathbf{f} = \left\{ \mathbf{f}^i \right\}_{i=1}^{f}$. We randomly mask the $p$ past frames with probability $p_{\text{mask}}=1/2$, and similarly randomly mask the $f$ future frames with the same probability (but sampled separately). Thus, future or past prediction is when only future or past frames are masked. Unconditional generation is when both past and future frames are masked. Video interpolation is when neither past nor future frames are masked. The loss function for this general video machinery is: $$\begin{align}
81
+ \label{eqn:vidgeneral}
82
+ L(\theta) = \mathbb{E}_{t, [\mathbf{p}, \mathbf{x}_0, \mathbf{f}] \sim p_{\text{data}}, {\boldsymbol{\epsilon}}\sim \mathcal{N}(\mathbf{0}, \mathbf{I}), ( m_p, m_f ) \sim \mathcal{B}(p_{\text{mask}})}\!\left[ \left\| {\boldsymbol{\epsilon}}- {\boldsymbol{\epsilon}}_\theta(\sqrt{\bar\alpha_t} \mathbf{x}_0 + \sqrt{1-\bar\alpha_t}{\boldsymbol{\epsilon}}\mid m_p\mathbf{p}, m_f\mathbf{f}, t) \right\|^2\right]
83
+ \end{align}$$
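+
+ A sketch of the independent Bernoulli masking of the past and future conditioning used by this general loss; shapes and names are illustrative assumptions.
+
+ ```python
+ import torch
+
+ def mask_conditioning(past, future, p_mask=0.5):
+     # Independently zero out all past and/or all future frames; which
+     # blocks survive determines the task (prediction, generation,
+     # interpolation, or past prediction).
+     shape = (past.shape[0],) + (1,) * (past.dim() - 1)
+     m_p = (torch.rand(shape, device=past.device) >= p_mask).float()
+     m_f = (torch.rand(shape, device=future.device) >= p_mask).float()
+     return m_p * past, m_f * future
+ ```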
84
+
85
+ # Method
86
+
87
+ For our denoising network we use a U-net architecture [@ronneberger2015u; @honari2016recombinator; @{salimans2017pixelcnn++}] combining the improvements from @song2020score and @dhariwal2021diffusion. This architecture uses a mix of 2D convolutions [@fukushima1982neocognitron], multi-head self-attention [@cheng2016long], and adaptive group-norm [@wu2018group]. We use positional encodings of the noise level ($t \in [0,1]$) and process it using a transformer style positional embedding: $$\begin{equation}
88
+ \textbf{e}(t)=\left[ \ldots,\cos \left(t c^{\frac{-2d}{D}} \right), \sin \left(t c^{\frac{-2d}{D}} \right)
89
+ , \ldots \right]^{\mathrm{T}},
90
+ \end{equation}$$ where $d=1,\ldots,D/2$, $D$ is the number of dimensions of the embedding, and $c=10000$. This embedding vector is passed through a fully connected layer, followed by an activation function and another fully connected layer. Each residual block has a fully connected layer that adapts the embedding to the correct dimensionality.
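+
+ A small sketch of this noise-level embedding, following the formula above with $d=1,\ldots,D/2$ and $c=10000$:
+
+ ```python
+ import torch
+
+ def noise_embedding(t, D, c=10000.0):
+     # t: (B,) noise levels in [0, 1]; returns (B, D) embeddings
+     d = torch.arange(1, D // 2 + 1, dtype=torch.float32, device=t.device)
+     freqs = c ** (-2.0 * d / D)             # c^{-2d/D}, shape (D/2,)
+     angles = t[:, None] * freqs[None, :]    # (B, D/2)
+     return torch.cat([angles.cos(), angles.sin()], dim=-1)
+ ```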
91
+
92
+ To provide $\mathbf{x}_t$, $\rvp$, and $\rvf$ to the network, we separately concatenate the past/future conditional frames and the noisy current frames in the channel dimension. The concatenated noisy current frames are directly passed as input to the network. Meanwhile, the concatenated conditional frames are passed through an embedding that influences the conditional normalization akin to SPatially-Adaptive (DE)normalization (SPADE) [@park2019semantic]; to account for the effect of time/motion, we call this approach SPAce-TIme-Adaptive Normalization (SPATIN).
93
+
94
+ ![We give noisy current frames to a U-Net whose residual blocks receive conditional information from past/future frames and noise-level. The output is the predicted noise in the current frames, which we use to denoise the current frames. At test time, we start from pure noise.](figs/SPATIN.pdf){#fig:our_net width="90%"}
95
+
96
+ In addition to SPATIN, we also try directly concatenating the conditional and noisy current frames together and passing them as the input. In our experiments below we show some results with SPATIN and some with concatenation (concat). For simple video prediction with [\[eqn:vidpred\]](#eqn:vidpred){reference-type="ref+label" reference="eqn:vidpred"}, we experimented with 3D convolutions and 3D attention. However, this requires an exorbitant amount of memory, and we found no benefit in using 3D layers over 2D layers at the same memory budget (i.e., the biggest model that fits in 4 GPUs). Thus, we did not explore this idea further. We also tried, and found no benefit from, gamma noise [@nachmani2021denoising], an L1 loss, and F-PNDM sampling [@liu2022pseudo].
2205.15624/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2205.15624/main_diagram/main_diagram.pdf ADDED
Binary file (93.6 kB). View file
 
2205.15624/paper_text/intro_method.md ADDED
@@ -0,0 +1,87 @@
1
+ # Introduction
2
+
3
+ In the last few years, the facility location problem in a competitive market has received a lot of attention. In practice, modelling critical managerial decisions related to infrastructure planning, such as finding the placement of new retail, service or product facilities on the market, often leads to facility location problems. The competitive facility location problem concerns the decision of where to open new facilities in a market where a set of incumbent competitors already operates, in order to maximize the captured demand of customers. Two aspects need to be considered in this problem, namely, the demand of customers and the competitors in the market. Customers are independent decision makers, and their choices among different facilities are based on a utility that they assign to each location. To deal with these aspects, utilities are computed from facility attributes/features, e.g., distances, prices and transportation costs.
4
+
5
+ Two modelling approaches are popular for defining the demand of customers. In the first one, customers assign utilities to each location in a deterministic way. A simple approach was proposed in [@Duran1986], where customers choose the closest facility among the different competitors. This means that all the demand of a zone is assigned to the closest facility, which is not always reasonable. [@Huff1964] proposed another approach where the demand captured by a facility is proportional to the facility's attractiveness (e.g., facility size or square footage) and inversely proportional to a power of the distance. We refer the reader to [@Berman2009] for a detailed review.
6
+
7
+ The second popular modelling approach is the probabilistic one, where customers' selection probabilities for each facility are given by a probability model. There are several frameworks for the probabilistic approach, but the random utility maximization (RUM) framework ([@McFa1973], [@Ben-Akiva1986]) is the most popular in this setting. This framework is based on the assumption that each facility is associated with a random utility, determined by the features/attributes of the facility, and that customers maximize this utility. Under that assumption, we can compute the probability that a customer chooses one facility over the others. Accordingly, the goal of the competitive facility location problem is to locate new facilities in a competitive market so as to maximize the expected market share captured by the new facilities.
8
+
9
+ In the literature, there are several models based on RUM, and the multinomial logit (MNL) is the most popular one because of its simple structure. The advantage of this approach is that it allows us to consider the correlation between the characteristics of facilities and customers in the choice model. In addition, the choice model can be trained/estimated using supervised data of customer decisions, so the demand of customers can be predicted more accurately. Nevertheless, this approach is challenging, as it involves a nonlinear choice model in a data-driven discrete optimization context, and several mixed-integer linear programming (MILP) approaches have been proposed to overcome this issue. [@Benati2002] were the first to introduce the MNL model for the facility location problem. They proposed three methods to compute an upper bound, along with a branch-and-bound method to solve small instances. The first method is based on the concavity of the continuous relaxation of the objective function. The second uses the submodularity of the objective function, and the third uses additional variables to reformulate the MNL model as an integer linear formulation. They also introduced a simple variable neighborhood search (VNS) method to solve instances with more than 50 potential locations. Afterward, alternative MILP models were proposed by [@Haase2009] and [@Zhang2012]. [@Haase2013] evaluated and compared the above MILP models and concluded that the MILP model of [@Haase2009] is the most efficient. [@Freire2015] strengthened the MILP reformulation of [@Haase2009] by using tighter coefficients in some inequalities and also proposed a new branch-and-bound algorithm.
10
+
11
+ However, these studies showed that the existing exact approaches are efficient only for small and medium-size instances, not for large-scale instances such as the New York City instance proposed in [@Freire2015]. To overcome this issue, [@Ljubic2017] proposed a branch-and-cut method that combines two types of cutting planes, referred to as outer-approximation (OA) cuts and submodular cuts. The first type relies on the outer-approximation decomposition method, exploiting the fact that the objective function of the continuous relaxation of the problem is concave and differentiable, while the second type is based on the submodularity and separability properties of the objective function. Their branch-and-cut method is an iterative procedure where cuts are generated for every demand point and an LP relaxation is solved at each iteration. [@Tien2020] also proposed a multicut outer-approximation algorithm, but it is based on a cutting-plane approach instead of a branch-and-cut approach, and it generates cuts for groups of demand points instead of a cut for every demand point. They compared their implementation with the state-of-the-art one proposed by [@Ljubic2017] and showed that their approach is more robust and efficient, especially for large instances. Moreover, this approach is currently the state of the art for the competitive facility location problem under the MNL model.
12
+
13
+ It is important to note that the MNL model exhibits the independence of irrelevant alternatives (IIA) property, which means that the ratio of the choice probabilities of two facilities is unchanged no matter what other facilities are available or what attributes those other facilities have. Therefore, several more flexible models have been proposed that relax the IIA property. One of the most preferable models in the literature is the mixed MNL (MMNL), which is fully flexible in the sense that it can approximate any RUM model ([@McFadden2000]). This model has been proposed and considered in some studies related to the facility location problem, such as [@Haase2009] and [@Haase2013]. However, the MILP approaches proposed in these studies to solve the MMNL model have been outperformed by the branch-and-cut procedure of [@Ljubic2017], and the MMNL model is typically costly to estimate because it requires generating many samples and averaging over them.
14
+
15
+ An alternative model that relaxes the IIA property of the MNL model is the nested logit model ([@Ben-Akiva1973]). This model allows us to capture the correlation between facilities by partitioning the set of facilities into subsets called nests, which are assumed to be disjoint. Some studies have considered this model, but in different contexts. For example, [@Davis2014] and [@Tien2020] proposed approaches to solve the assortment optimization problem under the nested logit model, and [@Gallego2014] and [@Rayfield2015] did so for the pricing problem. These studies show that the nested logit is a flexible, general choice model with a simple structure. However, to the best of our knowledge, there is almost no study of the nested logit model in the context of the facility location problem.
16
+
17
+ Accordingly, most of the relevant studies focus only on the MNL model, and a practically efficient method that is general enough to deal with different types of choice models, e.g., the MNL, the MMNL and the nested logit models, is lacking. Thus, in this work, we propose a new approach to solve the competitive facility location problem where the demand of customers can be predicted by various choice models. This approach combines two steps. The first, which we call Binary Trust Region (BiTR) ([@Nocedal2006]), is an iterative procedure that approximates the objective function by a linear or quadratic one using a Taylor expansion and the Hessian matrix. We use this approximation to perform a search over a local region around the current solution to find a better one. This kind of approach was used in [@Tien2020] to solve assortment optimization under various choice models and was shown to be very effective and useful. The second step further improves the solution returned by BiTR by performing a greedy local search. We test our algorithm on three sets of instances used in [@Ljubic2017] and [@Tien2020] under the MMNL and the nested logit models and compare the results with two methods proposed in [@Tien2020] (i.e., outer-approximation and multicut outer-approximation) and the branch-and-cut algorithm proposed in [@Ljubic2017].
18
+
19
+ The rest of the paper is structured as follows.
20
+
21
+ In this section, we present some basic concepts of discrete choice models with random utilities (i.e., the MNL, the MMNL, and the nested logit models) and their application to the maximum capture facility location problem.
22
+
23
+ In the context of discrete choice models, each decision maker $i$ is assumed to associate a utility $u_{ij}$ with each alternative $j$ in the choice set $S_i$. This utility is typically a sum of two parts: $u_{ij} = v_{ij}+\epsilon_{ij}$, where $v_{ij}$ is a deterministic part that can be estimated from observed attributes of alternative $j$, and $\epsilon_{ij}$ is a random part that is unknown to the analyst. Different assumptions on the random terms lead to different types of discrete choice models. For the deterministic part, a linear-in-parameters specification is generally used, i.e., $v_{ij} = \beta^{T}\alpha_{ij}$, where $T$ denotes the transpose, $\beta$ is a vector of parameters to be estimated, and $\alpha_{ij}$ is the vector of attributes of alternative $j$ as observed by decision maker $i$.
24
+
25
+ As mentioned, the RUM framework ([@Ben-Akiva1973], [@Ben-Akiva1986]) is widely used in this context. This framework assumes that the decision maker selects the alternative that maximizes their utility, so the probability that individual $i$ chooses alternative $j$ from the choice set $S_i$ can be computed as $$\begin{align}
26
+ P(j|S_{i}) = P(u_{ij} \geq u_{ij^{'}}, \forall j^{'}\in S_{i}).
27
+ \end{align}$$
28
+
29
+ The MNL model is one of the most popular models used to predict customer behavior because of its simple structure $$\begin{align}
30
+ P(j|S_{i}) = \frac{e^{v_{ij}}}{\sum_{k\in S_{i}}e^{v_{ik}}}.
31
+ \end{align}$$ The MNL model is based on the assumption that the random terms $\epsilon_{ij}$ are independent and identically distributed according to the standard Gumbel distribution. As a consequence, the model exhibits the IIA property: the relative choice probability of one alternative is not affected by the attributes or availability of the others. However, in some situations alternatives share unobserved attributes (i.e., the random terms are correlated), the IIA property does not hold, and the model may not work effectively.
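+
+ For concreteness, a minimal NumPy sketch of these MNL probabilities (the max-shift is a standard numerical safeguard, not part of the model):
+
+ ```python
+ import numpy as np
+
+ def mnl_probabilities(v):
+     """MNL choice probabilities from deterministic utilities v_ij
+     over one decision maker's choice set."""
+     v = np.asarray(v, dtype=float)
+     e = np.exp(v - v.max())  # shift utilities for numerical stability
+     return e / e.sum()
+
+ # Example: three alternatives with utilities 1.0, 0.5, and -0.2.
+ print(mnl_probabilities([1.0, 0.5, -0.2]))
+ ```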
32
+
33
+ Conversely, several models relax the IIA property, and the MMNL model is one of them. Any discrete choice model derived from the RUM framework can be approximated by the MMNL model ([@McFadden2000]), so it is often used in practice. The MMNL model is defined as the MNL model with random parameters $\beta$ as follows
34
+
35
+ $$\begin{equation}
36
+ P(j|S_{i}) = \mathbb{E}_{\beta}\left [\frac{e^{\beta^{T}\alpha_{ij}}}{\sum_{k\in S_{i}}e^{\beta^{T}\alpha_{ik}}}\right] = \int \frac{e^{\beta^{T}\alpha_{ij}}}{\sum_{k\in S_{i}}e^{\beta^{T}\alpha_{ik}}}f(\beta)d\beta, \nonumber
37
+ \end{equation}$$
38
+
39
+ where $f(\beta)$ is the density function of $\beta$. The choice probability is obtained by taking the expectation over the random coefficients, but this expectation is hard to compute exactly, so a Monte Carlo method can be used to approximate it: if $\beta_{1},\ldots,\beta_{K}$ are $K$ realizations sampled from the distribution of $\beta$, then the choice probabilities can be approximated as
40
+
41
+ $$\begin{align}
42
+ P(j|S_{i}) \approx \hat{P}_{K}(j|S_{i}) = \frac{1}{K}\sum_{k=1}^{K}\frac{e^{\beta^{T}_{k}\alpha_{ij}}}{\sum_{j^{'}\in S_{i}}e^{\beta^{T}_{k}\alpha_{ij^{'}}}}
43
+ \end{align}$$
44
+
45
+ Accordingly, this model is fully flexible, but it requires simulation to generate realizations of the coefficients, which makes evaluating the choice probabilities costly. Moreover, using the MMNL model makes the optimization problem more complicated, so it is not a preferred choice for the facility location problem, especially when the data are large and complex.
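+
+ As an illustration of this simulation cost, here is a minimal sketch of the Monte Carlo approximation, assuming (purely for the example) that the coefficients follow a Gaussian mixing distribution:
+
+ ```python
+ import numpy as np
+
+ def mmnl_probabilities(alpha, beta_mean, beta_cov, n_draws=1000, seed=0):
+     """Monte Carlo MMNL choice probabilities for one choice set.
+
+     alpha: (n_alternatives, n_features) attribute matrix.
+     beta_mean, beta_cov: parameters of the assumed Gaussian mixing
+     distribution of the random coefficients beta.
+     """
+     rng = np.random.default_rng(seed)
+     betas = rng.multivariate_normal(beta_mean, beta_cov, size=n_draws)
+     v = betas @ alpha.T                       # (n_draws, n_alternatives)
+     v -= v.max(axis=1, keepdims=True)         # numerical stability
+     e = np.exp(v)
+     probs = e / e.sum(axis=1, keepdims=True)  # MNL probabilities per draw
+     return probs.mean(axis=0)                 # average over the K draws
+ ```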
46
+
47
+ Besides, several other choice models relax the IIA property of the MNL model by making different assumptions on the random terms, e.g., the nested logit model ([@Ben-Akiva1973]) and network multivariate extreme value (MEV) models ([@Daly2006]). These models allow one to capture and estimate the correlation between alternatives in different ways. For example, in the nested logit model, the set of alternatives is partitioned into disjoint subsets called nests. A more general version, the cross-nested logit model, allows each alternative to belong to more than one nest; as shown in [@Fosgerau2013], the cross-nested logit model can approximate any RUM model.
48
+
49
+ Based on this grouping of alternatives into nests, [@Ben-Akiva2000] introduced a formulation of the cross-nested logit choice probabilities
50
+
51
+ $$\begin{align}
52
+ P(j|S_{i}) = \sum_{l\in L} \frac{\left(\sum_{k\in S_{i}} a_{kl}e^{\mu_{l}v_{ik}}\right)^{1/\mu_{l}}}{\sum_{l^{'}\in L}\left(\sum_{k\in S_{i}} a_{kl^{'}}e^{\mu_{l^{'}}v_{ik}} \right)^{1/\mu_{l^{'}}}} \frac{a_{jl}e^{\mu_{l}v_{ij}}}{\sum_{k\in S_{i}} a_{kl}e^{\mu_{l}v_{ik}}}, \nonumber
53
+ \end{align}$$
54
+
55
+ where $L$ is the set of nests and $a_{kl}$ and $\mu_{l}$, $\forall k \in S_{i}$, $l\in L$, are the parameters of the cross-nested logit model. These parameters satisfy $\mu_{l}>0, \forall l \in L$, with $a_{kl}$ positive if alternative $k$ belongs to nest $l$ and $a_{kl}=0$ otherwise. In the nested logit model, each alternative $j$ belongs to exactly one nest, denoted $l_{j}$, so the choice probability can be written in the simpler form
56
+
57
+ $$\begin{align}
58
+ P(j|S_{i}) = \frac{e^{\mu_{l_{j}}v_{ij}}\left(\sum_{k\in l_{j} } a_{kl_{j}}e^{\mu_{l_{j}}v_{ik}}\right)^{1/\mu_{l_{j}}-1}}{\sum_{l^{'}\in L}\left(\sum_{k\in l^{'} } a_{kl^{'}}e^{\mu_{l^{'}}v_{ik}}\right)^{1/\mu_{l^{'}}}}.
59
+ \end{align}$$
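+
+ A small sketch of these nested logit probabilities under the disjoint-nest structure, where we simply set $a_{kl}=1$ for every alternative inside its own nest:
+
+ ```python
+ import numpy as np
+
+ def nested_logit_probabilities(v, nests, mu):
+     """Nested logit choice probabilities.
+
+     v: array of deterministic utilities v_ij over the choice set.
+     nests: list of disjoint index lists, one per nest.
+     mu: list of nest scale parameters mu_l > 0.
+     """
+     v = np.asarray(v, dtype=float)
+     probs = np.zeros_like(v)
+     nest_sums = [np.exp(m * v[idx]).sum() for idx, m in zip(nests, mu)]
+     denom = sum(s ** (1.0 / m) for s, m in zip(nest_sums, mu))
+     for idx, m, s in zip(nests, mu, nest_sums):
+         # P(j) = e^{mu_l v_j} * S_l^{1/mu_l - 1} / sum_{l'} S_{l'}^{1/mu_{l'}}
+         probs[idx] = np.exp(m * v[idx]) * s ** (1.0 / m - 1.0) / denom
+     return probs
+
+ # Example: four alternatives split into two nests.
+ print(nested_logit_probabilities([1.0, 0.5, 0.2, -0.1], [[0, 1], [2, 3]], [1.5, 2.0]))
+ ```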
60
+
61
+ In this section, we define the optimization models based on the discrete choice models introduced in the previous section. We consider a situation where a \"newcomer\" company wants to open new facilities in a competitive market, i.e., a market with existing competitor facilities that can serve the customers. The company wants to maximize the expected market share captured by attracting customers to its new facilities. To capture customer demand, we suppose that a customer selects a facility according to the RUM framework: each customer associates a random utility with each facility and chooses the facility that maximizes this utility. Accordingly, the firm selects a set of locations at which to open new facilities so as to maximize the expected number of customers predicted by the selected choice model.
62
+
63
+ In the following, we describe the problem under three different choice models, i.e., the MNL, the MMNL, and the nested logit model. We consider a market with a set of locations $M=\{1,...,m\}$ and denote by $Y\subset M$ the set of locations occupied by the competitors' facilities. Let $I$ be the set of zones where customers are located and let $q_{i}$ be the number of customers in zone $i\in I$. Let $X\subset M$ be the set of locations selected for opening new facilities. The expected number of customers captured by the facilities in $X$ is denoted by $R(X)$ and can be computed as
64
+
65
+ $$\begin{align}
66
+ R(X) = \sum_{i\in I} q_{i}\sum_{j\in X} P(i,j|X,Y), \nonumber
67
+ \end{align}$$ where $P(i,j|X,Y)$ is the probability that a customer located in zone $i$ selects facility $j\in X$. As mentioned above, under the RUM framework this probability can be computed as
68
+
69
+ $$\begin{align}
70
+ P(i,j|X,Y) = P(u_{ij} \geq u_{ij^{'}}, \forall j^{'} \in X \cup Y).\nonumber
71
+ \end{align}$$
72
+
73
+ If the MNL model is used to predict the choice probabilities, then the competitive facility location problem under the MNL model is to find a subset of locations $X\subset M$ for the new facilities that maximizes the expected number of captured customers
74
+
75
+ $$\begin{align}
76
+ \max_{X\subset M} R(X) = \sum_{i\in I} q_{i}\frac{\sum_{j\in X}e^{\beta^{T}\alpha_{ij}}}{\sum_{j\in X}e^{\beta^{T}\alpha_{ij}}+\sum_{j\in Y}e^{\beta^{T}\alpha_{ij}}},
77
+ \end{align}$$ where $\alpha_{ij}$ is the vector of features/attributes associated with location $j$ and customers in zone $i$, and $\beta$ is the vector of parameter estimates given by the choice model. For notational simplicity, we write $v_{ij} = \beta^{T}\alpha_{ij}$, $V_{ij} = e^{\beta^{T}\alpha_{ij}}$, and $U^{i}_{Y} = \sum_{j\in Y}e^{\beta^{T}\alpha_{ij}}$. Note that the total utility of the competitors' facilities, $U^{i}_{Y}$, is a constant in the problem.
78
+
79
+ We can formulate the problem under the MNL model as the integer nonlinear program
80
+
81
+ $$\begin{align}
82
+ \max_{x_{j}\in\{0,1\},\, j\in M}\; \sum_{i\in I}q_{i}\left(\frac{\sum_{j=1}^{m}x_{j}V_{ij}}{\sum_{j=1}^{m}x_{j}V_{ij} + U^{i}_{Y}} \right)
83
+ \end{align}$$
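+
+ For reference, this objective is cheap to evaluate for a given binary vector; a minimal sketch with precomputed $V_{ij}$ and competitor totals $U^{i}_{Y}$:
+
+ ```python
+ import numpy as np
+
+ def mnl_market_share(x, q, V, U_Y):
+     """Expected captured demand under the MNL model.
+
+     x: binary 0/1 vector over the m candidate locations.
+     q: demand of each customer zone (length n).
+     V: (n, m) matrix with V[i, j] = exp(beta^T alpha_ij).
+     U_Y: length-n vector of the competitors' total utilities per zone.
+     """
+     own = V @ x  # sum_j x_j V_ij for each zone i
+     return float(np.sum(q * own / (own + U_Y)))
+ ```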
84
+
85
+ # Method
86
+
87
+ In this section, we provide experimental results for our binary trust region (BiTR) algorithm on standard data sets from the literature and compare BiTR with the effective approaches discussed above.
2206.04762/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2206.04762/paper_text/intro_method.md ADDED
@@ -0,0 +1,108 @@
1
+ # Introduction
2
+
3
+ ![Overview of our work paradigm: we investigate the existence of double-win lottery tickets drawn from robust pre-training in the scenario of transfer learning, with the full training data and the limited training being available, respectively.](Figs/Tisser.pdf){#fig:teaser width="98%"}
4
+
5
+ <figure id="fig:3d_loss" data-latex-placement="!htb">
6
+ <embed src="Figs/3d_loss.pdf" style="width:100.0%" />
7
+ <figcaption>Loss landscape visualization of subnetworks (<span class="math inline">73.79%</span> sparsity) from diverse adversarial fine-tuning schemes. Each sparse network is first identified by IMP and standard training on the pre-training task with standard <span class="math inline"><em>θ</em><sub>STD</sub></span>, fast adversarial <span class="math inline"><em>θ</em><sub>FAT</sub></span>, or adversarial <span class="math inline"><em>θ</em><sub>AT</sub></span> pre-training, respectively. Then, they are fine-tuned from the corresponding (robust) pre-training on downstream CIFAR-10 with <span class="math inline">100%</span> or <span class="math inline">10%</span> training data. </figcaption>
8
+ </figure>
9
+
10
+ The lottery tickets hypothesis (LTH) [@frankle2018lottery] demonstrates that there exist subnetworks in dense neural networks which can be trained in isolation from the same random initialization and match the performance of the dense counterpart. We call such subnetworks winning tickets. Unlike the conventional pipeline [@han2015deep] of model compression that follows the train-compress-retrain process and aims for efficient inference, the LTH sheds light on the potential for more computational savings by training a small subnetwork from the start, if only we knew which subnetwork to choose. However, finding these intriguing subnetworks is quite costly, since the currently most effective approach, iterative magnitude pruning (IMP) [@frankle2018lottery; @han2015deep], requires multiple rounds of burdensome (re-)training, especially for large models like BERT [@devlin2018bert]. Fortunately, recent studies [@chen2020lottery; @chen2020lottery2] provide a remedy by leveraging the popular paradigm of pre-training and fine-tuning, which first identifies critical subnetworks (a.k.a. pre-trained tickets) from standard pre-training and then transfers them to a range of downstream tasks. The demonstrated *universal transferability* across various datasets and tasks is a positive sign for replacing gigantic pre-trained models with much smaller subnetworks while maintaining the impressive downstream performance, leading to substantial memory/computation reductions. Meanwhile, the extraordinary cost of both pre-training and finding pre-trained tickets can be amortized by reusing them and transferring to diverse downstream tasks.
11
+
12
+ Nevertheless, in practical settings, deployed models usually require strong robustness, which is beyond the scope of standard transfer, e.g., for safety-critical applications like autonomous cars and face recognition. Therefore, a more challenging requirement arises, which demands that the located subnetworks transfer effectively under both standard and adversarial training schemes [@madry2017towards]. This opens a new perspective: investigating the transferability of pre-trained tickets across diverse training regimes, differing from previous works [@chen2020lottery; @chen2020lottery2] on transferring across downstream datasets and tasks. It inspires us to propose a new hypothesis of lottery tickets. Specifically, when an identified sparse subnetwork from pre-training can be independently trained (transferred) on diverse downstream tasks, matching the accuracy and robustness of the full pre-trained model under both standard and adversarial training regimes -- we name it a **Double-Win Lottery Ticket**, illustrated in Figure [1](#fig:teaser){reference-type="ref" reference="fig:teaser"}.
13
+
14
+ Meanwhile, inspired by [@salman2020adversarially], which suggests that robust pre-training shows better transferability for dense models, we examine (1) whether this appealing property still holds under the lens of sparsity; and (2) how robust pre-training benefits our double-win tickets compared to its standard counterpart. To address this curiosity, we comprehensively investigate representative robust pre-training approaches besides standard training, including fast adversarial (FAT) [@wong2020fast] and adversarial (AT) [@madry2017towards] pre-training. Our results reveal the prevailing existence of double-win tickets under different pre-training schemes, and suggest that subnetworks obtained from AT pre-trained models consistently achieve superior generalization and robustness, under both standard and adversarial transfer learning, when the typical full training data is available for downstream tasks.
15
+
16
+ Yet another critical constraint in real-world scenarios is the possible scarcity of training data (e.g., due to the difficulty of data collection and annotation). What makes it worse is that satisfactory adversarial robustness intrinsically needs more training samples [@schmidt2018adversarially]. Our proposed double-win tickets from robust pre-training tackle this issue by leveraging the crafted sparse patterns as an inductive prior, which ($i$) is found to reduce sample complexity [@zhang2021why] and bring data efficiency [@chen2021ultra]; and ($ii$) converges to a flatter loss landscape with improved robust generalization, as advocated by [@wu2020revisiting; @hein2017formal], particularly in data-scarce settings, as shown in Figure [2](#fig:3d_loss){reference-type="ref" reference="fig:3d_loss"}. To support these intuitions, extensive experiments on few-shot (or data-efficient) transferability are conducted with only $10$% or $1$% of the data for adversarial downstream training [@jiang2020robust]. In what follows, we summarize our **contributions**, which bridge LTH and its practical usage in data-limited and security-crucial applications:
17
+
18
+ - We define a more rigorous notion of double-win lottery tickets, which requires the sparse subnetworks found on pre-trained models to have the same transferability as the dense pre-trained ones: in terms of both accuracy and robustness, under both standard and adversarial training regimes, and towards a variety of downstream tasks. We show such tickets widely exist.
19
+
20
+ - Using IMP, we find double-win tickets broadly across diverse downstream datasets and at non-trivial sparsity levels of $79.03\% \sim 89.26\%$ and $83.22\% \sim 96.48\%$, using fast adversarial (FAT) and adversarial (AT) pre-training, respectively. In general, subnetworks located from the AT pre-trained model perform better than those from FAT and standard pre-training.
21
+
22
+ - We further demonstrate the intriguing properties of double-win tickets in data-limited transfer settings (e.g., $10$%, $1$%). In this specific situation, FAT can surprisingly find higher-quality subnetworks at small sparsity levels, while AT overtakes in the larger sparsity range.
23
+
24
+ - We show that adopting standard or adversarial training in the IMP process makes no significant difference to the transferability of the identified subnetworks on downstream tasks.
25
+
26
+ # Method
27
+
28
+ Aligned with previous work on pre-trained tickets [@chen2020lottery2], we consider the official ResNet-50 [@he2016deep] as the unpruned dense model and formulate the output of the network as $f(x; \theta)$, where $x$ is the input image and $\theta\in\mathbb{R}^{d}$ are the network parameters. In the same way, a subnetwork is a network $f(x; m\odot\theta)$[^1] with a binary pruning mask $m\in\{0,1\}^{d}$, where $\odot$ is the element-wise product. In our experiments, we sparsify the main body of the dense network, leaving the task-specific classification head out of the scope of pruning.
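+
+ A minimal PyTorch-style sketch of this masking, assuming the masks are stored in a dict keyed by parameter name:
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def apply_masks(model, masks):
+     """Zero out pruned weights, realizing the subnetwork f(x; m ⊙ θ).
+     Parameters without a mask (e.g., the classification head) stay dense."""
+     for name, param in model.named_parameters():
+         if name in masks:
+             param.mul_(masks[name].to(param.device))
+ ```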
29
+
30
+ The classical AT [@madry2017towards] remains one of the most effective approaches to tackle the vulnerability to small perturbations and build a robust model, in which the standard empirical risk minimization is replaced by a robust optimization, as depicted in [\[eq:minmax\]](#eq:minmax){reference-type="eqref" reference="eq:minmax"}: $$\begin{equation}
31
+ \label{eq:minmax}
32
+ \min_{\theta} \mathbb E_{(x, y) \in \mathcal{D}}
33
+ \max_{\left\|\delta\right\|_p \leq \epsilon} \mathcal{L} \big(f(x + \delta; \theta), y \big)
34
+ \end{equation}$$ where the perturbation is constrained to an $\ell_p$ norm ball of radius $\epsilon$, and the input data $x$ with its associated label $y$ are sampled from the training set $\mathcal{D}$. To solve the inner maximization problem, projected gradient descent (PGD) [@madry2018towards] is frequently adopted and is believed to be the strongest first-order adversary; it works in an iterative fashion as in [\[eq:pgd\]](#eq:pgd){reference-type="eqref" reference="eq:pgd"}: $$\begin{equation}
35
+ \label{eq:pgd}
36
+ \delta^{t+1} = \mathrm{proj}_{\mathcal{P}} \Big ( \delta^t + \alpha \cdot \mathrm{sgn} \big ( \nabla_{ x}\mathcal{L}(f(x+\delta^t; \theta),y) \big ) \Big )
37
+ \end{equation}$$ where $\delta^t$ is the generated perturbation, $t$ denotes the number of iterations, $\alpha$ represents the step size, and $\mathrm{sgn}$ is a function that returns the sign of its input. Besides, [@wong2020fast] proposed a fast adversarial training method and claimed that adversarial training with the Fast Gradient Sign Method (FGSM) [@goodfellow2014explaining], the single-step variant of PGD, can be as effective as PGD-based adversarial training once combined with random initialization. In the following, we refer to the standard empirical risk minimization process as standard training (**ST**) and to robust optimization as adversarial training (**AT**) or fast adversarial training (**FAT**), according to the number of PGD steps. We remark that FAT alone may cause robust catastrophic overfitting [@andriushchenko2020understanding] when the train-time attack strength grows. Thus, an early-stopping policy [@rice2020overfitting], as also suggested by [@andriushchenko2020understanding], is adopted to mitigate such catastrophic overfitting.
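+
+ For illustration, a compact PyTorch sketch of the PGD inner maximization in [\[eq:pgd\]](#eq:pgd){reference-type="eqref" reference="eq:pgd"} for an $\ell_\infty$ ball; the random start follows common practice, and inputs are assumed to live in $[0,1]$:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
+     """l_inf PGD: delta <- proj(delta + alpha * sign(grad L))."""
+     delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
+     for _ in range(steps):
+         loss = F.cross_entropy(model(x + delta), y)
+         grad = torch.autograd.grad(loss, delta)[0]
+         with torch.no_grad():
+             delta += alpha * grad.sign()              # gradient ascent step
+             delta.clamp_(-eps, eps)                   # project onto eps-ball
+             delta.copy_((x + delta).clamp(0, 1) - x)  # keep x + delta valid
+     return (x + delta).detach()
+ ```
+
+ Setting `steps=1` (keeping the random start) recovers an FGSM-style fast adversarial training step in the spirit of [@wong2020fast].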
38
+
39
+ For a dense neural network $f(x; \theta)$, we adopt unstructured iterative magnitude pruning (IMP) [@frankle2018lottery; @han2015deep] to identify the subnetworks $f(x; m\odot\theta)$, which is the standard option for mining lottery tickets [@frankle2018the]. More precisely, starting from the pre-trained weights $\theta_p$ as initialization, we follow the prune-rewind-retrain cycle to locate subnetworks: we prune $p\%$ of the remaining weights with the smallest magnitudes and rewind the weights of the subnetwork to their values in $\theta_p$. We repeat this prune-rewind-retrain process until reaching the desired sparsity.
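+
+ A high-level sketch of this prune-rewind-retrain cycle; `train` is a placeholder for standard or adversarial (re-)training on the pre-training task, and the global magnitude ranking is our simplification:
+
+ ```python
+ import torch
+
+ def imp(model, theta_p, train, rounds=10, p=0.2):
+     """Iterative magnitude pruning with weight rewinding to theta_p."""
+     masks = {n: torch.ones_like(w) for n, w in model.named_parameters()}
+     for _ in range(rounds):
+         train(model, masks)  # (re-)train the current subnetwork
+         # Rank the remaining weights globally; prune the smallest p fraction.
+         scores = torch.cat([(w.abs() * masks[n]).flatten()
+                             for n, w in model.named_parameters()])
+         remaining = scores[scores > 0]
+         k = max(1, int(p * remaining.numel()))
+         threshold = torch.kthvalue(remaining, k).values
+         with torch.no_grad():
+             for n, w in model.named_parameters():
+                 masks[n][w.abs() <= threshold] = 0.0  # prune small weights
+                 w.copy_(theta_p[n] * masks[n])        # rewind the survivors
+     return masks
+ ```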
40
+
41
+ In our experiments, we set $p\%=20\%$ [@frankle2018the; @chen2020lottery2] and consider three initializations: the standard pre-trained[^2] ResNet-50 $\theta_{\mathrm{STD}}$, the PGD-based adversarially pre-trained[^3] ResNet-50 $\theta_{\mathrm{AT}}$, and the fast adversarially pre-trained[^4] ResNet-50 $\theta_{\mathrm{FAT}}$. All models are pre-trained on the classification task of the **ImageNet** source dataset [@krizhevsky2012imagenet]. It is worth mentioning that all pruning is applied to the source dataset (or pre-training task) only, since our main focus is investigating the mask transferability across training schemes of subnetworks obtained from pre-training.
42
+
43
+ After producing subnetworks from the pre-training task on ImageNet via IMP, we perform both standard and adversarial transfer on three downstream datasets: CIFAR-10 [@Krizhevsky09], CIFAR-100 [@Krizhevsky09], and SVHN [@netzer2011reading]. For adversarial training, we train the network against an $\ell_{\infty}$ adversary using $10$-step Projected Gradient Descent (PGD-$10$) with $\epsilon=\frac{8}{255}$ and $\alpha=\frac{2}{255}$. On CIFAR-10/100, we train the network for $100$ epochs with an initial learning rate of $0.1$, decayed by a factor of ten at the $50$th and $75$th epochs. For SVHN, we start from a learning rate of $0.01$ and decay it with a cosine annealing schedule over $80$ epochs. Moreover, an SGD optimizer is adopted with $5\times 10^{-4}$ weight decay and $0.9$ momentum, and we use a batch size of $128$ for all downstream experiments. To evaluate the downstream performance of subnetworks, we report both Standard Testing Accuracy (**SA**) and Robust Testing Accuracy (**RA**), which are computed on the original and adversarially perturbed test images, respectively. During inference, we generate the adversarial test images with a PGD-$20$ attack, keeping the other hyper-parameters the same as in training [@chen2021robust]. More details are in Sec. [8](#sec:more_details){reference-type="ref" reference="sec:more_details"}.
44
+
45
+ Here we introduce formal definitions of our double-win tickets:
46
+
47
+ $\rhd$ *Matching subnetworks* [@chen2020lottery; @chen2020lottery2; @frankle2020linear]. A subnetwork $f(x; m\odot\theta)$ is matching for a training algorithm $\mathcal{A}_t^{\mathcal{T}}$ if its performance under the evaluation metric $\mathcal{\epsilon}^{\mathcal{T}}$ is no lower than that of the pre-trained dense network $f(x; \theta_p)$ trained with the same algorithm $\mathcal{A}_t^{\mathcal{T}}$, namely: $$\begin{equation}
48
+ \mathcal{\epsilon}^{\mathcal{T}} \Big( \mathcal{A}_t^{\mathcal{T}} \big( f(x; m\odot\theta) \big) \Big) \geq \mathcal{\epsilon}^{\mathcal{T}} \Big( \mathcal{A}_t^{\mathcal{T}} \big( f(x; \theta_p) \big) \Big)
49
+ \end{equation}$$ $\rhd$ *Winning Tickets* [@chen2020lottery2; @frankle2020linear]. If a subnetwork $f(x; m\odot\theta)$ is matching with $\theta = \theta_p$ for a training algorithm $\mathcal{A}_t^{\mathcal{T}}$, then it is a winning ticket for $\mathcal{A}_t^{\mathcal{T}}$.
50
+
51
+ $\rhd$ *Double-Win Lottery Tickets*. When a subnetwork $f(x; m\odot\theta)$ is a winning ticket for standard training under the metric **SA** and for adversarial training under both metrics **SA** and **RA**, we call it a double-win lottery ticket, as summarized in Table [\[tab:settings\]](#tab:settings){reference-type="ref" reference="tab:settings"}.
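+
+ Putting these definitions together, the double-win criterion can be expressed as a tiny helper; metric values are accuracies (higher is better), and the optional tolerance mirrors the one-standard-deviation allowance used later:
+
+ ```python
+ def is_double_win(sub, dense, tol=0.0):
+     """sub/dense: dicts of results, e.g. {'st_sa': SA under standard
+     training, 'at_sa': SA under AT, 'at_ra': RA under AT}."""
+     return (sub['st_sa'] >= dense['st_sa'] - tol and
+             sub['at_sa'] >= dense['at_sa'] - tol and
+             sub['at_ra'] >= dense['at_ra'] - tol)
+ ```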
52
+
53
+ <figure id="fig:block1" data-latex-placement="t">
54
+ <p><embed src="Figs/block1_std.pdf" style="width:100.0%" /> <embed src="Figs/block1_adv.pdf" style="width:100.0%" /></p>
55
+ <figcaption>Comparison results of the subnetworks that are fine-tuned on three downstream datasets (i.e., CIFAR-10, CIFAR-100 and SVHN) under both standard and adversarial training regimes. For standard training, we report the standard accuracy; while for adversarial training, both standard and robust accuracy are presented. <span style="color: orange">Orange</span>, <span style="color: green">Green</span> and <span style="color: blue">Blue</span> represent the performance of subnetworks generated from IMP on pre-trained ImageNet classification (<span class="math inline"><em>m</em><sup>ST</sup></span>) with standard re-training and different pre-trained weights (i.e. standard <span class="math inline"><em>θ</em><sub>STD</sub></span>, fast adversarial <span class="math inline"><em>θ</em><sub>FAT</sub></span>, and adversarial <span class="math inline"><em>θ</em><sub>AT</sub></span> pre-training, respectively) while <span style="color: red">Red</span> stands for random pruning with adversarial pre-trained weight. The solid line and shading area are the mean and standard deviation of standard/robust accuracy.</figcaption>
56
+ </figure>
57
+
58
+ []{#sec:4 label="sec:4"}
59
+
60
+ In this section, we evaluate the quality of subnetworks $f(x; m\odot\theta)$ on multiple downstream tasks under both standard and adversarial training regimes. Before that, we extract the desired subnetworks via IMP on the ImageNet classification task. During this process, the pre-trained weights $\theta_p$ are treated as the initialization for rewinding, and standard training (ST) or adversarial training (AT) is adopted for re-training the sparse model on the pre-training task. In the downstream transfer stage, subnetworks start from a mask $m^{\mathcal{P}}_{p}$ and pre-trained weights $\theta_p$, where $p \in \{\mathrm{STD}, \mathrm{FAT}, \mathrm{AT}\}$ stands for {standard, fast adversarial, adversarial} pre-training, and the pruning method $\mathcal{P} \in \{\mathrm{ST}, \mathrm{AT}, \mathrm{RP}, \mathrm{OMP}\}$ represents {IMP with standard (re-)training, IMP with adversarial (re-)training, random pruning, one-shot magnitude pruning} [@han2015deep] on the pre-training task ($\mathrm{RP}$ and $\mathrm{OMP}$ indicate that there is no re-training). In the following, Section [4.1](#sec:4.1){reference-type="ref" reference="sec:4.1"} shows the existence of double-win lottery tickets from diverse (robust) pre-training with impressive transfer performance for both standard and adversarial training; Section [4.2](#sec:4.2){reference-type="ref" reference="sec:4.2"} investigates the effects of standard or adversarial re-training on the quality of the derived double-win tickets. All experiments have three independent replicates with different random seeds, and we report the mean results and standard deviations.
61
+
62
+ To begin with, we validate the existence of double-win lottery tickets drawn from diverse (robust) pre-training on the source ImageNet dataset. We consider the sparsity masks from IMP with standard re-training $m^{\mathrm{ST}}$ and random pruning $m^{\mathrm{RP}}$ on the pre-training task, together with three different pre-trained weights, i.e., standard $\theta_{\mathrm{STD}}$, fast adversarial $\theta_{\mathrm{FAT}}$, and adversarial weights $\theta_{\mathrm{AT}}$. As demonstrated in Figure [3](#fig:block1){reference-type="ref" reference="fig:block1"}, we adopt two downstream fine-tuning recipes, i.e., standard training (reporting SA) and adversarial training (reporting SA/RA), simultaneously. **Note that all presented numbers here are subnetworks' sparsity levels.** Several consistent observations can be drawn:
63
+
64
+ <figure id="fig:train_regime" data-latex-placement="t">
65
+ <embed src="Figs/block2.pdf" style="width:100.0%" />
66
+ <figcaption>Comparison results of the subnetworks that are independently trained on three downstream datasets (i.e., CIFAR-10, CIFAR-100, SVHN) under both standard and adversarial training regimes. For standard training, we report the standard accuracy; while for adversarial training, both standard and robust accuracy are presented. <span style="color: orange">Orange</span>, <span style="color: green">Green</span>, <span style="color: red">red</span> and <span style="color: blue">Blue</span> represent the performance of subnetworks generated by IMP with standard re-training (<span class="math inline"><em>m</em><sub>AT</sub><sup>ST</sup></span>), adversarial re-training (<span class="math inline"><em>m</em><sub>AT</sub><sup>AT</sup></span>) on ImageNet classification task, one shot magnitude pruning (<span class="math inline"><em>m</em><sup>OMP</sup></span>) and random pruning (<span class="math inline"><em>m</em><sup>RP</sup></span>), together with the adversarial pre-training (<span class="math inline"><em>θ</em><sub>AT</sub></span>) as initialization. The solid line and shading area are the mean and standard deviation of standard/robust accuracy.</figcaption>
67
+ </figure>
68
+
69
+ - Double-win lottery tickets generally exist across various pre-training schemes, showing unimpaired performance on diverse downstream tasks for both standard and adversarial transfer. To account for fluctuations, we consider the performance of a subnetwork *matching* when it is within one standard deviation of the unpruned dense network. The **extreme sparsity levels** of subnetworks drawn from {$\theta_{\mathrm{STD}}$, $\theta_{\mathrm{FAT}}$, $\theta_{\mathrm{AT}}$} are {$89.26\%$, $89.26\%$, $91.41\%$}, {$73.79\%$, $79.03\%$, $83.22\%$}, and {$0.00\%$, $79.03\%$, $96.48\%$} with matching or even superior standard and robust performance under both training regimes (standard and adversarial) on CIFAR-10, CIFAR-100, and SVHN, respectively. All these double-win tickets surpass randomly pruned subnetworks by a significant performance margin, demonstrating that the superior performance of double-win tickets stems not only from reduced parameter counts but also from the located sparse structural patterns.
70
+
71
+ - Subnetworks identified from adversarial pre-training consistently outperform those from fast adversarial and standard pre-training across all three downstream classification tasks, in line with the results in [@salman2020adversarially]. Taking the extreme sparsity as an indicator, adversarial pre-training finds double-win lottery tickets up to an extreme sparsity of $83.22\% \sim 96.48\%$, while fast adversarial and standard pre-training reach extreme sparsity levels of $20.00\% \sim 89.26\%$ and $0.00\% \sim 89.26\%$, respectively. This suggests that the adversarially pre-trained model serves as a desirable starting point for locating high-quality double-win tickets that cover both standard and adversarial downstream transferability. Note that here all downstream transfer can access the full training data, i.e., *data-rich* fine-tuning.
72
+
73
+ - Along with the increase of sparsity, we notice that the performance improvements from adversarial pre-training $\theta_{\mathrm{AT}}$ ($i$) remain stable in the standard transfer (the first row in Figure [3](#fig:block1){reference-type="ref" reference="fig:block1"}) even at extreme sparsity like $98.56\%$; ($ii$) first increase then diminish in adversarial transfer after $95.60\%$ sparsity. It suggests that double-win tickets from adversarial pre-training are more sensitive to the aggressive sparsity in the scenario of adversarial transfer learning.
74
+
75
+ - The comparison among different pre-training schemes varies with the training regime of the downstream tasks. Taking the result on CIFAR-100 as an example, the subnetworks drawn from fast adversarial pre-training show superior performance to those from standard training over the sparsity range $0.00\% \sim 98.56\%$ under adversarial training, while for the standard training regime, fast adversarial and standard pre-training locate subnetworks with similar performance across sparsity levels from $0.00\%$ to $95.60\%$. The inferior performance of standard pre-training suggests that vanilla lottery tickets, which focus only on the standard training regime and use standard test accuracy as the sole evaluation metric, are insufficient in practical security-crucial scenarios. We therefore take adversarial transfer into consideration and propose the concept of double-win lottery tickets to improve the original LTH.
76
+
77
+ <figure id="fig:data_efficience" data-latex-placement="t">
78
+ <embed src="Figs/block_data.pdf" style="width:92.0%" />
79
+ <figcaption>Data-efficient transfer results of double-win tickets from adversarial pre-training on three downstream datasets (i.e. CIFAR-10, CIFAR-100, and SVHN) with <span class="math inline">100</span>%, <span class="math inline">10</span>% and <span class="math inline">1</span>% training data. Both standard and robust accuracy are reported. <span style="color: orange">Orange</span>, <span style="color: green">Green</span> and <span style="color: blue">Blue</span> represent the performance of subnetworks located from IMP together with standard training on ImageNet classification (<span class="math inline"><em>m</em><sup>ST</sup></span>) with different pre-trained weights (i.e. standard <span class="math inline"><em>θ</em><sub>STD</sub></span>, fast adversarial <span class="math inline"><em>θ</em><sub>FAT</sub></span>, and adversarial pre-training <span class="math inline"><em>θ</em><sub>AT</sub></span>, respectively). The solid line and shading area are the mean and standard deviation of standard/robust accuracy.</figcaption>
80
+ </figure>
81
+
82
+ During ticket finding on the pre-training task, we can adopt standard re-training or adversarial re-training after each IMP pruning step. Intuitively, adversarial re-training should maintain more information from adversarial pre-training and lead to better transfer performance on downstream tasks [@salman2020adversarially]. However, our experimental results surprisingly challenge this \"common sense\". Specifically, we choose the adversarially pre-trained weights ($\theta_{\mathrm{AT}}$[^5]) as the initialization and compare four types of pruning and re-training methods on the pre-training task, i.e., IMP with standard training ($m^{\mathrm{ST}}_{\mathrm{AT}}$), IMP with adversarial training ($m^{\mathrm{AT}}_{\mathrm{AT}}$), one-shot magnitude pruning (OMP) ($m^{\mathrm{OMP}}$), and random pruning ($m^{\mathrm{RP}}$).
83
+
84
+ As shown in Figure [4](#fig:train_regime){reference-type="ref" reference="fig:train_regime"}, the extreme sparsity of double-win lottery tickets on {CIFAR-10, CIFAR-100} is ($91.41$%, $83.22$%), ($91.41$%, $83.22$%), ($89.26$%, $79.03$%), and ($20$%, $0$%) for ($m^{\mathrm{AT}}_{\mathrm{AT}}, \theta_{\mathrm{AT}}$), ($m^{\mathrm{ST}}_{\mathrm{AT}}, \theta_{\mathrm{AT}}$), ($m^{\mathrm{OMP}}, \theta_{\mathrm{AT}}$), and ($m^{\mathrm{RP}}, \theta_{\mathrm{AT}}$), respectively. IMP with standard and adversarial training show similar performance, and both are better than OMP and random pruning. This suggests that the re-training regime during IMP does not make a significant impact on the downstream transferability of subnetworks. Due to the heavy computational cost of adversarial training and the inferior performance of OMP, we adopt IMP with standard training as our main pruning method and investigate the data-efficiency properties of subnetworks in the following sections.
85
+
86
+ In this section, we further explore the practical benefits of double-win lottery tickets by assessing their data-efficient transferability under limited training data (e.g., $1$% and $10$%). All subnetworks are drawn from IMP with standard re-training on the pre-training task. We consider three different pre-trained weights (i.e., standard $\theta_{\mathrm{STD}}$, fast adversarial $\theta_{\mathrm{FAT}}$, and adversarial pre-training $\theta_{\mathrm{AT}}$). The results are shown in Figure [5](#fig:data_efficience){reference-type="ref" reference="fig:data_efficience"}, from which we find:
87
+
88
+ - $\theta_{\mathrm{FAT}}$ and $\theta_{\mathrm{AT}}$ significantly outperform $\theta_{\mathrm{STD}}$ in both data-rich and data-scarce transfer on all three datasets, evidencing that robust pre-training improves data-efficient transfer. However, on the challenging SVHN downstream dataset with limited training data (i.e., $10\%$ and $1\%$), the performance of $\theta_{\mathrm{FAT}}$ and $\theta_{\mathrm{AT}}$ degrades to $\theta_{\mathrm{STD}}$'s level at large sparsities of $83.22\%$ and $97.19\%$, respectively.
89
+
90
+ - For data-limited transfer, sparse tickets derived from robust pre-training {$\theta_{\mathrm{FAT}}$, $\theta_{\mathrm{AT}}$} surpass their dense counterparts by up to {$0.75\%$, $22.53\%$} SA and {$0.79\%$, $8.97\%$} RA, which indicates that the enhanced data efficiency also comes from appropriate sparse structures. The consistent robustness gains under unseen transfer attacks in Section [9](#sec:more_res){reference-type="ref" reference="sec:more_res"} also exclude the possibility of obfuscated gradients.
91
+
92
+ - In general, when subnetworks are trained with only $10\%$ or $1\%$ of the training samples, those drawn from the fast adversarially pre-trained weights $\theta_{\mathrm{FAT}}$ show superior performance at middle sparsity levels, with performance improvements of up to {$30.97\%$, $2.42\%$, $10.05\%$} SA and {$8.36\%$, $0.70\%$, $18.49\%$} RA compared with $\theta_{\mathrm{AT}}$ on CIFAR-10, CIFAR-100, and SVHN, respectively. But as sparsity increases, the adversarially pre-trained weights $\theta_{\mathrm{AT}}$ overtake $\theta_{\mathrm{FAT}}$ and dominate the larger sparsity range.
93
+
94
+ To understand the counter-intuitive result that subnetworks from weak robust pre-training $\theta_{\mathrm{FAT}}$ perform better than those from strong robust pre-training $\theta_{\mathrm{AT}}$ at middle sparsity levels such as $73.79\%$, particularly for data-limited transfer, we visualize the training trajectories along with loss landscapes using the tools in [@visualloss]. We take robustified subnetworks at $73.79\%$ sparsity on CIFAR-10 as an example. As shown in Figures [6](#fig:loss_land){reference-type="ref" reference="fig:loss_land"} and [9](#fig:loss_land_adv){reference-type="ref" reference="fig:loss_land_adv"}, for the results on the original test data (columns a, b, c), the loss contour of $\theta_{\mathrm{FAT}}$ is smoother/flatter than those of $\theta_{\mathrm{AT}}$ and $\theta_{\mathrm{STD}}$, i.e., the basin around the converged minimum has a larger area at the same loss level, such as the $2.000$ contour in the middle row's plots of Figure [6](#fig:loss_land){reference-type="ref" reference="fig:loss_land"}. A smoother/flatter loss surface is often believed to indicate enhanced standard [@keskar2017large; @he2019asymmetric] and robust generalization [@wu2020revisiting; @hein2017formal]. This offers a possible explanation of $\theta_{\mathrm{FAT}}$'s performance being superior to $\theta_{\mathrm{AT}}$'s by up to $8.98\%$ and $9.62\%$ SA improvements for data-limited transfer with $10\%$ and $1\%$ training samples. Moreover, the loss geometry on attacked test data (Fig. [9](#fig:loss_land_adv){reference-type="ref" reference="fig:loss_land_adv"}) reveals similar conclusions.
95
+
96
+ <figure id="fig:loss_land" data-latex-placement="t">
97
+ <embed src="Figs/losssurface_std.pdf" style="width:85.0%" />
98
+ <figcaption>Visualization of loss contours and training trajectories of subnetworks located by IMP with standard re-training <span class="math inline"><em>m</em><sup>ST</sup></span> at <span class="math inline">73.79%</span> sparsity. Each subnetwork is adversarial trained with <span class="math inline">100%</span>, <span class="math inline">10%</span> or <span class="math inline">1%</span> training data on CIFAR-10. We compare three pre-training (i.e., standard <span class="math inline"><em>θ</em><sub>STD</sub></span>, fast adversarial <span class="math inline"><em>θ</em><sub>FAT</sub></span>, and adversarial pre-training <span class="math inline"><em>θ</em><sub>AT</sub></span>). The original test set is used.</figcaption>
99
+ </figure>
100
+
101
+ In Fig. [8](#fig:relative_similar){reference-type="ref" reference="fig:relative_similar"}, we report the relative similarity (i.e., $\frac{|m_i \cap m_j|}{|m_i \cup m_j|}$) between binary pruning masks $m_i$ and $m_j$, which measures the degree of overlap between the sparse patterns located from different pre-trained models. We observe that subnetworks from different pre-training schemes have distinct sparse patterns. Specifically, the relative similarity is less than $20.00$% when the sparsity of the subnetworks reaches $73.79$%, and the sparser the subnetworks, the larger the differences become.
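+
+ This relative similarity is simply the Jaccard index of the two binary masks; a one-function NumPy sketch:
+
+ ```python
+ import numpy as np
+
+ def relative_similarity(m_i, m_j):
+     """|m_i ∩ m_j| / |m_i ∪ m_j| for flattened binary pruning masks."""
+     m_i, m_j = np.asarray(m_i, dtype=bool), np.asarray(m_j, dtype=bool)
+     return np.logical_and(m_i, m_j).sum() / np.logical_or(m_i, m_j).sum()
+ ```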
102
+
103
+ Meanwhile, we calculate the number of completely pruned (zero) kernels and visualize the kernel-wise heatmap of subnetworks at an extreme sparsity of $97.19$%. As depicted in Figure [7](#fig:heatmap){reference-type="ref" reference="fig:heatmap"}, the subnetworks from the standard pre-trained model have the largest number of zero kernels, which roughly reveals the most clustered sparse patterns, and the subnetworks from robust pre-training are less clustered, especially for $\theta_{\mathrm{FAT}}$. We notice that these zero kernels are mainly distributed in the front/later residual blocks for subnetworks from $\theta_{\mathrm{AT}}$/$\theta_{\mathrm{FAT}}$, respectively, rather than scattering evenly across all blocks. Typically, subnetworks with more zero kernels have a stronger potential for hardware speedup [@elsen2020fast].
104
+
105
+ <figure id="fig:heatmap" data-latex-placement="t">
106
+ <embed src="Figs/heatmap.pdf" style="width:92.0%" />
107
+ <figcaption>Kernel-wise heatmap visualizations of sparse masks drawn from three different pre-training, i.e., <span class="math inline"><em>m</em><sub>STD</sub><sup>ST</sup></span>, <span class="math inline"><em>m</em><sub>FAT</sub><sup>ST</sup></span>, and <span class="math inline"><em>m</em><sub>AT</sub><sup>ST</sup></span> at <span class="math inline">97.19%</span> sparsity. The bright dots (<span style="color: yellow"><span class="math inline">•</span></span>) are the completely pruned (zero) kernels and the dark dots (<span style="color: 202,12,22"><span class="math inline">•</span></span>) stand for the kernels with at least one remaining weight. <span class="math inline">B1 ∼ B4</span> represent the four residual blocks in ResNet-50.</figcaption>
108
+ </figure>
2207.12141/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2207.12141/paper_text/intro_method.md ADDED
@@ -0,0 +1,129 @@
1
+ # Introduction
2
+
3
+ Recent years have witnessed great successes of Reinforcement Learning (RL) in many complex decision-making tasks, such as robotics (Polydoros & Nalpantidis, 2017; Yang et al., 2022) and chess games (Silver et al., 2016; Schrittwieser et al., 2020). Among RL methods, a wide range of works in model-free RL (Schulman et al., 2015; Lillicrap et al., 2016; Haarnoja et al., 2018; Fujimoto et al., 2018; Hu et al., 2021) have shown promising performance. However, model-free methods can be impractical for real-world scenarios (Dulac-Arnold et al., 2021) since massive samples from the real environment are required for policy training, resulting in low sample efficiency.
8
+
9
+ Model-based RL is considered one of the solutions for improving sample efficiency. Most model-based RL algorithms first use supervised learning techniques to learn a dynamics model from the samples obtained in the real environment, and then use this learned dynamics model to generate massive samples for deriving a policy (Luo et al., 2018; Janner et al., 2019). It is therefore crucial to learn a dynamics model that accurately simulates the underlying transition dynamics of the real environment, since the policy is trained on the model-generated samples. If the learned dynamics model has a high prediction error, the model-generated samples will be biased, and the policy induced by these samples will be sub-optimal. To reduce the model prediction error and learn an accurate dynamics model, advanced architectures such as model ensembles (Kurutach et al., 2018; Chua et al., 2018) and multi-step models (Asadi et al., 2019) have been proposed to improve the multi-step prediction accuracy of the learned dynamics model. Besides, the idea of the generative adversarial network (GAN) (Goodfellow et al., 2014) has been used to design the training process of the dynamics model (Shen et al., 2020; Eysenbach et al., 2021) to reduce the distribution mismatch between model-generated samples and real samples. The previous works mentioned above aim to learn a dynamics model that fits all historical policies: when training the dynamics model, they randomly select the training data from the real samples obtained by all historical policies in the replay buffer. The learned dynamics model thus needs to adapt to the state-action visitation distribution of all historical policies in order to predict transitions accurately under different policies.
10
+
11
+ However, since we only use the current newest policy to interact with the learned model to generate samples for policy learning during model rollouts, learning a dynamics model that fits under (highly likely sub-optimal) historical policies may be unnecessary. Due to the state-action visitation distribution shift during policy updating, the state-action pairs visited by historical policies may not appear in the state-action visitation distribution of the current policy, and vice versa. Thus, learning these samples may not benefit model rollouts. Besides, in many complex tasks, it is hard to predict all samples from all historical policies due to limited model capacity (Abbas et al., 2020), and as shown later in our paper, trying to learn every sample from historical policies can even hurt the accuracy when predicting the transitions induced by the current policy. Therefore, there is an objective mismatch between model learning and model rollouts: model learning tries to fit samples from the state-action visitation distribution of all historical policies, whereas model rollouts require accurate prediction of the transitions induced by the current policy.
16
+
17
+ In this paper, we investigate how to learn an accurate dynamics model for model rollouts based on existing samples. (a) To begin with, we confirm through experiments that although the dynamics model learned by previous methods has a low overall prediction error on all transitions obtained by historical policies, its prediction error for the current newest policy can still be very high. This leads to inaccurate model-generated samples, which can hurt the sample efficiency and asymptotic performance of the policy. (b) We then derive an upper bound on the expected performance gap between model rollouts and real environment rollouts. Based on this upper bound, we analyze how the distribution over historical policies affects model learning and model rollouts. The theoretical result suggests that, to ensure model prediction accuracy during model rollouts, the historical policy distribution used for model learning should lean towards policies that are closer to the current policy rather than being uniform over all historical policies. (c) Motivated by this insight, we propose a novel dynamics model learning method named Policy-adapted Dynamics Model Learning (PDML). Instead of learning a dynamics model that fits a uniform mixture of all historical policies, PDML adjusts the historical policy distribution by reducing the total variation distance between the historical policy mixture and the current policy, and then learns a policy-adapted dynamics model according to this adjusted historical policy distribution. (d) We conduct systematic and extensive experiments on a range of continuous control benchmark MuJoCo environments (Todorov et al., 2012). Experimental results show that PDML significantly improves the sample efficiency and asymptotic performance of state-of-the-art model-based RL methods.
18
+
19
+ Summary of contributions: (1) Through detailed experimental results, we establish that learning a dynamics model that fits a uniform mixture of all historical policies may not be accurate enough for model rollouts. (2) We propose an upper bound on the expected performance gap between model rollouts and real environment rollouts, and theoretically analyze how the distribution over historical policies affects model learning and model rollouts. (3) We propose *Policy-adapted Dynamics Model Learning (PDML)*, which dynamically adjusts the distribution over the historical policy sequence and allows the learned model to continuously adapt to the evolving policy. (4) Experimental results on a range of MuJoCo environments demonstrate that PDML achieves significant improvements in sample efficiency and higher asymptotic performance when combined with state-of-the-art model-based RL methods.
22
+
23
+ # Method
24
+
25
+ **Reinforcement learning.** Consider a Markov Decision Process (MDP) defined by the tuple $(\mathcal{S}, \mathcal{A}, T, r, \gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, and $T(s'|s,a)$ is the transition dynamics of the real world. The reward function is denoted by $r(s,a)$ and $\gamma$ is the discount factor. Reinforcement learning aims to find an optimal policy $\pi$ that maximizes the expected sum of discounted rewards
26
+
27
+ $$\pi = \operatorname*{argmax}_{\pi} \mathbb{E}_{\substack{s_t \sim T(\cdot \mid s_{t-1}, a_{t-1}) \\ a_t \sim \pi(a \mid s_t)}} \left[ \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \right]. \quad (1)$$
28
+
29
+ In model-based RL, the transition dynamics $T$ of the real world is unknown, and we aim to construct a model $\hat{T}(s'|s,a)$ of the transition dynamics and use it to improve the policy. In this paper, we concentrate on Dyna-style (Sutton, 1990) model-based RL, which uses the learned dynamics model to generate samples and train the policy.
30
+
31
+ **Policy mixture.** During policy learning, we collect the historical policies at iteration step $k$ into a historical policy sequence $\Pi^k = \{\pi_1, \pi_2, ..., \pi_k\}$. For each policy in the sequence, we denote its state-action visitation distribution by $\rho^{\pi_i}(s,a)$, and the mixture distribution over the policy sequence by $\boldsymbol{w}^k = [w_1^k, \ldots, w_k^k]$. Then the state-action visitation distribution of the policy mixture $\pi_{\text{mix},k} = (\Pi^k, \boldsymbol{w}^k)$ is $\rho^{\pi_{\text{mix},k}}(s,a) = \sum_{i=1}^k w_i^k \rho^{\pi_i}(s,a)$ (Hazan et al., 2019; Zhang et al., 2021).
32
+
33
+ Learning a dynamics model is the most crucial part of model-based RL, since the ground-truth transition dynamics is unknown and the policy must be updated based on the samples generated by the learned dynamics model. Previous works learn the dynamics model by randomly selecting training data from the samples obtained by the historical policy sequence $\Pi^k$, which means the policy mixture distribution is uniform: $w_i^k = \frac{1}{k}$. The dynamics model is then trained under the following state-action visitation distribution
34
+
35
+ $$\rho^{\pi_{\text{mix},k}}(s,a) = \sum_{i=1}^{k} \frac{1}{k} \rho^{\pi_i}(s,a). \tag{2}$$
36
+
37
+ ![](_page_2_Figure_1.jpeg)
38
+
39
+ Figure 1: (a) and (b): visualization of the state-action visitation distribution of different historical policies and the current policy using t-SNE. Env step 130k and env step 100k are the current policy. More details are shown in Appendix F.3. (c) and (d): the overall error curves and current error curves of MBPO on HalfCheetah and Hopper, respectively.
40
+
41
+ This model tries to fit all the samples obtained by sampling from the state-action visitation distributions of all policies in the historical policy sequence, so the learned dynamics model is (hopefully) able to predict the transition for any state-action input.
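+
+ In implementation terms, Eq. (2) simply corresponds to drawing model-training batches uniformly from the whole replay buffer, as in the sketch below (`buffer` is assumed to be a flat list of transitions pooled over all historical policies):
+
+ ```python
+ import numpy as np
+
+ def sample_uniform_mixture(buffer, batch_size, seed=0):
+     """Uniform sampling over all historical data realizes w_i^k = 1/k."""
+     rng = np.random.default_rng(seed)
+     idx = rng.integers(len(buffer), size=batch_size)
+     return [buffer[i] for i in idx]
+ ```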
42
+
43
+ However, as shown in Figures 1(a) and 1(b), since the policy is constantly evolving, the state-action visitation distributions of historical policies may have a huge shift from that of the current policy; there is little overlap between the state-action visitation distributions of policies at different environment steps. The state-action pairs visited by historical policies may not appear in the state-action visitation distribution of the current policy. During model rollouts, we only use the current policy to interact with the learned dynamics model to generate samples, so learning these samples may not benefit model rollouts. When the model capacity is not large enough, learning them may even be detrimental to learning the samples collected by the current policy.
44
+
45
+ We conduct an experiment using a state-of-the-art model-based RL method called MBPO (Janner et al., 2019) on four MuJoCo (Todorov et al., 2012) environments: HalfCheetah, Hopper, Walker2d, and Ant. MBPO first trains a model based on the real samples and then uses the model to roll out multiple samples for policy learning. The architecture of the dynamics model is a 4-layer neural network with a hidden size of 200, a very common architecture in many recent model-based methods (Yao et al., 2021; Froehlich et al., 2022; Li et al., 2022). We present the overall error curves and the current error curves over the learning steps on HalfCheetah and Hopper in Figures 1(c) and 1(d). Here, the overall error means the model prediction error for all historical policies during training. It is evaluated on an evaluation dataset containing $1000 \times N$ samples from the real environment, where $N$ is the number of historical policies in the historical policy sequence. The current error is the model prediction error for the current policy, evaluated as the L2 error on 1000 samples obtained by the current policy from the real environment. Error curves for more environments can be found in Appendix F.1.
48
+
49
+ From Figures 1(c) and 1(d), we observe that there is a gap between the overall error and the current error. This means that although the agent can learn a dynamics model which is good enough for all samples obtained by historical policies, this comes at the expense of prediction accuracy for the samples induced by the current policy. Since we only use the current policy during model rollouts, this leads to inaccurate model-generated samples and misleading policy learning. To demonstrate that this error gap is not caused by the model not having converged on the recent data, we conduct another experiment: we checkpoint the replay buffer and model at multiple points during training, train the dynamics model at these checkpoints for a long time until convergence, and test the prediction error on newly generated data. We find that even when the dynamics model has converged on all data, the prediction error on newly generated data does not decrease noticeably. Experimental results and more details can be found in Appendix F.2.
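+
+ A sketch of the two error measures, assuming a `model.predict(states, actions)` interface returning next-state predictions (the interface name is ours, not MBPO's):
+
+ ```python
+ import numpy as np
+
+ def l2_prediction_error(model, states, actions, next_states):
+     """Mean L2 error of the learned dynamics on real transitions."""
+     pred = model.predict(states, actions)  # assumed model interface
+     return float(np.linalg.norm(pred - next_states, axis=-1).mean())
+
+ # Overall error: evaluate on 1000 * N samples pooled over all N historical
+ # policies. Current error: evaluate on 1000 samples from the current policy.
+ ```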
50
+
51
+ Therefore, learning a dynamics model that adapts to the state-action visitation distributions of all historical policies (in other words, using a uniform historical policy mixture distribution for model learning) is not the most efficient approach for model-based RL, especially for task-specific problems. In the next section, we analyze how the policy mixture distribution affects the performance of model-based RL.
52
+
53
+ In this section, we provide a theoretical analysis of how the policy mixture distribution affects the performance of model-based RL. First, we derive a theorem that upper-bounds the performance gap between real environment rollouts and model rollouts under any current policy $\pi$.
54
+
55
+ **Theorem 3.1.** Given the historical policy mixture $\pi_{mix,k} = (\Pi^k, \mathbf{w}^k)$ at iteration step $k$, denote by $\xi_{\rho_i} = D_{TV}(\rho_T^{\pi}(s,a)||\rho_T^{\pi_i}(s,a))$ the state-action visitation distribution shift and by $\xi_{\pi_i} = \mathbb{E}_{s \sim v_{\hat{T}}^{\pi_{mix}}} [D_{TV}(\pi(a|s)||\pi_i(a|s))]$ the policy distribution shift between the historical policy $\pi_i$ and the current policy $\pi$, where $v_{\hat{T}}^{\pi_{mix}}$ is the state visitation distribution of the policy mixture under the learned dynamics model $\hat{T}$. Let $r_{max}$ be the maximum reward the policy can get from the real environment, $\gamma$ the discount factor, and $\operatorname{Vol}(\mathcal{S})$ the volume of the state space. Then the performance gap between the real environment rollout $J(\pi,T)$ and the model rollout $J(\pi,\hat{T})$ can be bounded as:
58
+
59
+ $$\begin{split} J(\pi,T) - J(\pi,\hat{T}) \leq & 2\gamma r_{max} \mathbb{E}_{(s,a) \sim \rho_T^{\pi}} [D_{TV}(T(s'|s,a)||\hat{T}(s'|s,a))] \\ & + r_{max} \sum_{i=1}^k w_i^k (\gamma \text{Vol}(\mathcal{S}) \xi_{\rho_i} + 2\xi_{\pi_i}) \\ & + 2r_{max} D_{TV}(\rho_T^{\pi_{mix}}(s,a)||\rho_T^{\pi}(s,a)) \end{split} \tag{3}$$
60
+
61
+ (1) The first term is about **model prediction error**. This term suggests that the model needs to adapt to the state-action visitation distribution of the current policy to reduce the model prediction error, since this term is the expectation of the prediction error of the learned dynamics model $\hat{T}$ under the current policy state-action visitation distribution $\rho_T^{\pi}$. (2) The second term shows the effect of the policy mixture distribution on model rollout. This term contains two distribution shifts: (2a) the state-action visitation distribution shift $\xi_{\rho_i}$ and (2b) the policy distribution shift $\xi_{\pi_i}$ between the historical policy and the current policy. It should be noted that $\xi_{\rho_i}$ is induced by $\xi_{\pi_i}$, so it is reasonable to believe that a historical policy with a larger $\xi_{\pi_i}$ will have a larger $\xi_{\rho_i}$. Both $\xi_{\rho_i}$ and $\xi_{\pi_i}$ are fixed, since the historical policies and the current policy are immutable during model learning and model rollout. Therefore, to reduce this term, we can only adjust the policy mixture distribution $\boldsymbol{w}^k$. Since the distribution shift varies across historical policies, the uniform distribution $w_i^k = \frac{1}{k}$ is clearly not the best choice. (3) The last term is related to the model sample buffer, which is used for policy learning. To maximize sample utilization, the model-generated samples obtained by the historical policies are kept in the model sample buffer until they are replaced by new samples generated by the current policy. Therefore, the distribution of simulated samples in the model buffer is not exactly the simulated sample distribution of the current policy, but is mixed with the simulated sample distributions of the historical policies. This makes it necessary to adjust the sample distribution in the model sample buffer to bring it close to the simulated sample distribution of the current policy during policy learning. This has been studied in many model-based and model-free methods (Schaul et al., 2016; Liu et al., 2021; Huang et al., 2021; Mu et al., 2021) and is out of the scope of this paper; we focus on reducing the first two terms, which are related to model learning.
62
+
63
+ The first two terms on the right-hand side of Equation (3) provide useful insights on model learning. The first term points out the goal of model learning: to make accurate predictions for the current policy. The second term further demonstrates that to achieve this goal, we should adjust the policy mixture distribution to reduce the distribution shift between the historical policy mixture and the current policy. According to the second term, we have the following proposition.
70
+
71
+ **Proposition 3.2.** The performance gap can be reduced if the weight $w_i^k$ of each policy $\pi_i$ in the historical policy sequence $\Pi^k$ is negatively related to the state-action visitation distribution shift $\xi_{\rho_i}$ and the policy distribution shift $\xi_{\pi_i}$ between the historical policy $\pi_i$ and the current policy $\pi$, instead of the average weight $w_i^k = \frac{1}{k}$.
72
+
73
+ The proof is in Appendix E. Proposition 3.2 illustrates how we should adjust the policy distribution to help the learned dynamics model adapt to the current policy. This naturally motivates our method, which is described in the next section.
74
+
75
+ In this section, we introduce our model learning method called *Policy-adapted Dynamics Model Learning* (PDML). PDML is designed to reduce the model prediction error during model rollouts, and it contains two parts. The first part is adjusting the policy mixture distribution into a non-uniform distribution, and the second part is learning the dynamics model based on this non-uniform distribution. The pseudo-code is in Algorithm 1.
76
+
77
+ Algorithm 1 Policy-adapted Dynamics Model Learning (PDML)
78
+
79
+ **Require:** current policy proportion hyperparameter $\alpha$, interaction epochs $I$
80
+
81
+ - 1: Initialize historical policy sequence $k \leftarrow 0, \Pi^k \leftarrow \emptyset$
82
+ - 2: **for** *I* epochs **do**
83
+ - 3: Interact with the environment using current policy $\pi_c$ , add samples into real sample buffer $\mathbb{D}_e$
84
+ - 4: Add current policy $\pi_c$ into historical policy sequence: $\pi_k \leftarrow \pi_c, \Pi^k \leftarrow \{\Pi^{k-1}, \pi_k\}$
85
+ - 5: Adjust the historical policy mixture distribution $w^k = [w_1^k, \dots, w_k^k]$ via Equation (4) and (5)
86
+ - 6: Normalize $\boldsymbol{w}^k \leftarrow \boldsymbol{w}^k / \|\boldsymbol{w}^k\|_1$ so that the weights sum to 1
87
+ - 7: Sample a training data batch of $(s_n, a_n, r_n, s_{n+1})$ from $\mathbb{D}_e$ according to $\boldsymbol{w}^k$
88
+ - 8: Train dynamics model $\hat{T}_{\theta}$ via Equation (7), use current policy $\pi_c$ to perform model rollouts
89
+ - 9: $k \leftarrow k+1$
90
+ - 10: **end for**
91
+
92
+ In this section, we introduce a mechanism to adjust the policy mixture distribution. According to our Theorem 3.1, to minimize the performance gap, one may set the weight
93
+
94
+ of the policy with the smallest $\xi_{\rho_i}$ and $\xi_{\pi_i}$ to be 1 and the weights of other policies in the historical policy sequence to be 0. However, this is not the best approach in practice since each policy can only interact with the environment for very few steps in model-based RL. This means each policy can provide very limited samples for model learning. If we only use a small number of samples from just one policy, it is difficult to learn accurate transition dynamics for the current policy.
95
+
96
+ Weight design for historical policies. In order to maximize the use of limited samples to estimate the transition dynamics, and inspired by Proposition 3.2, we design the weight of each policy in the historical policy sequence $\Pi^k = \{\pi_1, \pi_2, ..., \pi_k\}$ except for the current policy $\pi_c$ (note that $\pi_c = \pi_k \in \Pi^k$) as follows:
97
+
98
+ $$w_{i}^{k} = \frac{\xi_{\pi_{i}}^{-1}}{\sum_{n=1}^{k-1} \xi_{\pi_{n}}^{-1}},$$
99
+
100
+ $$\xi_{\pi_{i}} = \mathbb{E}_{s \sim v_{\hat{T}}^{\pi_{\text{mix}}}} \left[ D_{TV}(\pi_{c}(\cdot|s)||\pi_{i}(\cdot|s)) \right], \quad \forall i \in [k-1],$$
101
+ (4)
102
+
103
+ where $\xi_{\pi_i}$ is the policy distribution shift between the historical policy $\pi_i$ and the current policy $\pi_c$; it is also one of the distribution shifts in the second term of Equation (3). We use $[k-1]:=\{1,\ldots,k-1\}$ to denote the integers from 1 to $k-1$. We use only the policy distribution shift $\xi_{\pi_i}$ (and not the state-action visitation distribution shift $\xi_{\rho_i}$) because estimating the state-action visitation distribution shift from limited real samples is difficult, and the estimate may therefore be inaccurate. Besides, as mentioned in the remarks of Theorem 3.1, the state-action visitation distribution is induced by the policy, so it is reasonable to believe that a historical policy with a larger $\xi_{\pi_i}$ will have a larger $\xi_{\rho_i}$.
104
+
105
+ Weight design for the current policy. In model-based RL, the current policy becomes a historical policy after interacting with the environment and is added to the historical policy sequence (see Algorithm 1). The total variation distance between the current policy and itself is 0, so Equation (4) cannot be used to calculate the weight of the current policy. For the weight of the current policy $w_k^k$ , we use the following equation:
106
+
107
+ $$w_k^k = \begin{cases} \alpha \sum_{i=1}^{k-1} w_i^k, & \text{if} \quad \alpha \sum_{i=1}^{k-1} w_i^k > \max_{i \in [k-1]} \{w_i^k\} \\ \max_{i \in [k-1]} \{w_i^k\}, & \text{if} \quad \alpha \sum_{i=1}^{k-1} w_i^k \leq \max_{i \in [k-1]} \{w_i^k\} \end{cases}$$
+ (5)
108
+
109
+ where $\alpha$ is a hyperparameter that controls the proportion of the current policy's weight relative to the total weight over the historical policy sequence. Equation (5) ensures that the weight of the current policy $w_k^k$ is always the largest in the historical policy sequence. Before each model learning iteration, we adjust the policy mixture distribution according to Equation (4) and Equation (5) and normalize the weights $\boldsymbol{w}^k = [w_1^k, ..., w_k^k]$ to make sure they sum to 1. The details are illustrated in Algorithm 1.
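+
+ To make the weight adjustment concrete, the following minimal Python sketch evaluates Equations (4) and (5) and the final normalization from a vector of estimated shifts $\xi_{\pi_1}, \dots, \xi_{\pi_{k-1}}$. The function name and the small epsilon guard are our own illustrative choices, not part of the paper.
+
+ ```python
+ import numpy as np
+
+ def adjust_mixture_weights(xis, alpha, eps=1e-8):
+     """Compute the policy mixture distribution w^k from estimated
+     policy distribution shifts xis = [xi_1, ..., xi_{k-1}] (Eqs. 4-5)."""
+     xis = np.asarray(xis, dtype=float) + eps   # guard against xi = 0
+     inv = 1.0 / xis
+     w_hist = inv / inv.sum()                   # Eq. (4): weights for pi_1..pi_{k-1}
+     # Eq. (5): the current policy's weight is alpha times the total
+     # historical weight, floored at the largest historical weight.
+     w_cur = max(alpha * w_hist.sum(), w_hist.max())
+     w = np.append(w_hist, w_cur)
+     return w / w.sum()                         # normalize so the weights sum to 1
+
+ # Example: three historical policies; the one with the smallest shift receives
+ # the largest historical weight, and the current policy's weight dominates.
+ print(adjust_mixture_weights([0.8, 0.4, 0.1], alpha=0.5))
+ ```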
112
+
113
+ Estimation of the policy distribution shift $\xi_{\pi_i}$, $\forall i \in [k-1]$. Given a state $s_n$, we define the output of policy $\pi_i$ as a multivariate Gaussian distribution $\mathcal{N}(\mu_{\pi_i^n}, \Sigma_{\pi_i^n})$. In order to make the empirical estimation more accurate, we use each historical policy to traverse all $N$ samples in the real sample buffer and output the action distribution corresponding to each state. Then we use Pinsker's inequality between the total variation distance and the KL divergence to estimate $\xi_{\pi_i}$:
114
+
115
+ $$\xi_{\pi_{i}} = \frac{1}{N} \sum_{n=1}^{N} D_{TV}(\pi_{c}(\cdot|s_{n})||\pi_{i}(\cdot|s_{n}))$$
116
+
117
+ $$\leq \frac{1}{2N} \sum_{n=1}^{N} \sqrt{tr(\Sigma_{\pi_{i}^{n}}^{-1}\Sigma_{\pi_{c}^{n}} - I) + (\mu_{\pi_{c}^{n}} - \mu_{\pi_{i}^{n}})^{\mathsf{T}}\Sigma_{\pi_{i}^{n}}^{-1}(\mu_{\pi_{c}^{n}} - \mu_{\pi_{i}^{n}}) - \log \det(\Sigma_{\pi_{i}^{n}}^{-1}\Sigma_{\pi_{c}^{n}})}$$
118
+ (6)
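+
+ As a sanity check of Equation (6), the sketch below evaluates the Pinsker-style bound $D_{TV} \leq \sqrt{D_{KL}/2}$ between two Gaussian policies at a batch of states. For brevity it assumes diagonal covariances, whereas Equation (6) is stated for full covariances; the function name and the synthetic data are our illustrative assumptions.
+
+ ```python
+ import numpy as np
+
+ def tv_upper_bound_diag(mu_c, var_c, mu_i, var_i):
+     """Average over N states of sqrt(KL(pi_c || pi_i) / 2) for
+     diagonal Gaussians (cf. Eq. 6). Inputs: shape (N, action_dim)."""
+     # Closed-form KL between diagonal Gaussians N(mu_c, var_c) and N(mu_i, var_i):
+     kl = 0.5 * np.sum(var_c / var_i - 1.0
+                       + (mu_c - mu_i) ** 2 / var_i
+                       - np.log(var_c / var_i), axis=1)
+     return np.mean(np.sqrt(np.maximum(kl, 0.0) / 2.0))
+
+ rng = np.random.default_rng(0)
+ mu_c = rng.normal(size=(1000, 6)); var_c = np.ones((1000, 6))
+ mu_i = mu_c + 0.1                  # a slightly shifted historical policy
+ print(f"estimated shift xi_i = {tv_upper_bound_diag(mu_c, var_c, mu_i, var_c):.4f}")
+ ```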
119
+
120
+ Novelty of PDML compared to prioritized experience replay in model-free RL. In model-free RL, prioritized experience replay methods only need to consider how to improve the policy based on existing samples. Therefore, it suffices to select the samples that bring the greatest improvement to the policy, and a weight is designed for each individual sample. In model-based RL, the policy is learned from model-generated samples, and the accuracy of these samples determines the sub-optimality of the policy. Thus, in the model-learning part, we focus on model prediction accuracy. Our theoretical analysis shows that, when reweighting samples, we should consider whether the state-action visitation distribution that generated them is close to that of the current policy. Even if a sample could bring a large improvement to the current policy (i.e., its TD value is high), if it does not lie in the state-action visitation distribution of the current policy, it will not be encountered during model rollouts, so learning from it benefits neither model learning nor policy learning. Therefore, we reweight the state-action visitation distributions that generate batches of samples according to $\xi_{\pi_i}$, rather than reweighting single samples as in model-free RL.
121
+
122
+ After adjusting the policy mixture distribution, we learn the dynamics model based on this adjusted distribution. Although our method can be applied to learn any type of dynamics model, here we choose the current state-of-the-art structure, a probabilistic dynamics model ensemble (Chua et al., 2018): $\{\hat{T}_{\theta}^{1},...,\hat{T}_{\theta}^{B}\}$, where $\theta$ denotes the parameters of each dynamics model in the ensemble and $B$ is the ensemble size. Given an $(s_n,a_n)$ pair as input, the output of each network $b$ in the ensemble is a multivariate Gaussian distribution over the next state: $\hat{T}_{\theta}^{b}(s_{n+1}|s_n,a_n) = \mathcal{N}(\mu_{\theta}^{b}(s_n,a_n),\Sigma_{\theta}^{b}(s_n,a_n))$. Before each model learning iteration, we sample the training data batch from the real sample buffer according to the adjusted policy mixture distribution $\boldsymbol{w}^k$, and train the dynamics model via maximum likelihood:
125
+
126
+ $$\mathcal{L}(\theta) = \sum_{n=1}^{N} [\mu_{\theta}^{b}(s_{n}, a_{n}) - s_{n+1}]^{\top} {\Sigma_{\theta}^{b}}^{-1}(s_{n}, a_{n}) [\mu_{\theta}^{b}(s_{n}, a_{n}) - s_{n+1}] + \log \det \Sigma_{\theta}^{b}(s_{n}, a_{n})$$
127
+ (7)
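+
+ For reference, here is a minimal sketch of the per-ensemble-member Gaussian negative log-likelihood of Equation (7), written for a diagonal covariance as is typical for such ensembles; this is our simplified illustration rather than the authors' implementation, and the shapes and seed are arbitrary.
+
+ ```python
+ import numpy as np
+
+ def gaussian_nll(mu, var, next_states):
+     """Eq. (7) for one ensemble member with diagonal covariance:
+     (mu - s')^T Sigma^{-1} (mu - s') + log det Sigma, summed over the batch.
+     mu, var, next_states: arrays of shape (N, state_dim)."""
+     diff = mu - next_states
+     mahalanobis = np.sum(diff ** 2 / var, axis=1)   # quadratic term
+     logdet = np.sum(np.log(var), axis=1)            # log det of diagonal Sigma
+     return np.sum(mahalanobis + logdet)
+
+ rng = np.random.default_rng(1)
+ s_next = rng.normal(size=(256, 17))
+ mu = s_next + 0.05 * rng.normal(size=(256, 17))     # predictions near the targets
+ var = np.full((256, 17), 0.04)
+ print(f"loss = {gaussian_nll(mu, var, s_next):.2f}")
+ ```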
128
+
129
+ During model rollouts, we use the current policy $\pi_c$ as the rollout policy and sample the initial states from the real sample buffer according to the adjusted policy mixture distribution $w^k$ .
2212.08108/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-09-29T00:47:28.449Z" agent="5.0 (X11)" etag="gGE0bPdvdoD0heSMQp5F" version="20.3.7" type="google"><diagram id="oBHpnu2C6FYmAjQlKlxo" name="Page-1">7V1bc5u6Fv41zCSdMSMB4vLo2HHaOWl3p2nn7O6XDDayzSkGF2Mn3r/+SCDMRcJXQdI2bqYBgbSktZY+rYvkKPpg8XwXu8v5x8jDgaIB71nRh4qmQc0yyS9ass1KdOA4Wcks9j32VlHw4P+LWSFgpWvfw6vKi0kUBYm/rBZOojDEk6RS5sZx9FR9bRoFVapLd8YogqLgYeIGmHvtv76XzPOBgdLr77E/m+ekYf5k4e7ezgpWc9eLnkrE9FtFH8RRlGRXi+cBDij7csZkDY0anu56FuMwOabC9Mc/3z+NlvbjhyR4/3G8/Xq/uevZrJmNG6zZkBXNDEiDN56/KWqbP9e0nzdzMtSADre3jPHGx0/FI3I1Y7/T+qulG9JhJ1vGzPzFSRREsaL3Fdph3XF08snq1VryycBYY2RYWXs5DcJfPzyRFCDaR8QjInXVSKiTwbjxbHIiKT0lJiSlaINmUvJGNJm7sTQ6e4bz7l0DFcK1jTwVuD6gAuO4XkKmgJDkNAqT3tRd+AGBvf4iCiPS3gTXqIJJ4K5W1b5ePJv2jE+xbk4YIfg1VaWByCqJpdLJajUM59O3+3uZpFoW2xiOYYPK+FNpQjsDezNQlCk1zXQXS3JRYZ+EoQHgeUKKUM6yIgm1zlAOyXgCUpIdzsUzRGmaQjILNwiiSevylGSI7NNJII1dZyDxecbOBRPgF0DgvZOsWbnFtNPpJY08OjT0y/Wx/bnzpvEnDqXXOsx1sMQiectCh6tPwyibNbgFlOPhbER+VFUVvdC2ohwEQFKcOmIiACbP0jBG/kyrdEZL8DMtnyeLgBRAcukG/iwk1wGe0icbHCf+xA36rDiJqPn4NPcT/JC5dMOn2KVlcbQOPUyDJXQc1ANkwSyosfsRcwiHg2gd+5gaQZ/wE33oB8Eg4wfplD5F9B+rVCrPPkrqzEQ/cOmJmX52AyyHhPLwDhkHfi4VsRDRHY4WOIm35BX21GHhKxaw62nEN1CBnRU+FSEwzWAvzsvRL9NRgcGibyzyNtuRKGJT5IKFp04JVWlcqOqBcHJCffBJ5OEjpFvlXBiFuMZ+VpRrwYTwkQiK14OF73mUzImqYOb3rJOCGN7JAoMGqkoM2rbBiQvqAnHlZfJFpQuiijXprObukl5OA/zcp5FawhsceuxymAZH/ElVflXONjIPe3lQt4F1JcYgAV/yshgHbuJvqqFgEbMYhc9RGuTLJaMb9bkEgKUaTvGxq02uUnVmrZQjufWG7RMbTogBghOu4VS2O7ZcIG7jTdyaZpqWihDYfaBIRvA8iVvgcNulx0DrVv6oMYlA0Y48mWYAWSzv73GwwRRSRT7H4GrzaJBf17mBsFvMs+aKmB66uaLGPXjHjOzrHkztPpASy1MxJTWsqpQIvUv6d9IqLjYVJAC8ZSKVkzdEqqXzKO8gXr01U0WtrcnmAQ5HcTKPZlHoBvcRZUjK1//hJNkyxrrrJKpynXAq3v5N66sov/3Omktvhs+Vu21+9+wnpWrk7nveIrkuKtGbvE6jeLKJuWfkUGPsz+bWEbPjIEwdjT8XiczhsflPERm0XkxkYli3HdHUdlTodIrfDm+uDd3EnQZkaSadIJqwXfmrXpqL93gfMM39kxcD7MahH844/fozDHFLJ+u7VZUmtPPpVnadkACljbZscUdknF24OMN0cRYvy5c0vm+5PxhSGLuTH7NU5L1SdCGeja8gnc+Ee4CwObuAJroWhhuyRCU4kEdMjYtLOmQ6rB+OyXqmN3SonEw8OjF9Bu+vPDz1Qz/xIzoYKmYvE3OznGWaVo3zeq/NJWHWGgaxnLUaBqfyqdtWpsiDJliN2pq4LVjV+m87cS9R/XzSH+IMH+CVhAca9QvT4SEzv9DF49ybhry6hNfHY9JF+ZQ9XEC5tMGOC464BwdyLTtBtgaN2h8AjbpDoNGum6caD42aCBrbM2h4n5Pa6bkpV7gvt0XpzeUuzhmuyrlu0f5sQaEeunaRP5RHCQ+6Q84r82AtbmH8+uXbrVAr7t0xDhRhiuewTxHjlf+vO07bo3JZUgcsHQ1K04snRoh4Se3Vb7almdFXdjuUj5/APaASDDdqLokiI9hZ81pr/mo0na5wO66qLd8o0l7AKKI768BVscOtsi8NgOvLlpCAav3NbqHd42crmj5KP5cr8+mmN6IOcylrYdbNcCAwww2Qh0wqQc7WVhvnbbUp6QSXex6NaL7hsnXIOHYdsl/VOrQ76iE/79EtGB2XSWnZota7s6hbBzbIG87QMQSGs9al4QzztOEbll1qOaMjEWsHEa8GsvjsT5cqUFKAQh0OqUBFAQp9aFCB5hXqoHJIXtaOda9aUJK0aj+O3W3pBea+NNr0Bu/sM1ereSPKxTWgwwIKhVpn/ZbqNkDAp7hG/fuHF3MaT9bD+hp2pDO5m/CXe5N0G6BVzW9pUnxJvbaBLSfSvjcJIZ8cG4zuUkspjSWOCZq4E9pXr0iI4sUYe96fm+jULUd1yp96hMF2oIp4Y8dBKhBlPttKoEAoyqAY9AcNxnjmh4rFGF9sMqb2bdORzlXKVGpIQ235rIgC03MF3SaPm7JTn/2fmdHfPg/7X2+vKAfIi4R8Qsxsa8gq0JC3mzVwLWgBDVL9sviEuzXMzGj6I2ccPO2Vv8iI5J24/bTrORpM1ksa0Cgef6BPCeRtMi+iV6n7nw/392nl6x2dcLzK9ks1D1uwJ6GNkaMBDr2KZqQqc+pcJy6Ou84cpPan+245KZwufsnw3NU8rQXluDmmLXJzbLjbvlKa/Doiy4fOT37kqLrV2vw3O5//7v7537+7+3J7l0IAUVh6gLIGBGta06JAsM4Um37JAVjG2LvKMIFWG75idBj7M4IFj6QfogHkXWFd/evb1wwl1tlpnd8UCspHNJoOe7w8IjQdN5GBFEiQSYQQqoZV7Fnmt7MaEAgtBqs4kdICaPDZJUnbLg6F4PJYW5bir4XjOKWTmlPONiu3HRezbY3bIadB5IgWDA0gNcfvTs6s7Db0yk8t/daJoNb2vZuWpqIjtQVqxLdoSVuetvb77/Hmbvr4+CV8/LT+uXI+9mxh4KDlMFojrpcO0jDRkpKRT4eU0qrgfBGP1WxdKUVkKajujciRm8849gkH6Qp1/G5sIQfZMtr2FmuD29wHNLPaSMPu6mOa0mtNNZznOSNesY9nAoRqd3MdDX4k2yVu3HX1KuMgB8CvaTJJgC5DF+2cIi4QMjnoEm0GhxJgqx/+FVo3th14cX+IPz7cOT+93oskgKrocwCqWjsSIpxQHZ3zQAjVlUGvbZk59iSgoCmtFkCTd1pkH8/kgNAJO1/7nz+84c9x+IP4k8dIF4GP6FQ41FuymaD
2EuhTAArUTaWcf3RsbS+scN7xWbaXXKvJ7AiwLH7f73FWkyyUEYXuutjbPonCVeLu+a7ON6ipQo3FGcXIUPOpXg7mEN9fE6BN8bJ8wHlF+11AB1BzPqxYHcGKU/fnAejWeGkO7rV7YCVa4thNSOU3WDkPVqCTf+nJgW/zcJqV8jIw+WWOncgJE50PJrAjNDE4NIFmt0bKnlzBW0DmNcGJYQHBKV8bqaD0rUkWjy8txWbEyvQiMeVzAOalrZU8AdC+F2TUVaZrgHHe3KBfAmAQcXpK37/FfUEXdHjrRfgNiqebL+S2+CMimeIVf4xFv/0/</diagram></mxfile>
2212.08108/paper_text/intro_method.md ADDED
@@ -0,0 +1,144 @@
1
+ # Introduction
2
+
3
+ Software vulnerabilities cause great harm to people and corporations. Many Internet users have had their personal information breached because of security vulnerabilities, with common reports of breaches exposing millions of records [\[52\]](#page-12-0). The average data breach costs the target company \$4.24 million, according to IBM's 2021 report [\[2\]](#page-11-0).
4
+
5
+ The number of reported vulnerabilities has grown every year from 2016 to 2021, as recorded by the Common Vulnerabilities and Exposures (CVE) program [\[1\]](#page-11-1). Due to its importance, we urgently need to develop effective and automatic vulnerability detection tools.
10
+
11
+ The rapid advance of AI technologies has motivated software companies to invest heavily in deep learning-based vulnerability detection tools [\[39,](#page-11-2) [55\]](#page-12-1). These tools have outperformed traditional static analysis [\[11,](#page-11-3) [20,](#page-11-4) [38\]](#page-11-5). Recently, large language models (LLMs) have reported state-of-the-art results; LineVul [\[24\]](#page-11-6), a recent model based on CodeBERT, reported an F1 score of 91 on a commonly used real-world vulnerability dataset [\[22\]](#page-11-7).
12
+
13
+ However, LLMs require large amounts of training data and computational resources for training and inference (see § [5.4\)](#page-8-0), yet a large volume of high-quality vulnerability detection data is hard to obtain. They can also fail to detect vulnerabilities beyond the training dataset (see § [5.5\)](#page-8-1); for example, the top-performing transformer models LineVul and UniXcoder were not able to detect any of the real-world vulnerabilities in DbgBench [\[9\]](#page-11-8). Furthermore, by relying solely on text tokens, these models may not effectively learn program semantics, such as program values along paths, propagation of taint values, and security-sensitive API calls along control flow paths. The performance of these models can be further improved when such information is considered (see [§5.3\)](#page-7-0).
14
+
15
+ In this paper, we explore the idea of combining dataflow analysis (DFA) algorithms with deep learning to develop small, efficient, yet effective models for vulnerability detection. In prior literature [\[16,](#page-11-9) [53\]](#page-12-2), deep learning integrated with domain-specific knowledge and algorithms has reported improved performance and better generalization to unseen data, while using less data and computational resources.
16
+
17
+ Dataflow Analysis (DFA) computes the data usage patterns and relations in the control flow graph (CFG) of a program and reports a vulnerability based on its root cause, i.e., whether the values and data relations collected from the program indicate the occurrence of the vulnerable conditions. Graph learning (learning based on graph neural networks (GNN)) can aggregate and propagate information in the graph in a similar fashion to DFA. In this paper, we explore the analogy between DFA and the GNN message-passing mechanism and design an embedding technique that encodes dataflow information at each node of the CFG. Specifically, we leverage the efficient bit-vector representation of dataflow facts to encode the definitions and uses of the variables. Graph learning on such an embedding propagates and aggregates dataflow information and thus simulates the dataflow computation as done in DFA. Using this approach, we hope that the learned graph representation can better encode program semantic information, e.g., reaching definitions, which will be very useful for accurate vulnerability detection.
18
+
19
+ <span id="page-1-0"></span>![](_page_1_Figure_2.jpeg)
20
+
21
+ Figure 1: Overview of DeepDFA
22
+
23
+ Based on this rationale, we developed an abstract dataflow embedding that can map variable definitions of individual programs to a common space so that the model can compare and generalize data usage patterns (dataflow) related to vulnerabilities across programs. We selected a graph learning architecture whose aggregate and update functions worked most effectively for the dataflow propagation.
24
+
25
+ Our evaluation shows that DeepDFA is substantially faster than our baseline models in terms of both training and inference time. It only took 9 minutes to train, and inference on a CPU took 5.8 ms/example. This remarkable efficiency permits applications for personalized training and inference in non-GPU environments. It is also efficient in its use of training data, achieving its best F1 score using only 50+ vulnerable examples and several hundred total examples (§ [5.4\)](#page-8-0). This frugality allows applications within a single development team, where it may be impractical to collect thousands of vulnerable examples. Yet, DeepDFA still outperformed all non-transformer baselines (§ [5.3\)](#page-7-0) and retained its performance on unseen projects better than all baseline models (§ [5.5\)](#page-8-1). Additionally, when applied to a real-world benchmark of unseen projects, DbgBench [\[9\]](#page-11-8), DeepDFA detected 8.7 out of 17 bugs (averaged over 3 runs) and correctly reported 3 out of 5 patched programs as non-vulnerable (§ [5.5\)](#page-8-1). In comparison, the highest-performing baselines, LineVul [\[24\]](#page-11-6) and UniXcoder [\[26\]](#page-11-10), did not detect any vulnerabilities. We also show that DeepDFA's learned representation can be used with other models to further improve their performance. By combining UniXcoder with DeepDFA, we surpassed state-of-the-art performance with an F1 score of 96.46, precision of 97.82, and recall of 95.14.
26
+
27
+ In summary, we made the following contributions in this paper:
28
+
29
+ - (1) We designed an abstract dataflow embedding to enable deep learning to generalize semantics/dataflow patterns of vulnerabilities across programs ([§4.1\)](#page-3-0);
30
+ - (2) We applied graph learning on the control flow graph (CFG) of the program and abstract dataflow embedding to simulate reaching definition dataflow analysis ([§4.2\)](#page-4-0);
31
+
32
+ - (3) We implemented DeepDFA and experimentally demonstrated that DeepDFA outperforms baselines in vulnerability detection for effectiveness, efficiency, and generalization over unseen projects ([§5\)](#page-5-0);
33
+ - (4) We provided rationale to help understand why DeepDFA performs well and is efficient ([§3\)](#page-2-0); and
34
+ - (5) We surpassed the state-of-the-art vulnerability detection performance by combining DeepDFA and UniXcoder ([§5\)](#page-5-0).
35
+
36
+ We propose DeepDFA, a deep learning framework guided by dataflow analysis algorithms, shown in Figure [1.](#page-1-0) Given the source code of a potentially vulnerable program (left), we convert it to a CFG and encode the nodes using an abstract dataflow embedding which we designed. The CFG specifies the execution order of statements, and is the data structure on which dataflow analysis operates.
37
+
38
+ In the middle of the figure, we show our approach of computing abstract dataflow embeddings. In dataflow analysis, definitions of variables, e.g., a=3, are program specific. To apply deep learning, we abstract away these concrete definitions from different programs, and hypothesize that the usage patterns of the abstract definitions can be compared and summarized across programs during learning. To construct the abstract definitions, we use the properties of definitions that are important for vulnerability detection, based on domain knowledge from program analysis. Specifically, we consider the data type of the defined variable, and the API calls, constants, and operators used to define the variable. Inspired by the bit-vector representation used in dataflow analysis, we encode the abstract definitions in a compact and very efficient fashion. We provide the detailed design of this embedding in Section [4.1.](#page-3-0)
39
+
40
+ We used a bit-vector style of representing a set of abstract definitions. This numerical representation can be directly used as the initial node representation for graph learning. On the right of the figure, we apply graph learning, which aggregates information from neighboring nodes like the "merge" operation performed in dataflow analysis, and also updates using the information at each node like the "update" operation performed in dataflow analysis. We provide more background on this analogy in Section 3.
41
+
42
+ <span id="page-2-1"></span>![](_page_2_Figure_2.jpeg)
43
+
44
+ Figure 2: Analogy of information propagation in Dataflow Analysis and Graph Learning
47
+
48
+ Finally, we use the learned graph representation to classify whether the function is vulnerable or not. By directly propagating dataflow information through graph learning, we hope to present to the classifier a representation of the program that encodes useful information directly related to vulnerability, achieving efficient and effective vulnerability detection. The advantage of deep learning is that the mapping from the encodings of programs to the decisions is learned from data, whereas in dataflow analysis we need to manually craft rules to map from the dataflow analysis results to vulnerability decisions.
49
+
50
+ In this section, we provide the relevant background on dataflow analysis for vulnerability detection and on graph learning. It explains why our approach is efficient and effective. Then, we compare closely related work that also considers dataflow in deep learning to clarify the novelty of our work.
51
+
52
+ Dataflow analysis (DFA) is a method for computing data usage patterns in a program. In addition to compiler optimization, dataflow analysis is an important method for vulnerability detection. One instance of dataflow analysis, called *reaching definition analysis*, reports which program points a particular variable definition can *reach*. A definition *reaches* a node when there is a path in the CFG that connects the definition and the node, and the variable is not redefined along the path. Reaching definition analysis can detect a null-pointer dereference vulnerability based on its root cause when it identifies that a definition of a NULL pointer reaches a dereference of the pointer. Similarly, it is a causal step in detecting many other vulnerabilities such as buffer overflow, integer overflow, uninitialized variables, double-free, and use-after-free [12].
53
+
54
+ DFA uses two equations to propagate the dataflow information through the neighboring nodes in the CFG, namely the *meet operator* and the *transfer function* [4]. The meet operator aggregates the dataflow sets from a node's neighbors. The transfer function updates the dataflow set using the information available at the node $v$. In reaching definition analysis, the dataflow set is the set of definitions that reach a program point. A simple approach to performing a DFA is the Kildall method [33]. It iteratively propagates the dataflow information to the neighbors of $v$ in the CFG, one step at a time. The algorithm terminates when the dataflow information of all nodes stops changing, a state called a fixpoint. At termination, all nodes will have incorporated the dataflow information from all other relevant nodes. When used for vulnerability detection, this information is compared to a user-specified vulnerability condition to determine whether a vulnerability has occurred in the program.
57
+
58
+ Graph learning starts with an initial node representation, and then it performs a fixed number of iterations of the message-passing algorithm [25] to propagate information through the graph. The initial node representation is generally a fixed-size continuous vector which represents the content of the node. At each iteration, each node aggregates information from its neighbors, and then updates its state to integrate the information. The two steps are done through the *AGGREGATE* and *UPDATE* functions, similar to the two dataflow equations of meet operator and transfer function. These functions can be simple numerical equations or neural networks. After iteration is done, all node representations are combined to produce a graph-level representation, which is passed to a classifier layer to make a prediction.
59
+
60
+ In Figure 2, we visualize the analogy between graph learning and dataflow analysis on a snippet of CFG. In the CFG, each node is a statement, and each edge indicates the order of execution between two statements. In Figure 2a, we show the two dataflow equations [4] that define a reaching definition dataflow analysis.
61
+
62
+ <span id="page-2-3"></span>meet operator:
63
+ $$IN[v] = \bigcup_{u \in pred(v)} OUT[u]$$
64
+ (1)
65
+
66
+ transfer function:
67
+ $$OUT[v] = GEN_v \cup (IN[v] - KILL_v)$$
68
+ (2)
69
+
70
+ where $IN[v]$ and $OUT[v]$ are the sets of dataflow facts located at the beginning and end of a statement, and $GEN_v$ and $KILL_v$ represent the dataflow facts generated (new definitions) and killed (overwritten definitions) at node $v$. Reaching definition is a *may* dataflow problem, and thus the meet operator uses union to merge the dataflow information from a node's predecessors. Meanwhile, reaching definition is a *forward* dataflow problem, and thus we use $IN[v]$, $GEN_v$, and $KILL_v$ to compute the dataflow at the exit of the statement.
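+
+ To ground Equations (1) and (2), here is a self-contained Python sketch (our illustration, not the paper's implementation) that iterates Kildall-style to a fixpoint using bit vectors encoded as Python integer bit masks, on the four-node diamond CFG of Figure 1; the concrete GEN/KILL sets are our reading of that example.
+
+ ```python
+ # Reaching definitions iterated to a fixpoint (Eqs. 1-2).
+ # Bit i of a mask stands for definition d_{i+1}: d1 (str = NULL, at v1),
+ # d2 (str = malloc(...), at v3), and d3 (at v4).
+ preds = {"v1": [], "v2": ["v1"], "v3": ["v1"], "v4": ["v2", "v3"]}
+ GEN = {"v1": 0b001, "v2": 0b000, "v3": 0b010, "v4": 0b100}
+ KILL = {"v1": 0b010, "v2": 0b000, "v3": 0b001, "v4": 0b000}  # d1, d2 both define str
+
+ OUT = {v: 0 for v in preds}
+ changed = True
+ while changed:                            # stop when nothing changes: the fixpoint
+     changed = False
+     for v in preds:
+         IN = 0
+         for u in preds[v]:
+             IN |= OUT[u]                  # meet operator: union over preds (Eq. 1)
+         new = GEN[v] | (IN & ~KILL[v])    # transfer function (Eq. 2)
+         if new != OUT[v]:
+             OUT[v], changed = new, True
+
+ for v in sorted(OUT):
+     print(v, format(OUT[v], "03b"))       # v4 -> 111: d1 (NULL) may reach the deref
+ ```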
71
+
72
+ In Figure 2b, we show an analogous behavior of graph learning.
73
+
74
+ <span id="page-3-1"></span>![](_page_3_Figure_2.jpeg)
75
+
76
+ Figure 3: Abstract dataflow embedding generation
77
+
78
+ <span id="page-3-2"></span>aggregate:
79
+ $$a_v^t = AGGREGATE(\{h_u^{t-1}|u \in pred(v)\})$$
80
+ (3)
81
+
82
+ update:
83
+ $$h_v^t = UPDATE(h_v^{t-1}, a_v^t)$$
84
+ (4)
85
+
86
+ where $a_v^t$ denotes the aggregated information from the neighboring nodes and $h_v^t$ denotes the state of node $v$ after $t$ iterations of message-passing (analogous to $OUT[v]$). We set $t$ as a hyperparameter.
87
+
88
+ Previously, researchers have proposed to integrate dataflow information with deep learning for program analysis tasks. A category of approaches similar to Devign [56] used data dependency graphs as a part of the program representation on which deep learning is performed. However, Devign used word embeddings to encode statements into vector representations based on their unstructured text content. Such an encoding, even propagated through data dependency edges, cannot directly capture the dataflow patterns.
89
+
90
+ PROGRAML [17] has developed graph learning on LLVM IR code and applied it for compiler optimization tasks. It is another work that pointed out the analogy between DFA and graph learning. Their solution is to modify CFGs by creating instruction nodes and data nodes separately. PROGRAML adds control-flow edges between instruction nodes and data-flow edges between the data nodes. However, this work encoded nodes using an embedding which only represents LLVM IR operators and variable types. This approach is very coarse-grained in that many statements can have the same operators and variable types, but they will lead to different dataflow. Therefore, similar to Devign, the propagation of such an encoding even along dataflow edges does not directly capture dataflow patterns.
91
+
92
+ Our abstract dataflow embedding attempts to directly represent the variable definitions that are propagated in DFA, and it is modeled after DFA's bit-vector representation, which allows the network to learn the operations of the dataflow analysis algorithm. We also target a specific problem (reaching definitions, which was not targeted by PROGRAML), for which the results of DFA are directly useful and pertinent to vulnerability detection (e.g., § 4.2).
93
+
94
+ # Method
95
+
96
+ Based on the analogous behaviors of DFA and GNN, we designed a node embedding that can represent the dataflow set at each node. We developed DeepDFA, a deep learning framework which conducts graph learning on the CFG of a program and propagates dataflow information for vulnerability detection.
97
+
98
+ In dataflow analysis, we use a *bit vector* to represent the dataflow set at each node. A bit vector consists of $n$ bits of 0s and 1s; its length is the size of the domain. A bit is set to 1 if its corresponding element is present in the set. In reaching definition analysis, the domain consists of all the definitions in the program, and the bits are set to "1" if the corresponding definitions reach the node. For example, in Figure 1, the program contains three definitions at nodes $v_1$, $v_3$, and $v_4$, so the reaching definition analysis uses the bit vector $[0\ 0\ 0]$ to initialize each node at the beginning of the analysis. This bit vector represents $OUT[v]$ in the dataflow equations (see Section 3.2). It is updated at each step of propagation, and when the analysis terminates, the bit vector of each node represents all possible definitions that can reach *that* node.
99
+
100
+ The bit-vector representation of reaching definition analysis efficiently encodes program semantic features related to vulnerability detection. The definitions of a program can be quickly obtained via lightweight analysis, locally at each statement. However, in graph learning, we cannot directly use the bit vector of definitions as the node embedding. This is because, in dataflow analysis, both the bit vector and its domain of definitions are specific to a program. In other words, different programs have different variable definitions; the bit vectors of each program thus have different lengths, and their elements (the individual definitions) are not comparable either. In graph learning, by contrast, we want to extract dataflow patterns of vulnerabilities from all the programs in the training dataset. Thus, we need a "global" definition set that can be used to specify definitions for different programs, so that graph learning can compare them and generalize from them.
101
+
102
+ To address this challenge, we map all the concrete definitions in the programs in a training dataset to *abstract definitions* by identifying important properties of the definitions. Following a list of attack surfaces identified by Moshtari et al. [41], we designed the following four properties that can encompass the attack surfaces of a vulnerability and used them to represent a definition:
103
+
104
+ - (1) API call: the call to library or system functions used to define a variable, e.g. malloc and strlen.
105
+ - (2) Data type: the data type of the variable being assigned, e.g. int, char\* and float.
106
+ - (3) Constant: the constant values assigned in the definition, e.g. NULL, -1 and the hard-coded string "foo".
107
+ - (4) Operator: the operators used to define a variable, e.g. + and \*.
108
+
109
+ We analyze a large corpus of programs, e.g., the training set, and collect the top-k most frequently used API calls, data types, constants, and operators to construct a dictionary; k is a hyperparameter of DeepDFA. We select only the top-k keys because representations of user-defined API names and data types cannot generalize across programs unless they appear frequently in the dataset.
110
+
111
+ In Figure 3, we show an example of the abstract dataflow embedding for the definition $d_2$ in Figure 1: str = malloc(10 \* argc). This definition uses an API call, malloc, with the constant 10, the operator \*, and the data type char\*. In contrast to the 3-bit bit vector (the example in Figure 1 contains three variable definitions) that represents a concrete definition in dataflow analysis for this program, the abstract embedding is larger but of fixed size, consisting of 5x4 elements in this example. Here, 4 is the number of properties we consider, and 5 is the hyperparameter k mentioned above, which defines the size of the pre-defined dictionary; each length-5 one-hot encoding represents the value of one property. Because the vector that encodes the abstract dataflow embedding has a fixed size, our embedding approach can scale to any program size in the dataset without impacting the model's efficiency. The vectors in different programs encode common properties of definitions, so the model can capture dataflow patterns across programs.
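+
+ A minimal sketch of how such an abstract dataflow embedding could be assembled, assuming k = 5; the toy top-k dictionaries and function names below are illustrative placeholders, not the exact vocabulary mined by DeepDFA.
+
+ ```python
+ import numpy as np
+
+ K = 5  # dictionary size per property (hyperparameter k)
+ TOP_K = {
+     "api":      ["malloc", "free", "strlen", "memcpy", "strcpy"],
+     "datatype": ["int", "char*", "float", "size_t", "void*"],
+     "constant": ["NULL", "0", "1", "-1", "10"],
+     "operator": ["+", "-", "*", "/", "="],
+ }
+
+ def one_hot(key, value):
+     """Length-K one-hot for one property; all zeros if the value is
+     absent from the statement or not among the top-K entries."""
+     vec = np.zeros(K)
+     if value in TOP_K[key]:
+         vec[TOP_K[key].index(value)] = 1.0
+     return vec
+
+ def embed_definition(api=None, datatype=None, constant=None, operator=None):
+     """5x4 abstract dataflow embedding, flattened to a 20-dim node vector."""
+     return np.concatenate([one_hot("api", api), one_hot("datatype", datatype),
+                            one_hot("constant", constant), one_hot("operator", operator)])
+
+ # d2: str = malloc(10 * argc) -> api=malloc, type=char*, constant=10, operator=*
+ print(embed_definition("malloc", "char*", "10", "*").reshape(4, K))
+ ```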
112
+
113
+ Abstraction potentially brings in approximation. Using the abstract dataflow embedding, two different definitions may lead to the same encoding. The embedding is designed to be sparse enough that within a program, unique definitions are often represented by unique embedding keys, which allows the model to distinguish definitions within the same function, similar to the bit-vector used in dataflow analysis.
114
+
115
+ Our goal in utilizing graph learning is to learn a node embedding that contains dataflow information. Without loss of generality, we use *reaching definition* as the instance of dataflow analysis in our explanations. Our approach takes the following steps. First, we construct the CFG for a program. Second, we perform static analysis to identify all the definitions in the CFG. We then initialize each node of the CFG using the abstract dataflow embedding, based on whether the node is a definition or not. The abstract dataflow embedding is computed from all the programs in the training dataset (see §4.1 for details).
118
+
119
+ Once the nodes are initialized, we apply the message-passing algorithm [25] from graph learning to propagate the dataflow information throughout the CFG, similar to *Kildall's method* [33]. The main differences are that (1) we propagate the abstract dataflow embeddings of the CFG nodes, and (2) instead of using the dataflow equations of the transfer function and meet operator, we alternately apply the *AGGREGATE* and *UPDATE* functions defined in Equations 3 and 4 (see §3.3). Although the analogy applies to all GNN architectures trained with message-passing, we implemented our approach using a *Gated Graph Sequence Neural Network (GGNN)* [35], where *AGGREGATE* is a *Multi-Layer Perceptron (MLP)* and *UPDATE* is a *Gated Recurrent Unit (GRU)*; we will use this architecture as an example to compare the two algorithms.
120
+
121
+ When dataflow information arrives at the merge point of a branch in CFG, graph learning applies the AGGREGATE function. Specifically, in GGNN, the MLP calculates a weighted sum of the representations of multiple neighboring predecessors, resulting in a single vector; this fulfills the same function as the meet operator. When dataflow information arrives at a new node, the UPDATE function in graph learning computes the next state by combining the information in the current node with the output of AGGREGATE from its predecessors. Specifically, in GGNN, the GRU selectively forgets portions of the previous state and integrates new information from the current node and from the neighboring states, similar to the set union/difference with GEN/KILL performed in the transfer function. Through applying AGGREGATE and UPDATE, the initial embedding will be updated with the dataflow information from the neighboring nodes, similar to the effect of dataflow analysis.
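+
+ The propagation itself can be sketched in a few lines of PyTorch: a linear layer stands in for AGGREGATE over CFG predecessors and a GRU cell stands in for UPDATE, mirroring the meet operator and transfer function. The dimensions, the diamond adjacency, and the mean readout are our illustrative stand-ins (the paper uses Global Attention Pooling).
+
+ ```python
+ import torch
+
+ n_nodes, dim, t_steps = 4, 20, 3
+ A = torch.tensor([[0, 0, 0, 0],      # A[v][u] = 1 iff u is a predecessor of v
+                   [1, 0, 0, 0],      # (the diamond CFG of Figure 1)
+                   [1, 0, 0, 0],
+                   [0, 1, 1, 0]], dtype=torch.float32)
+
+ aggregate = torch.nn.Linear(dim, dim)  # AGGREGATE: transform summed neighbors (Eq. 3)
+ update = torch.nn.GRUCell(dim, dim)    # UPDATE: gated integration per node (Eq. 4)
+
+ h = torch.randn(n_nodes, dim)          # initial abstract dataflow embeddings
+ for _ in range(t_steps):               # a fixed number of message-passing rounds
+     a = aggregate(A @ h)               # sum predecessor states, then transform
+     h = update(a, h)                   # integrate messages into each node state
+
+ graph_repr = h.mean(dim=0)             # simple readout stand-in for attention pooling
+ print(graph_repr.shape)                # torch.Size([20])
+ ```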
122
+
123
+ As Cummins et al. [17] noted, DFA iterates to a fixpoint and thus propagates information throughout the entire graph, while graph learning performs a fixed number of iterations $t$ and thus propagates information only to neighbors within distance $t$. We set $t$ to the value that maximized validation-set performance.
124
+
125
+ Finally, we combine the learned abstract node embeddings to produce graph level representation using *Global Attention Pooling* [35], and pass it to a classifier to predict the function as vulnerable or non-vulnerable.
126
+
127
+ The AGGREGATE and UPDATE functions are learned from labeled data during training, rather than using a fixed formula as in dataflow analysis. By learning from data, we provide an alternative solution to the challenges that often block dataflow analysis such as tracking pointers and handling library calls. Importantly, we no longer need to explicitly specify vulnerability conditions, as required in static analysis. Through learning from training examples, the classifier can capture patterns of dataflow information that represent various types of vulnerabilities and also select the relevant dataflow information for vulnerability detection.
128
+
129
+ In Table 1, we step through a reaching definition analysis for the CFG example in Figure 1 to demonstrate how dataflow information propagates through the graph and how our approach uses dataflow information for vulnerability detection.
130
+
131
+ <span id="page-5-1"></span>Table 1: OUT[v] at each iteration of DFA
132
+
133
+ | Iteration | $v_1$ | $v_2$ | $v_3$ | $v_4$ |
134
+ |-----------|---------|---------|---------|---------|
135
+ | 0 | [0 0 0] | [0 0 0] | [0 0 0] | [0 0 0] |
136
+ | 1 | [1 0 0] | [0 0 0] | [0 1 0] | [0 0 1] |
137
+ | 2 | [1 0 0] | [1 0 0] | [0 1 0] | [0 1 1] |
138
+ | 3 | [1 0 0] | [1 0 0] | [0 1 0] | [1 1 1] |
139
+
140
+ The row *Iteration 0* shows the initialization of each node in the reaching definition analysis. At iteration 1, the DFA updates $OUT[v_1]$, $OUT[v_3]$, and $OUT[v_4]$ using the transfer function to indicate that new definitions are introduced at those nodes. At iteration 2, $OUT[v_1]$ (including $d_1$) propagates to $v_2$ and $OUT[v_3]$ (including $d_2$) propagates to $v_4$ through the CFG edges. At iteration 3, the meet operator is used to combine $OUT[v_2]$ and $OUT[v_3]$. Specifically, $IN[v_4] = \bigcup \{OUT[v_2], OUT[v_3]\}$, computed as $\begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \lor \begin{bmatrix} 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 \end{bmatrix}$; then the transfer function combines $IN[v_4]$ with $GEN_{v_4}$, resulting in $OUT[v_4] = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}$.
143
+
144
+ After the DFA algorithm terminates, the final states of the nodes are used to detect vulnerabilities. The state of $v_4$ is [1 1 1], which indicates that both $d_1$ and $d_2$ may reach $v_4$, depending on the program values. Because the definition $d_1$: str = NULL can reach the dereference at $v_4$, we can conclude that this program has a null-pointer dereference vulnerability. Similarly, in graph learning, after a fixed number of iterations, all the node representations are combined using a graph readout operation to produce a graph-level representation, which is used for the vulnerability detection prediction. Programs with null-pointer dereference bugs will have the same abstract definition, characterized by the char\* type and the constant NULL, reaching the pointer dereference statements. We believe that the dataflow information represented by DeepDFA allows a relatively simple classifier to recognize this pattern in the training dataset.
2301.10902/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2023-03-15T06:29:07.723Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36" version="21.0.2" etag="7HPrO-R1DFBrgzsDOKu2" type="google"><diagram name="Page-1" id="R2MjF2f-2xmdvYJyYE2Z">7V3bcts2EP2WPmimebAG4J2PsZO0nV7H9vTy1IFJWEJNESoERXa/vgsRpEgCtGWbZOuImUwkAeACxNk9WCwXzMy9WN1/I8h6+SNPaTZzUHo/cz/MHAc7TgwfquShKAkcVBQsBEt1o0PBFfuH6sKy2ZaldNNoKDnPJFs3CxOe5zSRjTIiBN81m93yrNnrmix0j+hQcJWQjBrNfmOpXBalDkK15t9Stljqrr2qZkXK1lrGZklSvqt15n6cuReCc1l8W91f0EzNXjkxhaBPHbXVyATN5TEXOMUFn0m21TenxyUfyrsVfJunVLXHM/d8t2SSXq1Jomp3gC+ULeUq09XV/SD4ccuy7EoLIlvJVRHP5SeyYpmC/pqtAEYH/UR38O8lX5FcX3XBMy6gRc5zuPh8IUjK4I5axSnZLKuB6RuhQtL7zsnA1RSDclK+olI8QBN9QRQ489AvrtKqiePAm3taX3cHsHGgEV3WcHY9b45drWdaxxZVLwcU4IsGwg6KO4FyAMWJvXngtkEJ566BiYPx3A2Hg8V7Ghaap+8VwcCvJCObDUuaSBxgQ8fNe9EDTRdt6nly+moT41t0tSwTNCOSfW6Kt82S7uEXzqDjCh039tvoOMg74FBK2vCtSKi+uM5Hx8gLAkOeJGJBpSFvD2E1CUeh6ltQDTKp4VHLQ2FWUPr3VtHyuQ2pqhK+LfTnXshNWXAtCMtZvigrYFw37cZQVnRaFrfUC6xItkxbCn5HWyZoMVaSsUWutBLUhkL5ubJJBkvae12xYmmqurFySVNtG3TSwQIWlumFC3CbCPx5FJrsjG3sjF5PAcFbZuYeAHA90zyxF8yxY2AQxyYE2OmBhkMLBkDN3q/g6sHtgsPjg4yArNRc5zeb9f7W0eWfuGj3hhBLmYCbYlwZKDComtpeUPTDakGsUFRlJophZKJYTuFrQIxeCqIzgfgIiDEaE8R4mKVzPp9PC+KRShC5Bh87yLcpgTPQklgutc835e8nU+42ZSgc05QxftqzObnNTYCcJigIXJ3AaYo5dmdjCguxIay/bQ224AcTXVoHF3LJFzwn2cdDaQvBQ5sfOF9rkP+iUj7okJw2sZoKAG7i4Xd1/dwvf/6hxe1/fLhv/HoYQ1n61gKM/J5UALcJuEf8bUEkRbffPqyp+NzgZiTeHBP3wLp+yRI2g6xvJpFnWTmdbjU6mnNtAaUe/KcrBpNMhFT3erGkyd3kTR0Z/wX32WnGf/e7W9eMMMSuqRNeH97UWOGo7/JbCvMCsqZ41MsVxjeeFowbj8JmQOrnrVxvlaJc79cSxQHKWTstbo+icN425CiyGTIup7B3aLriVNMCXAUTA2Pbg0LHitJga7AtEHXq+x43jlq2g1sGcfzjnNASL3aG3PcMFJT6kfzFBdv7VJfbjE4r5MsDVGDMdiYeaJF0bBGqU7fx2HONFdL3KufluYZuExc5hrj+7Nw5ImD1Ra+eQRy0k1Vg0+/agoa4jDs0nsUF8dzv1pqjjcvpcHMsId+r6RFciV6IIgt6kS3+MFTI15pmND2+GXN1DF2LDQMLj/j4xprV1Gm/s/A82f/DhTKLWfhB/VVjO397W5gxTTvC1qc5lhyJXkx7oCjSZNqvM+3YmikzmGnbspUmLfivtQD2RKNqQVcc6sDmO8HzRcXlZ6IKFosDsBO3d3E7MPaYT+o7U6esC/aUY/EYcsGoOW/WfCmFzg/Kg1K/0WpDvwaju7h89+Zw6wEj30dGEKFrRztQPNh9XqxI3/trAkUg7RNT49lfMPufh40C1A7mBT5sWlH1p5Xge3xqhJHK2Bpbf6Ej93m5ThPGBhZHoxqi8VC1BaImVLuxQH7wBlA94izckKhu4FZkKf0m48nd9ZLlZYVuiN8C/K1UhR7hNyT1CP8RZ+7+vy5R30ch/XIvd0g3CWyHU90ym3eQw6ljZShdwsimI3Ov1xm3ndM2epaSawaiDqch0QciyRsy6h4wsZ2iQygYNfnUtYWFCpMCWQAISpZEbKhsWPJW3p5F3ba7WSvLruFYNkyKOXyv1rHFzdegUyrrqfx4V0QrFExntxon1bSKaDzKI40mNUmbfZK6koOd9X29ohieqsm5WJGsVveZCEbgE0ycyK1Qb5l4tF1C1l1NdhoxVemhwkVEGZVAJmcwUYnSfuNKLtZLkmuRTlGmKOxMs5EqrgiprGNgKrnuCZW3uq8BBs03tyC/7EmrsFKx/Wswat3suEibA6tkwb3c3DEQp2QW/HmmlbTR7oYkd4u96Z61EHd8XIBd//KuNtKUJlwQFfc5k0uW3OV0o4cHPCFZOT/ttjUsH21XG06j3W3GiWxPTso264w8lM0zBhUO+oqt1lxIkmuNb9nAxzzhKRW1damwhsfXpSeYrQey8ZB5uAh1PGm28H8fSc2uGbI8af73S5+twgOPe/LA7U6Sm8h/Iv+J/L8U8g9QK5NXvZvF4voPxfze88Lpp5F6GcJOPPQPIXNsQvTS1+c8JXrQN+l4trj6dOKhkbPpGwfMxj7x4D0vTn4qJtmMrMH0Ry+1wOAxC8QuGvDwg2dGy0/a0Q5iz3joP3agxes+5Tv52pOvPfnaX4qvHfqBubiPG2jxbM9mTn1tjz11qr/bJ47cF7vbT4gGJ2JId9uW3ju52633ORg52Nh++HCw5d98znJNN/JUPTLbKzaQZyXJ4SCxZdBOHtnkkU0e2ZflkcU+Nrd/ERrQI4Ofh3f3F2v64b9AcD/+Cw==</diagram></mxfile>
2301.10902/main_diagram/main_diagram.pdf ADDED
Binary file (55.7 kB).
 
2301.10902/paper_text/intro_method.md ADDED
@@ -0,0 +1,149 @@
1
+ # Introduction
2
+
3
+ *Hyperdimensional computing* (HDC) is a novel learning paradigm that takes inspiration from the abstract representation of neuron activity in the human brain. HDCs use high-dimensional binary vectors, and they offer several advantages over other well-known training methods like artificial neural networks (ANNs). One of the advantages of HDCs is their ability to achieve high parallelism and low energy consumption, which makes them an ideal choice for resource-constrained applications such as electroencephalogram detection, robotics, language recognition, and federated learning. Several studies have shown that HDCs are highly efficient in these applications  [@hsieh2021fl; @asgarinejad2020detection; @neubert2019introduction; @rahimi2016robust]. Moreover, HDCs are relatively easy to implement in hardware [@schmuck2019hardware; @salamat2019f5], which adds to their appeal as a practical solution for real-world problems, especially in embedded devices.
4
+
5
+ Unfortunately, the practical deployment of HDC suffers from low model accuracy and has largely been restricted to small and simple datasets. To address this problem, one commonly used technique is to increase the hypervector dimension [@neubert2019introduction; @schlegel2022comparison; @yu2022understanding]. For example, on the MNIST dataset, hypervector dimensions of 10,000 are often used; [@duan2022lehdc] and [@yu2022understanding] achieved state-of-the-art accuracies of 94.74% and 95.4%, respectively. In these and other state-of-the-art HDC works, hypervectors are randomly drawn from the hyperspace $\{-1,+1\}^d$, where the dimension $d$ is very high. This ensures high orthogonality, making the hypervectors more independent and easier to distinguish from each other [@thomas2020theoretical]. As a result, accuracy is improved and more complex application scenarios can be targeted. However, the price paid for the higher dimension is higher energy consumption, possibly negating the advantage of HDC altogether [@neubert2019introduction]. This paper addresses this trade-off and suggests a way to make use of it to improve HDC.
6
+
7
+ In this paper, we analyze the relationship between hypervector dimension and accuracy. It is intuitively true that high dimensions lead to higher orthogonality [@thomas2020theoretical]. However, contrary to popular belief, we found that as the dimension of the hypervectors $d$ increases, the upper bounds on worst-case and average-case inference accuracy actually *decrease* (Theorem [\[prop1\]](#prop1){reference-type="ref" reference="prop1"} and Theorem [\[prop2\]](#prop2){reference-type="ref" reference="prop2"}). In particular, if the hypervector dimension $d$ is sufficient to represent a vector with $K$ classes (in particular, $d > \log_2 K$), then **the lower the dimension, the higher the accuracy.**
8
+
9
+ Based on our analysis, we utilize a fully-connected network (FCN) with integer weights and binary activations as the encoder. Our research shows that this encoder is equivalent to traditional HDC encoding methods, as demonstrated in Section [3.2](#sec:low-d-h-t){reference-type="ref" reference="sec:low-d-h-t"}. Additionally, we learn the representation of each class through the majority rule. This reduces the hypervector dimension while still maintaining state-of-the-art accuracies.
10
+
11
+ When running on the MNIST dataset, we were able to achieve HDC accuracies of 91.12/91.96% with hypervector dimensions of only 64/128. Also, the total number of calculation operations required by our method ($d=64$) was only 0.35% of what was previously needed by related works that achieved the state-of-the-art performance. These prior methods relied on hypervector dimensions of 10,000 or more. Our analysis and experiments conclusively show that such high dimensions are not necessary.
12
+
13
+ The contributions of this paper are as follows:
14
+
15
+ - We give a comprehensive analysis of the relationship between hypervector dimension and the accuracy of HDC. Both the worst-case and average-case accuracy are studied. Mathematically, we explain why relatively lower dimensions can yield higher model accuracies.
16
+
17
+ - After conducting our analysis, we found that our methods can achieve detection accuracies similar to the state of the art while using much smaller hypervector dimensions, and hence lower latency. For instance, by utilizing a dimension of just 64 on the widely-used MNIST dataset, we were able to achieve an HDC accuracy of 91.12%.
18
+
19
+ - We have also confirmed the effectiveness of our approach on other datasets commonly used to evaluate HDC, including ISOLET, UCI-HAR, and Fashion-MNIST, achieving state-of-the-art accuracies even with quite low dimensions. Overall, our findings demonstrate the potential of our methods in reducing computational overhead while maintaining high detection accuracy.
20
+
21
+ This paper is organized as follows. For completeness, we first introduce the basic workflow and background of HDC. In Section [3](#sec:Method){reference-type="ref" reference="sec:Method"}, we present our main dimension-accuracy analysis and two HDC retraining approaches. To evaluate the effectiveness of our proposed methods, we conduct experiments and compare our results with state-of-the-art HDC models in Section [4](#sec:exp){reference-type="ref" reference="sec:exp"}. Finally, we discuss the implications of our findings and conclude the paper.
22
+
23
+ Hyperdimensional computing (HDC) is a technique that represents data using binary hypervectors with dimensions typically ranging from 5,000 to 10,000. For example, when working with the MNIST dataset, each flattened image $x \in \mathbb{R}^{784}$ is encoded into a hypervector $r \in \mathbb{R}^d$ using a binding operation that combines value hypervectors $v$ with position vectors $p$ and takes their summation.
24
+
25
+ Both of these hypervectors $\mathbf{v}, \mathbf{p}$ are drawn independently and uniformly at random from the hyperspace $\{-1,+1\}^d$. Mathematically, we construct the representation $r$ for each image as follows: $$\begin{equation}
26
+ \label{eq:hdc_encoding}
27
+ r = \textrm{sgn} \left (v_{x_0}\bigotimes p_{0} + v_{x_1}\bigotimes p_{1}+ \cdots + v_{x_{783}}\bigotimes p_{783} \right ),
28
+ \end{equation}$$
29
+
30
+ where the sign function '$\textrm{sgn}(\cdot)$' is used to binarize the sum of the hypervectors, returning either -1 or 1. When the sum equals zero, $\textrm{sgn}(0)$ is randomly assigned either 1 or -1. In addition, the binding operation $\bigotimes$ performs element-wise multiplication between hypervectors. For instance, $[-1,1,1,-1]\bigotimes[1,1,1,-1] = [-1,1,1,1]$.
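+
+ As a quick check, this small NumPy snippet (our own) reproduces the binding example above:
+
+ ```python
+ import numpy as np
+
+ a = np.array([-1, 1, 1, -1])
+ b = np.array([1, 1, 1, -1])
+ print(a * b)  # [-1  1  1  1], element-wise binding as in the text
+ ```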
31
+
32
+ During training, hypervectors $r_1, r_2, ..., r_{60,000}$ belonging to the same class are added together. The resulting sum is then used to generate a representation $R_i$ for class $i$, using the \"majority rule\" approach. The data belonging to class $i$ is denoted by $C_i$.
33
+
34
+ $$\begin{equation}
35
+ \label{r_c}
36
+ R_i = \textrm{sgn} \left (\sum_{x \in C_i} r_x \right ).
37
+ \end{equation}$$
38
+
39
+ During inference, the encoded test image is compared to each class representation $R_c$, and the most similar one is selected as the predicted class. Various similarity measures such as cosine similarity, L2 distance, and Hamming distance have been used in previous works. In this work, we use the inner product as the similarity measure for binary hypervectors with values of -1 and 1, as it is equivalent to the Hamming distance, as noted in [@frady2021computing].
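+
+ For concreteness, here is a self-contained NumPy sketch of the full pipeline described above: random bipolar hypervectors, binding and bundling, the majority rule of the class-representation equation, and inner-product inference. The sizes are scaled down from MNIST, the data is synthetic noise (so accuracy stays near chance; real gains appear on structured data), and all names are our own illustrative choices.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ d, n_pix, n_levels, n_classes = 128, 64, 16, 4   # toy sizes (MNIST uses 784 pixels)
+ V = rng.choice([-1, 1], size=(n_levels, d))       # value hypervectors
+ P = rng.choice([-1, 1], size=(n_pix, d))          # position hypervectors
+
+ def encode(x):
+     """Bind value and position hypervectors per pixel, sum, then binarize.
+     Ties at zero are sent to +1 here, a simplification of random tie-breaking."""
+     s = (V[x] * P).sum(axis=0)
+     return np.where(s >= 0, 1, -1)
+
+ X = rng.integers(0, n_levels, size=(200, n_pix))  # synthetic "images"
+ y = rng.integers(0, n_classes, size=200)
+ R = np.stack([np.where(sum(encode(x) for x in X[y == c]) >= 0, 1, -1)
+               for c in range(n_classes)])          # majority rule per class
+
+ def predict(x):
+     return int(np.argmax(R @ encode(x)))           # inner-product similarity
+
+ print("train acc:", np.mean([predict(x) == c for x, c in zip(X, y)]))
+ ```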
40
+
41
+ Contrary to the traditional wisdom that higher-dimensional models have lower error rates, a higher representation dimension for the majority rule in the HDC domain does not always lead to better results. Our research demonstrates that if a dataset can be linearly separably embedded into a $d$-dimensional vector space, a higher-dimensional representation may actually reduce classification accuracy. This discovery prompted us to explore the possibility of finding a low-dimensional representation of the dataset. Our numerical experiments support this theoretical finding: the empirical accuracy curves align with our analysis.
42
+
43
+ Based on the assumption that the dataset can be linearly-separably embedded into a $d$-dimensional space, we here investigate the dimension-accuracy relationship for the majority rule.
44
+
45
+ We further assume the encoded hypervectors are uniformly distributed over a $d$-dimensional unit ball: $$\begin{equation*}
46
+ B^d = \{r \in \mathbb{R}^d \big | \|r\|_2 \leq 1\}.
47
+ \end{equation*}$$ Moreover, we assume that the hypervectors $r$ are *linearly separable* and each class with label $i$ can be represented by $C_i$: $$\begin{equation*}
48
+ C_i = \{r \in \mathcal{X} | R_i \cdot r > R_j \cdot r, j \neq i\}, \quad 1 \leq i \leq K
49
+ \end{equation*}$$ where $R_i \in [0, 1]^d$ are support hypervectors that are used to distinguish class $i$ from the other classes.
50
+
51
+ Selecting a sufficiently large $d$ to embed the raw data into a $d$-dimensional unit ball is crucial for this approach to work effectively. This assumption is reasonable: with a large enough $d$, the raw data can be mapped accurately into a high-dimensional space in which the support hypervectors can distinguish between the different classes.
52
+
53
+ Similarly, we define the prediction class $\hat{C}_i$ by $\hat{R}_i$ as follows: $$\begin{equation*}
54
+ \hat{C}_i = \{r \in \mathcal{X} | \hat{R}_i \cdot r > \hat{R}_j \cdot r, j \neq i\}, \quad 1 \leq i \leq K.
55
+ \end{equation*}$$ When we apply the majority rule to separate the above hypervectors $r$, we approximate $R_i$ with $\hat{R}_i$ in the sense of maximizing the prediction accuracy. Here each $\hat{R}_i \in \{0, 1\}^d$ is a binary vector.
56
+
57
+ Therefore, we define the worst-case $K$-class prediction accuracy over the hypervector distribution $\mathcal{X}$ by the following expression: $$\begin{equation*}
58
+ Acc^w_{K, d} := \inf_{R_1, R_2, \dots, R_K} \sup_{\hat{R}_1, \hat{R}_2, \dots, \hat{R}_K} \mathbb{E}_r \bigg [ \sum_{i=1}^K \prod_{j \neq i} \mathbf{1}_{\{R_i \cdot r > R_j \cdot r\}} \mathbf{1}_{\{\hat{R}_i \cdot r > \hat{R}_j \cdot r\}} \bigg ].
59
+ \end{equation*}$$
60
+
61
+ ::: theorem
62
+ []{#prop1 label="prop1"} Assume $K = 2$. As the dimension $d$ of the hypervectors increases, the worst-case prediction accuracy decreases at the following rate: $$\begin{align*}
63
+ Acc^w_{2, d} & = 2 \inf_{R_1, R_2} \sup_{\hat{R}_1, \hat{R}_2} \mathbb{E}_r \bigg [ \mathbf{1}_{\{R_1 \cdot r > R_2 \cdot r\}} \mathbf{1}_{\{\hat{R}_1 \cdot r > \hat{R}_2 \cdot r\}} \bigg ] \\
64
+ & = \inf_{R_1, R_2} \sup_{\hat{R}_1, \hat{R}_2} \bigg [ 1 - \frac{\arccos (\frac{(R_1 - R_2) \cdot (\hat{R}_1 - \hat{R}_2)}{ \|R_1-R_2\|_2 \|\hat{R}_1-\hat{R}_2\|_2}) }{ \pi } \bigg ] \\
65
+ & = 1 - \frac{\arccos (\frac{1}{\sqrt{\sum_{j=1}^d (\sqrt{j} - \sqrt{j-1})^2}}) }{\pi} \to \frac{1}{2}, \qquad d \to \infty
66
+ \end{align*}$$
67
+
68
+ The first equality follows from the symmetry of the distribution $\mathcal{X}$. The second equality evaluates the expectation over $\mathcal{X}$; the details are given in Lemma 1. The third equality is proved in Lemma 3 and Lemma [\[lemma:inequality_2_for_mr\]](#lemma:inequality_2_for_mr){reference-type="ref" reference="lemma:inequality_2_for_mr"}.
69
+ :::
70
+
71
+ In the next theorem, we further consider the average case. Assume the prior distribution $\mathcal{P}$ over the optimal representations is uniform: $R_1, \dots, R_K \sim \mathcal{U}[0, 1]^d$. We define the average-case accuracy by the following expression: $$\begin{equation*}
72
+ \overline{Acc}_{K, d} := \mathbb{E}_{R_1, R_2, \dots, R_K \sim \mathcal{P}} \sup_{\hat{R}_1, \hat{R}_2, \dots, \hat{R}_K} \mathbb{E}_r \bigg [ \sum_{i=1}^K \prod_{j \neq i} \mathbf{1}_{\{R_i \cdot r > R_j \cdot r\}} \mathbf{1}_{\{\hat{R}_i \cdot r > \hat{R}_j \cdot r\}} \bigg ].
73
+ \end{equation*}$$
74
+
75
+ ::: theorem
76
+ []{#prop2 label="prop2"} Assume $K = 2$. As the dimension $d$ of the hypervectors increases, the average-case prediction accuracy decreases: $$\begin{align*}
77
+ \overline{Acc}_{K, d} & = 2\mathbb{E}_{R_1, R_2 \sim U[0, 1]^d} \sup_{\hat{R}_1, \hat{R}_2} \mathbb{E}_r \bigg [ \mathbf{1}_{\{R_1 \cdot r > R_2 \cdot r\}} \mathbf{1}_{\{\hat{R}_1 \cdot r > \hat{R}_2 \cdot r\}} \bigg ] \\
78
+ & = \mathbb{E}_{R_1, R_2 \sim U[0, 1]^d} \sup_{\hat{R}_1, \hat{R}_2} \bigg [ 1 - \frac{\arccos (\frac{(R_1 - R_2) \cdot (\hat{R}_1 - \hat{R}_2)}{ \|R_1-R_2\|_2 \|\hat{R}_1-\hat{R}_2\|_2}) }{ \pi } \bigg ] \\
79
+ & = \mathbb{E}_{R_1, R_2 \sim U[0, 1]^d} \bigg [ 1 - \frac{\arccos \big( \sup_{j=1}^d \frac{\sum_{i=1}^j |R_1 - R_2|_{(i)}}{\sqrt{j} \|R_1 - R_2\|} \big )}{\pi} \bigg ].
80
+ \end{align*}$$ Here $|R_1 - R_2|_{(i)}$ denotes the $i$-th maximum coordinate for vector $|R_1 - R_2|$.
81
+ :::
82
+
83
+ Since the exact expression for average-case accuracy is challenging to evaluate, we rely on Monte Carlo simulations. In particular, we sample $R_1$ and $R_2$ 1000 times to estimate the expected accuracy.
84
+
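+ Both curves follow directly from the expressions above; a NumPy sketch of the computation (the sample count of 1000 matches the text, other choices are illustrative):
+
+ ```python
+ import numpy as np
+
+ def worst_acc(d):
+     """Closed-form worst-case accuracy Acc^w_{2,d} from the first theorem."""
+     j = np.arange(1, d + 1)
+     s = np.sum((np.sqrt(j) - np.sqrt(j - 1)) ** 2)
+     return 1.0 - np.arccos(1.0 / np.sqrt(s)) / np.pi
+
+ def avg_acc(d, n_samples=1000, seed=0):
+     """Monte Carlo estimate of the average-case accuracy (second theorem)."""
+     rng = np.random.default_rng(seed)
+     j = np.arange(1, d + 1)
+     accs = np.empty(n_samples)
+     for t in range(n_samples):
+         R1, R2 = rng.uniform(size=(2, d))
+         diff = np.sort(np.abs(R1 - R2))[::-1]   # |R1 - R2| sorted descending
+         ratio = np.max(np.cumsum(diff) / (np.sqrt(j) * np.linalg.norm(R1 - R2)))
+         accs[t] = 1.0 - np.arccos(np.clip(ratio, 0.0, 1.0)) / np.pi
+     return accs.mean()
+
+ for d in (1, 10, 100, 1000):
+     print(d, round(worst_acc(d), 4), round(avg_acc(d), 4))
+ ```
+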
85
+ We then present the curves of $Acc^w_{K, d}$ and $\overline{Acc}_{K, d}$ over a range of dimensions from 1 to 1000 in Figures [\[worst\]](#worst){reference-type="ref" reference="worst"} and [1](#average){reference-type="ref" reference="average"}, respectively. It is evident from these figures that the upper bound on classification accuracy decreases as the dimension of the representation exceeds the necessary dimension. A higher representation dimension is therefore not necessarily beneficial and can even reduce accuracy: in both the worst case and the average case, the upper bound on accuracy slowly drops as the dimension increases, so a high dimension is not necessary for HDC.
86
+
87
+ <figure id="average" data-latex-placement="t">
88
+ <div class="minipage">
89
+ <img src="Figures/worst-case.png" style="width:99.0%" />
90
+ </div>
91
+ <div class="minipage">
92
+ <img src="Figures/worst-average.png" style="width:99.0%" />
93
+ </div>
94
+ <figcaption>Average-case Accuracy <span class="math inline">$\overline{Acc}_{2, d}$</span></figcaption>
95
+ </figure>
96
+
97
+ According to [@tax2002using], we can approximate the multi-class case where $K \geq 3$ by one-against-one binary classification. Therefore, we define the quasi-accuracy of $K$-class classification as follows: $$\begin{equation*}
98
+ Quasi\textrm{-}Acc_{K, d} = \frac{\sum_{i \neq j} Acc^{ij}_{2, d}}{K(K-1)},
99
+ \end{equation*}$$ where $Acc^{ij}_{2, d}$ can be either the average-case or the worst-case accuracy for distinguishing classes $i$ and $j$. Since the accuracy $Acc^{ij}_{2, d}$ for binary classification decreases as the dimension increases, the quasi-accuracy follows the same trend.
100
+
101
+ <figure id="fig:workflow" data-latex-placement="htb">
102
+ <embed src="Figures/wf_all.pdf" style="width:100.0%" />
103
+ <figcaption>Workflow of Our HDC.</figcaption>
104
+ </figure>
105
+
106
+ To confirm the theoretical findings mentioned above, we propose an HDC design that is shown in Figure [2](#fig:workflow){reference-type="ref" reference="fig:workflow"}. For data encoding, the traditional hyperdimensional computing technique utilizes binding and bundling operations to encode data samples using Equation [\[eq:hdc_encoding\]](#eq:hdc_encoding){reference-type="ref" reference="eq:hdc_encoding"}. However, in this study, we use a simple fully-connected network with integer weights and binary activations as the encoder. Taking the MNIST dataset as an example, we demonstrate the equivalence of these two methods as follows:
107
+
108
+ $$\begin{align}
109
+ \label{eq:fcn-equivalent-to-hdc-encoder}
110
+ r= \textrm{sgn}(Wx)= \textrm{sgn} \left ( \sum_{0 \leq i \leq 783} W_{i,x_i=1}\right )
111
+ \end{align}$$
112
+
113
+ where $W_{i,x_i=1}$ indicates the weights whose corresponding input satisfies $x_i = 1$.
114
+
115
+ Equation [\[eq:fcn-equivalent-to-hdc-encoder\]](#eq:fcn-equivalent-to-hdc-encoder){reference-type="ref" reference="eq:fcn-equivalent-to-hdc-encoder"} shows that the linear transform sums the weights $W_i$ corresponding to inputs with $x_i=1$ while ignoring the weights for $x_i=0$. The resulting weight sum $\sum_{0 \leq i \leq 783} W_{i,x_i=1}$ corresponds to the sum of the bound hypervectors $v$ and $p$ in Equation [\[eq:hdc_encoding\]](#eq:hdc_encoding){reference-type="ref" reference="eq:hdc_encoding"}. The integer-weight FCN with binary activation is a natural modification of hyperdimensional computing encoders, using only integer additions, as in traditional HDC encoders.
116
+
117
+ Specifically, in our one-layer integer-weight fully-connected network, if we randomly initialize the weights with binary values, it becomes equivalent to the encoder of HDC.
118
+
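+ This equivalence is easy to verify numerically; in the following sketch, the randomly initialized binary weight columns play the role of the bound hypervectors (dimensions are illustrative):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ d, n_pixels = 10_000, 784
+ W = rng.choice([-1, 1], size=(d, n_pixels))   # random binary weights
+ x = rng.integers(0, 2, size=n_pixels)         # binarized input image
+
+ fcn_out = np.sign(W @ x)                      # FCN view: linear layer + sign
+ hdc_out = np.sign(W[:, x == 1].sum(axis=1))   # HDC view: sum selected columns
+
+ assert np.array_equal(fcn_out, hdc_out)
+ ```
+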
119
+ We use a *straight-through estimator* (STE) to learn the weights [@bengio2013estimating] (details of STE are discussed in the Appendix [^2]). The binary representation $R_c$ of each class is generated using the majority rule (Algorithm [\[alg_mr\]](#alg_mr){reference-type="ref" reference="alg_mr"}). To achieve this, we first sum up the $N$ hypervectors $r$ belonging to class $c$ and obtain an integer-valued representation $S_c$ for that class. Subsequently, we assign a value of 1 to each element of $R_c$ whose counterpart in $S_c$ exceeds a predefined threshold $\theta$; otherwise, we set it to 0. This yields the binary representation $R_c$.
120
+
121
+ We have also devised a two-step retraining method that refines the binary representation $R_c$ to improve accuracy. Algorithm [\[alg_tt\]](#alg_tt){reference-type="ref" reference="alg_tt"} outlines the procedure we follow. First, we feed the training data to the encoder in batches and employ the mean squared error as the loss function to update the weights in the encoder. Next, we freeze the encoder and update the representation of each class. If an output $r$ is misclassified as class $c_{wrong}$ instead of the correct class $c_{correct}$, we decrease the representation sum $S_{c_{wrong}}$ of the wrong class by $r$ scaled by the learning rate and, simultaneously, increase the representation sum $S_{c_{correct}}$ of the correct class by $r$ scaled by the learning rate. We then use the modified $S_c$ in Algorithm [\[alg_mr\]](#alg_mr){reference-type="ref" reference="alg_mr"} to generate the binary representation $R_c$.
122
+
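+ A NumPy sketch of the class-representation update in the second step (illustrative; the STE-based encoder update is omitted, representations are kept in $\{-1,+1\}$ rather than $\{0,1\}$, and the learning rate is an assumption):
+
+ ```python
+ import numpy as np
+
+ def refine(S, R, z, y_true, lr=1.0, theta=0.0):
+     """Update the representation sums S for one encoded sample z in {-1,+1}^d
+     with true label y_true, then re-binarize with the majority rule."""
+     y_pred = int(np.argmax(R @ z))        # inner-product similarity
+     if y_pred != y_true:
+         S[y_pred] -= lr * z               # push the wrong class away from z
+         S[y_true] += lr * z               # pull the correct class towards z
+         R[y_pred] = np.where(S[y_pred] > theta, 1, -1)
+         R[y_true] = np.where(S[y_true] > theta, 1, -1)
+     return S, R
+ ```
+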
123
+ ::::: minipage
124
+ :::: algorithm
125
+ ::: algorithmic
126
+ $N$ number of training data $\bm{x}$; Trained binary encoder $E$; Sum of representation $S$; Binary Representation $R_c$; Outputs of encoder $\bm{y}$; Pre-defined Threshold $\theta$;
127
+
128
+ $\bm{r} = E(\bm{x})$; $S_c = 0$; **for** each $r$ with label $c$: $S_c \mathrel{+}= r$; **for** each $i$: **if** $S_c[i] > \theta$: $R_c[i]=1$ **else**: $R_c[i]=0$
129
+ :::
130
+ ::::
131
+ :::::
132
+
133
+ ::::: minipage
134
+ :::: algorithm
135
+ ::: algorithmic
136
+ Training data $\bm{x}$ with label $\bm{R_c}$; Trained Encoder $E$; $N$ training epochs.
137
+
138
+  
139
+
140
+ **Step 1:** $r = E(\bm{x})$; $L = \textrm{mse}(r, R_c)$ // backpropagate with STE
141
+
142
+  
143
+
144
+ **Step 2:** $r = E(\bm{x})$; **if** $r$ is misclassified: $S_{c_{correct}} \mathrel{+}= lr \cdot r$; $S_{c_{wrong}} \mathrel{-}= lr \cdot r$; generate $R_c$ (Algorithm 1, lines 5-9)
145
+ :::
146
+ ::::
147
+ :::::
148
+
149
+ After computing the representation of each class, we classify a test sample by comparing its hypervector with the representations of all classes. To do this, we send the test data through the same encoder to obtain its hypervector representation, convert every 0 in $R_c$ to -1, and compute the inner product as the similarity. The class with the highest similarity is reported as the result.
2303.01384/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2022-05-19T12:48:16.436Z" agent="5.0 (X11)" etag="IoQliB2XV0PC63vFCPbu" version="16.5.4" type="google"><diagram id="zZLt1gR3CT01pRjFCmS6" name="Page-1">7VrbcqM4EP0aV+0+hBIIMDzGl2Rqkq2ZKs/sbvYlRYwM2mDkgJyQfP0IEAaB8C34kqk4KRs1Ugt1H51uSfTgcJ5cR87C/4u4KOhpwE16cNTTNAP02XcqeM0FumbmAi/Cbi5SS8EEvyEuBFy6xC6KhYqUkIDihSickjBEUyrInCgiL2K1GQnEXheOx3sEpWAydQLUqPYPdqmfSy2jUvsLwp5f9KwCfmfuFJW5ith3XPJS6QuOe3AYEULzq3kyREFqu8IuuaKrlrurB4tQSLdpEC3dR099++knFnFnE/rl/t674FqenWDJB9zTdPaf5D/8yelrYQ6KEtbZwKfzgAlUdhnTiDyiIQlIxCQhCVnNwQwHQU3kBNgLWXHKHhcx+eAZRRQzQ1/yG3Psumk3gxcfUzRZONO0zxeGKiaLyDJ0UToSkOpaUhLnUMmK8SJ3/QwnaZ1B0zTFOFmfKKmIuKmuEZkjGr2yKsVdnfuR49Yoyi8lCgrI+BUAFDKH485baS5dwy64d3bwlCbxVM05KHQvU8inVg6cOMbTHz4ORXeJhmT2iV7/rRbueL2sMEqqt0avRSnBNGuk2JbFy3dZGegqL5dN00K15XcUYWaSFANct3uFU1tkpVbXxWQZTdFmKFMn8hBvOn7GqgnBzdenvyex8+3brbdYXsC8HnI9tBYIFUcbEkcXsggFDsXPIl3IvM97+E4wG1mJs34LzgoV+bh5q+r0rinSgK1Am9FP8dEEvbptKrolqs5N1VCdwXNliK0QKzV0v4HYcThtgpb54dZ5YFFDwOn2dBEhRgXOQ1CwwSIdTzZCY9AzRkwSpOoHzvTRy9BfI6aIUOZAkuqEmiqF4NoJWaeUVSzizyTQvYxqLoCiapb+PjhxN1/YYgsym8Xovd6VDl8/Hh8BxdiSkdQKG21got1ZZx2bbGQd/axYRzNrrAP2ZR3bFBVB46gcYzVQOEKMY66uUXjeTGNvTzTFTOuEaGxTFRymdoKnvqCzmBSdktC62SeQkBmkGaqLn9mlRzMzZ6msMfQd2usP3nr9UZHd5nVZ10L1j5DztjLW9inuKlctUgSrmeLCA6W46ziysRhZuS6puO4jOOkQCxOonm5hIvWasdMcpDhwUTELjWGM56wtu3hwIiZ9YtI/3v78nJ6pw2qO1s2mo2UpwsEcbe7l6OS3oNuDzGQoLtFOPpNtiYM7SOmriXaag024OhJRn3gkdIJxKa1sLhhbJ/OtztiYvnNMn0laDnVdRIS5Z1oO6/l9v6aoJS1nvnVeK9V4hrvmgTVpPyUKc42dJn0FL26JU04O7QA9M0ie10qxgSRrX0gaR4KkJu/nsJAsNoNaYmMFmubTMt3xH8xISC/yCHPJKqhgkZQ3y+CYK0krv0cLj8npmQQLhywq37KofM9+XRzL4nPen/gMn2Gb52HAkJP0EcL2/7dfg5vxj5/Q9/+7Go/nb1foRnqG8xGwF6Hp74y9FlxJ0Ldmha4pNbDJckTjiGCTEd37c0TJ9m0ZbLfZwBX93dOgBdK/3v5bytIo3NnJVPtEPtlBlGkr/dpmrgEUS7XLz36Rv77PZKjbRf49grPUrLINis4gCw4JWfUYkF2XhJ45ZPtNyJrdQBY2Yryism/L7pvAsrSDHaRKjbxh4+VsA/zwM7zvGt7VE4f35pl9l1y5w/p59/Be0uOdwI4H50rJEXD7ND7ZOr4ehPfdWmocHdcVHZgMmye+XSBUniRuwNAqbVUAEJCtAGjugG7ZW1Ed4tNo4rN9L/RkbyTU3oPSrT1DN4Q1fMKaogPjU7I7OcJxE6Nn9DLC2nAmf8dglzAH2crCtvuWrpm2YWiwRh/ZTV3VTVtn00Y/UGBTZbskHfDGzlEKWHC3rD4r7UsPJws3lia6ub7bu+101oGq2NZ6XXvPaFYs37/Oq5cvscPxLw==</diagram></mxfile>
2303.01384/main_diagram/main_diagram.pdf ADDED
Binary file (12.4 kB). View file
 
2303.01384/paper_text/intro_method.md ADDED
@@ -0,0 +1,144 @@
1
+ # Introduction
2
+
3
+ Real-world data tends to be highly structured, full of symmetries and patterns. This implies that there exists a lower-dimensional set of ground truth factors that is able to explain a significant portion of the variation present in real-world data. The goal of disentanglement learning is to recover these factors, so that changes in a single ground truth factor are reflected only in a single latent dimension of a model (see Figure [1](#fig:celeba_fringes_smile){reference-type="ref" reference="fig:celeba_fringes_smile"} for an example). Such an abstraction allows for more efficient reasoning [@van2019disentangled] and improved interpretability [@higgins2016beta]. It further shows positive effects on zero-shot domain adaptation [@higgins2017darla] and data efficiency [@duan2019unsupervised; @schott2022visual].
4
+
5
+ <figure id="fig:celeba_fringes_smile" data-latex-placement="h!">
6
+ <img src="figures/real_world/celeba_fringes_gender.png" />
7
+ <figcaption>Latent traversals of a single latent dimension (hair fringes) of DAVA trained on <em>CelebA</em>. DAVA visibly disentangles the fringes from all other facial properties.</figcaption>
8
+ </figure>
9
+
10
+ If the generative ground-truth factors are known and labeled data is available, one can train a model in a supervised manner to extract the ground-truth factors. What if the generative factors are unknown, but one still wants to profit from the aforementioned benefits for a downstream task? This may be necessary when the amount of labeled data for the downstream task is limited or training is computationally expensive. Learning disentangled representations in an unsupervised fashion is generally impossible without the use of some priors [@locatello2019challenging]. These priors can be present both implicitly in the model architecture and explicitly in the loss function [@tschannen2018recent]. An example of such a prior present in the loss function is a low total correlation between latent variables of a model [@chen2018isolating; @kim2018disentangling]. Reducing the total correlation has been shown to have a positive effect on disentanglement [@locatello2019challenging]. Unfortunately, as we show in more detail in this work, how much the total correlation should be reduced to achieve good disentanglement is highly dataset-specific. The optimal hyperparameter setting for one dataset may yield poor results on another dataset. To optimize regularization strength, we need a way to evaluate disentanglement quality.
11
+
12
+ So how can we identify well-disentangled representations? Evaluating representation quality, even given labeled data, is no easy task. Perhaps as an example of unfortunate nomenclature, the often-used term "ground-truth factor" implies the existence of a canonical set of orthogonal factors. However, there are often multiple equally valid sets of ground truth factors, such as affine transformations of coordinate axes spanning a space, different color representations, or various levels of abstraction for group properties. This poses a problem for supervised disentanglement metrics, since they fix the ground truth factors for evaluating a representation and judge the models too harshly if they have learned another equally valid representation. Furthermore, acquiring labeled data in a practical setting is usually a costly endeavor. The above reasons hinder the usability of supervised metrics for model selection.
13
+
14
+ In this work, we overcome these limitations for both learning and evaluating disentangled representations. Our improvements are based on the following idea: We define two distributions that can be generated by a VAE. Quantifying the distance between these two distributions yields a disentanglement metric that is independent of the specific choice of ground truth factors and reconstruction quality. The further away these two distributions are, the less disentangled the VAE's latent space is. We show that the similarity of the two distributions is a necessary condition for disentanglement. Furthermore, we can exploit this property at training time by introducing an adversarial loss into classical training of VAEs. To do this, we introduce a discriminator network into training and use the VAE's decoder as the generator. During training, we control the weight of the adversarial loss. We adjust the capacity of the latent space information bottleneck accordingly, inspired by [@burgess2018understanding]. In this way, we allow the model to increase the complexity of its representation as long as it is able to disentangle.
15
+
16
+ This dynamic training procedure solves the problem of dataset-specific hyperparameters and allows our approach to reach competitive disentanglement on a variety of commonly used datasets without hyperparameter tuning.
17
+
18
+ Our contributions are as follows:
19
+
20
+ - We identify a novel unsupervised aspect of disentanglement called PIPE and demonstrate its usefulness in a metric with correlation to supervised disentanglement metrics as well as a downstream task.
21
+
22
+ - We propose an adaptive adversarial training procedure (DAVA) for variational auto-encoders, which solves the common problem that disentanglement performance is highly dependent on dataset-specific regularization strength.
23
+
24
+ - We provide extensive evaluations on several commonly used disentanglement datasets to support our claims.
25
+
26
+ # Method
27
+
28
+ The $\beta$-VAE by @higgins2016beta is a cornerstone model architecture for disentanglement learning. The loss function of the $\beta$-VAE, the evidence lower bound (ELBO), consists of a reconstruction term and a KL-divergence term weighted by $\beta$, which forces the aggregated posterior latent distribution to closely match the prior distribution. The KL-divergence term seems to promote disentanglement as shown in [@rolinek2019variational]. The $\beta$-TCVAE architecture proposed by @chen2018isolating further decomposes the KL divergence term of the ELBO into an index-code mutual information, a total correlation and a dimension-wise KL term. They are able to show that it is indeed the total correlation that encourages disentanglement and propose a tractable but biased Monte Carlo estimate. Similarly, the FactorVAE architecture [@kim2018disentangling] uses the density ratio trick with an adversarial network to estimate total correlation. The AnnealedVAE architecture [@burgess2018understanding] as well as ControlVAE [@shao2020controlvae] build on a different premise, arguing that slowly increasing the information bottleneck capacity of the latent space leads to the model gradually learning new latent dimensions and thus disentangling them. We will use a similar but optimized approach for DAVA. More recent promising approaches are presented by @wei2021orthogonal using orthogonal Jacobian regularization and by @chen2021recursive applying regulatory inductive bias recursively over a compositional feature space.
29
+
30
+ When combining autoencoders (AEs) with an adversarial setting, one can connect the adversarial network either on the latent space or on the output of the decoder. @makhzani2015adversarial proposed an adversarial AE (AAE) that uses an adversarial discriminator network on the latent space of an AE to match its aggregated posterior to an arbitrary prior. There, the encoder of the AE acts as the generator of the GAN and is the only part of the AE that gets updated with respect to the discriminator's loss. This kind of training is strongly connected to VAE training, with the adversarial loss taking on the role of KL divergence in the classical VAE training objective but without the constraint of a Gaussian prior. The previously mentioned FactorVAE [@kim2018disentangling] implements an adversarial loss on the latent space to reduce total correlation. @larsen2016autoencoding proposed using a discriminator network with the decoder of the VAE acting as generator to improve the visual fidelity of the generated images, but with no focus on disentanglement. The difference to DAVA is that they used the discriminator on the real observations, while we propose a discriminator that only sees observations generated by the decoder of the VAE. @zhu2020learning introduce a recognition-network-based loss on the decoder that encourages predictable latent traversals. In other words, given a pair of images where all but one latent dimension is kept constant, the recognition network should be able to predict in which latent dimension the change occurred. Applied on top of a baseline VAE model, this loss slightly improves the disentanglement performance of the baseline VAE. In a semi-supervised setting, @carbajal2021disentanglement and @han2020disentangled propose adversarial losses on the latent space of VAEs to disentangle certain parts of the latent space from information present in labels. Unfortunately, such an approach does not work in an unsupervised setting with the goal of disentanglement of the complete latent space.
31
+
32
+ All supervised metrics have in common that they are dependent on a canonical factorization of ground truth factors. Given access to the generative process of a dataset, the FVAE metric [@kim2018disentangling] can be used to evaluate a model. It first creates a batch of data generated by keeping one ground truth factor fixed and randomly sampling from the other ground truth factors. It then uses a majority vote classifier to predict the index of a ground truth factor given the variance of each latent dimension computed over said batch. Without access to the generative process, but given a number of fully labeled samples, one can use metrics like DCI Disentanglement by @eastwood2018framework and MIG by @chen2018isolating. While DCI in essence assesses if a latent variable of a model captures only a single ground truth factor, MIG evaluates if a ground truth factor is captured only by a single variable. As a result, DCI compared to MIG does not punish multi-dimensional representations of a single ground truth factor, for example the RGB model of color or a sine/cosine representation of orientation.
33
+
34
+ There exists a small number of unsupervised disentanglement metrics. The unsupervised disentanglement ranking (UDR) by Duan et al. [@duan2019unsupervised] evaluates disentanglement based on the assumption that a representation should be disentangled if many models, differently initialized but trained with the same hyperparameters, learn the same representation. To achieve this, they compute pairwise similarity (up to permutation and sign inverse) of the representations of a group of models. The score of a single model is the average of its similarity to all the other models in the group. ModelCentrality (MC) [@lin2020infogan] builds on top of UDR by improving the pairwise similarity evaluation. A drawback is the high computational effort, as to find the optimal hyperparameter setting, multiple models need to be trained for each setting. UDR and MC do not assume any fixed set of ground truth factors. Nevertheless, a weakness of these approaches is that they do not recognize similarity of a group of models that each learn a different bijective mapping of the ground truth factors. The latent variation predictability (VP) metric by @zhu2020learning is based on the assumption that if a representation is well disentangled, it should be easy for a recognition network to predict which variable was changed given two input images with a change in only one dimension of the representation. An advantage of the VP metric is its ability to evaluate GANs as it is not dependent on the model containing an encoder. In comparison to our proposed metric, VP, UDR and MC are dependent on the size of the latent space or need to define additional hyperparameters to recognize inactive dimensions. @estermann2020robust showed that the UDR has a strong positive bias for low-dimensional representations, as low-dimensional representations are more likely to be similar to each other. One could see the same issue arise for the VP metric, as accuracy for the recognition network will likely be higher when there is only a low number of change-inducing latent dimensions available to choose from.
35
+
36
+ The effects of disentangled representations on downstream task performance are manifold. @van2019disentangled showed that for abstract reasoning tasks, disentangled representations enabled quicker learning. We show in Section [4.2](#subsec:downstream_performance){reference-type="ref" reference="subsec:downstream_performance"} that the same holds for our proposed metric. In work by @higgins2017darla, disentangled representations provided improved zero-shot domain adaptation for a multi-stage reinforcement-learning (RL) agent. The UDR metric [@duan2019unsupervised] correlated well with the data efficiency of a model-based RL agent introduced by Watters et al. [@watters2019cobra]. Disentangled representations further seem to be helpful in increasing the fairness of downstream prediction tasks [@locatello2019fairness]. Contrary to previous qualitative evidence [@eslami2018neural; @higgins2018scan], @montero2020role present quantitative findings indicating that disentanglement does not have an effect on out-of-distribution (OOD) generalization. Recent work by @schott2022visual supports the claims of @montero2020role, concluding that disentanglement shows improvement in downstream task performance but not so in OOD generalization.
37
+
38
+ We first introduce the notation of the VAE framework, closely following the notation used by @kim2018disentangling. We assume that observations $x \sim \tilde{D}$ are generated by an unknown process based on some independent ground-truth factors. The goal of the encoder is to represent $x$ in a latent vector $z \in \mathbb{R}^d$. We introduce a Gaussian prior $p(z) = \mathcal{N}(0,\mathit{I})$ with identity covariance matrix on the latent space. The variational posterior for a given observation $x$ is then $q_\theta(z|x) = \prod_{j=1}^d\mathcal{N}(z_j|\mu_j(x), \sigma_j^2(x))$, where the encoder with weights $\theta$ outputs mean and variance. The decoder with weights $\phi$ projects from the latent space $z$ back to observation space $p_\phi(x|z)$. We can now define the distribution $q(z)$.
39
+
40
+ ::: definition
41
+ **Definition 1** (EP). The empirical posterior (EP) distribution $q(z)$ of a VAE is the multivariate distribution of the latent vectors $z$ over the data distribution $\tilde{D}$. More formally, $q(z) = \mathbb{E}_{x \sim \tilde{D}}[q_{\theta}(z|x)]$. We can reconstruct an observation $x \sim \tilde{D}$ the following way: We sample $z \sim q_\theta(z|x)$ and then get the reconstruction $\hat{x} \sim p_{\phi}(x|z)$. We informally call observations generated by this process reconstructed samples and denote them as $\hat{x}$.
42
+ :::
43
+
44
+ The decoder is not constrained to only project from $q(z)$ to observation space. We can sample observations from the decoder by using different distributions on $z$. We define a particularly useful distribution.
45
+
46
+ ::: definition
47
+ **Definition 2** (FP). The factorial posterior (FP) distribution $\Bar{q}(z)$ of a VAE is a multivariate distribution with diagonal covariance matrix. We define it as the product of the marginals of the EP distribution: $\Bar{q}(z) = \prod_{j=1}^{d}q(z_j)$. We can use the decoder to project $z \sim \Bar{q}(z)$ to observation space $\tilde{x} \sim p_{\phi}(x|z)$. We informally call observations created by this process generated samples and denote them as $\tilde{x}$.
48
+ :::
49
+
50
+ We can now define the data distributions that arise when using the decoder to project images from either the EP or the FP.
51
+
52
+ ::: definition
53
+ **Definition 3** ($\tilde{D}_{\text{EP}}, \tilde{D}_{\text{FP}}$). $\tilde{D}_{\text{EP}}$ is generated by the decoder projecting observations from the EP latent distribution $q(z)$, i.e. reconstructed samples. $\tilde{D}_{\text{EP}} = \mathbb{E}_{z \sim q(z)}[p_{\phi}(x|z)]$. $\tilde{D}_{\text{FP}}$ is generated by the decoder projecting observations from the FP distribution $\Bar{q}(z)$, i.e. generated samples. $\tilde{D}_{\text{FP}} = \mathbb{E}_{z \sim \bar{q}(z)}[p_{\phi}(x|z)]$.
54
+ :::
55
+
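+ Sampling from the two distributions requires only the trained encoder and decoder. A sketch, assuming `encode` returns the posterior means and standard deviations of a batch and `decode` maps latents back to observations; the FP distribution is approximated by shuffling each latent dimension independently across the batch:
+
+ ```python
+ import numpy as np
+
+ def sample_ep(encode, decode, x_batch, seed=0):
+     """Reconstructed samples: project latents drawn from the EP q(z)."""
+     rng = np.random.default_rng(seed)
+     mu, sigma = encode(x_batch)
+     z = mu + sigma * rng.standard_normal(mu.shape)
+     return decode(z)
+
+ def sample_fp(encode, decode, x_batch, seed=0):
+     """Generated samples: project latents drawn from the FP q_bar(z),
+     the product of the marginals of q(z)."""
+     rng = np.random.default_rng(seed)
+     mu, sigma = encode(x_batch)
+     z = mu + sigma * rng.standard_normal(mu.shape)
+     for j in range(z.shape[1]):      # break dependencies between dimensions
+         rng.shuffle(z[:, j])
+     return decode(z)
+ ```
+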
56
+ We can now define the core concept of this paper. We look at the similarity of two data distributions generated by the decoder, $\tilde{D}_{\text{EP}}$ and $\tilde{D}_{\text{FP}}$.
57
+
58
+ ::: {#def:pipe .definition}
59
+ **Definition 4** (PIPE). The posterior indifference projection equivalence (PIPE) of a VAE represents the similarity of the data distributions $\tilde{D}_{\text{EP}}$ and $\tilde{D}_{\text{FP}}$. In other words, PIPE is a measure of the decoder's indifference to the latent distribution it projects from.\
60
+ $\mathit{PIPE}(\theta, \phi) = \omega(\mathbb{E}_{z \sim q(z)}[p_{\phi}(x|z)], \mathbb{E}_{z \sim \bar{q}(z)}[p_{\phi}(x|z)])$, where $\omega$ is a general similarity measure.
61
+ :::
62
+
63
+ Unsupervised disentanglement learning without any inductive biases is impossible as has been proven by @locatello2019challenging. Given a disentangled representation, it is always possible to find a bijective mapping that leads to an entangled representation. Without knowledge of the ground truth factors, it is impossible to distinguish between the two representations. We argue that PIPE is necessary for a disentangled model, even when it is not sufficient. Suppose a model has learned a disentangled representation, but $\tilde{D}_{\text{EP}}$ is not equivalent to $\tilde{D}_{\text{FP}}$. There are two cases where this could happen. The first possibility is that the latent dimensions of the model are not independent. This violates the independence assumption of the ground truth factors, so the model cannot be disentangled. The second possibility is that the model has not learned a representation of the ground truth factors and is generating samples that are not represented by the ground truth factors. This model would not be disentangled either. We conclude that PIPE is a necessary condition for disentanglement.
64
+
65
+ <figure id="fig:metric/samples">
66
+ <p><br />
67
+ </p>
68
+ <figcaption>Shown are two models of <code>disentanglement-lib</code> <span class="citation" data-cites="locatello2019challenging"></span> with different disentanglement performance. The upper part shows a poorly disentangled model while the lower part shows a comparatively well disentangled model. In (a) and (c), odd columns show images sampled from the <em>AbstractDSprites</em> dataset, even columns show the corresponding model reconstruction. (b) and (d) show images generated by the decoder of each model given the factorial posterior distribution. While the upper model achieves better reconstructions (a), its generated samples (b) are visibly out of distribution. The bottom model ignores shape in its reconstructions (c), but its generated samples (d) capture the true data distribution more accurately.</figcaption>
69
+ </figure>
70
+
71
+ In Figure [2](#fig:metric/samples){reference-type="ref" reference="fig:metric/samples"} we show reconstructed and generated samples of two different models to support our chain of reasoning. Generated samples of the entangled model (Figure [2](#fig:metric/samples){reference-type="ref" reference="fig:metric/samples"} (b)) look visibly out of distribution compared to the reconstructed samples. Generated samples of the disentangled model (Figure [2](#fig:metric/samples){reference-type="ref" reference="fig:metric/samples"} (d)) appear to be equivalent to the distribution of the reconstructed samples.
72
+
73
+ We propose a way to quantify the similarity measure $\omega$ of PIPE with a neural network and call this the PIPE metric. Given a model $\mathcal{M}$ with encoder weights $\theta$, decoder weights $\phi$, and corresponding $\tilde{D}_{\text{EP}}, \tilde{D}_{\text{FP}}$, we proceed as follows (see Appendix [8.2](#app:sampling_strategies){reference-type="ref" reference="app:sampling_strategies"} for details on how to sample from $\tilde{D}_{\text{EP}}$ and $\tilde{D}_{\text{FP}}$):
74
+
75
+ 1. Create a set of observations $S_{EP}$ by sampling from $\tilde{D}_{\text{EP}}$. Informally, these are the reconstructed samples.
76
+
77
+ 2. Create a set of observations $S_{FP}$ by sampling from $\tilde{D}_{\text{FP}}$. Informally, these are the generated samples.
78
+
79
+ 3. Randomly divide $S_{EP} \cup S_{FP}$ into a train and a test set.
80
+
81
+ 4. Train a discriminator network on the train set.
82
+
83
+ 5. Evaluate accuracy $acc$ of the discriminator on the test set.
84
+
85
+ 6. Since a random discriminator will guess half of the samples accurately, we report a score of $2\cdot(1 - acc)$, such that 1 is the best and 0 is the worst score.
86
+
87
+ We train the discriminator network for $10,000$ steps to keep the distinction of $S_{EP}$ and $S_{FP}$ sufficiently difficult. We further use a uniform factorial distribution instead of FP for a slight improvement in performance. Details on the implementation can be found in Appendix [8](#app:metric_details){reference-type="ref" reference="app:metric_details"}.
88
+
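+ A minimal sketch of the metric computation, using a plain logistic-regression discriminator in place of the actual network (`reconstruct` and `generate` are assumed to return `n` flattened samples from $\tilde{D}_{\text{EP}}$ and $\tilde{D}_{\text{FP}}$, respectively):
+
+ ```python
+ import numpy as np
+
+ def pipe_metric(reconstruct, generate, n=2048, steps=10_000, lr=0.1, seed=0):
+     """Returns a score in [0, 1]: 1 when the discriminator is at chance level."""
+     rng = np.random.default_rng(seed)
+     X = np.concatenate([reconstruct(n), generate(n)])
+     y = np.concatenate([np.zeros(n), np.ones(n)])
+     idx = rng.permutation(2 * n)                  # random train/test split
+     X, y = X[idx], y[idx]
+     X_tr, y_tr, X_te, y_te = X[:n], y[:n], X[n:], y[n:]
+
+     w, b = np.zeros(X.shape[1]), 0.0              # logistic-regression weights
+     for _ in range(steps):
+         p = 1.0 / (1.0 + np.exp(-(X_tr @ w + b)))
+         g = p - y_tr                              # gradient of the log-loss
+         w -= lr * (X_tr.T @ g) / n
+         b -= lr * g.mean()
+
+     acc = np.mean(((X_te @ w + b) > 0.0) == (y_te == 1.0))
+     return 2.0 * (1.0 - acc)
+ ```
+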
89
+ To classify the performance of the PIPE metric, we evaluate correlations with supervised metrics on a diverse set of commonly used datasets. Namely, we consider *Shapes3D* [@3dshapes18], *AbstractDSprites* [@van2019disentangled] and *Mpi3d Toy* [@gondal2019transfer]. All datasets are part of `disentanglement-lib` [@locatello2019challenging], which we used to train the models we evaluated our metric on. More details on the implementation of the PIPE metric and the evaluated models can be found in Appendix [8](#app:metric_details){reference-type="ref" reference="app:metric_details"}. Results are displayed in Figure [3](#fig:metric/correlation_supervised){reference-type="ref" reference="fig:metric/correlation_supervised"}. They show that the PIPE metric correlates positively with existing supervised metrics DCI, MIG and FVAE, surpassing the correlations of the unsupervised baselines. While the performance of UDR on *Mpi3d Toy* and the performance of MC on *AbstractDSprites* are lacking, PIPE demonstrates consistent performance across all datasets. Further, as opposed to UDR and MC, PIPE can be evaluated on a single model only, whereas UDR and MC are only able to evaluate sets of models.
90
+
91
+ <figure id="fig:metric/correlation_supervised" data-latex-placement="h!">
92
+
93
+ <figcaption>Spearman rank correlation between different metrics on three different datasets. Correlations take values in the range <span class="math inline">[−1, 1]</span>, where 1 means perfect correlation, 0 means no correlation, and negative values mean anti-correlation. For better readability, we have multiplied all values by 100. PIPE, UDR <span class="citation" data-cites="duan2019unsupervised"></span> and MC <span class="citation" data-cites="lin2020infogan"></span> are the corresponding unsupervised metrics. We include Reconstruction Loss Rec as a trivial unsupervised baseline. DCI, MIG and FVAE are the corresponding supervised metrics. Low or even negative correlations of our metric with UDR and MC show that our metric captures a different aspect of disentanglement. The correlation of our metric with supervised disentanglement metrics is mostly consistent across datasets. We note that the correlation with supervised metrics on the <em>Mpi3d Toy</em> dataset changes direction for both UDR and Rec, while MC has difficulties with <em>AbstractDSprites</em>. </figcaption>
94
+ </figure>
95
+
96
+ To further quantify the usefulness of the PIPE metric, we analyze the predictive performance of the metric for an abstract reasoning downstream task. For the downstream task, a VAE is trained to learn a disentangled representation of a dataset. The downstream model then gets access only to this representation when trying to solve the downstream task. @van2019disentangled evaluated different supervised disentanglement metrics in terms of predictive performance of accuracy in an abstract reasoning task on the datasets *Shapes3D* and *AbstractDSprites*. They showed that good disentanglement is an indicator of better accuracy during the early stages of training. We reproduce their experiment on a reduced scale by only considering up to 6,000 training steps. As can be seen in Figure [4](#fig:metric/downstream){reference-type="ref" reference="fig:metric/downstream"}, our metric is on par with supervised metrics and clearly outperforms the unsupervised baselines. More importantly, this means that PIPE is a positive predictor of accuracy of downstream models in a few-sample setting and is therefore a desirable property for unsupervised model selection.
97
+
98
+ <figure id="fig:metric/downstream" data-latex-placement="h!">
99
+ <img src="figures/downstream/Combined_correlations.tikz" style="height:4cm" />
100
+ <figcaption>Spearman rank correlation between different disentanglement metrics and downstream accuracy of the abstract visual reasoning task <span class="citation" data-cites="van2019disentangled"></span> after 6,000 training steps for FactorVAE and <span class="math inline"><em>β</em></span>-TCVAE. The same metrics as in Figure <a href="#fig:metric/correlation_supervised" data-reference-type="ref" data-reference="fig:metric/correlation_supervised">3</a> are evaluated. Correlation for unsupervised metrics are more sensitive to model architecture than for supervised metrics. PIPE shows higher correlation with downstream performance than UDR and clearly outperforms the Rec baseline.</figcaption>
101
+ </figure>
102
+
103
+ ::: wrapfigure
104
+ r0.5 ![image](figures/training_schedule/aaae_schematic.drawio.pdf){width="50%"}
105
+ :::
106
+
107
+ In the previous section, we established that PIPE is desirable for a model. We now present how to encourage this property at training time by designing an adversarial training procedure. We train a discriminator and a VAE at the same time. The discriminator needs to differentiate $\tilde{D}_{\text{EP}}$ from $\tilde{D}_{\text{FP}}$, which is achieved with the following loss: $$\begin{equation}
108
+ \nonumber
109
+ \begin{aligned}
110
+ \mathcal{L}_{dis} = &\mathbb{E}_{\hat{x} \sim \tilde{D}_{\text{EP}}}[\mathrm{log}(Dis(\hat{x}))] + \\
111
+ &\mathbb{E}_{\tilde{x} \sim \tilde{D}_{\text{FP}}}[\mathrm{log}(1 - Dis(\tilde{x})) ].
112
+ \end{aligned}
113
+ \label{eq:loss_disc}
114
+ \end{equation}$$ The loss function for the VAE is composed of a reconstruction term and the weighted negative objective of the discriminator: $$\begin{equation}
115
+ \nonumber
116
+ \begin{aligned}
117
+ &\mathcal{L}_{adv} = \mathcal{L}_{rec} - \mu \mathcal{L}_{dis}.
118
+ \end{aligned}
119
+ \label{eq:loss_vae_disc}
120
+ \end{equation}$$ with reconstruction loss $$\begin{equation}
121
+ \nonumber
122
+ \mathcal{L}_{rec} = \mathbb{E}_{q_\theta(z|x)} [\mathrm{log} p_\phi(x | z)]
123
+ \end{equation}$$
124
+
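+ The loss arithmetic is straightforward; a sketch of Equations [\[eq:loss_disc\]](#eq:loss_disc){reference-type="ref" reference="eq:loss_disc"} and [\[eq:loss_vae_disc\]](#eq:loss_vae_disc){reference-type="ref" reference="eq:loss_vae_disc"} given the discriminator outputs (the optimizer steps are omitted; `eps` is a numerical-stability assumption):
+
+ ```python
+ import numpy as np
+
+ def adversarial_losses(dis_ep, dis_fp, rec_loss, mu, eps=1e-8):
+     """dis_ep: Dis(x_hat) on reconstructed samples from D_EP;
+     dis_fp: Dis(x_tilde) on generated samples from D_FP."""
+     l_dis = np.mean(np.log(dis_ep + eps)) + np.mean(np.log(1.0 - dis_fp + eps))
+     l_adv = rec_loss - mu * l_dis
+     return l_dis, l_adv
+ ```
+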
125
+ While this works in practice, it faces the issue of the weight $\mu$ being very dataset-specific. It is also closely related to FactorVAE, with the difference that FactorVAE applies the adversarial loss to the latent space and to the encoder only. Let us consider an alternative approach. Work by @burgess2018understanding looked at the KL-divergence from an information bottleneck perspective. They proposed using a controlled latent capacity increase by introducing a loss term on the deviation from some goal capacity $C$ that increases during training: $$\begin{equation}
126
+ \nonumber
127
+ \mathcal{L}_{C} = |\mathrm{KL}(q(z|x)||p(z)) - C|
128
+ \label{eq:loss_annealed}
129
+ \end{equation}$$
130
+
131
+ Such a loss provides a disentanglement prior complementary to the minimizing total correlation prior used in $\beta$-TCVAE and FactorVAE. Unfortunately, it too depends on the specific choice of hyperparameters, mainly the max capacity $C$ and the speed of capacity increase. We now demonstrate how to incorporate both the adversarial loss and the controlled capacity increase into a single loss function. We call this approach the disentangling adversarial variational auto-encoder (DAVA) (also see Figure [\[fig:aaae_schematic\]](#fig:aaae_schematic){reference-type="ref" reference="fig:aaae_schematic"}):
132
+
133
+ $$\begin{equation}
134
+ \nonumber
135
+ \mathcal{L}_{\mathit{DAVA}} = \mathcal{L}_{rec} + \gamma \mathcal{L}_{C} - \mu \mathcal{L}_{dis}
136
+ \end{equation}$$
137
+
138
+ The main strength of DAVA is its ability to dynamically tune $C$ and $\mu$ by using the accuracy of its discriminator. This yields a model that performs well on a diverse range of datasets. We now outline the motivation and the method for tuning $C$ and $\mu$ during training.
139
+
140
+ ::: wrapfigure
141
+ R0.5
142
+ :::
143
+
144
+ The goal of the controlled capacity increase in the AnnealedVAE architecture is to allow the model to learn one factor at a time, each into an individual latent dimension. The order in which they are learned corresponds to their respective contribution to the reconstruction loss. In DAVA, we want to encourage the model to *learn new factors by increasing $C$ as long as it has learned a disentangled representation of the currently learned factors*. This is the case when the discriminator cannot distinguish $\tilde{D}_{\text{EP}}$ from $\tilde{D}_{\text{FP}}$. As soon as the accuracy of the discriminator increases, we want to stop increasing $C$ and increase the weight of the adversarial loss to ensure that the model does not learn any new factors of variation while helping it to disentangle the ones it has picked up into individual dimensions. As can be seen in Figure [\[fig:training/capacity\]](#fig:training/capacity){reference-type="ref" reference="fig:training/capacity"}, this results in dataset-specific schedules of $C$ over the course of the training. Algorithm [\[algo:dava\]](#algo:dava){reference-type="ref" reference="algo:dava"} in Appendix [9.1](#app:hyperparams){reference-type="ref" reference="app:hyperparams"} describes the training of DAVA in more detail.
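+
+ A sketch of the resulting controller (the threshold and step sizes are assumptions, not the values used in the paper):
+
+ ```python
+ def update_schedule(C, mu, disc_acc, acc_threshold=0.55,
+                     C_step=0.01, mu_step=0.01):
+     """Adapt the capacity C and adversarial weight mu from the accuracy of
+     the discriminator that separates reconstructed from generated samples."""
+     if disc_acc < acc_threshold:
+         # EP and FP are indistinguishable: the factors learned so far are
+         # disentangled, so increase the capacity to admit new factors.
+         C += C_step
+     else:
+         # The discriminator detects entanglement: freeze the capacity and
+         # strengthen the adversarial loss instead.
+         mu += mu_step
+     return C, mu
+ ```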
2303.06121/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2303.06121/paper_text/intro_method.md ADDED
@@ -0,0 +1,76 @@
1
+ # Introduction
2
+
3
+ Pretraining models on large, diverse datasets and transferring their representations to downstream applications is becoming common practice in deep learning. Such representations should capture useful information from available data while ignoring irrelevant features and noise [@DietterichTC18; @efroni2021provable]. For instance, an object recognition model may achieve high accuracy on its training set while strongly relying on "spurious" correlations between background features and important objects. When transferred to new data with the same objects but a different distribution of backgrounds, performance may suffer. Similarly, a policy for vision-based robot control [@LevineFDA16; @FinnGL16] may perform well in its original training environment, but fail catastrophically when transferred to a new setting [@NagabandiCLFALF19] where all task-relevant features are the same but minor background features differ from training.
4
+
5
+ A promising principle for encouraging robustness to out-of-distribution observations is informational parsimony: learning features that contain minimal information [@tishby2000information] about the input while still containing enough information to solve the task(s) of interest. Such regularization is usually based on imposing an *information bottleneck* (IB) [@tishby2015deep] on the network that tries to minimize the amount of information flowing from the input, through the bottleneck, to the output prediction.
6
+
7
+ Although IB approaches have been beneficial in some cases, adding an information bottleneck at the penultimate step of computation does little to prevent overfitting in the preceding steps of computation, which typically comprise the overwhelming bulk of the model. To that end, *we consider restricting the information used throughout a computation rather than just restricting the information that comes out of a computation*.
8
+
9
+ In this paper, we propose learning input-conditioned functions that *gate* the flow of information through a model. For example, we can consider what happens when one inserts an IB near the beginning of a model's computation rather than near the end. In this case, we train the information gating functions to minimize information flow from the input into the rest of the model while still permitting the model to solve the task(s) of interest. We implement this gating using a differentiable parameterization of the signal-to-noise ratio. As the noise level goes up, the signal level goes down, and hence the model displays reduced dependencies on the input. Gating functions can be learnt in conjunction with any downstream loss corresponding to some task of interest. For example, the downstream loss could be a standard contrastive loss for self-supervised learning or an inverse dynamics loss when learning representations for reinforcement learning (RL). In both cases, we can learn gating functions that *reveal minimal information* used to optimize the loss or learn adversarial gating functions that will *remove any information* that can be used to optimize the loss. We primarily focus on InfoGating representations used for learning control policies in RL. The RL paradigm naturally focuses on learning a mapping from observations to actions, discarding most information that is not useful for control; InfoGating makes this notion explicit by capturing only what the agent can affect. Using background distractors and multi-object interactions as a form of noise/irrelevant features, we see that InfoGating is able to remove almost all of the irrelevant information from the pixels, leading to better out-of-distribution generalization in the presence of noise and better in-distribution generalization in the presence of multiple task-irrelevant objects.
10
+
11
+ Our main contributions are as follows: **1.** A general-purpose, practical framework for informational parsimony called InfoGating that can restrict information flow throughout a computation to learn robust representations. **2.** Qualitative analyses on the properties of the gating functions learned with InfoGating that show they are semantically meaningful and enhance interpretability. **3.** Quantitative analyses of applying InfoGating in the context of various downstream objectives which show clear benefits in terms of improved generalization performance.
12
+
13
+ # Method
14
+
15
+ This section describes the primary downstream losses we use with InfoGating in this paper, namely an InfoNCE based contrastive loss and a multi-step inverse dynamics loss.
16
+
17
+ **Mutual Information Estimation via InfoNCE**. Given $\rvx$ and $\rvz$ as two random variables, their mutual information can be defined as the decrease in uncertainty when observing $\rvx$ given $\rvz$, compared to just observing $\rvx$: $I(\rvx, \rvz) = H(\rvx) - H(\rvx \ | \ \rvz)$, where $H$ is the Shannon entropy. InfoNCE [@oord2018representation], based on Noise Contrastive Estimation [@gutmann2010noise], computes a lower bound on $I(\rvz^1, \rvz^2)$, where $\rvz^1$ and $\rvz^2$ are two representations of the input $\rvx$ produced by some encoder $f(\rvx)$. Specifically, the bound is optimized by discriminating "positive" and "negative" pairs:
18
+
19
+ $$\begin{equation}
20
+ \label{eq:infonce}
21
+ \mathcal{L}_\text{InfoNCE} = \mathbb{E}_{Z^{-}} \left[\log \frac{e^{\psi(\rvz^1, \ \rvz^2)}}{e^{\psi(\rvz^1, \ \rvz^2)} + \sum_{\rvz^{-} \in \ Z^{-}} e^{\psi(\rvz^1, \ \rvz^{-})}} \right],
22
+ \end{equation}$$
23
+
24
+ where $\psi$ is a pairwise, scalar-valued function of the representations and $Z^{-}$ is a batch of "negative samples". Typically, self-supervised contrastive learning uses data augmentation such as random cropping and color jittering to define two augmented views $(\rvz^1, \ \rvz^2)$ of a given input as "positives" while views $\rvz^{-}$ of other inputs are treated as "negatives". The InfoNCE objective then encourages the representations of positive views of $\rvx$ to be close (i.e., $\psi(\rvz^1, \ \rvz^2)$ is high), while pushing apart the negatives (i.e., $\psi(\rvz^1, \ \rvz^{-})$ is low) [@bachman2019learning; @chen2020simple].
25
+
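+ A compact NumPy sketch of the estimator in Equation [\[eq:infonce\]](#eq:infonce){reference-type="ref" reference="eq:infonce"} with in-batch negatives and a dot-product critic $\psi$ (an illustrative choice; $\psi$ is left general in the text):
+
+ ```python
+ import numpy as np
+
+ def info_nce(z1, z2):
+     """z1, z2: (batch, dim) arrays of paired representations. Row i of z2 is
+     the positive for row i of z1; all other rows act as negatives. Returns
+     the lower bound to be maximized."""
+     logits = z1 @ z2.T                                # psi(z1_i, z2_j)
+     logits -= logits.max(axis=1, keepdims=True)       # numerical stability
+     log_norm = np.log(np.exp(logits).sum(axis=1, keepdims=True))
+     return float(np.mean(np.diag(logits - log_norm)))
+ ```
+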
26
+ **Multi-Step Inverse Dynamics Models**. Multi-step inverse dynamics models [@efroni2021provable] predict the action(s) that took an agent from some observation $\rvx_t$ to some future observation $\rvx_{t+k}$, attempting to learn a useful representation of the observations. This resembles learning goal-conditioned policies through relabelling future observations as achieved goals. Although the trivial case of $k=1$ (a one-step model) does not capture long term dependencies, recent work has shown that multi-step models are able to capture more information that may be useful for controlling the agent [@lamb2022guaranteed]. Multi-step inverse models can be used to learn representations by simply predicting the actions $(\rva_{t},...,\rva_{t+k-1})$ conditioned on $\rvx_t$ and $\rvx_{t+k}$. Actions can be predicted using a standard max likelihood objective or with a contrastive reconstruction objective which maximizes the InfoNCE-based lower bound on $I((\rvx_t, \rvx_{t+k}), (\rva_{t},...,\rva_{t+k-1}))$. The learning objective in this case looks as follows: $$\begin{equation*}
27
+ \label{eq:inv_dynamics}
28
+ \mathcal{L} = \text{InfoNCE} \big((\rvz_t, \rvz_{t+k}), \rvy_{t:t+k-1} \big),
29
+ \end{equation*}$$ where $\rvz_t$ and $\rvz_{t+k}$ are representations computed from $\rvx_t$ and $\rvx_{t+k}$, and $\rvy_{t:t+k-1}$ is a representation computed from $(\rva_{t},...,\rva_{t+k-1})$. The negative samples in this case come from randomly sampling action sequences of the appropriate length from the agent's collected experience. In the simplest case, the representation $\rvy_{t:t+k-1}$ may depend only on $\rva_t$.
30
+
31
+ We present two approaches to InfoGating that we describe as *cooperative* and *adversarial*. Cooperative InfoGating involves keeping as little information as possible to obtain good performance on the downstream task. Adversarial InfoGating involves removing as little information as possible to preclude good performance on the downstream task. We include both approaches in this paper since identifying *minimal sufficient information* (cooperative) and identifying *any useful information* (adversarial) may have different effects depending on the downstream loss and the application domain.
32
+
33
+ The InfoGating approach has two major components. The first is an encoder of the input $\rvx$, $f(\rvx)$. The second is an information gating function, $ig(\rvx)$ (see Figure [1](#fig:main_fig){reference-type="ref" reference="fig:main_fig"}). In principle, we can gate information passing through any layer in the representation network that computes the encoding $\rvz = f(\rvx)$. The $ig(\rvx)$ function provides continuous-valued masks (values in $[0, 1]$) that describe where to erase information from the computation graph for $f(\rvx)$. The shape/size of the output of $ig(\rvx)$ will depend on where we want to gate information. For instance, if we wish to gate information in the input pixel space, $ig(\rvx)$ masks are the size of the image. In general, the goal of $ig(\rvx)$ is to erase as much information from $f(\rvx)$ as possible without hurting task performance.
34
+
35
+ We primarily focus on cooperative InfoGating in this paper and we consider two cooperative variants: gating in the input space or in the feature space.
36
+
37
+ **InfoGating in Input Space**. We apply InfoGating to an input $\rvx$ by taking a simple convex combination of $\rvx$ and random Gaussian noise $\rvepsilon$. The combination weights are given by $ig(\rvx)$: $$\begin{equation}
38
+ \label{eq:infomask}
39
+ \rvx^{ig} = ig(\rvx) \odot \rvx + (1 - ig(\rvx)) \odot \rvepsilon,
40
+ \end{equation}$$ where $\rvx^{ig}$ denotes the info-gated input and $\odot$ denotes element-wise multiplication. An all-zero mask corresponds to complete noise (i.e., erasing all information from the input), while an all-one mask corresponds to keeping the original input. The function $ig(\rvx)$ is learnt using the same downstream objective that is used to learn $f(\rvx)$. We encourage masks to remove information from the input by minimizing their L1 norm. This tends to produce sparse masks due to properties of the L1 norm [@ng2004feature]. The overall objective for learning with InfoGating is $$\begin{equation}
41
+ \label{eq:gen_infomask}
42
+ \mathcal{L}_{ig, f} = \mathcal{L}_{\text{task}} \big(f(\rvx^{ig})\big) + \lambda \ ||ig(\rvx)||_1,
43
+ \end{equation}$$ where $\mathcal{L}_{\text{task}}$ refers to any objective through which a useful representation $\rvz$ can be learnt. Note that when doing InfoGating, we minimize the downstream loss for the masked input $\rvx^{ig}$ (first term), instead of the original input $\rvx$, while masking out as much of the input as possible through the L1 penalty (second term). The $\lambda$ coefficient is a hyperparameter that controls how much of the input is masked. In principle, any loss function can be used in conjunction with InfoGating, and we consider multiple downstream objectives: contrastive learning of dynamics, Q-learning, and behavior cloning. We describe the overall procedure in Algorithm [2](#alg:infomask-code){reference-type="ref" reference="alg:infomask-code"}, which assumes contrastive multi-step inverse dynamics modeling as the downstream objective.
44
+
45
+ <figure id="alg:infomask-code" data-latex-placement="t">
+
+     Input: encoder f, masking net ig; transition (x_t, a_t, x_{t+k})
+     x_t, x_{t+k} = aug(x_t), aug(x_{t+k})               # random crop augmentation
+     g_t, g_{t+k} = ig(x_t), ig(x_{t+k})                 # get infogates
+     ε ~ N(0, I)                                         # sample Gaussian noise
+     x_t^ig   = g_t ⊙ x_t + (1 - g_t) ⊙ ε                # infogate current state
+     x_t+k^ig = g_t+k ⊙ x_t+k + (1 - g_t+k) ⊙ ε          # infogate future state
+     z_t, z_t+k = f(x_t^ig), f(x_t+k^ig)                 # get encodings
+     Cooperative: run Adam update on overall loss
+       L_ig,f = InfoNCE(z_t, z_t+k, a_t) + λ‖ig(x_t)‖₁ + λ‖ig(x_t+k)‖₁
+     Adversarial: run Adam update on masking loss
+       L_ig = -InfoNCE(z_t, z_t+k, a_t) - λ‖ig(x_t)‖₁ - λ‖ig(x_t+k)‖₁
+     and run Adam update on encoder loss
+       L_f = InfoNCE(z_t, z_t+k, a_t)
+
+ <figcaption>InfoGating Pseudocode</figcaption>
+ </figure>
60
+
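+ To make the cooperative objective concrete, the following is a minimal PyTorch sketch of a single InfoGating update in input space. The names `f`, `ig`, and `task_loss` are illustrative placeholders rather than code from the paper; `task_loss` stands in for whichever downstream objective is chosen (e.g., the InfoNCE term above).
+
+ ```python
+ import torch
+
+ def infogate_step(f, ig, x, task_loss, lam=0.01):
+     """One cooperative InfoGating update: gate the input, then train on it."""
+     gate = ig(x)                              # element-wise mask in [0, 1]
+     eps = torch.randn_like(x)                 # Gaussian noise
+     x_ig = gate * x + (1.0 - gate) * eps      # convex combination of input and noise
+     # Downstream loss on the gated input, plus an (averaged) L1 sparsity penalty.
+     loss = task_loss(f(x_ig)) + lam * gate.abs().mean()
+     loss.backward()                           # gradients reach both f and ig
+     return loss
+ ```
+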
61
+ ::: wrapfigure
62
+ r0.4 ![image](NewFigures/featinfomask_fig.pdf){width="\\linewidth"}
63
+ :::
64
+
65
+ **InfoGating in Feature Space**. As an alternative to directly gating the input pixels, we can also consider a variant where $f(\rvx)$ is masked instead of the input. Consider the following masking: $$\begin{equation}
66
+ \rvz^{ig} = ig(\rvx) \odot \rvz + (1 - ig(\rvx)) \odot \rvepsilon, \quad \quad \mbox{where} \quad \rvz = f(\rvx).
67
+ \end{equation}$$ The training objective remains the same as in the pixel-level case, i.e., minimize the downstream loss $\mathcal{L}_{\text{task}}(\rvz^{ig})$ for the masked representation $\rvz^{ig}$, while masking as much of $\rvz$ as possible (Figure [\[fig:feat_fig\]](#fig:feat_fig){reference-type="ref" reference="fig:feat_fig"}). This version is closely related to the deep variational information bottleneck (VIB), which minimizes the KL divergence between the (stochastic) representation $\rvz$ and a prior distribution, typically chosen to be a standard Gaussian. Roughly, the values in $ig(\rvx)$ can be seen as specifying how many steps of forward diffusion to run in a Gaussian diffusion process initiated at $\rvz$, where values near zero correspond to running more steps of diffusion and thus sampling from a distribution that is closer to a standard Gaussian in terms of KL divergence [@kingma2021variational].
68
+
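+ Continuing the sketch above, the feature-space variant only moves the convex combination from the input pixels to the encoder output (names again illustrative):
+
+ ```python
+ def infogate_features(f, ig, x):
+     """Gate the representation z = f(x) instead of the raw input."""
+     z = f(x)
+     gate = ig(x)                            # gating net is still conditioned on x
+     eps = torch.randn_like(z)
+     return gate * z + (1.0 - gate) * eps    # z_ig, fed to the task loss as before
+ ```
+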
69
+ Thus, like VIB methods, we aim to minimize $I(\rvx, \rvz)$, while maximizing the task performance of $\rvz$. However, feature space InfoGating uses a different optimization objective and parameterization compared to existing VIB approaches. We leave similar variations of InfoGating where arbitrary layers in a computation graph are gated for future work[^2].
70
+
71
+ The versions of InfoGating discussed above are cooperative objectives, where the gating network and the encoder work together to lower the overall loss. This leads to finding representations that capture the minimal sufficient information for a given task. We can also turn this formulation on its head, yielding an adversarial objective. In this case, the gating network is tasked with discovering masks that, when used by the encoder, lead to *maximizing* its loss. Instead of encouraging the masks to erase as much of the input as possible, the adversarial objective encourages the masks to remove as little information as possible while minimizing the encoder's performance on the downstream task. We write the adversarial objective for $ig(\rvx)$ and $f(\rvx)$ as: $$\begin{equation}
72
+ \label{eq:adv_infomask1}
73
+ \mathcal{L}_{ig} = - \mathcal{L}_{\text{task}} \big(f(\rvx^{ig})\big) - \lambda \ ||ig(\rvx)||_1, \quad\quad \mathcal{L}_{f} = \mathcal{L}_{\text{task}} \big(f(\rvx^{ig})\big)
74
+ \end{equation}$$
75
+
76
+ This gives rise to a min-max objective w.r.t. the encoder and the gating network, while cooperative InfoGating corresponds to a min-min objective (see the separate masking and encoder losses at the end of Algorithm [2](#alg:infomask-code){reference-type="ref" reference="alg:infomask-code"}).
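+
+ A rough sketch of the resulting alternating update, assuming two separate Adam optimizers `opt_ig` and `opt_f` for the gating network and the encoder (all names are ours):
+
+ ```python
+ def adversarial_step(f, ig, x, task_loss, opt_f, opt_ig, lam=0.01):
+     # Gating network: maximize the encoder's loss while keeping masks dense.
+     gate = ig(x)
+     x_ig = gate * x + (1.0 - gate) * torch.randn_like(x)
+     loss_ig = -task_loss(f(x_ig)) - lam * gate.abs().mean()
+     opt_ig.zero_grad()
+     loss_ig.backward()          # also writes grads into f; cleared below
+     opt_ig.step()
+
+     # Encoder: minimize the task loss on a freshly gated input.
+     gate = ig(x).detach()       # do not update the gating network here
+     x_ig = gate * x + (1.0 - gate) * torch.randn_like(x)
+     loss_f = task_loss(f(x_ig))
+     opt_f.zero_grad()
+     loss_f.backward()
+     opt_f.step()
+ ```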
2304.02560/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2023-03-07T16:56:07.647Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36" version="21.0.2" etag="73mfojoGhWT6BsWWhvAG" type="google"><diagram name="Page-1" id="5ytc8PO3FNJ29XLY3yl2">7V1bk6M4lv41ju1+SAdCF+Cx8rY9ETU7FdMVvd2PlE1msm0bDya7MvvXLzIIo4vTGIQEFFkz0QkmAes7Ojrn07ks4N327b/TcP/yz2QdbRaus35bwPuF67qB4+b/oWfeizPAdfzizHMar8tzpxO/xn9H5UmnPPsar6MDd2GWJJss3vMnV8luF60y7lyYpsl3/rKnZMM/dR8+l090Tid+XYWbSLrsf+N19lKc9XHt6l+i+PmFPRk45SfbkF1c3uLwEq6T77VnwYcFvEuTJCt+277dRRs6emxcihs9nvm0erE02mVN/qBE4q9w81p+t/K9snf2ZdPkdbeO6PVgAW+/v8RZ9Os+XNFPv+f45udesu2m/PiQpcmf0V2ySdLjX0Pn+FN9woYr/6K3T8kuewy38YaKwV3ymsZRmj/6f6Lv5Ycl8O7x4nizqd32Hj/494g+O0njv/NrQ/oG9DnlF4rSLHo7OyigGupcSKNkG2Xpe34J+wNS/kkpnxiS4vj7CWzX9YpzLzWgISiHLywF7Lm69wmD/JcSBjUk8DIkudTs6a+r12/RZUy+FQB+/ladCFd/Ph9h/ddrtol3UXl+HaZ//iu/TZzRL+4sHcyfdI9n6ZVPm3j/C4O8QAk6KvhvMb79pIZ/HR5eKrHSgBpChEOtUio11NgMrYPGznXBDI11GqEH8Ih7m0aQA8QnjmIaERkR4i9xd0ywJUzAFZgAX8bk0b97uLvTA4HnexwEEMgQVOc4CBzQHQEyUgQeQL68eJoQCHitBF3FJAgAE3huNcFExzzwxrmeKJcTjH34oFaDz2m4jnNQ2NW7hL6HDgh9H/HmAMEMlzqIilnkalha/Bm+bvAFTmARvmCGryN80ObsY2vjjF9r/LDN6QfAjF9H/KyufsyhnvFr7YU5Vtc/MFI6YzgASuYnckzi14DamPG7yv40i18DGmTG7yoD1Cx+DUiUGb+rDFCz+M30i24D1Cx+M/+i3QA1C+DMwOg2QIFJB8KdGRjdBqhZ/GYGRrcBaha/mYHRbYCaxW8mYHQboGbxmwkY7QaoWQBtBaIMOsYOQZfDJLdSJEQgw42LTGFhXp0wsRWaMmhMbpAwTxSYAF+BCUQ65oknQRCtn6Nfy8MkzV6S52QXbh5OZ29PINExOF3zOUn2JTT/F2XZezmk4WuWqIBrA88xcih6i7Pfj2oQeez4j/z4JteWDixP3L+V73c8eK8dfInSOB+oKOVApF+bg/CQv8WqPMUCQhrAmkabMIv/4u+lgqj80y9JnN+xEgfsO5w4IDEmLAvT5ygr/+oEtOpGS8fDDvvhwwIRRksIgtMP4p9SfHnpKZ/SNHyvXbanFxyu+DYlM/fBS390ff5L8QYn2a5Gvpm4NyCWrEfHSSqoXJV7WREQb2Yh4qpWaV8RKQc8LZFyYw3WUYdef7AC6VgrXB4tn7gSVn1FW0PHxkrReF0gyjE/5Noy+0STT6j0bMLDIV6x04/xhj0l2q3ZReVcy8+Un3+wQvS/Gty4QiQrAi2Xg5tqU+Tcnc6o/BZaFgIbonJuxtmAzdcEmn/hRhoxkxmnf2xpRpbrPOxWyZpOuzNaOcs1cPR3Qu90u6/ZVtX5msF1SXc/xW8Ryz07qug4jVZZnOzy4/zLUmjPplNctb5q0caOEOiPgaSOAYOMWzp1uLgQDmOWXamix66QPd5dcyFZBvUfTerZhV5vUx3ZEJwcmPSdem1g6eSjVJ7442hiocBlJ05u2/HovX4kOm6D0vjt4JNvhHJ5OnltjoOwJG/4anm71m+7Qeqvo8sPgzI99zV6y4a/2Dw+3j0Ej0NYbIDnKRYb2Ndi04C7uzDa4WFfpIUfh/1KolrHCCJHyApTeE++wnvydYxfg4ikwY8fEMbPMzh+DYibwY+fK4yfb3D8JsWyPD4GlP3uDyh+8QMyTr3ltI808MYKTsAiTiMNsLGBE/DtwSSzGiOGqaf1qSKXDOIyEO5iYctfBK6wH9eWIhTCnHpkCNlO0TyXLs8llWnX21y6KtTFuQyLzXpCN1Dk1xzZzQ0UIwl1jORVASqDH0lfGElgcCQbuLvdRrJn0xIGwuC5BgfvqiCFAQ4eEkL/KvbDxODNjm7z4A97hjmelJ/bszGBDeIi+7Vf0mS7zxbe7XG7buHdS0BllLrnwEijfLzCb8cL6LhQW/xwItrDTfxMSfZVPgJHop6OT7wKN5/KD7bxen20+cuNivwx+HaB7xfKwCgej/IkryGN0/WICIURfVkHuo4CRR1RnUxgaij+Fa+jJD/1lIbb6DBD2ARCj69IijwkQQhUhK0WCBtkkAzbCACCEYDMGQH4qtKgQxw8Vxg8bHDwZH+yWgLC17d5AWjnfgGi2K/1+tIeIy1gYb2esWfRJB5IKoZiSC0QpK6QsAA94RZNCVJAxACa3gKr8EjLVlifdcCxOO0aEAa1cEQhZlE+XV8Eucl5ISHka3y0y48rkuv8O9mGO8XKdDwuX4web8Jv0ea2EgplHmeb2dtw4LtGzSHEm9hQjHVoHjYHzsTiXtAXLaY5acBdzBLTl8QIG16u32zDSyUxIkvptIjkbSM/DWIHZvnpS37EaAQ3aCk/MOAtFBD4vUmMQkCsh/o3E5/2AtERaahrh931L9xInwFJZNLnt5K3W7hkQ33sb7nXSp6P3nYUriWxGF7y6tn8eV22o1DTAHrB0pVTWAFSKBUCdWSwEhXbVKBFB4YDiPznNWEf3BTm96f8Atfdvx2Hg33OQGY3Ouzz2dXlRvlf327Z3fJvVNyQf0h+unhhdnqcNM9Hq1cDDaZBJl1+jVFRP30xP0Rm7wpL4SlWZFWMBNHGiqePFHlX6KgEA9k5BaQvOOeaKcpkmKCqZvlB2RRldSEIAy1Kf6QlZo1uYJ+SJkW4fAIUdSb64niIP1r7uUqbrLIgi5RJvPg4YfKM3V0YveUShEzZ4jcuRKIAOFichs1zZeW7YSDeTVMFG9XDAJNmXemQZFJBS30rE4StKhOvAf03g8XAAoFdsAZSiqSL5q+y58eo+QGECvWJNFRMAKxN2GkN6GcFOPeg5m8GuD/ovFywvNtpaKC+kwShZQ00kFSnKzUQ1q+B6gUVmQbitJJnTithUSg83NQevV5/yA8jui1IT8FB4ruf8v+v1kmW/+dnWQo3+Qw9NFAOlxPnm/LGF8UOEplJeMT0n4JqyuhcoMzoKt49fz1OjHysdakNTzT7vUBVwBiqKAYdagOPVm00MUDMTXWV/yZNdX1bOB6xgduJJVg
i36tra7B0iHtBXzcsrqRPFLiVwKQowKVfq63E2uyxCe7AFqWV2siIgjsslPWv8fM2HI22dkFHbQ21UQLSCouMKutJBf71zt94AlgE+SYNcplso9tgN6tkt45p9a981kib7fskyQf9WUK19QZaWnwpw/tnzZiHPrbpc9h9cY4CVa1pZUy8pwF3X+bt7j7/40t+5pc4W728xH9G6X+NPzPqso2taxMuX0gERAPXUyDqqkIvdMxkFvo3q93LaAm1x0xWDvdlVMbgyvC7b0vP41iQ3K6ujq8xq89YxHXnyCemLGKffosTFyvUMPVyFxhh3JWoZbO0Yk+7btR1sLz9BlmVs8Yop76Am0mNMZcaap6xZlGxy7FmD799ni2oxtA1MJ5U+eVasJMDy76m4e7wlKRbGuUlOkD3kbous/WA42NHqHMkhA4zV4gj9zBUeS2sojUXAIg04CRzRavN+L2US4DqAI5VwWCaEcmw9RW26Y834OyMgWqiJCI/z4Awexon/Ar3wcJ9NIUFiI8pTZSmb8Vf3nn/z5dJrRFbTD130QK8RidYxRX3ZTQFivCxgvzPjvMT3/08fhPKgHoXCI18QqmWZRX1pEPBB5OinvoNwcEIiDuwDDwT882dkWqMFCE2kZoUOdMzUoHVOWWlg1TXrsEnLpc4VQhbyeU6wUIfmdu/rUxcn8oob+c6Swyx1yyITXVHXpwaZjlfazaLz/HK7dim7yVc39lwDhpUz56VTokFsLo8qFIhuTTrp7AMKmLJz7xKkHKif4tXX/89seTnK1lMeqxyvOp500CD6GAoik4AlQY76Et6rFRXq8flnXocsjWn1QZi11XwLc6KF8IAlMf0fW5y/eOS8sTpfejBe+2g4QpYD/BjZVHqW5xMqfe/UmJfDPLrMfgzkOPAPj09xTuq4XMxP4q0KopokCT6A7jHD56elQN5wtYyUEV8uarJD8FStGlazf9JEWQGa/MRq947cBR02YxbE9yIGGZpFrdJEWT9buGL2tFnFYWMACXjMipfHniiLw+gRl++brUE5qwWITXBZx1VNHvj4nO8MpJd6V3Lf50vDb6cMFGF4AdLiLDbjJBoYW0BZ1KEoWEdwzrCGtExczxXe6CAycVA1Zd9u0/SkF4WZln+JWh/9FE4MDqjgBCLw2LlprDT1IFxtdSbAo5Mf92lyeFws03WPzw6LOiRoRMo0WG9wfj0Lz3oyGFaNc8/jW5G5vw/+ncPd3e60OGNDMy2Fi7liOhIzatKy9eQ+efnLyNBQeMcwRhXdVHf2VKjKoRBKl5WPxYKEmaOfWnh25+4cwYlq4JrIvilIunnirfDkL2OFW91U0/Y8yvOtqI0zJXABUDFPQ1NPEXF91P+0oXM/pxfEm7pkrP7dtjz0qoS4guvUpym77JL0i3N6b2lvAZNeS6llX7G8pHlF81yCdwdur1D83E9ZRzYmrqb6MlYYnbHiXumF0O9/K6O6ezwHilhBZHr9pvKJdUzmV3FZJ65A7WlLTRHgUiVUtQbewAmxcf1HMDH/sQSVJNi5PqOXyZWoZoj1K6wPO3Oqkm13ew7gtnurJpU/X6TcQXYt4qbTOeFFdE6Tu+9VYKm7rLc2Fclwal6TOux6icVjtV3CWNfLGFptmIacFV831m00s37bZoPPg2buIQaT7tzA26oUU210/Gchus4OoVDl/OVnb6P02h13F6jH6X02bfr2rlD8no8p2d6ijF3xFXtaDFrh9tu1AK4zKBNqIuUOT0LeRsH+zILWkU4atexbJLOS2UXSiXgya+q6I2JVdK9ilCZ9a5mrCGQA2ZVWOtRuVcxMjPWXd0YpnatYK2idIa2QTXvnRrqFnqDkNBJ2pdD/nrby3cvplUOQBZ3syyakkUi2DuKpBFVGXU9sqgi5YYmiwu6UiO6dR++lr/PwmlIOANh0YayouzRn2pQ5f8H7MWLHSG8U9GJFwQKpgIiV0vsrStziXfJdksDoZ2nNNxG35P0T/p7QkcsXCf7Ig7Xibfhc3RznH6us03WES2s6GQJvVe8jpKxztQWNQDYZHZ0SASCvERUgQF16kpVelELWQkbkJXRbv0pTZPvp1FU+kt0dM7E2HCIsIkKr5ioRasrvvzC4yPMf463DVN2IZ24ik2cNlVnGnLFXfvsYJZTyDQC4zEuZK9Jd0KCkybdSGMyGZQpzy9pfFQZhf74IXSB1PpDu3LweUoUeYEqBl0hnUDLIs7ipBUm5l5pFtKdoxv2HtQwPBoYWGka0n4iaLWJ93I046cjh1E0HKnMwv3YbUILXUcQMyVYHRlHXlyUPWSAlhQGOKkYt55bPSEiFqc3WIEATirErW+kkNjtwSRSk4pw6xspYHNOTSrArfc5JdaZlleq/pCSqbR58/d6CIXmD4ajpOCk+kr2HCUFfZvT7ap4tnn3tvsqyGe6EFcGm/Wd0759i2SGieudI+A++dTyGyQWWfQUFDBTZhwcRAsDzIzXQdWwurrDIF/HavFxDauFmgSsl6uq5NRAa20EhHzfUwTjtZWo5XthqYKKyZ6BAMmqfDTSdar2uuBrvVbFX7XWerUpcsf27ZpEzoOWRU7BMs2FMjRsRXlIsTL1FlvDGsCNUHOcmewG2oDxRBQRLbbGbcCE+7DiT7rbgAmPoXXBgCAk3ToUANSAHps3NHvc0EQO72xU+9nX72fyNwpIM+FuJTVkGLrHmiLxmeZgZqTnttMkAebXEJ+1v9GsSqQXLp+jT5FY6YEwIJGQkNQlEoG4SunrKQDQpEjAnpthB2IXo/LYBAeIruIAhw5Uv2yt7wX2gMIyfzcDdQaoQIjuNAvUpJoI9AyUa3NGuTNQjYESjRCjQE0qfMlg2ZPA6oI1qVAmg6gBxxH3JUzCNsc1tbbcvQCZrHCCJxXYZNZ2Nw3VpCp3mbXeTUM1KerCrP1uGqqZvGhtwRuGikyKvrBow5vGbVJshk0r3jRwk2I3DNvxrEadEaAmxW4YtuKNAjUpQsOwDW8UqElRGIYteKNAzQRGe/vdKFCToi9sWu9GUZsUk2HVdjcKWyChZCDi6ZTTsahldNQSPM7kdDSOlKpnebiOtYg6j7SMgwwgWZ66hntCVCT2/YaB/F0j7EhZEuLsewppAcL13SPyWJ7UHJF3SrxohPzlGzGDqIeIPG9SRI5ZN5P4xCR1wwI8Z6iudzRNQzWTN61dTdNQzfRNa2fTNFSTInAsupumcbOSvlN5LtVBr54LNmdmCoZ8a8+F4JrnEghN2BlLoNltAQ6Bytc/m/so/gEq1YA+x2VmsTTxIab1yqSIrH73MmuFJ0qwoG+QvGIL0DSg6tnNlFrkGYXKn1RITt9uptVZ5c/kTXOooN1ZNZM3V7iZdmfVpMgbk26m5YVrUkyOUTseOFaBwxJOP9bOkY95Fxey5o3X7hzJm4LCjXoq78FeWJtP7k8qBMi4a4eN9rH3J0WgGHfuDIM1KdLEuHtnGKyZNuni4JkFK5iJky4unmGwJkWd2HXyDCM3KSbFsptnGLpJMSvGrXpFh9j+oJoUmWLcpjcK1aTCYoxb9EahmhStYdyeNw
rVTGp0seaNQjUpSsOuLW8Ut0mxG5YteYPAuayV9Q+7YSPGYOJAGNbmqT5CBB2omvLpT/ap0us+aMUiA7vJJ8ChwdwLD/toRUf9KX6jMLdv/3W5qwdtqSI0AXsg9N9CbvySUfG6PdB5vXv+epS1m2PHoPJUKUvl0efoKVvUewR1U7DIOQdvbZ5CxTzV0eDZZb3ra3gf9puYyl+WA7A7SGi37ruzOY7bIJtnSn3gdQDLCiCxWpxEhhW4Clx19OJxHZlHyaLtPklDetk+STYzsK2BdTlgPQgUMxaollY90A6kzVKtlx8iwaLezW/pAG/xcQ7F8ahhzzUjq/XSIbUkbV4nI7J0hHu2Xr59qW+bzuVbQfQUy/dqnWSjWb2L5kpdVm/Sz+qMWDVCI2vzj97TSJw7qL0NjZaec5refE95DNz+JuTchCgfenI2h0yMPmurVpHYW0wnhiryiGyozbPnoCX/eU3oWcoi3DB18Cm/hMpt0X2YXZL/9kz/+6nqUl/c8FuqvCPVjTeF6UXv57r7N+Xtfkqjm5+/F08u75h/vT37WJ/NlxZf7ocx+qpK+kzeXKXRp7LngaNlMZCZsFWyW4XZjGnr4p0ejyk8cdT1vup92fFgpsj4WgGuL6QpN18MYD1NWVhjoPhmGpcGoKDLRFCHyE9fuwP0nIbrODo1Oy2ntQ7FytKemDFGVPEmKr0qNiFsNwll/msG8DoXyfGsAjjSgKEBAQjtzsCRhhENCEBsdwaONLio0xZtj2haXhBHGn80UDRz58/u5BxpjNJwtKtkoCLXKIAjjVwaDoCSgWoYwJGGMA0IQNFANQsgu8kMoDYD1TCAUyJp7Js0koFqGM0pMTb20ZQNVMNwzvyNbgMVGPUw3Jm/0W2gGgZwSvzNMAxUwwBOibIZhoFqGMApkTT2TRrJQDWM5pQYG/toygaqYTgV/A2+TfZZTOMuaBiMCG7rAJxVPiQ0FnqQETib8Fu0ua0Ej91+HT2Fr5tME9aIj8hx2Uyux1gBBdJaAnLgSIkeG7Unquo4LAnCM1kmxIUjpXRsQOUKbd1MQzVSvsZGljwEYjtx3yhUMhfztUoaO8UnF7HE22T9uonk/MDTQgMuQ9k6heQueU3jfK0sVi1utQKK1M9HTP9pmk8uX4rVZ20N64sUy7/jMPogQrI5RjLd8jUNd4enJN3S8fjR0KiaSrIpw7zhOhrQVxiHUhp1Ozxk9uSveB0l9EukyTY3E3fPPx4qVbg6y3pjnQnrqLC+an2gMlJKxMaigxCfDgWIagOgv0VnpOSHDVNOgkrpCvcHlcxs/BYfXo8p5bqrBQzVF4aa6j5USYcVlKpZV1WM1Y+lTGt8pVjNSF6PpFgiCWCkwpL0hSWSiYtPr2/L/MwMaUtIoQBpVRinTjmqMr/1ACrTG2GVKDp1EF1NiyVxJBB9eVaqFKwWNhHJZoyV9L4Wgx+9xVlR2AMeKzHQ4z8KC8kB5fGprAc9eK8dNCzqUWThlQPeGNuOKYfVssd8dyjcomnKYW4LLQNIHPYjcGw+Jvx9NXVVkd4foQuv+eH1i65dWFwExyvlp/o1VcPfP8oHfFS55ow8c71+oUmJXjq5d1gJIoe3h/HSc1zx0+ulnRcjz3OXnuCnXy3hXfJu0UiDjqxs1Xi8bsKBY9JpRANp3dVpHQRs2aMK4iYH3e1lHTSoN4CPl4AnsgkSFq2m6qFqz8bUw6n2qB3tMJDCSd0WJd8hC76oWnXBKGqq5YvSMlfUp7WHr/ZAAr91UTUolOXyGL2s2dySnlOadY3fi79eg7k1kHJSnSS7LtU126uLuYWNmVuAR5gE0oZFW61J/K4687rnnDX2BBtSuF6DEI80QM7G5kzFuVfGm6ruVH/Gm8wSz0TU1RgKEworC8b2RkThq8La0s37bZrPHqpbL007fhubmzEN96ylteSDPesjHOKetX/3cHe3OBt/yk7fx2m0ohGbx49S+uzbde1crmqP57SgzfvKmBAF2h6Q0dYSrsqCTD4o4P559FP3UmSDDhhd0VxVwQhQb7N2CvQx9cR4vzl3OXvxm80ZgAyYymluWcE0n6nLwPXPEHeEeEva4I39+L2YhtKXwf6Fd/7w+u6mIZYjEH+/ufv8jy+jV1nG9i5daTs6cFVRIspapVqMRlbWcVCq69KSUeP6iEj2OW4vSss3pbOqutnvbIES6JKmOgv5wdIX3ElES+e7gU2+Dw+RYm4ucH5uMdY2Wccta8QT1U9raTveiojShpYwX+7PbGhpWheJsGWCgO51ThWfWkTYUznhZPnaGuDsRod9vsQYuFFxmt4p96y2tECz4l5S85mqOHlxd/6JpyyDCxfmp4vx0l3f3GRPG86/pcfl+zc0v88k5uXe8T1+8O+RHrNCjnLLJ4XRvjdYFSg8vDmD6P9ybzt8LX+3JMFm7dyOMqxBQG9YwFUpnpCF9nBpOb0J5/keEbNwzsKJTtHdVYQYVGnP/gRUVYZwFtBZQJldDQQBJcSk/mSx8YNy4a4O2aCzTOiDt9AXslHf8vaJKYdOyFPxxW42jbe7wdLjOU6fMgeoFwdOfOmyws7Zd/vw8u7uHlFsyojiPp4d757LFUixpkZz3FhzT1PNvopPVsl2/5pF/fbsGlOf1jNOrcYiUYIywp5yf0+14Olp6EXkrY7lcjl1tI8apZeuvEhYXAJoUGnI4egzlp26roqZW0g2RvtDs0FBzGi3zm2C5PtpJLlyaMogmcre+31RyzJpknLC9gqOm1Heop6RRWtP9rBhgGWTE7A6bA2AbAhTR9PUI/y+FhFdkTPWpHQjX4jqxA0bzbayBRuUprAjXTStAXHC5UDUo3ANR47E/VGvpRx5bKuLyRHoUY5kOvxkXzq1RrCqAjQj4UXMR09U7i2LnYCqsgxQZRa2WHvywzShLsJJCHJP7+WfyTqiV/w/</diagram></mxfile>
2304.02560/main_diagram/main_diagram.pdf ADDED
Binary file (86.5 kB). View file
 
2304.02560/paper_text/intro_method.md ADDED
@@ -0,0 +1,99 @@
1
+ # Introduction
2
+
3
+ Video understanding poses significant challenges, often compounding complications from the image domain such as model complexity and annotation costs. The additional temporal dimension and the different modalities of data introduce useful cues, but can also be redundant, raising interesting questions about trade-offs. Activity Recognition (*i.e*., classification) in particular, as the prominent task in video understanding, has long been explored by the community along these research directions. Whether in efficient architecture variants ranging from CNNs [\[13,](#page-9-0) [31,](#page-10-0) [57\]](#page-11-0) to Transformers [\[2,](#page-9-1) [5,](#page-9-2) [12\]](#page-9-3), training schemes from fully-supervised [\[7,](#page-9-4) [14\]](#page-9-5) to self-supervised [\[15,](#page-9-6) [47,](#page-10-1) [53\]](#page-11-1) or data regimes from unimodal [\[64,](#page-11-2) [76\]](#page-11-3) to multimodal [\[20,](#page-10-2) [39\]](#page-10-3), the progress has been steady and exciting. More recently, with the availability of internet-scale paired image-text data, vision-language models (VLMs) [\[23,](#page-10-4) [50\]](#page-11-4) have emerged as dominant, achieving strong generalization across numerous benchmarks. However, progress of VLMs in the video domain has yet to reach its full potential.
4
+
5
+ <span id="page-0-0"></span>![](_page_0_Figure_9.jpeg)
6
+
7
+ Figure 1. Video-conditioned Text Representations: Pretrained image-VLMs can generate reasonable visual embeddings for videos (*e.g*. by temporally-pooling frame embeddings), together with paired text embeddings. However, usually, these text embeddings are not dependent on visual information— meaning, they are common for every video. Such representations lack the flexibility to align properly in a shared vision-language latent space, when optimized based on a contrastive similarity (*i.e*., *Affinity*) w.r.t. all videos. However, with *Video-conditioned Text* representations that specialize uniquely for each video, we grant more freedom for text embeddings to move in the latent space, and adapt to different scenarios (*e.g*. more-challenging recognition tasks).
8
+
10
+
11
+ Following the seminal VLMs such as CLIP [\[50\]](#page-11-4) and ALIGN [\[23\]](#page-10-4), there have been significant strides in tasks such as image classification [\[83,](#page-12-0) [90,](#page-12-1) [92\]](#page-12-2), open-vocabulary object detection [\[18,](#page-9-7) [38\]](#page-10-5), text-to-image retrieval [\[61,](#page-11-5) [86\]](#page-12-3) and robot manipulation [\[24,](#page-10-6) [91\]](#page-12-4). Such models are usually pretrained on paired image-text data based on a contrastive learning framework. The idea is to have two separate backbones, an Image Encoder and a Text Encoder, that generate embeddings in a joint latent space. To optimize this space, the corresponding pairs of embeddings are drawn closer, by increasing their similarity (*i.e*., *Affinity*). The key advantage of such models is that, at inference, any semantic concept (given as a text input) can be embedded in the same space, giving intriguing zero-shot or few-shot transfer capabilities [\[1,](#page-9-8) [91\]](#page-12-4). For instance, CLIP [\[50\]](#page-11-4) excels at classifying unseen attribute categories (*e.g*. objects, scenes), or even counting such occurrences [\[91\]](#page-12-4). However, these VLMs do not perform well in tasks that require specialized knowledge, such as localizing (*e.g*. detection/segmentation) or temporal reasoning (*e.g*. activity recognition), at least not out-of-the-box, as their training objective has not seen any location or temporal cues. Yet, with task-specific finetuning, such models can readily be adapted to specialized domains [\[18,](#page-9-7) [40\]](#page-10-7).
12
+
13
+ <sup>\*</sup>Work done as a student researcher at Google.
14
+
15
+ <span id="page-1-1"></span><span id="page-1-0"></span>![](_page_1_Figure_0.jpeg)
16
+
17
+ Figure 2. Overview of **VicTR**: First, we extract image (*i.e*., frame) and text tokens using a pretrained image-VLM. Next, such tokens go through a joint video-text encoder, generating video tokens and *video-conditioned* text tokens, based on which, we compute affinity-based logits for classification. Optionally, any semantic concept (given as auxiliary text) can also be processed similarly, to help guide the classifier. This is motivated based on the cooccurrence of semantics (*e.g*. rope, gym, one-person) and categories-of-interest, *i.e*., activity classes in our setting (*e.g*. rope climbing). Here, the color change of text tokens represents the idea of video-conditioning.
18
+
20
+
21
+ In the video domain, training VLMs from scratch has shown limited success [\[77\]](#page-11-6), while also being expensive due to the lack of paired data at scale. As a compromise, the common practice is to adapt pretrained image-VLMs to video, by introducing temporal information. Such methods either insert temporal modules within the image backbone itself to have cross-frame interactions [\[40\]](#page-10-7), or use a post-processing video head on top of the image backbone [\[4,](#page-9-9) [33,](#page-10-8) [36,](#page-10-9) [66\]](#page-11-7). In both cases, image embeddings are enhanced as video embeddings. However, the use of text embeddings varies among different approaches. Text may either be discarded [\[33\]](#page-10-8), kept frozen [\[36,](#page-10-9) [66\]](#page-11-7), used as conditioning [\[4\]](#page-9-9) (to further enhance video embeddings), or fully-updated jointly with video [\[40\]](#page-10-7). More often than not, the main focus is on visual embeddings (*i.e*., converting image → video), and the impact of updating text has been limited.
22
+
23
+ Nevertheless, video models benefit from semantic information [\[22,](#page-10-10) [70,](#page-11-8) [91\]](#page-12-4). In fact, certain attributes (*e.g*. objects, scene or human subjects) are directly tied with specific activities, and can simplify their recognition. For instance, the presence of attributes such as [rope, gym, one-person] can narrow down the potential activity to battling ropes or rope climbing. VLMs are especially suited to take advantage of such semantics. Any concept represented as text can be visually-grounded based on paired embeddings (in zero-shot), to extract relevant attributes for a given input that benefit recognition tasks. Such visually-grounded semantics are cheap in terms of both annotation and compute costs, yet highly useful.
26
+
27
+ Motivated by the above, we propose VicTR, focusing on adapting text information to the video domain. More specifically, we generate *Video-conditioned Text* embeddings (see Fig. [1](#page-0-0)), while jointly-training both textual and visual features generated by an image-VLM. By finetuning text embeddings, we observe significant gains in our framework, compared to just finetuning visual embeddings (similar to the observations in [\[92\]](#page-12-2)). We can also make use of freely-available auxiliary semantic information, represented in the form of visually-grounded text embeddings. Fig. [2](#page-1-0) shows an overview of the proposed architecture. Our video-conditioned text embeddings are unique to each video, allowing more flexibility to move in the latent space and generalize to complex downstream tasks. Optionally, our video-conditioned auxiliary text can further help optimize this latent space. We evaluate VicTR on few-shot, zero-shot, short-form and long-form activity recognition, validating its strong generalization capabilities among video-VLMs.
28
+
29
+ # Method
30
+
31
+ In this section, we introduce the generic framework for adapting image-VLMs to video, and discuss how prior work fits into it. We consider CLIP [50] as the image-VLM, which is widely-adopted thanks to its convincing performance and open-source models. It consists of two encoders, Image and Text, optimized together on internet-scale paired image-text data. The Image Encoder (Enc<sub>img</sub>) is a ViT [10]. Given an input image $I \in \mathbb{R}^{H \times W \times 3}$, it is broken down into patch embeddings (i.e., tokens) and processed through multiple transformer layers. The class token [cls] is sampled as the visual embedding $e_{img}$. The Text Encoder (Enc<sub>txt</sub>) is a causal transformer, operating on tokenized text. Each class-label (or, any semantic concept) given as text T, is first converted into a prompt based on a template such as "a photo of {class}.", and tokenized with Byte Pair Encoding (BPE) [59] at the input of the Text Encoder. Following multiple causal transformer layers, the [EOS] (i.e., end-of-sequence) token is extracted as the text embedding $e_{txt}$.
32
+
33
+ $$e_{\text{img}} = \text{Enc}_{\text{img}}(I), \qquad e_{\text{txt}} = \text{Enc}_{\text{txt}}(T).$$
36
+
37
+ The two encoders are jointly-optimized with Cross-Entropy loss, where logits are computed based on the similarities (*i.e.*, *affinities*) between visual and text embeddings. The corresponding pairs of embeddings (*i.e.*, positives) are drawn together ( $\uparrow$ affinity) in a joint embedding space, whereas the others (*i.e.*, negatives) are pushed apart ( $\downarrow$ affinity).
38
+
39
+ $$\text{Affinity}(e_{\text{img}},\;e_{\text{txt}}) = \frac{\langle e_{\text{img}},\;e_{\text{txt}}\rangle}{\|e_{\text{img}}\|_2 \|e_{\text{txt}}\|_2}\;.$$
40
+
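+ In code, this affinity is plain cosine similarity between L2-normalized embeddings; a minimal PyTorch sketch (names are ours, not from a released implementation):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def affinity(e_img: torch.Tensor, e_txt: torch.Tensor) -> torch.Tensor:
+     """Cosine similarity between (num_img, d) and (num_txt, d) embeddings."""
+     e_img = F.normalize(e_img, dim=-1)      # unit-norm rows
+     e_txt = F.normalize(e_txt, dim=-1)
+     return e_img @ e_txt.t()                # (num_img, num_txt) affinity matrix
+ ```
+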
41
+ When adapting this framework to the video domain, the above Image encoder, Text encoder and learning objective usually stay the same. But now, video frames $V \in \mathbb{R}^{\mathcal{T} \times H \times W \times 3} = [I^1, I^2, \cdots, I^{\mathcal{T}}]$ become inputs to the Image encoder (with each frame processed separately), and further go through a Video Head (Head<sub>vid</sub>) to induce temporal reasoning capabilities. Optionally, the text embedding $e_{txt}$ may also be updated or used as a conditioning within the Video Head.
42
+
43
+ <span id="page-3-1"></span><span id="page-3-0"></span>![](_page_3_Figure_0.jpeg)
44
+
45
+ Figure 3. Detailed view of **VicTR** compared to prior art: There exist multiple closely-related works on adapting pretrained image-VLMs to video, such as CLIP4Clip [\[36\]](#page-10-9), ActionCLIP [\[66\]](#page-11-7), CLIP Hitchhiker's [\[4\]](#page-9-9), EVL [\[33\]](#page-10-8) and X-CLIP [\[40\]](#page-10-7). All these follow a common framework (top-left). Text prompts and video frames are first encoded using two separate encoders, and then fed into a video head to enable temporal reasoning. It is optional to use text tokens within the video head. Often, text information is kept unchanged [\[36,](#page-10-9) [66\]](#page-11-7), or even discarded [\[33\]](#page-10-8) (bottom-left). CLIP Hitchhiker's [\[4\]](#page-9-9), however, uses text as conditioning to generate *text-conditioned video* embeddings. X-CLIP [\[40\]](#page-10-7), which is the closest to our method, jointly-optimizes visual and text tokens. However, it provides limited information for text to contrast against: only temporally-aggregated visual embeddings, showing marginal gains from updating text. In contrast, VicTR allows text to contrast against both fine-grained visual and other text information, while also jointly-optimizing both modalities. We generate *video-conditioned text* representations, *i.e*., text uniquely-specialized for each video (refer to Fig. [1](#page-0-0)). Our video head consists of three key operations: (1) *Token-boosting*, (2) *Cross-modal attention*, and (3) *Affinity (re-)weighting* (right). Token-boosting creates dedicated text tokens per video and per timestep, weighted by per-frame affinities of a given video. These enable us to model variations of semantics (represented as text) over time. Affinity (re-)weighting highlights or downplays each text class, grounded on visual information. Such affinity weights are similar to the ones in the CLIP [\[50\]](#page-11-4) training objective, making the optimization more consistent. Cross-modal attention enables message passing between both visual-textual and textual-textual modes, creating a better contrastive representation. Also, optionally, VicTR can make use of auxiliary semantics (*e.g*. object, scene, human-subjects) given as visually-grounded text (refer to Fig. [2](#page-1-0)). Such auxiliary semantics help align our video-conditioned text embeddings in the latent space.
46
+
48
+
49
+ $$e_{\text{vid}}, [e_{\text{txt}}] = \text{Head}_{\text{vid}}(e_{\text{img}}^1, \, \cdots, \, e_{\text{img}}^{\mathcal{T}}, \, [e_{\text{txt}}]).$$
50
+
51
+ Here, [·] denotes optional embeddings. This Video Head may just be a temporal pooling layer or a temporal transformer as in [\[36,](#page-10-9) [66\]](#page-11-7), or may even consist of more-specialized modules. Text embeddings could either be discarded as in [\[33\]](#page-10-8), used as a conditioning as in [\[4\]](#page-9-9), or jointly-updated with video embeddings as in [\[40\]](#page-10-7). Finally, logits are computed based on video-text affinities if text is not discarded, or as a linear mapping of video embeddings if text is discarded. This generic framework is shown in Fig. [3](#page-3-0) (top-left), along with variations of prior work in Fig. [3](#page-3-0) (bottom-left).
52
+
53
+ In VicTR, we adapt a pretrained image-VLM (*e.g*. CLIP [\[50\]](#page-11-4)) to video, focusing more on text representations. Refer to Fig. [3](#page-3-0) (right) for a detailed view. The image-VLM has not seen any temporal information during training. While this obviously affects the temporal reasoning capabilities of the visual embeddings, which most prior work focuses on addressing, it affects the text embeddings as well. The learnt latent space (and the affinity-based objective) depends on both these embeddings. Thus, in contrast to prior work, we consider text equally as important, if not more so.
54
+
55
+ VicTR consists of a joint video-text model as Head<sub>vid</sub>, which consumes both visual and text embeddings from the image-VLM. It outputs text embeddings uniquely-specified for each video, *i.e*., *Video-conditioned Text* embeddings. It relies on three main components: (1) *Token-boosting*, (2) *Cross-modal attention*, and (3) *Affinity (re-)weighting*.
56
+
57
+ <span id="page-4-0"></span>Optionally, it can also benefit from any semantic concept available as auxiliary text, to optimize its latent space. The following subsections look at each of these in detail.
58
+
59
+ Let us first introduce a few additional notations. Consider a fixed vocabulary of n activity-classes given by $[T^1,\,T^2,\,\cdots,\,T^n]$ , and optional m auxiliary semantic categories given by $[A^1,\,A^2,\,\cdots,\,A^m]$ . The corresponding text embeddings can be denoted as $\{e^x_{\mathtt{txt}}\mid x=1,2,\cdots,n\}$ and $\{e^y_{\mathtt{aux}}\mid y=1,2,\cdots,m\}$ . Also, given an input video $V^i$ of $\mathcal T$ frames, the corresponding image embeddings can be denoted as $\{e^{i,t}_{\mathtt{img}}\mid t=1,2,\cdots,\mathcal T\}$ . The inputs to our Video Head are $e^{i,t}_{\mathtt{img}},e^x_{\mathtt{txt}}$ and $e^y_{\mathtt{aux}}$ tokens. As visual embeddings are extracted per-frame and the text embeddings per prompt, there is no interaction among frame tokens, among text tokens or, across frame-text tokens up to this point.
60
+
61
+ To introduce *video-conditioned text* embeddings, we first create a dedicated set of text tokens per video, by replicating the outputs of the backbone text encoder. Going further, we also create text tokens per each frame. This is done by weighting text tokens with the corresponding frame-text affinities. Formally, given (n+m) text tokens, we end up with $\mathcal{T} \times (n+m)$ dedicated text tokens per video, at the input of our video head. Refer to Fig. 3 (right).
62
+
63
+ $$\begin{split} e_{\text{txt}}^{i,t,x} &= e_{\text{txt}}^x \cdot \text{SigAffinity}(e_{\text{img}}^{i,t},\ e_{\text{txt}}^x), \\ e_{\text{aux}}^{i,t,y} &= e_{\text{aux}}^y \cdot \text{SigAffinity}(e_{\text{img}}^{i,t},\ e_{\text{aux}}^y). \end{split}$$
64
+
65
+ Here, SigAffinity(·) corresponds to affinity-weights normalized to the [0,1] range. We convert the values given by Affinity(·), which lie in [-1,1], into affinity-weights by scaling with a learnable weight ($w$) and feeding them through a sigmoid activation.
66
+
67
+ $$SigAffinity(\cdot) = Sigmoid(w \cdot Affinity(\cdot)).$$
68
+
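+ A small sketch of this token-boosting step, reusing the `affinity` helper sketched earlier; shapes and names are our assumptions for illustration:
+
+ ```python
+ import torch
+
+ def sig_affinity(e_a, e_b, w):
+     # Rescale affinities in [-1, 1] by a learnable weight w, squash to [0, 1].
+     return torch.sigmoid(w * affinity(e_a, e_b))
+
+ def boost_text_tokens(e_img, e_txt, w):
+     """e_img: (T, d) frame embeddings; e_txt: (n, d) text embeddings.
+
+     Returns (T, n, d): one affinity-weighted copy of every text token per frame.
+     """
+     weights = sig_affinity(e_img, e_txt, w)            # (T, n)
+     return weights.unsqueeze(-1) * e_txt.unsqueeze(0)  # broadcast to (T, n, d)
+ ```
+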
69
+ Although such affinity-weights based on the original image-VLM embeddings are not ideal for temporal reasoning, they initialize a noisy version of our *video-conditioned text* embeddings that gets updated iteratively, later in the network. Such token-boosting brings multiple other benefits. (1) More tokens mean higher model capacity. This can help learn better representations, but also adds a compute overhead (which we handle through other measures, as discussed later). (2) It also highlights relevant text tokens by grounding text on visual embeddings, while diminishing irrelevant ones. Subsequent attention mechanisms attend less to such diminished tokens, simplifying the gradient flow during learning. In other words, it acts as a soft-selection of relevant semantics, specific to each video. (3) Finally, it enables our model to capture variations of semantic categories over time. How certain attributes appear (or disappear) over time is an important motion cue for activity recognition.
70
+
72
+
73
+ Next, we concatenate such boosted text tokens with visual tokens (corresponding to $\mathcal{T}$ frames), and feed $\mathcal{T} \times (1+n+m)$ tokens to the subsequent layers.
74
+
75
+ $$z^{i,t} = \operatorname{Concat}(e^{i,t}_{\operatorname{img}}, \ e^{i,t,x}_{\operatorname{txt}}\big|_{x=\{1,\cdots,n\}}, \ e^{i,t,y}_{\operatorname{aux}}\big|_{y=\{1,\cdots,m\}}).$$
76
+
77
+ Such $Z_0^i = [z^{i,1}, \dots, z^{i,T}]$ tokens go through L transformer layers in our Video Head. Each layer (l) consists of cross-modal attention, temporal attention, affinity (re)weighting and linear (MLP) layers.
78
+
79
+ We consider our token representation to be two-dimensional (*i.e.*, cross-modal and temporal), and apply divided self-attention (MSA) on each axis as in [2, 5]. First, we have a Cross-modal attention layer. Here, each visual token could attend to all text tokens at the same timestep, and each text token could attend to both the visual token and other text tokens at the same timestep. Since text tokens are already affinity-weighted, attention weights do not draw information from irrelevant semantic classes. Next, we have a Temporal attention layer. Here, both visual and text tokens go through a shared set of parameters, learning temporal cues in visual modality (*i.e.*, $e_{\text{img}} \rightarrow e_{\text{vid}}$ ), and modeling variations of semantics across time in textual modality.
80
+
81
+ $$\begin{split} \hat{Z}_l^i &= Z_l^i + \text{MSA}_{\text{cross}}(\text{LN}(Z_l^i)), \\ \bar{Z}_l^i &= \hat{Z}_l^i + \text{MSA}_{\text{temporal}}(\text{LN}(\hat{Z}_l^i)). \end{split}$$
82
+
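+ A skeletal PyTorch sketch of the divided attention with pre-norm residuals as in the equations above; the actual VicTR layer additionally contains the affinity re-weighting and MLP sub-blocks, and all names here are ours:
+
+ ```python
+ import torch.nn as nn
+
+ class DividedAttention(nn.Module):
+     """Cross-modal attention within each frame, then temporal attention per token."""
+
+     def __init__(self, dim, heads):
+         super().__init__()
+         self.norm1 = nn.LayerNorm(dim)
+         self.norm2 = nn.LayerNorm(dim)
+         self.attn_cross = nn.MultiheadAttention(dim, heads, batch_first=True)
+         self.attn_temp = nn.MultiheadAttention(dim, heads, batch_first=True)
+
+     def forward(self, z):                      # z: (T, 1 + n + m, dim)
+         h = self.norm1(z)
+         z = z + self.attn_cross(h, h, h)[0]    # tokens attend within a timestep
+         h = self.norm2(z).transpose(0, 1)      # (1 + n + m, T, dim)
+         z = z + self.attn_temp(h, h, h)[0].transpose(0, 1)  # attend across time
+         return z
+ ```
+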
83
+ Here, $LN(\cdot)$ stands for the LayerNorm operation. Having divided attention across the two axes instead of joint attention eases the compute requirement of our video head.
84
+
85
+ As previously discussed, the original affinities based on the image-VLM embeddings can be noisy, in the context of temporal reasoning. Now, as we have updated both our visual (*i.e.*, video) and text tokens with cross-modal and temporal information, they are in a better state to re-compute affinities. Hence, we compute new affinity values and re-weight the text tokens accordingly. Refer to Fig. 3 (rightmost). First, we split video and text tokens as in,
86
+
87
+ $$\left[\bar{e}_{\mathrm{vid},l}^{\;i,t},\;\bar{e}_{\mathrm{txt},l}^{\;i,t,x}\right|_{x=\{1,\cdots,n\}},\;\bar{e}_{\mathrm{aux},l}^{\;i,t,y}\big|_{y=\{1,\cdots,m\}}\right]=\bar{z}_{l}^{\;i,t}.$$
88
+
89
+ Next, we temporally-pool the text tokens to come up with a compressed representation, on which we perform affinity re-weighting. This is similar to token-boosting, but done with updated video-text embeddings that are already video-conditioned. Without loss of generality, the same operations apply for auxiliary text tokens.
90
+
91
+ $$\begin{split} &\bar{e}_{\text{txt},l}^{i,x} = \text{Pool}(\bar{e}_{\text{txt},l}^{i,t,x}), \\ &\bar{e}_{\text{txt},l}^{i,t,x} = \bar{e}_{\text{txt},l}^{i,x} \cdot \text{SigAffinity}(\bar{e}_{\text{vid},l}^{i,t}, \; \bar{e}_{\text{txt},l}^{i,x}). \end{split}$$
92
+
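+ Sketched in the same style as before (the paper leaves Pool unspecified; a temporal mean is assumed here, and `sig_affinity` is the helper from the token-boosting sketch):
+
+ ```python
+ def reweight_text_tokens(e_vid, e_txt, w):
+     """e_vid: (T, d) video tokens; e_txt: (T, n, d) per-frame text tokens."""
+     pooled = e_txt.mean(dim=0)                           # (n, d) temporal pooling
+     weights = sig_affinity(e_vid, pooled, w)             # (T, n) fresh affinities
+     return weights.unsqueeze(-1) * pooled.unsqueeze(0)   # (T, n, d) re-weighted
+ ```
+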
93
+ <span id="page-5-0"></span>Finally, such affinity (re-)weighted text tokens are concatenated with visual tokens, as $\bar{Z}_l^i$ , and go through an MLP.
94
+
95
+ $$Z_{l+1}^i = \bar{Z}_l^i + \text{MLP}(\bar{Z}_l^i).$$
96
+
97
+ Following L transformer layers in our Video Head, we temporally-pool all tokens. We end up with a single video embedding, n activity-text embeddings and m aux-text embeddings. We further aggregate auxiliary embeddings, leaving a single embedding per each of the k semantic categories (e.g. object, scene, human-subjects). Finally, we compute logits based on affinity, similar to the CLIP [50] objective, and use Cross-Entropy loss for optimization.
98
+
99
+ $$\begin{split} \mathrm{logit}^{i,x} &= \mathrm{Affinity}(e^i_{\mathrm{vid},L},\ e^{i,x}_{\mathrm{txt},L}), \\ \mathrm{logit}^{i,x,y}_{\mathrm{aux}} &= \mathrm{Affinity}(e^{i,x}_{\mathrm{txt},L},\ e^{i,y}_{\mathrm{aux},L}) \bigm|_{y=\{1,\cdots,k\}}. \end{split}$$
2305.03088/paper_text/intro_method.md ADDED
@@ -0,0 +1,48 @@
1
+ # Introduction
2
+
3
+ Building systems that can comprehend human speech and provide assistance to humans through conversations is one of the main objectives in AI. Asking questions during a conversation is a crucial conversational behavior that helps AI agents communicate with humans more effectively [@James07; @li2016learning]. This line of research is known as *Conversational Question Generation (CQG)*, which targets generating questions given the context and conversational history [@nakanishi-etal-2019-towards; @pan-etal-2019-reinforced; @gu-etal-2021-chaincqg; @cohs-cqg]. Compared to traditional single-turn question generation [@https://doi.org/10.48550/arxiv.1905.08949], CQG is more challenging as the generated multi-turn questions in a conversation need not only to be coherent but also follow a naturally conversational flow.
4
+
5
+ Generally, there are two main settings for the CQG task: answer-aware and answer-unaware. In the answer-aware setting, the expected answers of the (to be) generated questions are exposed to the models [@gao-etal-2019-interconnected; @gu-etal-2021-chaincqg; @shen-etal-2021-gtm; @cohs-cqg]. In reality, however, the answers are only "future" information that are unknown beforehand. Thus, growing attention has been on the more realistic answer-unaware setting, in which the answers are unknown to the CQG model [@wang-etal-2018-learning-ask; @pan-etal-2019-reinforced; @nakanishi-etal-2019-towards; @qi-etal-2020-stay; @cohs-cqg].
6
+
7
+ Prior studies either attempt to ask the questions first, and compute the reward function to evaluate their answerability [@pan-etal-2019-reinforced] or informativeness [@qi-etal-2020-stay]; or they extract the answer spans from the context as the *what-to-ask* first, and generate the questions based on them [@nakanishi-etal-2019-towards; @cohs-cqg]. However, it has been argued that the former approach tends to generate repetitive questions [@qi-etal-2020-stay; @cohs-cqg]. For the latter approach, @cohs-cqg recently proposed a selection module to shorten the context and history of the input and achieved state-of-the-art performance. Nonetheless, it simply employs a naive heuristic to select the earliest forward sentence (without traceback) in the context as the rationale to extract the answer span. Although such heuristics ensure the flow of the generated questions is aligned with the context, we argue that the resulting conversations may not be natural enough, because, in reality, the interlocutors often talk about the relevant parts that may not form a sequential context. [Furthermore, previous studies [@gao-etal-2019-interconnected; @cohs-cqg] trained the models to decide the type of the question (boolean/span-based) to be generated implicitly. We argue that modeling question type explicitly is critical since in this setting, the answer, which hints the models to generate a boolean or span-based question, is unavailable.]{style="color: black"}
8
+
9
+ To address the above problems, we propose a two-stage CQG framework based on a semantic graph, *SG-CQG*, which consists of two main components: *what-to-ask* and *how-to-ask*. In particular, given the referential context and dialog history, the *what-to-ask* module *(1)* constructs a semantic graph, which integrates the information of coreference, co-occurrence, and named entities from the context to capture the keyword chains for the possible "jumping" purpose; *(2)* traverses the graph to retrieve a relevant sentence as the rationale; and *(3)* extracts the expected answer span from the selected rationale (Section [3.1](#what-to-ask){reference-type="ref" reference="what-to-ask"}). Next, the *how-to-ask* module decides the question type (boolean/span-based) via two explicit control signals and conducts question generation and filtering (Section [3.2](#how-to-ask){reference-type="ref" reference="how-to-ask"}).
10
+
11
+ In order to exhaustively assess the quality of the generated question-answer pairs, we propose a set of metrics to measure the *diversity*, *dialog entailment*, *relevance*, *flexibility*, and *context coverage* through both standard and human evaluations. Compared with the existing answer-unaware CQG models, our proposed *SG-CQG* achieves state-of-the-art performance on the standard benchmark, namely the CoQA dataset [@reddy-etal-2019-coqa].
12
+
13
+ Our contributions can be summarized as follows:
14
+
15
+ *(1)* We propose *SG-CQG*, a two-stage framework, which consists of two novel modules: *what-to-ask* encourages the models to generate coherent conversations; and *how-to-ask* promotes generating naturally diverse questions. Our codes will be released at <https://github.com/dxlong2000/SG-CQG>.
16
+
17
+ *(2)* *SG-CQG* achieves state-of-the-art performance on answer-unaware CQG on CoQA.
18
+
19
+ *(3)* To the best of our knowledge, we are the first to propose a set of criteria to comprehensively evaluate the generated conversations. Moreover, we propose *Conv-Distinct* to measure the diversity of the generated conversation from a context, which takes the context coverage into account.
20
+
21
+ *(4)* We conduct thorough analysis and evaluation of the questions and answers of our generated conversations, which can bring some inspiration for future work on the answer-unaware CQG.
22
+
23
+ # Method
24
+
25
+ Given the conversational history $H_n$ and the semantic graph $\mathcal{G}$, we create a queue $q$ to store nodes for traversing. We first add the nodes that appear in any previous turn's rationale to $q$ in index order [^4]. We then traverse $\mathcal{G}$ by popping the nodes in $q$ until it becomes empty. For each node, we retrieve the sentence that contains it as the rationale $r_n$. If the model can generate a valid question from $r_n$ and any answer span extracted from $r_n$, we add all unvisited neighbors of the current node to the beginning of $q$. A question is considered valid if it passes the QF module ([3.2.0.2](#ssub-rewriting-and-filtering){reference-type="ref+Label" reference="ssub-rewriting-and-filtering"}). Prepending the neighbors to the queue prioritizes nodes that are connected, so that the generated conversation is formed from a chain of relevant sentences, which consolidates the coherence of the conversation. If the model cannot generate any valid $q_n$ from the current node, we add its unvisited neighbors to the end of $q$. The pseudocode of our proposed *Graph Traversal Algorithm* is described in [8.2](#appendix:graph-traversal-algorithm){reference-type="ref+Label" reference="appendix:graph-traversal-algorithm"}.
26
+
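+ A rough Python sketch of this traversal is given below; `graph.sentence_of`, `graph.neighbors`, and `try_generate` (which wraps question generation plus the QF check) are assumed placeholders, and the authoritative pseudocode is the one in the appendix:
+
+ ```python
+ from collections import deque
+
+ def traverse(graph, seed_nodes, try_generate):
+     queue = deque(seed_nodes)              # nodes from previous turns' rationales
+     visited = set(seed_nodes)
+     while queue:
+         node = queue.popleft()
+         rationale = graph.sentence_of(node)
+         qa_pair = try_generate(rationale)  # None if no valid question was produced
+         neighbors = [v for v in graph.neighbors(node) if v not in visited]
+         visited.update(neighbors)
+         if qa_pair is not None:
+             yield rationale, qa_pair
+             queue.extendleft(reversed(neighbors))  # front: keep the chain coherent
+         else:
+             queue.extend(neighbors)                # back: defer this region
+ ```
+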
27
+ We follow @cohs-cqg to design the answer span extractor module. In particular, a T5 model is trained on SQuAD [@rajpurkar-etal-2016-squad] to predict the target answer span ($a$), given its original sentence in context ($r$). We use this pretrained model to extract $a_n$ from $r_n$. Note that we also deselect the answer spans that are the same as those of previous turns.
28
+
29
+ A high ratio of boolean questions in conversational datasets such as CoQA [@reddy-etal-2019-coqa] (around 20%) is one of the main challenges for current CQG studies [@gao-etal-2019-interconnected; @pan-etal-2019-reinforced; @gu-etal-2021-chaincqg]. To the best of our knowledge, however, there is no up-to-date work that attempts to tackle this challenge. This problem is even worse in the answer-unaware setting, since there is no *`Yes/No`* answer provided to guide the generation of the models. Previous studies [@pan-etal-2019-reinforced; @cohs-cqg] simply train the CQG models to let them implicitly decide when to generate boolean and span-based questions, without any explicit modeling of the question type. We argue that explicitly modeling the question type is critical, as the models gain more control over generating diverse questions, thus making the conversation more natural. To this end, we introduce two control signals as additional input to the QG model, and develop a simple mechanism to select the signal for the current turn.
30
+
31
+ We design two control signals to guide the QG model: `<BOOLEAN>` is prepended to the textual input if we expect the model to generate a boolean question, and `<NORMAL>` otherwise. To classify which signal should be sent to the QG model, we train a RoBERTa [@roberta-paper] as our *Question Type Classifier*. This binary classifier takes the rationale $r_n$ and the answer span $a_n$ generated from the *what-to-ask* module, the context and the shortened conversational history as input, and generates the label $0/1$ corresponding to `<NORMAL>/<BOOLEAN>`. We conduct additional experiments to discuss why the $control\_signals$ work in [6.3](#ssec:how-do-control-signals-work){reference-type="ref+Label" reference="ssec:how-do-control-signals-work"}.
32
+
33
+ ::: {#tab:question-error}
34
+ **Type** **Example**
35
+ --------------- ------------------------------------------------------------------------------------------------------------------------------------------------------
36
+ Wrong answer 'Did he eat for [breakfast]{style="color: red"}?', '[breakfast]{style="color: red"}'
37
+ Irrelevant 'Was he still [alive]{style="color: red"}?', 'no',
38
+ Uninformative 'What happened one day?', '[Justin]{style="color: red"} woke up very excited', '[Who woke up?]{style="color: red"}', '[Justine]{style="color: red"}'
39
+ Redundant '[Did he eat something?]{style="color: red"}', 'yes',\..., '[Was he eating something?]{style="color: red"}', 'yes'
40
+
41
+ : Different types of common errors that CQG models are prone to without our extra postprocessing heuristics.
42
+ :::
43
+
44
+ []{#tab:question-error label="tab:question-error"}
45
+
46
+ Our RF module serves two purposes. Firstly, following @cohs-cqg, we train a T5 model on CoQA [@reddy-etal-2019-coqa] as our CQA model to answer the generated questions. A question passes this filtering step if the answer generated by the CQA model has a fuzzy matching score of at least 0.8 with the input answer span. Secondly, when inspecting the generated conversations, we observe multiple other errors that the blackbox model encounters, as shown in [1](#tab:question-error){reference-type="ref+Label" reference="tab:question-error"}. We thus propose extra post-processing heuristics to filter the generated questions and avoid the following issues: *(1)* *Wrong answer*. Unlike @cohs-cqg, which took the extracted spans as the conversational answers, we rewrite the extracted answer spans for the boolean questions by selecting the answers generated from the CQA model; *(2)* *Irrelevant*. For each generated question, we remove stopwords and question marks (only for filtering purposes), and check whether all the remaining tokens exist in the context $C$; *(3)* *Uninformative*. To remove turns like *("Who woke up?", "Justine")*, we consider a turn valid only if no more than 50% of the tokens of $r_n$ exist in any previously generated QA pair; *(4)* *Redundant*. Unlike previous studies [@qi-etal-2020-stay; @cohs-cqg], which only considered redundant information from the generated answers, for each generated question that has more than 3 tokens, we filter it out if it has a fuzzy matching score of at least 0.8 with any of the previously generated questions.
47
+
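+ As one example, the redundancy check *(4)* could be sketched as follows; difflib's ratio is used here as a stand-in for the unspecified fuzzy matching score:
+
+ ```python
+ from difflib import SequenceMatcher
+
+ def is_redundant(question, previous_questions, threshold=0.8):
+     if len(question.split()) <= 3:
+         return False            # the check only applies to questions of > 3 tokens
+     return any(SequenceMatcher(None, question, prev).ratio() >= threshold
+                for prev in previous_questions)
+ ```
+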
48
+ We fine-tune a T5 model [@JMLR:v21:20-074] to generate conversational questions. We concatenate the input $\mathcal{D}^a_n = \{C, H_n, a_n, r_n, control\_signal\}$ in the format: `Signal`: $control\_signal$ `Answer`: *$a_n$*, *$r_n$* `Context:` *$C$* `[SEP]` $H_{sub}$, where $H_{sub} \in H_n$. The model then learns to generate the target question $q_n$. In our experiments, $H_{sub}$ is the shortened $H_n$, in which we keep at most three previous turns; this was shown to improve significantly upon training with the whole $H_n$ [@cohs-cqg]. The performance of the QG model is reported in [8.3](#appendix:question-generation-performance){reference-type="ref+Label" reference="appendix:question-generation-performance"}.
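+
+ A small sketch of how this concatenated input could be assembled (exact spacing and separators are our guess from the description above):
+
+ ```python
+ def build_qg_input(control_signal, answer_span, rationale, context, history_turns):
+     h_sub = " ".join(history_turns[-3:])   # keep at most three previous turns
+     return (f"Signal: {control_signal} Answer: {answer_span}, {rationale} "
+             f"Context: {context} [SEP] {h_sub}")
+ ```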
2305.05873/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="Electron" modified="2022-11-12T06:49:11.589Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/20.3.0 Chrome/104.0.5112.114 Electron/20.1.3 Safari/537.36" version="20.3.0" etag="3f19Ubg4jZMsqwvbjiDr" type="device"><diagram id="oq7AYGtSe0616C5xQC73">7V1bk5s6Ev4t++Cq3Ye4uEhcHjOT5OxDkkolqd2cpy3ZyDZ1MPhgJjOTX78SIEBIDMIIG0/sk+QYIWTc36dWd6tpL+z7/dMfKTrsPiUBjhaWETwt7HcLy/IMj/xLG56LBuAbRcM2DYOiqdHwLfyFi0aTtT6EAT5yHbMkibLwUDaaReM6iWO8zrg2lKbJI3/tJokCrs8BbTHXgzZ8W6MIC93+GwbZrvxallu3/xuH2x37ZNPxizN7xDqXQxx3KEgeG59lv1/Y92mSZMW7/dM9jqjseLl86Dhb3ViK40zlAqu44CeKHsrvVt5X9sy+bJo8xAGm/Y2Fffe4CzP87YDW9OwjQZe07bJ9RI5M8vaYpclf+D6JkjS/2iZYk/+qM0xctO8mibMPaB9GlAXfwz1B1DI+40fy79dkj+KySwm/adPjMIrY4HESk/a7AB13+d3RIctvg9MMP3VKxKzkTPiJkz3O0mfSpbwAWiVWJTch8JawaHmssTZhyYxdA2efUayk17YavEaAvClBkANi9wNCOHOgbzO0ok1N8cvQOWYoZUKkQJBJkaEwxml5zTqJInQ4hvlgRY9dGAUf0XPykLGPYUcSGLsh14CGDW0ODeC6AhY+EKFw4HgowEAovtKJfLdL0vAXFXFUSrMt/+NjuI9QTDQEClpNd0muEWlTlhzKdxHeZOXbVZJlyb48SMtva0gxDNLk8B2lW8y6SKbOIQnjLBcQvCN/CKPvDcJ1SL7ZPTk262Pyh3ZPs/skJngT+tBhMTpmj/iYqUJty6HugZKJfQyUUB1KcsNZiKKvZN1A8VZlgpWrDKqF35Q1mRAbFPgBHAxqQsS5ifLVYRcGAY5lkw1hYNqWKgKgFwFrIALlYLW4Bo+GogynMcoI+8lCcxRgre5TCWnnokivA7zyVtMgbTu2bwcjkX7icWlCZWsFXmE0vcC7l53iG+ys19MAH7j+ylBeT18GHojL57XPeO+cOK/Q2gvsaXC2bACgpgnumq9ugvs3c+ws5tgzD2afdabB52Fu/YV093zmtH+aefYCBCco69ZoeiexaV4WagPb2JkGagMTt9cbCfVA++x05BVG04y8QqhpQuQDhL3NRAaas/bwaqMHeVUD7ZrmvH1WoI01xtY0QEMHmi7WA7SqhXZNU/wWMTuriQZsNRNNQwDNvHAEbbNe+/4c4irm6w+hmZeNoWEzgNidBmvfcW3kjMX69QbRzMtG0TSGT3NNjdP3P3GhsKU7VXqm/usNrZlnja1tUOAiPSa6Evor4EADaEL/FQbcTIWIG46DtzShghytI3Q8huvvuzDmycAnEOCnMPtRnqHv/6TtS1gevXtqdHvHTLtGGoCVn4yDD2HExi9uCgcsZ4OlViQP6bps8gwJoL0LuWyPn7WlOEJZ+JP/yBdQ/UIZWdPFZhtVLMvANJetLevi/svraiDFoZjiYUMZ/hIa9at1b1lu4grDnsAPSyFsdwI/YnIDP5oHDYbQw5oi+dEojmTM3ieHjj0zjgAIhwCryhfgmS3qGZMxRCHadyUaxHVmxg6HJdHVGuQ0PjjthKez6Q8xIvg5SfcoEjhC1uVs8UKeWemdSxx2FIXbmFILU3uANNBVPlyj6G15Yk9Mhtx6kZksPO9yDpU3ZS4kyWn9+WxqlsaLhoZjt1YOJsYG7WyZrWB0M0w1CiDNTiN2ETAX8P4f5G+Wf2vyHhTtc0aygZwWWNwlr66BZ0sSB9lka0LDAqSjoFGIvVEFxwiMo1Xy+L5uuMsbyAkm1kJJXsvi7cKZqWerb5VVVc/Vxskz0/uwbSl2qGSCHXpudCuDl923bBgdn1RzsBjzZI2vEEU8jz1wVqJydgQUyQtnR96WSyHhnDJ9gd8aylOk7yn8UohcvnJ+QRm/wOz4xdswjnk6vzzQN5RGfimER6/En5kdJ4DbMp9851ROAAbL4CXzFE54Ek5Q2/czZxMTlMih+buZxYC5DMwMgqW4erwVtpU8yiSeJn55KRMXusrCP1d8io9AjLASqgcAq6GEya9vxtoTxy1rKvzJtPqkK74rcmV+sUy3xRV7BFfgEviNF69grJbm0Mib2UQzT9VB+dEXnJIVKV8s9BCQkY1TVnMjYOV/N6OcpxEQ+qA1lLB3o5F01mxIRw7a3LmcmyPh3OxC9NBsEYWFcgaH6I1e8mpknHKQ9/53s2Yh4FcxwPaVz2DN2rIAr+hn/I6wVDJ/wckAUAKLpQEWWUjzBksuGre9JQKBbEtEDk1bwZ0EzoBMxgtWU1Arl6EBEWC19oxdMYNYlmWmo5qCPSC18JYcroJ1we4XjZuhCYjKYMqCXh1gzjtPdGROaMFqrRiUg80mCdS+PXh7ponb/eTt0EfAlEvcXPjJ29k8jmm//kdvwYBHb2/zWMM8ljyeNdV6DBSiRNde4EhXeh3ojwld+5INlEM4n6/KKU2TDGVhQkd54xuafCKv5RNBSUzHExE0gQ4PFciiOk6U+4YHwmnLyCO6+Rnn7wdaFjKXSg6VUbeRd1v6/08oS8Mnctn+gZbDjIjgc4GVY5LbKYYtei9YUqhW5IupPwZ3LbUiNXADtjPwXSCJX5hsYI4eOsihkC7HpPwRrXD0JTmG5fxgSveuqjCar6KsWKYhm6kNzNvg5Yr9ji0h+6ctreu6xBHReCntt0Qruo6us+XxgX4qOh6Kwqub8AmrP2H3Mhp260GXOkOEq8w5ERgKsaQtofFh0Lfnl9ZFo+DsqVJxTHm90qXBvayJpKQQ5emibKUg9FC2Hk4sWVrymNx7eDhiGWLtHSo106LDICyveOMtPc90PWD6rmGYFrTwG5PPhHupy2A45X1GwasQ9ykmwRjC94jRXnqub5uGDYBnudDjJQiWpt087QvCc5fEEGr0kSzu7pKOaxmO5zsQmq6GBH2gEEZpbOSWa2rnDq6cn8KeKSdWTy7WiTdDh252dsCjf6MTDssDej2IVPOowXAi89YmpfLzp/JhnpVmkkY0ZeGNEZb0f4iWItYtYQMxaLrM55vtrG47y621qWxnKEZEKt+IGKrx8ZCQhf8GoDqAHYblZAAqVML/rZwf/nky4Ks9W8jaRkEhBik+vf0hoFHTuCuQ2JCxUOBpY+X1k8Xqx87Kgc7wWQDYceOJXSGWrQEY223vqYsWpywCCDXYlFAMEHxLNtkePWkFp6rI1F1Paa7g2BcEZ6YBA6H+A/vZknNokt86PAAVEjamEvxZHPeuL1h5drxFIWbpmZK5qMWYuLjzDa/D+W4
7gfr8M+fi3vaFIGDk5y9QjoZwM6Y9FzTCI3Of6dbgPxfwnv5s2WqzcO/ihfvuf3/T7XSC27/mv3fYUa3FVbNYlJR6X1mQdnEdS+JE2RLa6UhOdkQnuEPrX9YEOacz4yj4lfXC32MoC0KTO++Mdx15K0o8K7C8iPHgDKjr/LCP3mEqhed+6fX9llyAN+ghytSEeJrT3SXVOhbNmyyuLARSbQrrzw9wFLaA57Dr2LEzPtFerKNS52OOcxj2zuFL7ew5A9Lzr22Od0i92r9wISdNfgND2G0Us/rcJT+AJCLa0WUUYsou3XlVQ2UwV4u7pDCyRDFYOhSDip83R8XQ75pMJTJXxS87k8ho2nb+Giu38gp/6XPJAHymQMfZpju4zHc0604iBB1dRiGiUmdg8lJ+4+vp+sqa4zweuAl5zX56ST7Q2tpsl17VVpCPv2FWIklXOT5XwT2cQzmrU6sMdJCUK8fnicR150bcqg5/FURoaRl14lovD6QvnuSquNnXU3yJ8WQ2nDBbJfQ0FQcXdKThTsYQlYq1mpetcxbbUdE+/hUsm5ZjLZvllnTpItiKIExXht4VgyifPn4RyDZmizyA2AuA4N3SJ/qtle3MeIvc8tuRnPNtkbtiHOcGTBVi8/iqJsBTe55QCzBXUuh0Qt3sSHTz7H5Gpvo93bEVpqo6Y10DadTGYtzqNukr+8vhl9ezTnoxePYt3O4Tcl4nOBtvjeWpfisPAmjMFxzL8C8GjieG6W6zpsuGOSswCtG6WezQQVEmU+3PeSqBpRmG4b3+xXwykc0oLWFwGL5LboyKS7/cOyufA+KJKT/bTI9ZNrbfSCdJmkNHl1GIaH40/32E9zi3uW7P5mvQ+cLzRTL9JvvRTC2ztavq4y+aHTf3rLihecs9Jmv79yeZ69CX1+towEEhEYQXwACTqBR/XwYD0ZZO/tIjThu23H5X3D6X/cq7jhqNnoLbf2XiPGPJS3KYJlTl144xfVDsUxJg2uP/</diagram></mxfile>
2305.05873/main_diagram/main_diagram.pdf ADDED
Binary file (29.8 kB). View file
 
2305.05873/paper_text/intro_method.md ADDED
@@ -0,0 +1,146 @@
1
+ # Introduction
2
+
3
+ <figure id="fig:intro" data-latex-placement="t">
4
+ <embed src="images/intro_orientedNormal.pdf" />
5
+ <figcaption> We propose SHS-Net to estimate oriented normals directly from point clouds. In contrast, previous studies usually achieve this through a two-stage paradigm using different algorithms, *i.e.*, (1) unoriented normal estimation (*e.g.*, PCA <span class="citation" data-cites="hoppe1992surface"></span>, AdaFit <span class="citation" data-cites="zhu2021adafit"></span> and HSurf-Net <span class="citation" data-cites="li2022hsurf"></span>) and (2) normal orientation (*e.g.*, MST <span class="citation" data-cites="hoppe1992surface"></span>, QPBO <span class="citation" data-cites="schertler2017towards"></span> and ODP <span class="citation" data-cites="metzer2021orienting"></span>). </figcaption>
6
+ </figure>
7
+
8
+ In computer vision and graphics, estimating normals for point clouds is a prerequisite for many techniques. As an important geometric property of point clouds, normals with consistent orientation, *i.e.*, *oriented normals*, clearly reveal the geometric structures and make significant contributions in downstream applications, such as rendering and surface reconstruction [@kazhdan2005reconstruction; @kazhdan2006poisson; @kazhdan2013screened]. Generally, the estimation of oriented normals requires a two-stage paradigm (see Fig. [1](#fig:intro){reference-type="ref" reference="fig:intro"}): (1) unoriented normal estimation from the local neighbors of the query point, and (2) normal orientation to make the normal directions globally consistent, *i.e.*, facing outward from the surface. While unoriented normals can be estimated by plane or surface fitting of the local neighborhood, determining whether the normals are facing outward or inward is ambiguous. In recent years, many excellent algorithms [@lenssen2020deep; @ben2020deepfit; @zhu2021adafit; @li2022graphfit; @li2022hsurf] have been proposed for unoriented normal estimation, while there are few methods that have reliable performance for normal orientation or directly estimating oriented normals. Estimating oriented normals from point clouds with noise, density variations, and complex geometries in an end-to-end manner is still a challenge.
9
+
10
+ The classic normal orientation methods rely on simple greedy propagation, which selects a seed point as the start and diffuses its normal orientation to the adjacent points via a minimum spanning tree (MST) [@hoppe1992surface; @konig2009consistent]. These methods are limited by error accumulation, where an incorrect orientation may degenerate all subsequent steps during the iterative propagation. Furthermore, they heavily rely on a smooth and clean assumption, which makes them easily fail in the presence of sharp edges or corners, density variations and noise. Meanwhile, their accuracy is sensitive to the neighborhood size of propagation. For example, a large size is usually used to smooth out outliers and noise, but can also erroneously include nearby surfaces. Considering that local information is usually not sufficient to guarantee robust orientation, some improved methods [@seversky2011harmonic; @wang2012variational; @schertler2017towards; @xu2018towards; @jakob2019parallel; @metzer2021orienting] try to formulate the propagation process as a global energy optimization by introducing various constraints. Since their constraints are mainly derived from local consistency, the defects are inevitably inherited, and they also suffer from cumulative errors. Moreover, their data-specific parameters are difficult to generalize to new input types and topologies.
11
+
12
+ Different from the propagation-based methods, which only consider the adjacent orientation, the volume-based approaches exploit volumetric representation, such as signed distance functions [@mullen2010signing; @mello2003estimating] and variational formulations [@walder2005implicit; @huang2019variational; @alliez2007voronoi]. They aim to divide the space into interior/exterior and determine whether point normals are facing inward or outward. Despite improvements in accuracy and robustness, these methods cannot scale to large point clouds due to their computational complexity. In general, propagation-based methods have difficulty with sharp features, while volume-based methods have difficulty with open surfaces. Furthermore, the above-mentioned methods are usually complex and require a two-stage operation, their performance heavily depends on the parameter tuning in each separated stage. Recently, several learning-based methods [@guerrero2018pcpnet; @hashimoto2019normal; @wang2022deep] have been proposed to deliver oriented normals from point clouds and have exhibited promising performance. Since they focus on learning an accurate local feature descriptor and do not fully explore the relationship between the surface normal orientation and the underlying surface, their performance cannot be guaranteed across different noise levels and geometric structures.
13
+
14
+ In this work, we propose to estimate oriented normals from point clouds by implicitly learning *signed hyper surfaces*, which are represented by MLP layers to interpret the geometric properties in a high-dimensional feature space. We learn this new geometric representation from both local and global shape properties to directly estimate normals with consistent orientation in an end-to-end manner. The insight behind our method is that determining a globally consistent normal orientation requires global context to eliminate the ambiguity of local information, since orientation is not a local property. We evaluate our method by conducting a series of qualitative and quantitative experiments on a range of point clouds with different sampling densities, noise levels, and thin and sharp structures.
15
+
16
+ Our main contributions can be summarized as follows.
17
+
18
+ - We introduce a new technique to represent point cloud geometric properties as signed hyper surfaces in a high-dimensional feature space.
19
+
20
+ - We show that the signed hyper surfaces can be used to estimate normals with consistent orientations directly from point clouds, rather than through a two-stage paradigm.
21
+
22
+ - We experimentally demonstrate that our method is able to estimate normals with high accuracy and achieves state-of-the-art results in both unoriented and oriented normal estimation.
23
+
24
+ <figure id="fig:net" data-latex-placement="t">
25
+ <embed src="images/net.pdf" style="width:95.0%" />
26
+ <figcaption> The learning pipeline of the signed hyper surfaces for oriented normal estimation. </figcaption>
27
+ </figure>
28
+
29
+ # Method
30
+
31
+ In mathematics, an explicit representation in Euclidean space expresses the $z$ coordinate of a point $p$ in terms of $x$ and $y$, *i.e.*, $z \!=\! f (x, y)$. Such a surface is called an explicit surface, also called a height field. Another symmetric representation is $F(x, y, z) \!=\! 0$, where $F$ implicitly defines a locus called an implicit surface, also called a scalar field [@bloomenthal1997introduction]. The implicit surface is a zero iso-surface of $F$, *i.e.*, the point set $\{p \in \mathbb{R}^3 : F(p) \!=\! 0\}$ is a surface implicitly defined by $F$. The explicit surface is usually used in surface fitting-based normal estimation, such as jet fitting [@cazals2005estimating], while the implicit surface is widely used in surface reconstruction. Generally, an explicit surface, *i.e.*, $z \!=\! f(x,y)$, can always be rewritten as an implicit surface, *i.e.*, $F(x,y,z) \!=\! z-f(x,y) \!=\! 0$. These two surface representations have the same tangent plane at a given point, where the normal is defined. See the supplementary material for details.
32
+
33
+ **Explicit Surface Fitting**. We employ the widely used $n$-jet surface model [@cazals2005estimating] to briefly review the explicit surface fitting for normal estimation. It represents the surface by a polynomial function $J_{n}:\mathbb{R}^2 \!\to\! \mathbb{R}$, which maps a coordinate $(x,y)$ to its height $z$ that is not in the tangent space by $$\begin{equation}
34
+ z \doteq J_{\alpha,n}(x,y) = \sum_{k=0}^{n} \sum_{j=0}^{k} {\alpha}_{k-j, j} x^{k-j} y^{j},
35
+ \end{equation}$$ where $\alpha$ is the coefficient vector that defines the surface function. In order to find the optimal solution, the least squares approximation strategy is usually adopted to minimize the sum of the squared errors between the (ground-truth) height and the jet value over a point set $\{ p_i\}_{i=1}^N$, $$\begin{equation}
36
+ \label{eq:ls}
37
+ J_{\alpha,n}^{\ast} = \mathop{\rm argmin}_{\alpha} \sum_{i=1}^{N} \|z_i - J_{\alpha,n}(x_i,y_i) \|^2.
38
+ \end{equation}$$ If $\alpha \!=\! (\alpha_{0,0},\alpha_{1,0},\alpha_{0,1},\cdots,\alpha_{0,n})$ is solved, then the normal at point $p$ on the fitted surface is computed by $$\begin{equation}
39
+ \label{eq:fit_normal}
40
+ \mathbf{n}_{p} = h(\alpha) = (-\alpha_{1,0}, -\alpha_{0,1}, 1) / \sqrt{1 + \alpha_{1,0}^2 + \alpha_{0,1}^2} ~~.
41
+ \end{equation}$$
42
+
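+ To make the fitting step concrete, the following is a minimal NumPy sketch (our own illustration, not the paper's code) of a 1-jet fit: a least-squares plane fit whose coefficients yield the unoriented normal via the formula above.
+
+ ```python
+ import numpy as np
+
+ def jet1_fit_normal(patch):
+     """Fit z = a00 + a10*x + a01*y to a local patch (N, 3) by least squares
+     and return the unoriented normal from the jet coefficients."""
+     x, y, z = patch[:, 0], patch[:, 1], patch[:, 2]
+     A = np.stack([np.ones_like(x), x, y], axis=1)   # design matrix of a 1-jet
+     (a00, a10, a01), *_ = np.linalg.lstsq(A, z, rcond=None)
+     n = np.array([-a10, -a01, 1.0])                 # n_p = h(alpha)
+     return n / np.linalg.norm(n)
+
+ # toy patch sampled near the plane z = 0.2x - 0.1y
+ rng = np.random.default_rng(0)
+ xy = rng.uniform(-1, 1, size=(50, 2))
+ z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 0.01 * rng.normal(size=50)
+ print(jet1_fit_normal(np.column_stack([xy, z])))    # ~ (-0.195, 0.098, 0.976)
+ ```
+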
43
+ <figure id="fig:feat" data-latex-placement="t">
44
+ <embed src="images/feat.pdf" />
45
+ <figcaption> Feature encoding network in patch and shape encoding. </figcaption>
46
+ </figure>
47
+
48
+ **Implicit Surface Learning**. In recent years, many learning-based approaches have been proposed to represent surfaces by implicit functions, such as the signed distance function (SDF) [@park2019deepsdf] and the occupancy function [@mescheder2019occupancy]. The signed (or oriented) distance function is the shortest distance of a given point $p\!=\!(x_{0},y_{0},z_{0})$ to the closest surface ${\boldsymbol S}$ in a metric space, with the sign determined by whether the point is inside ($F(p)\!<\!0$) or outside ($F(p)\!>\!0$) of the surface. The underlying surface is implicitly represented by the iso-surface of $F(p)\!=\!0$. In the surface reconstruction task, a deep network is usually adopted to encode a 3D shape into a latent code, which is fed into a decoder together with query points to predict signed distances. If an implicit surface function is continuous and differentiable, the formula of the tangent plane at a regular point $p$ (whose gradient is non-null) is $F_x(p)(x-x_0)+F_y(p)(y-y_0)+F_z(p)(z-z_0) \!=\! 0$ and its normal (*i.e.*, the perpendicular direction) is $\mathbf{n}_p\!=\!\nabla F(p) / \|\nabla F(p)\|$.
49
+
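+ For the implicit side, the toy sketch below (our own example, with a sphere SDF standing in for a learned $F$) recovers the normal as the normalized gradient $\nabla F(p) / \|\nabla F(p)\|$ via central finite differences.
+
+ ```python
+ import numpy as np
+
+ def sphere_sdf(p, radius=1.0):
+     return np.linalg.norm(p) - radius              # F(p) < 0 inside, > 0 outside
+
+ def implicit_normal(F, p, h=1e-5):
+     """Normal of the implicit surface F = 0 at p via central differences."""
+     grad = np.array([(F(p + h * e) - F(p - h * e)) / (2 * h) for e in np.eye(3)])
+     return grad / np.linalg.norm(grad)
+
+ p = np.array([0.6, 0.8, 0.0])                      # a point on the unit sphere
+ print(implicit_normal(sphere_sdf, p))              # ~ (0.6, 0.8, 0.0)
+ ```
+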
50
+ As shown in Fig. [2](#fig:net){reference-type="ref" reference="fig:net"}, we propose to implicitly learn signed hyper surfaces in the feature space for estimating oriented normals. In the following sections, we first introduce the representation of signed hyper surfaces by combining the characteristics of the above two surface representations. Then, we design an attention-weighted normal prediction module to solve the oriented normals of query points from signed hyper surfaces. Finally, we introduce how to learn this new surface representation from patch encoding and shape encoding using our designed loss functions.
51
+
52
+ We formulate the surface function $F(x,y,z) \!=\! z-f(x,y) \!=\! 0$ as a more general format $F(z_1, z_2) \!=\! z_1 - z_2 \!=\! 0$. Similarly, the signed hyper surface is implicitly learned by taking the latent encodings of point clouds as inputs and outputting an approximation of the surface in feature space, $$\begin{equation}
+ f_{\boldsymbol S}(\chi) \approx \mathcal{E}_{\theta}(\chi | z_1, z_2),~ z_1 = e_{\varphi}({\boldsymbol P}^1_{\chi}),~ z_2 = e_{\psi}({\boldsymbol P}^2_{\chi}),
+ \end{equation}$$ where $\mathcal{E}$ is implemented by a neural network with parameters $\theta$ that is conditioned on two latent vectors $z_1,z_2 \!\in\! \mathbb{R}^c$, which are extracted from point clouds by the encoders $e_{\varphi}$ and $e_{\psi}$, respectively. ${\boldsymbol P}^1_{\chi}$ and ${\boldsymbol P}^2_{\chi}$ are subsampled sets of the raw point cloud $\mathcal{P}$, *i.e.*, point patches around a given point $\chi$.
55
+
56
+ Similar to existing unoriented normal estimation methods [@li2022hsurf; @ben2020deepfit; @zhu2021adafit; @guerrero2018pcpnet], we use a local patch ${\boldsymbol p}_q$ to capture the local geometry for accurately describing the surface pattern around a query point $q$, $$\begin{equation}
57
+ f^{\mathbf{n}}_{\boldsymbol p}(q) = \mathcal{E}^{\mathbf{n}}_{\theta}(q | z^{\mathbf{n}}_q),~
58
+ z^{\mathbf{n}}_q = e_{\varphi}({\boldsymbol p}_q).
59
+ \end{equation}$$ Since the interior/exterior of a surface cannot be determined reliably from a local patch, we take a global subsample set ${\boldsymbol P}_q$ from the point cloud $\mathcal{P}$ to provide additional information to estimate the sign at point $q$, $$\begin{equation}
60
+ f^s_{\boldsymbol P}(q) = \mathrm{sgn}\big(g^s(q)\big) = \mathrm{sgn}\big(\mathcal{E}^{s}_{\theta}(q | z^s_q)\big),~
61
+ z^s_q = e_{\psi}({\boldsymbol P}_q),
62
+ \end{equation}$$ where $\mathrm{sgn}(\cdot)$ is the signum function, and $g^s(q)$ denotes the logit of the probability that $q$ has a positive sign. Thus, the signed hyper surface function at point $q$ is formulated as $$\begin{equation}
63
+ f_{\boldsymbol S}(q) = f^{\mathbf{n}}_{\boldsymbol p}(q) \cdot f^s_{\boldsymbol P}(q) = \mathcal{E}^{\mathbf{n},s}_{\theta}(q | z^{\mathbf{n}}_q, z^s_q).
64
+ \end{equation}$$ Different from the surface reconstruction task that learns SDF by representing a surface as the zero-set of the SDF, we do not learn a distance field of points with respect to the underlying surface.
65
+
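+ The practical effect of the sign factor is easy to see: given the unoriented normal from the patch branch and a sign logit from the global branch, the oriented normal is simply their product. A two-line sketch with toy values and hypothetical variable names:
+
+ ```python
+ import numpy as np
+
+ n_unoriented = np.array([0.0, 0.0, 1.0])   # from the patch branch f^n
+ sign_logit = -2.3                          # from the global branch g^s
+ sign = 1.0 if sign_logit > 0 else -1.0     # sgn(g^s(q))
+ n_oriented = sign * n_unoriented           # f_S(q) = f^n(q) * f^s(q)
+ print(n_oriented)                          # flipped to be globally consistent
+ ```
+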
66
+ To simplify notation, we denote $\mathcal{E}^{\mathbf{n},s}_{\theta}(q | z^{\mathbf{n}}_q, z^s_q)$ as $\mathcal{S}_{\theta}(\mathcal{X},\mathcal{Y})$, where $z^{\mathbf{n}}_q \!=\! \mathcal{X} \!\in\! \mathbb{R}^c$ and $z^s_q \!=\! \mathcal{Y} \!\in\! \mathbb{R}^c$ are high-dimensional latent vectors. According to the explicit surface fitting, we formulate the signed hyper surface $\mathcal{S}_{\theta}:\mathbb{R}^{2c} \!\to\! \mathbb{R}^c$ as a feature-based polynomial function [@li2022hsurf] $$\begin{equation}
67
+ \label{eq:poly}
68
+ \mathcal{S}_{\theta,\mu}(\mathcal{X},\mathcal{Y}) = \sum_{k=0}^{\mu} \sum_{j=0}^{k} \theta_{k-j,j} ~ \mathbf{x}_{k-j} \mathbf{y}_j = \theta ~ [\mathcal{X} : \mathcal{Y}],
69
+ \end{equation}$$ where $[\,\cdot : \cdot\,]$ denotes feature fusion through concatenation, and $\mu$ denotes the number of fused items.
70
+
71
+ Similar to Eq. [\[eq:ls\]](#eq:ls){reference-type="eqref" reference="eq:ls"}, the bivariate function $\mathcal{S}_{\theta,\mu}(\mathcal{X},\mathcal{Y})$ aims to map a feature pair $(\mathcal{X}_i, \mathcal{Y}_i)$ to its ground-truth value $\mathcal{Z}_i=\hat{\mathcal{S}}(\mathcal{X}_i, \mathcal{Y}_i) \in \mathbb{R}^c$ in the feature space, *i.e.*, $$\begin{equation}
72
+ \mathcal{S}_{\theta,\mu}^{\ast} = \mathop{\rm argmin}_{\theta,\mu} \sum_{i=1}^{N} \| \mathcal{Z}_i - \mathcal{S}_{\theta,\mu}(\mathcal{X}_i,\mathcal{Y}_i) \|^2 .
73
+ \end{equation}$$ To solve the oriented normal $\vec{\mathbf{n}}$ from signed hyper surfaces, we introduce a normal prediction module $\mathcal{H}(\cdot)$, thus $$\begin{equation}
74
+ \label{eq:normal_out}
+ \mathcal{S}_{\theta,\mu}^{\ast} = \mathop{\rm argmin}_{\theta,\mu} \sum_{i=1}^{N} \| \mathcal{H}(\mathcal{Z}_i) - \mathcal{H}(\mathcal{S}_{\theta,\mu}(\mathcal{X}_i,\mathcal{Y}_i)) \|^2 .
78
+ \end{equation}$$ Finally, the oriented normal is optimized by $$\begin{equation}
79
+ \label{eq:loss}
+ \mathcal{S}_{\theta,\mu}^{\ast} = \mathop{\rm argmin}_{\theta,\mu} \sum_{i=1}^{N} \| \hat{\vec{\mathbf{n}}}_i - \vec{\mathbf{n}}_i \|^2.
83
+ \end{equation}$$
84
+
85
+ <figure id="fig:atten" data-latex-placement="t">
86
+ <embed src="images/atten.pdf" />
87
+ <figcaption> Attention-weighted normal prediction module <span class="math inline">ℋ(⋅)</span>. </figcaption>
88
+ </figure>
89
+
90
+ <figure id="fig:errorMap" data-latex-placement="t">
91
+ <embed src="images/errorMap.pdf" style="width:95.0%" />
92
+ <figcaption> Visualization of the oriented normal error on datasets PCPNet (left) and FamousShape (right). The angle error is mapped to a heatmap ranging from <span class="math inline">0<sup>∘</sup></span> to <span class="math inline">180<sup>∘</sup></span>. The purple indicates the same direction as the ground-truth, while the red indicates the opposite. </figcaption>
93
+ </figure>
94
+
95
+ **Attention-weighted Normal Prediction $\mathcal{H}(\cdot) \!:\! \mathbb{R}^c \!\to\! \mathbb{R}^4$**. As shown in Fig. [4](#fig:atten){reference-type="ref" reference="fig:atten"}, we use an attention mechanism to recover the oriented normal $\vec{\mathbf{n}}_q$ of the query point $q$ from $c$-dimensional fused surface embedding $z_{q}$, $$\begin{equation}
96
+ \label{eq:output}
97
+ (\dot{\mathbf{n}}_q, s) \!=\! \mathcal{O}\big( \mathcal{V}(o_{q}) \otimes {\rm MAX} \big\{ {\rm softmax}_{\mathcal{N}_q} \big( \mathcal{Q}_j(o_{q})_{j=1}^m \big) \big\} \big),
98
+ \end{equation}$$ where $o_{q} \!=\! \tau \cdot z_{q}, \tau \!=\! {\rm sigmoid}(\mathcal{I}(z_{q}))$. $\mathcal{O}, \mathcal{V}, \mathcal{Q}$ and $\mathcal{I}$ are MLPs. $m\!=\!64$ is the feature dimension size. First, a multi-head strategy is adopted to deliver $m$ relative weights $\mathcal{Q}_j(o_{q})$, which are normalized by softmax over the neighbors $\mathcal{N}_q$ into positive interpolation weights. Then, the feature maxpooling ${\rm MAX}\{\cdot\}$ is performed to produce attention weights for each point. Meanwhile, the feature embedding $o_q$ is refined through another branch $\mathcal{V}$ and modulated as the weighted sum through matrix multiplication. Finally, the normal and its sign (*i.e.*, orientation) $\vec{\mathbf{n}}_q \!=\! (\mathbf{n}_q \!\in\! \mathbb{R}^3, s \!\in\! \mathbb{R})$ is predicted as a 4D vector by $\mathcal{O}$, and $\mathbf{n}_q \!=\! {\dot{\mathbf{n}}_q} / {\|\dot{\mathbf{n}}_q\|}$.
99
+
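+ Below is a schematic NumPy sketch of this attention weighting; single random matrices stand in for the MLPs $\mathcal{I}, \mathcal{Q}, \mathcal{V}, \mathcal{O}$, so it only illustrates the tensor flow of the module, not the trained network.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ N, c, m = 128, 64, 64                      # neighbors, feature dim, heads
+
+ def mlp(d_in, d_out):                      # hypothetical one-layer stand-in
+     W = rng.normal(size=(d_in, d_out)) / np.sqrt(d_in)
+     return lambda x: x @ W
+
+ I, Q, V, O = mlp(c, 1), mlp(c, m), mlp(c, c), mlp(c, 4)
+
+ z = rng.normal(size=(N, c))                # fused surface embeddings z_q
+ tau = 1.0 / (1.0 + np.exp(-I(z)))          # sigmoid gate, shape (N, 1)
+ o = tau * z                                # gated embedding o_q
+ logits = Q(o)                              # m relative weights per point
+ e = np.exp(logits - logits.max(axis=0))
+ w = e / e.sum(axis=0)                      # softmax over the N neighbors
+ attn = w.max(axis=1, keepdims=True)        # MAX over the m heads
+ pooled = (attn * V(o)).sum(axis=0)         # attention-weighted sum, shape (c,)
+ out = O(pooled)                            # 4D output: normal and sign logit
+ n_q = out[:3] / np.linalg.norm(out[:3])
+ print(n_q, out[3])
+ ```
+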
100
+ **Patch Encoding**. Given a neighborhood point patch ${\boldsymbol p}_q$ of the query point, our local latent code extraction layer $\mathcal{F}$ is formulated as $$\begin{equation}
101
+ \label{eq:local}
102
+ \dot{z}^{\mathbf{n}}_{i} \!=\! \mathcal{A} \left(\mathcal{B} \left({\rm MAX} \big\{ \mathcal{C} (w_{j} \cdot z^{\mathbf{n}}_{j}) \big\}_{j=1}^{N_{l}} \right), z^{\mathbf{n}}_{i} \right),
103
+ \end{equation}$$ where $i\!=\!1,\cdots,N_{l+1}$, $l$ is the neighborhood scale index and $N_{l+1} \!\leqslant\! N_{l}$. $z^{\mathbf{n}}_{i}\!=\!\mathcal{D}(p_i), p_i \!\in\! {\boldsymbol p}_q$ is the per-point feature in the patch. $\mathcal{A}, \mathcal{B}, \mathcal{C}$ and $\mathcal{D}$ are MLPs. ${\rm MAX}\{\cdot\}$ denotes the feature maxpooling over $N_{l}$-nearest neighbors of the query point $q$. $w$ is a distance-based weight given by $$\begin{equation}
104
+ \label{eq:weight}
+ w_{j} = \frac{\beta_j}{\sum_{i=1}^{N}{\beta_i}}, ~ \beta_i = {\rm sigmoid} \big(\gamma_1 - \gamma_2 {||p_i - q||}_2 \big),
108
+ \end{equation}$$ where $\gamma_1$ and $\gamma_2$ are learnable parameters with an initial value of $1.0$. The weight $w$ makes the layer focus on the point $p_i$ which is closer to the query point $q$. As shown in Fig. [3](#fig:feat){reference-type="ref" reference="fig:feat"}, we stack two layers $\mathcal{F}$ to form a block, which is further stacked to build our patch feature encoder $e_{\varphi}$.
109
+
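+ The distance-based weight is straightforward to reproduce; a minimal sketch (with the learnable $\gamma_1, \gamma_2$ frozen at their initial value of $1.0$) is given below.
+
+ ```python
+ import numpy as np
+
+ def distance_weights(patch, q, gamma1=1.0, gamma2=1.0):
+     """Weights over the patch points: points closer to q get larger w_j."""
+     d = np.linalg.norm(patch - q, axis=1)
+     beta = 1.0 / (1.0 + np.exp(-(gamma1 - gamma2 * d)))   # sigmoid
+     return beta / beta.sum()
+
+ rng = np.random.default_rng(0)
+ patch = rng.normal(size=(32, 3))
+ print(distance_weights(patch, q=np.zeros(3)).round(3))
+ ```
+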
110
+ **Shape Encoding**. Since the global subsample set ${\boldsymbol P}_q \!=\! \{p_i\}_{i=1}^{N_{\boldsymbol P}}$ can be seen as a patch with points distributed globally on the shape surface, we adopt a network architecture similar to the patch feature encoder to get the global latent code $z^s_q$. To obtain ${\boldsymbol P}_q$, we use a probability-based sampling strategy [@erler2020points2surf], which keeps more points close to the query point $q$. It samples points according to a density gradient that decreases with increasing distance from the point $q$. Moreover, we find that adding more points from random sampling brings better results. Then, the gradient of a point is calculated by $$\begin{equation}
111
+ \label{eq:sample}
+ \upsilon (p_i) = \left\{
115
+ \begin{aligned}
116
+ & \left[ 1 - 1.5 \frac{\|p_i-q\|_2}{\max_{p_j \in \mathcal{P}} \|p_j-q\|_2} \right]_{0.05}^1 \\
117
+ & ~1 ~~~\text{if}\ i \in \mathcal{R}
118
+ \end{aligned}
119
+ \right.
120
+ \end{equation}$$ where $[\cdot]_{0.05}^1$ indicates value clamping. $\mathcal{R}$ is a random sample index set of $\mathcal{P}$ with $N_{\boldsymbol P}/1.5$ items. Finally, the sampling probability of a point $p_i \!\in\! \mathcal{P}$ is $\rho(p_i) \!=\! {\upsilon(p_i)}/{\sum_{p_j \in \mathcal{P}} \upsilon(p_j)}$.
121
+
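+ A small sketch of this sampling rule (our own reading of the formula above; variable names are hypothetical) is as follows.
+
+ ```python
+ import numpy as np
+
+ def sampling_probs(points, q, n_sub, seed=0):
+     """Distance-decaying sampling with a random subset kept at full value."""
+     d = np.linalg.norm(points - q, axis=1)
+     v = np.clip(1.0 - 1.5 * d / d.max(), 0.05, 1.0)    # clamped gradient
+     rng = np.random.default_rng(seed)
+     R = rng.choice(len(points), int(n_sub / 1.5), replace=False)
+     v[R] = 1.0                                         # items in the random set
+     return v / v.sum()                                 # rho(p_i)
+
+ pts = np.random.default_rng(1).normal(size=(1000, 3))
+ p = sampling_probs(pts, q=pts[0], n_sub=300)
+ P_q = np.random.default_rng(2).choice(len(pts), 300, replace=False, p=p)
+ ```
+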
122
+ **Feature Fusion**. In order to allow each point in the local patch to have global information for determining the normal orientation, we first use maxpooling and repetition operations to make the output global latent code have the same dimension as the local latent code. Then, the two kinds of codes are fused by concatenation, *i.e.*, $[z^{\mathbf{n}}_q : z^s_q]$ in Eq. [\[eq:poly\]](#eq:poly){reference-type="eqref" reference="eq:poly"}.
123
+
124
+ For the query point $q$, we constrain its unoriented normal and its normal sign (*i.e.*, orientation), respectively. To learn an accurate unoriented normal, we employ the ground-truth $\hat{\mathbf{n}}_q$ to calculate a normal vector $\sin$ loss [@ben2020deepfit] $$\begin{equation}
+ \mathcal{L}_{sin} = \|\mathbf{n}_q \times \hat{\mathbf{n}}_q \|.
128
+ \end{equation}$$ For the normal orientation, we adopt the binary cross entropy $H$ [@erler2020points2surf] to calculate a sign classification loss $$\begin{equation}
+ \mathcal{L}_{sgn} = H \Big( \sigma \big( g^s(q)\big),\ [f_{\boldsymbol S}(q) > 0] \Big),
132
+ \end{equation}$$ where $\sigma$ is a logistic function that converts the sign logits to probabilities. $[f_{\boldsymbol S}(q) \!>\! 0]$ is $1$ if the estimated normal faces outward of the surface ${\boldsymbol S}$ and $0$ otherwise. Our method achieves a significant performance boost by dividing the oriented normal estimation into unoriented normal regression and sign classification, instead of directly regressing the oriented normals of query points (see Sec. [5.3](#sec:ablation){reference-type="ref" reference="sec:ablation"}).
133
+
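+ Both loss terms are simple to compute per query point; a minimal sketch (our own, with toy inputs) follows.
+
+ ```python
+ import numpy as np
+
+ def sin_loss(n_pred, n_gt):
+     """||n x n_hat||: zero when the two unit directions coincide."""
+     return np.linalg.norm(np.cross(n_pred, n_gt))
+
+ def sign_loss(sign_logit, faces_outward):
+     """Binary cross entropy on the orientation sign."""
+     p = 1.0 / (1.0 + np.exp(-sign_logit))      # sigma(g^s(q))
+     y = 1.0 if faces_outward else 0.0
+     return -(y * np.log(p) + (1 - y) * np.log(1 - p))
+
+ print(sin_loss(np.array([0, 0, 1.0]), np.array([0, 0.1, 0.995])))
+ print(sign_loss(sign_logit=2.0, faces_outward=True))
+ ```
+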
134
+ To make the model also pay attention to the neighbor points $p_i \!\in\! {\boldsymbol p}_q$, we compute a weighted mean squared error (MSE) $$\begin{equation}
135
+ \label{eq:loss_n}
+ \mathcal{L}_{mse} \!=\! \frac{1}{N} \sum_{i=1}^{N} \tau_{i} \|\vec{\mathbf{n}}_i - \hat{\vec{\mathbf{n}}}_i \|^2,
139
+ \end{equation}$$ where the neighborhood point normals $\vec{\mathbf{n}} \!=\! \delta({z_{q}})$ are predicted from the surface embedding $z_{q}$ by an MLP layer $\delta : \mathbb{R}^c \!\to\! \mathbb{R}^3$. Moreover, we add a loss term according to coplanarity [@zhang2022geometry] to facilitate the learning of $\tau$ in Eq.[\[eq:output\]](#eq:output){reference-type="eqref" reference="eq:output"}, $$\begin{equation}
+ \mathcal{L}_{\tau} = \frac{1}{N} \sum_{i=1}^{N}(\tau_{i} - \hat{\tau}_{i})^2,
143
+ ~~ \hat{\tau}_{i} = \exp \left(- \frac{(p_i \cdot \hat{\mathbf{n}}_q)^2}{\xi^2} \right),
144
+ \end{equation}$$ where $\xi \!=\! \max (0.05^2, ~0.3 \sum_{i=1}^{N}(p_i \cdot \hat{\mathbf{n}}_q)^2 / N )$. In summary, our final training loss for oriented normal estimation is $$\begin{equation}
145
+ \mathcal{L} = \lambda_1 \mathcal{L}_{sin} + \lambda_2 \mathcal{L}_{sgn} + \lambda_3 \mathcal{L}_{mse} + \lambda_4 \mathcal{L}_{\tau} ~,
146
+ \end{equation}$$ where $\lambda_1\!=\!0.1$, $\lambda_2\!=\!0.1$, $\lambda_3\!=\!0.5$ and $\lambda_4\!=\!1.0$ are factors.
2305.07247/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-10-21T00:02:58.626Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36" etag="g7L6dv9zKMRDPLG9_v-w" version="20.4.0" type="device"><diagram id="64I_ZNPVAmwraxjgGnCM" name="Page-1">7Ztbb6s4EMc/TR4jEcileWybnO2Rthdtt93HlRcmiVVjU9s0aT/92mBysdPTc7Yk4HSlSg1jY8jvP4xnbNKJLtPVbxxli2uWAOmEQbLqRJNOGPb6YdjRf0HyWlpG435pmHOcmE4bwz1+A2MMjDXHCYidjpIxInG2a4wZpRDLHRvinC13u80Y2b1qhubgGO5jRFzrXziRi+ruqvvTDVeA5wtz6WhkGlJUdTYGsUAJW26ZomknuuSMyfJTuroEouFVXMrzvr3Tur4xDlT+zAl8+PbH3d+z32+fr77f3uLnx8frh64Z5QWR3HxhgWkumDpJwcIr0P8FnlNEzLeQrxUaznKagB6914kulgss4T5DsW5dKmdQtoVMiWmeYUIuGWG8ODdKEJzNYmUXkrMnqFooo6A7MyqNK/T66tjcJHAJq3e/fW/NVDkjsBQkf1VdzAlDo0LlhmaA5bampstiS86BsSHjRvP1wBvS6oOB/QvgQwf84/R+MvUbchi0jHK0x73TTLENg8nNjd+wo37LYPcd2AlQhlVAmWvMMeNQxsV4UZh8ht8/axn8gQN/mhPg3WvE81eUIr9xD8OW4R46uHOqUpAES8y8nytHbZssRz/KUjiiCUtPM03pDZpGf+agn0zurv2GbKcpzVMeO5QfKMguwU/gN2o7SWkedVW4bbH+B8VPS8R1MLm/8Ju3nZe0gLdbYiZYxBwkfkN6tiyKd85V8V4c+EzfTlNaQN+tM082TWkBbbcEerw7uao+jBrn7FY7UqWBYsZ4Clz4jdueMluA2612FOkTnTFbgNstdzIOCY4l410zVSooXlO3Z8oWUHdzcJzqzYEwSJBEAqTnxNsGPHQTcRXEMS22HE5u0ox6jfN2U0FEJHCqsvAXOMFI3jzyyI3kDmKgybnep1RHMUFC4HiX6kaCQB2p3t8wqdpKstW+ZK9sN1THa6iQOFucHyLdQjb4ATEOpPSe7eH3YTRXuGNYXfi9mBRZAwiW8xjMORspnGHGY2scS0+J+BykM04h6fo7f0Jld43s8yp7pGK0S3/0H1W0hokGvaOq2Heno044JNLEqh05h885qxq6olDqXHXoDbNVoVfVrj7N9f/qtYgMZ0DUDFcNrG60HLvs5niNioVy10/2x9KtwGtMiOA51c6mXASU/UJHVhwjcm4aUpwk+jJ7g/quN9YRmiu626H4bI9L2o9ufZuT7uz3y/KG78g7ReLVL0k5k+XaWzTpjgNrOg7r0bwbjvfGhS0XGB/VA9yXAWrzgKuiFv7fA2wPsOdl1wVGR3UBd32uthg/qYpEn7ygBpGtHK7/kzn34TR21wYrTUSG6Kc0vlAKV8XpWuZy1FOX2a5nm9fZXZSsTecbyLneHQkoyCXjT+LLyR1FH8sdHlXufaV0TaH7T73sVL6lpm52wRLx5aJ4fzj4UO/oqHrvK6pr0ruI34FgRCH7ckrbK+/NB3J34b02pb/TGSgw8deruO3N8cZlrpZzDiHzA8V6AAlCeha6nTKsBum71tzdG7uLL0etvAfu2ssLFnmRYcULiJ8ccbzadxi37VcPA3elI+Ms014K6vkI2Kzj/26PTb35t38G7uqCxKkuGAXwkry+EI49f0HFJt/8VvLggMniowr3iXlR0aeZ5QBCN55DDF2dORTTiH5Fw++nytkLPdxDpQ43Pzst9882P96Npv8C</diagram></mxfile>
2305.07247/main_diagram/main_diagram.pdf ADDED
Binary file (22.6 kB). View file
 
2305.07247/paper_text/intro_method.md ADDED
@@ -0,0 +1,259 @@
1
+ # Introduction
2
+
3
+ Time series data is extensively studied in various fields such as finance (Xiong & Pelger, 2023), healthcare (Silva et al., 2012), and meteorology. However, incomplete or partial observations, equipment failures, and human errors may inevitably lead to the missing value problem, severely limiting the interpretation of the time series. For instance, the inherent illiquidity of certain assets can result in the occurrence of missing values, which in turn impacts our ability to devise reliable trading strategies (Christoffersen et al., 2017). In populations of Intensive Care Units (ICUs), predicting mortality rates based on time-series observations of vital signs is essential (Silva et al., 2012). However, the presence of missing data has greatly limited the efficacy of medications and surgical treatments.
8
+
9
+ One standard approach to tackle such a problem is to leverage score-based generative models (SGMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021a;b), which propose to recover the data distribution through a backward process that estimates the scores of posterior distributions conditioned on the observed data. This conditional nature directly motivates the study of Conditional Score-based Diffusion models for Imputation (CSDI) (Tashiro et al., 2021). CSDI is able to learn the temporal-feature patterns well and achieves state-of-the-art performance in probabilistic time series imputation. However, transporting between terminal distributions is often quite expensive for SGMs and CSDI, which requires extensive computations and hyperparameter tuning. As such, a more efficient algorithm is needed to reduce the transport cost.
10
+
11
+ The Schrödinger bridge problem (SBP) was initially proposed to solve problems in quantum mechanics and can be transformed into the entropy-regularized optimal transport (EOT) (De Bortoli et al., 2021; Léonard, 2014a; Chen et al., 2021b; Nutz & Wiesel, 2022). Solving the EOT formulation gives rise to the iterative proportional fitting (IPF) algorithm (Kullback, 1968; Rüschendorf, 1995), which provides a principled paradigm to minimize the transport cost and facilitates the estimation of score functions to generate samples of higher quality; SBP further enables the generalization of linear Gaussian priors to non-linear families with more acceleration potential (Chen et al., 2021a; Bunne et al., 2023; Pavon et al., 2021; Deng et al., 2020; 2022).
12
+
13
+ Despite the theoretical potential, the existing SBP-based generative models assume we can obtain the exact projections for the IPF algorithm, but in practice these projections are often only approximated by deep neural networks (De Bortoli et al., 2021) or Gaussian processes (Vargas et al., 2021). In order to fill the gap, we extend the IPF algorithm by allowing for approximated projections and refer to it as the approximate IPF (aIPF) algorithm.
14
+
15
+ <sup>\*</sup>Equal contribution (Alphabetical order) <sup>1</sup>Machine Learning Research, Morgan Stanley, NY <sup>2</sup> School of Computing, University of Utah (Fang completed part of the work while interning at Morgan Stanley) <sup>3</sup>Department of Mathematics, Emory University. Correspondence to: Wei Deng <weideng056@gmail.com>.
16
+
17
+ We further conduct theoretical analysis for aIPF based on optimal transport theory, which deepens the understanding of training budgets in score approximations. Empirically, we apply the SBP-based generative models to probabilistic time series imputation and demonstrate that minimizing the transport cost improves performance. We summarize our contributions as follows:
18
+
19
+ - We show a first convergence analysis for Schrödinger bridge with approximated projections and characterize the relation between training errors and the number of iterations. Our theory motivates future research for devising provably convergent Schrödinger bridge (SB) algorithms and paves the way for understanding when SB is faster than SGMs. To bridge the gap between theoretical understanding and practical algorithms, we also draw connections between the aIPF algorithm and the divergence-based likelihood training of forward and backward stochastic differential equations (FB-SDEs).
20
+ - We apply the Schrödinger bridge algorithm to probabilistic time series imputation. We show that optimizing the transport cost visibly improves the performance on synthetic data and achieves the state-of-the-art performance on real-world datasets.
21
+
22
+ # Method
23
+
24
+ The score-based generative models (SGMs; Song et al., 2021b) have become the go-to framework for generative modeling. SGMs first inject noise into the data and then recover it from a backward process (Anderson, 1982):
25
+
26
+ $$d\mathbf{x}_t = \mathbf{f}(\mathbf{x}_t, t)dt + g(t)d\mathbf{w}_t, \tag{1a}$$
27
+
28
+ $$d\mathbf{x}_{t} = \left[ \mathbf{f}(\mathbf{x}_{t}, t) - g(t)^{2} \nabla \log p_{t}(\mathbf{x}_{t}) \right] dt + g(t) d\bar{\mathbf{w}}_{t}, \quad (1b)$$
29
+
30
+ where $\{\mathbf{x}_t\}_{t=0}^{T} \in \mathbb{R}^{d}$<sup>§</sup>, $\mathbf{x}_0 \sim p_{\text{data}}$, and $\mathbf{x}_T \sim p_{\text{prior}}$; $\mathbf{f} \equiv \mathbf{f}(\mathbf{x}_t, t)$ is the vector field; $g \equiv g(t)$ is the diffusion term; $\mathbf{w}_t$ is the standard Brownian motion; $\bar{\mathbf{w}}_t$ is a Brownian motion with time moving backward from $T$ to $0$; $p_t$ is the marginal density of the forward process (1a) at time $t$. The score function $\nabla \log p_t(\cdot)$ is approximated via a model $\mathbf{s}_\theta(\cdot, t)$; $p_{\text{data}}$ is simulated via the backward process (1b) starting at $\mathbf{x}_T$. SGMs (Ho et al., 2020) aim to train $\mathbf{s}_\theta(\cdot, t)$ by minimizing the mean squared error between the ground-truth score and the estimator, $\mathbb{E}\big[\lambda(t)\,\|\mathbf{s}_\theta(\mathbf{x}_t, t) - \nabla \log p_t(\mathbf{x}_t \mid \mathbf{x}_0)\|_2^2\big]$, where the weight $\lambda(t)$ is set manually. Song et al. (2021a) propose to maximize the likelihood to learn $\mathbf{s}_\theta(\mathbf{x}, t)$ such that
31
+
32
+ $$\log p_0^{\text{SDE}}\left(\mathbf{x}_0\right) \ge \mathbb{E}_{p_{0T}(\cdot|\mathbf{x}_0)}\left[\log p_T\left(\mathbf{x}_T\right)\right] - \frac{1}{2} \int_0^T \mathbb{E}_{p_{0t}(\cdot|\mathbf{x}_0)}\left[g^2 \left\|\mathbf{s}_t\right\|_2^2 + 2\nabla \cdot \left(g^2 \mathbf{s}_t - \boldsymbol{f}\right)\right] dt,$$
33
+
34
+ where $\mathbf{s}_t = \mathbf{s}_\theta(\mathbf{x}_t, t)$ and $p_{0t}(\cdot \mid \mathbf{x}_0) = p_{0t}(\mathbf{x}_t \mid \mathbf{x}_0)$ stands for the conditional density of $\mathbf{x}_t$, which evolves with the trajectory of (1a). The inequality becomes an equality if the estimator $\mathbf{s}_t$ exactly matches the score function. Thus, optimizing the lower bound provides an efficient scheme to maximize the data likelihood.
35
+
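+ To ground the notation, here is a self-contained NumPy sketch (our own 1D toy, not the paper's setup) that simulates the forward SDE (1a) with the OU drift $\mathbf{f} = -\mathbf{x}/2$ and $g = 1$, then runs the reverse SDE (1b) from the prior using the exact Gaussian score in place of $\mathbf{s}_\theta$.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ T, n_steps, n = 5.0, 500, 10000
+ dt = T / n_steps
+ m0, s0 = 2.0, 0.1                          # toy 1D "data": N(m0, s0^2)
+
+ def score(x, t):                           # exact Gaussian score of p_t
+     m = m0 * np.exp(-t / 2)
+     v = s0**2 * np.exp(-t) + 1 - np.exp(-t)
+     return -(x - m) / v
+
+ # forward OU SDE (1a): dx = -x/2 dt + dw, so x_T is approximately N(0, 1)
+ x = m0 + s0 * rng.normal(size=n)
+ for k in range(n_steps):
+     x += -0.5 * x * dt + np.sqrt(dt) * rng.normal(size=n)
+
+ # backward SDE (1b) from the prior, stepping time from T down to 0
+ x = rng.normal(size=n)
+ for k in range(n_steps, 0, -1):
+     t = k * dt
+     x -= (-0.5 * x - score(x, t)) * dt
+     x += np.sqrt(dt) * rng.normal(size=n)
+ print(x.mean(), x.std())                   # approximately (2.0, 0.1)
+ ```
+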
36
+ Even though SGMs have demonstrated success in generative models, they still suffer from transport inefficiency. A *long evolving time* $T$ of the forward process (1a) is required to facilitate the score estimation and guarantee that $\mathbf{x}_t$ will converge close to a prior distribution. Besides, the choice of priors is repeatedly *constrained to Gaussian* and further limits the acceleration potential. To tackle this issue, the
37
+
38
+ <sup>§</sup> $d$ is the data dimension and can be reshaped to other formats.
39
+
40
+ dynamical Schrödinger Bridge problem (SBP) aims to solve
41
+
42
+ $$\inf_{\mathbb{P}\in\mathcal{D}(\mu_{\star},\nu_{\star})} \mathrm{KL}(\mathbb{P}|\mathbb{Q}),\tag{2}$$
43
+
44
+ where $\mathcal{D}(\mu_{\star}, \nu_{\star})$ denotes the space of *path measures* with marginal probability measures $\mu_{\star}$ and $\nu_{\star}$ at time t=0 and t=T, respectively; $\mathbb{Q}$ is the prior measure, usually induced by Brownian motion or Ornstein-Uhlenbeck process; $\mathrm{KL}(\cdot|\mathbb{Q})$ denotes the KL divergence with respect to the measure $\mathbb{Q}$ .
45
+
46
+ The dynamical SBP can be interpreted from stochastic optimal control (SOC) (see section 4.4 in Chen et al. (2021a))
47
+
48
+ $$\inf_{\mathbf{u} \in \mathcal{U}} \mathbb{E} \left\{ \int_{0}^{T} \frac{1}{2} \|\mathbf{u}(\mathbf{x}_{t}, t)\|_{2}^{2} \, dt \right\}$$
+ $$\text{s.t.} \quad d\mathbf{x}_{t} = [\mathbf{f}(\mathbf{x}_{t}, t) + g(t)\mathbf{u}(\mathbf{x}_{t}, t)] \, dt + \sqrt{2\varepsilon}\, g(t)\, d\mathbf{w}_{t},$$
+ $$\mathbf{x}_{0} \sim \mu_{\star}(\cdot), \quad \mathbf{x}_{T} \sim \nu_{\star}(\cdot), \quad (3)$$
54
+
55
+ where $\mathcal{U}$ is the control set $\boldsymbol{u}: \mathbb{R}^d \times [0,T] \to \mathbb{R}^d$ ; the state-space is $\mathbb{R}^d$ and is sometimes omitted; the expectation is taken w.r.t the joint state PDF $\rho(\mathbf{x},t)$ ; $\varepsilon$ is a regularizer.
56
+
57
+ Diffusion models have shown superiority in generative modeling and time series imputation, which has motivated interesting theoretical works (Lee et al., 2022; Chen et al., 2023; De Bortoli et al., 2021; Koehler et al., 2023). As a theoretically ideal candidate, the Schrödinger bridge has also gained tremendous attention (De Bortoli et al., 2021; Vargas et al., 2021; Wang et al., 2021; Chen et al., 2022); however, its practical theory has not been studied in the literature.
58
+
59
+ To bridge the gap between theory and practice, we initiate the convergence study of the practical Schrödinger bridge algorithm based on general cost functions and highlight the connections between SBP, EOT, and FB-SDEs.
60
+
61
+ By applying the disintegration of measures (Léonard, 2014b), the chain rule (De Bortoli et al., 2021) for the KL divergence for the dynamical SBP (2) follows
62
+
63
+ $$\mathrm{KL}(\mathbb{P}|\mathbb{Q}) = \mathrm{KL}(\pi|\mathcal{G}) + \iint \mathrm{KL}(\mathbb{P}^{\mathbf{x}_T}_{\mathbf{x}_0}|\mathbb{Q}^{\mathbf{x}_T}_{\mathbf{x}_0}) \mathrm{d}\pi(\mathbf{x}_0, \mathbf{x}_T).$$
64
+
65
+ where $\pi:=(\mu_{\star},\nu_{\star})$ is a coupling with marginals $\mu_{\star}$ and $\nu_{\star}$ ; $\mathcal{G}$ is a Gibbs measure: $d\mathcal{G} \propto e^{-c_{\varepsilon}}d(\mu_{\star} \otimes \nu_{\star})$ ; $c_{\varepsilon}$ is a cost function in Eq.(24); $\otimes$ is the product measure; the marginals of $\mathbb{P}$ (or $\mathbb{Q}$ ) at t=0 and T follow from $\mu_{\star}$ and $\nu_{\star}$ ; $\mathbb{P}^{\boldsymbol{x}_T}_{x_0}:=\mathbb{P}(\cdot|\mathbf{x}_0=\boldsymbol{x}_0,\mathbf{x}_T=\boldsymbol{x}_T)$ (or $\mathbb{Q}^{\boldsymbol{x}_T}_{x_0}$ ) denotes a diffusion bridge of $\mathbb{P}$ (or $\mathbb{Q}$ ) from $\boldsymbol{x}_0$ to $\boldsymbol{x}_T$ .
66
+
67
+ Assuming the same bridges for $\mathbb{P}$ and $\mathbb{Q}$, the *static* SBP yields a coupling $\pi_{\star}$ (see Lemma 1 in Appendix A.2):
70
+
71
+ $$\pi_{\star} = \underset{\pi \in \Pi(\mu_{\star}, \nu_{\star})}{\arg \min} \operatorname{KL}(\pi | \mathcal{G}), \tag{4}$$
72
+
73
+ where $\Pi(\mu_{\star}, \nu_{\star})$ is the set of couplings with marginals $\mu_{\star}$ and $\nu_{\star}$ . Moreover, the static SBP yields a structural representation for (4) (Peyré & Cuturi, 2019; Nutz, 2022):
74
+
75
+ $$d\pi_{\star}(\mathbf{x}, \mathbf{y}) = e^{\varphi_{\star}(\mathbf{x}) + \psi_{\star}(\mathbf{y}) - c_{\varepsilon}(\mathbf{x}, \mathbf{y})} d(\mu_{\star} \otimes \nu_{\star}),$$
76
+
77
+ where $\varphi_{\star}$ and $\psi_{\star}$ are the Schrödinger potential functions.
78
+
79
+ Next, the equivalence between the static SBP and entropic optimal transport (EOT) is as follows:
80
+
81
+ $$\begin{aligned} \mathrm{KL}(\pi|\mathcal{G}) &= \iint \log \left( \frac{d\pi}{d(\mu_{\star} \otimes \nu_{\star})} \frac{d(\mu_{\star} \otimes \nu_{\star})}{d\mathcal{G}} \right) d\pi \\ & \doteq \mathrm{KL}(\pi|\mu_{\star} \otimes \nu_{\star}) + \iint \log e^{c_{\varepsilon}} d\pi \\ & = \iint c_{\varepsilon} d\pi + \mathrm{KL}(\pi|\mu_{\star} \otimes \nu_{\star}), \end{aligned}$$
82
+
83
+ where $\doteq$ denotes an equality that holds up to a constant. Problem (4) is equivalent to the EOT with a 1-regularizer:
84
+
85
+ $$\inf_{\pi \in \Pi(\mu_{\star},\nu_{\star})} \iint c_{\varepsilon}(\mathbf{x},\mathbf{y})\pi(\mathrm{d}\mathbf{x},\mathrm{d}\mathbf{y}) + 1 \cdot \mathrm{KL}(\pi|\mu_{\star} \otimes \nu_{\star}). \quad (5)$$
86
+
87
+ Recall that the first and second marginals of the coupling $\pi_{\star}$ follow from $\mu_{\star}$ and $\nu_{\star}$, respectively. As detailed in section A.3, we arrive at the *Schrödinger equations*
88
+
89
+ $$\int e^{\varphi_{\star}(\mathbf{x}) + \psi_{\star}(\mathbf{y}) - c_{\varepsilon}(\mathbf{x}, \mathbf{y})} \mu_{\star}(\mathrm{d}\mathbf{x}) = 1 \quad \nu_{\star}\text{-}a.s. \quad (6)$$
91
+
92
+ $$\int e^{\varphi_{\star}(\mathbf{x}) + \psi_{\star}(\mathbf{y}) - c_{\varepsilon}(\mathbf{x}, \mathbf{y})} \nu_{\star}(\mathrm{d}\mathbf{y}) = 1 \quad \mu_{\star}\text{-}a.s. \quad (7)$$
93
+
94
+ Notably, the score functions also give rise to a variant of *Schrödinger equations*, as shown in Eq.(23), establishing an inherent link between scoring functions and Schrödinger potentials.
95
+
96
+ To obtain the desired $\pi_\star := (\mu_\star, \nu_\star)$, a standard tool is the iterative proportional fitting (IPF) algorithm (also known as the Sinkhorn algorithm) (Rüschendorf, 1995). The exact IPF algorithm alternatingly projects the coupling $\pi_k := (\mu_k, \nu_k)$ at iteration $k$ onto every other marginal such that for any $k \in \mathbb{N}$:
97
+
98
+ $$\mu_{2k+1} = \mu_{\star}, \ \nu_{2k} = \nu_{\star}.$$
99
+
100
+ To wit, we solve every other marginal alternatingly and show the convergence of the marginals to the correct distribution
101
+
102
+ $$\mu_{2k} \xrightarrow{\text{convergence}} \mu_{\star}, \quad \nu_{2k+1} \xrightarrow{\text{convergence}} \nu_{\star}.$$
103
+
104
+ ![](_page_3_Figure_1.jpeg)
105
+
106
+ Figure 1. IPF vs aIPF. The light solid lines (IPF) show the iterates of exact projections; the dotted lines (aIPF) present the approximate projections towards $\pi_{\star} := (\mu_{\star}, \nu_{\star})$ .
107
+
108
+ However, it is too expensive in practice to obtain the exact marginals $\mu_{\star}$ and $\nu_{\star}$ via Eq.(6) and (7). To solve this problem, it is inevitable to approximate the projections (numerically solved via the FB-SDEs in Eq.(9)) through specific tools, such as deep neural networks (De Bortoli et al., 2021; Chen et al., 2022) or Gaussian processes (Vargas et al., 2021)
109
+
110
+ $$\mu_{2k+1} = \mu_{\star,k+1} \approx \mu_{\star}, \quad \nu_{2k} = \nu_{\star,k} \approx \nu_{\star}, \quad (8)$$
111
+
112
+ where $\mu_{\star,k+1}$ (or $\nu_{\star,k}$ ) is an approximate measure at iteration k+1 (or k) that is close to $\mu_{\star}$ (or $\nu_{\star}$ ). The approximate IPF (aIPF) is presented in Algorithm 1 and the comparison to the exact IPF is illustrated in Figure.1.
113
+
114
+ **Algorithm 1** One iteration of approximate IPF (aIPF). $\psi_k$ and $\varphi_k$ denote the estimates of potential functions at iteration k. Empirically, the integral is estimated via divergence-based likelihood training of FB-SDEs in section 5.1.
115
+
116
+ $$\psi_k(\mathbf{y}) \approx -\log \int_{\mathbb{R}^d} e^{\varphi_k(\mathbf{x}) - c_{\varepsilon}(\mathbf{x}, \mathbf{y})} \mu_{\star}(\mathrm{d}\mathbf{x})$$
117
+ $$\varphi_{k+1}(\mathbf{x}) \approx -\log \int_{\mathbb{R}^d} e^{\psi_k(\mathbf{y}) - c_{\varepsilon}(\mathbf{x}, \mathbf{y})} \nu_{\star}(\mathrm{d}\mathbf{y}).$$
118
+
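+ On discrete marginals, the two updates of Algorithm 1 reduce to classical Sinkhorn iterations. The toy sketch below (our own setup with a squared-distance cost) alternates them and checks that the resulting coupling matches $\mu_{\star}$ and $\nu_{\star}$.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ n = 5
+ mu = rng.random(n); mu /= mu.sum()          # discrete mu_star
+ nu = rng.random(n); nu /= nu.sum()          # discrete nu_star
+ x = np.linspace(0, 1, n)
+ C = (x[:, None] - x[None, :])**2 / 0.1      # cost c_eps(x, y)
+
+ phi, psi = np.zeros(n), np.zeros(n)         # Schrodinger potentials
+ for _ in range(200):
+     # psi_k(y)     = -log sum_x exp(phi_k(x) - c(x, y)) mu(x)
+     psi = -np.log(np.sum(np.exp(phi[:, None] - C) * mu[:, None], axis=0))
+     # phi_{k+1}(x) = -log sum_y exp(psi_k(y) - c(x, y)) nu(y)
+     phi = -np.log(np.sum(np.exp(psi[None, :] - C) * nu[None, :], axis=1))
+
+ pi = np.exp(phi[:, None] + psi[None, :] - C) * mu[:, None] * nu[None, :]
+ print(np.abs(pi.sum(1) - mu).max(), np.abs(pi.sum(0) - nu).max())  # ~ 0
+ ```
+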
119
+ For the convergence study, the existing (heuristic) finite sample bound (Vargas et al., 2021) conjectured that
120
+
121
+ $$\|\mu_{2k} - \mu_{\star}\| \lesssim \frac{1}{k} + \sum_{i=0}^{k} \epsilon K^{i}$$
122
+
123
+ $$\|\nu_{2k-1} - \nu_{\star}\| \lesssim \frac{1}{k} + \sum_{i=0}^{k} \epsilon K^{i},$$
124
+
125
+ where $\|\cdot\|$ is some metric and the aIPF projection operator is assumed to be $K$-Lipschitz. However, ensuring $K < 1$ for general cost functions is not trivial (Conforti et al., 2023). As such, a *fundamental question* remains open
126
+
127
+ Can we ensure the approximation error is less dependent on the number of iterations k?
128
+
129
+ Answering this question yields concrete guidance on the computational complexity and tells us when the approximation error doesn't get arbitrarily worse. To achieve this target, we first lay out the following standard assumptions.
130
+
131
+ **Assumption A1** (Dissipativity). $\mu_{\star}$ and $\nu_{\star}$ satisfy the dissipative condition for some constants $m_{ds} > 0$ and $b_{ds} \geq 0$ .
132
+
133
+ $$\left\langle \mathbf{x}, -\nabla \log \frac{\mathrm{d}\mu_{\star}}{\mathrm{d}\mathbf{x}}(\mathbf{x}) \right\rangle \ge m_{ds} \|\mathbf{x}\|_{2}^{2} - b_{ds}$$
134
+ $$\left\langle \mathbf{y}, -\nabla \log \frac{\mathrm{d}\nu_{\star}}{\mathrm{d}\mathbf{y}}(\mathbf{y}) \right\rangle \ge m_{ds} \|\mathbf{y}\|_{2}^{2} - b_{ds},$$
135
+
136
+ where $\frac{d\mu_{\star}}{d\mathbf{x}}$ and $\frac{d\nu_{\star}}{d\mathbf{y}}$ are the probability densities for the probability measure $\mu_{\star}$ and $\nu_{\star}$ , respectively; $\nabla \log \frac{d\mu_{\star}}{d\mathbf{x}}(\mathbf{x})$ and $\nabla \log \frac{d\nu_{\star}}{d\mathbf{y}}(\mathbf{y})$ are the score functions.
137
+
138
+ *Remark:* The above assumption is standard and has been used in Raginsky et al. (2017), which allows the densities to be non-convex in a ball with a radius depending on $b_{\rm ds}$ . Notably, it also implies the log-Sobolev inequality (LSI) with a bounded constant $C_{LS}$ (Lee et al., 2022).
139
+
140
+ **Assumption A2** ( $\epsilon$ -approximation). $\nabla \log \frac{d\mu_{\star,k}}{d\mathbf{x}}(\mathbf{x})$ and $\nabla \log \frac{d\nu_{\star,k}}{d\mathbf{y}}(\mathbf{y})$ are the $\epsilon$ approximation of score functions $\nabla \log \frac{d\mu_{\star}}{d\mathbf{x}}(\mathbf{x})$ and $\nabla \log \frac{d\nu_{\star}}{d\mathbf{y}}(\mathbf{y})$ at the k-th iteration, respectively
141
+
142
+ $$\begin{split} & \left\| \nabla \log \frac{\mathrm{d}\mu_{\star,k}}{\mathrm{d}\mathbf{x}} - \nabla \log \frac{\mathrm{d}\mu_{\star}}{\mathrm{d}\mathbf{x}} \right\|_{\infty} \leq \epsilon \\ & \left\| \nabla \log \frac{\mathrm{d}\nu_{\star,k}}{\mathrm{d}\mathbf{y}} - \nabla \log \frac{\mathrm{d}\nu_{\star}}{\mathrm{d}\mathbf{y}} \right\|_{\infty} \leq \epsilon, \end{split}$$
143
+
144
+ Such an assumption is closely related to the $\epsilon$ -accurate score approximation in De Bortoli et al. (2021); Lee et al. (2022) except that our focus is the marginals on $\mu_{\star}$ and $\nu_{\star}$ while theirs is the marginal density along the forward SDE (1a). To further extend the score approximation assumption from $L^{\infty}$ (uniformly accurate) to $L^2$ (in expectation), we can leverage the "bad set" idea (Lee et al., 2022) or the Girsanov theorem (Chen et al., 2023) to match the likelihood training framework better. Moreover, the errors in the two marginals do not need to be identical, and a unified $\epsilon$ is employed mainly for the sake of analytical convenience.
145
+
146
+ **Assumption A3** (Lipschitz smoothness). The score functions of the marginal densities $\nabla \log \frac{\mathrm{d}\mu_{\star}}{\mathrm{d}\mathbf{x}}$ and $\nabla \log \frac{\mathrm{d}\nu_{\star}}{\mathrm{d}\mathbf{y}}$ are both *$L$-Lipschitz smooth*.
147
+
148
+ To sketch the proof, we first show a summation property of $\sum_{k\geq 1}^n \mathrm{KL}(\pi_k|\pi_{k-1})$ without breaking the cyclical invariance property (Ghosal et al., 2022) in Lemma 3 such that
149
+
150
+ $$\sum_{k\geq 1}^{n} \mathrm{KL}(\pi_k|\pi_{k-1}) \leq \mathrm{KL}(\pi_{\star}|\mathcal{G}) - \mathrm{KL}(\pi_0|\mathcal{G}) + O(n\epsilon).$$
151
+
152
+ Next, we prove $\mathrm{KL}(\mu_{2k}|\mu_{\star,k}) \leq \mathrm{KL}(\pi_{2k}|\pi_{2k-1})$ and $\mathrm{KL}(\nu_{\star,k}|\nu_{2k-1}) \leq \mathrm{KL}(\pi_{2k}|\pi_{2k-1}) + O(\epsilon)$ , which yields
153
+
154
+ $$\sum_{k>1}^{n} \mathrm{KL}(\mu_{2k}|\mu_{\star,k}) \leq \mathrm{KL}(\pi_{\star}|\mathcal{G}) - \mathrm{KL}(\pi_{0}|\mathcal{G}) + O(n\epsilon)$$
155
+
156
+ $$\sum_{k>1}^{n} \mathrm{KL}(\nu_{\star,k}|\nu_{2k-1}) \leq \mathrm{KL}(\pi_{\star}|\mathcal{G}) - \mathrm{KL}(\pi_{0}|\mathcal{G}) + O(n\epsilon).$$
157
+
158
+ Moreover, we obtain an approximately monotone-decreasing property in Proposition 4 as follows
159
+
160
+ $$KL(\mu_{2k}|\mu_{\star,k}) \le KL(\mu_{2k+2}|\mu_{\star,k+1}) + O(\epsilon)$$
161
+
162
+ $$KL(\nu_{\star,k}|\nu_{2k-1}) \le KL(\nu_{\star,k+1}|\nu_{2k+1}) + O(\epsilon).$$
163
+
164
+ Finally, combining Lemma 8 and the fact that $\mu_{\star,k}$ (or $\nu_{\star,k}$ ) is $\epsilon$ -close to $\mu_{\star}$ (or $\nu_{\star}$ ), our main theorem follows that:
165
+
166
+ **Theorem 1** (Approximately Sublinear Rate for Marginals). Given dissipative assumption A1, $\epsilon$ -approximate score assumption A2, and smoothness assumption A3, we have
167
+
168
+ $$\begin{split} \mathit{KL}(\mu_{2k}|\mu_{\star}) &\leq \frac{\mathit{KL}(\pi_{\star}|\mathcal{G}) - \mathit{KL}(\pi_{0}|\mathcal{G})}{k} + O(\epsilon^{\frac{1}{2}} + k^{\frac{1}{2}}\epsilon) \\ \mathit{KL}(\nu_{\star}|\nu_{2k-1}) &\leq \frac{\mathit{KL}(\pi_{\star}|\mathcal{G}) - \mathit{KL}(\pi_{0}|\mathcal{G})}{k} + O(\epsilon^{\frac{1}{2}} + k^{\frac{1}{2}}\epsilon), \end{split}$$
169
+
170
+ where the big-O notations are independent of k.
171
+
172
+ The proof is presented in Appendix B.1, which provides the first-ever evidence of the convergence of the aIPF algorithm with approximate projections. Our analysis suggests that to achieve an $\epsilon$-accurate target, the number of iterations should be greater than $\Omega(\frac{1}{\epsilon})$, although early stopping may be necessary to avoid excessive perturbations. It is worth noting that the $O(k^{\frac{1}{2}})$ order is preferable to the linear-order or expansive upper bounds in Vargas et al. (2021) (when $K \geq 1$). However, we acknowledge this result is not entirely practical without the bounded-cost-function assumption. We believe the square-root order can be further refined by obtaining a tighter convergence rate (Ghosal & Nutz, 2022; Conforti et al., 2023). We leave this refinement as future work.
173
+
174
+ Moreover, the convergence of the minimized cost (3) potentially facilitates the estimation of score functions. However, it involves a trade-off between computation and accuracy. Such a trade-off can be used to establish instances where Schrödinger bridge is faster than SGMs.
175
+
176
+ The complexity analysis of SBP hinges on the convergence of aIPF based on entropic optimal transport; however, solving the exact integrals in Algorithm 1 is far from trivial and heavily relies on score-based sampling techniques. In the next section, we present the likelihood training of SBP to connect with aIPF and build the conditional variant for
177
+
178
+ applications in probabilistic time series imputation. To facilitate reading, we visualize the connections between the convergence analysis of aIPF and the likelihood training of FB-SDEs in Figure 2 below.
179
+
180
+ ![](_page_4_Figure_15.jpeg)
181
+
182
+ Figure 2. Relation between IPF, SBP, SOC, and FB-SDE.
183
+
184
+ In this section, we solve SBP via likelihood training of FB-SDEs and briefly present the conditional Schrödinger bridge method for probabilistic time series imputation (CSBI).
185
+
186
+ Solving the SBP directly is often intractable. We show it can be transformed into computation-friendly FB-SDEs. We sketch the main results here and detail the derivations in Appendix A.1.
187
+
188
+ Rewrite the SOC perspective of SBP (3) with $\varepsilon = \frac{1}{2}$ and constraints (14) into a Lagrangian (Chen et al., 2021a):
189
+
190
+ $$\begin{split} \mathcal{L}(\rho, \boldsymbol{u}, \phi) &:= \int_0^T \int_{\mathbb{R}^d} \frac{1}{2} \|\boldsymbol{u}(\mathbf{x}_t, t)\|_2^2 \rho(\mathbf{x}_t, t) \\ &+ \phi(\mathbf{x}_t, t) \bigg( \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho(\boldsymbol{f} + g\boldsymbol{u})) - \frac{g^2 \Delta \rho}{2} \bigg) \mathrm{d}\mathbf{x}_t \mathrm{d}t, \end{split}$$
191
+
192
+ where $\phi(\mathbf{x}_t,t)$ denotes a Lagrangian multiplier (Chen et al., 2021a). Further, consider the log transformation based on score functions $(\nabla \log \overrightarrow{\psi}, \nabla \log \overleftarrow{\varphi})$
193
+
194
+ $$\overrightarrow{\psi}(\mathbf{x}_t, t) = \exp(\phi(\mathbf{x}_t, t)/g^2)$$
195
+
196
+ $$\overleftarrow{\varphi}(\mathbf{x}_t, t) = \rho^*(\mathbf{x}_t, t)/\overrightarrow{\psi}(\mathbf{x}_t, t),$$
197
+
198
+ where $\rho^*(\mathbf{x}, t)$ is the optimal density conditional on the optimal control $u^* := \nabla \phi(\mathbf{x}, t)$ . Now we obtain the FB-SDEs (Chen et al., 2022):
199
+
200
+ ![](_page_5_Figure_1.jpeg)
201
+
202
+ Figure 3. Demo of the conditional Schrödinger bridge for imputation (CSBI). The example shows two correlated time-series features; the dark dots are condition values, and the green band represents an 80% confidence interval of the imputed missing values, which starts from the prior distribution with very large uncertainty on the right and narrows to a smooth band on the left, matching the data distribution very well. The score functions $\nabla \log \overrightarrow{\psi}$ and $\nabla \log \overleftarrow{\varphi}$ in the FB-SDEs (9) are trained via a divergence-based likelihood.
203
+
204
+ **Proposition 1.** The forward-backward SDE (FB-SDE) associated with the problem (3) with $\varepsilon = \frac{1}{2}$ and conditional constraints follows
205
+
206
+ $$d\mathbf{x}_{t} = \left[ \mathbf{f}(\mathbf{x}_{t}, t) + g(t)^{2} \nabla \log \overrightarrow{\psi}(\mathbf{x}_{t}, t) \right] dt + g(t) d\mathbf{w}_{t}, \quad (9a)$$
207
+
208
+ $$d\mathbf{x}_{t} = \left[ \mathbf{f}(\mathbf{x}_{t}, t) - g(t)^{2} \nabla \log \overleftarrow{\varphi}(\mathbf{x}_{t}, t) \right] dt + g(t) d\overline{\mathbf{w}}_{t}, \quad (9b)$$
210
+
211
+ where $\mathbf{x}_0 \sim \mu_{\star}$ and $\mathbf{x}_T \sim \nu_{\star}$ .
212
+
213
+ Define $\overrightarrow{\mathbf{z}}_t = g(t)\nabla\log\overrightarrow{\psi}(\mathbf{x}_t,t)$ and $\overleftarrow{\mathbf{z}}_t = g(t)\nabla\log\overleftarrow{\varphi}(\mathbf{x}_t,t)$. Itô's lemma (see Theorem 3 in Chen et al. (2022)) leads to the diffusion of $\overrightarrow{\mathbf{z}}_t$ and $\overleftarrow{\mathbf{z}}_t$ in (9).
214
+
215
+ We use models $\overrightarrow{\mathbf{Z}}_t^{\theta}$ and $\overleftarrow{\mathbf{Z}}_t^{\omega}$ to learn the forward policy $\overrightarrow{\mathbf{z}}_t$ and the backward policy $\overleftarrow{\mathbf{z}}_t$, and refer to the objective of the data likelihood as $\mathcal{L}_{\theta,\omega}^{\mathrm{SBP}}$. In the context of imputation problems with conditional and target entries, maximizing the likelihood is equivalent to optimizing the backward policy $\overleftarrow{\mathbf{Z}}_t^{\omega}$ and the forward policy $\overrightarrow{\mathbf{Z}}_t^{\theta}$ as follows (Chen et al., 2022):
216
+
217
+ $$\mathcal{L}_{\omega}^{\mathrm{SBP}}(\mathbf{x}_{0}) = -\widehat{\mathbb{E}}_{\mathbf{x}_{t} \sim (9a)} \Big[ \frac{1}{2} \big\| \overleftarrow{\mathbf{z}}_{t}^{\omega} \circ \boldsymbol{M}_{\mathrm{target}} \big\|_{2}^{2} + g \nabla \cdot \big( \overleftarrow{\mathbf{z}}_{t}^{\omega} \circ \boldsymbol{M}_{\mathrm{target}} \big) + \big( \overrightarrow{\mathbf{z}}_{t} \circ \boldsymbol{M}_{\mathrm{target}} \big)^{\mathsf{T}} \big( \overleftarrow{\mathbf{z}}_{t}^{\omega} \circ \boldsymbol{M}_{\mathrm{target}} \big) \Big] \quad (10a)$$
+ $$\mathcal{L}_{\theta}^{\mathrm{SBP}}(\mathbf{x}_{T}) = -\widehat{\mathbb{E}}_{\mathbf{x}_{t} \sim (9b)} \left[ \frac{1}{2} \| \overrightarrow{\mathbf{z}}_{t}^{\theta} \|_{2}^{2} + g \nabla \cdot \overrightarrow{\mathbf{z}}_{t}^{\theta} + \overleftarrow{\mathbf{z}}_{t}^{\mathsf{T}} \overrightarrow{\mathbf{z}}_{t}^{\theta} \right] \quad (10b)$$
222
+
223
+ where $\widehat{\mathbb{E}}$ denotes the empirical expectations of the sampled trajectories according to the FB-SDEs (9); $M_{\text{target}}$ is the conditional mask to be clarified in section 5.2; $\nabla \cdot$ denotes the divergence (for clarity, $\nabla$ is the gradient). The masks and conditions are not required in Eq.(10b), because the backward SDEs start from the known prior. Since simulating the full sample trajectory is costly, we apply the caching-trajectory strategy (De Bortoli et al., 2021; Chen et al., 2022) to improve efficiency. Now, we present the practical method in Algorithm 2 and refer to the conditional Schrödinger bridge for imputation as CSBI. Similar to De Bortoli et al. (2021), this algorithm can be viewed as a dynamic implementation of the IPF algorithm.
226
+
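+ The $\nabla \cdot$ terms in (10a)-(10b) are expensive to evaluate exactly in high dimension; a standard trick in divergence-based likelihood training is Hutchinson's stochastic estimator $\nabla \cdot \mathbf{z}(\mathbf{x}) \approx \mathbb{E}_{\boldsymbol{\epsilon}}[\boldsymbol{\epsilon}^{\top} \mathbf{J}_{\mathbf{z}}(\mathbf{x}) \boldsymbol{\epsilon}]$. A minimal PyTorch sketch (our own, for an arbitrary vector field `policy`):
+
+ ```python
+ import torch
+
+ def hutchinson_divergence(policy, x, n_probes=1):
+     """Unbiased estimate of div z(x) = trace(dz/dx) using random probes."""
+     x = x.requires_grad_(True)
+     est = 0.0
+     for _ in range(n_probes):
+         eps = torch.randn_like(x)                  # Rademacher also works
+         z = policy(x)
+         (grad,) = torch.autograd.grad((z * eps).sum(), x, create_graph=True)
+         est = est + (grad * eps).sum(dim=-1)
+     return est / n_probes
+
+ # sanity check: the divergence of z(x) = A x equals trace(A)
+ A = torch.randn(4, 4)
+ x = torch.randn(1000, 4)
+ div = hutchinson_divergence(lambda v: v @ A.T, x, n_probes=50)
+ print(div.mean().item(), torch.trace(A).item())    # close in expectation
+ ```
+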
227
+ During inference, conditional sampling follows the learned joint distribution by applying the backward policy (Song et al., 2021b). The Langevin corrector (Song et al., 2021b; Chen et al., 2022) can also be used to improve performance. See details in Appendix C.6.
228
+
229
+ The time series imputation task requires filling missing values in arbitrary entries given partial observations in random positions, where the condition-target relation usually varies from sample to sample. This requires the model to capture both temporal and feature-wise dependencies at the same time. Next, we present our framework based on divergence objectives. The joint distribution learning of $\mathbf{x} := (\mathbf{x}_{\text{target}}, \mathbf{x}_{\text{cond}})$ is the following:
230
+
231
+ $$\begin{cases} \mathbf{x}_{0} \sim \mu_{\star} = p_{\text{target}|\text{cond}}(\mathbf{x}) p_{\text{cond}}(\mathbf{x}) \\ \mathbf{x}_{T} \sim \nu_{\star} = p_{\text{prior-target}}(\mathbf{x}) p_{\text{cond}}(\mathbf{x}) \end{cases} \quad (11)$$
233
+
234
+ where $p_{\text{target}|\text{cond}}(\mathbf{x}) = p(\mathbf{x}_{\text{target}}|\mathbf{x}_{\text{cond}})$ is the target conditional distribution, $p_{\text{prior-target}}(\mathbf{x})$ is the prior distribution of target values, and $p_{\text{cond}}(\mathbf{x}) = p(\mathbf{x}_{\text{cond}})$ is the data-dependent distribution of observations being conditioned on.
235
+
236
+ - 1: **Input:** samplers of the joint space $p_{\text{prior}}(\mathbf{x})$, $p_{\text{obs}}(\mathbf{x})$, and masks.
+ - 2: Set the output of $\overrightarrow{\mathbf{z}}_{t}^{\theta}$ to zero and warm-up train $\overleftarrow{\mathbf{z}}_{t}^{\omega}$ using score matching.
+ - 3: **repeat**
+ - 4: Update the cached forward trajectories following (9a) with a certain frequency.
+ - 5: Compute $\mathcal{L}_{\omega}^{\mathrm{SBP}}$ (10a) with $\mathbf{x}_0 \sim p_{\mathrm{obs}}(\mathbf{x})$ together with $M_{\mathrm{cond}}$, $M_{\mathrm{target}}$.
+ - 6: Take a gradient step $\nabla \mathcal{L}_{\omega}^{\mathrm{SBP}}$ and update $\overleftarrow{\mathbf{z}}_{t}^{\omega}$.
+ - 7: Update the cached backward trajectories of (9b) with a certain frequency.
+ - 8: Compute $\mathcal{L}_{\theta}^{\mathrm{SBP}}$ (10b) with $\mathbf{x}_{T} \sim p_{\mathrm{prior}}(\mathbf{x})$, $M_{\mathrm{cond}} = \mathbf{0}$, $M_{\mathrm{target}} = \mathbf{1}$.
+ - 9: Take a gradient step $\nabla \mathcal{L}_{\theta}^{\mathrm{SBP}}$ and update $\overrightarrow{\mathbf{z}}_{t}^{\theta}$.
+ - 10: **until** a stopping criterion is met
246
+
247
+ ![](_page_6_Figure_12.jpeg)
248
+
249
+ Figure 4. Demonstration of entry types and masks. The usage for SBP is described in section 5.2.
250
+
251
+ **Masking for conditional inference.** The irregular condition-target relation is indicated by the observation mask $M_{\rm obs}$, the condition mask $M_{\rm cond}$, and the target mask $M_{\rm target}$ (Figure 4). $M_{\rm obs}$ covers all ground-truth values; the unknown entries are complementary to $M_{\rm obs}$ and have no ground truths; $M_{\rm target}$ marks the imputation targets; $M_{\rm cond}$ indicates the input condition for the model, which is a subset of $M_{\rm obs}$. When the model is trained or evaluated, $M_{\rm target}$ is usually set as part of $M_{\rm obs}$ so that the performance can be calculated by comparing the imputed values and the ground truths. When the model is deployed, $M_{\rm target}$ can also cover the unknown entries. See more details on masks in Appendix C.1.
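+
+ For concreteness, one plausible way to realize these masks on a $K \times L$ (feature $\times$ time) panel is sketched below; the 70%/30% rates are toy choices of ours, not the paper's protocol.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ K, L = 4, 10                                   # features x time steps
+ M_obs = rng.random((K, L)) < 0.7               # entries with ground truth
+ # hold out part of the observed entries as imputation targets
+ M_target = M_obs & (rng.random((K, L)) < 0.3)
+ M_cond = M_obs & ~M_target                     # conditions: subset of M_obs
+ unknown = ~M_obs                               # entries with no ground truth
+ assert not (M_cond & M_target).any()           # condition/target are disjoint
+ ```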
2305.09062/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2305.09062/paper_text/intro_method.md ADDED
@@ -0,0 +1,37 @@
1
+ # Introduction
2
+
3
+ Despite recent advances in deep learning research in various computer vision and NLP tasks [@torfi2020natural], it remains a challenge for the standard supervised learning setting to achieve satisfactory results when learning from just a small amount of labeled data [@song2022comprehensive]. Current deep learning algorithms tend to overfit when they are given a small dataset for training, limiting their generalization capabilities [@kawaguchi2017generalization]. Moreover, there are many problem domains where labeled data is difficult to obtain, or where a lot of manual work is needed to gather data with the corresponding annotations or ground truth. This represents a major problem in many real-world applications (e.g., medicine), as some of the instances we are interested in are very rare [@piccialli2021survey].
4
+
5
+ Therefore, in recent years the Few-shot learning (FSL) paradigm has been proposed ([@siamesenetworks; @prototypicalNets2017; @matchingNetworks2016; @sung2017relationNet; @maml; @reptile]) as a way to deal with this data scarcity problem. For classification tasks, the main goal of such methods is to categorize previously unseen data into a set of new classes, given only a small amount of labeled instances per class. The main challenge for FSL is to fine-tune an existing embedding network to adapt it to new classes; such a process can easily lead to overfitting, due to the few labeled samples available for each class.
6
+
7
+ ![In our work we propose to use two loss functions that work well for metric meta-learning approaches for few-shot classification. We optimize an embedding network based on the error calculated by the ICNN Loss or the proto-triplet loss. Both losses aim to increase the inter-class distance and decrease the intra-class distance between samples of different classes.](images/proposed_model.png){#fig:model width=".7\\textwidth"}
8
+
9
+ <figure id="fig:test">
10
+ <figure id="fig:icnn_loss">
11
+ <img src="images/icnnlossmodel.png" />
12
+ <figcaption>The ICNN Loss is based on a score given by the intra and inter class distance from the batch points. For each data point, we measure its Inter and Intra Class Nearest Neighbors score and optimize the embedding network based on the quality of the generated features.</figcaption>
13
+ </figure>
14
+ <figure id="fig:proto-triplets_loss">
15
+ <img src="images/prototriplet_lossmodel.png" />
16
+ <figcaption>The Proto-Triplet loss takes as anchor a query point, the prototype from the same class as positive point and the nearest prototype of different class as the negative point.</figcaption>
17
+ </figure>
18
+ <figcaption> Comparison between the two distances.</figcaption>
19
+ </figure>
20
+
21
+ Currently, two main approaches to FSL exist: the first one is based on meta-learning methods ([@maml; @learningtolearn; @learningtooptimize; @l2lgradient]), where the basic idea is to learn from diverse tasks and datasets and adapt the learned model to new datasets. A second approach is based on metric-learning methods ([@distanceMetricLearning; @siamesenetworks]), where the objective is to learn a pairwise similarity metric such that a score (given by some distance) is high for similar samples, whilst dissimilar samples get a low score. Subsequently, these metric learning methods undertook a hybrid approach, as they started to adopt the meta-learning policy to learn across tasks ([@prototypicalNets2017; @matchingNetworks2016; @sung2017relationNet]). The main objective of these methods is to learn an effective embedding network in order to extract useful features from a given task and discriminate on the classes which we are trying to predict. From this basic learning setting, many extensions have been introduced to improve the performance of metric learning methods. Some of these works focus on pre-training the embedding network ([@ssl]), others introduce task attention modules ([@chen2020multiscale; @categoryTraversal; @principalCharacteristics]), whereas others seek to optimize the embeddings ([@convexoptimization]) and yet others employ a variety of loss functions ([@principalCharacteristics]). However, only a few methods have explored mechanisms for enforcing class separability via a custom loss function during training.
22
+
23
+ Our approach can be seen as a hybrid of the above-mentioned approaches, as we jointly improve the embeddings and investigate the impact of two loss functions on FSL-based classification tasks.
24
+
25
+ More specifically, we explore the use of two different loss functions based on the concepts of inter-class and intra-class distance. The first is the proto-triplet loss, derived from the well-known triplet loss, which to the best of our knowledge has not been explored in the context of few-shot learning; the second, which we dubbed the ICNN loss (based on the Inter and Intra Class Nearest Neighbor score first introduced in [@tesis_ivan]), is a novel loss function that evaluates the quality of the generated features based on the inter- and intra-class distances, the variance, and the class ratio of their nearest neighbors.
26
+
27
+ The basic working principle of our proposed approach is shown in Figure [1](#fig:model){reference-type="ref" reference="fig:model"}. As can be observed in the figure, the loss functions allow us to jointly optimize the embedding network and learn more discriminative features across tasks. In our experiments on the MiniImageNet dataset, we obtained accuracies of 61.32% and 79.93% in the 5-way 1-shot and 5-way 5-shot settings, respectively, using the proto-triplet loss, and 60.79% and 80.41% using our novel ICNN loss, an improvement of about 2% over the state of the art. We further test our methods on the CUB, Caltech, Stanford Dogs, and Stanford Cars datasets to assess the generalization capabilities of the proposed framework, obtaining satisfactory results.
28
+
29
+ The ICNN loss and the proto-triplet loss are two methods for optimizing an embedding network based on the quality of the generated features. As shown in Figure [4](#fig:test){reference-type="ref" reference="fig:test"}, the ICNN loss measures the intra- and inter-class distances of the batch points and assigns each data point an Inter and Intra Class Nearest Neighbors score. The proto-triplet loss, on the other hand, is based on the triplet loss: it takes a query point as the anchor and uses the prototype of the same class and the nearest prototype of a different class as the positive and negative points, respectively.
30
+
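+ To make the proto-triplet formulation concrete, the snippet below is a minimal PyTorch sketch of how such a loss could be computed from episode embeddings. It assumes a standard triplet margin formulation over class prototypes; the margin value and the helper name `proto_triplet_loss` are illustrative, not taken from the paper.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def proto_triplet_loss(queries, query_labels, support, support_labels, margin=0.5):
+     """Sketch of a prototype-based triplet loss.
+
+     queries:        (Q, d) embeddings of query points (anchors)
+     query_labels:   (Q,)   class index of each query
+     support:        (S, d) embeddings of support points
+     support_labels: (S,)   class index of each support point
+     """
+     classes = support_labels.unique()
+     # Class prototypes: mean embedding of each class's support points.
+     prototypes = torch.stack(
+         [support[support_labels == c].mean(dim=0) for c in classes]
+     )                                               # (C, d)
+     # Euclidean distance from every query to every prototype: (Q, C)
+     dists = torch.cdist(queries, prototypes)
+     # Positive: distance to the prototype of the query's own class.
+     pos_idx = torch.stack([(classes == y).nonzero().squeeze() for y in query_labels])
+     d_pos = dists.gather(1, pos_idx.unsqueeze(1)).squeeze(1)
+     # Negative: distance to the nearest prototype of a *different* class.
+     masked = dists.clone()
+     masked.scatter_(1, pos_idx.unsqueeze(1), float("inf"))
+     d_neg = masked.min(dim=1).values
+     # Standard triplet hinge: pull anchors toward their own prototype,
+     # push them away from the closest wrong prototype.
+     return F.relu(d_pos - d_neg + margin).mean()
+ ```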
31
+ The rest of the paper is organized as follows: In Section 2, we describe related work on the few-shot learning problem. In Section 3, we describe the proposed loss functions: we first explain the proto-triplet loss, then the rationale for the ICNN loss and its derivation, and finally some of the design choices we made for the ICNN loss. In Section 4, we detail the experimental setup used for the implementation of the models. In Section 5, we present the results obtained and discuss their performance. Finally, in Section 6, we present our conclusions and discuss future work.
32
+
33
+ # Method
34
+
35
+ We evaluate our experiments using the MiniImageNet dataset ([@matchingNetworks2016]), a subset of the ImageNet Large Scale Visual Recognition Challenge 2012 ([@imagenet]). Following the split proposed by [@optimizationfewshot], this version of ImageNet is divided into 64 classes for training, 16 classes for validation, and 20 classes for testing, for a total of 100 classes for meta-learning. Each class contains 600 images, giving a total of 60,000 images. This dataset is used as a benchmark to evaluate most state-of-the-art few-shot learning methods.
36
+
37
+ We further test the proposed framework on other well-known datasets to assess the generalization capabilities of the final model trained on MiniImageNet, namely the Caltech-101, Stanford Dogs, Stanford Cars, and CUB datasets. Caltech [@griffin2007caltech] is a dataset consisting of 101 widely varied categories, as well as a background category. Each class contains between 40 and 800 images, with most classes having about 50 images. These classes are randomly split so that 20 classes are used as the testing set. Stanford Cars [@stanford_cars] contains fine-grained images from 196 classes of cars.
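+ For concreteness, the following is a minimal sketch of how a single N-way K-shot episode (e.g., 5-way 1-shot) could be sampled from such class-labeled datasets; the function name and the dataset layout (a dict mapping class labels to image lists) are assumptions for illustration, not the paper's code.
+
+ ```python
+ import random
+
+ def sample_episode(data_by_class, n_way=5, k_shot=1, n_query=15):
+     """Sample one N-way K-shot episode (e.g., 5-way 1-shot).
+
+     data_by_class: dict mapping class label -> list of images/paths.
+     Returns support and query sets as (item, episode_label) pairs.
+     """
+     classes = random.sample(sorted(data_by_class), n_way)
+     support, query = [], []
+     for episode_label, c in enumerate(classes):
+         items = random.sample(data_by_class[c], k_shot + n_query)
+         support += [(x, episode_label) for x in items[:k_shot]]
+         query += [(x, episode_label) for x in items[k_shot:]]
+     return support, query
+ ```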
2305.15913/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2305.15913/paper_text/intro_method.md ADDED
@@ -0,0 +1,155 @@
1
+ # Introduction
2
+
3
+ Social media has become a mainstream communication medium for the masses, redefining how we interact within society. The information shared on social media takes diverse forms, such as text, audio, and visual messages, or combinations thereof. A meme is a typical example of such a social media artifact, usually disseminated with a flair of sarcasm or humor. While memes provide a convenient means for propagating complex social, cultural, or political ideas via visual-linguistic semiotics, they often abstract away the contextual details that would typically be necessary for the uninitiated. Such contextual knowledge is critical for human understanding and computational analysis alike. We aim to address this requirement by contemplating solutions that facilitate the automated derivation of contextual *evidence* towards making memes more accessible. To this end, we formulate a novel task -- **[Meme*X*]{.smallcaps}**, which, given a meme and a related context, aims to detect the sentences from within the context that can potentially explain the meme. Table [\[tab:task-example\]](#tab:task-example){reference-type="ref" reference="tab:task-example"} visually explains [Meme*X*]{.smallcaps}. Memes often camouflage their intended meaning, suggesting [Meme*X*]{.smallcaps}'s utility for a broader set of multimodal applications exhibiting visual-linguistic dissociation. Other use cases include context retrieval for *various art forms, news images, abstract graphics for digital media marketing*, etc.
4
+
5
+ Table [\[tab:task-example\]](#tab:task-example){reference-type="ref" reference="tab:task-example"} primarily showcases a meme's figure (left) and an excerpt from the related context (right). This meme is about the revenge killing of an *Ottoman Sultan* by the *Janissaries* (infantry units), in reaction to the Sultan's disbanding of their corps. The first line, highlighted in bold in Table [\[tab:task-example\]](#tab:task-example){reference-type="ref" reference="tab:task-example"}, conveys the supporting evidence for the meme from the related context. The aim is to model the cross-modal association required to detect such supporting pieces of evidence from a given related contextual document.
6
+
7
+ The recent surge in the dissemination of memes has led to an evolving body of studies on meme analysis in which the primary focus has been on tasks, such as emotion analysis [@sharma-etal-2020-semeval], visual-semantic role labeling [@sharma-etal-2022-findings], detection of phenomena like sarcasm, hate-speech [@kiela2020hateful], trolling [@hegde2021images] and harmfulness [@pramanick-etal-2021-momenta-multimodal; @sharma-etal-2022-disarm].
8
+
9
+ These studies indicate that off-the-shelf multimodal models, which perform well on several traditional visual-linguistic tasks, struggle when applied to memes [@kiela2020hateful; @mmmlsurvey2017; @sharma-etal-2022-disarm]. The primary reason is the contextual dependency of memes for their accurate assimilation and analysis. Websites like [knowyourmeme.com](knowyourmeme.com){.uri} (KYM) provide important yet limited information. [Meme*X*]{.smallcaps} requires the model to learn the cross-modal analogies shared by the contextual evidence and the meme at various levels of information abstraction, towards detecting the crucial explanatory evidence[^1]. The critical challenge is to represent the abstraction granularity aptly. Therefore, we formulate [Meme*X*]{.smallcaps} as an "evidence detection" task, which can help deduce pieces of contextual evidence that bridge the abstraction gap. However, besides the image and text modalities, there is a critical need to inject contextual signals that compensate for the constraints of the visual-linguistic grounding offered by conventional approaches.
10
+
11
+ Although memes are effective and convenient to design and disseminate strategically over social media, they are often hard to understand, or easily misinterpreted, by the uninitiated, typically for lack of the proper context. This underlines the importance of addressing a task like [Meme*X*]{.smallcaps}. Governments or organizations involved in content moderation over social media platforms could use such a utility, given the convenience that such a context-deduction solution would bring to assimilating harmful memes and adjudicating their social implications in emergencies such as elections or a pandemic.
12
+
13
+ Motivated by this, we first curate `MCC`, a new dataset that captures various memes and related contextual documents. We also systematically experiment with various multimodal solutions to address [Meme*X*]{.smallcaps}, which culminates into a novel framework named `MIME` (MultImodal Meme Explainer). Our model primarily addresses the challenges posed by the knowledge gap and multimodal abstraction and delivers optimal detection of contextual evidence for a given pair of memes and related contexts. In doing so, `MIME` surpasses several competitive and conventional baselines.
14
+
15
+ To summarize, we make the following main contributions[^2]:
16
+
17
+ - **A novel task**, [Meme*X*]{.smallcaps}, aimed to identify explanatory evidence for memes from their related contexts.
18
+
19
+ - **A novel dataset**, `MCC`, containing $3400$ memes and their related contexts, along with gold-standard, human-annotated evidence sentence subsets.
20
+
21
+ - **A novel method**, `MIME` that uses common sense-enriched meme representation to identify evidence from the given context.
22
+
23
+ - **Empirical analysis** establishing `MIME`'s superiority over various unimodal and multimodal baselines, adapted for the [Meme*X*]{.smallcaps} task.
24
+
25
+ # Method
26
+
27
+ In this section, we describe our proposed model, `MIME`. It takes a meme (an image with overlaid text) and a related context as inputs, and outputs a sequence of labels indicating whether each of the context's constituent sentences, in part or collectively, serves as *evidence* that explains the given meme.
28
+
29
+ As depicted in Fig. [\[fig:main-arch\]](#fig:main-arch){reference-type="ref" reference="fig:main-arch"}, `MIME` consists of a text encoder to encode the context and a multimodal encoder to encode the meme (image and text). To address the complex abstraction requirements, we design a Knowledge-enriched Meme Encoder (KME) that augments the joint multimodal representation of the meme with external common-sense knowledge via a gating mechanism. On the other hand, we use a pre-trained BERT model to encode the sentences from the candidate context.
30
+
31
+ We then set up a Meme-Aware Transformer (MAT) to integrate meme-based information into the context representation for designing a multi-layered contextual-enrichment pipeline. Next, we design a Meme-Aware LSTM (MA-LSTM) that sequentially processes the context representations conditioned upon the meme-based representation. Lastly, we concatenate the last hidden context representations from MA-LSTM and the meme representation and use this jointly-contextualized meme representation for evidence detection. Below, we describe each component of `MIME` in detail.
32
+
33
+ Given a related context $C$ consisting of sentences $[c_1, c_2, \ldots, c_n]$, we encode each sentence in $C$ *individually* using a pre-trained BERT encoder, and the pooled output corresponding to the `[CLS]` token is used as the sentence representation. Finally, we concatenate the individual sentence representations to get a unified context representation $H_{c}\in \mathbb{R}^{n\times d}$ for a total of $n$ sentences.
34
+
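+ A minimal sketch of this per-sentence encoding with the Hugging Face `transformers` library is shown below; the checkpoint choice (`bert-base-uncased`) and helper name are assumptions for illustration, matching BERT's standard base configuration rather than `MIME`'s exact setup.
+
+ ```python
+ import torch
+ from transformers import BertModel, BertTokenizer
+
+ tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
+ bert = BertModel.from_pretrained("bert-base-uncased").eval()
+
+ def encode_context(sentences):
+     """Encode each context sentence individually; stack the pooled
+     [CLS] outputs into H_c with shape (n, d)."""
+     reps = []
+     with torch.no_grad():
+         for s in sentences:
+             inputs = tokenizer(s, return_tensors="pt", truncation=True)
+             out = bert(**inputs)
+             reps.append(out.pooler_output.squeeze(0))  # (d,)
+     return torch.stack(reps)  # (n, d) = H_c
+ ```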
35
+ Since memes encapsulate the complex interplay of linguistic elements in a contextualized setting, it is necessary to facilitate a primary understanding of linguistic abstraction besides factual knowledge. In our scenario, the required contextual mapping is implicitly facilitated across the contents of the meme and context documents. Therefore, to supplement the feature integration with the required common sense knowledge, we employ ConceptNet [@concept_net]: a semantic network designed to help machines comprehend the meanings and semantic relations of the words and specific facts people use. Using a pre-trained GCN, trained using ConceptNet, we aim to incorporate the semantic characteristics by extracting the averaged GCN-computed representations corresponding to the meme's text. In this way, the representations obtained are common sense-enriched and are further integrated with the rest of the proposed solution.
36
+
37
+ To incorporate external knowledge, we use ConceptNet [@concept_net] knowledge graph (KG) as a source of external commonsense knowledge. To take full advantage of the KG and at the same time to avoid the query computation cost, we use the last layer from a pre-trained graph convolutional network (GCN), trained over ConceptNet [@malaviya_kg_completion].
38
+
39
+ We first encode meme $M$ by passing the meme image $M_i$ and the meme text $M_t$[^3] to an empirically designated pre-trained MMBT model [@mmbt], to obtain a multimodal representation of the meme $H_m\in \mathbb{R}^{d}$. Next, to get the external knowledge representation, we obtain the GCN node representations corresponding to the words in the meme text $M_t$. This is followed by average-pooling these embeddings to obtain the unified knowledge representation $H_k\in \mathbb{R}^{d}$.
40
+
41
+ To learn a knowledge-enriched meme representation $\hat{H}_{m}$, we design a Gated Multimodal Fusion (GMF) block. As part of this, we employ a *meme gate* ($g_m$) and the *knowledge gate* ($g_k$) to modulate and fuse the corresponding representations.
42
+
43
+ $$\begin{equation}
+ \small
+ \begin{array}{rcl}
+ g_m &=& \sigma([H_m; H_k]W_{m} + b_{m})\\
+ g_k &=& \sigma([H_m; H_k]W_{k} + b_{k})
+ \end{array}
+ \end{equation}$$ Here, $[\cdot\,;\cdot]$ denotes concatenation, and $W_{m}$, $W_{k} \in \mathbb{R}^{2d\times d}$ and $b_m$, $b_k \in \mathbb{R}^{d}$ are trainable parameters.
50
+
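+ Since the fusion step itself is described but not written out above, the following PyTorch sketch shows one standard way such a gated fusion could combine the two representations (e.g., $\hat{H}_m = g_m \odot H_m + g_k \odot H_k$); this fusion form and all layer names are illustrative assumptions rather than `MIME`'s exact implementation.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class GatedMultimodalFusion(nn.Module):
+     """Sketch of a GMF block: two sigmoid gates computed from the
+     concatenated meme and knowledge representations modulate how much
+     of each representation flows into the fused output."""
+     def __init__(self, d):
+         super().__init__()
+         self.meme_gate = nn.Linear(2 * d, d)       # W_m, b_m
+         self.knowledge_gate = nn.Linear(2 * d, d)  # W_k, b_k
+
+     def forward(self, h_m, h_k):
+         joint = torch.cat([h_m, h_k], dim=-1)      # [H_m; H_k]
+         g_m = torch.sigmoid(self.meme_gate(joint))
+         g_k = torch.sigmoid(self.knowledge_gate(joint))
+         # Assumed fusion: gate-weighted sum of the two modalities.
+         return g_m * h_m + g_k * h_k               # knowledge-enriched H_m
+ ```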
51
+ A conventional Transformer encoder [@attention_is_all_you_need] uses self-attention, which facilitates the learning of inter-token contextual semantics. However, it does not consider any additional contextual information when generating the query, key, and value representations. Inspired by the context-aware self-attention of @context_aware_attention, in which the authors explore several ways to incorporate *global*, *deep*, and *deep-global* contexts while computing self-attention over *embedded textual tokens*, we propose a meme-aware multi-headed attention (MHA). This facilitates the integration of *multimodal meme information* while computing self-attention over the context representations. We call the resulting encoder the meme-aware Transformer (MAT) encoder, which computes the cross-modal affinity for $H_c$, conditioned upon the knowledge-enriched meme representation $\hat{H}_{m}$.
52
+
53
+ Conventional self-attention uses query, key, and value vectors from the same modality. In contrast, as part of meme-aware MHA, we first generate the key and the value vectors conditioned upon the meme information and then use these vectors via conventional multi-headed attention-based aggregation. We elaborate on the process below.
54
+
55
+ Given the context representation $H_c$, we first calculate the conventional query, key, and value vectors $Q$, $K$, $V \in \mathbb{R}^{n\times d}$, respectively as given below: $$\begin{equation}
56
+ \small
57
+ [QKV] = H_c[W_{Q}W_{K}W_{V}]
58
+ \end{equation}$$ Here, $n$ is the maximum sequence length, $d$ is the embedding dimension, and $W_{Q},W_{K},$ and $W_{V} \in \mathbb{R}^{d\times d}$ are learnable parameters.
59
+
60
+ We then generate new key and value vectors $\hat{K}$ and $\hat{V}$, respectively, which are conditioned on the meme representation $\hat{H}_{m} \in \mathbb{R}^{1\times d}$ (broadcast to the context size). We use a gating parameter $\lambda \in \mathbb{R}^{n\times 1}$ to regulate the memetic and contextual interaction. Here, $U_k$ and $U_v$ constitute learnable parameters. $$\begin{equation}
+ \small
+ \begin{bmatrix} \hat{K} \\ \hat{V} \end{bmatrix}
+ = \left(1 - \begin{bmatrix} \lambda_k \\ \lambda_v \end{bmatrix}\right)
+ \begin{bmatrix} K \\ V \end{bmatrix}
+ + \begin{bmatrix} \lambda_k \\ \lambda_v \end{bmatrix}
+ \left(\hat{H}_{m} \begin{bmatrix} U_k \\ U_v \end{bmatrix}\right)
+ \end{equation}$$
89
+
90
+ We learn the parameters $\lambda_k$ and $\lambda_v$ using a sigmoid-based gating mechanism, instead of treating them as hyperparameters, as follows: $$\begin{equation}
+ \small
+ \begin{bmatrix} \lambda_{k} \\ \lambda_{v} \end{bmatrix}
+ = \sigma\left(\begin{bmatrix} K \\ V \end{bmatrix}
+ \begin{bmatrix} W_{k_1} \\ W_{v_1} \end{bmatrix}
+ + \hat{H}_{m} \begin{bmatrix} U_k \\ U_v \end{bmatrix}
+ \begin{bmatrix} W_{k_2} \\ W_{v_2} \end{bmatrix}\right)
+ \end{equation}$$ Here, $W_{k_1}$, $W_{v_1}$, $W_{k_2}$, and $W_{v_2} \in \mathbb{R}^{d\times 1}$ are learnable parameters.
120
+
121
+ Finally, we use the query vector $Q$ against $\hat{K}$ and $\hat{V}$, conditioned on the meme information in a conventional scaled dot-product-based attention. This is extrapolated via multi-headed attention to materialize the Meme-Aware Transformer (MAT) encoder, which yields meme-aware context representations $H_{c/m}\in \mathbb{R}^{n\times d}$.
122
+
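+ The sketch below illustrates the meme-conditioned key/value computation described above. It is a simplified single-head version written directly from the stated equations; the module and parameter names are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class MemeAwareAttention(nn.Module):
+     """Single-head sketch of meme-aware attention: keys and values are
+     blended with a meme-conditioned projection via learned gates."""
+     def __init__(self, d):
+         super().__init__()
+         self.w_q, self.w_k, self.w_v = (nn.Linear(d, d, bias=False) for _ in range(3))
+         self.u_k, self.u_v = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
+         self.w_k1, self.w_v1 = nn.Linear(d, 1, bias=False), nn.Linear(d, 1, bias=False)
+         self.w_k2, self.w_v2 = nn.Linear(d, 1, bias=False), nn.Linear(d, 1, bias=False)
+
+     def forward(self, h_c, h_m):
+         # h_c: (n, d) context; h_m: (1, d) knowledge-enriched meme rep.
+         q, k, v = self.w_q(h_c), self.w_k(h_c), self.w_v(h_c)
+         m_k, m_v = self.u_k(h_m), self.u_v(h_m)               # H_m U_k, H_m U_v
+         lam_k = torch.sigmoid(self.w_k1(k) + self.w_k2(m_k))  # (n, 1) gates
+         lam_v = torch.sigmoid(self.w_v1(v) + self.w_v2(m_v))
+         k_hat = (1 - lam_k) * k + lam_k * m_k                 # broadcast over n
+         v_hat = (1 - lam_v) * v + lam_v * m_v
+         attn = F.softmax(q @ k_hat.T / k.size(-1) ** 0.5, dim=-1)
+         return attn @ v_hat                                   # (n, d) = H_{c/m}
+ ```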
123
+ Prior studies have indicated that pairing a recurrent neural network such as an LSTM with a Transformer encoder like BERT is advantageous. Rather than directly using a standard LSTM in `MIME`, we aim to incorporate the meme information into the sequential recurrence-based learning. Towards this objective, we introduce the Meme-Aware LSTM (MA-LSTM) in `MIME`. MA-LSTM is a recurrent neural network inspired by [@xu-etal-2021-better] that incorporates the meme representation $\hat{H}_{m}$ while computing the cell and hidden states. The gating mechanism in MA-LSTM allows it to assess how much information to take from the hidden states of the enriched context and meme representations, $H_{c/m}$ and $\hat{H}_{m}$, respectively.
124
+
125
+ Fig. [\[fig:main-arch\]](#fig:main-arch){reference-type="ref" reference="fig:main-arch"} shows the architecture of MA-LSTM. We elaborate on the working of the MA-LSTM cell below. It takes as input the previous cell state $c_{t-1}$, the previous hidden representation $h_{t-1}$, the current cell input $H_{c_{t}}$, and an additional meme representation $\hat{H}_m$. Besides the conventional steps for computing the *input* ($i_t$), *forget* ($f_t$), *output* ($o_t$), and *gate* ($\hat{c}_t$) values w.r.t. the input $H_{c_{t}}$, the *input* and *gate* values are also computed w.r.t. the additional input $\hat{H}_{m}$, yielding $p_t$ and $\hat{s}_t$, respectively. The final *cell*-state and *hidden*-state outputs are obtained as follows: $$\begin{equation*}
126
+ \small
127
+ \begin{array}{rcl}
128
+ c_{t} &=& f_{t}\odot c_{t-1} + i_{t}\odot \hat{c_{t}} + p_{t}\odot \hat{s_{t}} \\
129
+ h_{t} &=& o_t \odot \tanh(c_t)
130
+ \end{array}
131
+ \end{equation*}$$
132
+
133
+ The hidden states from each time step are then concatenated to produce the unified context representation $\hat{ H_{c/m}}\in \mathbb{R}^{n\times d}$.
134
+
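+ A minimal PyTorch sketch of such a cell is given below; the exact parameterization of the meme-side gates in [@xu-etal-2021-better] may differ, so treat the gate definitions (in particular those producing $p_t$ and $\hat{s}_t$) as illustrative assumptions consistent with the equations above.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class MALSTMCell(nn.Module):
+     """Sketch of a meme-aware LSTM cell: a standard LSTM cell plus an
+     extra input gate (p_t) and candidate state (s_t) computed from the
+     meme representation h_m."""
+     def __init__(self, d):
+         super().__init__()
+         self.gates = nn.Linear(2 * d, 4 * d)        # i, f, o, c~ from [x; h]
+         self.meme_gates = nn.Linear(2 * d, 2 * d)   # p, s~ from [h_m; h]
+
+     def forward(self, x, h_m, state):
+         h_prev, c_prev = state
+         i, f, o, c_tilde = self.gates(torch.cat([x, h_prev], -1)).chunk(4, -1)
+         p, s_tilde = self.meme_gates(torch.cat([h_m, h_prev], -1)).chunk(2, -1)
+         i, f, o, p = map(torch.sigmoid, (i, f, o, p))
+         c_tilde, s_tilde = torch.tanh(c_tilde), torch.tanh(s_tilde)
+         c = f * c_prev + i * c_tilde + p * s_tilde  # as in the cell equation
+         h = o * torch.tanh(c)
+         return h, (h, c)
+ ```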
135
+ Finally, we concatenate $\hat{ H_{m}}$ and $\hat{ H_{c/m}}$ to obtain a joint context-meme representation, which we then pass through a feed-forward layer to obtain the final classification. The model outputs the *likelihood* of a sentence being valid evidence for a given meme. We use the cross-entropy loss to optimize our model.
136
+
137
+ We experiment with various unimodal and multimodal encoders for systematically encoding memes and context representations to establish comparative baselines. The details are presented below.
138
+
139
+ ::: itemize*
140
+ **BERT [@devlin2019BERT]:** To obtain text-based unimodal meme representation.
141
+
142
+ **ViT [@vit]:** Pre-trained on ImageNet to obtain image-based unimodal meme representation.
143
+ :::
144
+
145
+ ::: itemize*
146
+ **Early-fusion:** To obtain a concatenated multimodal meme representation using the BERT and ViT models.
147
+
148
+ **MMBT [@mmbt]:** To encode projections of pre-trained image features alongside text tokens via a multimodal bitransformer.
149
+
150
+ **CLIP [@clip2021radford]:** To obtain multimodal meme representations using the CLIP image and text encoders, with the CLIP text encoder used for the context representation.
151
+
152
+ **BAN [@BAN]:** To obtain a joint representation using low-rank bilinear pooling while leveraging the dependencies among two groups of input channels.
153
+
154
+ **VisualBERT [@li2019visualbert]:** To obtain multimodal pooled representations for memes, using a Transformer-based visual-linguistic model.
155
+ :::
2306.02845/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2306.02845/paper_text/intro_method.md ADDED
@@ -0,0 +1,92 @@
1
+ # Introduction
2
+
3
+ Emotions, characterized by a rich and complex mix of physiological and cognitive states, hold significant importance across multiple fields such as psychology, human-computer interaction, affective computing, and even extending to broader domains such as virtual reality, user experience design, healthcare, and education [@poria2017review]. Understanding and accurately interpreting emotions is essential in human communication and social interactions [@cimtay2020cross]. With the surge in the development and accessibility of multimodal sensing technologies, researchers can explore multiple modalities to enhance the accuracy and robustness of emotion recognition systems [@baltruvsaitis2018multimodal]. The current research trend focuses on building Artificial Intelligence (AI) systems that can be deployed for real-life applications [@paleyes2022challenges].
4
+
5
+ Two such modalities, facial expressions and physiological signals, have garnered significant attention due to the rich information they offer and their non-invasive nature [@yu2021facial]. Facial expressions, direct and non-invasive indicators of emotion, have been thoroughly investigated [@malik2021towards]. Various techniques involving the extraction of facial landmarks, local descriptors, or holistic representations have been proposed to capture nuanced variations in facial muscle movements that reflect different emotional states [@wang2018facial]. Physiological signals, such as remote photoplethysmography (rPPG) signals, provide another layer of emotional cues. These signals, obtained through non-contact video-based techniques, offer insights into physiological changes associated with emotional responses [@yu2021facial]. The interplay of these two modalities offers a more holistic understanding of emotions, thus enhancing the robustness of emotion recognition systems [@zeng2009survey].
6
+
7
+ Emotion classification through audio-visual information is a well-established research task [@rao2019learning; @xu2020improve; @majumder2019dialoguernn]. However, recognizing emotion using the physiological context along with audio-visual information offers scope for further exploration [@yu2021facial]. Furthermore, despite significant advancements, many multimodal emotion recognition models do not provide meaningful interpretations of their predictions [@murdoch2019definitions; @longo2020explainable]. Most existing interpretability techniques have been developed for the visual modality and have yet to be fully explored for multimodal analysis [@ribeiro2016should; @selvaraju2017grad; @malik2021towards].
8
+
9
+ This paper proposes an interpretable multimodal emotion recognition framework that extracts rPPG signals and facial features from the input videos and uses their combined context for emotion detection. The Haar cascade classifier [@soo2014object] has been implemented to extract the rPPG signals, whereas a pre-trained ResNet-34-based network extracts the visual features. Further, early and late fusion approaches that integrate the static facial expression features and dynamic rPPG signals have been incorporated, capturing both spatial and temporal aspects of emotions.
10
+
11
+ An interpretability technique based on permutation feature importance (PFI) [@altmann2010permutation] has also been incorporated that computes the contribution of the rPPG and visual modalities towards classifying a given input video into a particular emotion class. The experiments performed on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset [@busso2008iemocap] have resulted in an accuracy of 54.61% while classifying the input videos into ten emotion classes ('neutral,' 'happy,' 'sad,' 'angry,' 'excited,' 'frustrated,' 'fearful,' 'surprised,' 'distressed' and 'other'). The higher performance obtained using the multimodal context, compared with the individual accuracies of the rPPG or visual modality alone, underscores the importance of leveraging the multimodal context for emotion understanding. The average contributions of the rPPG and visual modalities towards emotion recognition have been computed as 37.67% and 62.33%, respectively.
12
+
13
+ The contributions of this paper can be summarized as follows:
14
+
15
+ - A multimodal emotion recognition framework has been proposed to classify a given video into discrete emotion classes. It extracts the dynamic rPPG signals from the input videos and combines them with static facial expressions using early and late fusion approaches.
16
+
17
+ - An interpretability technique has been incorporated that computes the contribution of rPPG and visual modalities towards emotion classification using the PFI algorithm.
18
+
19
+ - Extensive experiments have been performed on the IEMOCAP dataset, and the results have been presented in terms of accuracy, precision, recall, F1 score, and modality-wise contributions toward emotion classification.
20
+
21
+ # Method
22
+
23
+ The proposed framework has been diagrammatically depicted in Figure [1](#fig:archi){reference-type="ref" reference="fig:archi"} and described in the following sections.
24
+
25
+ <figure id="fig:archi" data-latex-placement="h">
26
+ <img src="archi" />
27
+ <figcaption>Schematic illustration of the proposed framework.</figcaption>
28
+ </figure>
29
+
30
+ The video files are loaded and processed frame by frame using the OpenCV (cv2) library [^2] to extract rPPG signals and facial features.
31
+
32
+ *i) rPPG Signals Extraction*: Face detection within each video frame during the rPPG signal extraction process is accomplished using Haar cascades [@soo2014object]. The region of interest (ROI), predominantly the facial region, is isolated from each frame, after which the mean intensity is computed to generate the rPPG signal for each video. The calculation of the mean intensity within the ROI ($\bar{I}_c$) is represented in Eq. [\[eq:1\]](#eq:1){reference-type="ref" reference="eq:1"}.
33
+
34
+ $$\begin{equation}
35
+ \bar{I}_c = \frac{1}{N} \sum_{x=1}^{W}\sum_{y=1}^{H} I_{x, y, c}
36
+ \label{eq:1}
37
+ \end{equation}$$
38
+
39
+ Where $I_{x, y, c}$ is the intensity of the pixel at location $(x, y)$ for color channel $c$ in the ROI, $N$ is the total number of pixels in the ROI, $W$ and $H$ represent the width and height of the ROI, respectively, and $c \in \{R, G, B\}$.
40
+
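+ A minimal OpenCV sketch of this extraction pipeline is shown below; the cascade file is the stock `haarcascade_frontalface_default.xml` shipped with OpenCV, and taking the largest detected face as the ROI is an assumption for illustration.
+
+ ```python
+ import cv2
+ import numpy as np
+
+ def extract_rppg(video_path):
+     """Return a per-channel mean-intensity trace (the raw rPPG signal)
+     over the facial ROI of each frame."""
+     face_cascade = cv2.CascadeClassifier(
+         cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
+     cap = cv2.VideoCapture(video_path)
+     signal = []
+     while True:
+         ok, frame = cap.read()
+         if not ok:
+             break
+         gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
+         faces = face_cascade.detectMultiScale(gray, 1.3, 5)
+         if len(faces) == 0:
+             continue
+         x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
+         roi = frame[y:y + h, x:x + w]
+         signal.append(roi.reshape(-1, 3).mean(axis=0))  # mean per B, G, R
+     cap.release()
+     return np.array(signal)  # shape (frames, 3)
+ ```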
41
+ *ii) Facial Features Extraction*: Facial feature extraction employs Dlib's shape predictor [@dlib2016davis], a ResNet-34-based model trained on the FaceScrub dataset [@ng2014data], to identify the facial landmarks in a given face image. As per Eq. [\[eq:2\]](#eq:2){reference-type="ref" reference="eq:2"}, it identifies 68 facial landmarks for each detected face within every frame, distinguishing unique facial characteristics.
42
+
43
+ $$\begin{equation}
44
+ \begin{split}
45
+ P &= D(F, \{L_i\}) \\
46
+ F &= [f_1, f_2, \ldots, f_n]
47
+ \end{split}
48
+ \label{eq:2}
49
+ \end{equation}$$
50
+
51
+ Where $F$ represents the face detected in a frame, $P$ represents the predicted points on the face, $D(F, \{L_i\})$ is the function that predicts points on the face, and $L_i$ is the set of landmark points for the $i^{th}$ landmark. As signals from different videos may differ in length, it is crucial to standardize the input to the neural network model. This standardization is achieved by zero-padding $\bar{I}$ and $P$ to match the maximum signal length.
52
+
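+ A sketch of this landmark extraction with Dlib, plus the zero-padding step, could look as follows; the predictor file name is Dlib's standard 68-landmark model, and the padding helper is an illustrative assumption.
+
+ ```python
+ import dlib
+ import numpy as np
+
+ detector = dlib.get_frontal_face_detector()
+ predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
+
+ def frame_landmarks(gray_frame):
+     """Return the 68 (x, y) landmark points of the first detected face,
+     or None if no face is found."""
+     faces = detector(gray_frame)
+     if not faces:
+         return None
+     shape = predictor(gray_frame, faces[0])
+     return np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)])
+
+ def zero_pad(seq, max_len):
+     """Zero-pad a (t, ...) feature sequence along time to max_len."""
+     out = np.zeros((max_len,) + seq.shape[1:], dtype=seq.dtype)
+     out[: len(seq)] = seq
+     return out
+ ```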
53
+ Early fusion and late fusion approaches are used to combine the rPPG signals and facial features.
54
+
55
+ *i) Early Fusion*: In the early fusion approach, the rPPG signals and facial features are concatenated before being fed into the model. The fused data are then passed through a neural network comprising a flatten layer, followed by CNN layers of dimensions 512 and 256, and a final layer whose size equals the number of classes. The flatten layer transforms the 3D input tensor into a 1D tensor, and the subsequent CNN layers perform the classification task. The model structure is represented in Eq. [\[eq:3\]](#eq:3){reference-type="ref" reference="eq:3"}.
56
+
57
+ $$\begin{equation}
58
+ \begin{aligned}
59
+ I' &= \text{concatenate}(\bar{I}_c, P) \\
60
+ I'' &= \text{flatten}(I') \\
61
+ F_{early} &= \text{NNet}(I'', C) \\
62
+ \end{aligned}
63
+ \label{eq:3}
64
+ \end{equation}$$
65
+
66
+ Where $I'$ is the concatenated input, $I''$ is its flattened form, $C$ denotes the number of classes, $\bar{I}_c$ is the mean intensity within the ROI from the rPPG signals, $P$ represents the facial features, $\text{NNet}$ represents the early fusion network, and $F_{early}$ is the output of the early fusion.
67
+
68
+ *ii) Late Fusion*: In the late fusion approach, the rPPG and visual models are trained separately, and their outputs are combined using a weighted average. Eq. [\[eq:4\]](#eq:4){reference-type="ref" reference="eq:4"} represents a late fusion approach where the models are trained separately, and their outputs are combined in the final output $F_{late}$.
69
+
70
+ $$\begin{equation}
71
+ \begin{split}
72
+ F_{late} &= w_1 \cdot M_{\text{rPPG}}(\bar{I}_c) + w_2 \cdot M_{\text{facial}}(P)
73
+ \end{split}
74
+ \label{eq:4}
75
+ \end{equation}$$
76
+
77
+ Where $M_{\text{rPPG}}(\bar{I}_c)$ and $M_{\text{facial}}(P)$ represent the outputs of the rPPG model and the visual model, respectively, and $w_1$ and $w_2$ are the weights assigned to each model's output in the final fusion.
78
+
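+ The two fusion strategies can be sketched in a few lines of PyTorch; the layer sizes follow the 512/256 description above, while the fusion weights $w_1$, $w_2$ and all module names are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class EarlyFusionNet(nn.Module):
+     """Concatenate rPPG and facial features, flatten, then classify
+     through 512- and 256-unit layers (as described for early fusion)."""
+     def __init__(self, in_features, num_classes):
+         super().__init__()
+         self.net = nn.Sequential(
+             nn.Flatten(),
+             nn.Linear(in_features, 512), nn.ReLU(),
+             nn.Linear(512, 256), nn.ReLU(),
+             nn.Linear(256, num_classes),
+         )
+
+     def forward(self, rppg, facial):
+         fused = torch.cat([rppg, facial], dim=-1)  # I' = concat(I_c, P)
+         return self.net(fused)                     # F_early
+
+ def late_fusion(rppg_logits, facial_logits, w1=0.4, w2=0.6):
+     """F_late = w1 * M_rPPG + w2 * M_facial (weighted average of the
+     separately trained models' outputs); w1, w2 are assumed values."""
+     return w1 * rppg_logits + w2 * facial_logits
+ ```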
79
+ This study employs three separate models for emotion classification. Two of these models operate independently, utilizing rPPG signals and facial features. The third model operates via 'early fusion,' exploiting the combined context of data from the rPPG and visual models. The outputs of these individual models are then collaboratively integrated through a 'late fusion' approach that uses a weighted addition technique. The individual models, based on rPPG signals and facial features, are constructed as follows.
80
+
81
+ *i) rPPG Model:* This model utilizes a deep convolutional neural network (CNN) with two hidden layers and Rectified Linear Unit (ReLU) activation functions to classify emotions from the rPPG signals.
82
+
83
+ *ii) Visual Model:* This model, built on facial features, employs a ResNet-based Deep CNN with two hidden layers and ReLU activation functions.
84
+
85
+ An explainability method based on permutation feature importance (PFI) [@altmann2010permutation] is implemented, which estimates the importance of features by permuting the values of each feature and measuring the resulting impact on model performance. The PFI of a feature $j$ is the decrease in the model score when the values of feature $j$ are randomly permuted. Eq. [\[eq:pfi\]](#eq:pfi){reference-type="ref" reference="eq:pfi"} mathematically represents the concept of permutation feature importance.
86
+
87
+ $$\begin{equation}
88
+ PFI(j) = E_{\pi}[f(X^{(i)})] - E_{\pi}[f(X^{(i)}_{\pi_j})]
89
+ \label{eq:pfi}
90
+ \end{equation}$$
91
+
92
+ Where $PFI(j)$ is the permutation feature importance of feature $j$, $E_{\pi}[f(X^{(i)})]$ is the expected value of the model score over all samples in the dataset when the model is scored normally, $E_{\pi}[f(X^{(i)}_{\pi_j})]$ is the expected value of the model score when the values of feature $j$ are permuted according to some permutation $\pi$, and $X^{(i)}_{\pi_j}$ denotes the dataset $X^{(i)}$ with the values of feature $j$ permuted according to $\pi$.
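+
+ In the multimodal setting, the same idea can be applied at the level of whole modalities: permute one modality's features across samples and measure the resulting drop in the score. The sketch below shows this; the scoring callback and all names are illustrative assumptions rather than the paper's exact procedure.
+
+ ```python
+ import numpy as np
+
+ def modality_pfi(model_score, rppg, facial, labels, n_repeats=10, seed=0):
+     """Estimate each modality's contribution by permuting its features
+     across samples and measuring the drop in the model's score.
+
+     model_score(rppg, facial, labels) -> float (e.g., accuracy).
+     """
+     rng = np.random.default_rng(seed)
+     base = model_score(rppg, facial, labels)
+     drops = {}
+     for name in ("rPPG", "facial"):
+         scores = []
+         for _ in range(n_repeats):
+             if name == "rPPG":
+                 perm = rng.permutation(len(rppg))
+                 scores.append(model_score(rppg[perm], facial, labels))
+             else:
+                 perm = rng.permutation(len(facial))
+                 scores.append(model_score(rppg, facial[perm], labels))
+         drops[name] = base - float(np.mean(scores))
+     total = sum(drops.values())
+     # Normalize the score drops into percentage contributions.
+     return {k: 100.0 * v / total for k, v in drops.items()}
+ ```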
2307.09312/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2023-05-02T05:43:49.499Z" agent="5.0 (Macintosh)" version="20.7.4" etag="oEDr-SiEmzIBMMpDraiQ" type="device"><diagram id="oRDFE_W3vR_xWFx_FH08" name="Page-1">3VhLc5swEP41HJMBZDA5xnYenUln2ubQpDcZLaBWRoyQY5xfXwmEAWO7SWPHTi4e7SdpJe23L2Oh8ay4EThLvnICzHJtUlhoYrmuNwzUrwaWFYACtwJiQUkFOQ1wT5/BgLZB55RA3lkoOWeSZl0w5GkKoexgWAi+6C6LOOuemuHYnGg3wH2IGfSW/aREJhUaeK3Vt0DjpD7Zsc3MDNeLDZAnmPBFC0JXFhoLzmU1mhVjYNp2tV2qfddbZlcXE5DKl2x4vrkbjX7NyfcvLr5lk+DHwo/OfHM3uawfDES934hcyITHPMXsqkFHgs9TAlqrraRmzR3nmQIdBf4GKZeGTDyXXEGJnDEzCwWVD63xo1Glx5OiLSxrIZVi+dAWHuv9Wmg2lVKzi1xqF1BiylOokGvKmJmv3q4fvNWkBsr5XISww461a2IRg9yxDq2IVwEDfAbqvmqfAIYlfereAxvXjVfrGnbVwBD8CrId52TYPvf+g2+zawfjBOdJeVd9WK7YkGsOUGItFzhBF3GP6iKDSvETZnNz1ITm6rqXCnQ3us8dnqqU36EcMxqnahwqI4FQwBMISVVSvTQTM0pI5V2Q02c8LfVpe2ecprJ8lDeyvMmKAa0Aik0Z32xuJ/CGmx1h0LewUW+fO6g2gylaNXcv5sAo/6Zf02g+26y1VsCjKFeesU7h6oZvYLVH6rRPJWOqpmpOFgmVcJ/h0psXqqqvkZtnVaGNaKFjbRSpcBlzxkWpCEVR5IZhGWuC/4HWDPGnvufv4rQXV1tZcpDdMaYbGHnRVOmBgZJWga6xvUfO8Ji51bZeXUkd6zQrqfsRKqnbCyhyuIACh3gw3BRQF/4Q4T0FlHtqAYV6Ng4PmLSCEDYnrWngDTx7T0krWLMxOrKNLz5Y0vpE7eDgnfJcuVU9Ci9bC0ybtbVTCfyun3r2mqtVCvfbpNifv/W8+EfnORgGqGv5tzWe79BaomOmkEN/QfigOWNLFXqf3qj/DxIfrm6rziggg011O3CnyN9Tb7RWttHhyrYSmy+CVYw2n1XR1V8=</diagram></mxfile>
2307.09312/main_diagram/main_diagram.pdf ADDED
Binary file (9.83 kB). View file