Eric03 committed on
Commit
44c2175
·
verified ·
1 Parent(s): 0a59524

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. 2006.09740/main_diagram/main_diagram.drawio +1 -0
  2. 2006.09740/main_diagram/main_diagram.pdf +0 -0
  3. 2006.09740/paper_text/intro_method.md +95 -0
  4. 2009.09439/main_diagram/main_diagram.drawio +1 -0
  5. 2009.09439/main_diagram/main_diagram.pdf +0 -0
  6. 2009.09439/paper_text/intro_method.md +40 -0
  7. 2010.11230/main_diagram/main_diagram.drawio +1 -0
  8. 2010.11230/main_diagram/main_diagram.pdf +0 -0
  9. 2010.11230/paper_text/intro_method.md +128 -0
  10. 2102.11938/main_diagram/main_diagram.drawio +1 -0
  11. 2102.11938/main_diagram/main_diagram.pdf +0 -0
  12. 2102.11938/paper_text/intro_method.md +73 -0
  13. 2103.17185/main_diagram/main_diagram.drawio +0 -0
  14. 2103.17185/paper_text/intro_method.md +116 -0
  15. 2104.05981/main_diagram/main_diagram.drawio +0 -0
  16. 2104.05981/paper_text/intro_method.md +95 -0
  17. 2104.06669/main_diagram/main_diagram.drawio +1 -0
  18. 2104.06669/paper_text/intro_method.md +60 -0
  19. 2106.04335/main_diagram/main_diagram.drawio +1 -0
  20. 2106.04335/main_diagram/main_diagram.pdf +0 -0
  21. 2106.04335/paper_text/intro_method.md +100 -0
  22. 2106.05266/main_diagram/main_diagram.drawio +0 -0
  23. 2106.05266/paper_text/intro_method.md +107 -0
  24. 2108.08265/main_diagram/main_diagram.drawio +1 -0
  25. 2108.08265/main_diagram/main_diagram.pdf +0 -0
  26. 2108.08265/paper_text/intro_method.md +92 -0
  27. 2109.02614/main_diagram/main_diagram.drawio +1 -0
  28. 2109.02614/main_diagram/main_diagram.pdf +0 -0
  29. 2109.02614/paper_text/intro_method.md +17 -0
  30. 2110.05291/main_diagram/main_diagram.drawio +1 -0
  31. 2110.05291/main_diagram/main_diagram.pdf +0 -0
  32. 2110.05291/paper_text/intro_method.md +91 -0
  33. 2110.05877/main_diagram/main_diagram.drawio +0 -0
  34. 2110.05877/main_diagram/main_diagram.pdf +0 -0
  35. 2110.05877/paper_text/intro_method.md +19 -0
  36. 2110.06257/main_diagram/main_diagram.drawio +1 -0
  37. 2110.06257/main_diagram/main_diagram.pdf +0 -0
  38. 2110.06257/paper_text/intro_method.md +449 -0
  39. 2110.08350/main_diagram/main_diagram.drawio +1 -0
  40. 2110.08350/main_diagram/main_diagram.pdf +0 -0
  41. 2110.08350/paper_text/intro_method.md +103 -0
  42. 2112.01036/main_diagram/main_diagram.drawio +1 -0
  43. 2112.01036/main_diagram/main_diagram.pdf +0 -0
  44. 2112.01036/paper_text/intro_method.md +165 -0
  45. 2112.04728/main_diagram/main_diagram.drawio +0 -0
  46. 2112.04728/main_diagram/main_diagram.pdf +0 -0
  47. 2112.04728/paper_text/intro_method.md +60 -0
  48. 2112.07917/main_diagram/main_diagram.drawio +0 -0
  49. 2112.07917/paper_text/intro_method.md +93 -0
  50. 2203.07881/main_diagram/main_diagram.drawio +0 -0
2006.09740/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2020-04-15T21:36:33.068Z" agent="5.0 (X11)" etag="usX6BjyDipzHfl0lMvR3" version="12.9.13" type="device"><diagram id="e--KDMYL1gQMsiW6AnH9" name="Page-1">5VnbUtswEP2aPNKxrTgJjySE0mk7Q2Gm0KeOsDa2qC2lskIcvr6SLccXBXBLSAh9ivZIu5L2oiM5PTRJso8Cz6OvnEDc8xyS9dBpz/Nc5A/Vj0ZWBTJyDBAKSsygCriiD2BAx6ALSiBtDJScx5LOm2DAGYNANjAsBF82h8143Jx1jkOwgKsAxzZ6TYmMzC58p8LPgYZRObPrmJ4El4MNkEaY8GUNQtMemgjOZdFKsgnE2nmlXwq9s0d61wsTwGQXhbH7PWXXPy9/X959PpqeZx66+Hbk9s3i5KrcMRDlACNyISMecobjaYWOBV8wAtqso6RqzBfO5wp0FXgHUq5MNPFCcgVFMolNL2RU3tTaP7SpD76RTjNjORdWpcCkWN3UhULL80u50sulUrHYoN7Vo44zUMoXIoCnvGUSEIsQ5FPj3HV8VWEAT0AtSCkKiLGk982FYJOh4XpcFUTVMHH8m5gWdu9xvDAzXULKQB65jmtFuxnLZUQlXM1x7oWlquhm3GY0jic85iLXRbMZDIJA4akU/BfUesjw+NapnH8PQkL2tPttbxkFr6woc4K4fSMva/VooKhWiuWwrfsXWf61a4iRE334KCmIcZrSoOnI7jn5fK5tdl7NOf4G55RY54w0M1xwqha4jo07aMVm0HJ6UVJGq344tQz1/aYhdNwyVPjBMpQHcL3tf49pf0NMB7Fy1pjQe9UMdfNCwFzwANJUFYzpVrPVRliJoNJeNkPfrBXGGbQKy0A4piHT+aMyAhQ+1kVEFTGdmI6EEpIfx5uKtlnWW6jCUSvQyFfndZcyRP5rHXPo0KlruEPq8g+CunyrDKfJLRBCWfgOmGst74u5Bi9nrsdyeYepPOyYyv5bYkZvuC1mHO2WGYdWzkx0Xmg1VX5ZB8brzmQCUvqAb3NTOmnmelP5Nv1xzz/VttSBnBZns/sqxLZ+eD5To95r1ah38C+yXT7Iut6SPbRPWnPtJ9mEswBLYGp2zg6Q25DXuhAO98xt7iZys67wZ//bvd1vv9BGdpy8nZ5vjhWnT4n+CPduQ9B+OvVH+346oQ33vrfJMVvkihdTwOb7mVVgO/5y4dmfo84mB0go/f7OCEWJ1Wf3Ig7Vnxdo+gc=</diagram></mxfile>
2006.09740/main_diagram/main_diagram.pdf ADDED
Binary file (9.15 kB).
 
2006.09740/paper_text/intro_method.md ADDED
@@ -0,0 +1,95 @@
1
+ # Introduction
2
+
3
+ Estimating the 3D rotation of an object from 2D images is one of the fundamental problems in computer vision. Several applications relying on 3D rotation estimation have been developed, such as a robot grasping an object, a self-driving vehicle constantly sensing its surrounding environment, an augmented reality system combining computer-generated information with the real world, or a system detecting face orientation to enhance human-computer interaction.
4
+
5
+ Advances in deep learning techniques have resulted in improvements in the estimation of 3D orientation. However, precise orientation estimation remains an open problem. The main difficulty is that the space of all 3D rotations lies on a nonlinear and closed manifold, referred to as the special orthogonal group $SO(3)$. This manifold has a different topology than the unconstrained values in $\mathbb{R}^N$ in which neural network outputs live. As a result, it is hard to design a loss function which is continuous and free of disconnected local minima. For example, using Euler angles as an intermediate step causes problems due to the so-called gimbal lock. Quaternions have a double embedding, giving rise to two disconnected local minima. Some more complicated methods use Gram-Schmidt orthogonalization, which has a continuous inverse, but the forward function itself is not continuous, with a discontinuity when the input vectors do not span $\mathbb{R}^3$.
6
+
7
+ Despite these issues, various deep-learning-based solutions have been suggested. One approach is to use one of the rotation representations and model the constraint in the loss function or in the network architecture. An alternative is to construct a mapping function which directly converts the network output to a rotation matrix.
8
+
9
+ Quantifying the uncertainty of a 3D orientation estimate when dealing with noisy or otherwise difficult inputs is also an important task. Uncertainty estimation provides valuable information about the quality of the prediction during decision making. Only recently have efforts been made to model the uncertainty of 3D rotation estimation. However, those methods still rely on complex solutions to fulfil the constraints required by their parameterization.
10
+
11
+ In this paper, we instead propose a deep learning approach to estimate the 3D rotation uncertainty by using the matrix Fisher probability density function developed in the field of directional statistics. This unimodal distribution has been selected because of its relevant properties with regard to the problem of 3D orientation estimation: i) the parameterization is unconstrained, so there is no need for complex functions to enforce constraints; ii) it is possible to create a loss for this distribution which has desirable properties such as convexity; iii) the mode of the distribution can subsequently be estimated along with the uncertainty around that mode for further analysis.
12
+
13
+ Our method offers a simple solution to the problem of 3D orientation estimation, where a neural network learns to regress the parameters of the matrix Fisher distribution. While several other losses used for rotation estimation are discontinuous or have multiple local minima with respect to the network output, in this paper we instead propose a loss which is convex with bounded gradient magnitudes, resulting in stable training. In addition, we suggest a solution to efficiently compute the non-trivial normalizing constant of the distribution. Finally, the proposed method is evaluated on multiple problem domains and compared with the latest published approaches. The results show that our method outperforms all previous methods.
14
+
15
+ Our contributions include: 1) a method for estimating a probability distribution over the set of rotations with neural networks by using the matrix Fisher distribution, 2) a loss associated with this distribution, which we show is convex with bounded gradients, and 3) an extensive analysis encompassing several datasets and recent orientation estimation works, where we demonstrate the superiority of our method over the state-of-the-art.
16
+
17
+ # Method
18
+
19
+ We train a neural network to estimate the 3D pose of an input image.
20
+ Specifically, the network outputs the parameters of the matrix Fisher distribution, which is a distribution over $SO(3)$.
21
+ From the predicted parameters we can obtain the maximum likelihood estimate of the input's 3D pose. In the rest of this section we review the matrix Fisher distribution and provide some visualizations to help the reader interpret its parameters. We then derive the loss, based on maximizing the likelihood of the labelled data, and finally explain how we deal with the distribution's complex normalizing constant when computing the loss and its gradient during back-propagation.
22
+
23
+ We model 3D rotation matrices probabilistically with the matrix Fisher distribution. This distribution has the probability density function
24
+
25
+ $$p(R \mid F) = \frac{1}{a(F)}\exp(\text{tr}(F^T R))$$
26
+
27
+ where $F$ is an unconstrained matrix in $\mathbb{R}^{3\times3}$ parametrising the distribution, $R \in SO(3)$, and $a(F)$ is the distribution's normalizing constant. We denote that $R$ is distributed according to a matrix Fisher distribution by $R \sim \mathcal{M}(F)$. The distribution is unimodal, but visualizing the pdf above is hard as it has a 3D domain. Fortunately, prior work describes a helpful visualization scheme, shown in the figure below, which we use throughout the paper.
28
+
29
+ Figure: Visualizing the matrix Fisher distribution on $SO(3)$, with panels (a) $p(R e_1 \mid F)$, (b) $p(R e_2 \mid F)$, (c) $p(R e_3 \mid F)$, and (d) a compact visualization. We follow the convention of prior work and recreate their figures to explain the approach. For these plots the parameter matrix is $F=\text{diag}(5, 5, 5)$. Let $e_1, e_2$ and $e_3$ be the standard basis of $\mathbb{R}^3$, shown by the black axes. (a) shows the probability distribution of $R e_1$ when $R \sim \mathcal{M}(F)$; the pdf shown on the sphere corresponds to where the $x$-axis is transformed to after applying $R \sim \mathcal{M}(F)$. (b) and (c) are the same as (a) but for $e_2$ and $e_3$ instead of $e_1$. (d) is a compact visualization of (a), (b) and (c), obtained by summing the three marginal distributions and displaying them on the 3D sphere. All four plots use the same scale and a jet colormap.
45
+
46
+ Also not immediately apparent is how the shape of the distribution varies with $F$. It is known that the mode of the distribution can be computed from the singular value decomposition $F = U S V^T$, where the singular values are sorted in descending order, by setting
47
+
48
+ $$\hat{R} = U \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \text{det}(U V) \end{pmatrix} V^T$$
54
+
55
+ The latter operation ensures $\hat{R}$ is a proper rotation matrix. Similar results are available in the literature. The figure below displays examples of the distribution for simple $F$ matrices. These figures show that larger singular values correspond to more peaked distributions. To further understand how the shape of the distribution relates to $F$, please consult section 3 of the supplementary material.
56
+
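As a rough illustration of this SVD-based mode computation, here is a minimal NumPy sketch (not code from the paper):

```python
import numpy as np

def matrix_fisher_mode(F):
    """Mode of a matrix Fisher distribution with parameter F (3x3).

    Uses the SVD F = U S V^T (singular values in descending order) and
    corrects the last axis by det(UV) so the result is a proper rotation.
    """
    U, _, Vt = np.linalg.svd(F)
    d = np.linalg.det(U) * np.linalg.det(Vt)  # equals det(UV)
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# For F = diag(5, 5, 5) the mode is the identity rotation.
print(np.round(matrix_fisher_mode(np.diag([5.0, 5.0, 5.0])), 3))
```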
57
+ Figure: Visualization of the matrix Fisher distribution for simple $F$ matrices: (a) $\text{diag}(5, 5, 5)$, (b) $\text{diag}(20, 20, 20)$, (c) $\text{diag}(25, 5, 1)$, (d) $A\,\text{diag}(25, 5, 1)$. (a) For a spherical $F$ the mode of the distribution is the identity; the distribution for each axis is circular and identical. (b) Here the axis distributions are more peaked than in (a) as the singular values are larger. (c) The distributions for the $y$- and $z$-axes are more elongated than for the $x$-axis as the first singular value dominates. (d) $A$ is the rotation matrix obtained by rotating around the $z$-axis by $-\pi/6$, and thus the mode rotation is $A$, shown by the red axes; the shape of the axis distributions remains as in (c).
72
+
73
+ Finally, the deceptively simple form of the pdf above hides the fact that its normalizing constant, $a(F)$, is rather complicated and non-trivial to compute accurately and efficiently. Explicitly, the normalizing constant is
74
+
75
+ $$a(F) = \int_{R \in SO(3)} \exp(\text{tr}(F^T R))\, dR = {_1}F_1\left(\tfrac{1}{2}, 2, \Lambda_1\right)$$
76
+
77
+ where ${}_1F_1(\cdot, \cdot, \cdot)$ is the generalized hypergeometric function of a matrix argument and
78
+
79
+ $$\Lambda_1 = \text{diag}(s_1-s_2-s'_3,\; s_2-s_1-s'_3,\; s'_3-s_1-s_2,\; s_1+s_2+s'_3)$$
80
+
81
+ Since this matrix is not positive semidefinite, we use an extension of the generalized hypergeometric function to define it for all real matrices.
+ Here $s_1, s_2, s_3$ are the singular values of $F = USV^T$ and $s'_3 = s_3\,\text{det}(UV)$. The expression above can be derived using known results from the directional statistics literature.
83
+
84
+ Assume we have a labelled training example $(x, R_x)$, where $x$ is the input and $R_x \in SO(3)$ its ground-truth 3D rotation matrix. To train a neural network that estimates $F_x$ for input $x$, it is necessary to define a loss function measuring the compatibility between $F_x$ and $R_x$. As the pdf above has support on all of $SO(3)$, we use the negative log-likelihood of $R_x$ given $F_x$ as the loss:
85
+
86
+ $$\mathcal{L}(F_x, R_x) = -\log(p(R_x \mid F_x)) = \log(a(F_x)) - \text{tr}(F_x^T R_x)$$
87
+
88
+ This loss has several desirable properties: it is Lipschitz continuous, convex, and has Lipschitz continuous gradients, which makes it suitable for optimization. See section 4 of the supplementary material for proofs.
89
+
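A minimal PyTorch-style sketch of this negative log-likelihood, assuming a `log_normalizer` callable that returns $\log a(F)$ per sample (the approximation of this quantity is discussed below; the function name is illustrative):

```python
import torch

def matrix_fisher_nll(F, R, log_normalizer):
    """L(F, R) = log a(F) - tr(F^T R), averaged over the batch.

    F: (B, 3, 3) unconstrained network outputs.
    R: (B, 3, 3) ground-truth rotation matrices.
    log_normalizer: callable mapping F to log a(F) per sample (a stand-in
        for the approximation described in the text).
    """
    trace_term = torch.einsum("bij,bij->b", F, R)  # tr(F^T R) for each sample
    return (log_normalizer(F) - trace_term).mean()
```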
90
+ In practice, the loss above has an equilibrium far from the origin, which we believe led to instability in some experiments. To alleviate this problem we used a regularizing term that was 5\% larger than what is analytically correct, moving the equilibrium closer to the origin.
91
+
92
+ The normalizing constant $a(F)$ is non-trivial to evaluate accurately and quickly. To allow computationally feasible training we fit a simple function to approximate ${_1}F_1\left({1}/{2}, 2, \cdot \right)$, where the input has the form of $\Lambda_1$, and also to its derivative with respect to the inputs $s_1, s_2, s'_3$. The derivative approximations are used for back-prop training, while the value approximation is used to visualize the loss during training. We used existing software to compute the value ${_1}F_1\left({1}/{2}, 2, \Lambda_1\right)$ and its numerical gradients for multiple values of $\Lambda_1$. Via a process of manual inspection and curve fitting of simple functions we created an approximation of this function. The exact details are given in section 5 of the supplementary material.
94
+
95
+ Though our approach is perhaps far from optimal, it does make training and testing computationally feasible, and our results indicate that the approximation is sufficiently accurate to allow us to train a powerful network. Some preliminary experiments also suggest that the results are not very sensitive to the accuracy of the approximation, though a more theoretically well-founded approach, such as adapting existing approximation schemes or using the relevant analytical derivatives, would be more satisfying.
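One way such a fitted approximation could be plugged into back-propagation is a custom autograd function that returns the approximated value in the forward pass and the separately approximated partial derivatives in the backward pass. The sketch below is purely illustrative: `approx_log_c` and `approx_dlog_c` are hypothetical stand-ins for the curve-fitted functions, not the actual fits.

```python
import torch

def approx_log_c(s1, s2, s3p):
    # Hypothetical placeholder for the fitted approximation of
    # log 1F1(1/2, 2, Lambda_1) as a function of s1, s2, s3'.
    return s1 + s2 + s3p - torch.log1p(s1 + s2 + s3p)

def approx_dlog_c(s1, s2, s3p):
    # Hypothetical placeholder for the fitted partial derivatives.
    g = 1.0 - 1.0 / (1.0 + s1 + s2 + s3p)
    return g, g, g

class LogNormalizer(torch.autograd.Function):
    """log a(F) as a function of the (signed) singular values s1, s2, s3'."""

    @staticmethod
    def forward(ctx, s1, s2, s3p):
        ctx.save_for_backward(s1, s2, s3p)
        return approx_log_c(s1, s2, s3p)          # fitted value approximation

    @staticmethod
    def backward(ctx, grad_out):
        s1, s2, s3p = ctx.saved_tensors
        d1, d2, d3 = approx_dlog_c(s1, s2, s3p)   # fitted derivative approximations
        return grad_out * d1, grad_out * d2, grad_out * d3
```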
2009.09439/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2020-05-25T13:37:54.498Z" agent="5.0 (X11)" etag="H7SETqrAf_6kOdu_fe02" version="13.1.3" type="google"><diagram id="Q_jAOWGsWikgjgDF14HS" name="Page-1">5ZxtV+I6EIB/DR/hNOkrH3lRr15Xveq9u+sXT2kjVEvDhrCiv34TmlKaBincllf3nLWdhmmZJ5NkZlJremc4vSDuaPAN+yisQc2f1vRuDULLdtj/XPARC3TbiAV9EvixCKSCh+ATCaEmpJPAR+NMQ4pxSINRVujhKEIezchcQvB7ttkLDrN3Hbl9lBM8eG6Yl34PfDqIpY6ppfK/UNAfJHcGmrgydJPGQsV44Pr4PRbN2uhnNb1DMKbx0XDaQSG3XWKXWNH5kqvzByMookU+cPN+0RsQhJ/Iz88747+nyTed1IWW3244EV9YPCz9SCzAnnvED71Jj/1qvw8Cih5Grsdl74w5kw3oMGRngB328CTykX/dmwtc761PuPR2QsMgQkLuu+TtlqkJKO8ZWkMzs0I4k/KW+S+aPDUiFE0XROKLXyA8RJR8sCbJVUtQEb0QCFtp7ylTXYgGizgTmq7oRv256tTS7EAYew3D6wdg+HHsjcAsB8LcpxMIRh4CVECwqmJgrGZAYqsK061A8BKEYQeHmMw+q7fa/F9JtjOtrO3MvO2AwnaG+f9tZ54/wuH4zbrSOmf2c9h6bT891YFq5GAGNe6faXyQsyX7qjRrsDEl+A0lJosw76MZKwqRGwb9iHsCMyBi8jY3XMAG6Za4MAx8P1zmJClDrSwYThaGlYdhKGDAEjqyGoaqJ3MGt8cPA2jmzmCgu5Zff8T/4mh4iS7x7T+t+lXdXsKiZnZGg2CvcbzgiD6IhzLKcBSZjdZsmIUm3sroAKjAY4Xc+H7wmx326axfckqt1HviFuyOmUZ7CbEMbg6UuFmFZurKqEGVU0nWR5Hf4gt+dtYLsff2OAiiLA3W4jwIF85EuGF8ZTXkZwKEvM0WbGIqbJLICApdGvzOhhUqQ4k73OGAPUk6zOnNLBJbsvUYT4iHxKcWwwBZka19rYi6pI9oTtGM2/xrb44y8fYTRtlki+7FH6scsnP32xXZAsHkumRfJ8PRAtoUNJyNri6h4tw+APBspqsGvAV3C141p+5pIKUD2JAXJcVCKd1olBBMqQ1YIBuwxwbcYiyqNl/zgMyn27n+p1oUqwwou3VpBjQKzMl7Y0Ar1/+Sp91V/zNU458I+WgQ+qhmt+9rdveZ/WYDcBuw472OA8vA5MiQnBykrYblxrIcycvRo4CO5DGOtuswPPFFRcbqhJzEMLIrdti0FGC26ybWEjD3JwUmN0s7u+aiyoDEWSmetctwsH5NcHKhHhdxWqyBM5qm17KZLjZR8XolszeDfC2mKj57caB2myAPRwzlxKMBjjj/tAvMM2PxYyxJjWUJrVg9xJ0mKbfOJLw0FfWv0Qu3M0wlbUwpHgpZJeChCRQ+ac2FmRChKvpmgQhrSWjtu+PBfNm2YOTsem5ZPoWbFU0D+oOTaxggOf/JWzZMaIrz7lSgnZ18LJzcIRIwI3Cn/to742i3wDJrT6J5Qy5nynXK4uG7NAsYxcJ3Rtv9WGg24g3GXzywBrL3kTcMrHqubHt2ED9BqbkEUzX/bGucGxHkB7sa4/qhOx6LD2fHu5Lq7/mxTRU5mdpWRzarQOpj83Twl7PCnowjOswFtKApp5qKjiW6oa9WVnE60Cqwr+LkkKZZixKQ5pVVjVQVrp0WUiM3eOqbe6klR+QKZRUjBaqJtjSkX+7b2xek0noIAHtDnMCRFTU0UBXNCe6aUdR8Qld+1yPUGkT0rMhWtuOGCZswy2DTYloOZnXFNCVJ5ca6kxpqDbmOzcbGJPZbl6ahW4qR1oDN9AduF+/Je6qMF1rWxmwlRWbDtqqiqenOVKt3iWeGBr44a3U63g/lLu5tBas+8rCPyFFn4xx5R4uqVKrqq2XEq9Ofd+PP5lvn4RX79KYFP5+6bpEXJvalUirvGYdJCWwLdVKl7eBq2x3dOw9NabDTVRWeqt56UFI4hDdPSrC8DrMbJplFCxY9S3npR2n6ZSXo7rHX06C0GwAq9lxVVX9WklDlM+K5M8hNwskWjtslWzjmc25w/DvD5W0d0NxeXVQJstIsxiHsJp4vn9Pl9KZpKRYVrVJV3npaSfPkt/nrIBccbZ43XqmqYpqgUpwHkcqQ3hTUNa3hGJvxlHXNX9LZFk1nLZoeL2cGXo6loKcfIL253rXR2dLAKq8wy6rwyw+slVuxV0d1BcK64+4XwNzUpaV+AZrb6RfJfartF+uNF0c5+tsNKUc5f1t5/dE/rws0G4Ze1Rxwdm0514Dafep8v3zEV89tgOpgecIz82Ks5Q55+BH1xqPZuZa06pGFRsp3alXaCn50ZdBVPJgiaBx8ur2ZKh46CQ9jes12zexyXROKxwWy7mvEVra0WABFc0VlZDtfo6v3y79vmjcD8np5PjF+3XuXRbKdx+3CuiXtmWOLaGCmxSNn08W5Y6+ltzy/VnJebwo/Rs7Jn6VaqBJmeDjlcF6ld2PO7DT9U1Jx8/TvcelnfwA=</diagram></mxfile>
2009.09439/main_diagram/main_diagram.pdf ADDED
Binary file (19 kB).
 
2009.09439/paper_text/intro_method.md ADDED
@@ -0,0 +1,40 @@
1
+ # Method
2
+
3
+ The network φ encodes the full NORB input, a 96 × 96 pixel grayscale image, to lower-dimensional representations. The system as a whole receives the image from the current viewpoint, the image of the goal viewpoint, and a one-hot encoding of the taken action. The image inputs are converted from integers ranging from 0 to 255 to floating-point numbers ranging from 0 to 1.
4
+
5
+ We use the same architecture for the φ network in all of our experiments, except for varying the output dimension; see Table 1.
6
+
7
+ The decoder network D has the architecture listed in Table 2. It is designed to approximately invert each operation in the original φ network.
8
+
10
+
11
+ Table 1. Representation network architecture.
12
+
13
+ | Layer | Filters / Units | Kernel size | Strides | Output shape | Activation |
14
+ |---------------|-----------------|-------------|---------|--------------|------------|
15
+ | Input | | | | (96, 96, 1) | |
16
+ | Convolutional | 64 | 5 × 5 | 2 × 2 | (45, 45, 64) | ReLU |
17
+ | Max-pooling | | | 2 × 2 | (22, 22, 64) | |
18
+ | Convolutional | 128 | 5 × 5 | 2 × 2 | (9, 9, 128) | ReLU |
19
+ | Flatten | | | | (10368) | |
20
+ | Dense | 600 | | | (600) | ReLU |
21
+ | Dense | #Features | | | (# Features) | Linear |
22
+
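As a rough PyTorch rendering of Table 1 (illustrative only; exact spatial sizes depend on padding and rounding conventions, so the flattened width is inferred lazily rather than hard-coded to 10368):

```python
import torch
import torch.nn as nn

class RepresentationNet(nn.Module):
    """Sketch of the representation network phi from Table 1."""

    def __init__(self, n_features: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, stride=2), nn.ReLU(),    # ~45x45x64
            nn.MaxPool2d(2),                                         # ~22x22x64
            nn.Conv2d(64, 128, kernel_size=5, stride=2), nn.ReLU(),  # ~9x9x128
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.LazyLinear(600), nn.ReLU(),   # Dense 600, ReLU
            nn.Linear(600, n_features),      # Dense #Features, linear
        )

    def forward(self, x):  # x: (B, 1, 96, 96), pixel values scaled to [0, 1]
        return self.head(self.features(x))

phi = RepresentationNet(n_features=32)  # output dimension varies per experiment
z = phi(torch.rand(4, 1, 96, 96))       # (4, 32)
```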
23
+ Table 2. Regularizing decoder architecture. The upsampling layer uses linear interpolation, BN stands for Batch Normalization and CT stands for Convolutional Transpose.
24
+
25
+ | Layer | Filters / Units | Kernel Size | Strides | Output shape | Activation |
26
+ |------------|-----------------|-------------|---------|---------------|------------|
27
+ | Input | | | | (# Features) | |
28
+ | Dense | 512 | | | (512) | ReLU |
29
+ | BN | | | | (512) | |
30
+ | Dense | 12800 | | | (12800) | ReLU |
31
+ | BN | | | | (12800) | |
32
+ | Reshape | | | | (10, 10, 128) | |
33
+ | CT | 128 | 5 × 5 | 2 × 2 | (23, 23, 128) | ReLU |
34
+ | Upsampling | | | 2 × 2 | (46, 46, 128) | |
35
+ | BN | | | | (46, 46, 128) | |
36
+ | CT | 64 | 5 × 5 | 2 × 2 | (95, 95, 64) | ReLU |
37
+ | BN | | | | (95, 95, 64) | |
38
+ | CT | 1 | 2 × 2 | 1 × 1 | (96, 96, 1) | Sigmoid |
39
+
40
+ The predictor network f is a two-stream dense neural network. Each stream consists of a dense layer with a rectified linear unit (ReLU) activation followed by a batch normalization (BatchNorm) layer. The outputs of these streams are then concatenated and passed through 3 dense layers with ReLU activations, each one followed by a BatchNorm, and then an output dense layer, see Table 3.
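A hedged sketch of the two-stream predictor f described above; since Table 3 is not reproduced in this excerpt, the layer widths and input/output sizes below are placeholders:

```python
import torch
import torch.nn as nn

class Predictor(nn.Module):
    """Two dense+ReLU+BatchNorm streams, concatenated, followed by three
    dense+ReLU+BatchNorm blocks and a linear output layer (widths illustrative)."""

    def __init__(self, in_a: int, in_b: int, hidden: int = 256, out_dim: int = 32):
        super().__init__()
        self.stream_a = nn.Sequential(nn.Linear(in_a, hidden), nn.ReLU(), nn.BatchNorm1d(hidden))
        self.stream_b = nn.Sequential(nn.Linear(in_b, hidden), nn.ReLU(), nn.BatchNorm1d(hidden))
        self.trunk = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.BatchNorm1d(hidden),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.BatchNorm1d(hidden),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.BatchNorm1d(hidden),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, a, b):
        return self.trunk(torch.cat([self.stream_a(a), self.stream_b(b)], dim=-1))
```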
2010.11230/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="drawio.corp.amazon.com" modified="2020-08-29T23:13:16.781Z" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.2 Safari/605.1.15" etag="w2CUjEKIbqSLubFucIVt" version="12.4.8" type="device"><diagram id="HwEtgdRo_6S1wOfiDaqq" name="Page-1">7Vttc5s4EP41zPQ+hEHIYPgYJ2k6N22uTXLX5L50VCPb3GHEgRzb/fUngTAvUvwWDHYSZyaGRVrgeXZXq5WswYvp4jpG0eQL8XCgmYa30OClZpqOabP/XLDMBL2+mQnGse9lIlAI7vxfWAgNIZ35Hk4qDSkhAfWjqnBIwhAPaUWG4pjMq81GJKjeNUJjLAnuhiiQpd99j07Ea1lGIf+E/fEkvzMwxJUpyhsLQTJBHpmXRPBKgxcxITQ7mi4ucMCxy3HJ+n185urqwWIc0m06LEl4i5Lv0Pk2d0fgxnsC9z/OQC9T84SCmXhj8bR0mUMQk1noYa4FaHAwn/gU30VoyK/OGedMNqHTQFz2UDJZtU1oTP7FFyQgcaoKuumHXRF3xTHFi2ffB6xQYtaFyRTTeMmaiA4W6OvCuIRtmTbQrUwyL7hyTV285KRMVE4LEgYyXt2gwJAdCBh3gdQ8WUj7Rg1PQ0Kzl+PWOJhDcO3+ZfxxDqJvvYfHq3t4/vvsDEhYfo3xE5Pcz+IwYd+aNfgwo1QzmX4jxkn0GzvUdc26lEBnuNAqsijwxyE7DvCIX+HY+cz1z4V46nse7zxgev1f6GeqyGDnEfFDmr6qNeB3YppmlCRZ8OKKRySkH9HUDziS9/4U80e9wXP2/5ZMUbiiS+JGweCzdDk1uvp9iS4AFXQdjC3Z8m845u9sKdiCPdm5QK9NtqDE1j2Kx5iyWJMxxr4ktt4aS0DBUqs+JQ/Qn7+8bDxpB8gVImuiU18FJDxUcIISbNhjuZ44JTGdkDEJUXBVSAcFsNxGizafCYkEnP9gSpciceUmWwUbL3z6UDp+5KpYlpKdXS6E5vRkmZ+E7H0fyielXvy06Jae5f22IBaHeY4N19GckFk8xGvAFGkWTQPGuvFbDAkc6bVWE+MAUf+pmnU3bgPWK3EmVVhq1Zns1wKkYhRuNyrJk4P3qLRvVOpvGZXcowpKfTkP2yHPGjKUcLwu01I5XtWCOEWlKSVG9k/bbs8n69NOR/ZJ1azzcNOY3vH4pHnyTulsmyoYTXtl2vU8jtGy1EDMMwrNX7mgZI1ubYSwQM2gMo2Fea0ebX+Lc95DQG1Yhh2HAPd4QsCmCFB205fQ1bDjSZ7VA241zrs19rIQIXqVK9l1RQZYryiLSZKiprzVlbz1+vbPU8h+IazPyR1FwbzV/DeP+qcPZudTCRtIsB1tzMqzliJReazkKeqsZedYx5uIHs6LkhiQrz1uymLs48hielJO7Rw+i8lB2m7ly9jCkf0gKCUlI4v/aYpFMDv9yGlM9skNQTxFeyGCjXi61a+FXMUizYGihHJ58tVXQVXzlpo37hkDzJ1iQFflhN3Wn1v3wk68TrmM0+rYfIJuVx2bwfGOzbCzsflljiovwL47ak+1hadVV83T+lNw1ROa+ktMW6bu1DjcoQCwjboDlwFs59UaSi0275uK7R/T81C9OaZb7ViwZdRm+3mw3mC7jRmbXCfRTDvgFWHPf6pYof3fjO8pTUk8yzbQnLMGwIwWxUX+5KJKXAhFIbnUyB6L7yDXuM+tci0rRQyCTFdVPxOnL5NL694VBH6UPFfPLhfGkyjbCzzyF9zX9jB5qbbexHhn12wIqAa7Vuvctrwj4phykPan7JKfdz5zkKvNf0sUvWRtaK+NeMJTwJZriVUuG/Ak16qyBNtbL1JWVeRdep8w8ngoXONOzZSym/O4Z1jYaVN+pysMSmrkCCeoWRfp3gA1be4hUxIj7yHTdb3b0NYEzFYNZqjygENtIVYC3ZczqRbmKZ3OOTYXfJufIdRDn+PqTlXJtvNbq753+nAL3EqD6Wbzw1s3GFhXsa+5SIoObC6KQuaAeMvus9TSSOoY/K+ZEG/W53GKDYuH+iWPmgB1jpO8WgYsNqg6GzmwmuGAnRa/0s18pvipM7z6Hw==</diagram></mxfile>
2010.11230/main_diagram/main_diagram.pdf ADDED
Binary file (40.7 kB).
 
2010.11230/paper_text/intro_method.md ADDED
@@ -0,0 +1,128 @@
1
+ # Introduction
2
+
3
+ Nowadays, automated conversational agents such as Alexa, Siri, Google Assistant, and Cortana are widespread and play an important role in many different aspects of our lives. Their applications vary from storytelling and education for children to assisting the elderly and disabled with their daily activities. Any successful conversational agent should be able to communicate in different languages and accents, understand the conversation context, analyze the query paraphrases, and route requests to the various skills available for handling the user's request [@ram2018conversational].
4
+
5
+ In such a large-scale system with many components, it is crucial to understand if the human user is satisfied with the automated agent's response and actions. In other words, it is desirable to know if the agent is communicating properly and providing the service that is expected by the user. In the literature, this is referred to as targeted turn-level satisfaction, as we are only interested in the user's satisfaction for a certain conversation turn given the context of the conversation, and not the overall satisfaction for the whole conversation [@park2020large]. Perhaps the most basic use of a user satisfaction model would be to monitor the performance of an agent and to detect defects as a first step towards fixing issues and improving the system. By anticipating user dissatisfaction for a certain turn in a conversation, an agent would be able to ask the user to repeat the request or provide more information, improving the final experience. Also, a powerful user satisfaction model can be used as a ranking or scoring measure to select the most satisfying response among a set of candidates and hence guide the conversation.
6
+
7
+ The problem of user satisfaction modeling has recently attracted significant research attention [@jiang2015automatic; @bodigutla2019multi; @park2020large; @pragst2017recurrent; @rach2017interaction]. These methods either rely on annotated datasets providing ground-truth labels to train and evaluate [@bodigutla2019multi] or rely on ad hoc or human-engineered metrics that do not necessarily model the true user satisfaction [@jiang2015automatic]. Access to reliable annotations for building satisfaction models has been very challenging, partly because a large-scale conversation system supports many different devices as well as voice, language, and application components, providing access to a wide variety of skills. The traditional approach of collecting samples from the live system traffic and tasking human annotators to label them would not be scalable, due to the cost of annotations as well as the turn-around time required to collect and annotate data for a new skill or feature. Note that onboarding new skills in a timely manner is crucial to ensure active skill developer engagement.
8
+
9
+ To address this problem, we propose a novel training objective and transfer learning scheme that significantly improves not only the data efficiency but also the model generalization to unseen skills. In summary, we make the following contributions:
10
+
11
+ - We propose a contrastive self-supervised training objective that can leverage virtually any unlabeled conversation data to learn user-agent interactions.
12
+
13
+ - We show that the proposed method can be used to pre-train state-of-the-art deep language models and that the acquired knowledge is transferable to user satisfaction prediction.
14
+
15
+ - We suggest a novel and scalable few-shot transfer learning approach that improves label efficiency even further.
16
+
17
+ - We conduct extensive experiments using data from a large-scale commercial conversational system, demonstrating significant improvements to label efficiency and generalization.
18
+
19
+ # Method
20
+
21
+ In this paper, we consider the conversational interaction between a human user and an automated agent. Each interaction consists of a set of turns in which the user provides an utterance and the agent provides appropriate responses. Turns that happen within a certain time window are grouped into a conversation session. Formally, we can represent a session as a set of turns: $$\begin{equation}
22
+ S_i = \{ (U_i^{t=0},R_i^{t=0}), \dots, (U_i^{t=T},R_i^{t=T}) \}
23
+ \end{equation}$$ Here, $S_i$ represents session $i$ consisting of a set of turns as tuples of utterance and responses, $(U_i^t,R_i^t)$, for the first turn $t=0$ to the last turn $t=T$ in that session.
24
+
25
+ In the context of turn-level user satisfaction modeling, we are interested in the classification of a certain targeted turn within a session as either satisfying (SAT) or dissatisfying (DSAT). Note that the satisfaction here is defined based on the agent's response given a certain utterance and the context (i.e., other session turns). We use the notation $Y_i^{t^{*}} \in \{\text{SAT},\text{DSAT}\}$ to indicate the user satisfaction for the targeted turn $t=t^{*}$ of session $i$. See Figure [2](#fig:sat_example){reference-type="ref" reference="fig:sat_example"} for examples of SAT/DSAT interactions.
26
+
27
+ <figure id="fig:sat_example">
+ <figcaption>A few examples of <span style="color: ForestGreen">SAT</span> and <span style="color: BrickRed">DSAT</span> turns to illustrate the importance of the conversation context.</figcaption>
+ </figure>
33
+
34
+ In this study, we use real-world data from Alexa, a large-scale commercial conversational agent. Specifically, we use a dataset of about 891,000 real-world conversation sessions in which a certain turn within each session is annotated by a human annotator as SAT or DSAT. Human annotators had access to the session context and followed a standard labeling protocol (further information is provided in Appendix [7](#sec:appendix_annotation){reference-type="ref" reference="sec:appendix_annotation"}). As a preprocessing step, we limited turns within each session to a window of five turns: at most two turns before the targeted turn, the targeted turn, and at most two turns after the targeted turn. This labeled dataset is denoted as $\mathbb{D}_{sup}$.
35
+
36
+ In addition to $\mathbb{D}_{sup}$, we also use a large pool of real-world session data without any annotation or label. This dataset is about twice the size of $\mathbb{D}_{sup}$, but as we are not limited to targeted turns, we keep all session turns and decide context windows based on a randomized data augmentation step. The resulting effective sample size is significantly larger than $\mathbb{D}_{sup}$. We denote this unlabeled dataset as $\mathbb{D}_{unsup}$. As both datasets were sampled from real traffic, we ensured that there is no overlap between $\mathbb{D}_{unsup}$ and the evaluation splits of $\mathbb{D}_{sup}$.
37
+
38
+ The conversations cover a wide variety of internally developed (1p) and third-party (3p) developer skills. Due to the imbalanced traffic, there is a huge variation in the number of samples for different skills in our datasets. For instance, 1p skills such as music or weather have hundreds of thousands of samples, while many 3p skills have fewer than 10 samples throughout our datasets. To properly evaluate the performance of our predictors on such imbalanced data, we propose a novel approach to splitting and evaluating the data. We build two test sets: one measuring in-domain performance and another measuring out-of-domain generalization. The in-domain test set consists of samples from skills that the train set covers. The out-of-domain test set measures the performance on skills that are not covered by the train set. Ideally, we would like to observe good classification performance in both test splits, indicating the ability of our models to learn and model the current major traffic and to generalize to less frequent or future traffic. Based on this, we split $\mathbb{D}_{sup}$ into 70% train, 15% validation, and the rest for testing (about $1/5$ of test samples are out-of-domain and $4/5$ are in-domain). The in-domain and out-of-domain test sets consist of 17 and 275 skills, respectively. $\mathbb{D}_{unsup}$ is randomly split into 80% train and the rest for validation, regardless of skill. Table [1](#tab:dataset_stats){reference-type="ref" reference="tab:dataset_stats"} presents a summary of dataset statistics for $\mathbb{D}_{sup}$.
39
+
40
+ ::: {#tab:dataset_stats}
+ | **Property**                 | **Size**          |
+ |------------------------------|-------------------|
+ | Total number of samples      | $\approx 891,000$ |
+ | Total number of 1p skills    | $> 20$            |
+ | Total number of 3p skills    | $> 1500$          |
+ | Ratio of SAT to DSAT samples | $> 20$            |
+
+ : Dataset statistics for $\mathbb{D}_{sup}$
+ :::
50
+
51
+ Figure [1](#fig:arch){reference-type="ref" reference="fig:arch"} shows a high-level drawing of the network architecture used in our experiments. It consists of a language model (LM) that encodes utterance and response pairs to vector representations. Here, we consider up to T turns before and after the targeted turn. To further summarize the list of the previous or next turns, we use GRU layers [@chung2014empirical]. Then, an average pool is used to produce a representation vector, $\bm{z}$, for each session. Note that before the pooling, simple non-linear MLPs are used to transform each partial representation. Finally, $\bm{z}$ is used as an input to a set of different head networks, responsible for making predictions for different objectives.
52
+
53
+ Regarding the LM, we use the standard BERT encoder [@devlin2018bert] architecture, pre-trained as suggested by @liu2019roberta. To make a fixed-length representation of the utterance-response pairs, i.e., the turn semantics, we apply an average pool over the BERT token representations at the last encoder layer. We also tried other approaches, such as using the classification token instead of pooling, but based on our initial results simple pooling performed consistently better.
54
+
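As an illustration of this masked average pooling over the last-layer token representations, here is a minimal sketch using the Hugging Face transformers library (the checkpoint name is illustrative, not the model used in the paper):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # illustrative checkpoint
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode_turn(utterance: str, response: str) -> torch.Tensor:
    """Fixed-length turn representation: mean-pool the last-layer token states
    of the utterance/response pair, masking out padding tokens."""
    batch = tokenizer(utterance, response, return_tensors="pt", truncation=True)
    hidden = encoder(**batch).last_hidden_state          # (1, T, 768)
    mask = batch["attention_mask"].unsqueeze(-1)          # (1, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (1, 768)
```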
55
+ We share our BERT-based LM parameters across the network to encode the session turns. However, we train separate GRU networks to summarize the previous and next turns. The output dimension of the LM is $768$, the size of the standard BERT hidden layer. The hidden layer and output size of our GRUs are $256$, and we use 2-layer bi-directional GRUs. Each head is a simple MLP with a single hidden layer of size $256$ followed by a ReLU nonlinearity. The final network consists of about $117.7$ million parameters, of which about $110$ million are in BERT and the rest in the GRUs, heads, etc.
56
+
57
+ As a baseline approach, we use the network defined in Section [3.3](#sec:arch){reference-type="ref" reference="sec:arch"} with a binary classification head to distinguish SAT and DSAT samples. Here, we use labels provided by $\mathbb{D}_{sup}$ and a binary cross-entropy (BCE) loss function. An Adam optimizer [@kingma2014adam] with a batch size of $512$ is used to train the network for $10$ epochs. The base learning rate for all non-BERT layers is set to $10^{-3}$, while for BERT layers we use a smaller learning rate of $5 \times 10^{-5}$. The learning rates are decayed by a factor of $5$ twice, at $60\%$ and $80\%$ of total iterations. Unless indicated otherwise, we use a similar training setup for the other experiments presented in this paper.
58
+
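A sketch of this optimizer setup in PyTorch (decaying by a factor of 5 corresponds to gamma = 0.2; the code assumes the BERT submodule is registered under the attribute name `bert`, which is an illustrative choice):

```python
import torch

def build_optimizer(model, total_iterations: int):
    """Adam with a smaller learning rate for BERT parameters, decayed by a
    factor of 5 at 60% and 80% of the total iterations."""
    bert_params = [p for n, p in model.named_parameters() if n.startswith("bert")]
    other_params = [p for n, p in model.named_parameters() if not n.startswith("bert")]
    optimizer = torch.optim.Adam([
        {"params": bert_params, "lr": 5e-5},
        {"params": other_params, "lr": 1e-3},
    ])
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer,
        milestones=[int(0.6 * total_iterations), int(0.8 * total_iterations)],
        gamma=0.2,  # divide both learning rates by 5
    )
    return optimizer, scheduler
```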
59
+ We define a self-supervised objective in which the model is tasked to distinguish real sessions from unreal (or noisy) sessions. Any unlabeled dataset, such as $\mathbb{D}_{unsup}$ can be used to sample real sessions. To generate unreal textual information, different approaches have been suggested in the literature such as back-translation [@fang2020cert], generative modeling [@liu2020self], or even random word substitutions.
60
+
61
+ In this work, we leverage the multi-turn and structured nature of sessions to generate noise samples by simply shuffling the targeted utterances/responses within each training batch (see Figure [3](#fig:nce_example){reference-type="ref" reference="fig:nce_example"} for an example). Intuitively, the noise samples are sessions in which the targeted utterance or response does not belong to the rest of the session. Therefore, the model has to capture the joint distribution of the context and targeted turns. Algorithm [\[alg:nce\]](#alg:nce){reference-type="ref" reference="alg:nce"} shows an overview of the sample generation and training process for the proposed contrastive objective.
62
+
63
+ <figure id="fig:nce_example">
+ <figcaption>A toy example demonstrating the generation of unreal samples from a batch of two real samples. Session context is omitted for brevity.</figcaption>
+ </figure>
69
+
70
+ :::: algorithm
71
+ ::: algorithmic
72
+ Input: $\mathbb{D}_{unsup}$, $h_{\theta}$ (model w/ contrastive head)
+ $X \leftarrow GetBatch(\mathbb{D}_{unsup})$; $batchsize \leftarrow length(X)$; $y \leftarrow ones(batchsize)$
+ $X_n \leftarrow clone(X)$; $shuffle(X_n[`targeted\_utterance`])$; $shuffle(X_n[`targeted\_response`])$; $y_n \leftarrow zeros(batchsize)$
+ $p \leftarrow h_{\theta}([X;X_n])$; $loss \leftarrow BCE(p, [y;y_n])$
+ Backprop $loss$; update $\theta$
73
+ :::
74
+ ::::
75
+
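In plain Python, the batch-level shuffling in the algorithm above amounts to something like the following sketch (field names are illustrative):

```python
import random

def make_contrastive_batch(batch):
    """Build noise sessions by shuffling the targeted utterances and responses
    across a batch of real sessions; real sessions get label 1, noise 0."""
    noise = [dict(session) for session in batch]          # shallow copies
    utterances = [s["targeted_utterance"] for s in noise]
    responses = [s["targeted_response"] for s in noise]
    random.shuffle(utterances)
    random.shuffle(responses)
    for session, u, r in zip(noise, utterances, responses):
        session["targeted_utterance"] = u
        session["targeted_response"] = r
    samples = list(batch) + noise
    labels = [1] * len(batch) + [0] * len(noise)
    return samples, labels
```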
76
+ The objective introduced in Section [4.1](#sec:nce){reference-type="ref" reference="sec:nce"} cannot be used directly as a user satisfaction model. One approach to leveraging the pool of unsupervised data is to pre-train the model on unlabeled data using the self-supervised objective, and then attach a classifier head and finetune the network to distinguish SAT and DSAT samples. In our implementation, we pre-train using the self-supervised objective on $\mathbb{D}_{unsup}$ for $10$ epochs, then train a classifier head on $\mathbb{D}_{sup}$ for another $10$ epochs, adjusting the learning rates for the network body to $0.1\times$ the base learning rates (see Section [3.4](#sec:sup){reference-type="ref" reference="sec:sup"} for more information on the learning rate setup).
77
+
78
+ In the pretraining approach, we solely relied on the loose semantic relationship between the self-supervised task and user satisfaction modeling. However, it is desirable to have a representation that not only solves the self-supervised task but is also useful for the final objective. In other words, we have a source task ($S$) for which we have a large number of training samples and a target task ($T$), with a limited number of samples, that is our main interest. The idea is to use information from the target task during the source training such that the trained model is most compatible with the target.
79
+
80
+ Let us assume we have datasets $\mathbb{D}_S$ and $\mathbb{D}_T$ corresponding to the source ($S$) and target ($T$) tasks as well as inference functions for each task: $f_S(.|\theta,\omega_S)$ and $f_T(.|\theta,\omega_T)$. In this notation, $\theta$ represents shared network parameters (i.e., the body in our architecture) and $\omega$ represents task-specific parameters (i.e., a head in our architecture). Formally, when optimizing for task $S$, we are interested in: $$\begin{equation}
81
+ \argmin_{\theta,\omega_S} \mathbb{E}_{\bm{x},y \sim \mathbb{D}_S} [ L_S (f_S (\bm{x}|\theta,\omega_S), y)] \;,
82
+ \end{equation}$$
83
+
84
+ where $L_S$ is the loss function for the source task. A simple gradient descent step to solve this problem can be written as: $$\begin{equation}
85
+ \begin{aligned}
86
+ \theta^{t+1} \longleftarrow \theta^{t} - \eta \nabla_{\theta} \mathbb{E}(L_S(f_S (\bm{x}|\theta^t,\omega_S^t), y)) \;,\\
87
+ \omega_S^{t+1} \longleftarrow \omega_S^{t} - \eta \nabla_{\omega_S} \mathbb{E}(L_S(f_S (\bm{x}|\theta^t,\omega_S^t), y)) \;.
88
+ \end{aligned}
89
+ \end{equation}$$
90
+
91
+ However, we are interested in optimization steps that do not increase the loss value for task $T$: $$\begin{equation}
92
+ \label{eq:not_increase}
93
+ \begin{aligned}
94
+ \mathbb{E}_{\bm{x},y \sim \mathbb{D}_T} [ L_{T} (f_T (\bm{x}|\theta^{t+1},\omega_T^{t+1}), y)] \leq \\
95
+ \mathbb{E}_{\bm{x},y \sim \mathbb{D}_T} [ L_{T} (f_T (\bm{x}|\theta^{t},\omega_T^{t}), y)] \;.
96
+ \end{aligned}
97
+ \end{equation}$$
98
+
99
+ Considering [\[eq:not_increase\]](#eq:not_increase){reference-type="eqref" reference="eq:not_increase"} as an optimization constraint can potentially halt the optimization, because improvements to the source objective do not directly translate to improvements to the target task. In other words, the constraint above may not always be directly satisfiable using gradient steps in the source domain.
100
+
101
+ To overcome this issue, instead of using gradient descent, we define the problem as a Randomized Block Coordinate Descent (RBCD) [@nesterov2012efficiency; @wright2015coordinate] optimization. At each RBCD iteration, only a subset of model parameters, i.e., a block denoted $\bm{b}$, is sampled from a distribution $\mathcal{B}$ and used for the gradient descent update[^3]: $$\begin{equation}
102
+ \begin{aligned}
103
+ \bm{b} \sim \mathcal{B} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \;,\\
104
+ \theta^{t+1}_{\bm{b}} \longleftarrow \theta^{t}_{\bm{b}} - \eta \nabla_{\bm{b}} \mathbb{E}(L_S(f_S (\bm{x}|\theta^t,\omega_S^t), y)) \;.
105
+ \end{aligned}
106
+ \end{equation}$$ Note that we only use the RBCD optimization for the network body parameters ($\theta$), while the head parameters ($\omega_S$ and $\omega_T$) are optimized using a regular gradient descent optimization.
107
+
108
+ In this work, we propose the idea of adjusting the block selection distribution, $\mathcal{B}$, such that parameters having more aligned source and target gradients have more chance of being selected: $$\begin{equation}
109
+ \label{eq:block_sel}
110
+ \begin{aligned}
111
+ \mathcal{B}: Pr(i \in \bm{b}) \propto \langle \nabla_{i,S}L_S, \nabla_{i,T}L_T \rangle \;,
+ \end{aligned}
116
+ \end{equation}$$ where the inputs to $L_S$ and $L_T$ are omitted for brevity. Intuitively, [\[eq:block_sel\]](#eq:block_sel){reference-type="eqref" reference="eq:block_sel"} is used to discourage parameter updates that are not aligned with the $T$ task, which can be viewed as a soft method to enforce the constraint in [\[eq:not_increase\]](#eq:not_increase){reference-type="eqref" reference="eq:not_increase"}. There are multiple options for the granularity of the block selection, such as layer-wise, neuron-wise, or element-wise. Based on our initial experiments, we found that defining the blocks layer-wise results in the best performance.
117
+
118
+ Algorithm [\[alg:joint\]](#alg:joint){reference-type="ref" reference="alg:joint"} shows an outline of the proposed method. At each iteration within the training loop, we back-propagate the $S$ and $T$ losses and store the gradients of the layer parameters. For parameters related to the $S$ head, we follow a simple gradient descent update. For body parameters, we only update the parameters if the inner product of the $S$ and $T$ gradients is positive, or at random with a small probability $\alpha$. To guarantee the convergence of the source task, we thus allow all parameters to be selected at each step with at least a very small probability $\alpha$. In our experiments, we consider $\alpha$ as a hyperparameter taking values in $\{0.001, 0.005, 0.01, 0.05, 0.1\}$. Additional care is required when updating the $T$ head layer parameters, as $\mathbb{D}_T$ is usually much smaller than $\mathbb{D}_S$ and the $T$ head is prone to overfitting. We use a validation set from task $T$ to detect overfitting of the $T$ head and early stop its updates. Note that a hyperparameter $\lambda$ is used to set the frequency of the $T$ head updates after the early stopping. Having less frequent head updates allows the $T$ head to gradually improve and adapt to the changes in the body without overfitting. In our experiments, we search for proper $\lambda$ values in $\{0.001, 0.002, 0.005, 0.01\}$.
119
+
120
+ :::: algorithm
121
+ ::: algorithmic
122
+ Input: $\mathbb{D}_S$, $\mathbb{D}_T$, $f_S$, $f_T$, $\alpha$ (random selection rate), $\lambda$ ($T$ head update rate)
+ $(\bm{x}_S,y_S) \sim \mathbb{D}_S$; $(\bm{x}_T,y_T) \sim \mathbb{D}_T$
+ `// compute & store gradients`
+ $loss_S \leftarrow L_S(f_S(\bm{x}_S, y_S))$; Backprop and store $loss_S$
+ $loss_T \leftarrow L_T(f_T(\bm{x}_T, y_T))$; Backprop and store $loss_T$
+ `// Layer-Wise RBCD update`
+ $\mathcal{P} \leftarrow \mathcal{P} - \eta \nabla_{\mathcal{P}} loss_S$ (for $S$ head parameters)
+ $sim \leftarrow \langle \nabla_{\mathcal{P}} loss_S, \nabla_{\mathcal{P}} loss_T \rangle$
+ $\mathcal{P} \leftarrow \mathcal{P} - \eta \nabla_{\mathcal{P}} loss_S$ (for body parameters, if $sim > 0$ or with probability $\alpha$)
+ `// if T head parameter`
+ $\mathcal{P} \leftarrow \mathcal{P} - \eta \nabla_{\mathcal{P}} loss_T$
+ Validate, update $NotEarlyStopped$
123
+ :::
124
+ ::::
125
+
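A hedged sketch of the layer-wise selective update for the shared body parameters in one training step (per-parameter-tensor granularity stands in for "layer-wise", and gradient bookkeeping is simplified):

```python
import random
import torch

def rbcd_body_update(body_params, grads_S, grads_T, lr: float, alpha: float):
    """Apply the source-task gradient to each body tensor only if the source
    and target gradients for that tensor are aligned (positive inner product),
    or with a small probability alpha regardless.

    body_params, grads_S, grads_T: parallel lists of tensors; grads_S/grads_T
    hold the stored gradients of the source/target losses for each tensor.
    """
    with torch.no_grad():
        for param, g_s, g_t in zip(body_params, grads_S, grads_T):
            aligned = torch.dot(g_s.flatten(), g_t.flatten()) > 0
            if aligned or random.random() < alpha:
                param -= lr * g_s
```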
126
+ In contrast to other works in the literature, which mostly leverage the alignment of concatenated gradients [@lopez2017gradient; @luo2020n], we propose layer-wise similarity measurements, providing more granularity and adaptability. Also, the suggested approach does not require any inner-loop optimization process or gradient projection and hence is scalable to large-scale problems. The only computational and memory overhead is to store the model gradients with respect to each task and to compute inner products between the per-layer gradients.
127
+
128
+ The method explained in this section is general to few-shot transfer learning and joint training settings where a large source dataset is used to learn representations that are most useful for a final target task. For our use case, we apply the suggested approach with the source task, $S$, being the self-supervised contrastive objective and the target task, $T$, being user satisfaction prediction. In our experiments, after the joint training process, we reinitialize the $T$ head and finetune the network for the $T$ task. We found this helpful for achieving the best results, as the jointly trained $T$ head is often slightly overfit.
2102.11938/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2021-02-05T01:23:13.618Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36" etag="cbAqK556py8eeBKikdGu" version="14.2.9" type="google"><diagram id="OjJePIGKD2qkCcPY8AbB" name="Page-1">7H3XsqvYtejX9KNdJBEeyUEggkACvZFBASQRxdffOdBaa+8+9nHdqnuOy9fWrurVAqQZRg5zDP4gxdusPuN7ZbVZfv2DwLL5D1L6gyBwhqPR/+DO632Hpr5ulM86+/rSrxv7esm/bmJfd4c6y7s/fbFv22tf3/98M22bJk/7P92Ln892+vPXivb651nvcZn/zY19Gl//9u6xzvrqfZfdYL/ua3ldVt8z49jXk1v8/eWvG10VZ+302y1S/oMUn23bvz/dZjG/AvC+4fL+nfLfPP1Z2DNv+v+bHzCJtjnwB1PKJMe0jOlc3R9/+UJG17++N5xnaP9fl+2zr9qybeKr/Ouu8GyHJsthVAxd/fqO2bZ3dBNHN89537++kBkPfYtuVf3t+vU0n+s+/O1zBEP9dfN1Jc1fI68Xr++Lpn++wt8vfvsVXP762Xr1/bu/hdIX4Lp2eKZfey5M4XgX8b8EDldygmdj5vCXv3xTW/ws8/4ffY95fxEA99sMX0hQ8/aWowWhLzzza9zX458JK/6iz/Lne79QiD58YfHvY/QfLXuMr8PXTH8Q9BVtQChaBILfcU0/hvb7wV+6FVs8+gK+uc+/HqJPJfxfbJsRPTTjV/78HhEt8D3o+yt/Q0u/KAVQPVV1n+/v8Qr1CYmLP1NFUV+vYnttn+tvyRzPNjmD7nf9s73kvz3haIaM6X+E2zF/9vn8D5Hx/fSbXb/EE7H5GmL6xez4NwdXvzE6jf0v4Y/4O/j7L2BFkuQOH+vbKrwE2G6NRJYZJ/nVabu6r9sGPU/avm9v6AtXeCDE6aVcMfIbLIv1329j8Ne6hN/2wMtC3N3fQrWoZ8CjsE7Jf9/Fvu+gz1ncx4h+3peEcm/KPwixPgi2N2FbtWx59G+3Dyo5KNGnBP6ohMhH6/3JcCT0f2tPXCUXFwwXs8pAM8bT7dqdXJ7f67fDdcqCgL3EpZ0fD4IR67yxd29/EEIt8rpw131XLytdCdyroauyfNuJgaku5nYsj1feE8KDcsCCIHS7MmYPudKFuTJ1j9fzQdUlcdi9biXNbF7kfVo4kiEbEtBICNfnzbcnqcUL4dqd50CjBDaOZNapt/LOp/BzcKAFQeB5VyAsZsPruSPPQSgZizChEcieSSf8MZHJyT+GzwaN6KDbvXfRPbcVeLlMN/Krcu+8k+9ZPjcV3qhZvrRqNe37u+DrhztmEZfO4vW0D5otWcnVoYmf1fZ5sqKFOrKay7/OKRo4PE12SUzL9Skbwu65P+eT/Rpwt72429KjonQrSEt14s+6i5ZgLhuAXfQotUbaohvd3eF2m+FULgl6otizgA/7Ohk8tw62xxnuzcUmORdMwTr2kZemIoqPxuZkTo8TXpMqc/Xbq2eV3NQv0cuojpOOZjjh8mMXa2fFVF5deTn7m4d5FaYoxXiZ2TIXBE4hu6Tps8TM1ynesmge69L6i25Zh/JCh4rD7/vQCmvBq8TDI9kV+p1NZKkRwyZ9eouKflC9+gOaC/2nNMo1qQwtfqNPPVpu0iGQC/TeeEX8/bLl5ai/ir7m7mIl2fq6H533tMoT3tEYFl20Q+E21j1rW9fucnbPHmEV+w4N4Ffs4c4dk02tw8jdQVyq+VhvHhcxVbe3u8ScdpNSaMm03QR7/2rwZCTvG9UDOrjo1V6pbtfXzN1lrPb6sS+5Y6Doo3Yazq9p9B+Dc8EOG3n/2EhHd/GOqpjVFJpoKPlhr+Oco9yPspG3tec4L1t16syRyzkXY4tX2ohXrqbQlPWlFLBzPNwHmjrtboawTKU3X/hZmTCsVU+PfevR/nQrxe2E5KfgejI/2yInO/tWrA5IJAjxQXwMyUMwnUPbhnfVbErL9ji59BfqqfjOpdzQ+yl2Wtl/wfdZUVW7nNk+5oJtqUDY9td9qTGmiQnnvRGlnhvV7tzYtQXMqgqDxOYSoEfotLFMdjOp71qkWoTW1yotEXnWOLhlUW1untEVr+RoxaJ0Q9JECY1+3vTpiZoY916leVlmPM3xr6Xzd6UAOBLHZ0XJmtvNeZA+D/ODNBDLKaeOFRaHegyn2IlaRogDq3U9vzHp7JEPNb0cWu8Qek+A9rn2D0u3fabTSRlhUhv9eTwtl3ygTwKeusdIG6aAqzyGnwle7UTTnia8fWXzRD9yjd85h+3YusrI5ncqfcjZy0dDsGwSAImeDsGwy5mMQZpAwXVa4HsFwwtKH5TdxULf2O8TTRbqaUcWlWoac2BpbPJKnWwSMAl3aPN0v1IzjphQdJxOwa+z7vLCMpIcGtBFsld4bQ2Oz0qTw5UHPucnmVvO2RbjbRPnW8MN49ueohEjDJScF9udVe6PkxBQkw6UWkrFLFh8KUUY3+35/T0Stl0318NpuUXCLQaqifngqtvXSDxEPBdL5yP/JHiFKF+niRcmxWDlTSooqZAUFYD0Ii+Y7OS8Q8u8W+nhXi5PZX1oS45q75SuTheZv0rtXaEMITRcU/e2sSGfaq3Q84MuJLpKRFKgJywiAmGjnl/C7aIlN4OUc/piAhVLr1rFrvwZ5w3clfMyMfaS91ANWnYxdUNslYd8pdUtJj5wpaftHa1omKa1ByYQi5bBoklzJ+MW74aj0p31UYlu10ipkcF8E65H8XCsl0NpFmI4tjMe7cgo37x0i+SPTJmEqbjPK36cHkWZFgKSRoKALCJhu0N/TI1NR6pIJg1BFnuVMjsZzxLEL9+yU3MXdjc5ahG973eVQ8gBw/f0zWf1k25IW+N8cQs0iBTf5MbWLepm1upg2NRVq21Srlgvl9vrZd/oV+Ek1Ptw3vPxLeiNB9aE+yjTm2Af3jW8VpKLT+pU7mLd3Uv9LXVJXlahX9naeWlkULh3roI17etZ2RtX4RiYJ0+9VeFWwILx0QpXT4tLHr+c7lIDRNy355Pr49IYKKDqafsc0c75cd7ZHm0t/iNDNBru4g3D2PmmrZYoVO9x6F9NX+lFAtOZfdIrGWb3hKBdZqedC1fhnCdtFS0iYuE8ZsKNnsjDZriY2ZA8D0OLLE7SKwBvjBNQ+d3B6tF9cEXShpn0xB9FC9xZLVzR0Cns5ckVxTPkinF35KOztNd3aUke1f205KI1HXbS5aFK0amuMFOS061TWamtSfpcIRNO4F+Cvoyq0UXGXGfeBamBl50H0cYbMtlul/is3XYShWvqK018JG930nwnLUm98RtHW1RrQCRyC29AA6qKBpV28k71q4seVVcqZnfe9SHfptqqLwtxvL7wfC9t0w5bJD/M
jOv9ApZHZnGLNR9fKeHvl+wYNYv50NSSxLVXGx3PJ4zjhGTzqDc1eUNaUwiW5ZwnzRJ2vSlitam6x8eLsI+hl1g1IZmh1R3wMu+G1yu6lsrxcusO1zLrm6dq3U7zUbt12/40H1oQc0N4vPSZQSOYJdkJ3dm1GLH3E23n3wemv3pJfyOOjPtMDn2XIzuD6cJ4Yexnn/RXn9k1RDT2B+5RdOTSUvbDpcCmkNyNIwVYuvc2+PlytDlqpppBsV0Mj5sjHbULdmr6xh5YfLzVyIAWnvVmbxCPMN5Q+oNGRuo23OV0X7JnybEyo8KPgPcXrbLEjsKT9jBwRLFUS+74E4VE6oTsecHqhMHb3cxTrVhn59kzQ55vEhKsHiczpo1Re/N+l3gjc6VyvWK3Z2KwDYObFrKxxxPjXF9Xu2uxuMJbYkNZyaLmI1ni+RN/ZaN2RFMwGbUjrxH6lPvUwTmnG7yf9c3ZJLYFV4dUNZ5tzidBgnGMQ9AgAhKmKEA9bQb0Z7UNwXd4FecmSk3vwcWFL/FGxxedqJb6hj/cBJWSQcBogXZ0i1dJPHhtShJhGqTdFLDUcp/SpUnsepZIWWv08S6SIVLSypDYtBeIE6Y/3a4XScsSp3sik9EYiglrL0jyC6OYiwgkgn7zFE0vb15w0kz9gX69mFophhUxnqjFue2u5/Ol8NpqW+MXl6u12LhecvIsh9q5ujGF/cpUp4viYPZN/Xyd97QzemOjSinO9Nk8a8pM0U/NiahC9VmkdJWrK+po48gnlHfUpaqDbo803sbY3/zj3cEvve3TG7O4t/vbYethZ4GdizGhUvp6e5zMYHP29sfGjDdPPxePsn+4MK3vZFsTA3juejTfuDdOr/jkp48k08QXefNBlUjYqdJfwVnFTtrqrKE/yh09SV7R2L3ixWP5w1PQWWWKzMNUMx6v1mKv+k9d7OrHqyFc7fRibGaWtRqZemQl9iedvJ3kzW1xhqxPbstw7RcArrFnm6JBjK3oC9i4B0OKYwYDbPrmNrOb3kKmTXx8nR9XYGY8s5mTor389ClbB9kLbUEu5tKhOSxo6HFU2OioVBkImE3MPl0Om4jbns6BogT5BZSoWgYWVh5YH86sptvTM7G7O4UXXMIRp+UEuioq08VEc9N4c+hePbnLZ/PVs3SzgEtRwEKHOQ8VDSTMcwgY/L415eft6i+sM5gMCCozC5rmznlg5RR4DaStsOUxQ6Jecyq6PRYvyrue2DF7vUbTeVF7U3oyFFsWDxKJjc0LH1TC4DYGvqeu5j0YRtPt4/4Ziu1221KD9urSLjkwwvG6xakxNF+vs31oX8/wSW+H4ME8HvpjiK+bviefd7rlnrM6DIc2Cxh6mp/4NWLJuE9t4pql2lPtT8lT5Iq3gyA3r4Aldxevi7cz/tiS6X43R4OjzXckMu7k5gZm2oskjjOgMT85R9pL7GeiJgueb/Bd2LM4S2wMulc2J5IJkNoZFS2WmVs1puo56U421ZHTgdiGKSi3V3i+EgPTkjHJ3LQxzqrnyGQ3jdgVx72zt7f2fMeaFN+pKdGDLipYmBivEmCUBihaGjn8NWlPngu0s8sOduByB23bgr8peVRGnbvayTzGdpzbKyO1J0M41flA0Y7NbQh9eMr0yakMcC4y6j7RhfQsOV1eiltBNMSxSPxnU8TMPO6KTZFw4Pkl8bgkQ89GplPlsTlWWUIgnteYNkCk224yyTRApk3P+4tRGxsHwnnue28MMZojSUHKc61dgFwS80gdkXct3BhnwGwqIE8h+jbBXJ1ix/QJi76kMPQyMeOZckhkmIDJmRaG9AIdcpAm1rmc59h2mv3RoSP22eAl9wjHB1t4/IE8h/Mx9ZvVULku5Ei/GA7Em+jcqsw2YTPAG8UCjHJiZueEjZs9zAmYfmKv0T9c+/wcPgtiHkOkKcBjvw9cT/ijn+Fj0XFv1hDuaElBU/Tg517O+LFowCHCC52hiy1YISPDFfO9WPAT8pLsuT1UEV8QmFAa1Ja3LWAVDWk9qWke3lDbp+EhFlOD3ycFT7Gk5E8IOsSQUizf8chY4s73yRVbpw/8ctIetN9EfUB4m/xWdcFRMmZGHN4bEQ5zxF+UloeN+yPc4RKCW2leiT3L4uqI9+jTY7nnD6/1mnLDigfNP3hSidAADrPgLnFxdj2O2wcQpKlvh2h/PO18l3/AtcjzMu+1vODClQx/wHLyut/uCCq/L3l++rpGHyviwgv8z3Ob//5nrH/XR5fv55utyvMpXOzhT76FSd3v7wnDlpd+v+7L9aMQYDtjD7eV3Tm9Ha5Js2sT0gWjyo5369pDQT+GFvpydEWXR8NVjKD2YZ0CfrsGR82L/GCiI4LrzZtySQjjajZVn34t1tFKOlfxKTkesGgvUMlxHtLlTjl7ozqpXH1CRhK67k+hV5kvvcylO5WEAhYvWH2UcVeXd2N6C8rdnsLMM187ZVvqyPNNb97NvhqyV/+M+TqpEaffKizTeNp8cWRGpkO2WEOCfFtzkTe2fxktSR+ts97D781wt0kar8pFfEhf1vfYv8a/eNeU2L1iWM+RG3TNuJzO9yq6cXgmYbX+6/tf//2s5Zqr1xvaY5tp3mTX7Jg0yNI9emN0C4YVVsSBio/RaO2pCe0biK/+bSRpXkewr8I9Vw/dUfbuCIJDSlzH5ExhO4lnv0agA/VKnY5z5KnX/nTcLLBS588rY09hdUcrP6TE4eWi2WGsTNr4EWlc19+oHOaI3Nech+UUGlJC4DAe9t+M5cZhdU2UqxwfcaCbe0KAmNxJFvVrpK/VB39a/fZ7NPQ9wBntqYcFreSeat7XKCsspGn8wTL6Ffo2ZQNlqfg1U8G4E0tlx93WuGgaB55zWClO/2EUAqhUBFKXku9ryX9ZjzKFL+1E+Sq7B4/i8yVbiP5AexeNtYv4vhP3tXyvLOT/WfpQPQPdNONDnEHATVvMJ3I9EmSy7BX3WF+Z553QWbrbgOiMwuPj0muiNBxON8sTt+or3HupFpU3i/PD4LwYDZs3zpCHHNvdzXMBgy6OASKw2sMI1Wk3MERO7JBuZchpLzEghTXwlPXqdJYfhSCVwMelt5Xzc30oJjLsEtIk2dEZFwV0oLPfBN0MkU6IHkFspuGScXkriApd1xAMQEMqZzDJs2P7VF5+ZhLpLUUDI67jyxFZM7PG5Ux6cp4jRhVj4cFKdg1O0+C2JzVDz/0MqqYtIvJObpEu6gSzcJ5Hq574dSBFdCYGLxzqzo7IxXeeOG2T5ILRTEUzyMYs+z5CWj1Mpewcw+hVA8YGq1qdeaLup9eDl0UYyZZOL4g0K5fXWzbLp3wstHykNOpwoBnuYGmbYwVzCiBeXX37HCcrCUSQkKID12Wh6JMO47UhJglX8NYqyYX9iqWH7nCxQHlAG655FV81xZ5tjHfhB2dMFGtKq/bOupqtwPLuISzaiEC/RUQn57wbhuC06B4P4xnu2NrSduF4dQXEjj+3tkAH0lS7aIay0BVFWTZq9ljXJ2itrCiTU1/Sdb+BSfNl3UGgau5gPDQoKI1y6AU2QvhH915
V5qb2oYl5a10ULShuKmGtVsk/UNhygp9nPrKcboxNFXlvJyrL1fcQKIN2SKLYkr0rv5eIoHQkhWdMjNlytxsr61nZ4hB17ZU5C5A9KQ6qtzflGn1dljZ8+QAlys+9u8JM5/gnpmw2bJvTLqtfAC5qjXZmPvuRIQ6gxJUlH0jq1Z+jDHyIMRwpiwr6U1WJ3grXHuCYjTRhYGjPZiCKDyaORVwteRHprSAId+MGuZaAo4NY11T8sHEVILhrj4Em9BuCY2A3B3Eoqe1jh98A40J7RPjtT2AVMu4FCLMvEUnsrg9T3iOCUDFByDJ4/IwagF4PFNPfWzOIYR+E4HkpPG6SpkS03QfbrVk9WiaIABdEdXJTlWwSskO4qIItLZWPljxGQFxqlbiWijdgmPawz2BLIOB1xLHtPKC9xJUVnATu6usMwVbEAbbEreyR2cCXSQS0Ao+zIQdqwd22HdTHufdKhAo/Vq7ClOTZmAN1K+6ztcXHOTsBntxY6XmkfjMIHGMGAoX7jBBQl1Ncz/IH4h+I/79DvEYKUsSV0bRedRxe5Ue8IzTs2BU2VSMzDFk++CnrYJegclQMcmid9Bg2O1Ae1M54mhSphrc9UhVvHQWadnDGAasBhuPQWRkrXkR9QlgQtHk8Pfm7x1lUtyua9o485ceZUk8g0iCWfHYYYksgiIIrTu5A1m9A1nviHhkjGn+oiAHH3CAg7dJTt8IV8JGOd0LD7/zkYj2jURBSm8p20HCI9QtHweNl5TohIa7LAe8ew3ZIu7LAFGW3atyRPk7I6hZvGze8p6T5kEp1ZMWjxqW8G5BPSJmQ+IsoKftp42R8Av2y8NYr7GWADMQS+bkYWlHcLrgAK087pDFefpd1uNu1g92e+0HU+RL529prNNAMBZ/qkpiXlB4vmW8L/IT2pCuyc5FKgte3TF9hUmXoCi4gE/006I6A6Pw50ITLGguvwcoEYj/Gyomf/CAZ08MiK8Jkx15ZXVvEA6/+lT2eItr7AfOsqp4D15IP6v0107Pgplp4ifWOBbppnrfoUclAYTq2i1IvkIRzag53189FsaRI5JBN0wXtQ+nIYCldNGb2Ej1p5qn+IoxTQDQQ5AEPbFPPA8m0PoEPTnQL56iexkYiif08XRD/FK6l4QNZbSkRchj9kByJniCb106NE3Aszc3SDlRnlvF9Luwm9RlSVvlJQTQ4UL2ksZfU0tFnGtl/OMJ1Cffb43ljUmopjm7YNK/aqYazL+gU8LYN4b40LSNLycthAB4ZT0eTDXc0U5y3xK4iNfAcIaFfLtXM3jDW1TXkoLkH0hCpwmh+/yy+AAsnicy8YZUGZN9GYNNs6gVxbGIx53C3P5aU8dgSzHaD7FGKFPTZx5AZ8qBvOxvfN2gD2xf14uf00tr8VuyGlnMRfJ321R/k1Vhoh+Y1W9gDGdu3bsrE0fERVpCxHddTxR7Q5yILZXnSGyOWSv/y3Dvu3tbR53Or3URdkUD6uJi5UXW5EgQ/lbfPwXAEFWFgY1R8FACWo/OIZ4G5Nc8PnTmVsAVzyLBFielS159OoC0KUL7+sBGvKOJvtA/7YYZCxcgnonyVKwpi7sJAE3fFpHYOjRyBaghAKgQyB/aOs7roCJNbJLvMS+gc3a3BP8bRH0xOesVg9fbUhY2TjbZ7R65GfXScAcJtAtUPDpghmFLqBsCHTzmPf1zEPdZ5yJy1ottlw6HfNjuOU57aMuhjRhY3dsE7/Q4/EFzuhIgQLOTjhCQCgWFCveNBcisgIWizMVeTtbqAfOxaUl2Fpyi6OuLYYNynBtikutIB1/S9HJ3f1vxeELy0vtPLaowZILtNc1c9JJDdSENt0ZJNLLSPq71o6odAk07ezZpWg10uPSTZ7nlH7Vdr0UQqp2aSmJ/V1VuoL+h6oyz4ZbUVX8DxxyA5p/JqEOvygZ/isLWDYbVHiVXfvc7Wqu/cz94/e/8P3PsZU/oZswdySFim5h4YgxxtfKtESPQnr6m/BdxSmW9v3L9teyFJ1f4Zc6R9odOZQCpvIbN81kMF92suVemhJC+nFTSCgKYXpm34tN/+lEh8e08dS5pv+AnTXteQwhmJ4pwcGfa4o5cfPxkSMZzNIHXDLEgxrl4sPzlIl90YVpJn/r0uC4nJUggU3FivBwvk9Gwt6PP7FyXSHqqrBdf1Wp/Ab1UXln87hRIEa/SE3x0PX9dodrjjTJa/uqoWAh6ySXjFC9edWBIF16tnaH15hhJ4z/xByPbR6l2fV+/aR3h032PoW6kULgJurNe1ZSI6msH9/bVMpO/USTpc3ff1CPpieW8UTQCRn22CNnqY9fUaFCC6M67LXPELW0XLdMPgtl5TE7KqHHmxfrYqIQ0NSUCIw76jDKyKx2OBvm5SAUSL/J5IlIfWRydQNPLZlK526HUbLdJ9u615A7f3kkJ2V4VWn5gYBmauBJeFr8PWOAY6EVLxfSOGen+cMDXSDS2aRD2O++M1hxT4/mmqR5NaRrHmFzj3kpsbw66mbPSXkeV391OosURCMxjBFMTtzqyWSM/2d3VgRJps3uEN4JxJetvRBReDVTJwRv5Wh+cH5E+dgmFOzvjsuSSqU7F5Wq936IdDKHnWxblZqWdfC6475A2hgq+CfIQc4Rc/GZP/5rg9P8lHiCwp7zgNJljuYNBWui4EWRCVSNFOcFr9idKj+dIML6Rir7SILF++yjmns96jyRxPae1rKb9kDbJi2z5rch7GklrRrVKfjTYrQciuKSoLGIzl7KyjnTGhzEfDFrNVMsW8KtQ4XVDgCSGPz3gJk+M1jfAOEfW6Jnn0YrHraI4rC1uqoTbTGllpy0A2mztX1e+xmlbiueM+lV8rbLHSEpQn5Canr9iOL5t1ftbecZhaiNzBPuPqiVpHyxHp4pvd7Lcr7SNvVA9Hn5HeowmOawu01VHGKpOUO09tzfBElu+YDn/RQxm8JnvljEoWpyrlnNbyqa+YS6Q9X37ZrFGlYymUUZ+vmc11vFZMy1TCTpvJ/UKYMGF5tzj8O6ol8PvBgHB/ukrYra4I4oYuoug93olGEuVwaQTrC2gS78GJNOcNNsmVZ5Fq2nlZ5bFeHmXz3HLeLf+KaYl8n/tpQK/8hrm6qKyJ3Gl+j+djUpmfTXuN/SCUubZY40RGraPZpSEqszOfLyuQPb7TQ8enpdW0fcf0VIFmL9SbMpWWj1QzvMMhx3csiW+QBc6pefrFEliZ7rSntUz/3nTO8Bhyuxjk/2gcCCligWUzbUPgHHlgTgRxDg81h2uj9Kap0r3JmzLOeWK3pKGAveVGDAedRod4aAJW9GQSkkQXHno6SsjLV0z2byBMTDsNxI4n3LiAfe92MjUe6/j+fIaEIjKmY+bQJ8mRY0b8PZUA+fDUPvPiF2+e0ajzY9HbFQIK8CZdb/bU+ngG9rFeyER4R04VRAUPAh6v/LA+7uDxqlUFxbX5x3H4shD0Fz9Zx+47qhwIsjvo6PEXd+hIRlhx99q+rZkAuKMdkHkAjy0g38V6lPN2xQN6XMb9+hjWsT5m4LG+bgL5v+X2+hy+tLuF6E
l5osfvsDjvIYmBHndfkLMQl6pPq34/PgBtPsyrWa5cCuSTLtvW2q8SBD2O1GcMj1d6WKlrffymrgNPifeYqfQvYEr8HKPHb4vClT2RqtHjD6w/sP7A+p8AayNSkyYjIdGnweHszkFGm3DMih6O1RHjjRmfKoGJ9BBukdUs0F+rkIvrVrhHWkLu5lSTFu5+JsDmoMtMq154xhV5PGchJMXuxprkM1e993dwNO1MsIe93XVxqG9oZ2V8FI4zpGF6sNaHfjeQff/MuYZmCkeHW05oy9TbMkE2D4vgQpLYxvK+rHo008Bv9WYV3OpqcqPNJq/09s7+gF2tk/xO9r8M7AJRy66fkUm+//IcIjkpkaZfNb5RIVtIuhHUzXiPKMJ3di6yzt6UOiF9Iy1M6n0Z7Mg3QL6Lrj3E1d6wI2SHnTfZTa/X8YSyRFQgyRIurBQ1Woa4m1l7kb0vp8KVkW9UG1T5zsldZEGFCHJzqvV1RD7Sxd2EiJp9f1+bRP5cFP5llr+8K/QLim/rdLW7I4kSylM/Gq/oneVD+pU/pzzmWW8QgqmFqIcg5dny1hHRKCUCIrLzvjQxsg3sJKmji/41P/oOziNrZd3yAvac4/Sz/OU4IUhQfFTmrb7SzKZCdolzJaJa/wFhyluuE6grpbPTRSe1ibF+A+GAQNgApSAFDDlDV06eNALiO8sIQCR5Qfa//KyCRyDhOBuh8MurQt8vxdqY3iB3SlW4EWxj1MY6IhoDgVDy5Dd/TA0l8UuS+jLkQla/TUUgBKtntaQisAk3/WDU7jcKJV4KJGSprjLDAjt+jev/hkTkO9YCXZZvPxxsUsh9IySW30AUJn7mOfftfS6ie04LqfsNiCICYp2/c7naBuh6vCMgrjgGICIkyp7z78sHNH8aNnAujSTqtEiYANmONN6DvOG6O16MYf0ImSwydw9TeXE2ea6WzWVOjofLBg/7tx2nBcL44Jiz87Dp19nYgMM5hfnmRVzt4Tm5DpSJcBr6McTKwRwU1mokHM6NLffTEzJmN6LvN17VpEdIRIX9SHNQH9gnOGA9NrK3edqOxKaCAIG9CTcvplyPKTLsRseXDUvj1422qPuCzzbL9Qxn7CD6e7N3LFQ+LXAQWUGuw3F44mAcsxys5r0LcJG5I8mwu7TojeEdBsAJJhsJbbLJkWFSBtZAYzPFFpDQcpqFyZcTLC+PK8dpzvjT31CsHV7PEWGfnxCwGcEnaznY6YvmITIdIneBm8ljsHEMOOUnbkg7x+44noselmlMIU2dQmzSR5gNmnoOE5zuC6zY+1xOppv5clrPc/H74GB7240Y6TpUKv6PFHv+BWf+S7UnRv9ttScORcV/U++J4/9bBZ/kp+DzU/D5Kfj8FHx+Cj4/BZ+fgs9Pween4PNT8Pkp+PwUfH4KPj8Fn5+Cz0/B56fg81Pw+Sn4/BR8fgo+PwWfn4LPT8Hnp+DzU/D5KT/8ty8//ED8nw3xT8Hnp+DzU/D5x6fg81Pw+e9b/PbZ+2fv/2l7/xR8fgo+PwWf/yaFcJ+Cz0/B56fg81Pw+SmM+9csjPvA+gPrf0dYfwo+/4UL3T4Fn5+Cz0/B56fg81Pw+eeCTxr/c8En/i9Q8Mn+L72h1RaQgvPRY092PHkv73ze1+3d9+DJ8/uLsDMEIjTO94c/GNFG//3fvtcVgb//88tb//yS1qZt8v/yRtevW/FXtWmKkJo//04Z6q3OsvUtw3/vbbF/fvPw/wiBbIi/4uSfaISk/pZGGOqvzN+hEeJ/i0S+ae/fvCj4AhaiEb2LgsUAv12XT1Hwpyj4UxT8KQr+FAV/ioI/RcGfouBPUfCnKPhTFPwpCv4UBX+Kgj9FwZ+i4E9R8Kco+E9FwaL/+FNRsHjk/2FRsL88hvBTEvwvWhKM2/6/bEnwmZ/+aSXBvW3WKfXflQQLJv9dEix339fyJWemcQ/kv5exn5Lg4xSRN08Q1eiQ1HthvkzRRReNSq75wLgbhNkjUbzGORMmfg4VZ0347tzf8SfIpSLREnjGcRSkiN4nm95pfakEi1eMM0mf3mnhNVWcs+LUXRX967CqwNcy3+n3SjLeh5nXRJ5gM+XXZzgsKLiuX6uB9b5Ed0U+1XTt12zbqtT4bRtN3s+4+t63se85Slfkj3IwVdfz9xwIC81u+pmjlPlWN2Zxq/5aK59rF6n6tSO31CpFRm73r3F1g8x/5tAFfituU/dVlT/julK94aevOXhRCGSeqg6S+mvtpvjysJ85RN0Vp/omW5f4Z1xZZoafOWS+VBB89JM7/6xdF6a0TL/mQPt7qEgePqqzIP+MWxkW8zMHgDTyfFE9Wj/jimJK/swh8q5Ylo2sI5C6P+NSuWv/zFF6Ip/LgVtdzj/jIpBm1M8cK0jvs7RVf8YVeNu8SN9z8LriIvgogRX9WntnGtr+ew5Xr3hT3Ebuy2t/xnX9gfqZAyE2kGWqukq/1l5qwkv4OigO+2sjYRIfCLE/4+7ly9L/zAEgrRRLP0XTz7i6saQ/c6yIFVO3RiD9XjuvVRue+plDkHWB8jxR3e9+xhXFjCx/0RsCad3IVhv9jItoYXB/440Dgo9+cOfy19qn7pJ9eOXDKx9e+fDKh1c+vPLhlQ+v/O/xCj/tB2ksSCrj4vYFoY5m4cC+NxvrIIjObLnSmfMtOOQLGJj4TpyGi5Jt2cufZpIjQ9xmHtZvJMp88al0Hzc1i2aQpgeCVH73w2m6pnIl2JDhPLMWHFG0qRDBmbyRbV0mnVztdtnlvpEAZ7cJOS0Y4WoXp+RdreJ48eHBUW1b3JaaSLdDNor+9NSNV84VBwy21wiR6z8KaDrEq+izKcZejStwIFPkfVmmT5fXRUymUr8vRTCeLH7tPqPInmA7rDY3U82aCCfhleZxKOWxBQ3RI9ZS0FvIwAUNQeNARMe1QJAyB+AIT9y5B1ne4BB03UFNT7etxVS5EuxCUBWi7/hpXSrRE6J9wFPPxOtJ7r5zpQonNucFuV3TiOTKrWEU5ynOSOY8RmnWDEwofUT7OzvLiyxI+c64n88kLZZwjlJ3/crqMTqzjJeY3q/DXXoZfAsrsqEMzs3z4ohmW5I5I7GYd+VOoDLvlbg0VyN8HG++59wtvvQpcXo0zvM+Yw6Ccr9j7MJHJNMFSJI5s+A3OVYiOOYNZEygeEtsEb/l2yMp3/jr6tvtVtqVZIE6eJ0ebaRItMpGiqKi9oAgFT5XgsgdbVcVCgTB9VR22QPFhpWFMIYuvXxRBXfSqkyuLiut3ywF0UOUEOw5lU+8bZCedRneh04RLHjTCGWnQrRccaJdrO2LWGOPMHLf4o1lhELq+n26cyIZZBUJMvO4XLizJfZIN9wPF8pZKVoLBCr0M0ywdEcMysag99xhQsNVLMhqp/WX0wRHVeuGT6scjtqWvoW44GZdrxcxmo4yhldj8kLy2hJBJtKGZPCUhMYt3Hj0Uh9OqoYYTzHJSJmRvOW77X2q4rR0oSwBYSbxBVa0hQhB+pFVoqanPzIqUDLe3oXZ+WR5uim2iD7izIGYwEAd+VR9zEabXlyVv/K5i
mHPExcvKeC6j9iNWVoSBtCCJGjITFYE2J7Z6pUhOG6EyPcbbcQPoaUjGbrhhvIJR5X3SF76iWc1yzZDvBYnD4ia1ogrGVhh0alNyN2QJMJY8sSpkyyaLpL07IDbhVvDfJcgZHw0BZR3AbfRTNqbuKD7fn4eydl1dQXw0wXhRqtMJHybe1aM1HRxHyADxfswny56wV+Q+swFkJezu5cDRlVsnhbOiF+Im0RdVvgFcqXYiumXEgIywHKknlC4JVA+4oAxSEY0nw171U/vc8VnIXD9m82NEQkVwojmEtP8Pha/TYWXshEo2QL8VY3EnuE4dzCZIr2/npbXYQrRuKFfYFBqLe6QgDPmYVO4E/9MEW9kie0Tq4zHgTbIg+S/pOkKUvDyPKZQcO4AnUi0bNGF7CEAGZOqa50CHLGs8NrFpxIxAJJLEUNwZ9hnmwLNuS+EBRxkIzFi4eUF8pwDzEMsd0si7oQdIO7kVdhpAqGfKZHO034runUTlf2s8d9aQ+BDtPuDlpI+0nzTVheRGPLmYuwUwHN6jDVD58tTAJJ5CzmGuCdXPOQ7e88KfG8ho6q4wyGrSVKnDo5gmSnpwlFtZE1tFqY/hMteLhs7iVqahUPkNdAN5hvcq0Fa93ajr0VbIkuKBipgqh666NwviPcdC2Y89MhawUD7mllxw/QO7S9TlAFbRE9qdST9gBWW7T5LkYRnnkw2hfJLcENEwwoszKVeWI+0xDGEgzyOgYll2wrT41keTi9zMvagg10QH4E8C/bsJbXNuwRQMaRlFQ3geABrsToSFD/JXImwytAmuWojHgcNoT6xEKEO7ZnWPP6tWQ8XnjoXIbErjQYoX90TazsAOPXeQGnDdBQkoBAqgxM2LdiMhgNYuhMud2UFNLKCkRnvw+LSLZLt5uOAsZZOIjHQqMyVk6AOoUoVtG58wrgyFXyQbdtD40BjgkkHmrlKtsapUHCRCg+eQdYQKC+A3HKrT4YvHJE9sOOKiICdaqDVMQtpJS8HeDw5hVl1tRMhnEBWvebYPXReQVzaTR6Uc4gTaFB1gm3IdXlD0N6aqTf/sn+R7kS2D5lhlIrGqieYA4kIZpeAXFVu/WsPbQtAr9V9TzAZAXOTfnIbEVP4Ils221PRuVwH+lmFSKSDuRf3pux42yI2B5zFKwpp0ZvtjC2i2egCNCtDTz33jk0XHUTg9Ui3FhTgDmiFdMJJm9OMLIbzbsSZ1NN3wh5BAg+NXoL4fCOcTr3FXtyt6CBufXSvrgkDjre3BJninFXKwgyarZBVu2eQ5lHukOvy6ZzX1VXGU9c4rVMxBD9CBj9CEZA9VR6O1Pks70rg1F1VsWsrLRfJlENWi9YsRYBPyAnCUUwomiqzFHaDULGxKAXZcwK2V9cCCyECbsL7MRGNyUMc1/TWtH9XRgiT2kodsvaAihH/aKy5sabVJLuC1XQZDynfp6BlcRdj16KVJ9I69OFkSy8k1hA8h3voT7ByTciQps3gfD/yNi4ssvby5iQRIcg5DfAJyAczSbgJoLtiV3wdwFq2kTGAdGJLHI+TUiKh7B8JrgNjdQL9KrUXS0ZmDoI854FhoACPEKAdxm4oLkNZrtzKK6fXumMVraO3nFlvjZpPxQchW2n14518bLH/YVusNZ98Decw/UdC+RtEF4dBcsi1uARotLghg7pGkM2jotjbIM82RNxcENGD1a7TTL+hQkgcsjzvvDONKWd3uIsoB98YnqwETFbtzlf8ILeh18t7P/V5niMuiHoSyxNKUy5LMtk1rxe4IST37JjoejSHm3+rE6feO8e7M0LqmN68x3dG5DQct8L4lfW8hxXdInuehTQvr4Gbw/GUgSQn31CH/WxgcMISJGLTafNzqgOQSpdLcGKxC3LAfBy4uWgD5GTDsRnI27g7HAm3++Z+UAsXmSmzbT0pnxY04IPljDPqk7lbq+U1dVEKB0mqnSoaz6eQI1rvi0nHeWt7P10D1UED3De2xUweLeiev9A04b8eaQ8ai+eLNPRL+0jz6LfIigERnLk8Wm18PV0xw0be1hMK8GS69JCVyeUzkB3ti3HRmaoG1RuU75V1TE8h0knPkEVsIomP4+kgnDHgZP9ss3JeIs0O5zmG/sI+GhMO45a8RioYJV2HcksDbkFRNQGPaEq9H0+Z6ANvv4rKbmVkpzphF4/DBRhInKsJkuEyFum3oVM2SG5C+p8/B/z0uN4ObsSBrrtQywhNQhGFO8/nydneX8RI8QI3Zza/SVoR/RbZArSB8xKC8uNwC90Lkkx211D+Y/XIPrD+wPoD6w+s/4NhDb837gsLBagjcV8YOD85brRTDyf+yCjiS49CVsd+2W4saMcNHoQJ55sEZENB+x2wz6+7vhJU6gz+LOi4elxdhS0GnmcfNXQ+8eALWsr1xstoyCNEVa7HnqTDCEYHvVhYvr7jwSeOd0Qx4GALFiEdnqD7DfjHqUadh3jSACc7ZlNMOVgfTh0hO4W5bZjUcGSPtyVCJmN59eNlpIT5SBHBdhkk9yy/+MkF7+pQn/fEBH5fshugwB28IQ+882Pgb2ppEkqIHEadgz0gcpTAelXCpIPJ20ME3MixFLyeAjwMkyHZHasfQK9TLa6ewI+QwScugyt+H5Edenty593LQ8ZDCzCJ80U+lEewoe3Fcai11RgDO7E3Kj2x/tqDCPy24pkgjwG5axvw3UkLPIUKQRgsZ6ob5msJLcogGnoGp/UlysAp7L4fKguZsQgDj2u1J6ILqyP6cVph1tgrBL8hVlPFfAZWvngE7+96BzJUoBWFQo1gmcHJtqARDzPYztmpClQ9EMobsnSPz9OEbzCwjG/MTu6hTQ94XoXfLVYkN8j48VYakGSI+87MhUJ2prxGRpzne0cZ+P1kAXQeBOJ5RlYknBVfsgUt77lyHhiZSc890IzbK5F2yHN7rBZ+8Mx0dQf2NMUl2Q254VuIzr76bnInmazLstlR+qZLwUeadLBJxZ1eUCc+QPg3kmPmQTzuyAI2kipkken2JVcOaGn6GpWuRPKYaUCHaZ6TgNW8Q/eL51porlCQ9xnl8d2OiZQPvL0jFNxjr6nCClToFdCLQsjA0zFgBYY5N6moAG9Xp+y8Q4TllQgGnPbMktsddkPfkqyeROEBsKH78aVNclEjn1KjRWrFqCsC/ZX2pPPP9CjLNKmMIblGMaUW4EsbC6dQig4e/J5eeIt3tybEEdrbkaGrCXlSXnMlNiJCdAj+KXHP7mIIXiDilnJlHMTJUw6cvAn4it2jJwU1OiZMn5s58phasjmwF1YqYUYqf6W+bgoJ+AUkSJDkCvTJmYp66vgyROs6wJFItyoi8N12NyZrEC4HfYto/Ckq3cxT+iotdpFGr623djPyj865yKmpbkA+CDeTI7S6c9Y4AVrbsSgbJNHFL4l+dVUxQhoh7q9SSEDsSEs6Yo8Z0IoMIv/kZWN5tqBA3EW/y8UabFGwNXtwdHCPH9MDZBW4IgyhcUgqQFQhPZHTFQEayYaw3Hj1jo/MPUREuPCeJ+Al3m/McXRLdzAfsBu66maNEiGCcmm2
EuVBFggyNnxX28gffkJ0JsbB3zjDSwEeLGQYRjjfafiCSCEfexp6l5EgPgJcah4exIOlwEtfsuGC3OQKA+31bIjTteNDyHzp0OjE5mfgYyvf2EbBV+s+/TW0ALm9MDURTZOgRa9QLwSzOh2ZvhB9MECTzEgwRXxDcv2eNaZsyNsS5Aq+JffCGbggd47k6cKX+05/8rdmS4jJGomGBJFdIlpc+wAsFveTU4Jc4EXpvPk7t2Vm4BHl63ld4zszp+uQRdOZffmd0SqPSL9lUPjr/GkwqUSD4cp37jEDVkBDMdzt9JOR1UFW62Z8/slqHlcNxjncuwEXzOkiH1OxvO/uZdCyDmJ9NEdk9E8mEyLKuhmd18Yda9aWBm0Hx1433+3mwGuG3K1L4PJPYrFHNgLd0wX9bkEG7dNKyHlG52H7ncOjIefUk4y1YD9pPIi2uMdZ+8kT96v8Zgr63X4O5itLyEef+y3/lV/dAt9VOANVAL8Pxds8Gio4fmUmEc1AnHQkvlqyrclJ0Kpl/7B+Mp4J0nu4CTSCrS3pYE5DgNzhrEXBNyowEUNa70meyFb8NZSIhkIUfvnCqtlC9AEc/stXo7YVNoiTZykJ3G+siiTIIBLa1N3f7XoQxmKRT/l7bVPtF1YlNJhgk0W3OO4PmCFSLCRy+pNHJhGVPqD5hnJfGxWuOFvlZW1j7TdeW6XnbS1cm9f9PhhPocFWvbDOCYJ/W4AGOn2171uFL5JrL5Fop5+s6ID4LBihKsJfGxjCrBek6Txfzir3CyGSa+7FdBizJi/1X0MZaCidqr9xOyAmCuCwJIRNftr6rbSP6E7e9d4PfkEK28+1seHF+MZeqwuTiBipLj9c+eHKD1d+uPLDlR+u/HDlhys/XPn/C1duC6d515Kiv9CMhlSCX7PLFmR/c6vDf+0Y0STENrOMv2JOHVBre3Y02FXxL/vBvYni/1CPKxL7cw807u/0QMPYv363RvvnNEHD8b/T4ep/ogua7/HQBs32ov+8bmbUX1mC+/WPpf6EeOrvNL/bcH9lyX9mYzPiP6KxWQJ/VOLd2IzfTYYDDUI/jc0+jc0+jc0+jc0+jc0+jc0+jc0+jc0+jc0+jc0+jc0+jc0+jc0+jc0+jc3++DQ2++PT2OzT2Oz7e8Kw/VNjM6Ev14//XWMzMKrseLeu/dPa7F+vtRm2k/5lW5tJFvVPa21WKjvutsZF/15rM54AKl1bm0nJ97Xkv6xHmcKXdqL809osW4j+QHsXjbWL+L4T97V8ryzk/1n6UD0D3TTjQww1RJa2mE/keiTIZNkr7rG+Ms87obN0twHRGYXHx6XXRGk4nG6WJ27VV7j3Ui0qbxbnh8F5MRo2b5whDzm2u5vnAgZdHGg8MFZwCjqqTruBIXJih3QrQ057iQEprIGnrFens/woBOmrpcNWzs/1oZjIsEtIk2RHZ1wU0IHOfhN08KZL0Bfri9AaLoEULigIKLWrIRgAqd8zmOTZsX0qLz8zifSWfifDRmTNzBqXM+nJeY4YVYyFByvZNTgNB6a3Sc3Qcz+DqmmLiLyTW6SLOsEsnOfRqqd3ukoRnYnBC4e6syNy8Z0nTtskuWA0U9EMsjHLvo+QVg9T6ectzmBssKrVmSfqfvp+p5xrS6cXRJqVy+stm+VTPhZaPlIadTjQDHewtM2xemfVINKhb5/jZCXBO63mwHVZKPr7dcBtiEnCFby1SnLf6VoP3eFigXq/wcq8ru9jPtvY+33MZwze16xVe+crYcny7iEs2oh4J0flnHfDEJwW/f0GKcOFt0ZvF45Xv5Kz59YW6ECa3u9BK3RFUZaNmr3ftwdFCIoyOfXl/ebQwKT5soa3Oblz906HgpDgy6EX2Oj9HrJXlbmpfWhi3vpKxypuKmGtVsk/UNhygp9nPrKcboxNFXlvJyrL1fcQKIN2SKLYkr37Lm7fISgdSeEZE2O23O3GynpWtjhEXXtlzgJkT4qD6u3N9zv0pA1fPt7vtO6/Ervc9/u6c9pl3+/5Umu0M/PZjwxxACWuLPlAUq/+HGXgQ4zhSFlU0J+qrzfUCj3AMRtpAlLrrhmI4oOJYxFXIdHZl0EQ7sYNci0BRwexrqn4YeNrI5Fdeww0od8QHLMWXotDSW0fO/wGGBfaI8JvfwKrkIGUtgJvVt8+d9eHCanqVsUEIcvg8RMS1QLfA8X099YMYtgHAe9ph8dNsiaf+2C7NatHywRrMx2iOrmpSjYJ2UGiOoC3vj9a8hgBcalV4loq3oBh2sM+gy2BgNcRx7aDxG+ZuLKCk8BdPRzFkEUcYEvcyh6aDpVJBLQCj7NhPVyBr+9lf5z79b3sfqxchSnJszEH6lbcZ2uLj3N2Ajy5sdLzSP1mEDjG1oLhZ4SAupziGo40fCD+gfj/K8RrpCBFXBlN61XH4VV+xDtCw45dYVM1MsOQ5YOfsg52CSpHhaMZXSc9hs0OlAe1M54mRarhbY9UxVtHgaYdnHHA4CV12Dh0VsaKF1GfEBYEbR5PT/7ucRbV7YqmvSNP+XGm1BOINIglnx2G2BIIoutbEXcg6zcg6z1xj4wRjT9UxIBjbhCQdumpW+EK+EjHO6Hhd35ysZ7RKAipTWU7aDjE+oWj4PGycp2QENflgHePYTukXVlgirJbNe5IHydkdYu3jRveU9J8SKU6suJR41LeDcgnpExI/EWUlP20cTI+gX5ZeOsV9jJABmKJ/FwMrShuF1yAlacd0hgvv8s63O3awW7P/SDqfIn8be01GmiGgk91ScxLSo+XzLcFfkJ70hXZuUglwetbpq8wqTJ0BReQiX4adEdAdP4caMJljYXXYGUCsR9j5cRPfpCM6WGRFWGyY6+sri3igVf/yh5PEe39gHlWVc+Ba8kH9f6a6VlwUy28xHrHAt00z1v0qGSgMB3bRakXSMI5NYe76+eiWFIkcsim6YL2oXRksJQuGjN7iZ4081R/EcYpIBoI8oAHtqnngWRan8AHJ7qFc1RPYyORxH6eLoh/CtfS8IGstpQIOYx+SI5ET5DNa6fGCTiW5mZpB6ozy/g+F3aT+gwpq/ykIBocqF7S2Etq6egzjew/HOG6hPvt8bwxKbUURzdsmlftVMPZF3QKeNuGcF+alpGl5OUwAI+Mp6PJhjuaKc5bYleRGniOkNAvl2pmbxgLHTEuyEogDZEqjOb3z+ILsHCSyMwbVmlA9m0ENs2mXhDHJhZzDnf7Y0kZjy3BbDfIHqVIQZ99DJkhD/q2s/F9gzawfVEvfk4vrc1vxW5oORfB12lf/WE9D4fIt3nNFgY9N27dlImj4yOsIGM7rqeKPaDPRRbK8qQ3RiyV/uW5d9y9raPP51a7iboigfRxMXOj6nIlCH4qb5+D4QgqwsDGqPgoACxH5xHPAnNrnh86cyphC+aQYYsS06WuP51AWxSgfP1hwzFC8Tfah/0wQ6Fi5BNRvsoVBTF3YaCJu2JSO4dGjkA1BCAVApkDe8dZXXSEyS2SXeYldI7u1uAf4+gPJie9YrB6e+rCxslG270jV6M+Os4A4TaB6gcHzBBMKXUD4MO
nnMc/LuIe6zxkzlrR7bLh0G+bHccpT20Z9DEjixu74J1+hx8ILneavt5SPyGJQGCYUO/WY1gKSAjabN4NgaoLyMeuJdV3tz3R1RHHBuM+hQYrvK50wDV9L0fvJnfeXhC8tL7Ta6sX3QDZbZq76iGtR+lkcYuWbGKhfXwfTdMPgSadvJs1rQa7XHpIst3z7uudzCZSOTWTxPz8btRXX9D1Rlnw91t1X8DxxyA5p/L7oKB84Kc4bO2vd1wTq757nS3v3Rzvs/fP3v/z9n7GlH7G7IEcEpapuQfGIEcb3yoREv3Ja+pvAbdU73ezl/5t2wtJqvbPmCPtC53OBFJ5C5nlsx4quF9zqUoPJXk5vY+DCmh6YdqGT/vrneTEt/fUsaT51aFz2usaUjgjUZyTI8Med/Ty4ydDIoazGaRumAUpxveh2clBuuzGsJI8fx34tZCYLIVAwY31erBATs/WwgtfbzoukfZQXS24vhtaTuC3qgvLf71JGYI1esLvjoevazQ73HEma21muL6rGdkkvOKF604siYLr1TO0vjxDCbxn/iBk+/f7ps+rd+0jPH69b1rfSqVwEXBjva4tE9HRDO7vr2UifadO0uHqvq9H0BfLe6Pvt2YjxYk2eoDmWugaFCC6M67L5N/v5QaBr7hhcFuvqQlZVY68WD9blZCGhiQgxGHfUQZWxeP1OK1JBRAt8nsiUR5aH51A0chnU7raoddttEj37bbmDdzeSwrZXRVafWJiGJi5ElwWvg5b4xjoREjF940Y6v1xwtRIN7RoEvU47o/XHFLg+6epHk1qGcWaX+DcS25uDLuastFfRpbf3U+hxhIJzWAEUxC3O7NaIj3b/5/2vqvbUaNZ+9e8l55FRlw2SSCBCEIB3UlkBVBAQtKvP1UoTNrH9nq/mfl8PHjZXlti726o1FVd/Tzs+2dREdiyeR1JHjbqI49OpSVmJWdpkDyWwzVyBjBuKooL93KspVVYREp5tG+PrR8JVHIs0nX5OC5eyJ53Tkqm3x51N/sJ6JdeDJrg4XFj0mizFhzw2KehZNs7DwQ7am8EMohc4QR38jyw7Qsks+YbVnceR8NhiDyR3NPjULuHb3o3qts9e8YayGKrOi4T8jg+rXh5FPRCvjUIzbMU/Y4JY/Y8HL2m5Cy5DBwlbiPTkvTlghZS7nGAPcO3pLt+Wb7Ot5uG6gt3u9eO5nqaPORKjn/Q71bZRLPKvZQXj7HKSiXSbBy1kAVNoTJb1o/Ym2yeezsBQozXxmMfppBD7+ys6f7zQH8Cpkvzo2vQEghbUI2a80sgqo/RZNdzZME+cS0wwdT3hBta8wX7ABIIZGPONayanNYzck1p8khyq9fhfYmExvEWZOUDJpDJWVi3QAetHa9SoixSqQXfPJEQY7mhktPdfbw7npLJuGUacFrItTw0dVnhhTR8HNbPFgJElOmmlO2n0FTi44k09yE21cM3y5fV9d7GYzObada6kvwnBMSoFFInQTQRWn+jPFPR20Zuc32Mh2+2T9aW0+79gMo8RyloJuaecIyBol/d6/oBr/fJyZy7gaC+oR6u15eF3oZ7WKZekbBvzfEQ/nMviZSQgUv9JHq6BJVFI+No35t/t52LhIKyS4T6x5AwSDF3vG2xKhlaYqfigmHW82kh0cZFdZ5AhJ3GZ8uEMKN7NJepR9xY4kGni8scDJlKa3Y1Z5nTfFoL4YrdPPdkv5Mw04wMDDu+vJMmvcfTNpZBqBOp12tsKEIyvRSn9Wo1k8QL/ZhKxn545KzJCxSxhlGvh7tZtRLQ0TeFgh9z7eUruo99gxThsXOqgxUcGLzc+kN7+YSX21VV1j2HHGbnZ4Zg3khjz06vXeWJrHlnEy4/vcOEGGEvT7fhI5uZoHdUZ0gP8LKN5nu3D9l12OoBLmfLur2M99FeFvGy2T4E1L/ZcHs8P1d3G+xJP8Llx7Y48SFiwOXTU3I2eGn/aBePy1O0zYO1tbLWS9F8ovuwsscPCvIpWPpxiZdbe2itq738sK4p4ZT9UszNpzBVcl3C5UdG4Wm+whVwuZN1J+tO1r9A1oOwvypjFht9Bh7OPrmQtMmzOK3xWB1z2YmXY5+hFOE8H0LWLAvPu9DS7VDeh8aKHV0jQ71L+zWDOYeQxUZ+o2MpTZbXGBlm9P2gbfJZD/DZ9zpqRhbmw/5oe3e5l7TjbDmTZ1dsw7Rcjud6dGbr+phIpSCmLtKbn925o3GPzARynh7IhWUp3vafWT3MdCZDs2wDd79NueFhV7do9+j+YF5tsmSkBc8EOwVrGdVXSMnHz8oh1FYZrPTtij/IIRdSdwy3GzxGVPB3Rh5kZw9LbWC9Ue8t+U47HtQGULuYxkFp8w0nhDxszcc7s2jHQy5Tg6iaSsutRV3sgTK69py75j+LCk+D2qgYcA/IXrrR5D7uIJeLwiQvWudRA0bde/y+0ShknabB5qo9qyv4C45UxYPOP1Q5OVvUl8EtfHT5YH0l64hQ/uM1BXZL6tTYDKtd7RbyB/mmnIEQIc97rsSQGzirVRFuzOf88Ds0gWylfeQ75nOuW1+1Z+EEkuCQErgyW5vhc8hL3C0TFuZbhBGxPXfSby2912xM1mhE+wsRnkGEZfGAEmLP0NNWRwGE+OgyohBZImvBs85KCYhEkhxQ4bOqgt/PlGLQPETuZn15x/TKQQs5RXAiilD1tYd/NCWnkvsqCrQWUIt1Wx9EiFlPm0mFmBPy9XlQeC8VqkSdqJCptjHDxjy+3df/QolQOxaykGWPOhxzUux9gxKzlxDlhlyJ5D2qz7viraNUPX0hRAWEWCSPXq7Bo11f9iDEB+iy3ypR891/rx8IZHHm8VwayxRRuhInkDsKdI3xRjrt6fQyLw5zMQ6t0cHSb5LDrvM7v7muZtMNTyOIHC3HmMiXgySu3YMj3NYDHgvOZp7wN2brnI+N5yJMRDLgj3GvHNNBuUUj0Xhu7L5fHLFjtmPqmvfzMpphI2peXwQJAYL1ikatLwfxIz2tLgyf4waBw8/5m5i1xxTFHm/Sd74n0FveuPfHKYn5+3aNZ+xw93fnjHqIfLrjQWQdSofZ+UhjctyT8G4eT4ElsjRjxd4oSuvB+bENQDNifGGMxmEvohi1GHGBunK9FBtabnkXk/sCby9Z5q5bruljwHM9Z75dh4yzPuKGzQVrskrCJ70JBHem51AuSFd2NuHdAZ7yU3jWSag9TSeKT8WGmKrNSWf46DCPz0Z/PV/RQp1S6TiQEjbir5tFe56LjCdTxx/ySmiaiFT8MfBPRvzUE7/Af+LRxi/wnzT/AfCXpj7xvxT3y/4k3K8jI+wXLisGGY006/dD/7LSpxea9qVx5gONM8InXvqVkF/hJ2kcvqL+IyoO/Ac/1cUuOf12Sme+AfezFP2dxjnq52hbXBn8lEwtNVZdyx4063x/+KP3nayTOEvGz4/Vsc6rrCqXW+3zt99I5fPvWBWitFv1rJO6vo1bg1CX57r6WnnJtajnX/wc4lAQ1x6f1Otz5PbD7fWhhMedf/nhi7/Cj5//rP10+yulnarzMUr+zBHExy
/Wy2OW1H/2i08houT+1AiOyXZZF5fkq/v48Q4s/iQHpqkr/Pt24ceQq+Pr8jSJ6ur4d136sxHR/4vzfWEwXzr2fxg2oWM+Eb+LAnBFEkR2KfwgX+19Q8TBf++rNPOBswo/LTT3fpJmYQ32iRJovjkOTIzO9L9HjZzwN9TI/0o1Mh/lVB2pRkeq0ZFqdKQaHalGR6rRkWp0pBodqUZHqtGRanSkGh2pRkeq0ZFqdKQaHalGR6rRkWp0pBodqUZHqtGRavynI9XoSDU6Uo2O4qGTeCfxjlSjI9XoSDU6Uo2OVKMj1ejIFbpn7569I9XoSDU6Uo1/DdlAR6rRkWp0pBodqUZHPvDPJB/oZN3J+t8o645U4x9MJtCRanSkGh2pRkeq0ZFqfIn2FGnpkyh8Dfhkmf//PBrMfwO0j87Hyxtb+5ug7vEU1d9C3b+4Mv4hqPvXfXeg3g7U24F6O1BvB+rtQL0dqLcD9Xag3g7U24F6O1BvB+rtQL0dqLcD9Xag3g7U24F6O1BvB+rtQL0dqPc/Hai3A/V2oN4OYvrvhZh2Ev/VEu9AvR2otwP1/qcD9Xag3n8vwLF79u7Zf7dn70C9Hai3A/X+S8COHai3A/V2oN4O1NuBH/+Z4MdO1p2s/42y7kC9/2AwYwfq7UC9Hai3A/V2oN6vQL2M9Enkvwb1MuI/ANTL/6T38ToyrHEBXPY119fG2igggemMxvCN7kB5QoGj4mXdJ7b2271EXRC/fqMvLQnf2cL7Res/+i3qHxrCa7IO3f1X6G7h76K7pX8Wulv4SZ6uTywrhKuKM5o61gS9nFjfv1sdlurvv/wV7+hO05SJoo/e0R0LK4H/Qe/olqjvWRv4D1gbWOrTC2f/a97ULf0WoP4NVniD8AHqVyb0bnvvQP0dqL8D9Xeg/g7U34H6O1B/B+rvQP0dqL8D9Xeg/g7U34H6O1B/B+rvQP0dqP8rUL8SHL4C9Ssz8qeg/uB+OM87SP8/FNJPO8E/FtK/Js0vg/TXjlVE3P8G6Zct8oL0a6fXZ22TiM1ljOY/1qg3pH/WhOzOl5V+OF0VY/m6acKNqQxyrSCTwX7AWDWE4nafcyUuj+dcsht6tK739BHjUroyVnhNkjhs8T5OJj6O5agZZrzKMlbN5nGsoz3qkfSU5rTVzedhc5kUGjmZ+1wdPMAIbSNedsTs+TMe9pU9Lyj6E/vxEb5VSGSYxufZhnlmkGEVNv57XHMcONRrjsxTyEybNPl2/ZoDtFCOmvccmUYqc3BVhv3P90oSY6Pmn5/Iy4xc16Ds/jyuOWCT9xymTIbKMPJuefYe11MLnjTPOYgiTzTC5VO1//neLeXmU+85FNNTmmKn2Zvle1xNE8/vOTSS6SAfc+Fd3/duyk2URc854PkOfYiHh3wta+9x84EtvudAkYZ+oPRn9ntcRYnY9xwK8ZQsKzUTROq9x+USz3nPkfkKSbSJl2/W73FBpDH3nqMV6f6qDvvvcWXiWBv1NQcxdQ/ko0/s8PO9n6yBMX7N4Zk5sZRh6N386j2uF5y59xyg2ImmcflW/XzvmSHf5CfQA5+vCuVGOYBi3+OOtc29fs+BIs1121yEzXtcc3CP3nO0ilUirwCRvu6dGDlPuPccsmbKnO8r/fHoPa6ixGz22d5ApEWp2VX4Hhds4ex94RtTkI859a7Z53tvTpu485XOVzpf6Xyl85XOVzpf6Xzl5/kKacZn9ZKyXCwtqxtudZR3CfN7q7SnsuJebU9dS4GNh/RRAw05Kc15o8fD3uarmbRwoAxjn6p5lbNuJFL3F77owQxqcwBJJftg3jTbSMtlBzuc656NR4wdbg5yZndsVWSrk5aPRvFmz6uos10DRQvFeMbGzYhn5BJRDj5CLRxlmBmKUJ3jixI0R3NwS6R0SuHjlXLoBYcUScNIH362lKVf0DoeqFZIoGnCYnPbKKsmM/f3dHJZ2KRlj9I1X3bcnnEtm6JngU7mW4HQCMVzZAPskao45AYb0LIB0pgy4awF+HLWGT3CV0beVNN4GjddR4jJOw0LJdK3TO/OcDnY9/Job3LFl8PxhHDHlV+z0n7kqTnN8Os7lF3NBeLKrhR196hcIeYcLurVGFByFoDtj5w4SeNJRE6D/XrNCkqG56BNL8jtmhJie3BTov32vFdvA1LhHTkIY/WSJJ3BbPfVNWapJfG0k8zF/m3lCVIB+pjtAt/d2yQLOKU5lO5xf6VckHI9Ep00AJM5TSCSuVc5KBMqAzkmJXZMEHypVOBvyXDGajuybWu7UWu7qiZzU/9khrwaKnZWqmGYFj4apE4SfRJ6F8fryylIsEVVZDVa7Dy3QWPw0U/ufdlrjDzW8k1r6ztbB3sIV0xvHWkL4gxY396cH4fGQRbEGsw1NwdbziXFSVv6sd5gDBrZD+nSHszlyAvqaOSGGsYqFmPm7L6R1rZSw9qwn244t7VoYyJz8yCmZNt0lUlWDoSxNG1guLyHsdqtgvuiwaPmRUmiPMGj8llggxfs7O12o4TNTKPo/LK6Qby2FYyJwkAdEE6FcVNvefGjAE+azynCiasLZ4XakJyG+yZfRpmHsCLQzCqQe4ojhyDpQ5wrhhm9Y9REj4kzmsfrhe2bllKBfSxjF/cEztyMRP3DdVBFG69PtiTpU9RxIS3vEeq6Dnu8ldkqhdLCJuhcbOwQtX3t5bcY5MjLYRCUxoWezm0TYigvnbMjQg3GEC+DlW+X92EMvrZcHXDXtACvFPEO01O/nEs7iERUj11I/UZTLA8ife9MO6lX4HybyVwMYAqEZ6K3CWJUW7RsBkGyvrBXzzN11M9pMueN3ILgW+7j9MI1G++AMVDZn6+LjZmSDSyfiYzx8uqNtYnY1x0iyGvwF2ancptWfhMt1x3dCjIVhIyyvHBHBF7KXAAecJmsLjCfg89qLh64gLU88YKdI11CFhH+YHMry3rBWoaRfNN5mdNs1F9eqr01wjEmjaUI4+3ifps2cxh3HqQUUiUoIwhwg+uZT72GHCPwjXjlBEwb42m0DXaqBje12WIU3BxnERJGuGgnqqDZQqr5IKBB0zeNk44ecW/lNVouMnAAiEuhyEhrfM4qQpvzbqAFGmMjc6HmmxvGcwk1j3u5Qxa8E58AvJP08UlXuPXTrNR1Mx4qXlGGWX01yGvVkMkcnn5qRGwAK18zNBUIQ/41vZx01HM0WxoDk2SLCUbmIfYYljXb6iEZOeOeTGobkqp0j4esGrXfnPAIlhWxHkItIJvi72I9nd/HWlY6q7ASeggCKdBuqGAg3UpYdXc7YZtWGWRSAlqBmNfIgrXfgO+7Ns44rSFboXD1teJ0R5kneL5Y18/UXfHVyoToh65wH47jCCK8eBTjZq7dZG8ONqzjjXncjaphlZjN8SCPO6CUrKrk5nDMpoub1QzGuAZ7GD4m2lV2rv6qcIjHoBVjW1Y3UI5TzBbzGcORRpMy0KooWGy7GhEaV4j+kZqD6uCZBcMnj5V1uiHcOp0zo2xQouX3x0xL54GolRKhSc1MVtFCuBhP2FSYMw5c1NKe8aRtT4aRdYqNS
YA3Fw0htluHKdWzTRbCQNkXt5KKOKI80uG+6YaSskgOMLYNp6WLxCKNiTazVR1D6iNgKpIPRIRsCBcvlNx9VywGgTyDfGAkpSGDT2rgqk7ZsCr5CcrjKOliu1a7IegEu+qF1BsjcxJ46anxEY6lNLiC9ht8DK3IdiDtoRX518/5L6ydkPuwMcX1YayiwTkgRIijFcZVfVffxkg7gutaUdeMGDM4NxusdhdwikDpZeVwkZ486YTrcx93Il3K23g7fUQcm+GndI/OOVhFd457qcBmww3arIacmN6eajYmhsDtTKhsBNCf4Q6FlaTyiytkDOvRhRYj3xzJY5AEPR/UKu7Pl/JiUdu9jTdUXPDWw+l2KucTiThDho1oyc40+YorW6r1nVqElUffY68rEBJi9tsYz22XUREpc6wjNKwjdBnyqWw649ZrbZShp47yvNdS4XkQU6ZxodhXNUR9Yk8Qj2Ii6DGLI3waUAVvczrkczI17rcAKTlEb6Lry0oZND54XFnbzfiBbJKbfqWeINtDKwb/MXoWbzdtSrbFrGlzmUakjnCVpT2q14LOjrDqCNOFo94grIE8z/t50OCdG3IMK22M+ByoNjY9yPaScqEyc4xzBuoTlY9pkryTce1aesptitmyA8kArIkVM5s1egZBOZgx0gmT1QbXV7Xa2BqkOSB5ycfEQEcfYXB1uJzO6eacZa23En1xa5+4D/dR2+7VrAYFiZQDo9lR/q5OulzsB+dilXUkBZ7DDA4rLuDBLqZn1WVbcBjaaLqDhLoAySZhmo4djGc8syw3YPSYtZuCWPPcHBuHPULcR6cxkpwT7YHl0PzA1/SJGOej9ZaeatXcr7VxEAWESMwGrGdl+3JmaVnGrkbl7YZlCCsdT2K4nVnnXbArVm4xdmd794KtY4F/jO9eoGiYDeXLs+u5n+dCBfl8D9u8xMAyRyLcACInKbnp+Dqg8IQlRsTyZFyPTTHBqLTZTBY9agMFWECjN6fVBIpsPDaDfRtvRENw2/P7aT/1IE25OvaRCwTZQD+4r2mxfxT3dpt5NacwwoMk+aivDI5HOQFbr9PGpIk93C+2k74LA+x5xxYbX5BNP7gLAhPcDlGNKxYhaTQPMmcmEPhbyGIwBMcegbtdbhdbauBAtXVEAK0mZD5kmVJyRbMTAmWZnqy+gbAMLvCzYik0c1iTjvMeuImqHGaLqbym0JODtdPTkgxWdjzPca43vUNp4WHcjBisTnHq9pwNBdQtLlTlhIBN9fezRawE6Nu3NHcqDfJUd35aXs4bdCDlmjfYDNeo0NydTzoPcRPb/2Q9Ic1hu5t6oYRr3Ya7X5DkFyzcPR4X7nB/Yy4ckaVr7BB+VSnwt5ALCAOaqCDlw3Q39zYQmZxTyQWHtiLrZN3JupN1J+vfWNb494P9vYcA8guzv4t4fvLCG4saT/yxYUgyn4OsY3wf8jbS6WMFYeH5JhlyKKTPwvx8O6pzuc+tsZ7FNa64tKXCkMLKsw5LIWkI1oK2vt0RDYac4a7KdlazwjzE0XFdTO3AHBGsiZcjJj3TmAumc2G+QPYqrI8jg1ufl42BOhmJfNokmH24RQh5irjjxWjgaj5xVEZjl1pbx2uwCJNQVzB3OaveWruRxsPqalqsx0yDdd9qdEaCCqyGfKzOZ5OAL9RGznDnMDy51AF3jlZ4v33GEiaNP8Yd8EFCRVj1pFhhWCLbG/XMKa7rXEX3F1hHaFgTZ5Mtvb9AHro7SuvRzYfkoUKZLJO7Ns1mmEM7d9flWqpAEZ/E4ftC0wtaDjGs29LjCioGKNd4rN1ZGyuFHCSMmTN3Ol+3GVIM4m7oGovWm6Khp/TG9Tm3IY0FDRy2+ZgJNz0T7Met5KvR2+LmN+7V5EsSY5avzLD62+7RDHWkktG5C2ZmeLJtUirTK+bO8SKf9M2JnO0g050dFw3NU5gZ78SRViPNFlZeaXC626FWQvLjtzagarjvexU3HOSZWrsz4h4fTxRj3c+maOeTibK+QhaJZ8Xv8R1u79h6HiaZq1o6wIzDLROdoHI7tBn+5Bib/RHm05y0indQhg9xd/ZWnxqv0dgiy8oRZ/KnCGukxsScVBmZKbcgE9D/YDWLfdyPm/VQG6t83oPU7RlXpnBrZrsrnSvsLDbQDqMkYVGryQm+T48tUYTOYd/nol0edGqsNiXOiNFpv7eN9J7Mzf0UuWTkGCudAd7BwLqWkaKjb+eLeD0Cw/IzkIFkHOPVbo9PI+xWcdEo8gFlI9SXm9FoaQE1pSEoXKtRT0H7y5zGJMdopmkCq1/mbLuLqVYoX2Fwl3RON7GCHwt3YhNvaOE+QrWbiULeQCXll1uGV0DRc6xPmX28V+ZYBYK3ZK3jgCc3CXoyPyF5bwxXUu7iWjh9YiVQMVVsOe1temqGM3LJLQpMS15hXcBiBFlt0T4lS+8vTiSbw31N8Uikl6ch1m6jnRiXoMuzOQQbPyr66Uo4s40Wo9AQWuq80RXqo3WiSP3IHGA/iLZWM6SqdNt9Ari3WZqVENGVZ0Tfen0lhBVhWW/VOYN7R8bqxIypAVIJ4s4/u+Ft35F13Hcx91rabrboVNs9mLm0Ty7RFLsKUjqfI/FPJOOuQrRgmy0IGmLDPOP9YkRCa4w7ItJ8n6ywStzvxNnFy7yzdcCnEfLT1eAU3EHZlEOV87ELhB0bciocqIePuDuzpLHeWONLPQ497DBc8HznIJAVDmrs5lx7oor7I+il1vTAHHocVun3+LyBMjmncPU6lsxieyJz7HyZSFTkkCv6sZ3wziAlefucQbu1gL29eWSBTSPQX98iXghndU9sdAP7ENEmxQsjpssdxPV9XFraQBtmGFfoITuW1+gFiTtjFxuSjU/mkezKIaOs2p1obBA5Gdhiy+Nxt6V3Twl7gRv95F9fvS0rxoooac/rDl6dOdPELpopjrNXRyubwfoWI/DX/WowNYPBaP3Ve4zRFWAoUdot3h1ZE2O1aS3X767mrF3BJFd6EOjhnB7UmLrtv9gHkXIS9/oEiYmFdycTd5RNK1y3xDtt11bA1Q6PvfIvukismrF36zG09m4s1pAjCLWQCg8KQaQ/zLDnGa7Pw1cPT8CeU82K9p16t/Fwt8WbXY13n7hu47eYCg/6SJwvy7Afva6H5NlfHaLf5bSIKIAvhyIOgaEms2dnEmwG90kvzJNSsW1O4qqa1Qf73fFcwbpHW2gjVEspiXMOZOwdXo1w8lIFpVCw6h3ZBVspn4dSYCiw8M1Tq1aFuw9Y8G+eRIutbMCTr+pq4r20qrAYg1ikmdw/6LZAY0uFRGRfOFz11KoKg8kOm57urvcWM+4UyysteveRWbDSA5Ln6PuWaLTVWRsvC4eqXnqt9Jo4xrwln/xyMMLBYO260M6JgX+Y4gq0eNJvtsEX4tpNYarm3RU9g59NLoiKCFoCUpx1AyudH2hx7j0VonrWWInOl7hMMvPzUAMYyuSKl27P4EQTPCyJ2yZvWs7W9sHutFHtv/WLUdg5tsSkm8FLe5UpNwo4UpF1Xtl5ZeeVnVd2Xtl5ZeeVnVd2Xvl/
xSuHqVs+sKTwfySjYfXJ59k1G7u/iX2iPz8x2CTubcYx2VJuMeHa1yvAYFs92IzP3k5RfgzHFU1J3HckVx+xGFK9T7TwK2kMWfoDlqsfQW7m+ppqKoGmImOhT5DT0PHD346vkIZ/PnFfa/4jzkKRe7Nc/hrWQuYn6V1WYEWhLGc8/g11TbGfWPErXbPc97qWqE944Pdn6FpcGfyUTC01Vl3LHjTrfH/44yMiux+haqUqL3BRK6MqTo6/nbq5V1B+BXTpe9ZC7icxkQ4mqnGLnXpnMEvKd7LqrnF/0D+LklYxiE8glvvmODDh3qjer6CfTOiYT8SP6CclQWSXP4h+khO+1iL3Ipj9clnmP1DjjyCe/FiNH/GN/gXx5BdS/Jg/8kcwU35POfkxKyV8zut6f2pNqj18FJf0pyKqyrQAazh+iirk8mh5Khkdv0fy7Dw5Vu3Pf5zxrCEitfW4qk9/vG79D5rpfWrZLP/fld7rfR2lPyAc7X1ANvr67sfrXPxJrtuD7yiDyGYwaWmk22zMJNbvtzrzIvO1znvfO/qvDde9v/bzpIzJ8Vg1KMbt8nQqoh/A/PwFOzWIs6hvfsujXJVfMlT/en7olzy+5If+UHAvjfwlP/QXmv0ogr+++9s00s8Z3KpoXfJFSc5+vYKw1DcW83jy5199NprvB5K+Hoj/1vQekvluoNb63o/93xvkR/b3l0zm/6fIy7+iXf9blvqnBvjXTObC37TUH85k/qf3/cNXGTLVfNLHWv9bnvJvU8d/C2u5SPGfKPqbJOL7Uo/+aEX5aZkj89Fmzq9YUf5Ld/x7K9F/76evguj/2ooiflNZ8t9azN9dUcRvihv+233E/3pFgY/HCmPA518Hx8ztKk7wN/4H</diagram></mxfile>
2102.11938/main_diagram/main_diagram.pdf ADDED
Binary file (25.3 kB). View file
 
2102.11938/paper_text/intro_method.md ADDED
@@ -0,0 +1,73 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Humans have a rich capacity to infer the underlying intentions of others by observing their actions. For example, when we watch the animations from Heider and Simmel (1944) (see [video](https://www.youtube.com/watch?v=VTNmLt7QX8E)2 and Figure 1), we attribute goals and preferences to simple 2D shapes moving in a flat world. Using behavioral experiments, developmental cognitive scientists have found that even young infants infer intentionality in the actions of other agents. Infants expect agents: to have object-based goals (Gergely et al., 1995; Luo, 2011; Song et al., 2005; Woodward, 1998, 1999; Woodward and Sommerville, 2000); to have goals that reflect preferences (Repacholi and Gopnik, 1997; Kuhlmeier et al., 2003; Buresh and Woodward, 2007); to engage in instrumental actions that bring about goals (Carpenter et al., 2005; Elsner et al., 2007; Hernik and Csibra, 2015; Gerson et al., 2015; Saxe et al., 2007; Woodward and Sommerville, 2000); and to act efficiently towards goals (Gergely et al., 1995; Gergely and Csibra, 1997, 2003; Liu et al., 2019, 2017; Colomer et al., 2020).
4
+
5
+ Machine-learning and AI systems, in contrast, are much more limited in their understanding of other agents. They typically aim only to predict outcomes of interest (e.g., churn, clicks, likes, etc.) rather than to learn about the goals and preferences that underlie such outcomes. This
6
+
7
+ Figure 1: A still from Heider and Simmel (1944). Despite the simplicity of the visual display, we ascribe intentionality to the three shapes in the scene: The large triangle is chasing the small triangle and the circle, whose goals are to avoid it.
8
+
9
+
10
+
11
+ impoverished "machine theory of mind"3may be a critical difference between human and machine intelligence more generally, and addressing it is crucial if machine learning aims to achieve the flexibility of human commonsense reasoning (Lake et al., 2017).
12
+
13
+ <sup>1</sup>The dataset and code are available here: <https://kanishkgandhi.com/bib>
14
+
15
+ <sup>2</sup>https://www.youtube.com/watch?v=VTNmLt7QX8E
16
+
17
+ Recent computational work has aimed to focus on such reasoning by adopting several approaches. Inverse reinforcement learning (Ng et al., 2000; Abbeel and Ng, 2004; Ziebart et al., 2008; Ho and Ermon, 2016; Xu et al., 2019) and Bayesian approaches (Ullman et al., 2009; Baker et al., 2009, 2011, 2017; Jara-Ettinger, 2019) have modeled other agents as rational, yet noisy, planners. In these models, rationality serves as the tool by which to infer the underlying intentions that best explain an agent's observed behavior. Game theoretic models have aimed to capture an opponent's thought processes in multi-agent interactive scenarios (see survey: Albrecht and Stone, 2018), and learningbased neural network approaches have focused on learning predictive models of other agents' latent mental states, either through structured architectures that encourage mental-state representations (Rabinowitz et al., 2018) or through the explicit modeling of other agents' mental states using a different agent's forward model (Raileanu et al., 2018). Despite the increasing sophistication of models that focus on reasoning about agents, they have not been evaluated or compared using a comprehensive benchmark that captures the generalizability of human reasoning about agents. For example, some evaluations of machines' reasoning about agents have provided fewer than 100 sample episodes (Baker et al., 2009, 2011, 2017), making it infeasible to evaluate learning-based approaches that require substantial training. Other evaluations have used the same distribution of episodes for both training and test (Rabinowitz et al., 2018), making it difficult to measure how abstract or flexible a model's performance is. Moreover, existing evaluations have not been translatable to the behavioral paradigms that test infant cognition, and so their results cannot be analyzed in terms of the representations and processes that support successful human reasoning.
18
+
19
+ Our benchmark, the Baby Intuitions Benchmark (BIB), presents a comprehensive set of evaluations of commonsense reasoning about agents suitable for machines and infants alike. BIB adapts experimental stimuli from studies with infants that have captured the content and abstract nature of their knowledge (Baillargeon et al., 2016; Banaji and Gelman, 2013). It provides a substantial amount of training episodes in addition to out-of-distribution test episodes. Moreover, BIB adopts a "violation of expectation" (VOE) paradigm (similar to Riochet et al. (2018); Smith et al. (2019)), commonly used in infant research, which makes its direct validation with infants possible and its results interpretable in terms of human performance. The VOE paradigm, moreover, offers an additional advantage relative to other measures of machine performance like predictive accuracy, in that it reveals how an observer might fail: VOE directly contrasts one outcome, which requires a high-level, humanlike understanding of an event, to another one, which instantiates some lower-level, heuristically, or perceptually based alternative. BIB thus presents both a general framework for designing any benchmark aiming to examine commonsense reasoning across domains as varied as agents, objects, and places, as well as a key step in bridging machines' impoverished understanding of intentionality with humans' rich one.
20
+
21
+ AGENT (Shu et al., 2021), a benchmark developed contemporaneously with BIB, is inspired by infants' knowledge about agents and has been validated with behavioral data from adults. Similar to BIB, AGENT challenges machines to reason about the intentions of agents as the underlying cause of their actions. Both benchmarks test whether models can predict that agents have object-based goals and move efficiently to those goals. There are nevertheless key differences between BIB and AGENT. First, BIB evaluates whether models can reason about multiple agents, inaccessible goals, instrumental actions, and the differences between the intentions of rational and irrational agents; AGENT does not test these competencies. The advantage of including them is that they introduce additional elements—beyond a single rational agent and one or two possible goal objects—that models must flexibly account for in their reasoning. These competencies, moreover, extend the infant cognition literature, potentially allowing BIB to further inform tests for infants. Second, BIB and AGENT evaluate new models differently. AGENT involves training on many different leave-out splits, where those splits include relatively minor differences between the training and test sets. BIB, in contrast, presents a single canonical split designed to maximally evaluate the abstractness of a model's reasoning: Models tested on BIB must flexibly combine learning from different types of training scenarios to solve a novel test scenario. We thus ultimately see BIB and AGENT as complementary and hope that new models focused on commonsense reasoning about agents will be evaluated on both.
22
+
23
+ <sup>3</sup>Note that in the psychology literature, "theory of mind" typically refers to the attribution of mental states, such as phenomenological or epistemic states (i.e., perceptions or beliefs), to other intentional agents (Premack and Woodruff, 1978). In this paper, we address only one potential component of theory of mind, present from early infancy, which focuses on reasoning about the intentional states, not the phenomenological or epistemic states, of others (Spelke, 2016).
24
+
25
+ BIB focuses on the following questions: 1) Can an AI system represent an agent as having a preferred goal object? 2) Can it bind specific preferences for goal objects to specific agents? 3) Can it understand that physical obstacles might restrict agents' actions, and does it predict that an agent might approach a nonpreferred object when their preferred one is inaccessible? 4) Can it represent an agent's sequence of actions as instrumental, directed towards a higher-order goal object? 5) Can it infer that a rational agent will move efficiently towards a goal object?
26
+
27
+ Following the VOE paradigm, each of BIB's tasks includes a familiarization phase and a test phase, together referred to as an "episode." The familiarization phase presents eight successive trials introducing the main elements of the visual displays used in the test phase and allows the observer to form expectations about the future behavior of those elements based on their prior knowledge or learning. The test phase includes an unexpected and expected outcome based on what was observed during familiarization. Typically, the unexpected outcome is perceptually similar to the familiarization trials but is conceptually implausible, while the expected outcome is more perceptually different but involves no conceptual violation. The unexpected outcome is thus unexpected only if the observer possesses an abstract understanding of the events, and the expected outcome reflects a lower-level, heuristically, or perceptually based alternative. When VOE is used with infants, their looking time to each outcome is measured, and infants tend to look longer at the unexpected outcome (Baillargeon et al., 1985; Turk-Browne et al., 2008; Oakes, 2010).
28
+
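+ As one rough way to operationalize the VOE comparison for a model (rather than via looking time), the sketch below assumes the model exposes a scalar surprise score for a test video given the familiarization trials, and counts an episode as solved when the unexpected outcome is rated as more surprising than the expected one. The `surprise` interface and this pairwise scoring rule are illustrative assumptions, not BIB's prescribed evaluation protocol.
+
+ ```python
+ def voe_accuracy(model, episodes):
+     """Fraction of episodes in which the model rates the unexpected test
+     outcome as more surprising than the expected one (illustrative only)."""
+     correct = 0
+     for familiarization, expected, unexpected in episodes:
+         s_expected = model.surprise(familiarization, expected)      # assumed model API
+         s_unexpected = model.surprise(familiarization, unexpected)  # assumed model API
+         correct += int(s_unexpected > s_expected)
+     return correct / len(episodes)
+ ```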
29
+ Inspired by Heider and Simmel (1944), the primary set of visual stimuli present a fully observable "grid world," shown from an overhead perspective, and populated with simple geometric shapes that take on different roles (e.g. "agents," "objects," "tools") and provide few cues to those roles. We chose this type of environment as particularly suitable for testing AI systems (e.g., Baker et al., 2017; Rabinowitz et al., 2018) because it allows for procedural generation of a large number of episodes, and the simple visuals focus the problem on reasoning about agents. This design will also allow infancy researchers to test new questions about infant's understanding of agents in future work. While BIB and the baseline models tested here focus on this 2D grid-world environment, we have also instantiated the stimuli in 3D as a means of varying perceptual difficulty in future studies evaluating other models (appendix A).
30
+
31
+ Developmental Background. Infants infer that agents have preferences for goal objects, not goal locations (Gergely et al., 1995; Luo, 2011; Song et al., 2005; Woodward, 1998, 1999; Woodward and Sommerville, 2000). As illustrated in Figure 2 (left), Woodward (1998)'s seminal study showed that when 5- and 9-month-old infants saw a hand repeatedly reaching to a ball on the left over a bear on the right, they then looked longer when the hand reached to the left for the bear, even though the direction of the reach was more similar in that event to the events in the previous trials. These results suggest that the infants expected that the hand would reach consistently to a preferred goal object as opposed to a preferred goal location. Other studies have shown that infants' interpretations are not restricted to reaching events. For example, infants attribute a preference for goal objects to a 3D box during a live puppet show when that box seemingly exhibits self-propelled motion. (Luo, 2011; Luo and Baillargeon, 2005; Shimizu and Johnson, 2004). When shown an agent repeatedly moving to the same object at approximately the same location, do machines infer that the agent's goal is a preferred object, not location?
32
+
33
+ Familiarization Trials. The familiarization shows an agent repeatedly moving towards a specific object in a world with two objects (Figure 2a center). The agent's starting position is fixed, and the locations of the objects are correlated with their identities such that the preferred object and nonpreferred object appear in generally the same location across trials (appendix Figure 9 and 10).
34
+
35
+ Test Trials. The test uses two object locations that had been used during one familiarization trial, but the identity of the objects at those locations has been switched. In the expected outcome (Figure 2b), the agent moves to the object that had been their goal during the familiarization, i.e., their preferred object, but the trajectory of their motion and the location of that object is different from familiarization. In the unexpected outcome (Figure 2c), the agent moves to the nonpreferred object, but the trajectory of their motion and the location they move to is the same as familiarization. The model is successful if it expects the agent to go to the preferred object in a different location.
36
+
37
+ ![](_page_3_Figure_0.jpeg)
38
+
39
+ Figure 2: Can machines represent an agent's preferred goal object? Inspired by Woodward (1998)'s study with infants (left), BIB presents an agent navigating to their preferred goal object in approximately the same location across eight familiarization trials (a). At test, the location of the preferred goal object changes. The expected outcome (b) presents the agent moving to their preferred goal object in a new location, and the unexpected outcome (c) presents the agent moving to their nonpreferred object in the preferred object's old location. This evaluation has been rendered in 2D (middle) and 3D (right).
40
+
41
+ Figure 3: Can machines infer that rational agents move efficiently towards their goals? Inspired by Gergely et al. (1995) (left), BIB presents a rational agent navigating around an obstacle to its goal object across eight familiarization trials (a). At test, the rational agent either follows an efficient path (b) or an inefficient path (c).
42
+
43
+ # Method
44
+
45
+ Developmental Background. Infants understand the principle of solidity (e.g., that solid objects cannot pass through one another), and they apply this principle to both inanimate entities (Baillargeon, 1987; Baillargeon et al., 1992; Spelke et al., 1992) and animate entities, such as human hands (Saxe et al., 2006; Luo et al., 2009). Infants' expectations about the objects agents might approach are also informed by object accessibility. Scott and Baillargeon (2013) demonstrate, for example, that 16-month-old infants expected an agent, facing two identical objects, to reach for the one in the container without a lid versus the one in the container with a lid. When shown an agent repeatedly moving to the same object, do machines recognize that the agent's access to that object might change, and do they predict that an agent might then approach a nonpreferred object?
46
+
47
+ Familiarization Trials. The familiarization shows an agent consistently choosing one object over the other, as above, and objects appear at widely varying locations in the grid world (Figure 4).
48
+
49
+ Test Trials. The test presents two new object locations. In the no-expectation outcome, the preferred object is now blocked on all sides by fixed, black barriers, and the agent moves to the nonpreferred object. In the unexpected outcome, both of the objects remain accessible, and the agent moves to the nonpreferred object (Figure 4). The model is successful if it has the same relative expectations; that is, weak or no
50
+
51
+ ![](_page_4_Figure_2.jpeg)
52
+
53
+ Figure 4: Can machines understand that obstacles restrict actions? The familiarization trials present an agent navigating to their preferred object in varied locations (a). At test, either the preferred object is inaccessible and the agent goes to the nonpreferred object (b) or the preferred object is accessible and the agent goes to the nonpreferred object (c).
54
+
55
+ expectation about the agent's moving to the nonpreferred object only when the preferred object is inaccessible.
56
+
57
+ Developmental Background. Infants represent an agent's sequence of actions as instrumental to achieving a higher-order goal (Carpenter et al., 2005; Elsner et al., 2007; Hernik and Csibra, 2015; Gerson et al., 2015; Saxe et al., 2007; Sommerville and Woodward, 2005; Woodward and Sommerville, 2000). For example, Sommerville and Woodward (2005) showed that 12-month-old infants understand an actor's pulling a cloth as a means of getting the otherwise out-of-reach object placed on it. When shown an agent repeatedly taking the same action to effect a change in the environment that enables them to move towards an object, do machines expect that that object is the goal, as opposed to the sequence of actions?
58
+
59
+ Familiarization Trials. The familiarization includes five main elements: an agent; a goal object; a key; a lock; and a green removable barrier (see Figure 5). The green barrier initially restricts the agent's access to the object. And so, the agent removes the barrier by collecting and inserting the key into the lock. The agent then moves to the object.
60
+
61
+ Test Trials. The test phase presents three different scenarios for a total of six different outcomes. In the scenario with no green barrier: the agent moves directly to the object (expected); or to the key (unexpected) (Figure 5a). In the scenario with an inconsequential green barrier: the agent moves directly to the object (expected); or to the
62
+
63
+ ![](_page_4_Figure_9.jpeg)
64
+
65
+ Figure 5: Can machines recognize instrumental actions towards higher-order goals? BIB's three types of test trials evaluate machines' understanding of instrumental actions. The agent's goal is initially inaccessible (blocked by a green removable barrier). During familiarization (left), the agent removes the barrier by retrieving the key (triangle) and inserting it into the lock. At test, the agent's moving directly to the goal is expected when the green barrier is absent (a) or not blocking the goal object (b,c); its moving to the key in those cases is unexpected.
66
+
67
+ key (unexpected) (Figure 5b). In the scenario with variability in the presence/absence of the green barrier: the barrier blocks the agent's access to the object, and the agent moves to the key (expected); or, the barrier does not block the object and the agent moves to the key (unexpected). The model is successful if it expects the agent to go to the key only when the green removable barrier is blocking the goal object (Figure 5c).
68
+
69
+ Developmental Background. Infants expect agents to move efficiently towards their goals (Gergely et al., 1995; Gergely and Csibra, 1997, 2003; Liu et al., 2017, 2019; Colomer et al., 2020). In a seminal study by Gergely et al. (1995), for example, 12-month-old infants repeatedly saw a small circle jumping over an obstacle to get to a big circle (see Figure 3 left). At test, the obstacle was removed, and the small circle either took the same, now inefficient, path to get to the big circle or took the straight, efficient path. Infants were surprised when the agent took the familiar, inefficient path. These findings have been replicated by instantiating the agent and object in different ways (as, e.g., humans, geometric shapes, or puppets) and by using different kinds of presentations (e.g., prerecorded or live) (Colomer et al., 2020; Phillips and Wellman, 2005; Sodian et al., 2004; Southgate et al., 2008; Liu et al., 2017). When infants see an irrational agent, i.e., one moving inefficiently to their goal from the start, however, they do not form any expectations about that agent's actions at test (Gergely et al., 1995; Liu and Spelke, 2017). When shown a rational agent repeatedly taking an efficient path around obstacles to its goal object, do machines expect that that agent will continue to take efficient paths, as opposed to similar-looking paths, relative to the obstacles in the environment?
70
+
71
+ Familiarization Trials. The familiarization includes two different scenarios: a rational agent consistently moves along an efficient path to its goal object around a fixed black barrier in the grid world (Figure 3a); or, an irrational agent moves along the same paths as the rational agent, but there is no barrier in the way (appendix, Figure 14b).
72
+
73
+ Test Trials. The test includes two possible scenarios. One scenario shows only the rational agent, and it presents one of the familiarization trials but with the barrier between the agent and the goal object removed or changed in position (such that a curved path is still required). The agent either moves along an efficient path to its goal (expected) or the agent moves along one of two unexpected paths, either the exact same, but now inefficient, path that it had during familiarization (path control, Figure 3) or along a path that is inefficient but takes the same amount of time as the efficient path (in this latter case, the goal object starts off closer to the agent, appendix Figure 11). The second scenario shows either the rational or irrational agent taking an inefficient path towards its goal. This outcome should be unexpected in the case of the rational agent, but should yield no expectation in the case of the irrational agent (appendix, Figure 14). The model is successful if it expects only a rational agent to modify its path based on the location of barriers and move efficiently to its goal.
2103.17185/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2103.17185/paper_text/intro_method.md ADDED
@@ -0,0 +1,116 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ In the original style transfer formulation, Gatys et al. [@Gatys2016] propose an iterative method for combining the content of one image with the style of another by jointly minimizing content and style losses. The content loss is the Euclidean distance between the rendered image $I_r$ and the content image $I_c$ in the VGG feature space:
4
+
5
+ $$\begin{equation}
6
+ \label{eq:content_loss}
7
+ \mathcal{L}_{\textnormal{content}} = ||\phi_l(\mathit{I}_{\textnormal{r}}) - \phi_l(\mathit{I}_{\textnormal{c}})||_2,
8
+ \end{equation}$$
9
+
10
+ where $\phi_l(\cdot)$ denotes the $l$-th layer of the VGG-19 network. The style loss is defined as:
11
+
12
+ $$\begin{equation}
13
+ \label{eq:style_loss}
14
+ \mathcal{L}_{\textnormal{style}} = \sum\limits_{l=0}^L w_l E_l
15
+ \end{equation}$$
16
+
17
+ with
18
+
19
+ $$\begin{equation}
20
+ E_l = \frac{1}{N_l^2 M_l^2} ||G_r^l - G_s^l||_F
21
+ \end{equation}$$
22
+
23
+ where $G_r^l$ and $G_s^l$ are the Gram matrices of $\mathit{I}_{\textnormal{r}}$ and the style image $\mathit{I}_{\textnormal{s}}$ respectively, computed from the $l$-th layer of the VGG-19 network.
24
+
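+ To make these losses concrete, the following is a minimal PyTorch sketch of the content loss and the Gram-matrix style loss. It assumes the VGG-19 feature maps have already been extracted; the helper names and the per-layer weights are illustrative rather than the reference implementation.
+
+ ```python
+ import torch
+
+ def gram_matrix(feat):
+     # feat: (C, H, W) feature map phi_l(.) from one VGG-19 layer
+     c, h, w = feat.shape
+     f = feat.reshape(c, h * w)
+     return f @ f.t()                                   # (C, C) Gram matrix
+
+ def content_loss(feat_r, feat_c):
+     # Euclidean distance between rendered and content features
+     return torch.norm(feat_r - feat_c)
+
+ def style_loss(feats_r, feats_s, weights):
+     # Weighted sum over layers of normalized Gram-matrix differences
+     loss = 0.0
+     for fr, fs, wl in zip(feats_r, feats_s, weights):
+         c, h, w = fr.shape
+         e_l = torch.norm(gram_matrix(fr) - gram_matrix(fs)) / (c ** 2 * (h * w) ** 2)
+         loss = loss + wl * e_l
+     return loss
+ ```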
25
+ # Method
26
+
27
+ <figure id="fig:patch_compare">
28
+ <table>
29
+ <tbody>
30
+ <tr>
31
+ <td style="text-align: center;">Gatys et al.</td>
32
+ <td style="text-align: center;">Ours</td>
33
+ </tr>
34
+ <tr>
35
+ <td style="text-align: center;"></td>
36
+ <td style="text-align: center;"></td>
37
+ </tr>
38
+ <tr>
39
+ <td style="text-align: center;"><img src="figures/experiments/compare/tubingen_gatys_crop.jpg" style="width:23.0%" alt="image" /></td>
40
+ <td style="text-align: center;"><img src="figures/experiments/compare/tubingen_our_crop.jpg" style="width:23.0%" alt="image" /></td>
41
+ </tr>
42
+ </tbody>
43
+ </table>
44
+ <p><br />
45
+ </p>
46
+ <figcaption>For Gatys et al. <span class="citation" data-cites="Gatys2016"></span>, the pixels are adjusted to match the brushstroke pattern. In our approach, the brushstroke pattern is occurring by design. Style image: “Starry Night” by Vincent van Gogh. Content image: original image of Tuebingen from the paper <span class="citation" data-cites="Gatys2016"></span>. Same region of the sky is cropped.</figcaption>
47
+ </figure>
48
+
49
+ <figure id="fig:approach_diagram">
50
+ <div class="center">
51
+ <img src="figures/approach_diagram_v5.jpg" style="width:99.0%" />
52
+ </div>
53
+ <figcaption>Comparison of our method (bottom row) with Gatys et al. <span class="citation" data-cites="Gatys2016"></span> (top row). Gatys et al. <span class="citation" data-cites="Gatys2016"></span> optimize pixels to minimize style and content loss. We directly optimize parameters of the brushstrokes. To do that we have designed a differentiable rendering mechanism that maps brushstrokes onto the canvas. Each brushstroke is parameterized by color, location, width and shape. Brushstroke parameters are updated by gradient backpropagation (red, dashed arrows).</figcaption>
54
+ </figure>
55
+
56
+ The method by Gatys et al. [@Gatys2016] adjusts each pixel individually to minimize the content and style losses. However, artworks generally consist of brushstrokes, not pixels. Instead of optimizing on pixels, we therefore optimize directly on parameterized brushstrokes, using the same content and style losses defined in Eq. [\[eq:content_loss\]](#eq:content_loss){reference-type="ref" reference="eq:content_loss"} and Eq. [\[eq:style_loss\]](#eq:style_loss){reference-type="ref" reference="eq:style_loss"}, respectively. See Fig. [4](#fig:approach_diagram){reference-type="ref" reference="fig:approach_diagram"} for an overview of our method and Fig. [3](#fig:patch_compare){reference-type="ref" reference="fig:patch_compare"} for a comparison of the synthesized brushstroke patterns.\
57
+ Our brushstrokes are parameterized by location, color, width, and shape. The shape of a brushstroke is modelled as a quadratic Bézier curve [@Nakano2019; @Ganin2018; @Huang2019], which can be parameterized by: $$\begin{equation}
58
+ \label{eq:bezier_curve}
59
+ \mathbf{B}(t) = (1 - t)^{2}\mathbf{P}_0 + 2(1 - t)t\mathbf{P}_1 + t^{2}\mathbf{P}_2 \mbox{ , } 0 \le t \le 1.
60
+ \end{equation}$$ A key difficulty here is to find an efficient and differentiable mapping from the brushstroke parameter space into the pixel domain. To this end, we propose a mechanism to construct this mapping explicitly. See Sec. [4.2](#sec:neural_renderer){reference-type="ref" reference="sec:neural_renderer"} for details.\
61
+ Using our rendering mechanism we can backpropagate gradients from the style and content losses through the rendered pixels directly to the brushstroke parameters.\
62
+ After the optimization is finished, we render the optimized brushstroke parameters to obtain an image $I$ and then apply the standard Gatys et al. [@Gatys2016] approach on the pixel level using $I_s$ as style image and $I$ as content image. This final step blends the brushstrokes together and adds some texture. Fig. [7](#fig:before_after_pix_opt){reference-type="ref" reference="fig:before_after_pix_opt"} shows the effect of this pixel optimization.
63
+
64
+ Similar to Gatys et al. [@Gatys2016], we use layers "conv4" and "conv5" for the content loss and layers "conv1", "conv2", "conv3", "conv4", and "conv5" for the style loss.\
65
+ We use Adam [@adam2014] with learning rate 0.1 for optimization.\
66
+ Similar to Johnson et al. [@Johnson2016], we employ a total variation regularization.
67
+
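+ A sketch of this optimization loop (not the authors' code): the brushstroke parameters are the only optimization variables, while the renderer and the combined loss are passed in as callables; the default stroke count, step count, and loss weighting are placeholders.
+
+ ```python
+ import torch
+
+ def fit_brushstrokes(renderer, total_loss, n_strokes=5000, n_params=12,
+                      lr=0.1, steps=1000):
+     # renderer: maps an (N, F) tensor of stroke parameters to an (H, W, 3) canvas
+     # total_loss: callable combining content, style, and total-variation terms
+     params = torch.randn(n_strokes, n_params, requires_grad=True)
+     optimizer = torch.optim.Adam([params], lr=lr)
+     for _ in range(steps):
+         optimizer.zero_grad()
+         loss = total_loss(renderer(params))
+         loss.backward()          # gradients flow through the rendering to the strokes
+         optimizer.step()
+     return params.detach()
+ ```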
68
+ Nowadays, generative models have reached unmatched image quality on a variety of datasets [@karras2019style; @brock2018large]. Thus, our first attempt to generate brushstrokes followed this line of work. We generated a dataset of brushstrokes simulated in the FluidPaint environment [^2] and trained a network inspired by StyleGAN [@karras2019style] to generate images conditioned on brushstroke parameters. Despite achieving satisfactory visual quality, the main limitation of this approach is that it is memory-intensive and can not be efficiently scaled to process a large number of brushstrokes in parallel. This is critical for us since our method relies on an iterative optimization procedure.
69
+
70
+ Therefore, instead of training a neural network to generate brushstrokes, we explicitly construct a differentiable function which transforms a collection of brushstrokes parameterized by location, shape, width and color into pixel values on a canvas. Formally, the renderer is a function: $$\begin{equation}
71
+ \label{eq:renderer_as_function}
72
+ \mathcal{R}: \mathbb{R}^{N \times F} \rightarrow \mathbb{R}^{H \times W \times 3},
73
+ \end{equation}$$ where $N$ denotes the number of brushstrokes, $F$ the number of brushstroke parameters (12 in our case), and $H$ and $W$ are the height and width of the image to render. This renderer requires less memory and is also not constrained by the limitations of a brushstroke dataset.
74
+
75
+ Before explaining how our renderer works, let us start with a simple example. Assume we have a flat disk parameterized with color, radius, and location (1, 1, and 2 scalars respectively) and we want to draw it on a canvas. For the sake of brevity, we assume our images are grayscale but the algorithm trivially generalizes to the RGB space. A grayscale image is a 2D matrix of pixel values. First, we need to decide for every pixel whether or not it belongs to the disk. For this, we simply subtract the disk location from each pixel coordinate and compute the $L_2$ norm to obtain distances $D$ from each pixel to the disk center. Now we have to check if the distance $D$ is smaller than the radius to get a binary mask $M$. To incorporate color, it suffices to multiply the mask by a color value.
76
+
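+ This single-disk example translates almost directly into code; a minimal sketch with illustrative sizes and values (grayscale, hard threshold):
+
+ ```python
+ import torch
+
+ H, W = 64, 64
+ ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
+                         torch.arange(W, dtype=torch.float32), indexing="ij")
+ coords = torch.stack([xs, ys], dim=-1)        # (H, W, 2) pixel coordinates
+
+ center = torch.tensor([20.0, 30.0])           # disk location (2 scalars)
+ radius, color = 8.0, 0.7                      # radius and grayscale color (1 scalar each)
+
+ D = torch.norm(coords - center, dim=-1)       # (H, W) distance of each pixel to the center
+ M = (D < radius).float()                      # binary mask of pixels inside the disk
+ I = M * color                                 # grayscale rendering of the disk
+ ```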
77
+ If we have two disks, we simply repeat the procedure above for each disk separately and obtain two separate images with disks, namely $I_{1}, I_{2} \in \mathbb{R}^{H \times W \times 3}$. Now, how do we blend $I_{1}, I_{2}$ together? If they do not overlap we can sum the pixel values across disks $I_{1} + I_{2}$. However, if the disks overlap, adding them together will produce artifacts. Therefore, in the overlapping regions, we will assign each pixel to the nearest disk. This can be done by computing the distances $D_1,\ D_2 \in \mathbb{R}^{H \times W}$ from each pixel to each disk center and determine for every pixel the closer disk. We call this object an assignment matrix $A:= \{1\ \text{if}\ D_1 \leq D_2,\ 0\ \text{otherwise}\} \in \mathbb{R}^{H \times W}$. Now the final image $I$ can be computed using the matrices $I_1$, $I_2$ and $A$: $I:=I_1 * A + I_2 * (\mathbf{1} - A)$. The assignment matrix $A$ naturally generalizes to $N$ objects: $$\begin{equation}
78
+ \label{eq:assignment_matrix}
79
+ A(i,j,n) := \begin{cases}
80
+ 1 &\text{if}\ D_n(i,j) < D_k(i,j)\ \forall k\neq n,\\
81
+ 0 &\text{otherwise}.
82
+ \end{cases}
83
+ \end{equation}$$ It indicates which object is the nearest to the coordinate $(i,j)$. The final image computation for $N$ images of disks $I_1, .. , I_N$ then corresponds to: $$\begin{equation}
84
+ \label{eq:assignment_renderings_blending}
85
+ I(i,j):=\sum_{n=1}^{N} I_n(i,j) * A(i,j,n)
86
+ \end{equation}$$
87
+
88
+ Hence, the final image is computed by the weighted sum of renderings weighted according to the assignment matrix $A$. Both the assignment matrix and the individual renderings $I_1,...,I_N$ originate from the distance matrices $D_1,.., D_N$ from each pixel location to the object. Indeed, to render a single object we take its distance matrix, threshold with radius/width and multiply by a color value. The assignment matrix is an indicator function of the smallest distance across distances $D_1,.., D_N$. Thus, the matrix of distances is a cornerstone of our approach. We can effectively render any object for which we can compute the distances from each pixel to the object.
89
+
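+ Put as code, the hard assignment and blending described above could look as follows, with per-object renderings and distances stacked along a last axis; the differentiable relaxations are introduced later.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def blend(images, distances):
+     # images, distances: (H, W, N) per-object renderings and pixel-to-object distances
+     nearest = distances.argmin(dim=-1)                   # index of the closest object per pixel
+     A = F.one_hot(nearest, distances.shape[-1]).float()  # (H, W, N) hard assignment matrix
+     return (images * A).sum(dim=-1)                      # weighted sum of the renderings
+ ```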
90
+ Our initial goal was to render brushstrokes. To render a disk we take a distance matrix $D$, get a mask of points that are closer than the radius and multiply this mask by a color value. The same holds for a Bézier curve.\
91
+ First, we compute a matrix of distances to the curve $D_\mathbf{B}$ (matrix of distances from every point in a 2D image to the nearest point on the Bézier curve).\
92
+ Then, we mask points that are closer than the brushstroke width and multiply them by a color value. We approximate the distance from a point $p$ to a Bézier curve by sampling $S$ equidistant points $p'_1,..,p'_S$ along the curve and computing the minimum pairwise distance between $p$ and $p'_1,..,p'_S$. Note that there exists an analytical solution of this distance for a quadratic Bézier curve, however, the approximated distance allows the use of arbitrary parametric curves.\
93
+ In the final step, we can compute the individual renderings of brushstrokes and the assignment matrix as in Eq. [\[eq:assignment_matrix\]](#eq:assignment_matrix){reference-type="ref" reference="eq:assignment_matrix"} and blend them together into the final rendering with Eq. [\[eq:assignment_renderings_blending\]](#eq:assignment_renderings_blending){reference-type="ref" reference="eq:assignment_renderings_blending"}.
94
+
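+ The sampling-based distance can be sketched as follows: evaluate the quadratic Bézier curve $\mathbf{B}(t)$ defined earlier at $S$ values of $t$ and take, for every pixel, the minimum distance to those samples. Shapes and the default $S$ are illustrative.
+
+ ```python
+ import torch
+
+ def bezier_points(p0, p1, p2, S=10):
+     # Sample S points along the quadratic Bezier curve B(t)
+     t = torch.linspace(0.0, 1.0, S).unsqueeze(-1)                    # (S, 1)
+     return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2    # (S, 2)
+
+ def distance_to_curve(coords, p0, p1, p2, S=10):
+     # coords: (H, W, 2) pixel grid; p0, p1, p2: (2,) control points
+     pts = bezier_points(p0, p1, p2, S)                               # (S, 2)
+     d = torch.norm(coords.unsqueeze(2) - pts, dim=-1)                # (H, W, S)
+     return d.min(dim=-1).values                                      # (H, W) approximate distances
+ ```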
95
+ For the sake of clarity, we have left out two important details in the above explanation.\
96
+ First, the renderer should be differentiable, yet, the computation of the assignment matrix and the masking operation are both discontinuous. To alleviate this problem, we implement a masking operation with a sigmoid function. To make the assignment matrix computation differentiable we replace it with a softmax operation with high temperature.\
97
+ Second, the computation of distances between every brushstroke and every pixel on the canvas is computationally expensive, memory-intensive and also redundant because a brushstroke only affects the nearby area of the canvas. Therefore, we limit the computation of distances from a pixel to all the brushstrokes to only the K nearest brushstrokes, see Sec. [10.2](#sec:distances){reference-type="ref" reference="sec:distances"} for details.
98
+
99
+ ::: algorithm
100
+ init grid of pixel coordinates $\mathcal{C} \in \mathbb{R}^{H \times W \times 2}$; init tensor of brushstroke colors $c_{strokes}$ from $\mathcal{B}$ parameters; init tensor of brushstroke widths $w_{strokes}$ from $\mathcal{B}$ parameters; sample $S$ points $t \in [0; 1]$ along each brushstroke: $\mathcal{B}_{sampled}:=\{\text{compute}~B_i(t_j)\ \text{with}\
101
+ Eq.\ref{eq:bezier_curve}\ | \forall i,j \}$
102
+
103
+ $D(x,y,n,s) := ||\mathcal{C}(x,y) - \mathcal{B}_{sampled}(n,s)||_2$
104
+
105
+ $D_{strokes} := min(D, axis=4)$
106
+
107
+ $M_{strokes} := \textbf{sigm}(\textbf{max}(t \cdot ||w_{strokes} - D_{strokes}||_2 , axis=4))$
108
+
109
+ $I_{strokes} := M_{strokes} \cdot c_{strokes}$
110
+
111
+ $A:=\textbf{softmax}(t \cdot D_{strokes}, axis=3)$
112
+
113
+ $I:=\textbf{einsum}('xyn,xync->xyc', A, I_{strokes})$
114
+ :::
115
+
116
+ See Alg.[\[alg:render_algo\]](#alg:render_algo){reference-type="ref" reference="alg:render_algo"}. See Sec. [10](#sec:renderer){reference-type="ref" reference="sec:renderer"} for additional technical details of the implementation.
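+
+ To make the rendering and blending steps above concrete, here is a minimal NumPy sketch of the renderer described in the algorithm. It is a simplified illustration, not the exact implementation: curve points are assumed pre-sampled, the names `coords`, `curve_points`, `widths`, `colors` and the temperature value are placeholders, and the softmax is taken over negative distances so that the nearest brushstroke dominates.
+
+ ```python
+ import numpy as np
+
+ def render_strokes(coords, curve_points, widths, colors, temp=100.0):
+     """Soft rendering sketch (illustrative names).
+     coords:       (H, W, 2)  pixel coordinate grid C
+     curve_points: (N, S, 2)  S points sampled along each of N Bezier curves
+     widths:       (N,)       brushstroke widths
+     colors:       (N, 3)     brushstroke colors
+     """
+     # D(x, y, n, s): distance from every pixel to every sampled curve point
+     diff = coords[:, :, None, None, :] - curve_points[None, None, :, :, :]
+     D = np.linalg.norm(diff, axis=-1)            # (H, W, N, S)
+     D_strokes = D.min(axis=-1)                   # (H, W, N) distance to each stroke
+     # Soft mask: pixels within the stroke width get values close to 1
+     M_strokes = 1.0 / (1.0 + np.exp(-temp * (widths - D_strokes)))
+     I_strokes = M_strokes[..., None] * colors    # (H, W, N, 3) per-stroke renderings
+     # Soft assignment: softmax over strokes of the negative distance
+     logits = -temp * D_strokes
+     A = np.exp(logits - logits.max(axis=-1, keepdims=True))
+     A = A / A.sum(axis=-1, keepdims=True)        # (H, W, N)
+     return np.einsum('xyn,xync->xyc', A, I_strokes)   # blended image (H, W, 3)
+ ```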
2104.05981/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2104.05981/paper_text/intro_method.md ADDED
@@ -0,0 +1,95 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ In 2014, Michael Jordan, in an interview [@ieee] said that "*Deep learning is good at certain problems like image classification and identifying objects in the scene, but it struggles to talk about how those objects relate to each other, or how a person/robot would interact with those objects. For example, humans can deal with inferences about the scene: what if I sit down on that?, what if I put something on top of something? etc. There exists a range of problems that are far beyond the capability of today's machines.*\"
4
+
5
+ Although this interview took place six years ago, and there has since been a lot of progress in deep learning and its applications to visual understanding, with a large body of visual question answering (VQA) datasets [@VQA; @ren2015exploring; @hudson2019gqa] compiled and many models developed over them, the above-mentioned "inferences about the scene" issue raised by Jordan remains largely unaddressed.
6
+
7
+ In most existing VQA datasets, scene understanding is holistic and questions are centered around information explicitly present in the image (i.e. objects, attributes and actions). As a result, advanced object detection and scene graph techniques have been quite successful in achieving good performance over these datasets. However, given an image, humans can infer a wide range of implicit information: the purpose of various objects in a scene, events that might have happened before, numerous imaginary situations and their possible future outcomes, the intentions of a subject to perform particular actions, and many more.
8
+
9
+ ![Motivation for the proposed `CLEVR_HYP` dataset: an example demonstrating how humans can do mental simulations and reason over resulting scenario.](motivation_2.png){#fig:motivation width="\\linewidth"}
10
+
11
+ Among the above, an ability to imagine taking specific actions and simulating probable results without actually acting or experiencing is an important aspect of human cognition (Figure [1](#fig:motivation){reference-type="ref" reference="fig:motivation"} gives an example of this). Thus, we believe that having autonomous systems equipped with a similar capability will further advance AI research. This is particularly useful for robots performing on-demand tasks in safety-critical situations or navigating through dynamic environments, where they imagine possible outcomes for various situations without executing instructions directly.
12
+
13
+ Motivated by the above, we propose a challenge that attempts to bridge the gap between state-of-the-art AI and human-level cognition. The main contributions of this paper[^3] are as follows:
14
+
15
+ - We formalize a novel question answering task with respect to a hypothetical state of the world (in a visual form) when some action (described in a textual form) is performed.
16
+
17
+ - We create a large-scale dataset for this task and refer to it as `CLEVR_HYP`, i.e. VQA with hypothetical actions performed over images in CLEVR [@johnson2017clevr] style.
18
+
19
+ - We first evaluate the direct extensions of top VQA and NLQA (Natural language QA) solvers on this dataset. Then, we propose new baselines to solve `CLEVR_HYP` and report their results.
20
+
21
+ - Through analysis and ablations, we provide insights about the capability of diverse architectures to perform joint reasoning over image-text modality.
22
+
23
+ <figure id="fig:examples">
24
+ <div class="minipage">
25
+ <p><strong>I</strong>: <img src="cvlqa_ex.png" style="height:70.0%" alt="image" /></p>
26
+ </div>
27
+ <div class="minipage">
28
+ <div class="enumerate">
29
+ <p><strong>T<span class="math inline"><sub><em>A</em></sub></span></strong>: <span style="color: blue">Paint</span> the small green ball with cyan color.<br />
30
+ <strong>Q<span class="math inline"><sub><em>H</em></sub></span></strong>: Are there <span style="color: orange">equal</span> yellow cubes on left of purple object and cyan spheres? (A: yes)</p>
31
+ <p><strong>T<span class="math inline"><sub><em>A</em></sub></span></strong>: <span style="color: blue">Add</span> a brown rubber cube behind the blue sphere that inherits its size from the green object.<br />
32
+ <strong>Q<span class="math inline"><sub><em>H</em></sub></span></strong>: <span style="color: orange">How many</span> things are <span style="color: orange">either</span> brown <span style="color: orange">or</span> small? (A: 6)</p>
33
+ <p><strong>T<span class="math inline"><sub><em>A</em></sub></span></strong>: John <span style="color: blue">moves</span> the small red cylinder on the large cube that is to the right of purple cylinder.<br />
34
+ <strong>Q<span class="math inline"><sub><em>H</em></sub></span></strong>: <span style="color: orange">What color</span> is the object that is at the bottom of the small red cylinder? (A: yellow)</p>
35
+ </div>
36
+ </div>
37
+ <figcaption>Three examples from <code>CLEVR_HYP</code> dataset: given image (I), action text (T<span class="math inline"><sub><em>A</em></sub></span>), question about hypothetical scenario (Q<span class="math inline"><sub><em>H</em></sub></span>) and corresponding answer (A). The task is to understand possible perturbations in I with respect to various <span style="color: blue">action(s)</span> performed as described in T<span class="math inline"><sub><em>A</em></sub></span>. Questions test various <span style="color: orange">reasoning capabilities</span> of a model with respect to the results of those action(s).</figcaption>
38
+ </figure>
39
+
40
+ # Method
41
+
42
+ Pre-trained transformer-based architectures have been observed [@li-etal-2020-bert] to capture a rich hierarchy of language structures (text-only models) and to effectively map entities/words to corresponding image regions (vision-language models). We experiment with various transformer-based models to assess their capability to understand the effects of actions in a visual domain.
43
+
44
+ To evaluate the hypothetical VQA task with a text-only model, we convert images into templated text using scene graphs. The templated text contains two kinds of sentences: one describing properties of the objects, i.e. "There is a $<$Z$>$ $<$C$>$ $<$M$>$ $<$S$>$\", and the other describing relative spatial location, i.e. "The $<$Z$>$ $<$C$>$ $<$M$>$ $<$S$>$ is $<$R$>$ the $<$Z1$>$ $<$C1$>$ $<$M1$>$ $<$S1$>$\". For example, "There is a small green metal cube.\" and "The large yellow rubber sphere is to the left of the small green metal cube\". We then concatenate the templated text with the action text to create a reading comprehension passage. We use the state-of-the-art machine comprehension baseline RoBERTa [@liu2019roberta] finetuned on the RACE dataset [@lai2017race][^6]. Finally, we predict an answer to the question using this reading comprehension passage.
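+
+ As a rough illustration of this templating step, the sketch below converts a toy scene graph into the two sentence types described above; the object fields and relation strings are placeholders rather than the exact attribute vocabulary of the dataset.
+
+ ```python
+ def scene_graph_to_text(objects, relations):
+     """Toy templating sketch; field names ('size', 'color', ...) are illustrative."""
+     sentences = []
+     for obj in objects:
+         sentences.append(
+             f"There is a {obj['size']} {obj['color']} {obj['material']} {obj['shape']}."
+         )
+     for subj, rel, tgt in relations:  # rel is a phrase such as "to the left of"
+         sentences.append(
+             f"The {subj['size']} {subj['color']} {subj['material']} {subj['shape']} "
+             f"is {rel} the {tgt['size']} {tgt['color']} {tgt['material']} {tgt['shape']}."
+         )
+     return " ".join(sentences)
+
+ objects = [
+     {"size": "small", "color": "green", "material": "metal", "shape": "cube"},
+     {"size": "large", "color": "yellow", "material": "rubber", "shape": "sphere"},
+ ]
+ relations = [(objects[1], "to the left of", objects[0])]
+ passage = scene_graph_to_text(objects, relations)  # prepended to the action text
+ ```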
45
+
46
+ Proposed by [@tan2019lxmert], LXMERT is one of the best transformer-based pre-trainable visual-linguistic representations that supports VQA as a downstream task. Typical VQA systems take an image and a language input. Therefore, to evaluate `CLEVR_HYP` in VQA style, we concatenate the action text and the question to form a single text input. Since LXMERT is pre-trained on natural images, we finetune it on the `CLEVR_HYP` dataset[^7] and then use it to predict the answer.
47
+
48
+ <figure id="fig:baselines">
49
+ <p><strong>Nomenclature</strong> I: Image, SG: Scene Graph, TT: Templated Text, <span class="math inline"><em>T</em><sub><em>A</em></sub></span>: Action Text, <span class="math inline"><em>Q</em><sub><em>H</em></sub></span>: Hypothetical Question, A: Answer, FP: Functional Program, ’: Updated Modality<br />
50
+ <br />
51
+ </p>
52
+ <div class="minipage">
53
+ <p><strong>Baseline 1:<br />
54
+ </strong><br />
55
+ </p>
56
+ </div>
57
+ <div class="minipage">
58
+ <p><strong>Baseline 3:<br />
59
+ </strong><br />
60
+ </p>
61
+ </div>
62
+ <div class="minipage">
63
+ <p><strong><br />
64
+ Baseline 2:<br />
65
+ </strong><br />
66
+ </p>
67
+ </div>
68
+ <div class="minipage">
69
+ <p><strong><br />
70
+ Baseline 4:<br />
71
+ </strong><br />
72
+ </p>
73
+ </div>
74
+ <figcaption>Graphical visualization of baseline models over <code>CLEVR_HYP</code> described above.</figcaption>
75
+ </figure>
76
+
77
+ In this method, we break down the QA task with mental simulation into two parts: first, learn to generate an updated image (one that incorporates the effects of the actions), and then perform visual question answering with respect to the updated image. We use the idea of Text Image Residual Gating proposed in [@vo2019composing] to implement the first part, with two important distinctions. First, their focus is on retrieval from a given database; we modify their objective and develop a text-adaptive encoder-decoder with residual connections to generate a new image. Second, the editing instructions in their CSS dataset [@vo2019composing] were quite simple, e.g. 'add red cube' and 'remove yellow sphere', where the red cube can be added anywhere in the scene; we modify their architecture to precisely place objects according to their relative spatial references (left/right/front/behind). Once we get the updated image, we feed it, along with the question, to LXMERT [@tan2019lxmert] finetuned over the CLEVR [@johnson2017clevr] dataset and predict the answer.
78
+
79
+ Instead of directly manipulating images, in this method we leverage image scene graphs to convert the image-editing problem into a graph-editing problem, conditioned on the action text. This is an emerging research direction to deal with changes in the visual modality over time or with new sources of information, as observed from recent parallel works [@chen2020graph; @he2020scene].
80
+
81
+ We first use Mask R-CNN [@he2017mask] to get the segmentation masks of the objects and predict their attributes (color, material, size, and shape) with an acceptance threshold of 0.9. The segmentation mask of each object, along with the original image, is then passed through ResNet-34 [@he2016deep] to extract precise 3D coordinates of the object, which gives us a structured scene graph for the image. Then we use the seq2seq-with-attention model originally proposed in [@johnson2017inferring] to generate functional programs (FP) for the action text and the question. The execution engine, implemented as a neural module network [@andreas2017neural], executes the programs on the scene graph to update the scene representation and answer questions; an illustrative functional program is sketched below.
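+
+ The following is a hypothetical example of what a functional program for an action text might look like; the actual operator vocabulary and program format used by our generator and executor may differ.
+
+ ```python
+ # Illustrative functional program for the action text
+ # "Paint the small green ball with cyan color."
+ action_program = [
+     {"op": "scene",        "inputs": []},
+     {"op": "filter_size",  "inputs": [0], "value": "small"},
+     {"op": "filter_color", "inputs": [1], "value": "green"},
+     {"op": "filter_shape", "inputs": [2], "value": "sphere"},
+     {"op": "change_color", "inputs": [3], "value": "cyan"},
+ ]
+ # The execution engine walks this list; each module consumes the outputs of the
+ # steps listed in "inputs", and the final module edits the scene graph in place.
+ ```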
82
+
83
+ We learn to update scene graphs according to the functional program for the action text using reinforcement learning[^8]. The reward function is associated with our ground-truth program executor and generates a reward if the prediction exactly matches the ground-truth execution. Once we get the updated scene representation, we use the neural-symbolic model[^9] proposed by [@yi2018neural] to obtain the final answer. It is notable that [@yi2018neural] achieved near-perfect performance on the CLEVR QA task in addition to being fully explainable.
84
+
85
+ In this section, we benchmark the models described above on `CLEVR_HYP`. The dataset is formulated as a classification task with exactly one correct answer, so we use standard accuracy as the evaluation metric. We then analyze their performance according to question and action types.
86
+
87
+ ::: table*
88
+ [Results table not recoverable from the source extraction; referenced below as Table \[tab:tab3\].]
89
+ :::
90
+
91
+ Quantitative results from the above experiments are shown in the top part of Table [\[tab:tab3\]](#tab:tab3){reference-type="ref" reference="tab:tab3"}. Among the methods described above, the scene graph update model has the best overall performance of 70.5% on the original test data. The text-editing model is best on the balanced set, but shows poor generalization when two actions or two reasoning capabilities have to be combined. `CLEVR_HYP` requires models to reason about the effects of hypothetical actions taken over images; LXMERT is not directly trained for this objective and therefore struggles on this task. The poor performance of the text-only baseline is due to its limited ability to incorporate detailed spatial locations into the templates that we use to convert an image into a machine comprehension passage.
92
+
93
+ Two of our models (scene graph update and text-editing image) are transparent enough to visualize intermediate changes in the scene after actions are performed. We analyse their ability to understand actions and make appropriate changes, as shown in the lower part of Table [\[tab:tab3\]](#tab:tab3){reference-type="ref" reference="tab:tab3"}. For the scene graph method, we compare the ground-truth functional program with the generated program and measure their exact-match accuracy. For the text-editing image method, we generate scene graphs for both images (the original image and the image after text-editing) and compare them. For attributes, we use exact match, whereas for location information we match only on the basis of relative spatial location.
94
+
95
+ Both the scene graph and text-editing models do quite well on 'remove' and 'change' actions, whereas they struggle when new objects are added or existing objects are moved around. The observation is consistent when multiple actions are combined: the remove+change combination is performed with the highest accuracy, whereas other combinations of actions achieve relatively lower performance. This leads to the conclusion that understanding the effects of different actions is of varied complexity. Most models demonstrate better performance on counting, existence and attribute-query questions than on comparison questions. The scene graph update and text-editing methods show a performance drop of 6.1% and 9.1% respectively when multiple actions are performed on the scene. However, there is less of a performance gap for models on 2HopQ$_H$ compared to the test set, suggesting that models generalize better with respect to multiple reasoning skills than with respect to complex actions.
2104.06669/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2020-10-06T01:52:18.138Z" agent="5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36" version="13.7.7" etag="daKe8lADTjgdbDgag99b" type="google"><diagram id="KOXTKUxQAqmNpyPhJRlC">7VxRc6O2Fv41nj6FkRBgeIyTdfvQzuw0O3N7n+4oIGxuALkgJ3Z/fSVAGIEckxhsZ+1tdw0HISSd73w6OjowQQ/J5tcMr5Z/0IDEExMEmwl6nJim5dmQ/wjJtpR4JioFiywKShHcCZ6if0glBJV0HQUkVwoySmMWrVShT9OU+EyR4Syjb2qxkMbqU1d4QTqCJx/HXel/ooAtSylyANhd+I1Ei2X1aMuVVxIsS1eCfIkD+tYQoW8T9JBRysqjZPNAYjF6cmDK++Z7rtYty0jK+txglje84nhdda5qF9vK3pI0uBeDxs/8GOd55E/QbMmSmAsgP6y7UJywjL6QBxrTrLgbmY9TMS5oFkZx3JA7AHtwyuVkE7G/qrvF8X/5MTDs6uxRgAbIk608SVm2/at50rhLnO5uK87kfWXrpM4sLglwviRB9fiy6yRoaT+n68yvRFYFN5wtiASW1x10WKuSGwGhCeGt4EXedmixKwAsGziRsozEmEWvaitwBdpFXV39hO804g82QWVgd9CqKqrsC9Y1y0rKLlX3NeHRrspr1wRaNZUj0amJHzR6vhMV8NNDEXWRxzXxVJ3SjC3pgqY4/raTzvIXwvxlpd1dkd8pXVU6/T9hbFtRCF4zqmJXgg8cD74PQQ91DYIg3wVAZ0Iz4NnFFX+dvdZgVczuc8h1egO3NyIV/b+jbEvDO07MhEVGr/xwwYpelaJ8hVMFGc7fa0GQs5Cm7C4vlHvPC5hwtdldlLVAg1/6QZNf8rqv4khWLuoo5go53rv7URgi/rdb5YrTIFcEH4I3vJUV8S6XdamN5+Ky/R2x0tMW9BnZsBbNtmHhFH+6QApt8R+X4zhapIK1eY8Jvzh7JRmL+DR2X11IoiAozOhtGTHytMIFVt74pM1lGV2nQYsaRQVks3eq2QMeySWOBVUyAU5FJg1etGShJjE66HjE2Z+gF9Xgvg7XdGfk44inB704XXpxz0cvzl56qcz9k1xici5xcCLsI33Oxc+PJRE94CMumsz/PkeifhoKb4+mxcUcvxZssV4Ze8niKuwfqfYPXZ39o3Hsf3qz/zHt39U4xqA3RAYnAHckAkBdAqCJQDAWPgVbEmH7abwtfkixMBWtSpckK3gBp1vJDn0cEN8HQHjc7WaIurREcuUEYxq2SjEI6CimLjY0yXg3khmTZCSjKCzTHyeDs4xsz+A0Y3Vo5jdSdMAnUelLkJSuF30oZBoCoFvDLOvV1fq5Zgl7Vvos9mNz+dIowMd+D8PQQi8vBedFScGD4rEgzEqCpNnLlZNTy/uxzBN6P9JKBkeqvWdCXJcLZD5AtTs8yoQnp9iVWIiDkNcop+FwnZIMx1cOOqs9I9qWDnbWWDMi1EWYPwY8ZKvAAyH21Rt+xRnmxBV0cfM0lPZT4c6piq9EQ+kcDKRzy7YNx1OU7nldnTuWgaYanYMBdI5GIht4GrJZRvnOacepT3Zr+RsHfRyPHadcF/ezzdGccmhdileu7nGZPd1yqLjlhze5zuGY2xrHHJ3RMbdHYiBNAPAc6//yQe9x0o1yDoYax6Sc4wPQR3k9v1yd2wPdaUvnWqdHr/EhfJ6LiS//vJOMp5lk7DNOMu5lqvxTGv8i+jbPuKcg23OaTYUB0xXKzcl+SQvcQwkKeuT/pIQExR231c1+V6Og2IN7Dra5Kze0s2GeLgpdOrs4zgUgSlgNEYiud8p7BJdzmigx5sktxLzf7dXFmEd0e2sWv7AZ8ZNOkHmJXpBpXtasKNtzmp2F1qxYUtAuv0byUZNTyhmrEc27ccTBDYExOUIXHG6TxtdJOZ+On3OuSa2TY3ZBOeftTHEIoITQh7POUQek/P92bcNlnmuzkb8uJMdHpC7Zq/9C/IbIw4jUxa/Pg0hgeJ6roNLhC+Ez4FI84zufw/kwion0sae/pMsZ+oC/dC6wIuvrgFUX7D4GrF+AJPfbUx9IypcWP7db9iX5UxMm6SJ8QExOxyZQBC1kWxq0WlPHPS9aUXMh3XNKl3k6Cib7vyB2IkyaU0d43E0cubC1e9Mbk1Du2dR5Ce3XJAfEoy4hvYVHjreVOIyS4v3jJhTbi0YmwiG19Hf8TOLvNI9YRMXVZ8oYTXiBWFyYYf9lUSwkm+vT4g8vUjzsPl+V70kLLGF5EkYbwXezqj2PS8bEC9b3YhzMuR+kwIh8moYRX6Jmhi9icPMAM8x/hDwXy2vMfG5C5hyJ0VtF/xMvQBurdDEZYClrIlPRnzftLmRlkSYYpeyoVezBvYBe27Ugly+3SxlULmsDqQBMQ10gVUno7RF1cXRRl6i5daxk/l7ZhvJd7de8A68Rk+iQLqr/8zMGH6o8uSN3/GJCxOxkzi3OnXPy9xrH+R003aHoQ5Mx4CBD+mFNDnENV6PkhvgoPesys1t6jlvqyspW1CYTk5B1vIDKhSkAkmwW4lMcRkL9l/XKSHAmfvx1Fm9nWWFHWscHNN3e4iTKuKbLRqQiPK/zjDg/zefzgawQWl0tde3Q1Pgc1gA0L6eKo2n+IBm72hD4RHSMtxKg+gjWR1Z9ZF4fO0PkGXJDXgJDflemAQzo7IoNztA9ItkXabncQx7fcqHXsVu+3nNOZ7rvZ/1WsD5HBs6BXUMlztBaXALgYv0W4kfiD0iTrSvdmHNsIaKD2bpDsaynY9kfZSpBGIuVyrXxqO2oXi50NCkDUNrt4BTqXKqRfjoSc6KUAMV2R0iC3BMzcdqflnJ6flrqMx+E0sXwTkcLfxKa8WVKJFYb4M3IDNbOR7g+tkCorX4NW9RbLoOzxcGvOtyCLl8cYBBZbbceOui0gRddaO/nD7zsD9WOvsQ320t86GhyknQLBXMAfVs9Am0q0HWm8F6SWD1ardl/PkdosFF0236c9AoaQ+hq3Dg0xBAe/LrAuA48SXPhwT8sccrhf32kCTr2o3PiLaDN6RtC/7ow2UlMyPOGMyEIuzx0SiM6+Nb8qEb0LY6jVS7efL8y84Fuaxv41MbTI/twHON5eCjeLh5kFBE8p+mcN4D0QDMSXp3doGnrg0bj2g0/3X0WvYwm7L4uj779Cw==</diagram></mxfile>
2104.06669/paper_text/intro_method.md ADDED
@@ -0,0 +1,60 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ From the onset of language, storytelling has been crucial to the transmission of knowledge [\(Ramanujan 1991\)](#page-8-0). It has been well-established that readers remember only an abstract representation of stories [\(Schank 1972\)](#page-9-0). Before the printing press, classes engaged with oral teaching of scriptures, such as rabbis, underwent extensive training to reproduce them with no distortion [\(Bos 1995\)](#page-7-0). Formally analyzing story structure commenced with the ancients, through works like Aristotle's *Poetics* [\(Halliwell et al. 1998\)](#page-7-1). These studies led to the concept of a *narrative*, distinct from story events.
4
+
5
+ For a story, there are two *orders*: the chronological order of events as they happened and their order as presented in text. These have been analyzed under different names [\(Propp](#page-8-1) [2010\)](#page-8-1). We refer to them as *story order* and *narrative order*,
6
+
7
+ <span id="page-0-1"></span>![](_page_0_Figure_10.jpeg)
8
+
9
+ Figure 1: Example of our task and dataset, with original input story S on the left, target narrative order π<sup>i</sup> <sup>0</sup> on the top, and human rewritten story S 0 on the right.
10
+
11
+ or *story* and *narrative*, respectively. [Genette](#page-7-2) [\(1983\)](#page-7-2) enlists typical orders observed in writing. A *linear* order narrates events in the same sequence as story order. The *in medias res* order starts with events in the middle, goes back to the start, then proceeds to the end. Changing from near-*linear* to more "interesting" orders is prevalent in cinema, e.g. *The Imitation Game* starts with Turing's post-WWII 1951 interrogation. *Memento* and *Naked Lunch* are known for their esoteric narrative orders - loosely described as retrograde (reverse of linear) and syllepsis (lacking chronological logic), respectively.
12
+
13
+ [Morgan](#page-8-2) [\(2017\)](#page-8-2) explains how narratives surpass "mere chronicle". The narrative order of presenting materials in scientific *explanations* directly affects how researchers *interpret* and *understand* them, since the order implies not only temporal but other inferences about causality, processes of change, etc. Narrative order can thus influence *model explainability*, especially for explanation generation [\(Rajani et al. 2019\)](#page-8-3), a recent area-of-interest [\(Wiegreffe and Marasovic 2021\)](#page-9-1).
14
+
15
+ In this work, we do not delve into the complex and somewhat subjective question of *which* narrative order is most suitable or "interesting". We focus on *how* a given story in *linear* narrative order can be rendered in a specified, *non-linear*, *target* order while preserving plot.[1](#page-0-0) We call this *Narrative*
16
+
17
+ <sup>\*</sup> Equal Contribution by the two authors Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
18
+
19
+ <span id="page-0-0"></span><sup>1</sup>Code+data at<github.com/vgtomahawk/NAREORCamReady>
20
+
21
+ *Reordering*, or NAREOR. To the best of our knowledge, we are the first to propose and investigate this task.
22
+
23
+ Our work is not entirely adrift from past research in this vein. Montfort (2007) tries generating fiction narratives from basic existent-event info with a special focus on narrative order, using a rule- and planning-based approach. Unlike our work, their rule-based system does not involve learning. Moreover, since their setting generates text in a given narrative order from unstructured story elements rather than reordering an existing story, it does not require solving challenges such as disentangling events from stories, which are inherent in NAREOR.
24
+
25
+ Formally, NAREOR involves reordering a story S with sentences $s_1, s_2, ..., s_n$ to a reordered story S' with sentences $s_1', s_2', ..., s_n'$ according to a given target narrative order $\pi_{i'}$ . $\pi_{i'}$ is a permutation $\{\pi_{i'}|\pi_{i'}:i'\to f(i'); 1\le i'\le n; f(i')=i\}$ mapping from target sentence² indices i' to original sentence indices i, where f is a one-to-one and onto function from $\{1,2\dots n\}$ to itself. In practice, we write $\pi_{i'}$ as the sequence $\{i=f(i')\}_{i'=1}^{i'=n}$ (f and i' become implied).
26
+
27
+ NAREOR's challenges are evident from the example in Figure 1. Simply reordering sentences is far from sufficient, as rewritten text must be adjusted to handle coreference, tense, and other discourse dependencies. For example, narrative order affects tense since it can change the first 2 of 3 Reichenbach times (Reichenbach 1947) that together determine tense - speech, reference, and event time. NAREOR involves pinpointed and critical edits; a single missed or incorrect edit can result in an entirely different or invalid plot. Since $\pi_{i'}$ can be seen as a control, NAREOR is a controllable generation task (see Appendix A for discussion).
28
+
29
+ NAREOR is also a novel form of story-level paraphrasing and can be used to generate more interesting variations of stories (§5.1). Outputs can also serve as challenge sets for temporal or event-based tasks such as sentence ordering to assess the temporal reasoning capabilities of models (§6). NAREOR can also be potentially useful for pedagogical setups related to language skills such as essay writing, and applications to medicine involving clinical narratives (§6).
30
+
31
+ To complement NAREOR, we present a dataset, NAREORC, with human rewritings of stories from ROCStories (Mostafazadeh et al. 2016a) in non-linear orders. We conduct a thorough analysis, examining various ways humans modify the text when reordering (§2). We perform experiments with BART, T5, and GPT-2 on NAREORC using novel, task-motivated training methods we propose (§3). We evaluate our models with both an automatic and a human evaluation along with qualitative analysis (§5). We demonstrate that our proposed training methods are effective but have room for further improvement. We illustrate that NAREOR is indeed a challenging task with potential for further exploration.
32
+
33
+ **Source Corpus:** ROCStories has $\approx 98.5$ K five-sentence English stories. For the dev and test splits, each example
34
+
35
+ contains a four-sentence story prefix with a coherent and an incoherent one-sentence ending. We treat the coherent endings as the fifth sentences for NAREORC's dev and test stories.
36
+
37
+ **Assigning Target Narrative Orders:** The target narrative order $\pi_{i'}$ is not part of the ROCStories input. We devise a randomized procedure to assign a reasonable $\pi_{i'}$ for each example (a small sketch of this procedure follows). We sample 3 permutations from the set of $n!-1$ non-identity permutations.<sup>3</sup> We compute Kendall $\tau$ correlations (Kendall 1938) between the identity permutation $I_n$ , $\{1,2,3,4,5\}$ , and each of the three permutations, retaining the lowest as $\pi_{i'}$ . We prefer this to sampling at random because we want our examples to be sufficiently non-trivial w.r.t. the task.
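+
+ A minimal sketch of this assignment procedure, assuming SciPy is available for the Kendall $\tau$ computation (names and the seed are illustrative):
+
+ ```python
+ import random
+ from itertools import permutations
+ from scipy.stats import kendalltau
+
+ def assign_target_order(n=5, num_candidates=3, seed=0):
+     """Sample a few non-identity orders and keep the least linear one."""
+     rng = random.Random(seed)
+     identity = tuple(range(1, n + 1))
+     candidates = [p for p in permutations(identity) if p != identity]
+     sampled = rng.sample(candidates, num_candidates)
+     # Retain the sampled permutation least correlated with the identity order,
+     # i.e. the most non-trivial target narrative order among the candidates.
+     return min(sampled, key=lambda p: kendalltau(identity, p).correlation)
+
+ pi = assign_target_order()  # e.g. (5, 4, 1, 3, 2), depending on the seed
+ ```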
38
+
39
+ **Supervised & Unsupervised Splits:** We set aside 600, 200, 200 stories from train, dev, and test splits of ROCStories. These act as NAREORC's trainSup, devSup, and testSup splits, for which we collect human references. Remaining stories in each ROCStories split are retained as trainUnsup, devUnsup, and testUnsup of size 95161, 1671, 1671.
40
+
41
+ **Human Annotation:** For trainSup and devSup, we annotate one reference per example. For testSup, we collect two each to help reference-based metrics. We conduct our study on AMT. To understand task difficulty, we ask a "Hardness" question with options VeryEasy, Easy, Moderate, Hard, VeryHard. On average, annotators found $\approx 70\%$ of rewritings to be Moderate or Hard, demonstrating that NAREOR is quite difficult even for humans. More details in Appendix B.
42
+
43
+ # Method
44
+
45
+ We introduce two task-specific training methods.
46
+
47
+ This is partially inspired by how humans rewrite; a common approach is to first reorder sentences naively (simply swap positions), then make other changes. NAR-d attempts to mimic this, learning to convert from naive orderings to high-quality text. It involves two stages of model training.
48
+
49
+ - 1. **Denoise-1S:** Stage 1 is unsupervised training through story-level denoising. We use trainUnsup, which has no human-written reorderings, and simulate them using the original human-written ROCStories (the outputs during training). Deletion and swapping of tokens are used to create inputs from these stories that simulate naive reorderings; this noising aims to emulate the reverse of the content editing that occurs during NAREOR. Specifically, we randomly delete 12.5% of tokens and swap another 12.5%. We found human-rewritten stories were, on average, $\approx 25\%$ different from the originals in a combination of token length (longer) and swaps, and we split this between deletion and swapping to approximate naively-reordered stories (a sketch of this noising appears after this list). Story sentences S are first reordered as per $\pi_{i'}$ to produce $S'_{naive}$ , then each is edited to fit the new narrative. We swap tokens because humans often swap words such as coreferent mentions based on how the narrative order changes. Hence, this stage learns to denoise text by converting noised versions to human-written text.
50
+ - 2. **Denoise-2S:** The second stage is supervised training atop the model above. The inputs are the 600 original stories in trainSup, with sentences naively reordered as per target narrative order $\pi_{i'}$ to $S'_{naive}$ , and the outputs are the human rewritings of these. The model learns to further translate from naively-reordered text to fluent human-written text.
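+
+ A minimal sketch of the token-level noising used in Denoise-1S (the exact noising in the released code may differ):
+
+ ```python
+ import random
+
+ def noise_story(tokens, delete_frac=0.125, swap_frac=0.125, seed=0):
+     """Simulate a naively-reordered story by swapping and deleting tokens."""
+     rng = random.Random(seed)
+     tokens = list(tokens)
+     # Randomly swap a fraction of tokens with another random position.
+     n_swap = int(len(tokens) * swap_frac)
+     for _ in range(n_swap):
+         i, j = rng.randrange(len(tokens)), rng.randrange(len(tokens))
+         tokens[i], tokens[j] = tokens[j], tokens[i]
+     # Randomly delete a fraction of the (possibly swapped) tokens.
+     n_keep = len(tokens) - int(len(tokens) * delete_frac)
+     keep_idx = sorted(rng.sample(range(len(tokens)), n_keep))
+     return [tokens[i] for i in keep_idx]
+
+ noisy = noise_story("I grabbed her hand and she pulled me on stage .".split())
+ ```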
51
+
52
+ Unlike NAR-d, in NAR-r the models themselves handle the reordering given the target order, rather than relying on naive reordering beforehand.
53
+
54
+ • **Input Encoding Scheme:** We describe how the task input $\{S, \pi_{i'}\}$ is encoded as a token sequence for both Stage-1 and 2 training. To enable the model to distinguish different
55
+
56
+ - sentences, we prefix each $s \in S$ with a tag from <a> to <e>. We specify $\pi_{i'}$ as a sequence of these, separated from S by <sep>. NAREOR involves rearranging mention types among coreference chains (see §2.2), so we use NeuralCoref (HuggingFace 2020) to detect these chains. For each, we assign a unique uppercase tag (<X>) to replace its mentions. At the end of the input, we list each tag and the head mention of its coreference chain in order. We then append <st> to mark the end of the input. An illustration of the scheme follows (a small sketch of assembling such an input appears after this list): <a> Since I had front seat tickets, I was able to directly see <X1>. <b> <X1> tried to reach out with <X1> <X2>. <c> I grabbed <X2> and <X1> pulled me on stage. <d> <X1> began to sing. <e> The concert had started. <sep> <e> <d> <a> <b> <c> <X1> The music artist <X2> her hand <st>
57
+ - **Reorder-1S:** We use examples from trainUnsup for stage 1. It is problematic to train for the forward direction of our task $S, \pi_{i'} \to S'$ since S' is not known. Approximating S' using $S'_{naive}$ would hurt output fluency. We instead train in the inverse direction $S'_{naive}, \pi_{i'}^{-1} \to S$ , where $\pi_{i'}^{-1}; \pi_{i'}^{-1}(\pi_{i'}) = I_n$ is the inverse permutation of $\pi_{i'}$ . To reduce train-test mismatch, we use the inverse formulation half the time, and an autoencoding one, i.e. $S, I_n \to S$ the other half.
58
+ - **Reorder-2S:** trainSup examples are used to further finetune on reorder-1S. We train in the task direction $S, \pi_{i'} \to S'$ .
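+
+ The following is a rough sketch of how such an input string could be assembled; the tag names follow the description above, while the exact spacing and ordering details of the actual implementation may differ.
+
+ ```python
+ SENT_TAGS = ["<a>", "<b>", "<c>", "<d>", "<e>"]
+
+ def encode_input(sentences, target_order, coref_heads):
+     """sentences: story sentences with coref mentions already replaced by <X1>, <X2>, ...
+     target_order: 1-indexed target narrative order, e.g. [5, 4, 1, 2, 3]
+     coref_heads: mapping such as {"<X1>": "The music artist", "<X2>": "her hand"}"""
+     tagged = " ".join(f"{SENT_TAGS[i]} {s}" for i, s in enumerate(sentences))
+     order = " ".join(SENT_TAGS[i - 1] for i in target_order)
+     corefs = " ".join(f"{tag} {head}" for tag, head in coref_heads.items())
+     return f"{tagged} <sep> {order} {corefs} <st>"
+ ```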
59
+
60
+ We choose several pretrained generation models: GPT-2, BART, and T5. We finetune all using both our training methods to produce denoise-1S (d-1S), denoise-2S (d-2S), reorder-1S (r-1S), and reorder-2S (r-2S) versions. GPT-2 (Radford et al. 2019) is a Transformer-based language model trained on *WebText*. BART (Lewis et al. 2020) and T5 (Raffel et al. 2020) are Transformer seq2seq models. BART is trained as a denoising autoencoder to reconstruct original from noised text. T5 is designed to be effective for transfer learning. We use HuggingFace's implementations of their base versions.<sup>4</sup>
2106.04335/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-05-27T05:24:22.751Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.2 Safari/605.1.15" etag="fD-lnq_55eModc016E8J" version="14.7.2" type="device"><diagram id="REUx9BVMofb2uysn8-iU" name="Page-1">7Vxbd5s4EP41Pts+JIebAD/60qS7m2zTTc9psi89Msg2CUasLCf2/voVF4EFskNi4UublxiNxBhmRt+nGcnpmIPZ8pLAeHqNfRR2DM1fdsxhxzAc12V/E8EqEwBdywQTEviZSC8Ft8F/KBfyYYvAR3NhIMU4pEEsCj0cRcijggwSgp/FYWMcit8awwmqCW49GNal3wOfTjOpazil/DMKJlP+zbrdzXpmkA/OVcyn0MfPmSh9OfNTxxwQjGl2NVsOUJjYjtsls8DFht7iwQiKaJMbxt7X4MK5/uPR/3tGgtUdweTrmZ1peYLhIn/h/GHpiluA4EXko0SJ3jH7z9OAotsYeknvM3M5k03pLMy7x0EYDnCISXqv6UPkjj0mn1OCH9Faj+25aDRmPfXXyN/sCRGKlmui/LUuEZ4hSlZsSN5r8mBZVdrPpcf03DradM1bdi6DeZBMCtWlHdlFbkq5Wc+uLawPPO2+Rx7+ioafg8cf7hn/sjUzIp/FVd7EhE7xBEcw/FRK+6WhNdYqx1xhHOfmfUCUrvJJAhcUi8ZHy4DeJbefg7x1nytLrofL9cZqrXGDSMDeGxEui5gN7tYbqZ5zB/B2qSttcWWZl/k8MbZ5d44XxENbIpNPdkgmiG4ZZ2bjEvNujRWCQkiDJ3Faq3d8VzKh7JAmUwMzE6yHhP3vAvOOs3nq1B4boFvxsuxkV5Pkcw5ncRhEE66NPV2mMOuWhtsVHDEwFkIEhsEkYtce80fi734yywIGd728Yxb4fhaNiD0SHKX6Et/GOIhoai7Q74Ch1Ldb50JtPhegnX+LgIuyeX7GAtvWdWGu563G3s2V3yRvszYEj8dzFmZV9xfP0CgitsXxLgEBZAHhoxlOXzwOYWKjD53BRce1PjYNEEZHcXLprVhY+YiYL4P7KAOoq1EhgN7jJIWtLwvK1KBcnj37UAd1Rhi7HvKkjDBygQU0NYygW0CMElBnhK6EEJy2CEHv/hyEcN7tOiIp2EV7AymkrapGlUxhnCZTuO9MsWGSKGEKW7NFpuD+P16qMFqiCpajpLefGlfsKXsAYvZwcK4wD5o88OsM3Q1OHBvIghODLpLCPhMFsyH8g6OCf1PBulAK/4vYh5QFQq5sRHhHjMPAW50eK5gq8wfdsKzTyh/MlkjhCkESIdI0HnapAo3HY0O+5vftkQ1sNThuWCKOO5IikCbBcaAAx6WesySeO73amlGprWmHrq0dOJMy2cJ9nSE1zWmFImV5mEraBHXalJrbOS7aVLBGltJmhJbJ3Xenx48qkyZguY5KeuTQLOg0xNvbo06wMVb84Kn0rAo2ZZ8s7U7iaDSP07ZWFV3c9i6S106SsfnGMCvEwiPulzZqQayCR0yRR4qU6mA8oh8mzdqwt9KEDjICcprX8lTyhXsovkhv7RGSVi74gBwtNyNOZdnCw6gMmEzjW/FGTk6yFZ9Kcvrt9NhJV5m9ma6plJ74mYR9EZKCkm9T9hmm20OiOK8D7iHhe3mTpw2O0Q/JMb3+5e9w+PCne33x9M2Iv199+cdrbfMvbupFmiCH4CrRJRFOCrGC/3JRc6yQxYbIkyp861SrtKDmW0viWqOt5QM4yOrhLRt65eZdsci4X1+NqK3RylmgvnjYBpBHkmzKDkepoPNbCik6QS7n+ZSSQizQnd3IWyldb4ta5eh9ebMPEvYBcn1LRsKuMTJtRVVXvbJ7Zhl1znUluGy1hcvmOy43x2WjIS7bR4XLbaVZKS6fHizzHaKdYZmhclLNPnJUVlAB/ulRuVptOzgqOy057cvogQV4Eqe1/e6LRcR6cPR6r2pH6lUDiCeaCnetedXZp1f1BlucKPJ7ye8wEkwM4XweeJuoUyhs8q6XuLPzqiqqSu506ty5Da9e5M41JwKJE7lsx62YYv+2+lsJriJ77/yuMj7qioC8wlooygxTU/TaSq6pmeIDi5XcF8db1h4qv5YCUpIuSe4SS0U++3t/eguTAh7UnNwBruBYPrF2nBD80CkPL1dU0GJ2KduefDN4Gm8Az/0c2tsJPPV38NwFPI1K1dTsbgfP6vi9gKfeVp0t2S/j6HmC5TZdZb2NxarIi4rgswB5HjFtbKbJzSPLKdQsPtsp3LT+QxjJka5tvzM4VlCtYuFbQbWGzhtAVdn2/+bzRrvhWB9Sb1pPcCmB0TxIUtzNx4mOF9z47FVStdLbOLXW3RuSyc4FvBbJSlCyTEOApXNNexM0bVhcHvTI0baf2L7jmWI8U1C0+3XwjM9hBXimdYHdAp5x3ucLwIqC9vDNaosYOwZbfFofOmAQBz86Tn/YcYYfM+EveEBG/Ml7sQVwqAMyVltpXen1b1NE4cc6kvyiIWA6xr5CgDXL/1yU4UT575/MT/8D</diagram></mxfile>
2106.04335/main_diagram/main_diagram.pdf ADDED
Binary file (18.1 kB). View file
 
2106.04335/paper_text/intro_method.md ADDED
@@ -0,0 +1,100 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Bayesian optimization (BO) has served as a powerful and popular framework for global optimization in many real-world tasks, such as hyperparameter tuning [1–4], robot control [5], automatic material design [6–8], etc. To search for global optima under a small sampling budget and potentially noisy observations, BO imposes a Gaussian process (GP) prior on the unknown black-box function and continually updates the posterior as more samples are collected. BO relies on *acquisition functions* (AFs) to determine the sample location, i.e., locations with larger AF values are prioritized over those with smaller ones. AFs are often designed to capture the trade-off between exploration and exploitation of the global optima. The design of AFs has been extensively studied from various perspectives, such as optimism in the face of uncertainty (e.g., GP-UCB [9]), optimizing information-theoretic metrics (e.g., entropy search methods [10–12]), and maximizing one-step improvement (e.g., expected improvement or EI [13, 14]). As a result, AFs are often handcrafted according to different perspectives of the trade-off, and the best-performing AFs can vary significantly under different types of black-box
4
+
5
+ functions [11]. This phenomenon is repeatedly observed in our experiments in Section 4. Therefore, one critical issue in BO is to design an AF that can adapt to a variety of black-box functions.
6
+
7
+ To achieve better adaptability to new tasks in BO, recent works propose to leverage *meta-data*, the data previously collected from similar tasks [15–17]. For example, in the context of hyperparameter optimization under a specific dataset, the meta-data could come from the evaluation of previous hyperparameter configurations for the same learning model over any other related dataset. In [15], meta-data is used to fine-tune the initialization of the GP parameters, thereby achieving better GP model selection for each specific task. However, the potential benefit of using meta-data for more efficient exploration via AFs is not explored. On the other hand, in [16, 17], meta-data is split into multiple subsets and then used to construct a transferable acquisition function based on some off-the-shelf AF (e.g. EI) and an ensemble of GP models. Each of the GP models is learned over a separate subset of the meta-data. However, to achieve effective knowledge transfer, this approach would require a sufficiently large amount of meta-data, which significantly limits its practical use. As a result, there remains a critical unexplored challenge in BO: *how to design an AF that can effectively adapt to a wide variety of black-box functions given only a small amount of meta-data?*
8
+
9
+ ![](_page_1_Figure_2.jpeg)
10
+
11
+ Figure 1: An illustration of the overfitting issue of DQN+MAML trained with GP functions: (a) Training curves of DQN+MAML (As EI and Random do not require any training, their lines are flat); (b)-(d) Average simple regrets of DQN+MAML and EI in testing under benchmark functions.
12
+
13
+ To tackle this challenge, we propose to rethink the use of meta-data in BO through the lens of few-shot acquisition function (FSAF) learning. Specifically, our goal is to learn an initial AF model that allows few-shot fast adaptation to each specific task during evaluation. Inspired by the similarity between AFs and Q-functions, we use a deep Q-network (DQN) as a surrogate differentiable AF, i.e., the Q-network would output an indicator for each candidate sample point given its posterior mean and variance as well as other related information. Given the parametric nature and the differentiability of a Q-network, it is natural to leverage optimization-based few-shot learning approaches, such as model-agnostic meta-learning (MAML) [18], for the training of few-shot AFs. Despite this natural connection, we find that a direct combination of standard DQN and MAML (DQN+MAML) is prone to overfitting, as illustrated by the comparison of training and testing results. Note that in Figure 1 (with detailed configuration in Appendix B), DQN+MAML achieves better regret than EI on the training set but suffers from much higher regret during testing. We hypothesize this is because both DQN and MAML are prone to overfitting [19-22]. This issue could be particularly critical in BO due to the uncertainty required in adaptive sampling. Based on our findings and inspired by [23], we propose a Bayesian variant of DQN with the following three salient features: (i) The Bayesian DQN learns a distribution of parameters of DQN based on the Kullback-Leibler (KL) regularization framework. Through this, the problem of minimizing the Q-learning temporal-difference (TD) error in DQN is converted into a Bayesian inference problem. (ii) For the prior of the Bayesian DQN, we propose to use a demonstration policy induced by an off-the-shelf AF to further stabilize the training process; (iii) We then use the chaser meta-loss in [20], which serves as a natural companion to the proposed Bayesian DQN. As shown by the experimental results in Section 4, the proposed design effectively mitigates overfitting and achieves good generalization under various black-box functions. Moreover, with the proper design of the Q-networks, the proposed FSAF is general-purpose in the sense that it is agnostic to both the dimension and the cardinality of the input domain.
14
+
15
+ The main contributions of this paper can be summarized as follows:
16
+
17
+ • We consider a novel setting of few-shot acquisition function learning for BO and present the first few-shot acquisition function that can use a small amount of meta-data to achieve better task-specific exploration and thereby effectively adapt to a wide variety of black-box functions.
18
+
19
+ - Inspired by the similarity between AFs and Q-functions, we view DQN as a parametric and differentiable AF and use it as the base of our FSAF. We identify the important overfitting issue in the direct combination of DQN and MAML and thereafter present a Bayesian variant of DQN that mitigates overfitting and enjoys stable training through a demo-based prior.
20
+ - We extensively evaluate the proposed FSAF in a variety of tasks, including optimization benchmark functions, real-world datasets, and synthetic GP functions. We show that the proposed FSAF can indeed effectively adapt to a variety of tasks and outperform both the conventional benchmark AFs as well as the recent state-of-the-art meta-learning BO methods.
21
+
22
+ # Method
23
+
24
+ Our goal is to design a sampling policy to optimize a black-box function $f: \mathbb{X} \to \mathbb{R}$ , where $\mathbb{X} \subset \mathbb{R}^d$ denotes the compact domain of f. f is black-box in the sense that there are no special structural properties (e.g., concavity or linearity) or derivatives (e.g., gradient or Hessian) about f available to the sampling policy. In each step t, the policy selects $x_t \in \mathbb{X}$ and obtains a noisy observation $y_t = f(x_t) + \varepsilon_t$ , where $\varepsilon_t$ are i.i.d. zero-mean Gaussian noises. To evaluate a sampling policy, we define the $simple\ regret$ as $\operatorname{Regret}(t) := \max_{x \in \mathbb{X}} f(x) - \max_{1 \le s \le t} f(x_s)$ , which quantifies the gap between the global optimum and the best sample up to step t. For convenience, we let $\mathcal{F}_t := \{(x_i, y_i)\}_{i=1}^{t-1}$ denote the observations up to step t.
25
+
26
+ Bayesian optimization. To optimize f in a sample-efficient manner, BO first imposes on the space of objective functions a GP prior, which is fully characterized by a mean function and a covariance function, and then determines the next sample based on the resulting posterior [24, 25]. In each step t, given $\mathcal{F}_t$ , the posterior predictive distribution of each $x \in \mathbb{X}$ is $\mathcal{N}(\mu_t(x), \sigma_t^2(x))$ , where $\mu_t(x) := \mathbb{E}[f(x)|\mathcal{F}_t]$ and $\sigma_t(x) := (\mathbb{V}[f(x)|\mathcal{F}_t])^{\frac{1}{2}}$ can be derived in closed form. In this way, BO can be viewed as a sequential decision making problem. However, it is typically difficult to obtain the exact optimal policy due to the curse of dimensionality [24]. To obtain tractable policies, BO algorithms construct AFs $\Psi(x;\mathcal{F}_t)$ , which resort to maximizing one-step look-ahead objectives based on $\mathcal{F}_t$ and the posterior [24]. For example, EI chooses the sample location based on the improvement made by the immediate next sample in expectation, i.e., $\Psi_{\mathrm{EI}}(x;\mathcal{F}_t) = \mathbb{E}[\max(f(x) - \max_{1 \le i \le t-1} f(x_i), 0)\,|\,\mathcal{F}_t]$ , which enjoys a closed form in $\mu_t(x)$ , $\sigma_t(x)$ , and $\max_{1 \le i \le t-1} f(x_i)$ .
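+
+ For concreteness, the standard closed form of EI can be evaluated from the posterior mean and standard deviation as in the sketch below (a generic formula, not specific to our method; function and variable names are illustrative and SciPy is assumed available):
+
+ ```python
+ import numpy as np
+ from scipy.stats import norm
+
+ def expected_improvement(mu, sigma, y_best):
+     """Closed-form EI over candidate points given GP posterior mean/std."""
+     sigma = np.maximum(sigma, 1e-12)          # guard against zero variance
+     z = (mu - y_best) / sigma
+     return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)
+
+ # The candidate with the largest EI value is sampled next:
+ # x_next = candidates[np.argmax(expected_improvement(mu, sigma, y_best))]
+ ```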
27
+
28
+ **Meta-learning with few-shot fast adaptation.** Meta-learning is a generic paradigm for generalizing the knowledge acquired during training to solving unseen tasks in the testing phase. In the few-shot setting, meta-learning is typically achieved via a bi-level framework: (i) On the upper level, the training algorithm is meant to determine a proper initial model with an aim to facilitating subsequent task-specific adaptation; (ii) On the lower level, given the initial model and the task of interest, a fast adaptation subroutine is configured to fine-tune the initial model based on a small amount of task-specific data. Specifically, during training, the learner finds a model parameterized by $\theta$ based on a collection of tasks $\mathcal{T}$ , where each task $\tau \in \mathcal{T}$ is associated with a training set $\mathcal{D}_{\tau}^{\text{tr}}$ and a validation set $\mathcal{D}_{\tau}^{\text{val}}$ . For any initial model parameters $\theta$ and training set $\mathcal{D}^{\text{tr}}$ of a task $\tau$ , let $\mathcal{M}(\theta, \mathcal{D}_{\tau}^{\text{tr}})$ be an algorithm that outputs the adapted model parameters by applying few-shot fast adaptation to $\theta$ based on $\mathcal{D}_{\tau}^{\text{tr}}$ . The performance of the adapted model is evaluated on $\mathcal{D}_{\tau}^{\text{val}}$ by a meta-loss function $\mathcal{L}(\mathcal{M}(\theta, \mathcal{D}_{\tau}^{\text{tr}}), \mathcal{D}_{\tau}^{\text{val}})$ . Accordingly, the overall training can be viewed as solving the following optimization problem:
29
+
30
+ $$\theta^* := \underset{\theta}{\operatorname{argmin}} \sum_{\tau \in \mathcal{T}} \mathcal{L}(\mathcal{M}(\theta, \mathcal{D}_{\tau}^{\operatorname{tr}}), \mathcal{D}_{\tau}^{\operatorname{val}}). \tag{1}$$
31
+
32
+ By properly configuring the loss function, the formulation in (1) is readily applicable to various learning problems, including supervised learning and RL. Note that the adaptation subroutine is typically chosen as taking one or a few gradient steps with respect to $\mathcal{L}(\cdot,\cdot)$ or some relevant loss function. For example, under the celebrated MAML [18], the adaptation subroutine is $\mathcal{M}(\theta, \mathcal{D}_{\tau}^{tr}) \equiv \theta - \eta \nabla_{\theta} \mathcal{L}(\theta, \mathcal{D}_{\tau}^{tr})$ , where $\eta$ denotes the learning rate.
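+
+ As a minimal, self-contained illustration of this bi-level structure, the toy sketch below performs MAML-style meta-updates with a single inner gradient step per task; the quadratic loss, analytic gradients, and learning rates are placeholders rather than the setting used in this paper.
+
+ ```python
+ def loss(theta, target):          # toy per-task loss L(theta, D)
+     return 0.5 * (theta - target) ** 2
+
+ def grad(theta, target):          # its analytic gradient
+     return theta - target
+
+ def maml_step(theta, tasks, inner_lr=0.1, outer_lr=0.01):
+     """One meta-update over tasks given as (train_target, val_target) pairs."""
+     meta_grad = 0.0
+     for d_tr, d_val in tasks:
+         adapted = theta - inner_lr * grad(theta, d_tr)       # M(theta, D_tr)
+         # Chain rule: d L(adapted, D_val) / d theta = grad(adapted, D_val) * (1 - inner_lr)
+         meta_grad += grad(adapted, d_val) * (1.0 - inner_lr)
+     return theta - outer_lr * meta_grad / len(tasks)
+
+ theta = 0.0
+ for _ in range(100):
+     theta = maml_step(theta, tasks=[(1.0, 1.2), (-0.5, -0.4)])
+ ```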
33
+
34
+ **Reinforcement learning and Q-function.** Following the conventions of RL, we use $s_t$ , $a_t$ , and $r_t$ to denote the state, action, and reward obtained at each step t. Let R and $\gamma$ be the reward function and the discount factor. The goal is to find a stationary randomized policy that maximizes the total expected discounted reward $\mathbb{E}[\sum_{t=0}^{\infty} \gamma^t r_t]$ . To achieve this, given a policy $\pi$ , a helper function termed Q-function is defined as $Q^{\pi}(s,a) := \mathbb{E}[\sum_{t=0}^{\infty} \gamma^t R(s_t,a_t)|s_0 = s,a_0 = a;\pi]$ . The optimal Q-function can then be defined as $Q^*(s,a) := \max_{\pi} Q^{\pi}(s,a)$ , for each state-action pair. One fundamental property of the optimal Q-function is the *Bellman optimality equation*, i.e.,
35
+
36
+ $Q^*(s,a) = \mathbb{E}\left[r_t + \gamma \max_{a'} Q^*(s_{t+1},a') \middle| s_t = s, a_t = a\right]$ . Then, the Bellman optimality operator can be defined by $[\mathcal{B}\,Q](s,a) := R(s,a) + \gamma \,\mathbb{E}_{s'}[\max_{a' \in \mathcal{A}} Q(s',a')]$ . It is known that $Q^*$ is the unique fixed point of $\mathcal{B}$ . We leverage this fact to describe the proposed FSAF in Section 3.2.
37
+
38
+ Based on the conceptual similarity between acquisition functions and Q-functions, in this section we present how to cast a deep Q-network as a parametric and differentiable instance of acquisition function. To begin with, we consider the Q-network architecture with state and action representations as the input, as typically adopted by Q-learning for large action spaces [26].
39
+
40
+ State-action representation. In BO, an action corresponds to choosing one location to sample from the input domain $\mathbb{X}$ , and the state at each step t can be fully captured by the collection of sampled points $\{(x_i,y_i)\}_{i=1}^{t-1}$ . However, this raw state representation appears problematic as its dimension depends on the number of observed sample points. Inspired by the acquisition functions, we leverage the posterior mean and variance as the *joint state-action representation* for each candidate sample location. In addition, we include the best observation so far (defined as $y_t^* := \max_{1 \leq i \leq t-1} y_i$ ) and the ratio between the current timestamp and total sampling budget T, which reflects the sampling progress in BO. In summary, at each step t, the state-action representation of each $x \in \mathbb{X}$ is designed to be a 4-tuple $(\mu_t(x), \sigma_t(x), y_t^*, \frac{t}{T})$ , which is agnostic to the dimension and cardinality of $\mathbb{X}$ .
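+
+ A minimal sketch of assembling this representation for a batch of candidate locations (function and variable names are illustrative):
+
+ ```python
+ import numpy as np
+
+ def state_action_features(mu, sigma, y_best, t, T):
+     """Build the 4-tuple (mu_t(x), sigma_t(x), y_t*, t/T) for every candidate x.
+     mu, sigma: arrays of posterior mean/std over the candidate locations."""
+     n = len(mu)
+     return np.stack([mu, sigma, np.full(n, y_best), np.full(n, t / T)], axis=-1)
+
+ # The returned array has shape (num_candidates, 4); the Q-network scores each row
+ # and the candidate with the highest Q-value is sampled next, independently of the
+ # dimension or cardinality of the input domain X.
+ ```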
41
+
42
+ **Reward signal.** To reflect the sampling progress, we define the reward $r_t$ as a function of the simple regret, i.e., $r_t = g(\operatorname{Regret}(t))$ , where $g: \mathbb{R}_+ \to \mathbb{R}$ is a strictly decreasing function. Practical examples include g(z) = -z and $g(z) = -\log z$ .
43
+
44
+ **Remark 1** One popular approach to construct a representation of fixed dimension is through embedding. From this viewpoint, the posterior mean and variance can be viewed as a natural embedding generated by GP inference in the context of BO. It is an interesting direction to extend the proposed design by constructing more general state and action representations via embedding techniques.
45
+
46
+ One major challenge in designing an acquisition function for BO is to address the wide variability of black-box functions and accordingly achieve a favorable explore-exploit trade-off for each task. To address this by deep Q-learning as described in Section 3.2, instead of learning a single Q-network as in the standard DQN [27], we propose to learn a distribution of Q-network parameters from a Bayesian inference perspective to achieve more robust exploration and more stable training. Inspired by [23], we adapt the regularized minimization problem to Q-learning in order to connect Q-learning and Bayesian inference as follows. Let $C(\theta)$ be the cost function that depends on the model parameter $\theta$ . Instead of finding a single model, the Bayesian approach finds a distribution $q(\theta)$ over $\theta$ that minimizes the cost function augmented with a KL-regularization penalty, i.e.,
47
+
48
+ $$\min_{q(\theta)} \Big\{ \mathbb{E}_{\theta \sim q(\theta)}[C(\theta)] + \alpha D_{\text{KL}}(q \parallel q_0) \Big\}, \tag{2}$$
49
+
50
+ where $q_0$ denotes a prior distribution over $\theta$ , $\alpha$ is a weight factor of the penalty, and $D_{\text{KL}}(\cdot \| \cdot)$ denotes the Kullback-Leibler (KL) divergence between two distributions. Note that $q(\theta)$ essentially induces a distribution over the sampling policies $\pi_{\theta}$ , and accordingly $q_0$ can be interpreted as constructing a prior over the policies. By setting the derivative of the objective in (2) with respect to the measure induced by q to be zero, one can verify that the optimal solution to (2) is
51
+
52
+ $$q^*(\theta) = \frac{1}{Z} \exp\left(\frac{-C(\theta)}{\alpha}\right) q_0(\theta), \tag{3}$$
53
+
54
+ where Z is the normalizing factor. One immediate interpretation of (3) is that $q^*(\theta)$ can be viewed as the posterior distribution under the prior distribution $q_0(\theta)$ and the likelihood $\exp(-C(\theta)/\alpha)$ . To adapt the KL-regularized minimization framework to value-based RL for BO, the proposed FSAF algorithm is built on the following design principles for $C(\theta)$ and $q_0(\theta)$ :
55
+
56
+ • Use mean-squared TD error as the cost function: Recall from Section 2 that the optimal Q-function is the fixed point of the Bellman optimality backup operation. Hence, we have $\mathcal{B}Q = Q$ if
57
+
58
+ and only if Q is the optimal Q-function. Based on this observation, one principled choice of $C(\theta)$ is the squared TD error under the operator $\mathcal{B}$ , i.e., $\|\mathcal{B}\,Q-Q\|_2^2$ . Moreover, in practice, DQN typically incorporates a replay buffer $\mathcal{R}_Q$ (termed the Q-replay buffer) as well as a target Q-network to achieve better training stability [27]. Therefore, we choose the cost function as
59
+
60
+ $$C(\theta) = \mathbb{E}_{(s,a,s',r)\sim\rho} \left[ \left( \left( r + \gamma \max_{a' \in A} Q(s',a';\theta^{-}) \right) - Q(s,a;\theta) \right)^{2} \right], \tag{4}$$
61
+
62
+ where $\rho$ denotes the underlying sample distribution of the replay buffer and $\theta^-$ is the parameter of the target Q-network<sup>1</sup>. In practice, the cost $C(\theta)$ is estimated by the empirical average over a mini-batch $\mathcal D$ of samples (s,a,r,s') drawn from the replay buffer, i.e., $C(\theta)\approx\hat C(\theta)=\frac{1}{|\mathcal D|}\sum_{(s,a,r,s')\in\mathcal D}((r+\gamma\max_{a'\in\mathcal A}Q(s',a';\theta^-))-Q(s,a;\theta))^2$ .
63
+
64
+ • Construct an informative prior with the help of the existing acquisition functions: In (2), the KL-penalty with respect to a prior distribution is meant to encode prior domain knowledge as well as provide regularization that prevents the learned parameter from collapsing into a point estimate. One commonly-used choice is a uniform prior (i.e., $q_0(\theta) = c$ for some positive constant c), under which the KL-penalty reduces to the negative entropy of q. Given that BO is designed to optimize expensive-to-evaluate functions, it is therefore preferred to use a more informative prior for better sample efficiency. Based on the above, we propose to construct a prior with the help of a demo policy $\pi_D$ induced by existing popular AFs (e.g., EI or PI), which inherently capture critical information structure of the GP posterior. Define a similarity indicator $\delta(\pi_\theta, \pi_D)$ of $\pi_\theta$ and $\pi_D$ as
65
+
66
+ $$\delta(\pi_{\theta}, \pi_{\rm D}) := \mathbb{E}_{s \sim \rho, a \sim \pi_{D}(\cdot \mid s)} \left[ \log(\pi_{\theta}(s, a)) \right], \tag{5}$$
67
+
68
+ where we slightly abuse the notation and let $\rho$ denote the state distribution induced by the replay buffers. Since the term $\log(\pi_{\theta}(s,a))$ in (5) is the log-likelihood that the action of $\pi_{\theta}$ matches that of $\pi_{D}$ at a state s, $\delta(\pi_{\theta},\pi_{D})$ reflects how similar the two policies are on average (with respect to the state visitation distribution of $\pi_{\theta}$ ). Accordingly, we propose to design the prior $q_{0}(\theta)$ to be
69
+
70
+ $$q_0(\theta) \propto \exp\left(\delta(\pi_\theta, \pi_{\rm D})\right).$$
71
+ (6)
72
+
73
+ As it is typically difficult to directly evaluate $\delta(\pi_{\theta}, \pi_{\rm D})$ in practice, we construct another replay buffer $\mathcal{R}_{\rm D}$ (termed the *demo* replay buffer), which stores the state-action pairs produced under the state distribution $\rho$ and the demo policy $\pi_{\rm D}$ , and estimate $\delta(\pi_{\theta}, \pi_{\rm D})$ by the empirical average over a mini-batch $\mathcal{D}'$ of state-action pairs drawn from the demo replay buffer, i.e., $\delta(\pi_{\theta}, \pi_{\rm D}) \approx \hat{\delta}(\pi_{\theta}, \pi_{\rm D}) = \frac{1}{|\mathcal{D}'|} \sum_{(s,a) \in \mathcal{D}'} \log(\pi_{\theta}(s,a))$ .
74
+
75
+ • Update the Q-networks by Stein variational gradient descent: The solution in (3) is typically intractable to evaluate and hence approximation is required. We leverage the Stein variational gradient descent (SVGD), which is a general-purpose approach for Bayesian inference. Specifically, we build up N instances of Q-networks (also called *particles* in the context of the variational methods) and update the parameters via Bayesian inference. Let $\theta^{(n)}$ denote the parameters of the n-th Q-network and use $\Theta$ as a shorthand of $\{\theta^{(n)}\}_{n=1}^N$ . Under SVGD [28] and the prior described in (6), the Stein variational gradient of each particle can be derived as
76
+
77
+ $$g^{(n)}(\Theta) = \frac{1}{N} \sum_{i=1}^{N} \nabla_{\theta^{(i)}} \left( \frac{-1}{\alpha} C(\theta^{(i)}) + \delta(\pi_{\theta^{(i)}}, \pi_{D}) \right) k(\theta^{(i)}, \theta^{(n)}) + \nabla_{\theta^{(i)}} k(\theta^{(i)}, \theta^{(n)}), \quad (7)$$
78
+
79
+ where $k(\cdot,\cdot)$ is a kernel function. As mentioned above, in practice $C(\theta)$ and $\delta(\pi_{\theta},\pi_{D})$ are estimated by the corresponding empirical $\hat{C}(\theta)$ and $\hat{\delta}(\pi_{\theta},\pi_{D})$ , respectively. Let $\hat{g}^{(n)}(\Theta)$ denote the estimated Stein variational gradient based on $\hat{C}(\theta)$ and $\hat{\delta}(\pi_{\theta},\pi_{D})$ . Accordingly, the parameters of each Q-network are updated iteratively by SVGD as
80
+
81
+ $$\theta^{(n)} \leftarrow \theta^{(n)} + \eta \cdot \hat{g}^{(n)}(\Theta),$$
82
+ (8)
83
+
84
+ where $\eta$ is the learning rate. For ease of notation, we use $\mathcal{M}_{SVGD}(\Theta, \mathcal{D})$ to denote the subroutine that applies one SVGD update of (8) to all Q-networks parameterized by $\Theta$ based on some dataset $\mathcal{D}$ . Note that the update scheme in (8) serves as a natural candidate for the few-shot adaptation subroutine of the meta-learning framework described in Section 2. In Section 3.3, we will put everything together and describe the full meta-learning algorithm of FSAF.
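+
+ A minimal sketch of one such SVGD update over the particles, following the standard SVGD form with an RBF kernel; the callable `log_post_grad`, which should return the gradient of $-C(\theta)/\alpha + \delta(\pi_{\theta}, \pi_{\rm D})$ estimated from the replay buffers, is a placeholder:
+
+ ```python
+ import numpy as np
+
+ def rbf_kernel(x, y, h=1.0):
+     return np.exp(-np.sum((x - y) ** 2) / h)
+
+ def rbf_kernel_grad(x, y, h=1.0):
+     # gradient of k(x, y) with respect to its first argument x
+     return -2.0 / h * (x - y) * rbf_kernel(x, y, h)
+
+ def svgd_update(particles, log_post_grad, lr=1e-3):
+     """One SVGD step over N particle parameter vectors (cf. Eq. (8))."""
+     n = len(particles)
+     grads = [log_post_grad(p) for p in particles]
+     new_particles = []
+     for theta_n in particles:
+         g = np.zeros_like(theta_n)
+         for theta_i, grad_i in zip(particles, grads):
+             g += grad_i * rbf_kernel(theta_i, theta_n) + rbf_kernel_grad(theta_i, theta_n)
+         new_particles.append(theta_n + lr * g / n)
+     return new_particles
+ ```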
85
+
86
+ <sup>1</sup>In (4), the cost function depends implicitly on the target network parameterized by $\theta^-$ . Despite this, as the target network is updated periodically from $Q(\cdot,\cdot;\theta)$ , for notational convenience we do not make explicit the dependence of the cost function on $\theta^-$ in the notation $C(\theta)$ .
87
+
88
+ Remark 2 The presented Bayesian DQN bears some high-level resemblance to the prior works [29–32], which are inspired by the classic principle of Thompson sampling for exploration. In [30, 31], the Q-function is assumed to be linearly parameterized with a Gaussian prior on the parameters such that the posterior can be computed in closed form. On the other hand, without imposing the linearity assumption, [31] approximates the intractable posterior by maintaining an ensemble of Q-networks as a practical heuristic of Thompson sampling to DQN. Different from [29–31], we approach Bayesian DQN through the principled framework of KL regularization for parametric Bayesian inference and solve it via SVGD, without any linearity assumption. [32] starts from an entropy-regularized formulation for Q-learning and assumes that the target Q-value is drawn from a Gaussian model to obtain a tractable posterior from the perspective of variational inference. By contrast, we do not rely on the Gaussian assumption and directly find the posterior by SVGD, and for more efficient training we consider a prior induced by an acquisition function. More importantly, the presented Bayesian DQN naturally helps substantiate the few-shot learning framework described in Section 2 for BO.
89
+
90
+ **Remark 3** The regularized formulation in (2) has been extensively applied in the class of policy-based methods in RL. For example, the entropy-regularized policy optimization [33] has been applied to enable a "soft version" of policy iteration, which gives rise to the popular soft Q-learning [34] and soft actor-critic algorithms [35]. Another example is the Stein variational policy gradient method [23], which connects the policy gradient methods with Bayesian inference. In contrast to these prior works, we take the alternative path of connecting the value-based RL approach with Bayesian inference.
91
+
92
+ **Remark 4** The similarity indicator defined in (5) has a similar form as the loss term in some of the classic imitation learning algorithms. For example, given an expert policy $\pi_e$ , DAgger [36] is designed to find a policy $\pi'$ that minimizes a surrogate loss $\mathbb{E}_s[\ell(s,\pi_e)]$ , where $\ell(\cdot,\cdot)$ is some loss function (e.g., 0-1 loss or hinge loss) that reflects the dissimilarity between $\pi'$ and $\pi_e$ . Despite this high-level resemblance, one fundamental difference between (5) and imitation learning is that the demo policy $\pi_D$ is not a true expert in the sense that mimicking the behavior of $\pi_D$ is not the ultimate goal of the FSAF learning process. Instead, the goal of FSAF is to learn a policy that can better adapt to new tasks and thereby outperform the existing AFs in various domains. The penalty defined in (6) is only meant to provide some prior domain knowledge to achieve more sample-efficient training. We further validate this design by providing training curves in Section 4.
93
+
94
+ Based on the Bayesian DQN design in Section 3.2, the natural way to substantiate the meta-learning framework in (1) is to leverage the Bayesian variant of MAML [20]. In the context of BO, each task typically corresponds to optimizing some type of black-box functions (e.g., GP functions from an RBF kernel with some lengthscale). FSAF implements the bi-level framework of (1) as follows: (i) On the lower level, for each task $\tau$ , FSAF enforces few-shot fast adaptation by taking K steps of SVGD as described in (8) and thereafter obtains the fast-adapted parameters denoted by $\Theta_{\tau,K}$ ; (ii) On the upper level, for each task $\tau$ , FSAF computes a meta-loss that reflects the dissimilarity between the approximated posterior induced by $\Theta_{\tau,K}$ and the true posterior distribution in (6). As the true posterior is not available, one practical solution is to approximate the true posterior by taking S additional SVGD gradient steps based on $\Theta_{\tau,K}$ and obtaining a surrogate denoted by $\Theta_{\tau,S}^*$ [20]. This design can be justified by the fact that $\Theta_{\tau,S}^*$ becomes a better approximation for the true posterior as S increases due to the nature of SVGD. For any two collections of particles $\Theta' \equiv \{\theta'^{(n)}\}$ and $\Theta'' \equiv \{\theta''^{(n)}\}$ , define $D(\Theta',\Theta'') := \sum_{n=1}^N \|\theta'^{(n)} - \theta''^{(n)}\|_2^2$ (called $chaser\ loss$ in [20]). Then, for any task $\tau$ , the meta-loss of FSAF is computed as
95
+
96
+ $$\mathcal{L}_{\text{meta}}(\Theta; \tau) = D(\Theta_{\tau, K}, \text{stopgrad}(\Theta_{\tau, S}^*)). \tag{9}$$
97
+
98
+ As will be shown by the experiments in Section 4, using small K and S empirically leads to favorable performance. The pseudo code of the training procedure of FSAF is provided in Appendix C.
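As an illustration of the bi-level loop, the sketch below assembles the chaser loss of (9) using the `svgd_step` sketch above. For readability it only shows how the loss value is formed; the actual training additionally backpropagates this loss to the initial particles, which requires keeping the computation graph through the $K$ adaptation steps. `log_posterior` and the task-batch handling are assumed placeholders.

```python
def chaser_meta_loss(theta, task_batch, K=1, S=1, lr=1e-3):
    """Chaser loss of Eq. (9): K fast-adaptation SVGD steps vs. K+S steps as a posterior surrogate."""
    chaser = theta
    for _ in range(K):                      # lower level: few-shot adaptation on the task
        chaser = svgd_step(chaser, lambda X: log_posterior(X, task_batch), lr)
    leader = chaser
    for _ in range(S):                      # S additional steps approximate the true posterior
        leader = svgd_step(leader, lambda X: log_posterior(X, task_batch), lr)
    leader = leader.detach()                # stopgrad on the leader
    return ((chaser - leader) ** 2).sum()   # sum of squared particle distances
```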
99
+
100
+ **Remark 5** This paper focuses on value-based methods for training a few-shot acquisition function. Based on the proposed training framework, it is also possible to extend the idea to design an actor-critic counterpart of our FSAF. We believe this is an interesting direction for future work.
2106.05266/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2106.05266/paper_text/intro_method.md ADDED
@@ -0,0 +1,107 @@
1
+ # Introduction
2
+
3
+ Our method for 3D hand and object pose estimation contains two main components: (i) a joint learning framework with contextual reasoning between the hand and the object; (ii) a semi-supervised pipeline that exploits extra labels mined from videos for training.
4
+
5
+ First, we present the hand-object joint learning framework in Section [4](#sec:model){reference-type="ref" reference="sec:model"}. The model contains a shared encoder and two separate decoders for hand and object, as well as a transformer-based contextual reasoning module (Section [4.1](#sec:cr){reference-type="ref" reference="sec:cr"}) to better exploit their relations. The model is trained in a fully-supervised manner.
6
+
7
+ Then, we propose the semi-supervised learning pipeline in Section [5](#sec:semisup){reference-type="ref" reference="sec:semisup"}. Constrained by spatial-temporal consistency, we generate high-quality hand pseudo-labels on a large-scale video dataset [@Goyal2017TheS] and re-train our model on the union of the fully annotated dataset [@Hampali2019HOnnotateAM] and those confident pseudo-labels. Because of the diversity of the hand pseudo-labels, the model improves both the accuracy and the generalization of hand pose estimation. With better hand features as context via the contextual reasoning module, the object pose performance of the model also improves.
8
+
9
+ # Method
10
+
11
+ Our hand-object joint learning framework is presented in Figure [1](#fig: pipeline){reference-type="ref" reference="fig: pipeline"}. We use FPN [@Lin2017FeaturePN] with ResNet-50 [@He2016DeepRL] as the backbone and extract hand and object features $\mathcal{F}^h$ and $\mathcal{F}^o$ into $\mathbb{R}^{H\times W\times C}$ with the RoIAlign [@he2017mask], given their corresponding bounding boxes. Then we apply the contextual reasoning between the two features and send the enhanced feature maps with strengthened interactive context information into the hand and the object decoders respectively, which output the 3D hand mesh and 6-Dof object pose. The total loss function of the system is the sum from two decoder branches $\mathcal{L} = \mathcal{L}_{hand} + \mathcal{L}_{object}$. The contextual reasoning module, hand and object decoders, and training losses $\mathcal{L}_{hand}, \mathcal{L}_{object}$ will be discussed in the following sections.
12
+
13
+ <figure id="fig: contextual" data-latex-placement="t">
14
+ <embed src="imgs/contextual.pdf" style="width:8cm" />
15
+ <figcaption><span>The contextual reasoning (CR) module: features extracted from the region of hand intersecting with object are taken as context (key and value) to enhance the object features (query). The module adopts the Transformer architecture with an attention module and a feed-forward module. “<span class="math inline">⊗</span>” denotes matrix multiplication, and “<span class="math inline">⊕</span>” denotes element-wise sum. </span></figcaption>
16
+ </figure>
17
+
18
+ We adopt the Transformer architecture to exploit the synergy between hand and object features via the contextual reasoning (CR) module as shown in Figure [2](#fig: contextual){reference-type="ref" reference="fig: contextual"}, where the query positions in object features could be enhanced by fusing information from the interaction region.
19
+
20
+ Inside the module, we take object features $\mathcal{F}^o$ as query and hand-object intersecting regions $\mathcal{F}^{inter}$ as key and value to model their pairwise relations on the top of RoIAlign [@he2017mask]. We first apply three separate 1-D convolutions parameterized by $W_q$, $W_k$, and $W_v$ on the input features to get the query, key, and value embedding $Q$, $K$, and $V$: $$\begin{align}
21
+ Q = W_q \mathcal{F}^o, K = W_k \mathcal{F}^{inter}, V = W_v \mathcal{F}^{inter}
22
+ \end{align}$$ The above equation is also shown on the left side of Figure [2](#fig: contextual){reference-type="ref" reference="fig: contextual"}.
23
+
24
+ Then the embeddings are forwarded to the attention module to perform cross-attention between every spatial location of the query and the key. The output of the attention module $\mathrm{Att}(Q, K, V)$ is the weighted average of the value computed as follows: $$\begin{align}
25
+ \mathrm{Att}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V
26
+ \end{align}$$ where $d_k$ is the feature dimension of the key and $\mathrm{softmax}$ stands for the softmax function. The attention module output is fused to the original input to get enhanced query features $Q'$: $$\begin{align}
27
+ Q' = \mathcal{F}^o + \mathrm{Att}(Q, K, V)
28
+ \end{align}$$
29
+
30
+ Afterward, the obtained features $Q'$ are sent to the feed-forward module that consists of a two-layer MLP and layer normalization [@ba2016layer], abbreviated as $\mathrm{LN}$. The detailed architecture of the attention module and feed-forward module are shown on the right side of Figure [2](#fig: contextual){reference-type="ref" reference="fig: contextual"}. The final output of the CR module is the enhanced object features $\mathcal{F}^{o^+}$, obtained by adding up the feed-forward module output with residual connections: $$\begin{align}
31
+ \mathcal{F}^{o^+} = Q' + \mathrm{MLP}(\mathrm{LN}(Q'))
32
+ \end{align}$$
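As a rough illustration, a PyTorch sketch of such a CR module operating on flattened RoIAlign features is given below; the single-head attention, channel size `c`, and layer sizes are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ContextualReasoning(nn.Module):
    """Sketch: object features attend to the hand-object intersection region."""
    def __init__(self, c=256):
        super().__init__()
        self.w_q = nn.Conv1d(c, c, 1)   # 1-D convolutions producing Q, K, V
        self.w_k = nn.Conv1d(c, c, 1)
        self.w_v = nn.Conv1d(c, c, 1)
        self.norm = nn.LayerNorm(c)
        self.mlp = nn.Sequential(nn.Linear(c, c), nn.ReLU(), nn.Linear(c, c))

    def forward(self, f_obj, f_inter):
        # f_obj, f_inter: (B, C, H*W) flattened RoIAlign features
        q, k, v = self.w_q(f_obj), self.w_k(f_inter), self.w_v(f_inter)
        att = torch.softmax(q.transpose(1, 2) @ k / k.shape[1] ** 0.5, dim=-1)  # (B, HW, HW)
        q_prime = f_obj + (att @ v.transpose(1, 2)).transpose(1, 2)             # residual fusion
        x = q_prime.transpose(1, 2)                                             # (B, HW, C)
        out = x + self.mlp(self.norm(x))                                        # feed-forward + residual
        return out.transpose(1, 2)                                              # enhanced object features
```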
33
+
34
+ Besides, we also ablate on using different features as the query of CR module in Section [6.5.1](#sec: ablation_cr){reference-type="ref" reference="sec: ablation_cr"}, where we alternatively choose to use the features of hand or both hand-object as the query.
35
+
36
+ The hand decoder consists of a 2D joints localization network and a mesh regression network. The 2D joints localization network is an hourglass [@newell2016stacked] module which takes the hand features after RoIAlign [@he2017mask] as input and outputs 2D heatmaps for each joint $j \in \mathcal{J}^{2D}$, where $\mathcal{J}^{2D} \in \mathbb{R}^{N^h \times 2}$ and $N^h = 21$ is the number of joints. The heatmaps have the resolution $32 \times 32$. The loss function of the 2D joints localization network $\mathcal{L}_{\mathcal{H}}$ is the distance between ground-truth heatmaps $\mathcal{H}_{j}$ and predictions $\hat{\mathcal{H}}_{j}$ of each joint $j$, as $\mathcal{L}_{\mathcal{H}} = \sum_{j \in N^h} \vert\vert \mathcal{H}_{j} -\hat{\mathcal{H}}_{j} \vert\vert_2^2$.
37
+
38
+ The mesh regression network combines the hand features with the 2D heatmaps as input and predicts the parameters of the hand mesh parameterized by the MANO model [@romero2017embodied]. The MANO model maps the pose parameters $\theta \in \mathbb{R}^{48}$ and shape parameters $\beta \in \mathbb{R}^{10}$ to hand mesh vertices $\mathcal{V} \in \mathbb{R}^{778 \times 3}$ and 3D joints $\mathcal{J}^{3D} \in \mathbb{R}^{N^h \times 3}$. The inputs of the mesh regression network are forwarded to a ConvNet with four residual blocks [@He2016DeepRL] and vectorized into a 2048-D feature vector. The outputs, regressed by three fully-connected layers, are the predicted MANO parameters $\hat{\theta}$ and $\hat{\beta}$. We compute the loss on both the MANO model parameters and outputs. Specifically, we compute the $L_2$ distance between the prediction $(\hat{\theta}, \hat{\beta}, \hat{\mathcal{J}^{3D}}, \hat{\mathcal{V}})$ and the ground-truth $(\theta, \beta, \mathcal{J}^{3D}, \mathcal{V})$ as the loss $\mathcal{L}_{\mathcal{M}}$ of the mesh regression network. The total loss of the hand decoder is the sum of the heatmap loss $\mathcal{L}_{\mathcal{H}}$ and $\mathcal{L}_{\mathcal{M}}$: $$\begin{align}
39
+ \mathcal{L}_{hand} = \lambda_{\mathcal{H}} \cdot \mathcal{L}_{\mathcal{H}} + \mathcal{L}_{\mathcal{M}} \ .
40
+ \end{align}$$ where $\lambda_{\mathcal{H}}=0.1$ is used for balancing losses.
41
+
42
+ The object decoder consists of two streams, which share 4 convolution layers and each have 2 separate convolution layers. The first stream predicts the 2D locations of pre-defined 3D control points on the object from image grid proposals, and the second stream regresses the corresponding confidence score of each proposal. After obtaining the 2D positions of the control points, the object 6-Dof pose can be computed by the PnP algorithm using the correspondence between the 2D control points and the original 3D control points on the object mesh. In this work, we utilize $N^o = 21$ control points, including the $8$ corners, $12$ edge midpoints and $1$ center-point of the object mesh 3D bounding box.
43
+
44
+ In the first stream, we adopt the grid-based method [@Redmon2016YouOL] to better handle self-occlusion, where each grid $g$ in the object feature map gives a prediction for every control point $i \in N^o$. We use $\delta_{g, i}$ to denote the geometric distance between the grid prediction and the target control point. The loss function of the first stream is the loss summed over all grids $g$ and control points $i$, denoted as $\mathcal{L}_{p2d} = \sum_{g} \sum_{i=1}^{N^o} \vert\vert \delta_{g, i} \vert\vert_1$.
45
+
46
+ The second stream regresses a confidence score $c_{g,i}$ for each grid $g$ and control point $i$, where the confidence ground-truth $c_{g,i} = {\rm exp} (- \vert\vert \delta_{g,i} \vert\vert_2)$, which indicates the proximity of the prediction to the ground-truth 2D point locations. During test time, we pick 10 most confident proposals as the input of the PnP algorithm to solve for the object pose. The loss function of the second stream is denoted as $\mathcal{L}_{conf} = \sum_{g} \sum_{i=1}^{N^o} \vert\vert \hat{c}_{g,i} - c_{g,i} \vert\vert_2^2$, where $c_{g,i}$ and $\hat{c}_{g,i}$ are the ground-truth and predictions.
47
+
48
+ The total loss of the object decoder is: $$\begin{align}
49
+ \mathcal{L}_{object} = \lambda_{p} \cdot \mathcal{L}_{p2d} + \lambda_{c} \cdot \mathcal{L}_{conf} \ .
50
+ \end{align}$$ where $\lambda_{p}=0.5$ and $\lambda_{c}=0.1$ are hyperparameters.
51
+
52
+ <figure id="fig:vis_pseudo" data-latex-placement="t">
53
+ <embed src="imgs/vis_pseudo.pdf" style="width:8cm" />
54
+ <figcaption><span>Examples of filtered hand mesh pseudo-labels generated from the video dataset <span class="citation" data-cites="Goyal2017TheS"></span>.</span></figcaption>
55
+ </figure>
56
+
57
+ After training the hand-object pose estimation model on the fully annotated dataset, we deploy it on a large-scale unlabeled video dataset [@Goyal2017TheS] for 3D hand pseudo-label generation. We leverage spatial-temporal consistency to filter out unreliable hand pseudo-labels. Examples of the obtained pseudo-labels on the video dataset [@Goyal2017TheS] can be seen in Figure [3](#fig:vis_pseudo){reference-type="ref" reference="fig:vis_pseudo"}. Note that we do not generate pseudo-labels for objects because object 3D models are needed at inference time and object pose generalizes poorly due to the limited instances in the annotated dataset. By enlarging the fully annotated dataset with the selected pseudo-labels, we conduct self-training for both hand and object pose estimation.
58
+
59
+ We first deploy our model on video frames from a large-scale video dataset [@Goyal2017TheS] for 3D hand pose estimation. To improve the estimation robustness, we do test-time data augmentation and ensemble the predictions similar to [@radosavovic2018data]. In our experiment, we perform 8 different augmentations of each instance and average the results. The outputs of each frame include 2D joints $\mathcal{J}^{2D}$, 3D joints $\mathcal{J}^{3D}$, 3D hand mesh vertices $\mathcal{V}$, and corresponding MANO parameters $(\theta, \beta)$.
60
+
61
+ While ensemble predictions reduce the noise in generated samples, we still need to identify confident ones. To this end, we establish a pipeline for filtering by innovatively utilizing the spatial and temporal consistency constraints in the video dataset, as shown in Figure [4](#fig: pseudo-label-selecion){reference-type="ref" reference="fig: pseudo-label-selecion"}.
62
+
63
+ Filtering with spatial consistency requires the corresponding camera pose of each frame. However, it is infeasible to infer the camera pose directly on the video dataset like [@Goyal2017TheS] which has a large variety of viewpoint changes. Our solution to this problem is to leverage the correspondence between the estimated 3D joints $\mathcal{J}^{3D}$ and 2D joints $\mathcal{J}^{2D}$ and solve for the optimal camera parameters $\Pi$ that projects the 3D joints to 2D, as shown in Figure [\[fig: pseudo1\]](#fig: pseudo1){reference-type="ref" reference="fig: pseudo1"}. We use the weak-perspective camera model and use the SMPLify [@pavlakos2018learning] for the optimization. The objective is the following: $$\begin{align}
64
+ \Pi^{*} = \arg\min_{\Pi}\vert\vert \Pi\mathcal{J}^{3D}-\mathcal{J}^{2D} \vert\vert_2^2 \ ,
65
+ \end{align}$$ where $\Pi^{*}$ is the optimal camera parameters.
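The paper solves this objective with SMPLify; purely as an illustration of the weak-perspective model, the scale and 2D translation that best align the predicted 3D and 2D joints can also be recovered in closed form by linear least squares. The NumPy sketch below uses a hypothetical helper name and assumes `j3d` of shape (N, 3) and `j2d` of shape (N, 2).

```python
import numpy as np

def fit_weak_perspective(j3d, j2d):
    """Least-squares weak-perspective camera: j2d ~= s * j3d[:, :2] + t (illustrative sketch)."""
    x = j3d[:, :2]
    # One row per 2D coordinate; unknowns are [s, tx, ty]
    A = np.concatenate([x.reshape(-1, 1), np.tile(np.eye(2), (len(x), 1))], axis=1)
    b = j2d.reshape(-1)
    s, tx, ty = np.linalg.lstsq(A, b, rcond=None)[0]
    return s, np.array([tx, ty])
```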
66
+
67
+ **IoU Constraint.** With the camera pose, we can re-project the estimated 3D mesh $\mathcal{V}$ to the image plane and calculate the Intersection-over-Union (IoU) between the provided ground-truth bounding box $B_g$ and the re-projected mesh bounding box $B_d$, as shown on the left side of Figure [\[fig: pseudo2\]](#fig: pseudo2){reference-type="ref" reference="fig: pseudo2"}. Note that although we do not have 3D ground-truths, we leverage the 2D bounding box annotations provided by [@Goyal2017TheS]. For a confident prediction, these two boxes should be consistent and have a large IoU. We set the IoU threshold for confident predictions to $0.6$.
68
+
69
+ <figure id="fig: pseudo-label-selecion" data-latex-placement="t">
70
+ <p><br />
71
+ </p>
72
+ <figcaption><span>Pipeline of pseudo-label selection for video frames in the wild.</span></figcaption>
73
+ </figure>
74
+
75
+ <figure id="fig:vis_ho3d" data-latex-placement="t">
76
+ <embed src="imgs/vis_main.pdf" style="width:17.5cm" />
77
+ <figcaption><span>Qualitative results of predicted hand-object pose estimation on the HO-3D dataset. For each example, the first two columns show the recovered hand mesh and estimated 6-Dof object pose, the third row shows the estimated hand-object in 3D, the color of the object indicates the 3D object mesh error. Red indicated a larger estimation error. It demonstrates our method can well handle the occlusion in the interaction and recover the accurate 3D hand mesh and object 6-Dof pose.</span></figcaption>
78
+ </figure>
79
+
80
+ **Pose Re-projection Constraint.** The re-projected 3D hand joints $\Pi \mathcal{J}^{3D}$ and estimated 2D hand joints $\mathcal{J}^{2D}$ should be consistent. We first normalize these two set of joints independent of input sizes and compute the $L_2$ distance between them as $\vert\vert\mathcal{J}^{2D} - \Pi \mathcal{J}^{3D} \vert\vert_2$. When the distance is larger than the threshold $t_p$, then the prediction would be filtered out. We set $t_p=0.65$
81
+
82
+ **Biomedical Constraint.** The predicted hand pose should be natural human hands. Thus, we exploit the minimal normalized bone length of $0.1$ and physically plausible joint angle ranges within $(0, 90)$ as two additional constraints to help remove those unnatural predicted hand poses.
83
+
84
+ For each instance in the dataset, if it does not violate the three spatial consistency constraints mentioned above, we move forward to the temporal consistency constraints.
85
+
86
+ **Smoothness Constraint.** We consider the temporal consistency constraints as the smoothness of both the 2D joint predictions and the 3D mesh predictions between two consecutive frames $t-1$ and $t$. As shown in the right side of Figure [\[fig: pseudo2\]](#fig: pseudo2){reference-type="ref" reference="fig: pseudo2"}, since the frames are continuous, the model outputs should be smooth over time. Concretely, the distance of the 2D pose estimation results between these two frames $\vert\vert\mathcal{J}_{t}^{2D} - \mathcal{J}_{t-1}^{2D}\vert\vert_2$ should be less than a threshold $t_j$. Similarly, for the MANO pose parameter $\theta$, we have $\vert\vert \theta_t - \theta_{t-1} \vert\vert_2 \leq t_{\theta}$ to ensure 3D mesh smoothness, where $t_j=0.5$ and $t_{\theta}=0.01$ are two constant thresholds.
87
+
88
+ **Shape Constraint.** In each video sequence, the shape of the hands belongs to the same person should be invariant over time. Given the confident prediction subset $C$ which is the collection of frames satisfying the above constraints in each video sequence, we compute the mean hand shape as $\Bar{\beta} = \frac{1}{\vert C \vert}\sum_{t \in C} \beta_t$ and its standard deviation $\sigma_C = \sqrt{\frac{1}{\vert C \vert}\sum_{t \in C} \vert\vert \beta_t-\Bar{\beta}\vert\vert_2^2)}$. We filter out the frames in $C$ whose shape deviation from $\Bar{\beta}$ is $2$ times larger than $\sigma_C$.
89
+
90
+ We conduct self-training on the union set of the human-annotated dataset and those pseudo-labels. The diversity of the hand pseudo-labels not only improves the hand prediction, but also provides a richer context for hand-object interaction reasoning via the CR module, leading to better object pose estimation. During the retraining, since we do not have the pseudo-labels of objects, we use a binary mask to ensure only computing the loss of the hand on the pseudo-labeled dataset. The total loss function in the retraining stage is the following: $$\begin{align}
91
+ \mathcal{L} = \mathcal{L}_{hand} + \mathcal{B} \cdot \mathcal{L}_{object} \ .
92
+ \end{align}$$ where $\mathcal{B}$ is the mask which equals $1$ on the fully-annotated dataset and $0$ otherwise.
93
+
94
+ First, we test hand-object pose estimation performance on the HO-3D [@Hampali2019HOnnotateAM] dataset and visualize the prediction in Section [6.4](#exp: pose-estimate){reference-type="ref" reference="exp: pose-estimate"}. In Section [6.5](#exp-ablation){reference-type="ref" reference="exp-ablation"}, we conduct abundant ablation studies on the designs of the CR module and explore the effectiveness of semi-supervised learning. In section [6.6](#exp-gen){reference-type="ref" reference="exp-gen"}, we test our model's generalization before and after semi-supervised learning on Freihand [@zimmermann2019freihand] and FPHA [@GarciaHernando2018FirstPersonHA] dataset.
95
+
96
+ Our model is trained in an end-to-end manner from scratch both in the supervised learning phase and semi-supervised learning phase. The shared encoder in the joint learning framework is initialized with ResNet-50 pre-trained on ImageNet. We use a batch size of 24, initial learning rate $1e^{-4}$, and Adam optimizer for the training. The training lasts for 60 epochs and the learning rate is scaled by a factor of 0.7 every 10 epochs. We crop the input image to $512 \times 512$ and do data augmentation including scaling $(\pm 20\%)$, rotation $(\pm 180\degree)$, translation $(\pm 10\%)$ and color jittering $(\pm 10\%)$.
97
+
98
+ <figure id="fig: vis_att" data-latex-placement="t">
99
+ <embed src="imgs/attention_map.pdf" style="width:8cm" />
100
+ <figcaption><span>Visualization of the CR module. The top row shows the cross-attention weights (red indicates higher response), the bottom row shows corresponding object query positions (red points). Hand, object and intersection boxes are in yellow, green, and blue. The CR module gives high responses to contact regions and tends to use the contact pattern for relational reasoning.</span></figcaption>
101
+ </figure>
102
+
103
+ **HO-3D Dataset** [@Hampali2019HOnnotateAM] is used to train our model in supervised learning. The dataset contains more than 65 sequences captured with 10 different subjects and 10 objects with both 3D pose annotations of hand and object. It has $66,034$ and $11,524$ hand-object interaction images from a third-person view for training and testing. The test results are evaluated by the official online submission system.
104
+
105
+ **Something-Something Dataset** [@Goyal2017TheS] is a large-scale hand-object interaction video dataset where we conduct semi-supervised learning with only hand and object bounding boxes provided. It covers a variety of hand instances and most daily objects. We finally select $84063$ frames with pseudo hand labels.
106
+
107
+ **FPHA, Freihand Dataset** [@GarciaHernando2018FirstPersonHA; @zimmermann2019freihand] are used for validating cross-domain generalization. FPHA is an egocentric video dataset with hand-object interactions. The dataset is captured by using magnetic sensors strapped on hands. The first-person viewpoint and sensors on the hand introduce great challenges for the model's generalization. We use the same subset as previous work [@{tekin2019h+}; @hasson2019learning; @hasson2020leveraging] for testing. The test set of the Freihand dataset contains 3960 samples with hands in both indoor and outdoor environments. Hands in half of the samples are interacting with the objects, while in the other half objects are absent. It is also a challenging benchmark for models trained only on hand-object interactions. The evaluation is also performed at the online server.
2108.08265/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2021-02-08T14:32:35.885Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/13.6.2 Chrome/83.0.4103.122 Electron/9.2.0 Safari/537.36" version="13.6.2" etag="sr3F8ZOs9txBQz8szLY2" type="device"><diagram id="dUOFwLBICgqp3SD88DLf">7Zrbkqo4FIafpu+RgMJlOIpyVBT0jrMgEBQQ8OknqH2avadmqmZ6rqCKdPJnZWVlJeZrq3wDfNHLV686aSiM8jeSCPs3ILyRJMvMcDkKw1Og5+xTSK5p+JRmn8I2vUcvkXipbRpG9TfDBqG8SavvYoDKMgqab5p3vaLuu1mM8u+zVl4S/SJsAy//VXXSsDk9VYYmPvVllCan95lnxKun8N6NX0J98kLUfZGA+Ab4K0LNs1b0fJSPuXvPy3Oc9Be9H4Fdo7L5JwPI54Cbl7evtb3iaob3xSZX1FYvs+jaRP3vUuz57+bEryHMPhaGD0SEiqi5Dtjk5QiQzHPI6zCQi5eL7jO1H4GevqT1Q/Re25l8+P5cMa68Fv37BIC/TwDeo2qspsXjWHBjFlJ8GFTPj3IT1WmTohL3+6hpUIEN8rGD84LzmLoy5FGOrg9XIH48X3zAPE3GsQ2qsOrV1fO4xmkf4QC5x5TwXSXeFVwPvcZ7A/DZJKWqTN5IPt1zxqYj1nKCIH707e4k7hJc08amAHl4wH+5LZC75agY7kZylhvbJ49ESErD0eL0g7vpfJklfDlPleUxD0q98knqrmZaq227xFtuiGCpzdWBbb7aqgU7HHHqpRDE9/vonXc5xXHHqdkTLsyDJSNzHlZjKDMxF639hioNEPrOPNO4VNrZg4fHO28kd8SvrSv7o1VwxdC1UVAjhdiaYEF78umYF/tQ0bdxO2MgEM9ww2mQsDZQh6JlW5wodsKGEyGBOx4S1EXREsYO4j+w5QkTh3e74FhJw7oMd9yyuju3hMpmtLp0VeKGsdRaK5M1dwGlA16du0SUzbrLzlha/IG38WCph43VBieDc+wdzduHTaE7NxF7E7Qkwy3RcYc5LOBVp4xO8tQZZSd7KQxSU9ONzJwJZDHsneSSs2e+tDO3nvV7l9HC+0FcEse4vFPxAs8zK1tcUh4rJ4qJNtg/WNf0kQ1oi89ztr9VmROeT8Scz6gNjICPrRmCDrkYOUkWRYTt0IKMUlfDHS0gfBw810HcsPAryFpnxvxxbgGfCV3qfC6Sc6fZMbzjXV21ftqBc9Epei0S4o4ds6mE/24XGMiVtYvnxhmnLCu7MzRISp9qIolVSqpeEau00rjAPihBxrONKK/q1ehtr6oLk9ZgynuQL8d10r2CfdvdKQTjdj5mkZiDGeKWmYzzQZ2MQalQXIxXvWwjASpJkQhRYK59q3tEo7aAcR6R48+4dMBvrCmJnODLh3vkrCTwGTUNfO9wXDQ7M8K4IV2Hs9Js03oX9nCLGAHtOVMJVmco7bChTNuQT4s/5wbeywGfPC5rk6uoGXNqjHuNCw7fWhxUcUEzUBFXnTy7SOXitnYgH6jFZpmV2Wp1Uhd7wNwcvec1K7+ni7OLQl5AS+RdbKF0Qw2qV1TH3MKGop2sj/LoX4oNqyiKcK7emNjUO5FaeFUIBtGy6JymHEbsZNp0zMSHe6HZWg0eo3T8xVWX1AJkXbJu4N4oWMNg53N0WHVr0MbNoBnMsibaypobc+l2F7lWtxAP2AxeBf/cU/li2B5AfGjpXBy3e4F3Yb3OzptxK/pUtE7j6SbXgi10eEruAAXrTjBqXOAl15fFNYevSNLWqsT/53b4r2xti1Qv7bC4tOGC0dbVPnfc3sv8Hh9bbu3s5Bm1anYDvJ8HsT6eYRaOd4l+Unkf9jusWWkCrWbu3hhYISQ5/TYQlw8QwO1ub2zWNH9QlJFLgKvPUROcviKGQ9cwelFLejy45y//C/hKe/L3tP9C8xn7G5p/iP+G5tRE84nmE80nmk80n2g+0fznaP4+gPj21X326zf3n2M9PbF+Yv3E+on1E+sn1k+s/3HWk99ZT/6frJ9PrJ9YP7F+Yv3E+on1E+t/nPXgO+vBD7IeNz9/7vDo+/KbESD+AQ==</diagram></mxfile>
2108.08265/main_diagram/main_diagram.pdf ADDED
Binary file (2.24 kB). View file
 
2108.08265/paper_text/intro_method.md ADDED
@@ -0,0 +1,92 @@
1
+ # Introduction
2
+
3
+ <figure id="fig:intro" data-latex-placement="t">
4
+ <div class="center">
5
+ <embed src="img/intro.pdf" style="width:98.0%" />
6
+ </div>
7
+ <figcaption><strong>Roach: RL coach</strong> allows IL agents to benefit from dense and informative on-policy supervisions.</figcaption>
8
+ </figure>
9
+
10
+ Even though nowadays, most autonomous driving (AD) stacks [@montemerlo2008junior; @urmson2008autonomous] use individual modules for perception, planning and control, end-to-end approaches have been proposed since the 80's [@Pomerleau-1989-15721] and the success of deep learning brought them back into the research spotlight [@bojarski2016end; @xu2017end]. Numerous works have studied different network architectures for this task [@DBLP:conf/rss/BansalKO19; @hecker2020learning; @zeng2019end], yet most of these approaches use supervised learning with expert demonstrations, which is known to suffer from covariate shift [@prakash2020exploring; @ross2011reduction]. While data augmentation based on view synthesis [@amini2020learning; @bojarski2016end; @Pomerleau-1989-15721] can partially alleviate this issue, in this paper, we tackle the problem from the perspective of expert demonstrations.
11
+
12
+ Expert demonstrations are critical for end-to-end AD algorithms. While imitation learning (IL) methods directly mimic the experts' behavior [@DBLP:conf/rss/BansalKO19; @codevilla2018end], reinforcement learning (RL) methods often use expert demonstrations to improve sample efficiency by pre-training part of the model via supervised learning [@liang2018cirl; @toromanoff2020end]. In general, expert demonstrations can be divided into two categories: *(i) Off-policy*, where the expert directly controls the system, and the state/observation distribution follows the expert. Off-policy data for AD includes, for example, public driving datasets [@nuscenes2019; @lyft2020; @yu2020bdd100k]. *(ii) On-policy*, where the system is controlled by the desired agent and the expert "labels" the data. In this case, the state/observation distribution follows the agent, but expert demonstrations are accessible. On-policy data is fundamental to alleviate covariate shift as it allows the agent to learn from its own mistakes, which the expert in the off-policy data does not exhibit. However, collecting adequate on-policy demonstrations from humans is non-trivial. While trajectories and actions taken by the human expert can be directly recorded during off-policy data collection, labeling these targets given sensor measurements turns out to be a challenging task for humans. In practice, only sparse events like human interventions are recorded, which, due to the limited information they contain, are hard to use for training and better suited for RL [@amini2020learning; @kahn2021land; @kendall2019learning] than for IL methods.
13
+
14
+ In this work we focus on automated experts, which in contrast to human experts can generate large-scale datasets with dense labels regardless of whether they are on-policy or off-policy. To achieve expert-level performance, automated experts may rely on exhaustive computations, expensive sensors or even ground truth information, so it is undesirable to deploy them directly. Even though some IL methods do not require on-policy labeling, such as GAIL [@gail] and inverse RL [@abbeel2004apprenticeship], these methods are not efficient in terms of on-policy interactions with the environment.
15
+
16
+ On the contrary, automated experts can reduce the expensive on-policy interactions. This allows IL to successfully apply automated experts to different aspects of AD. As a real-world example, Pan et al. [@DBLP:conf/rss/PanCSLYTB18] demonstrated end-to-end off-road racing with a monocular camera by imitating a model predictive control expert with access to expensive sensors. In the context of urban driving, [@prakash2020exploring] showed that a similar concept can be applied to the driving simulator CARLA [@Dosovitskiy17]. Driving simulators are an ideal proving ground for such approaches since they are inherently safe and can provide ground truth states. However, there are two caveats. The first regards the "expert" in CARLA, commonly referred to as the Autopilot (or the roaming agent). The Autopilot has access to ground truth simulation states, but due to the use of hand-crafted rules, its driving skills are not comparable to a human expert's. The second is that the supervision offered by most automated experts is not informative. In fact, the IL problem can be seen as a knowledge transfer problem and just learning from expert actions is inefficient.
17
+
18
+ To tackle both drawbacks and motivated by the success of model-free RL in Atari games [@hessel2018rainbow] and continuous control [@haarnoja2018soft], we propose Roach (RL coach), an RL expert that maps bird's-eye view (BEV) images to continuous actions (Fig. [1](#fig:intro){reference-type="ref" reference="fig:intro"} bottom). After training from scratch for 10M steps, Roach sets the new performance upper-bound on CARLA by outperforming the Autopilot. We then train IL agents and investigate more effective training techniques when learning from our Roach expert. Given that Roach uses a neural network policy, it serves as a better coach for IL agents also based on neural networks. Roach offers numerous informative targets for IL agents to learn from, which go far beyond deterministic action provided by other experts. Here we demonstrate the effectiveness of using action distributions, value estimations and latent features as supervisions.
19
+
20
+ Fig. [1](#fig:intro){reference-type="ref" reference="fig:intro"} shows the scheme of learning from on-policy supervisions labeled by Roach on CARLA. We also record off-policy data from Roach by using its output to drive the vehicle on CARLA. Leveraging 3D detection algorithms [@liang2019multi; @wang2019monocular] and extra sensors to synthesize the BEV, Roach could also address the scarcity of on-policy supervisions in the real world. This is feasible because on the one hand, BEV as a strong abstraction reduces the sim-to-real gap [@pmlr-v87-mueller18a], and on the other hand, on-policy labeling does not have to happen in real-time or even onboard. Hence 3D detection becomes easier given the complete sequences [@qi2021offboard].
21
+
22
+ In summary, this paper presents Roach, an RL expert that sets a new performance upper-bound on CARLA. Moreover, we demonstrate the state-of-the-art performance on both the NoCrash benchmark and the public routes of CARLA LeaderBoard using a single camera based end-to-end IL agent, which is supervised by Roach using our improved training scheme. Our repository is available at <https://github.com/zhejz/carla-roach>
23
+
24
+ # Method
25
+
26
+ In this section we describe Roach and how IL agents can benefit from diverse supervisions supplied by Roach.
27
+
28
+ Our Roach has three features. Firstly, in contrast to previous RL agents, Roach does not depend on data from other experts. Secondly, unlike the rule-based Autopilot, Roach is end-to-end trainable, hence it can generalize to new scenarios with minor engineering efforts. Thirdly, it has a high sample efficiency. Using our proposed input/output representation and exploration loss, training Roach from scratch to achieve top expert performance on the six LeaderBoard maps takes less than a week on a single GPU machine.
29
+
30
+ Roach consists of a policy network $\pi_\theta(\mathbf{a}| \mathbf{i}_\text{RL},\mathbf{m}_\text{RL})$ parameterized by $\theta$ and a value network $V_\phi(\mathbf{i}_\text{RL},\mathbf{m}_\text{RL})$ parameterized by $\phi$. The policy network maps a BEV image $\mathbf{i}_\text{RL}$ and a measurement vector $\mathbf{m}_\text{RL}$ to a distribution of actions $\mathbf{a}$. Finally the value network estimates a scalar value $v$, while taking the same inputs as the policy network.
31
+
32
+ We use a BEV semantic segmentation image $\mathbf{i}_\text{RL} \in [0,1]^{W \times H \times C}$ to reduce the problem complexity, similar to the one used in [@DBLP:conf/rss/BansalKO19; @chen2020learning; @chen2019model]. It is rendered using ground-truth simulation states and consists of $C$ grayscale images of size $W \times H$. The ego-vehicle is heading upwards and is centered in all images at $D$ pixels above the bottom, but it is not rendered. Fig. [8](#fig:bev){reference-type="ref" reference="fig:bev"} illustrates each channel of $\mathbf{i}_\text{RL}$. Drivable areas and intended routes are rendered respectively in Fig. [2](#fig:bev_road){reference-type="ref" reference="fig:bev_road"} and [3](#fig:bev_route){reference-type="ref" reference="fig:bev_route"}. In Fig. [4](#fig:bev_lane){reference-type="ref" reference="fig:bev_lane"} solid lines are white and broken lines are grey. Fig. [5](#fig:bev_vehicle){reference-type="ref" reference="fig:bev_vehicle"} is a temporal sequence of $K$ grayscale images in which cyclists and vehicles are rendered as white bounding boxes. Fig. [6](#fig:bev_walker){reference-type="ref" reference="fig:bev_walker"} is the same as Fig. [5](#fig:bev_vehicle){reference-type="ref" reference="fig:bev_vehicle"} but for pedestrians. Similarly, stop lines at traffic lights and trigger areas of stop signs are rendered in Fig. [7](#fig:bev_tl){reference-type="ref" reference="fig:bev_tl"}. Red lights and stop signs are colored by the brightest level, yellow lights by an intermediate level and green lights by a darker level. A stop sign is rendered if it is active, i.e. the ego-vehicle enters its vicinity and disappears once the ego-vehicle has made a full stop. By letting the BEV representation memorize if the ego-vehicle has stopped, we can use a network architecture without recurrent structure and hence reduce the model size of Roach. A colored combination of all channels is visualized in Fig. [1](#fig:intro){reference-type="ref" reference="fig:intro"}. We also feed Roach a measurement vector $\mathbf{m}_\text{RL} \in \mathbb{R}^6$ containing the states of the ego-vehicle not represented in the BEV, these include ground-truth measurements of steering, throttle, brake, gear, lateral and horizontal speed.
33
+
34
+ Low-level actions of CARLA are $steering \in [-1,1]$, $throttle \in [0,1]$ and $brake \in [0,1]$. An effective way to reduce the problem complexity is predicting waypoint plans which are then tracked by a PID-controller to produce low-level actions [@chen2020learning; @Rhinehart2020Deep]. However, a PID-controller is not reliable for trajectory tracking and requires excessive parameter tuning. A model-based controller would be a better solution, but CARLA's vehicle dynamics model is not directly accessible. To avoid parameter tuning and system identification, Roach directly predicts action distributions. Its action space is $\mathbf{a}\in [-1,1]^2$ for steering and acceleration, where positive acceleration corresponds to throttle and negative corresponds to brake. To describe actions we use the Beta distribution $\mathcal{B}(\alpha,\beta)$, where $\alpha,\beta>0$ are respectively the concentration on $1$ and $0$. Compared to the Gaussian distribution, which is commonly used in model-free RL, the support of the Beta distribution is bounded, thus avoiding clipping or squashing to enforce input constraints. This results in a better behaved learning problem since no $\text{tanh}$ layers are needed and the entropy and KL-divergence can be computed explicitly. Further, the modality of the Beta distribution is also suited for driving, where extreme maneuvers may often be taken, for example, emergency braking or taking a sharp turn.
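To illustrate the action parameterization, here is a minimal PyTorch sketch of a Beta policy head over the two action dimensions; the layer sizes and the softplus-plus-one parameterization are assumptions for the sketch, not the exact Roach head.

```python
import torch
import torch.nn as nn
from torch.distributions import Beta, Independent

class BetaPolicyHead(nn.Module):
    """Maps a latent feature to Beta(alpha, beta) over act_dim action dims in [0, 1]."""
    def __init__(self, latent_dim=256, act_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * act_dim))

    def forward(self, j):
        out = nn.functional.softplus(self.net(j)) + 1.0   # alpha, beta > 1 keeps the density unimodal
        alpha, beta = out.chunk(2, dim=-1)
        return Independent(Beta(alpha, beta), 1)

head = BetaPolicyHead()
dist = head(torch.randn(4, 256))
a01 = dist.sample()            # samples in [0, 1]^2
action = 2.0 * a01 - 1.0       # rescale to [-1, 1]^2 for steering / acceleration
entropy = dist.entropy()       # closed form, no tanh squashing needed
```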
35
+
36
+ We use proximal policy optimization (PPO) [@schulman2017proximal] with clipping to train the policy network $\pi_\theta$ and the value network $V_\phi$. To update both networks, we collect trajectories by executing $\pi_{\theta_k}$ on CARLA. A trajectory $\tau =\{({\mathbf{i}_{\text{RL},k}},{\mathbf{m}_{\text{RL},k}},\mathbf{a}_k,r_k)^{T}_{k=0}, z \}$ includes BEV images $\mathbf{i}_\text{RL}$, measurement vectors $\mathbf{m}_\text{RL}$, actions $\mathbf{a}$, rewards $r$ and a terminal event $z \in \mathcal{Z}$ that triggers the termination of an episode. The value network is trained to regress the expected returns, whereas the policy network is updated via $$\begin{equation}
37
+ \theta_{k+1} = \arg \max_\theta \mathop{\text{E}}_{\tau \sim \pi_{\theta_k}} \left[\mathcal{L}_\text{ppo}+ \mathcal{L}_\text{ent}+ \mathcal{L}_\text{exp}\right].
38
+ \end{equation}$$ The first objective $\mathcal{L}_\text{ppo}$ is the clipped policy gradient loss with advantages estimated using generalized advantage estimation [@schulman2016gae]. The second objective $\mathcal{L}_\text{ent}$ is a maximum entropy loss commonly employed to encourage exploration $$\begin{equation}
39
+ \mathcal{L}_\text{ent} = -\lambda_\text{ent} \cdot \text{H} \left(
40
+ \pi_\theta(\cdot | \mathbf{i}_\text{RL},\mathbf{m}_\text{RL})
41
+ \right).
42
+ \end{equation}$$ Intuitively $\mathcal{L}_\text{ent}$ pushes the action distribution towards a uniform prior because maximizing entropy is equivalent to minimizing the KL-divergence to a uniform distribution, $$\begin{equation}
43
+ \text{H} \left( \pi_\theta \right)= -
44
+ \text{KL}\left(
45
+ \pi_\theta \parallel \mathcal{U}(-1,1)
46
+ \right),
47
+ \end{equation}$$ if both distributions share the same support. This inspires us to propose a generalized form of $\mathcal{L}_\text{ent}$, which encourages exploration in sensible directions that comply with basic traffic rules. We call it the exploration loss and define it as $$\begin{equation}
48
+ \begin{split}
49
+ \mathcal{L}_\text{exp} = \, & \lambda_\text{exp} \cdot
50
+ \mathbbm{1}_{ \{T-N_{z}+1,\dots,T \} }(k) \\
51
+ \cdot
52
+ & \text{KL}(
53
+ \pi_\theta(\cdot|{\mathbf{i}_{\text{RL},k}},{\mathbf{m}_{\text{RL},k}} )\parallel p_{z}),
54
+ \end{split}
55
+ \end{equation}$$ where $\mathbbm{1}$ is the indicator function and $z \in \mathcal{Z}$ is the event that ends the episode. The terminal condition set $\mathcal{Z}$ includes collision, running traffic light/sign, route deviation and being blocked. Unlike $\mathcal{L}_\text{ent}$, which imposes a uniform prior on the actions at all time steps regardless of which $z$ is triggered, $\mathcal{L}_\text{exp}$ shifts actions within the last $N_{z}$ steps of an episode towards a predefined exploration prior $p_z$ which encodes advice to prevent the triggered event $z$ from happening again. In practice, we use $N_z=100,\forall z \in \mathcal{Z}$. If $z$ is related to a collision or running traffic light/sign, we apply $p_z=\mathcal{B}(1,2.5)$ on the acceleration to encourage Roach to slow down while the steering is unaffected. In contrast, if the car is blocked, we use an acceleration prior $\mathcal{B}(2.5,1)$. For route deviations, a uniform prior $\mathcal{B}(1,1)$ is applied on the steering. Despite being equivalent to maximizing entropy in this case, the exploration loss further encourages exploration on steering angles during the last 10 seconds before the route deviation.
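A small sketch of how such an exploration prior could be applied on the relevant action dimension during the last $N_z$ steps, using `torch.distributions`; the event names and the $\lambda_\text{exp}$ value are illustrative assumptions.

```python
import torch
from torch.distributions import Beta, kl_divergence

# Illustrative exploration priors p_z on a single action dimension
PRIORS = {
    "collision": Beta(torch.tensor(1.0), torch.tensor(2.5)),   # favour braking (acceleration dim)
    "blocked":   Beta(torch.tensor(2.5), torch.tensor(1.0)),   # favour accelerating
    "route_dev": Beta(torch.tensor(1.0), torch.tensor(1.0)),   # uniform prior (steering dim)
}

def exploration_loss(pred_dist, terminal_event, step, episode_len, n_z=100, lam_exp=0.05):
    """KL(pi || p_z) applied only to the last n_z steps before the terminal event."""
    if step <= episode_len - n_z:
        return torch.tensor(0.0)
    return lam_exp * kl_divergence(pred_dist, PRIORS[terminal_event])
```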
56
+
57
+ <figure id="fig:network" data-latex-placement="t">
58
+ <figure id="fig:net_ppo">
59
+ <embed src="img/net_ppo.pdf" />
60
+ <figcaption>Roach</figcaption>
61
+ </figure>
62
+ <figure id="fig:net_cilrs">
63
+ <embed src="img/net_cilrs.pdf" />
64
+ <figcaption>CILRS</figcaption>
65
+ </figure>
66
+ <figcaption><strong>Network architecture of Roach, the RL expert, and CILRS, the IL agent.</strong></figcaption>
67
+ </figure>
68
+
69
+ Our implementation of PPO-clip is based on [@stable-baselines3] and the network architecture is illustrated in Fig. [9](#fig:net_ppo){reference-type="ref" reference="fig:net_ppo"}. We use six convolutional layers to encode the BEV and two fully-connected (FC) layers to encode the measurement vector. Outputs of both encoders are concatenated and then processed by another two FC layers to produce a latent feature $\mathbf{j}_\text{RL}$, which is then fed into a value head and a policy head, each with two FC hidden layers. Trajectories are collected from six CARLA servers at 10 FPS, each server corresponds to one of the six LeaderBoard maps. At the beginning of each episode, a pair of start and target location is randomly selected and the desired route is computed using the A$^*$ algorithm. Once the target is reached, a new random target will be chosen, hence the episode is endless unless one of the terminal conditions in $\mathcal{Z}$ is met. We use the reward of [@toromanoff2019deep] and additionally penalize large steering changes to prevent oscillating maneuvers. To avoid infractions at high speed, we add an extra penalty proportional to the ego-vehicle's speed. More details are in the supplement.
70
+
71
+ To allow IL agents to benefit from the informative supervisions generated by Roach, we formulate a loss for each of the supervisions. Our training scheme using Roach can be applied to improve the performance of existing IL agents. Here we use DA-RB [@prakash2020exploring] (CILRS [@codevilla2019exploring] + DAGGER [@ross2011reduction]) as an example to demonstrate its effectiveness.
72
+
73
+ The network architecture of CILRS is illustrated in Fig. [10](#fig:net_cilrs){reference-type="ref" reference="fig:net_cilrs"}, it includes a perception module that encodes the camera image $\mathbf{i}_\text{IL}$ and a measurement module that encodes the measurement vector $\mathbf{m}_\text{IL}$. Outputs of both modules are concatenated and processed by FC layers to generate a bottleneck latent feature $\mathbf{j}_\text{IL}$. Navigation instructions are given as discrete high-level commands and for each kind of command a branch is constructed. All branches share the same architecture, while each branch contains an action head predicting continuous actions $\mathbf{a}$ and a speed head predicting the current speed $s$ of the ego-vehicle. The latent feature $\mathbf{j}_\text{IL}$ is processed by the branch selected by the command. The imitation objective of CILRS consists of an L1 action loss $$\begin{equation}
74
+ \label{eq: loss_a}
75
+ \mathcal{L}_\text{A} = \|\hat{\mathbf{a}} - \mathbf{a} \|_1
76
+ \end{equation}$$ and a speed prediction regularization $$\begin{equation}
77
+ \label{loss:s}
78
+ \mathcal{L}_\text{S} = \lambda_\text{S} \cdot |\hat{s}-s |,
79
+ \end{equation}$$ where $\lambda_\text{s}$ is a scalar weight, $\hat{\mathbf{a}}$ is the expert's action, $\hat{s}$ is the measured speed, $\mathbf{a}$ and $s$ are action and speed predicted by CILRS. Expert actions $\hat{\mathbf{a}}$ may come from the Autopilot, which directly outputs deterministic actions, or from Roach, where the distribution mode is taken as the deterministic output. Besides deterministic actions, Roach also predicts action distributions, values and latent features. Next we will formulate a loss function for each of them.
80
+
81
+ Inspired by [@hinton2015distilling] which suggests soft targets may provide more information per sample than hard targets, we propose a new action loss based on the action distributions as a replacement for $\mathcal{L}_\text{A}$. The action head of CILRS is modified to predict distribution parameters and the loss is formulated as a KL-divergence $$\begin{equation}
82
+ \label{eq: loss_kl}
83
+ \mathcal{L}_\text{K} = \text{KL}(\hat{\pi} \| \pi)
84
+ \end{equation}$$ between the action distribution $\hat{\pi}$ predicted by the Roach expert and $\pi$ predicted by the CILRS agent.
85
+
86
+ Feature matching is an effective way to transfer knowledge between networks and its effectiveness in supervising IL driving agents is demonstrated in [@hou2019learning; @zhao2020sam]. The latent feature $\mathbf{j}_\text{RL}$ of Roach is a compact representation that contains essential information for driving as it can be mapped to expert actions using an action head consists of only two FC layers (cf. Fig. [9](#fig:net_ppo){reference-type="ref" reference="fig:net_ppo"}). Moreover, $\mathbf{j}_\text{RL}$ is invariant to rendering and weather as Roach uses the BEV representation. Learning to embed camera images to the latent space of $\mathbf{j}_\text{RL}$ should help IL agents to generalize to new weather and new situations. Hence, we propose the feature loss $$\begin{equation}
87
+ \label{eq: loss_f}
88
+ \mathcal{L}_\text{F} = \lambda_\text{F} \cdot \| \mathbf{j}_\text{RL} - \mathbf{j}_\text{IL} \|_2^2.
89
+ \end{equation}$$ Multi-task learning with driving-related side tasks could also boost the performance of end-to-end IL driving agents as shown in [@xu2017end], which used scene segmentation as a side task. Intuitively, the value predicted by Roach contains driving relevant information because it estimates the expected future return, which relates to how dangerous a situation is. Therefore, we augment CILRS with a value head and regress value as a side task. The value loss is the mean squared error between $\hat{v}$, the value estimated by Roach, and $v$, the value predicted by CILRS, $$\begin{equation}
90
+ \label{eq: loss_v}
91
+ \mathcal{L}_\text{V} = \lambda_\text{V} \cdot (\hat{v}-v)^2.
92
+ \end{equation}$$ Our implementation follows DA-RB [@prakash2020exploring]. We choose a ResNet-34 pretrained on ImageNet as the image encoder to generate a 1000-dimensional feature given $\mathbf{i}_\text{IL} \in [0,1]^{900 \times 256 \times 3}$, a wide-angle camera image with a $100^{\circ}$ horizontal FOV. Outputs of the image and the measurement encoder are concatenated and processed by three FC layers to generate $\mathbf{j}_\text{IL} \in \mathbb{R}^{256}$, which shares the same size as $\mathbf{j}_\text{RL}$. More details are found in the supplement.
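For concreteness, a sketch of how the four supervisions from Roach could be combined into one IL training loss; the dictionary keys and the $\lambda$ values are illustrative assumptions, not the paper's reported settings.

```python
import torch.nn.functional as F
from torch.distributions import kl_divergence

def roach_supervision_loss(agent, expert, lam_s=0.05, lam_f=0.01, lam_v=0.001):
    """Combine action-distribution, speed, feature and value supervisions from the RL coach."""
    l_k = kl_divergence(expert["pi"], agent["pi"]).mean()        # L_K: soft action targets
    l_s = lam_s * F.l1_loss(agent["speed"], expert["speed"])     # L_S: speed regularization
    l_f = lam_f * F.mse_loss(agent["j_il"], expert["j_rl"])      # L_F: latent feature matching
    l_v = lam_v * F.mse_loss(agent["value"], expert["value"])    # L_V: value estimation side task
    return l_k + l_s + l_f + l_v
```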
2109.02614/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-03-14T16:21:21.201Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.192 Safari/537.36" etag="sdvWrW4Hpux6-44dOq9m" version="14.4.8" type="device"><diagram id="EdX1hm0rH45FzaNLtXR5" name="Page-1">jZJda8MgFIZ/jZeDRJctt23XbjAGhVJ2OSSeRcHEYO2S9NfvdDnmg1LYTaKP50Pf8zKxqbpXLxv94RRYxhPVMfHCOH/KcvxeQT+AR/E8gNIbNaB0AgdzAYIJ0bNRcFoEBudsMM0SFq6uoQgLJr137TLs29ll10aWcAMOhbS39NOooAeaZ8nE38CUOnZOEzqpZAwmcNJSuXaGxJaJjXcuDKuq24C9ahd1GfJ2d07Hi3mow38Siq96zyHNdilfHSXv3i/H8EBVfqQ904N3pkSQMrHC397KAjRqBp4eEfqojHfnWsG1eMLEutUmwKHBcAQtWgGZDpXFHdZaUxvwAbq7909HVdBN4CoIvscQSuA5CUlO4lHpdppLFm2j5zOJw5LkhXKsPcmFC1IsbqfJ/J3N7C22vw==</diagram></mxfile>
2109.02614/main_diagram/main_diagram.pdf ADDED
Binary file (6.95 kB). View file
 
2109.02614/paper_text/intro_method.md ADDED
@@ -0,0 +1,17 @@
1
+ # Method
2
+
3
+ One approach is to combine the output of a pixel-based model with flood-fill segmentation information and choose the most frequently occurring color in each segment. We explored this approach and highlight several issues that can occur.
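A minimal NumPy sketch of this segment-voting step, assuming a per-pixel array of predicted color labels and a same-sized array of flood-fill segment ids:

```python
import numpy as np

def segment_majority_colors(pred_labels, segments):
    """For each flood-fill segment, pick the most frequent predicted color label."""
    out = np.zeros_like(pred_labels)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        values, counts = np.unique(pred_labels[mask], return_counts=True)
        out[mask] = values[counts.argmax()]
    return out
```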
4
+
5
+ In Figure [9](#fig/paints_chainer){reference-type="ref" reference="fig/paints_chainer"} we use the popular open-source colorization model PaintsChainer and provide a color hint for every segment. The model is trained with an MSE loss in RGB space, so it learns to predict colors that are close to the user-provided color palette. When multiple colors are used, it quickly starts mixing the reference colors and diverging from the user-specified color palette.
6
+
7
+ <figure id="fig/paints_chainer" data-latex-placement="h">
8
+ <img src="figures/paints_chainer" style="width:90.0%" />
9
+ <figcaption><strong><span>Color mixing in PaintsChainer.</span></strong> </figcaption>
10
+ </figure>
11
+
12
+ To overcome the color mixing issue we can train a pixel model with categorical cross entropy loss by discretizing the input color palette into a compact label space. We use this approach with the correspondence network in Deep Exemplar Video Colorization (see Figure [10](#fig/devc){reference-type="ref" reference="fig/devc"}). The resulting output stays true to the provided color palette, but the model loses important details due to input downsampling and max-pooling in the CNN backbone (both of which are necessary to compute pixel attention on a 16GB GPU). We use DEVC in our benchmarks, but convert the pixel output to segment labels for evaluation.
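To illustrate the discretized-palette formulation, a small PyTorch sketch that maps each pixel of a reference image to its nearest palette entry and trains with cross-entropy; the palette values and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Illustrative user palette (num_colors, 3) in RGB
palette = torch.tensor([[255, 0, 0], [0, 128, 255], [30, 200, 90]], dtype=torch.float)

def palette_targets(rgb_image):
    """Assign every pixel of an (H, W, 3) image to the nearest palette color; returns (H, W) indices."""
    flat = rgb_image.reshape(-1, 1, 3)
    return (flat - palette).norm(dim=-1).argmin(dim=-1).reshape(rgb_image.shape[:2])

def palette_ce_loss(logits, rgb_target):
    # logits: (1, num_colors, H, W) raw scores from the pixel model
    return F.cross_entropy(logits, palette_targets(rgb_target).unsqueeze(0))
```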
13
+
14
+ <figure id="fig/devc" data-latex-placement="h">
15
+ <img src="figures/warped" style="width:90.0%" />
16
+ <figcaption><strong><span>Raw output of the correspondence subnetwork of DEVC.</span></strong> </figcaption>
17
+ </figure>
2110.05291/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-10-09T15:38:38.238Z" agent="5.0 (X11)" etag="pyl-8W7U8Y9hUXi_-HsJ" version="15.4.3"><diagram id="SGu61A3XzUEtp0NknMtw" name="Page-1">7V1bd6LIFv41vdachz6Lq4mPRowhIxAjxsDLLASDIIhH8QK//uxC1MJLIC2gcTK9elqL275+tauoz/pF191Vc6pNhoJnDJxfFGGsftHcL4oiGfoe/kEtwbrlvkqvG8ypZcQn7Ro6VjiIG4m4dW4Zg1niRN/zHN+aJBt1bzwe6H6iTZtOvWXytA/PST51opnxE4ldQ0fXnMHBaT3L8IexFix29tPAMoebJ5NEfMTVNifHDbOhZnhLrIlu/KLrU8/z15/cVX3gIONt7LK+7vHE0a1g08HYz3JB7bnONyv/DKW7hSd+SM/6x537e2PmhebMY41jaf1gY4KBARaJv3pTf+iZ3lhzGrvWh6k3HxsD9BwCvu3OaXneBBpJaLQHvh/E7tXmvgdNQ9914qOHusTqzbz5VI/l+PuOmvSX4T/Cy2pVIcwZFzY7vzcxoU3Ngf+Josz6PKQL9oDYUs2B5w78aQAnTAeO5luLpPe1OIjM7Xk7O8OH2NTHzf6Z1AmrVxyQ/8GwFvDR9COjrJs+PDAK7pHK/+be5sDvWWTTGpxAUZPV7iAKdk1PXiBbLuQSRYiDJfz/1XO1MX7B+rGvA3MKpowfDuqtn5+UCZqPSHp54V+mA8PSfcsbf1MFNgj6V/8/X9RgL2uXQ8sfdCZrIZYAzsmM02aTNVx+WCuUuadTcDGY+oPVp0kTH2U22Bdj/e/7+Ptyh5xkJW4b4qi5aTwn0Y7jG/XN8e0uI77dXRW+3Z3Et5LTqYnKBwOOtzzo0+HfzkCb6kOUYPo2wfrTI7mVTLnLJ9c9c8nkOt6LMdfi5nMfu76L3HmBY1Ehm4K9X48Y8LKfDJKZP/VGg7rneFNoGXtjhDQfluPsNWmOZY7hqw6xM4D2BxQzFkRzLT7gWoYRwdSxqExCVw5x+LuSjMNtwYvFIUUciUOqsDBkryUM4+ojApxkGfUvDpgKcXUBU7mWgOl4zhwvVv/FYUISzAXj5GhNVf0+teN7h3nqsQ+c/lv/x2g/NJmQmR8bG382Gi2/dvxM6lNj4z/t3C+U4S0Lki1bUfHJkPSbKS1PtfHsw5u62nkj8W+m9l/aV4YU1zqKp65vFH//fYD4qPz0IRB/Bn3lA/FnUn9m9dlQm6CPlhtNo29LiJbWHzgv3syK8p/m+p7vey6c4KADD5o+MiN3bEqSXxT9Ef13pAzxkXsOIz16ZG3TSmxa4LOh+Rrk9for9ThbmL+ohxW4kaq/PImUGjx4as8Za0/tKm/zodBhGCEcsZKshEKoM5JcowSLNw33LdApZ9G3CQvOWfHWA6X13ui2W2VeOvyS52qmYAurlm1SQmiuRFlYiqGyEkN+0x5K8ogVbSEUQ5MVOCWAe9Ba75XQOLjn8Wst/mno95ts+NJ59oyn16Vk3S8M2qBbYz1sudVADe4DgastWzTShTcHTXLWHwuVQXM16buzKu8OCeOpVmkFVbhKnxuhMO/Tz+NW2AAdR4v4POvFUuxBs3HHcytffX8dqs1HQpEZl6eHQymomS9PzyPVngRqU9k+Q6EE/P627r4NjaZDqT1x0e+RCzhn/SyKDXlMNrWZuG5pvD/P1rpUR0pQdfrNN1/tsYQe3INNGguwE9PvreZ6GD0/VSeB4xeCzfu6++q23kW2P34dDurkXA+EnZ510Kkp2tLYWOgugPnDsyUyijwz10de4cgrqze7JrqLNH52dKpK6q7o4J7Qn96Cfp2FO+gL0Ar+ir72Hl236NKvk35z9dzvPY7VDr/AnwxWHinhZKS8vzovNr8U7IYvyDVf4N668Lki2KOKwJlzIRRowVaol3p1d/7hfVzQ3lfe26CF1iOHKtVd6LQ6/puG2OGWSLKKCvYFSYgjV8d+qwaJY/XaPfi3ovQMpz9u79kHHUV3BW8T/WYXjspwb725WhgQTwrkhhAwYctuz/gm8tVDiKzVT5yztlqsGQFWm6lYvEVeqFdtOAb6vM0NLnHsvkWBh1wxUOGefZpPygdHj3oWbBF7E2J5ez84m3SM5iOy7hDLg5HaU0PMJvc6/cqCtqbx5CxVGeCy+cYYIKNOo7ia3MHxoE/5TquHcurNVt5rFWX8RhicHxpPzwuN6kIuO3O1N1loPaaiu1Wy77bvPmSQa31vTFpkmYQE3GrMN50RxJ+r9QyIxbcPoZPQfHfGEnSxtcf7FW9jduNWqN3jm4B1I8IU6rVACBtziWub6B5t2Rd4bo2BgFOMIOsb7CL4OkjRaNzJ22uUUOIaNMK93bU1dE/0vAeJ9tb/Qt7HukHeqpARywpPqW70x55ZrRDDmHp1jNsbsHmiAjYq70NHJw902egqaD3WaVPVGeCGDRG56Dt75yLfAZ5p9Qcb7L9svT9AvJtVfoyy17R1bgRYXwtEexSI4WgCeIPuK/JcgxI7zFKU25TImSZkKN2yu9AnKKwQxHoSSxOunYtygwZ7MXBNIAYMDeeHor2x6wxwXCQGvZWz9t5qJrnsou92tzEjWcIS4Tj8xeKoulR64sR4AjSoM7REiwTEzFTtkHG/4QVg331kiX0szrT3mh/pDTZEGPt3XVxKMmAKx1OSzNNSZ2RtY0FWyJZdI0VOIaR6bSVwOuhkQn81CnHfg24r0H8pyAJc04Uc75JRP7nx+xF/993qXJW3OVDhw1qsK5YXUFwplEMMZG/ZsvWdDd6Nifr06kk2f8db9/s4/RR50lSo1USvk5HmYGHIjaXH0zUWdAVsBW/ZOvt3hzdRNMmd2kq0u3OB6xLgIeSxJWhFStwIrCJg0dxmWqC9yPGkhKKea8/hfBJ6bnZjEYgU0MCZITRQXWfW5ybzPsU6W0/Jnh17dYl5bwFesVE/IdkQOeHOCgbljEAfwP9kZhQewbLOgtVBNx55ciWivkdukOiaW4vgtR8VWuJqIdiHBrQDvdtgI/7WdIUKsgE1RBsqTp0AXVnQNUTILcrK7enKKaCrgnon6JV4iPMGK6K86NweMjVWUoeBWlenwBomHJ+jDIZ+nLo9ZOqC1XnwZI0RwT5gN5ALxkcwVvs+ugqgQxdlIx1FsA0RHLZhnKkQN9e3hoAHJEQjaIu8ivJQJyROoMT6zquof4GIJgTwOOTqUqwzEAVd8Oro++BSatUXZSpcA+1yLZCiKEc1RQNG+JDdNxa9N9GvZvQp6kdFTqDBDuDTEYyQBDRiIkXr1jIVIpYC7Sg0DhSDDUa14Z6NW/PqDfSpmTIV0Kc2F+wRIXAmCXqHAtgD7od8/H18CrpB5oGvGghtSNCBFGWBlrYjllsZwcWVrA3+DJDvoP6B2gBylklW+iOQowt5bqIYh0qZWUENQXyr2iG9AkKIZNeIlq1QYIFlZBEY40BEQzR3g9vz6g2M3zJlaoMQLehT0RuYaNYxu
oaAe65uzaegF9S+IxrOg3GqycI1tCSbxLeqHbJl6k2M3dL7kJvBXyQ1HGsHomxucJYAbKWkzq3hb1avfn/8TR9r3wr+ZvPpTeBvtky9CfxNn0G5mdmH1LcwNzT7kN5j3srsQzav3sDsQ6ZMvZHZh/Qa6Fbqh8y63sL7twz9yK1ka1wJ2jojcGgMp0M/AlUhVyMF6/ZqiNQ+83Z6G6QbAWNxWoSslGSEQGhdaZe6uSo4A75G2UqC1QPw5FLiumAhIQTJQrQCd/um+Xtom4awUQw3wJcjVgyVJcQwAziFKi0Grv1GMZyWh7fk1dRMvDYURiuKw6MrMdtI57SVmGrzzdXDvZW+se1imyzBDqgXBnt1t2+u2hAJkQ1sPQDbMKgmgeghAeWgPmmjFfssfI7eYgIekGAniKIGZAucX098ju0qmOs3gYCUcteE3pAAP0A0tk0p6hkh8uCzYKEoU9A67GQ7dh+8vSXzqL6FeomneC56Awe9Tpfk6wQuJy6PL1ow9rHRWu/tG0mOR5UIhxgDI7ABgeuF28HfkynxfXuvOnHympZ88jkJuVoyoI/MA9byuE983IZ798J95ON2bMmYf3criLm9axIyn/JVQi7uhIzY+Ul5cV8lZTpix8oWQdfx7gvcs63aQ1uA68U6hjZjLOd6UE02eeIQHdHK/INcQKues1U39n4OvYVqZx8DsOfTR2UyB4h9klhjh5gaESukoj09O6pNWDxFLAVXdKUmII+tEIA17EuHT67FO0BEQAZSp7qH67/DR1cMh5bI4Ti8aUP9E2ht4++yvyAR/s47Z4nw9zvZJUq8B8pZInzGM7NEyZnRnCVKzAEg9tJM67FTyR0Gak+p8pZ4kDMDGcUfNleQt9fwcc4XJMLHQ3uZC1lqvIvOQR1lCSsVnqPYbbyW2rYhHlMUEXjVirhNTWeuhRNPd99cOG73wXoKpTBKKEC2t0HmEbIsXt3uj4vOzH683soeR1hdlrM8eA2bWR681s1ZHrz6+gIW4VXaYb2YZLscsF9alJ5kA40/4yZFVefjuorSelBHj5MV5x53BmJfJPjm0NF6hmdE/MMGoFubSHISo7Y1JzG2Bfg8iPWDY/w+fyfqubpRnY/zq14ddSzstLmP2Iohg7iehfwMze8v/FxEYT82duynC74TT5nJ+IMRF/uxsc+k/uEp//CUf3jKpfCUQx71cpQgm8wPT/mHp/zDUy6Mp9x8tgRKWUrNNqnI3QvxlEWwx6dzw6RgdwPoG3KZN30AO4uuwA1HYq9Bfp0pl74mM8u8qeoqn86bSjAyEDhzlcesuOLyIViYUbm3ofh1HmT6yqAss+IQ+Smz4iwaaV2BvvmsvEzVl4dR1CiQ5BzeUp6bxTmtv0yLaZFD88DtXN70nO/jPNZRpCIXwhKwvXIVyHXGisySvZPLu+TUDGxAPw8Iy11BBp7zNi4v32TitJ3R+5UqZz48rTREAz9FfwpFtBJZwakIHo5QHhaLZtlYpCX1WWBNBv6y1+DhXFbppqJilwFEJoutSzKySvNY/3cVvUA2ff+81yvZL7mst81QO6F1Dlw+q2VOok0m3tk5vy5RrqT5/O5Sqm9QHkLl2L0C3xTYE+SP9NmYqyUhPdgssluRyJeRlVwK0nfB1vDXvry+5YxvNtXp5fUtqceg11UjXywqZWTKlYLDRMQkKLR2ycSBzIktl6pvCOMAqEcKHoFk9XDquviix8JlMnHTR0oK9FFmobN7GfXNhUuTOhZGoySuW+jIMKO++axbT+9PoPYCbC92xrpwbm6Os66Z+KZnoMSXZqIgu20hKHZuPRPvtBS0EdfVa8HvEsrj2abry0O1pBQ705hN3zPQJreqMyPT8PI9dKlc12t4x5pR33JGQCNSkrtswSO+UlmvV9IPZGKE3kxMl8mATdXXjMZAxY75srKby4ppVHUKZLE+zsQQ/fO+vlw5SxoRlLKGJdXmJc4wQU0rQJw3ss0/7DEu8JV9EaOhIS76zZWjoPWyFLu/pnG9GjfJczCTsbNdzYhzHLcc0lM801PcRZybeZw/inFPcS5pgud4mv+4uw/OxU3wSJNcUVzOdN4ozoVNcDs/44kmvmP8zpPXJDm3e1xVXC7iKD+4JZ/k7/onOb9JDusSl/MUrzjxnCTvN5XDnODcJvnGp/jSR/nVA2uLiVHM93uiI7jKSqEeLSUc4fl4lG23jx8HrJ84R/SsI/d8eYn4yp6jLECF6gaq/WgrvWdb6mxYgPgKoFxZgPi6iS/Ig6+vyFUe/O3dF+TB3/LlKg8+q5tdnsTsb96sTWz+6Bhr8yBfItYmNs+Ur78Ss2rZ5cFn33JmSeJ19zGWJGmr8rMr9MSh4grshiWZqM/zZiPjs1tfiCJsFixvifD6KLNEiToqb4nw6vMLmY9XqUd+2eCTXwxJclkTxw5+HeQlqrgf2usKKuaMJKrtfU4NxL9t1BNsOai9GQLFZFzv7trW7JHYEuB1a63dmuV6v8fJwTi2n/Bf4HtouKiSzIfbWtnfgfcKuK13PyTLH5LlD8myRJKljDqANguDrZ/NYH9Ilj8kywJJlgoUQm1WdaFUpcolWX75ZTcMl01C6qmWaotOqeTIL06QonJN5BxXcRUYYJRJavzqxOaZ3j9jIVnp3k9/FXY9Vi3t1fCZkVrYa7Q/n6o/1/Z//oOiZUd0cS9S8rd+iVuCXo8+V4DjGUkzf0wqLzc+Sl1adyo7M24keQUYno26Vxrx+kw5i1rK9sev68/Up8SlaqeyLhNBorCf2MgflYvePLJUOa8D7bIS+y6PdkVvvFiunKUt5jlTzitAsaK3LCxVzuvI+uJJRiX3yYWTkcqV8+LoUDz5J7dxf4kb6h1ZoncOBfD0/EDR5J7cZpEKJuWUK+fFs654EkzpFXvxZJmyR3+Fk2rKtv0VzIhm3ajuKvqbojdfy7MfybSh2DXgXjZSzxkL7XO0amkbl5VWb2TSqLyfCTwdJ2lWzRklfjYH+9kc7GdzsPXmYIBFkHes4gqM2il4c7D93rfczcECqIxp0XYsJRw6V7E52HGJLrk52FGJLro52DGJLr052F7OXMHmYEcluuTmYM/Qx+uUhBbjN7vXsDnY0Ti64OZgx+S55OZgx7HoopuDJSvb294cjCSY62NQkBkoFIOxUZtOvSV80x1tNrP05NZeyb3B0vcSg/s9Ws7majDoNHjfXIy+KOjLfyl2851b4Ue5IP520iW57RSGuYU94pVNW+YNxeInvHgWSLyLi7s95653RIvP2vn34MJ7au/CteIHF4LztAA7bYJOmGUQaBunJPG5IOS5FzCJ8+HDWuZdXG+9cizU4evU83z89Kk2GQqeMUBn/B8=</diagram></mxfile>
2110.05291/main_diagram/main_diagram.pdf ADDED
Binary file (13.5 kB). View file
 
2110.05291/paper_text/intro_method.md ADDED
@@ -0,0 +1,91 @@
1
+ # Introduction
2
+
3
+ Sixty years ago, the route of a delivery truck would have been fixed well before the truck departed the warehouse. Today, thanks to the availability of real-time traffic information, and the option to transmit instructions to the driver to add or remove delivery locations on-the-fly, the route is no longer fixed. Nevertheless, minimizing the length or duration of the route remains an important problem. This is an instance of the Traveling Salesperson Problem (TSP), and one of a growing number of practical applications which require solving combinatorial optimization problems in real time. In these problems, there is often a cost attributed to waiting for an optimal solution or hard deadlines at which decisions must be taken. For example, the driver cannot wait for a new solution to be computed – they may miss their deliveries, or the traffic conditions may change again. There is a need for general, anytime combinatorial optimization algorithms that produce high-quality solutions under restricted computation time. This remains challenging for current approaches, as they are specialized for a specific problem (with specific assumptions and constraints), or fail to produce good solutions quickly.
4
+
5
+ **Contributions** We present a hybrid data-driven approach for approximately solving the Euclidean TSP based on Graph Neural Networks (GNNs) and Guided Local Search (GLS), which we demonstrate converges to optimal solutions more quickly than three recent learning based approaches. We provide the following contributions:
6
+
7
+ - Our GLS algorithm is guided by the *global regret* of each edge in the problem graph. This allows the algorithm to differentiate edges that are very costly to include in the solution from ones that are not so costly, whether or not they are part of the optimal solution. Thus, using regret allows us to find high-quality, rather than optimal, solutions very quickly. We are the first to use a measure of global regret, which we define as the cost of enforcing a decision relative to the cost of an optimal solution.
10
+
11
+ - We make this computationally tractable by approximating regret with a learned representation. Our GNN-based model operates on the *line graph* of the problem graph, which allows us to build a network without node features, focusing only on edge weight.
12
+ - We present experimental results for our approach and several learning based approaches that are aligned with recent guidelines for computational testing of learning based approaches for the TSP. Our experiments emphasize the trade-off between solution quality and time.
13
+ - We reduce the mean optimality gap on the 50-node problem set from 0.268% to 0.009%, a 30× improvement, and on the 100-node problem set from 1.534% to 0.705%, a 2× improvement. When generalizing from 20-node instances to the 100-node problem set, we reduce the mean optimality gap from 18.845% to 2.622%, a 7× improvement.
14
+
15
+ # Method
16
+
17
+ **Traveling Salesperson Problem** We focus on the Euclidean TSP, although our approach can be applied to other routing problems, such as the CVRP. A problem with n cities, typically denoted TSPn, is represented as a complete, undirected, weighted graph G = (V, E) with n nodes. The edges E represent connections between cities and are weighted by the Euclidean distance between the adjacent nodes. A solution, or *tour*, is a Hamiltonian cycle: a closed path through the graph that visits every node exactly once. An optimal solution is a cycle of minimum weight.
18
+
19
+ **Regret** Regret measures the future cost of an action that also yields an immediate reward, and is typically used to make greedy combinatorial optimization algorithms less myopic. Generally, regret is computed over a fixed horizon. For example, [Potvin & Rousseau](#page-10-9) [\(1993\)](#page-10-9) evaluate the cost of inserting a node in the best versus second-best position when constructing a CVRP solution. [Hassin](#page-10-10) [& Keinan](#page-10-10) [\(2008\)](#page-10-10) solve the TSP using regret, allowing a greedy construction heuristic to remove the most recently inserted node. It is not computationally feasible to calculate regret over a global horizon, for example, for all possible insertion sequences. However, if it were possible, a greedy algorithm could compute an optimal solution by selecting the lowest regret decision at each step.
20
+
21
+ **Local Search** Local Search (LS) is a general improvement heuristic. Starting from an initial solution, local search iteratively moves to *neighboring* solutions that are lower cost than the current solution according to the objective function g(s). Neighboring solutions are solutions that are reachable from a given solution by applying a certain function, or *operator*. The set of all solutions reachable from another solution by applying an operator defines the neighborhood of that operator. The algorithm terminates when all neighboring solutions are inferior to the current solution, meaning the search has reached a local optimum.
22
+
23
+ **Guided Local Search** Guided Local Search (GLS) is a metaheuristic that sits on top of LS and allows it to escape local optima [\(Voudouris & Tsang, 1996\)](#page-11-6). To apply GLS, the designer must define some *aspects* of a solution. When trapped in a local optimum, the algorithm penalizes certain aspects of the current solution which are considered to be unfavorable. The underlying LS procedure searches using an objective function that is augmented by these penalties, thus it is incentivized to remove heavily penalized aspects from the solution. The augmented objective function h(s) is
24
+
25
+ $$h(s) = g(s) + \lambda \sum_{i=0}^{M} p_i I_i(s), \tag{1}$$
26
+
27
+ where s is a solution, g(s) is the objective function, $\lambda$ is a scaling parameter, i indexes the aspects, M is the number of aspects, $p_i$ is the current number of penalties assigned to aspect i, and $I_i$ is an indication of whether s exhibits aspect i, i.e.
28
+
29
+ $$I_i(s) = \begin{cases} 1 & \text{if } s \text{ exhibits aspect } i, \\ 0 & \text{otherwise,} \end{cases} \tag{2}$$
31
+
32
+ For the TSP, aspects of the solution are often defined as the edges in the problem graph. Therefore, $I_i(s)$ indicates if an edge is in the solution s. Upon reaching a local optimum, which aspects are penalized is determined by a utility function. The utility of penalizing aspect i, $\operatorname{util}_i$, is defined as
33
+
34
+ $$\operatorname{util}_{i}(s_{*}) = I_{i}(s_{*}) \frac{c_{i}}{1 + p_{i}},\tag{3}$$
35
+
36
+ where $s_*$ is the solution at a local optimum, $I_i(s_*)$ indicates if the solution exhibits aspect i, and $c_i$ is the cost of the aspect i. The cost of an aspect measures how unfavorable it is. The higher the cost, the greater the utility of penalizing that aspect. In the context of the TSP, the cost can be the weight of the edge (Voudouris & Tsang, 1999), or a combination of various features (Arnold & Sörensen, 2019a). Conversely, the more penalties assigned to that aspect, the lower the utility of penalizing it again. The aspects with the maximum utility are always penalized, which means increasing $p_i$ by one. This penalization mechanism distributes the search effort in the search space, favoring areas where promise is shown (Voudouris & Tsang, 1996).
37
+
38
+ We use a variation of the classical GLS algorithm (see Voudouris & Tsang, 1999) that applies alternating optimisation and perturbation phases (see Arnold & Sörensen, 2019a). During an optimization phase, the local search procedure is guided by the original objective function g. During a perturbation phase, it is guided by the augmented objective function h. After an edge is penalized, the local search is applied only on the penalized edge. That is, only operations that would remove the penalized edge are considered. This differs from the local search during the optimization phase, which considers all solutions in the neighborhood of the given operator. The perturbation phase continues until K operations (operations that improve the solution according to the augmented objective function h) have been applied to the current solution. These operations perturb the solution enough for the local search to escape a local minimum. The alternating phases continue until the stopping condition is met.
39
+
40
+ Our hybrid method, shown in Figure 1, combines a machine learning model and a metaheuristic. Our GNN-based model learns an approximation of the *global regret* of including each edge of the problem graph in the solution. The metaheuristic, GLS, uses this learned regret in conjunction with the original problem graph to quickly find high-quality solutions. The learned regret allows the algorithm to differentiate between edges which are costly to include in the solution and ones that are not so costly, thus improving its ability to steer the underlying local search procedure out of local minima and towards promising areas of the solution space.
41
+
42
+ <span id="page-3-0"></span>![](_page_3_Figure_10.jpeg)
43
+
44
+ Figure 1: From a TSP formulated as a graph, we take the line graph (a) and input it into our regret approximation model (b), which predicts the regret of including each edge in the solution. GLS (c) uses these predictions in conjunction with the original problem graph to quickly find a high-quality solution.
45
+
46
+ We define *global regret* as the cost of requiring a certain decision to be part of the solution relative to the cost of a globally optimal solution. Unlike previous heuristics using regret, which calculate the cost of a decision relative to some fixed number of alternatives (for example, the next best, or two next best options), our regret is measured relative to a *global optimal solution*. Decisions that are part of an optimal solution have zero regret, and all other decisions have positive regret. Mathematically,
47
+
48
+ <span id="page-4-1"></span>
49
+ $$r_i = \frac{g(s_i^*)}{g(s^*)} - 1, \tag{4}$$
50
+
51
+ where $r_i$ is the regret of decision i, g is the objective function, $s_i^*$ is an optimal solution with i fixed, and $s^*$ is a globally optimal solution. With perfect information, a greedy algorithm could construct an optimal solution by sequentially selecting the lowest-regret decisions.
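+
+ As a rough illustration of Eq. (4) (not part of our method), the following sketch computes the exact global regret of a single edge, assuming a hypothetical `solve_tsp` routine that returns the cost of an optimal tour and can optionally force one edge into the solution. This is precisely the intractable computation that we approximate with a learned model below.
+
+ ```python
+ def edge_regret(graph, edge, solve_tsp):
+     """Exact global regret of requiring `edge` in the tour (Eq. 4).
+
+     `solve_tsp` is a hypothetical exact solver: solve_tsp(graph) returns the
+     optimal tour cost g(s*), and solve_tsp(graph, fixed_edge=edge) returns the
+     cost g(s_i*) of the best tour that contains `edge`.
+     """
+     optimal_cost = solve_tsp(graph)
+     constrained_cost = solve_tsp(graph, fixed_edge=edge)
+     return constrained_cost / optimal_cost - 1.0
+ ```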
52
+
53
+ In the TSP, decisions correspond to which edges are included in the solution, thus regret is defined as the cost of requiring that a certain edge be part of the solution. We posit that using regret is preferable to directly classifying which edges are part of the optimal solution (which is the approach taken by Joshi et al., 2019). Where classification can produce a probability that the edge is part of the *optimal* solution, regret can differentiate between edges that are very costly to have as part of the solution and ones that are not so costly, whether or not they are part of the optimal solution. Thus, using regret furthers our goal of finding high-quality, rather than optimal, solutions with minimal computation time.
54
+
55
+ Calculating the global regret of an edge in the TSP graph requires solving the TSP itself, which is computationally intractable. Instead, we aim to learn a function $\hat{r}_{ij}$ that approximates the regret of an edge $r_{ij}$ . We use GNNs to achieve this, as they are universal function approximators that operate on graph-structured data.
56
+
57
+ **Input transformation** Typically, GNNs aggregate messages and store states on the nodes of a graph (Gilmer et al., 2017). Instead, we input the line graph of the original problem graph into our model. The line graph L(G) of an undirected graph G is a graph such that there exists a node in L(G) for every edge in G, and that for every two edges in G that share a node, there exists an edge between their corresponding nodes in L(G). Figure 2 illustrates this transformation for a simple graph. The result is that our model aggregates messages and stores states on the edges of the problem graph (the nodes of the line graph). Primarily, this allows us to build models with no node features, which is advantageous as the edge weights, not the specific node locations, are relevant when solving the TSP. For a complete, undirected graph G with n nodes, there are n(n-1)/2 nodes and n(n-1)(n-2)/2 edges in L(G). Thus, although G is complete, L(G) can be very sparse.
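+
+ As a minimal sketch of this transformation (using `networkx` purely for illustration; this is not necessarily the tooling used in our implementation), the line graph of a small complete graph can be built as follows, with the edge weights of G carried over as node features of L(G):
+
+ ```python
+ import networkx as nx
+
+ # Small complete problem graph with placeholder edge weights.
+ G = nx.complete_graph(5)
+ for u, v in G.edges:
+     G[u][v]["weight"] = abs(u - v)
+
+ # Line graph: one node per edge of G; two nodes are adjacent iff the
+ # corresponding edges of G share an endpoint.
+ L = nx.line_graph(G)
+ for u, v in L.nodes:  # each node of L(G) is an edge (u, v) of G
+     L.nodes[(u, v)]["weight"] = G[u][v]["weight"]
+
+ print(L.number_of_nodes(), L.number_of_edges())  # 10 and 30 for n = 5
+ ```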
58
+
59
+ <span id="page-4-0"></span>![](_page_4_Picture_8.jpeg)
60
+
61
+ Figure 2: An example of a graph and the corresponding line graph. The edges in G are the nodes in L(G), and vice-versa.
62
+
63
+ **Model architecture** Our model consists of an embedding layer, several GNN layers, and an output layer. The embedding layer is an edge-wise fully connected layer that computes $d_h$ -dimensional edge embeddings from $d_x$ -dimensional edge features. Node features, if used, are concatenated onto the feature set of the adjacent edges. The forward pass of the embedding layer is written as
64
+
65
+ $$\mathbf{h}_{ij}^0 = \mathbf{W}\mathbf{x}_{ij} + \mathbf{b},\tag{5}$$
66
+
67
+ where $\mathbf{h}_{ij}^0$ is the initial embedding of edge ij, $\mathbf{W}$ is a learnable weight matrix, $\mathbf{x}_{ij}$ are the input features of edge ij (including any node features), and $\mathbf{b}$ is a set of learnable biases. We use $d_x = 1$ and $d_h = 128$. The edge embeddings are updated using T message passing layers.
68
+
69
+ Inspired by the encoder layer of Kool et al. (2018), each layer consists of multiple sublayers. The forward pass is given by
70
+
71
+ $$\dot{\mathbf{h}}_{ij}^{t+1} = f_{\mathsf{BN}}^t(\mathbf{h}_{ij}^t + f_{\mathsf{MHA}}^t(\mathbf{h}_{ij}^t, L(G))), \tag{6}$$
72
+
73
+ $$\mathbf{h}_{ij}^{t+1} = f_{\text{BN}}^t (\dot{\mathbf{h}}_{ij}^{t+1} + f_{\text{FF}}^t (\dot{\mathbf{h}}_{ij}^{t+1})), \tag{7}$$
74
+
75
+ where $f_{\rm MHA}$ is a multi-headed GAT layer (Veličković et al., 2017), $f_{\rm FF}$ is a feedforward layer, $f_{\rm BN}$ is batch normalisation (Ioffe & Szegedy, 2015), and $\dot{\mathbf{h}}_{ij}^{t+1}$ is an intermediate hidden state. The layers do not share parameters. The GAT layer uses M=8 heads and dimensionality $d_h/M=16$, and the FF layer uses one hidden sublayer with dimensionality 512 and ReLU activation. Finally, the output layer is a single edge-wise fully connected layer that computes a one-dimensional output from the $d_h$-dimensional node embeddings computed by the final message passing layer. This is written as
76
+
77
+ $$\hat{r}_{ij} = \mathbf{W} \mathbf{h}_{ij}^T + \mathbf{b},\tag{8}$$
78
+
79
+ where $\hat{r}_{ij}$ is the output for edge ij and $\mathbf{h}_{ij}^T$ is the final embedding of that edge.
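+
+ The following condensed PyTorch sketch mirrors this architecture at a high level. To stay short and self-contained it replaces the multi-headed GAT sublayer with standard multi-head attention restricted by the line-graph adjacency mask, so it is an approximation of the described model rather than our exact implementation; the dimensions follow the text ($d_x = 1$, $d_h = 128$, 8 heads, feedforward width 512).
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class RegretModel(nn.Module):
+     """Sketch: edge embedding, T attention/feedforward message-passing layers
+     with batch norm and residual connections, and a linear per-edge output."""
+
+     def __init__(self, d_x=1, d_h=128, n_layers=3, d_ff=512, n_heads=8):
+         super().__init__()
+         self.embed = nn.Linear(d_x, d_h)
+         self.attn = nn.ModuleList([nn.MultiheadAttention(d_h, n_heads, batch_first=True)
+                                    for _ in range(n_layers)])
+         self.ff = nn.ModuleList([nn.Sequential(nn.Linear(d_h, d_ff), nn.ReLU(),
+                                                nn.Linear(d_ff, d_h))
+                                  for _ in range(n_layers)])
+         self.bn1 = nn.ModuleList([nn.BatchNorm1d(d_h) for _ in range(n_layers)])
+         self.bn2 = nn.ModuleList([nn.BatchNorm1d(d_h) for _ in range(n_layers)])
+         self.out = nn.Linear(d_h, 1)
+
+     def forward(self, x, blocked):
+         # x: (batch, E, d_x) edge weights of G; blocked: (E, E) bool mask over
+         # L(G), True where two edges of G share no endpoint (no message passed).
+         h = self.embed(x)
+         for attn, ff, bn1, bn2 in zip(self.attn, self.ff, self.bn1, self.bn2):
+             a, _ = attn(h, h, h, attn_mask=blocked)
+             h = bn1((h + a).transpose(1, 2)).transpose(1, 2)
+             h = bn2((h + ff(h)).transpose(1, 2)).transpose(1, 2)
+         return self.out(h).squeeze(-1)  # predicted regret per edge of G
+ ```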
80
+
81
+ We adapt GLS to use regret to solve the TSP, including how the initial solution is built, the local search procedure, and the perturbation strategy. Our GLS uses alternating optimization and perturbation phases. During an optimization phase, the local search procedure greedily accepts changes to the solution until it reaches a local minimum. During a perturbation phase, the algorithm penalizes and attempts to remove edges in the current solution with high regret, thus allowing it to escape local minima while simultaneously guiding it towards promising areas of the solution space, i.e., those with low regret. Effectively, the regret predictions produced by our model allow our GLS algorithm to undo costly decisions made during the greedy optimization phase.
82
+
83
+ **Initial solution** We use a greedy nearest neighbor algorithm to construct an initial solution. Beginning from the origin node we iteratively select the lowest-regret edge leading to an unvisited node, until all nodes have been visited.
84
+
85
+ **Local Search neighborhoods** Our LS procedure uses two solution operators for the TSP, *relocate* and 2-opt. It alternates between using either operator, and uses a "best improvement" strategy, meaning that it exhaustively searches the neighborhood corresponding with the current operator and accepts the solution that improves the objective function the most before continuing with the other operator. The algorithm terminates when no improvement can be found in either neighborhood. The relocate operator simply changes the position of a single node in the tour. The 2-opt operator selects two nodes in the tour to swap. This divides the tour into three segments: an initial segment, an intermediate segment, and a final segment. The tour is reassembled beginning with the initial segment, the intermediate segment in reverse order, and the final segment. It is a special case of the $\kappa$ -opt operator (Lin & Kernighan, 1973), although it was introduced earlier (Croes, 1958).
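+
+ A compact sketch of the 2-opt neighborhood with a best-improvement search is shown below; it is illustrative only and omits the relocate operator and the incremental cost updates a practical implementation would use. `dist` is assumed to be a symmetric distance matrix indexed as `dist[u][v]`.
+
+ ```python
+ def tour_cost(tour, dist):
+     return sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))
+
+ def two_opt_move(tour, i, j):
+     """Reverse the intermediate segment tour[i:j+1]; the rest is unchanged."""
+     return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
+
+ def two_opt_best_improvement(tour, dist):
+     """Exhaustively search the 2-opt neighborhood, apply the best move, repeat."""
+     while True:
+         best_cost, best_tour = tour_cost(tour, dist), None
+         for i in range(1, len(tour) - 1):
+             for j in range(i + 1, len(tour)):
+                 candidate = two_opt_move(tour, i, j)
+                 cost = tour_cost(candidate, dist)
+                 if cost < best_cost:
+                     best_cost, best_tour = cost, candidate
+         if best_tour is None:  # local optimum: no improving move found
+             return tour
+         tour = best_tour
+ ```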
86
+
87
+ **Perturbation strategy** We define the cost of an edge as the predicted regret $\hat{r}_{ij}$ of that edge. The utility of penalizing edge ij, util<sub>ij</sub>, is therefore defined as
88
+
89
+ $$\operatorname{util}_{ij}(s_*) = I_{ij}(s_*) \frac{\hat{r}_{ij}}{1 + p_{ij}}, \tag{9}$$
90
+
91
+ where we remind the reader that $s_*$ is the solution at a local optimum, $I_{ij}(s_*)$ indicates if the solution contains edge ij, and $p_{ij}$ is the number of penalties assigned to that edge. The edges of maximum utility are penalized. Afterward, the local search is applied *only* on the penalized edge. That is, only operations that would remove the penalized edge are considered.
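+
+ Putting these pieces together, the following sketch outlines the alternating optimization and perturbation phases of our regret-guided GLS at a high level. The `local_search` argument stands for the relocate/2-opt procedure sketched above and is assumed to accept an optional penalty map; details such as the scaling parameter $\lambda$, restricting perturbation moves to the penalized edge, and the K-operation stopping rule are simplified here.
+
+ ```python
+ def tour_edges(tour):
+     return [tuple(sorted((tour[k], tour[(k + 1) % len(tour)])))
+             for k in range(len(tour))]
+
+ def guided_local_search(graph, regret, tour, local_search, cost, n_iters=100):
+     """Sketch of regret-guided GLS. `regret[(i, j)]` is the predicted regret of
+     edge ij; `cost(graph, tour)` evaluates the original objective g."""
+     penalties, best = {}, tour
+     for _ in range(n_iters):
+         tour = local_search(graph, tour)  # optimization phase, guided by g
+         if cost(graph, tour) < cost(graph, best):
+             best = tour
+         # Perturbation phase: penalize the edge of maximum utility (Eq. 9).
+         edge = max(tour_edges(tour),
+                    key=lambda e: regret[e] / (1 + penalties.get(e, 0)))
+         penalties[edge] = penalties.get(edge, 0) + 1
+         tour = local_search(graph, tour, penalties=penalties)  # guided by h
+     return best
+ ```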
2110.05877/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2110.05877/main_diagram/main_diagram.pdf ADDED
Binary file (88.2 kB). View file
 
2110.05877/paper_text/intro_method.md ADDED
@@ -0,0 +1,19 @@
1
+ # Introduction
2
+
3
+ According to the World Federation of the Deaf, there are approximately 72 million Deaf people worldwide. More than 80% of them live in developing countries. Collectively, they use more than 300 different sign languages varying across different nations [@UN_SL_Day]. Loss of hearing severely limits the ability of the Deaf to communicate and thereby adversely impacts their quality of life. In the current increasingly digital world, systems to ease digital communication between Deaf and hearing people are important accessibility aids. AI has a crucial role to play in enabling this accessibility with automated tools for Sign Language Recognition (SLR). Specifically, transcription of sign language as complete sentences is referred to as Continuous Sign Language Recognition (CSLR), while recognition of individual signs is referred to as Isolated Sign Language Recognition (ISLR). There have been various efforts to build datasets and models for ISLR and CSLR tasks [@Adaloglou_2021; @koller2020quantitative]. But these results are often concentrated on a few sign languages (such as the American Sign Language) and are reported across different research communities with few standardized baselines. When compared against text- and speech-based NLP research, the progress in AI research for sign languages is significantly lagging. This lag has been recently brought to the notice of the wider NLP community [@yin2021including].
4
+
5
+ For most sign languages across the world, the amount of labelled data is very low and hence they can be considered *low-resource languages*. In the NLP literature, many successful templates have been proposed for such low-resource languages. In this work, we adopt and combine many of these ideas from NLP and apply them to sign language research. We implement these ideas and release several datasets and models in an open-source library, `OpenHands`, with the following key contributions:
6
+
7
+ **1. Standardizing on pose as the modality:** For natural language understanding (NLU) tasks, such as sentiment classification, it is standard to use a pretrained encoder, such as BERT. This task-agnostic encoder significantly reduces the need for labelled data on the NLU task. Similarly for SLR tasks, we propose to standardize on a pose-extractor as an encoder, which processes raw RGB videos and extracts the frame-wise coordinates for a few keypoints. Pose-extractors are useful across sign languages and also other tasks such as action recognition [@AAAI1817135; @liu2020disentangling], and can be trained to high accuracy. Further, as we report, pose as a modality makes both training and inference for SLR tasks efficient. We release pose-based versions of existing datasets for 6 sign languages: American, Argentinian, Chinese, Greek, Indian, and Turkish.
8
+
9
+ **2. Standardized comparison of models across languages:** The progress in NLP has been earmarked by the release of standard datasets, including multilingual datasets like XGLUE [@liang2020xglue], on which various models are compared. As a step towards such standardization for ISLR, we train 4 different models spanning sequence models (LSTM and Transformer) and graph-based models (ST-GCN and SL-GCN) on 7 different datasets for sign languages mentioned above, and compare them against models proposed in the literature. We release all 28 trained models along with scripts for efficient deployment which demonstrably achieve real-time performance on CPUs and GPUs.
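+
+ As a concrete (and deliberately minimal) example of this model family, the sketch below shows an LSTM classifier over frame-wise pose keypoints; the keypoint count, hidden size, and number of classes are illustrative placeholders and do not correspond to the exact configurations released in `OpenHands`.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class PoseLSTMClassifier(nn.Module):
+     """ISLR sketch: (batch, frames, keypoints, coords) pose input -> sign logits."""
+
+     def __init__(self, n_keypoints=27, n_coords=2, hidden=256, n_classes=100):
+         super().__init__()
+         self.lstm = nn.LSTM(n_keypoints * n_coords, hidden, batch_first=True)
+         self.head = nn.Linear(hidden, n_classes)
+
+     def forward(self, pose):
+         x = pose.flatten(start_dim=2)   # (batch, frames, keypoints * coords)
+         _, (h_n, _) = self.lstm(x)      # final hidden state summarises the clip
+         return self.head(h_n[-1])       # (batch, n_classes)
+
+ logits = PoseLSTMClassifier()(torch.randn(8, 64, 27, 2))  # toy forward pass
+ ```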
10
+
11
+ **3. Corpus for self-supervised training:** A defining success in NLP has been the use of self-supervised training, for instance masked-language modelling [@devlin2018bert], on large corpora of natural language text. To apply this idea to SLR, we need similarly large corpora of sign language data. To this end, we curate 1,129 hours of video data on Indian Sign Language. We pre-process these videos with a custom pipeline and extract keypoints for all frames. We release this corpus which is the first such large-scale sign language corpus for self-supervised training.
12
+
13
+ **4. Effectiveness of self-supervised training:** Self-supervised training has been demonstrated to be effective for NLP: Pretrained models require small amounts of fine-tuning data [@devlin2018bert; @baevski2020wav2vec] and multilingual pretraining allows crosslingual generalization [@hu2020xtreme]. To apply this for SLR, we evaluate multiple strategies for self-supervised pretraining of ISLR models and identify those that are effective. With the identified pretraining strategies, we demonstrate the significance of pretraining by showing improved fine-tuning performance, especially in very low-resource settings, and also show high crosslingual transfer from Indian SL to other sign languages. This is the first successful attempt to establish the effectiveness of self-supervised learning in SLR. We release the pretrained model and the fine-tuned models for 4 different sign languages.
14
+
15
+ Through these datasets, models, and experiments we make several observations. First, in comparing standardized models across different sign languages, we find that graph-based models working on pose modality define state-of-the-art results on most sign languages. LSTM-based models lag on accuracy but are significantly faster and thus appropriate for constrained devices. Second, we firmly establish that self-supervised pretraining helps as it improves on equivalent models trained from scratch on labelled ISLR data. The performance gap is particularly high if the labelled data contains fewer samples per label, i.e., for the many sign languages which have limited resources the value of self-supervised pretraining is particularly high. Third, we establish that self-supervision in one sign language (Indian SL) can be crosslingually transferred to improve SLR on other sign languages (American, Chinese, and Argentinian). This is particularly encouraging for the long tail of over 300 sign languages that are used across the globe. Fourth, we establish that for real-time applications, pose-based modality is preferable over other modalities such as RGB, use of depth sensors, etc. due to reduced infrastructure requirements (only camera), and higher efficiency in self-supervised pretraining, fine-tuning on ISLR, and inference. We believe such standardization can help accelerate dataset collection and model benchmarking. Fifth, we observe that the trained checkpoints of the pose-based models can be directly integrated with pose estimation models to create a pipeline that can provide real-time inference even on CPUs. Such a pipeline can enable the deployment of these models in real-time video conferencing tools, perhaps even on smartphones.
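+
+ A rough sketch of such a pipeline is shown below. It assumes MediaPipe Holistic as the pose extractor and reuses the `PoseLSTMClassifier` sketch above (with 33 pose keypoints and an already-trained checkpoint); buffering, keypoint selection, and normalisation are omitted, so this is a simplified outline rather than the exact deployment script shipped with `OpenHands`.
+
+ ```python
+ import cv2
+ import mediapipe as mp
+ import torch
+
+ holistic = mp.solutions.holistic.Holistic(static_image_mode=False)
+ model = PoseLSTMClassifier(n_keypoints=33)  # load trained weights in practice
+ model.eval()
+
+ frames, capture = [], cv2.VideoCapture(0)   # webcam stream
+ while len(frames) < 64:                     # collect a fixed-length window
+     ok, frame = capture.read()
+     if not ok:
+         break
+     result = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
+     if result.pose_landmarks:
+         frames.append([(lm.x, lm.y) for lm in result.pose_landmarks.landmark])
+ capture.release()
+
+ with torch.no_grad():
+     pose = torch.tensor(frames).unsqueeze(0)  # (1, frames, 33, 2)
+     sign_id = model(pose).argmax(dim=-1)      # predicted sign label
+ ```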
16
+
17
+ As mentioned, all datasets and models are released under permissive licenses in `OpenHands` with the intention to make SLR research more accessible and standardized. We hope that others contribute datasets and models to the library, especially representing the diversity of sign languages used across the globe.
18
+
19
+ The rest of the paper is organized as follows. In Section [2](#sec:background) we present a brief overview of the existing work. In Section [3](#sec:standardization) we describe our efforts in standardizing datasets and models across six different sign languages. In Section [4](#sec:ssl_islr) we explain our pretraining corpus and strategies for self-supervised learning and detail results that establish its effectiveness. In Section [5](#sec:slr_library) we describe in brief the functionalities of the `OpenHands` library. In Section [6](#sec:future_works), we summarize our work and also list potential follow-up work.
2110.06257/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-07-02T11:59:20.637Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" version="14.8.2" etag="plpqXhnnqXZjHE6qq4cY" type="google"><diagram id="z3TIZFfyBJidhV7hNnhL">7V1bb6M4FP41kXZfKu6Xx95mdqWpVKkj7e7TiAQ3YZbgLHGapL9+DTEJ2CZDW2ynhpc2PoAD3/mOjz9fyMS+Xe6+5tFq8QBjkE4sI95N7LuJZYW2h/8Whv3B4IT2wTDPk/hgMk+Gp+QVEKNBrJskBuvGiQjCFCWrpnEGswzMUMMW5TncNk97hmnzW1fRHDCGp1mUsta/khgtDtbA8k/2P0AyX1TfbHrh4cgyqk4mVawXUQy3B1P5cPb9xL7NIUSHT8vdLUgL7CpcDgh8aTl6vLEcZKjLBdbhgpco3ZBnI/eF9tXDghg/OynCHC3gHGZRen+y3uRwk8WgqNHApdM53yBcYaOJjT8BQnviyGiDIDYt0DIlR59hhshB08Plwz0UX9wEC27yGTERwqAon4MK5oB9evOIKeYigEuA8j0+JQdphJKXZvURYcX8eN4JOPyBYMfH0f41jk2UtosEgadVVD7PFkcJhUiSprcwhXl5rf0czMBshu1rlMN/Qe3INHAd12jF8AXkCOxaydECD7kgIOG2bxa3J8o7DrEtanSvbB/B02HwNBhA8YOhJmpNdDKYAQpKYorSZJ7h4gwDAbD9poApwfF9TQ4skzguqc1zU9OR4nE3LRZ4k4O71QPuLoP79+Hg7prqgPcY4K+uroYDvRW6yqD3P2sODNgcWOGhIgcGYnNg7IIgdng5MLCmtufJaYs9iUkwHJPgEXjLkNcgVPWOWVA68uaYBpVh/2m1oMkRg9XNq0iEpmA1GEcgeOaqQW8WgOmznAbZkZgJzVEP1pDnCHFhjcIoCFUhPypCddj7KjIf2CXo7+LyK5eU/qkduduRmsvCvipk+NFqFxXF41VF4XRZWdqfddj5LMuRm56rMMsK1ptqxlw9o9nk8AZdRSXZcGQ9w/qqOWmw3lPHeovVpr2yXs0oC8167jCLINofvT7SvkZ7i0N7v7Nb+6d9B3H6+SQVTXuuphJFe1akPnx7HE7/0nXMBvYdoe+je8mZ3bxfTkEcJ9l8SA5okt82JXqAlbW9Nidq+o4+JVf9boDafQDKqlUNuiU0oKYrEdEO05KfL+PRiB6hkoGoloKR4ShnykAYouwMpY5R31GD94FoVYfmUd9R3vWCKDul+Bgl+TZZg0m5QnQWIZBhtQIzBmlt+16er67vZbOMlqi3q89vUdt1rV2T3v2pbZszgSlikKm89DrPo33thBVMMrSu1fxYGE5cCVqawy/vOx9/ONzBiSzHR+nGHy3X2vquuq6m7YwRSUeky4lIAeNfvURklU67RiR1/scjUk81TUWkTKlii5bTarqBdBsns2Otp5ymEZUo/mwt1/syUS9TqiidgL3IPFxBLXrRQUvidFrCq6ricFvkqpOj35zQLf73tCb08+d/OKFXj613aEscHnfYQYheAVXTQwp9dYBaOjKUBlSmCHS0XLRMIyqzE++w07waBr3MLqcjWGheSNRLlEWOlkKTiXqJnXjn/NphMj8gpNd+Wkumvt9OpEy93+47nf3Y+/oxJxi9wvdKoPCtGdX96JUhTUPhspHqq4ZNdNfiND8Kie6eX2M8YK+ElkKvCBaRF9L8yNQ8rj0SHaPAmT30FW4acc9P8A7YK6Gh0Cui1awS7cX2fiTKWdcbiY5R4OzU9FXm2VESt3glUJkURM/fXkjzI3F80j0/gTsQonsmh+gK86w3SuIWr4QKR+Sq+9FrLJ8VXxKHnr1fIaghqx1OV0dlWzPq3xavBCrbGsEzsRfS1khcf+FpueQ36PDGaw6gfWzH97TcQEsDKvO9Hp6WK35pRGW+MsJjFeP1fJ6DeYQAA622e+fo3RQy985VvyNBv7QDV40fzSi8MRxH+FQocCLhKP57f2e44BWyiqaOzEqVve2lKH30KHw99aBlqEPUGgRHQ4mIarlGluGoTET1VGY0R02ZYa+lNGNIKhVSLcUZy1KZga+lOmNZKhNSPSf0aJZK7ULpuUCVZqlMSAPByulSWCox8INhSCepkLLaqdh0jS1ovyr+xQkGI5luhvXeKNOg9tDanN9pCzk+6WPMJeCpL6wfnP9+e/0x8W+SnxMfG42Jf1vesXuPjRj86++lvTAavxfHy4s6OK27M3KwTl6jaVlVAT3Z8I3rdW8m7l1R1wbBNXFGKx9EtEuG0+yQcAbJhDlMyw2QJv17LDKTp5YbIE3bUoeoYGl3IRyVmTr1VHY0R2UiquWbdhiOyhzRCfQUdjRJZUJaBYTuLJUY+KFoYXcZTalUSFlhpwOknX6dURSkWr6QlWGpVEhFT4tdCEtlBv4wpJNUSHna6W0DMubgBmSOyCsYkAlZafZnttqgDtBrMoD5zrGbXsBnVVzthwgG+ANQYdj0RcfXFLzDF7iYQ4hqx77iZ1w8wBgUZ/wP</diagram></mxfile>
2110.06257/main_diagram/main_diagram.pdf ADDED
Binary file (15.9 kB). View file
 
2110.06257/paper_text/intro_method.md ADDED
@@ -0,0 +1,449 @@
1
+ # Introduction
2
+
3
+ Deep learning has achieved profound success in vision and language modelling (Brown et al., 2020; Nichol et al., 2022). Still, it remains a grand challenge for deep neural networks to perform causal discovery (Yi et al., 2020; Girdhar & Ramanan, 2020; Sauer & Geiger, 2021), which is critical for interpretability, generalization, and robustness (Lake et al., 2017; Schölkopf et al., 2021). Theoretically, structure identifiability is key to ensure a unique correspondence between observations and the underlying causal structures (Peters et al., 2017).
4
+
5
6
+
7
+ Practically, better algorithms are needed to accurately extract the causal structure from data.
8
+
9
+ In time series analysis, causal discovery identifies the underlying temporal causal structure of the observed sequences. Historical approaches rely on a restrictive assumption: stationarity (Granger, 1969; Peters et al., 2017; Tank et al., 2021; Löwe et al., 2022), while real-world data is often nonstationary with potential hidden confounders. To address this issue, three major strategies have been recently proposed: (1) modelling nonstationary noise (Huang et al., 2020; Gong et al., 2023); (2) introducing time-dependent effects with a fixed causal structure (Huang et al., 2015; 2019); and (3) using discrete latent variables to capture structural changes over time (Saggioro et al., 2020). Despite these advances, causal discovery in nonstationary time series under realistic assumptions remains an open challenge.
10
+
11
+ This work addresses causal discovery for nonstationary time series based on a much relaxed assumption, *conditionally stationary time series*, where the dynamics of the observed system change depending on a set of discrete "state" variables. These causal structures can change not only over time, but also across samples (Löwe et al., 2022). This assumption holds for many real-world scenarios, such as individuals whose different decisions depend on mood, past experiences, or interactions with others. The causal discovery task for such conditionally stationary time series poses different challenges depending on the observability and dependency of the states, which we classify into 3 scenarios:
12
+
13
+ States observed/independent: The states are observed and/or independent of observations (Fig. 1a). Structure identifiability can be established for both cases by Peters et al. (2013), and Balsells-Rodas et al. (2024), respectively.
14
+
15
+ States determined: The states are hidden but depend on observations directly. In Fig. 1b, the states are determined by the balls' positions (pink vs purple regions).
16
+
17
+ States recurrent: The states depend on historical events. E.g., in Fig. 1c, particles have state changes upon collision. Also in a football game a player acts differently based on earlier actions of the others.
18
+
19
+ We propose a novel framework to tackle both the theoretical and algorithmic challenges for causal discovery from conditionally stationary time series in states determined and recurrent cases.
20
+
21
+ <sup>1</sup> Imperial College London <sup>2</sup>University of Dundee <sup>3</sup>KTH Royal Institute of Technology. Correspondence to: Carles Balsells-Rodas <cb221@imperial.ac.uk>.
22
+
23
+ ![](_page_1_Figure_1.jpeg)
24
+
25
+ Figure 1: Graphical representations of the data generation processes. $x_t$ represents the observations of a time series, and $s_t$ denotes the state variables. The states affect observations by changing the causal structure $f_t$ for different state values.
26
+
27
+ Our contributions are summarised as follows, providing advances in both identifiability and estimation:
28
+
29
+ - We introduce conditional summary graph as a compact representation of the underlying causal graph structure for conditionally stationary time series. This efficiently addresses the exponential complexity of full-time graph in summarising the causal structure of time series data with nonstationary interactions between variables.
30
+ - We establish identifiability for the conditional summary graph and related structural properties for conditionally stationary time series satisfying "state determined" (Fig. 1b) or "state recurrent" (Fig. 1c) assumptions.
31
+ - We propose <u>State-Dependent Causal Inference</u> (SDCI) as a practical algorithm to extract the conditional summary graphs and model state dependencies, based on discrete latent variable models and graph neural networks.
32
+ - We validate SDCI on semi-synthetic data based on physical and biological systems, and real-world datasets. Compared to baselines including GNN and RNN-based approaches, SDCI achieves superior performance in identifying the underlying nonstationary structures in gene regulatory networks; and forecasting future trajectories from observations on NBA player movements.
33
+
34
+ Causally-driven time series are often modelled using structural causal models (SCM; (Pearl, 2009; Peters et al., 2017)) for describing data generation processes. Consider N sequences of length T, denoted by $\boldsymbol{x}_{1:T}$ , where $\boldsymbol{x}_t^{(i)} \in \mathbb{R}^d$ denotes the features of the i-th sequence at time t; which can incorporate high-order moments, e.g velocity, acceleration. The data dependencies in a temporal SCM are structured via a causal graph, known as the *full time graph*, $\mathcal{G}_{1:T}^{FT}$ .
35
+
36
+ For simplicity, we assume no hidden confounders, no instantaneous effects, and a first-order Markov property.
37
+
38
+ In stationary time series, the full time graph is static across time steps. This allows us to define the *summary graph* $\mathcal{G}^S = \{\mathcal{V}, \mathcal{E}^S\}$, where $\mathcal{V} = \{1, \dots, N\}$ and an edge $i \to j$ exists in $\mathcal{E}^S$ if there is an edge from $\boldsymbol{x}_t^{(i)}$ to $\boldsymbol{x}_{t'}^{(j)}$ in $\mathcal{G}_{1:T}^{FT}$ for some t < t'. The identifiability of both $\mathcal{G}_{1:T}^{FT}$ and $\mathcal{G}^S$ is guaranteed under *Time Series Models with Independent Noise* (TiMINo; (Peters et al., 2013)). By further assuming an additive noise model (ANM), the SCM becomes:
39
+
40
+ $$\boldsymbol{x}_{t}^{(j)} = f^{(j)} \left( \boldsymbol{x}_{t-1}^{(i)} | \boldsymbol{x}_{t-1}^{(i)} \in \mathbf{PA}^{(j)}(t-1) \right) + \boldsymbol{\varepsilon}_{t}^{(j)}, \quad (1)$$
41
+
42
+ where $\mathbf{PA}^{(j)}(t-1)$ denotes the parents<sup>1</sup> of $\mathbf{x}_t^{(j)}$ and $\mathbf{\epsilon}_t^{(j)} \sim p_{\varepsilon}$ represents independent noise. In this setting, $\mathbf{PA}^{(j)}(t-1)$ is constant in time and aligns with the summary graph $\mathcal{G}^S$ .
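+
+ As a purely illustrative instance of Eq. (1), the snippet below simulates a stationary first-order additive-noise process for N scalar variables whose parent sets are given by a fixed summary graph; the linear mechanisms and Gaussian noise are assumptions of the sketch, not of the general model class.
+
+ ```python
+ import numpy as np
+
+ def simulate_stationary_anm(parents, weights, T=200, noise_std=0.1, seed=0):
+     """parents[j]: parent indices of variable j in the summary graph;
+     weights[(i, j)]: linear effect of x_{t-1}^(i) on x_t^(j)."""
+     rng = np.random.default_rng(seed)
+     n = len(parents)
+     x = np.zeros((T, n))
+     x[0] = rng.normal(size=n)
+     for t in range(1, T):
+         for j in range(n):
+             mean = sum(weights[(i, j)] * x[t - 1, i] for i in parents[j])
+             x[t, j] = mean + noise_std * rng.normal()  # f^(j)(PA) + eps
+     return x
+
+ # Example summary graph: 1 -> 0 and 2 -> 1 (self-effects omitted for brevity).
+ series = simulate_stationary_anm(parents=[[1], [2], []],
+                                  weights={(1, 0): 0.8, (2, 1): -0.5})
+ ```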
43
+
44
+ Markov Switching Models (MSMs; (Hamilton, 1989)) extend time series modeling by introducing discrete latent variables $u_t \in \{1,\ldots,U\}$ that condition the autoregressive process at each time step t. For regime-dependent time series (Saggioro et al., 2020), the full-time graph $\mathcal{G}_{1:T}^{FT}$ is time-dependent, but causally stationary (Assaad et al., 2022) given the discrete latent variables $u_t, t \in \{1,\ldots,T\}$ . This leads to defining the regime-dependent graph, which is a set of graphs $\mathcal{G}_{1:U}^{RD} := \{\mathcal{G}_u^{RD} | 1 \leq u \leq U\}$ where $\mathcal{G}_u^{RD}$ encodes the causal effects of $x_t$ for $u_t = u$ . Our work utilises MSMs under the following key assumptions (m1-m3):
45
+
46
+ (m1) Conditional first-order Markov transitions, controlled by $u_{t-1}$ only: for any $t \in \{2, ..., T\}$ ,
47
+
48
+ $$p(x_t|x_{1:t-1}, u_{1:t-1}) = p(x_t|x_{t-1}, u_{t-1}).$$
49
+ (2)
50
+
51
+ (m2) Conditional stationarity: the transition distributions do not change during time: for any $u \in \{1, ..., U\}$ we have
52
+
53
+ <sup>&</sup>lt;sup>1</sup>The notation (t-1) indicates that the parents of variable j are considered at the previous time step.
54
+
55
+ ![](_page_2_Figure_1.jpeg)
56
+
57
+ Figure 2: (a) Full time graph $\mathcal{G}_{1:T}^{FT}$ of a sample considering our problem setting, (b) regime-dependent graph, and (c) conditional summary graph $\mathcal{G}_{1:K}^{CS}$ of the corresponding sample for K=2. We denote states using numbers $\{1,2\}$ inside each element. Different colors (red and blue) denote effects caused by different states.
58
+
59
+ for any $t \neq t'$ , any $\boldsymbol{\beta}, \boldsymbol{\gamma} \in \mathbb{R}^{Nd}$ such that
60
+
61
+ $$p(\mathbf{x}_t = \boldsymbol{\beta} | \mathbf{x}_{t-1} = \boldsymbol{\gamma}, u_{t-1} = u) = p(\mathbf{x}_{t'} = \boldsymbol{\beta} | \mathbf{x}_{t'-1} = \boldsymbol{\gamma}, u_{t'-1} = u).$$
62
+ (3)
63
+
64
+ (m3) Factorisation structure based on the ANM model with Gaussian noise:
+
+ $$p(\mathbf{x}_{t}|\mathbf{x}_{t-1}, u_{t-1}) = \prod_{j=1}^{N} \mathcal{N}\left(\mathbf{x}_{t}^{(j)}; f_{u_{t-1}}^{(j)}\left(\mathbf{x}_{t-1}^{(i)}|\mathbf{x}_{t-1}^{(i)} \in \mathbf{PA}_{u_{t-1}}^{(j)}(t-1)\right), \sigma^{2}\mathbf{I}\right),$$
65
+ (4)
66
+
67
+ for any $t \in \{2, ..., T\}$ , where $\mathbf{PA}_{u_{t-1}}^{(j)}(t-1)$ denotes the parents of variable j at time t-1, as described by $\mathcal{G}_{u_{t-1}}^{RD}$ .
68
+
69
+ We focus on nonstationary time series where each time step t is associated with N latent variables $\mathbf{s}_t = \{s_t^{(1)}, \dots, s_t^{(N)}\}$ . Each $s_t^{(i)} \in \{1, \dots, K\}$ controls the causal effects of $\mathbf{x}_t^{(i)}$ to future observations $\mathbf{x}_{t+1}$ . In other words, the causal influences of $\mathbf{x}_t^{(i)}$ are dynamically modified by the state $s_t^{(i)}$ . This can be viewed as a general MSM under assumptions (m1-m3) by setting $u_t = \varphi(s_t)$ with global states $U = K^N$ , for some injective $\varphi: \{1,\dots,K\}^N \to \{1,\dots,U\}$ . Fig. 2a exemplifies the full time graph illustrating these assumptions. Here, the causal effects vary across time as $\mathbf{s}_1 \neq \mathbf{s}_2 \neq \mathbf{s}_3$ , leading to nonstationarity. From a MSM view, we can extract a regime-dependent graph with $K^N$ regimes (Fig 2b).
70
+
71
+ Although the regime-dependent graph captures the general structure, it becomes inefficient as it scales exponentially with N. Furthermore, a summary graph (aggregating all the regimes) can be non-informative, due to inability to distinguish state-dependent effects. Instead, we define the conditional summary graph.
72
+
73
+ **Definition 3.1** (Conditional summary graph, first-order Markov setting). The *conditional summary graph* is a set of K graphs,
74
+
75
+ $\mathcal{G}^{CS}_{1:K} = \{\mathcal{G}^{CS}_k|1 \leq k \leq K\}$, where K is the number of possible state values. Each summary graph $\mathcal{G}^{CS}_k := \{\mathcal{V}, \mathcal{E}^{CS}_k\}$ has the same vertices $\mathcal{V} = \{1, \dots, N\}$. An edge $i \to j$ exists in $\mathcal{E}^{CS}_k$ if there exists t such that $s^{(i)}_t = k$ and $\boldsymbol{x}^{(i)}_t$ connects to $\boldsymbol{x}^{(j)}_{t+1}$ in $\mathcal{G}^{FT}_{1:T}$.
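+
+ Definition 3.1 translates directly into a simple aggregation over the lagged edges of the full time graph. The sketch below (illustrative data structures, not part of SDCI) collects edges $\boldsymbol{x}_t^{(i)} \to \boldsymbol{x}_{t+1}^{(j)}$ into K conditional summary graphs according to the state of the source variable:
+
+ ```python
+ def conditional_summary_graphs(full_time_edges, states, K):
+     """full_time_edges: iterable of (i, t, j) meaning x_t^(i) -> x_{t+1}^(j).
+     states[t][i]: state value in {1, ..., K} of variable i at time t.
+     Returns {k: set of (i, j) edges of the k-th conditional summary graph}."""
+     graphs = {k: set() for k in range(1, K + 1)}
+     for i, t, j in full_time_edges:
+         k = states[t][i]          # state of the source variable at time t
+         graphs[k].add((i, j))
+     return graphs
+
+ # Toy example in the spirit of Fig. 2 (K = 2, three variables).
+ edges = [(2, 1, 1), (0, 1, 1), (2, 2, 0)]
+ states = {1: {0: 2, 1: 1, 2: 1}, 2: {0: 1, 1: 2, 2: 2}}
+ print(conditional_summary_graphs(edges, states, K=2))
+ ```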
76
+
77
+ For simplicity, we omit self-connections in Fig. 2c (black edges in Fig. 2a), where we show the conditional summary graph extracted from the full time graph. For k=1 we observe $s_1^{(3)}=1$ , and a "red edge" connects $\boldsymbol{x}_1^{(3)}$ and $\boldsymbol{x}_2^{(2)}$ , placing a red edge in $\mathcal{E}_1^{CS}$ . Similar reasoning applies for $\mathcal{G}_2^{CS}$ . Compared to summary or regime-dependent graphs (Fig. 2b), the conditional summary graph offers a compact and informative representation of state-dependent causal structures. In summary, the SEM for conditionally stationary time series can be formalised as follows:
78
+
79
+ $$\mathbf{x}_{t}^{(j)} = f_{\mathbf{s}_{t-1}}^{(j)} \left( \mathbf{x}_{t-1}^{(i)} | \mathbf{x}_{t-1}^{(i)} \in \mathbf{PA}_{\mathbf{s}_{t-1}}^{(j)}(t-1) \right) + \varepsilon_{t}^{(j)}.$$
80
+ (5)
81
+
82
+ This introduces the following assumption:
83
+
84
+ (m4) Multi-state dependence: The parents $\mathbf{PA}_{s_{t-1}}^{(j)}(t-1)$ of variable j at time t are defined by $\mathcal{G}_{1:K}^{CS}$ as:
85
+
86
+ $$\begin{split} \mathbf{PA}_{s_{t-1}}^{(j)}(t-1) := \Big\{ \boldsymbol{x}_{t-1}^{(i)} | \\ j \in \mathcal{C}_k^{(i)}, k = s_{t-1}^{(i)}, 1 \leq i \leq N \Big\}. \end{split} \tag{6}$$
87
+
88
+ $\mathcal{C}_k^{(i)} \subseteq \mathcal{V}$ denotes the children of variable i in state k, specified by $\mathcal{G}_k^{CS}$. To illustrate, in Fig. 2a the parents of $\boldsymbol{x}_2^{(2)}$ are $\{\boldsymbol{x}_1^{(1)}, \boldsymbol{x}_1^{(2)}, \boldsymbol{x}_1^{(3)}\}$ as $2 \in \mathcal{C}_{s_1^{(1)}}^{(1)}$ and $2 \in \mathcal{C}_{s_1^{(3)}}^{(3)}$.
89
+
90
+ The conditional summary graph introduces linear scaling with K. However, indexing the SEM in Eq. (5) by $s_{t-1}$ requires $K^N$ functions. Taking inspiration from interactive systems (Kipf et al., 2018; Löwe et al., 2022), we propose a two-stage interaction and aggregation framework, formalised as the following assumption:
91
+
92
+ ![](_page_3_Figure_1.jpeg)
93
+
94
+ Figure 3: SDCI extracts the *labels of conditional effects* that describe state-dependent interactions in a sample.
95
+
96
+ (m5) Global function-type dependence: The effect between any pair of variables is determined by $n_{\epsilon}$ functional effects $\mathcal{F}:=\{f_0,\ldots,f_{n_{\epsilon}-1}\}$ , where $f_0(\cdot)=\mathbf{0}$ represents the absence of an edge. Each functional effect is state-dependent, i.e., the effects from variable i at time t are determined by its state $s_t^{(i)}$ . These effects are collected in the labels of conditional effects:
97
+
98
+ $$\mathcal{W} := \left\{ w_{i,j,k} \in \{0, \dots, n_{\epsilon} - 1\} \;\middle|\; 1 \le i, j \le N,\ 1 \le k \le K,\ w_{i,j,k} = 0 \iff j \notin \mathcal{C}_k^{(i)} \right\}, \quad (7)$$
100
+
101
+ where $w_{i,j,k}$ specifies the edge-type $i \to j$ at state k, associated to variable i. In the aggregation stage, the interactions are combined using a permutation invariant function g:
102
+
103
+ $$f_{\mathbf{s}_{t-1}}^{(j)}(\mathbf{x}_{t-1}) := g\left(\mathbf{x}_{t-1}^{(j)}, \left\{f_{e_i}\left(\mathbf{x}_{t-1}^{(i)}, \mathbf{x}_{t-1}^{(j)}\right) | i \neq j\right\}\right), \quad (8)$$
104
+
105
+ where $e_i = w_{i,j,s_{t-1}^{(i)}}$. The function g aggregates interactions to variable j, capturing complex state-dependent dynamics. This formulation effectively reduces the exponential complexity of conditionally stationary time series, and can be efficiently implemented with graph neural networks (GNNs) (Kipf et al., 2018). Furthermore, it aligns with well-established mechanisms in physical systems, such as aggregating forces from pair-wise interactions.
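+
+ A minimal PyTorch-style sketch of this two-stage interaction and aggregation step (Eq. 8) is given below, with the edge-type functions and the aggregator g implemented as small MLPs. It illustrates how the labels of conditional effects index the pairwise functions through the source states; it is not the exact SDCI architecture, which additionally batches these computations with GNN message passing.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class StateDependentStep(nn.Module):
+     """One transition x_{t-1} -> predicted mean of x_t under Eq. (8)."""
+
+     def __init__(self, n_vars, d, n_edge_types, hidden=64):
+         super().__init__()
+         # f_0 is the absent edge (zero effect); learn f_1, ..., f_{n_edge_types-1}.
+         self.edge_fns = nn.ModuleList([
+             nn.Sequential(nn.Linear(2 * d, hidden), nn.ReLU(), nn.Linear(hidden, d))
+             for _ in range(n_edge_types - 1)])
+         self.agg = nn.Sequential(nn.Linear(2 * d, hidden), nn.ReLU(), nn.Linear(hidden, d))
+         self.n_vars = n_vars
+
+     def forward(self, x, states, W):
+         # x: (n_vars, d); states[i]: state of variable i in {0, ..., K-1};
+         # W: (n_vars, n_vars, K) integer labels of conditional effects.
+         out = []
+         for j in range(self.n_vars):
+             msgs = torch.zeros(x.shape[1])
+             for i in range(self.n_vars):
+                 if i == j:
+                     continue
+                 e = int(W[i, j, states[i]])  # edge type selected by the source state
+                 if e > 0:                    # e == 0 encodes "no edge"
+                     msgs = msgs + self.edge_fns[e - 1](torch.cat([x[i], x[j]]))
+             out.append(self.agg(torch.cat([msgs, x[j]])))
+         return torch.stack(out)
+
+ step = StateDependentStep(n_vars=3, d=2, n_edge_types=3)
+ x_mean = step(torch.randn(3, 2), states=[0, 1, 0], W=torch.randint(0, 3, (3, 3, 2)))
+ ```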
106
+
107
+ We propose State-Dependent Causal Inference (SDCI), a probabilistic approach to infer the underlying causal structure from time series. Given a dataset $\mathcal{D}$ , each sample $x_{1:T} \sim \mathcal{D}$ is driven by $\mathcal{W}$ , which may differ across samples. SDCI models the joint dynamics of states, edge-types, and observations (Fig. 3), and builds on graph neural networks (GNNs) and interactive systems (Kipf et al., 2018).
108
+
109
+ **Generative model.** The joint distribution for conditionally stationary time series is dependent on $s_{1:T}$, $\mathcal{W}$, and $\psi := \{\psi_w, \psi_x, \psi_s\}$.
110
+
111
+ For observed states, we define:
112
+
113
+ $$p_{\eta}(\boldsymbol{x}_{1:T}, \mathcal{W}|\boldsymbol{s}_{1:T}) = p_{\eta}(\boldsymbol{x}_{1:T}|\boldsymbol{s}_{1:T}, \mathcal{W})p_{\eta}(\mathcal{W}), \quad (9)$$
114
+
115
+ with a factorised prior on the edge labels $p_{\psi_w}(\mathcal{W}) = \prod_{k=1}^K \prod_{ij} p_{\psi_w}(w_{ijk})$ . We can guide $\psi_w$ through domain knowledge (e.g. sparsity). Given $\mathcal{W} \sim p_{\psi_w}(\mathcal{W})$ and $s_{1:T}$ , a sequence $x_{1:T}$ is generated as
116
+
117
+ $$p_{\psi_x}(\boldsymbol{x}_{1:T}|\boldsymbol{s}_{1:T},\mathcal{W}) = \prod_{t=0}^{T-1} \prod_{j=1}^{N} p_{\psi_x}(\boldsymbol{x}_{t+1}^{(j)}|\boldsymbol{x}_t,s_t,\mathcal{W}).$$
118
+
119
+ To compute $p_{\psi_x}(\boldsymbol{x}_{t+1}^{(j)}|\boldsymbol{x}_t,\boldsymbol{s}_t,\mathcal{W})$ , the model queries edgetypes $e_t^{(ij)} = w_{ijk'}$ for $s_t^{(i)} = k'$ , retrieves pairwise interactions $f_e(\boldsymbol{x}_t^{(i)},\boldsymbol{x}_t^{(j)})$ , and aggregates them:
120
+
121
+ $$\mathbf{h}_{t}^{(ij)} = \sum_{e>0} \mathbf{1}_{\left(e_{t}^{(ij)}=e\right)} f_{e}(\mathbf{x}_{t}^{(i)}, \mathbf{x}_{t}^{(j)}),$$
122
+
123
+ $$\tilde{\mathbf{x}}_{t+1}^{(j)} = \mathbf{x}_{t}^{(j)} + g\left(\sum_{i\neq j} \mathbf{h}_{t}^{(ij)}, \mathbf{x}_{t}^{(j)}\right), \quad (10)$$
124
+
125
+ where $\mathcal{F}:=\{f_e\}_{e=1}^{n_\epsilon-1}$ and g are parametrisable functions, similar to Eq. (8). $\tilde{\boldsymbol{x}}_{t+1}^{(j)}$ denotes the mean of $\boldsymbol{x}_{t+1}^{(j)}$ , which is Gaussian distributed with covariance $\sigma^2 I$ . For latent states with influence from $\boldsymbol{x}_{1:T}$ (determined and recurrent cases; Figs. 1b, 1c), the joint distribution extends to:
126
+
127
+ $$p_{\psi}(\boldsymbol{x}_{1:T}, \boldsymbol{s}_{1:T}, \mathcal{W}) = p_{\psi_w}(\mathcal{W}) \prod_{t=1}^{T} p_{\psi_x}(\boldsymbol{x}_t | \boldsymbol{x}_{t-1}, \boldsymbol{s}_{t-1}, \mathcal{W})\, p_{\psi_s}(\boldsymbol{s}_t | \boldsymbol{x}_{t:t-L_x}, \boldsymbol{s}_{t-1:t-L_s}),$$
130
+
131
+ where $L_x$ and $L_s$ are the maximum lags. The determined case fixes $L_x=0$ and $L_s=0$ ; and we assume the states are autonomous with shared parameters $\psi_s$ :
132
+
133
+ $$p_{\psi_s}(\mathbf{s}_t|\mathbf{x}_{t:t-L_x}, s_{t-1:t-L_s}) = \prod_{i=1}^{N} p_{\psi_s}(\mathbf{s}_t^{(i)}|\mathbf{x}_{t:t-L_x}^{(i)}, \mathbf{s}_{t-1:t-L_s}^{(i)}).$$
134
+ (11)
135
+
136
+ **Inference.** Building on VAE-based approaches (Kingma & Welling, 2014; Löwe et al., 2022), we introduce a variational posterior parametrised by $\phi := \{\phi_w, \phi_s\}$ ; which approximates the posterior over $\mathcal{W}$ , and $s_{1:T}$ as follows:
137
+
138
+ $$q_{\phi}(\mathcal{W}, \mathbf{s}_{1:T}|\mathbf{x}_{1:T}) = q_{\phi_w}(\mathcal{W}|\mathbf{x}_{1:T})\,q_{\phi_s}(\mathbf{s}_{1:T}|\mathbf{x}_{1:T}).$$
139
+ (12)
140
+
141
+ where $q_{\phi_w}$ is factorised across edges
142
+
143
+ $$q_{\phi_w}(w_{ijk}|\boldsymbol{x}_{1:T}) = \operatorname{softmax}(\boldsymbol{\phi}_{ijk}/\tau),$$
144
+
145
+ $$\boldsymbol{\phi}_{ij} = f_{\phi_w}(\boldsymbol{x}_{1:T})_{ij} \in \mathbb{R}^{K \times n_{\epsilon}}, \quad (13)$$
146
+
147
+ The function $\phi_{ij}$ extracts embeddings for any pair $i \to j$ that represents the state-dependent causal interaction, and the architecture of $f_{\phi_w}(\boldsymbol{x}_{1:T})$ is based on (Chen et al., 2021). See Appendix B.3 for details.
148
+
149
+ In the determined case, exact inference on the states is tractable (see Appendix B.2), and we set $q_{\phi_s}(\boldsymbol{s}_{1:T}|\boldsymbol{x}_{1:T}) = p_{\psi_s}(\boldsymbol{s}_{1:T}|\boldsymbol{x}_{1:T})$.
150
+
151
+ $$q_{\phi_s}(s_t^{(i)}|\boldsymbol{x}_t^{(i)}) = \operatorname{softmax}(\hat{s}_t^{(i)}/\gamma), \ \hat{s}_t^{(i)} = f_{\phi_s}(\boldsymbol{x}_t^{(i)}), \ (14)$$
152
+ In the recurrent case, $q_{\phi_s}(\boldsymbol{s}_{1:T}|\boldsymbol{x}_{1:T}) = \prod_{t=1}^T q_{\phi_s}(\boldsymbol{s}_t|\boldsymbol{x}_{1:T})$ is implemented via a bidirectional RNN-GNN that approximates smoothing (details in Appendix B.3).
153
+
154
+ **Objective.** SDCI optimises the ELBO (Kingma & Welling, 2014) (see Appendix B.1 for derivation):
155
+
156
+ $$\log p_{\psi}(\boldsymbol{x}_{1:T}) \geq -KL\left(q_{\phi_{w}}(\mathcal{W}|\boldsymbol{x}_{1:T}) \middle| p_{\psi_{w}}(\mathcal{W})\right)$$
157
+
158
+ $$-\sum_{t=1}^{T} KL\left(q_{\phi_{s}}(\boldsymbol{s}_{t}|\boldsymbol{x}_{1:T})\middle| p_{\psi_{s}}(\boldsymbol{s}_{t}|\boldsymbol{x}_{t:t-L_{x}},\boldsymbol{s}_{t-1:t-L_{s}})\right)$$
159
+
160
+ $$+\mathbb{E}_{q_{\phi}(\mathcal{W},\boldsymbol{s}_{1:T}|\boldsymbol{x}_{1:T})}\left[\log p_{\psi_{x}}(\boldsymbol{x}_{1:T}|\boldsymbol{s}_{1:T},\mathcal{W})\right], \quad (15)$$
161
+
162
+ where reparametrisation tricks (Jang et al., 2017) ensure backpropagation for discrete samples $\mathcal{W}, s_{1:T} \sim q_{\phi}(\mathcal{W}, s_{1:T} | x_{1:T})$ . To reduce variance, we use straight-through Gumbel-softmax and fix gradients to pass through unperturbed marginals (Ahmed et al., 2023). During training, Eq. (10) becomes:
163
+
164
+ $$\mathbf{h}_{t}^{(ij)} = \sum_{k=1}^{K} \mathbf{1}_{(s_{t}^{(i)}=k)} \sum_{e>0} \mathbf{1}_{(w_{ijk}=e)} f_{e}(\mathbf{x}_{t}^{(i)}, \mathbf{x}_{t}^{(j)}), \quad (16)$$
165
+
166
+ and we optimise $\phi$ and $\psi$ using mini-batch gradient ascent.
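+
+ The relaxed sampling of the discrete edge labels can be written in a few lines; the sketch below uses `torch.nn.functional.gumbel_softmax` on the encoder logits of Eq. (13) with straight-through estimation, and leaves out the lower-variance gradient correction through unperturbed marginals (Ahmed et al., 2023):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def sample_edge_labels(logits, tau=0.5, hard=True):
+     """logits: (N, N, K, n_edge_types) posterior parameters phi_{ijk}.
+     Returns (relaxed) one-hot samples of w_{ijk} that admit backpropagation."""
+     return F.gumbel_softmax(logits, tau=tau, hard=hard, dim=-1)
+
+ # Example: 5 variables, K = 2 states, 3 edge types (including "no edge").
+ logits = torch.randn(5, 5, 2, 3, requires_grad=True)
+ w_sample = sample_edge_labels(logits)  # straight-through one-hot samples
+ w_sample.sum().backward()              # stand-in loss; gradients reach the logits
+ ```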
167
+
168
+ We provide identifiability guarantees for the labels of conditional effects $\mathcal{W}$ , driving conditionally stationary time series under assumptions (m1-m5). A detailed analysis of identifiability for regime-dependent graphs under (m1-m3) and conditional summary graphs under (m1-m4) is provided in Appendix A. We first define partial identifiability for $\mathcal{W}$ .
169
+
170
+ **Definition 4.1** (Partial identifiability of conditional effects). Given observations $x_{1:T}$ and a family of models $\mathcal{M}$ satisfying (m1-m5), we say $\mathcal{M}$ is partially identifiable with respect to the labels of conditional effects if for any $p, \hat{p} \in \mathcal{M}$ , with corresponding $\mathcal{W}$ and $\widehat{\mathcal{W}}$ such that $p(x_{1:T}) = \hat{p}(x_{1:T})$ ; $K = \hat{K}$ and there exist permutations $\pi \in S_{n_{\epsilon}}$ and $\sigma^{(i)} \in S_K$ such that for any $i, j \in \{1, \dots, N\}$ and $k \in \{1, \dots, K\}$ :
171
+
172
+ $$w_{i,j,k} = \hat{w}_{i,j,\sigma^{(i)}(k)} \iff w_{i,j,k} = 0,$$
173
+ (17)
174
+
175
+ $$w_{i,j,k} = \pi \left( \hat{w}_{i,j,\sigma^{(i)}(k)} \right) \iff w_{i,j,k} \neq 0. \tag{18}$$
176
+
177
+ Remark 4.2. Mixture models can only be identified up to permutations (Yakowitz & Spragins, 1968). Contrary to previous work (Gassiat et al., 2016; Hälvä & Hyvärinen, 2020; Song et al., 2024), our results do not require knowledge of the number of components K. Eq. (17) establishes equivalences of $\mathcal{G}_{1:K}^{CS}$ up to element-wise permutations $\sigma^{(i)}$ for outgoing edges $\{C_1^{(i)},\ldots,C_K^{(i)}\}$. Eq. (18) ensures permutation equivalence in edge labels $1,\ldots,n_\epsilon$.
178
+
179
+ This partial identifiability definition excludes the state distribution, as no restrictions on the state dynamics are imposed. It cannot generally be achieved under (m1-m5) without further assumptions. However, our main focus in this work is to recover state-dependent structures, and we leave state distribution identifiability for future work. Below, we list sufficient conditions for identifiability.
180
+
181
+ (a1) Unique indexing of outgoing structure. For each state $s_{t-1}^{(i)} \in \{1,...,K\}^N$ , the graph representing the direct causes of i is unique. That is, for any $i \in \{1,...,N\}$ :
182
+
183
+ $$\forall k, k' \in \{1, ..., K\}, k \neq k' \quad \Leftrightarrow \quad \mathcal{C}_k^{(i)} \neq \mathcal{C}_{k'}^{(i)}. \quad (19)$$
184
+
185
+ This is stronger than the typical unique indexing assumption for mixture models (Balsells-Rodas et al., 2024), where:
186
+
187
+ $$p(\mathbf{x}_t|\mathbf{x}_{t-1},\mathbf{s}_{t-1}=(k_1,\ldots,k_N)) \neq p(\mathbf{x}_t|\mathbf{x}_{t-1},\mathbf{s}_{t-1}=(k'_1,\ldots,k'_N)),$$
+
+ for $(k_1, \ldots, k_N) \neq (k'_1, \ldots, k'_N)$.
192
+
193
+ (a2) Unique indexing of function types. Given (m1-m5):
194
+
195
+ (a2.1): For any $j \in \{1,\ldots,N\}$ , and given $g(\boldsymbol{x}^{(j)},\{\boldsymbol{h}^{(i,j)}|i\neq j\})$ , where $\boldsymbol{h}^{(i,j)}=f_e(\boldsymbol{x}^{(i)},\boldsymbol{x}^{(j)})$ for some $e\in\{0,\ldots,n_\epsilon-1\}$ , the partial derivative $\frac{\partial g}{\partial \boldsymbol{h}^{(i,j)}}$ is non-zero almost everywhere for any $i\in\{1,\ldots,N\}$ .
196
+
197
+ (a2.2): Edge-types differ almost everywhere: $f_0 := \mathbf{0}$, and for $e \neq e' \in \{0, \dots, n_{\epsilon} - 1\}$, the following set has zero measure:
198
+
199
+ $$\mathcal{X}_{e,e'} := \left\{ \boldsymbol{x}_1 \in \mathbb{R}^d : \exists \boldsymbol{x}_2 \in \mathbb{R}^d, \frac{\partial f_e(\boldsymbol{x}_1, \boldsymbol{x}_2)}{\partial \boldsymbol{x}_1} = \frac{\partial f_{e'}(\boldsymbol{x}_1, \boldsymbol{x}_2)}{\partial \boldsymbol{x}_1} \right\}. \quad (20)$$
200
+
201
+ These assumptions ensure pairwise interactions are not cancelled during aggregation, and the edge-types remain sufficiently distinct. For example, SDCI naturally satisfies (a2.1) through GNN message passing. Additionally, implementing edge-type functions as analytic functions guarantees (a2.2). We now state the following identifiability result:
202
+
203
+ **Theorem 4.3.** Conditionally stationary time series satisfying (m1-m5) are partially identifiable with respect to conditional effects (Def. 4.1), if they meet assumptions of unique indexing of (a1) outgoing structure and (a2) function types.
204
+
205
+ *Proof sketch.* The full proof is depicted in Appendix A.5, and it can be divided into 3 major steps.
206
+
207
+ (i) Write $p(\boldsymbol{x}_t|\boldsymbol{x}_{1:t-1})$ as a mixture model. Under (m1-m3) and (a1-a2), the regime-dependent graph is identifiable up to a permutation $\tau \in S_{K^N}$ (Theorem A.6). Under state dependencies on $\boldsymbol{x}_{1:t-1}$ , the key is to show that the permutation equivalence of the transition distribution is preserved almost everywhere due to (a1-a2), no matter the overlap on $\boldsymbol{x}_{t-1}$ .
208
+
209
+ - (ii) Using (m4) and (a1), regime-dependent graph identifiability transfers to conditional summary graph identifiability (Theorem A.7).
210
+ - (iii) By (m5) and (a1-a2), partial derivatives in the aggregation step reveal permutation equivalence on the edge-types (Corollary A.8).
211
+
212
+ While several prior works (Hälvä & Hyvärinen, 2020; Song et al., 2024; Balsells-Rodas et al., 2024) introduce nonstationarity via regime switching, our approach offers three key advantages:
213
+
214
+ - Unknown number of state values K: Most prior work, including HMM-ICA (Hälvä & Hyvärinen, 2020) and CtrlNS (Song et al., 2024), assumes that the number of latent regimes is known. Our approach leverages Yakowitz & Spragins (1968), which enables identifiability without knowing the true number of regimes. This allows identification of K by model selection (McLachlan & Peel, 2000) (assuming convergence to the MLE), which is not theoretically supported when K is required to be known.
215
+ - No assumption on state dependency: Prior works typically assume either the "states independent" case (Hälvä & Hyvärinen, 2020; Balsells-Rodas et al., 2024; Rahmani & Frossard, 2025), or the "states determined" case under strong structural constraints (Song et al., 2024). Our identifiability results make no assumptions about state dependencies, allowing feedback from observations.
216
+ - Regime-dependent identifiability: Our theoretical results (Thms. A.6-A.7; Cor. A.8) build from general Markov Switching Models to our specific implementation, SDCI. Theorem A.6 introduces a novel proof strategy that extends from Yakowitz & Spragins (1968), providing a general theoretical foundation for regime-dependent causal discovery (Saggioro et al., 2020).
217
+
218
+ As mentioned, SDCI considers samples $\boldsymbol{x}_{1:T} \sim \mathcal{D}$, each generated under different structures. The presented results remain valid, as identifiability holds even when considering the distribution of a single sample. However, the consistency of SDCI, along with other approaches within the same family, such as ACD (Löwe et al., 2022), remains an open challenge. Recent works (Gong et al., 2023; Geffner et al., 2024) have shown that the validity of variational objectives for causal discovery relies on strong assumptions, including universal approximation properties of $q$, absence of model misspecification, and the infinite data limit. These works assume fixed structures across samples, where SDCI can be adapted by setting $q_{\phi_w}(\mathcal{W}|\boldsymbol{x}_{1:T}) = q_{\phi_w}(\mathcal{W})$.
219
+
220
+ # Method
221
+
222
+ We review the main model assumptions that characterise SDCI. We start with the assumptions that apply to general MSMs, where we have a single global state at each time step t. In this case we denote the global state as $u_t := \varphi(\boldsymbol{s}_t) \in \{1, \dots, U\}$ for some injective $\varphi : \{1, \dots, K\}^N \to \{1, \dots, U\}$ (we use $\boldsymbol{s}_t$ when referring to multiple states in SDCI).
223
+
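+ The global-state mapping $\varphi$ is used repeatedly below; a minimal sketch of one valid choice of injective $\varphi$ (a base-$K$ positional encoding, which is an illustrative assumption rather than a prescribed construction) is:
+
+ ```python
+ import itertools
+
+ def phi(states, K):
+     """Map a state tuple (k_1, ..., k_N), k_i in {1, ..., K}, to a global state u in {1, ..., K**N}."""
+     u = 0
+     for k in states:
+         u = u * K + (k - 1)   # base-K encoding
+     return u + 1
+
+ def phi_inverse(u, K, N):
+     """Recover the state tuple from the global state index."""
+     u -= 1
+     ks = []
+     for _ in range(N):
+         ks.append(u % K + 1)
+         u //= K
+     return tuple(reversed(ks))
+
+ # Sanity check: phi is a bijection between {1,...,K}^N and {1,...,K^N}.
+ K, N = 3, 2
+ all_states = list(itertools.product(range(1, K + 1), repeat=N))
+ assert sorted(phi(s, K) for s in all_states) == list(range(1, K**N + 1))
+ assert all(phi_inverse(phi(s, K), K, N) == s for s in all_states)
+ ```
+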
224
+ (m1) Conditional first-order Markov transitions controlled by $u_{t-1}$ only:
225
+
226
+ $$p(\mathbf{x}_t|\mathbf{x}_{1:t-1}, u_{1:t-1}) = p(\mathbf{x}_t|\mathbf{x}_{t-1}, u_{t-1}), \quad \forall t \in \{2, ..., T\}.$$
227
+ (21)
228
+
229
+ (m2) Conditional stationarity: the conditional transition distribution does not change over time: for any $u \in \{1, ..., U\}$ and any $t \neq t'$, for any $\beta, \gamma \in \mathbb{R}^{Nd}$,
230
+
231
+ $$p(\mathbf{x}_{t} = \boldsymbol{\beta} | \mathbf{x}_{t-1} = \boldsymbol{\gamma}, u_{t-1} = u) = p(\mathbf{x}_{t'} = \boldsymbol{\beta} | \mathbf{x}_{t'-1} = \boldsymbol{\gamma}, u_{t'-1} = u). \tag{22}$$
232
+
233
+ (m3) Factorisation structure based on the ANM model with Gaussian noise:
234
+
235
+ $$p(\boldsymbol{x}_{t}|\boldsymbol{x}_{t-1}, u_{t-1}) = \prod_{j=1}^{N} \mathcal{N}\left(\boldsymbol{x}_{t}^{(j)}; f_{u_{t-1}}^{(j)}\left(\boldsymbol{x}_{t-1}^{(i)}|\boldsymbol{x}_{t-1}^{(i)} \in \mathbf{PA}_{u_{t-1}}^{(j)}(t-1)\right), \sigma^{2}\mathbf{I}\right), \quad \forall t \in \{2, ..., T\}.$$
236
+ (23)
237
+
238
+ Again, we formalise assumptions specific to *conditionally stationary* data, where the total state configuration is denoted by grouping all the individual states $s_t = \{s_t^{(1)}, \dots, s_t^{(N)}\} \in \{1, \dots, K\}^N$ . Note that this defines a specific structure in a general MSM with $U := K^N$ global states.
239
+
240
+ (m4) Multi-state dependence: For each variable $\boldsymbol{x}_t^{(i)}$, $i \in \{1, \dots, N\}$, at time $t \in \{1, \dots, T\}$, there exists a state variable $s_t^{(i)} \in \{1, \dots, K\}$; we group these as $\boldsymbol{s}_{t-1} := (s_{t-1}^{(1)}, \dots, s_{t-1}^{(N)}) \in \{1, \dots, K\}^N$. For any variable $j \in \{1, \dots, N\}$, the parent set $\operatorname{PA}_{\boldsymbol{s}_{t-1}}^{(j)}(t-1)$ is defined as follows:
241
+
242
+ $$\mathbf{PA}_{s_{t-1}}^{(j)}(t-1) := \left\{ \mathbf{x}_{t-1}^{(i)} | j \in \mathcal{C}_k^{(i)}, k = s_{t-1}^{(i)}, 1 \le i \le N \right\}.$$
243
+ (24)
244
+
245
+ where $C_k^{(i)} \subseteq \mathcal{V}$ denotes the *outgoing edge structure* (children) of variable $i \in \{1, ..., N\}$, corresponding to a state $k \in \{1, ..., K\}$.
246
+
247
+ (m5) Global function-type dependence: We allow $n_{\epsilon}$ functional effects (including the no-edge effect). We define the labels of conditional effects as
248
+
249
+ $$W := \left\{ w_{i,j,k} \in \{0, \dots, n_{\epsilon} - 1\} : i, j \in \{1, \dots, N\}, k \in \{1, \dots, K\}, w_{i,j,k} = 0 \iff j \notin \mathcal{C}_k^{(i)} \right\}, \tag{25}$$
250
+
251
+ where each $w_{i,j,k}$ denotes an edge-type from variable i to j at state k, associated to variable i. Each edge-type corresponds to a different function over the total function interactions $\mathcal{F} = \{f_0, \dots, f_{n_{\epsilon}-1}\}$ , where $f_0(\cdot) = \mathbf{0}$ from Eq. (25). Given a state configuration $\mathbf{s}_{t-1}, f_{\mathbf{s}_{t-1}}^{(j)} : \mathbb{R}^d \to \mathbb{R}^d$ in Eq. (23) is defined as follows for any $\mathbf{x}_{t-1} \in \mathbb{R}^{Nd}$ :
252
+
253
+ $$f_{\boldsymbol{s}_{t-1}}^{(j)}(\boldsymbol{x}_{t-1}) := g\left(\boldsymbol{x}_{t-1}^{(j)}, \left\{f_{e_i}\left(\boldsymbol{x}_{t-1}^{(i)}, \boldsymbol{x}_{t-1}^{(j)}\right) | i \neq j\right\}\right), \text{ where } e_i = w_{i,j,s_{t-1}^{(i)}} \in \{0, \dots, n_{\epsilon}-1\},$$
254
+ (26)
255
+
256
+ where $e_i, i \in \{1, ..., N\}$ denotes the interaction type between variables i and j, and the pairwise interactions are aggregated by a permutation-invariant function g.
257
+
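+ To make (m3)-(m5) concrete, the sketch below simulates one transition step of a conditionally stationary series under these assumptions. All functional forms (linear edge functions, a sum aggregator for g, randomly drawn labels) are illustrative choices, not necessarily those used by SDCI:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ N, d, K, n_eps, sigma = 3, 2, 2, 3, 0.1
+
+ # Labels of conditional effects W (m5): W[i, j, k] = 0 means no edge i -> j when variable i is in state k.
+ W = rng.integers(0, n_eps, size=(N, N, K))
+
+ # Illustrative edge-type functions f_e (f_0 is the zero function).
+ A = rng.normal(size=(n_eps, d, d))
+ def f_edge(e, x_i, x_j):
+     return np.zeros(d) if e == 0 else A[e] @ x_i
+
+ def transition_mean(j, x_prev, s_prev):
+     """f_{s_{t-1}}^{(j)}: aggregate pairwise messages with a permutation-invariant g (here, a sum)."""
+     messages = [f_edge(W[i, j, s_prev[i]], x_prev[i], x_prev[j]) for i in range(N) if i != j]
+     return x_prev[j] + np.sum(messages, axis=0)
+
+ # One step under (m3): additive Gaussian noise around the state-dependent mean.
+ x_prev = rng.normal(size=(N, d))
+ s_prev = rng.integers(0, K, size=N)   # state of each variable at t-1 (0-indexed here)
+ x_next = np.stack([transition_mean(j, x_prev, s_prev) + sigma * rng.normal(size=d) for j in range(N)])
+ print(x_next.shape)  # (N, d)
+ ```
+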
258
+ We wish to analyse the identifiability of the labels W for SDCI (m1-m5) in the case of unobserved states. However, we first need to develop identifiability for regime-dependent and state-dependent structures, i.e. $\mathcal{G}_{1:U}^{RD}$ and $\mathcal{G}_{1:K}^{CS}$ respectively. We define identifiability of the regime-dependent structures for a general MSM (m1-m3).
259
+
260
+ **Definition A.1** (Partial identifiability of regime-dependent graph). Given observations $\mathbf{x}_{1:T}$ and a family of models $\mathcal{M}$ satisfying (m1-m3), we say $\mathcal{M}$ is partially identifiable with respect to its regime-dependent graph if for any $p, \hat{p} \in \mathcal{M}$, with corresponding $\mathbf{PA}_{u}^{(i)}(t-1)$ and $\widehat{\mathbf{PA}}_{\hat{u}}^{(i)}(t-1)$ for any $i \in \{1,\ldots,N\}$, $u \in \{1,\ldots,U\}$, and $\hat{u} \in \{1,\ldots,\hat{U}\}$, such that $p(\mathbf{x}_{1:T}) = \hat{p}(\mathbf{x}_{1:T})$, we have $U = \hat{U}$ and there exists a permutation $\tau \in S_{U}$ such that $\mathbf{PA}_{u}^{(i)}(t-1) = \widehat{\mathbf{PA}}_{\tau(u)}^{(i)}(t-1)$ for $i \in \{1,\ldots,N\}$, and $u \in \{1,\ldots,U\}$.
261
+
262
+ We also define the identifiability of the outgoing edges $C_k^{(i)}$ in terms of $\mathcal{G}_{1:K}^{CS}$ , given a model with N state variables and multi-state dependence (m1-m4).
263
+
264
+ **Definition A.2** (Partial identifiability of outgoing edge structure). Given observations $\mathbf{x}_{1:T}$ and a family of models $\mathcal{M}$ satisfying (m1-m4), we say $\mathcal{M}$ is partially identifiable with respect to the outgoing edge structure if for any $p, \hat{p} \in \mathcal{M}$, with corresponding $\mathcal{C}_k^{(i)}$ and $\widehat{\mathcal{C}}_{\hat{k}}^{(i)}$ for any $i \in \{1, \dots, N\}$, $k \in \{1, \dots, K\}$, and $\hat{k} \in \{1, \dots, \hat{K}\}$, such that $p(\mathbf{x}_{1:T}) = \hat{p}(\mathbf{x}_{1:T})$, we have $K = \hat{K}$ and there exist permutations $\sigma^{(i)} \in S_K$ such that $\mathcal{C}_k^{(i)} = \widehat{\mathcal{C}}_{\sigma^{(i)}(k)}^{(i)}$ for $i \in \{1, \dots, N\}$, and $k \in \{1, \dots, K\}$.
265
+
266
+ Finally, we further define identifiability of the labels of conditional effects under global function-type dependence (m5).
267
+
268
+ **Definition A.3** (Partial identifiability of conditional effects). Given observations $\boldsymbol{x}_{1:T}$ and a family of models $\mathcal{M}$ satisfying (m1-m5), we say $\mathcal{M}$ is partially identifiable with respect to the labels of conditional effects if for any $p, \hat{p} \in \mathcal{M}$, with corresponding $\mathcal{W}$ and $\widehat{\mathcal{W}}$, such that $p(\boldsymbol{x}_{1:T}) = \hat{p}(\boldsymbol{x}_{1:T})$, we have $K = \hat{K}$ and there exist permutations $\pi \in S_{n_{\epsilon}}$ and $\sigma^{(i)} \in S_{K}$ such that for any $i, j \in \{1, \dots, N\}$ and $k \in \{1, \dots, K\}$:
269
+
270
+ $$w_{i,j,k} = \hat{w}_{i,j,\sigma^{(i)}(k)} \iff w_{i,j,k} = 0, \tag{27}$$
271
+
272
+ $$w_{i,j,k} = \pi \left( \hat{w}_{i,j,\sigma^{(i)}(k)} \right) \iff w_{i,j,k} \neq 0. \tag{28}$$
273
+
274
+ As mentioned in the main text, these definitions consider identifiability to be *partial*, since they do not cover the structure capturing interactions between $x_{1:T}$ and $s_{1:T}$, nor those within $s_{1:T}$.
275
+
276
+ The defined partial graph identifiability cannot be achieved without further assumptions on (m1-m5). Below we list sufficient conditions for such identifiability, some of which we already introduced in the main text.
277
+
278
+ (a1) Unique indexing of outgoing structure. Here we assume that for each state $s_{t-1}^{(i)} \in \{1, ..., K\}$, $i \in \{1, ..., N\}$, the underlying graph representing the direct causes from variable i to the rest of the variables is unique:
279
+
280
+ $$\forall k, k' \in \{1, ..., K\}, k \neq k' \quad \Leftrightarrow \quad \mathcal{C}_k^{(i)} \neq \mathcal{C}_{k'}^{(i)}, \quad \forall i \in \{1, ..., N\}.$$
281
+ (29)
282
+
283
+ This assumption has an equivalent in MSMs, i.e. when considering all state variables $s_{t-1} \in \{1, ..., K\}^N$ as a single global state $u_{t-1} \in \{1, ..., K^N\}$.
284
+
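+ As an illustrative example of how (a1) transfers to global states, take $N = 2$ variables with $K = 2$ states each and hypothetical outgoing structures $\mathcal{C}_1^{(1)} = \{2\}$, $\mathcal{C}_2^{(1)} = \emptyset$, $\mathcal{C}_1^{(2)} = \{1\}$, $\mathcal{C}_2^{(2)} = \emptyset$, which satisfy (a1). The $U = K^N = 4$ global states then induce the four distinct regime graphs $\{1\to 2, 2\to 1\}$, $\{1\to 2\}$, $\{2\to 1\}$ and $\emptyset$, so no two global states share a graph, which is exactly condition (b1) below.
+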
285
+ (b1) Unique indexing of regime-dependent graph structure. For each possible state $u_{t-1} \in \{1, \dots, U\}$ the underlying graph representing the direct causes between variables is unique. In other words, the resulting U graphs are different. This implies that the following holds:
286
+
287
+ $$\forall u, u' \in \{1, ..., U\}, u \neq u' \quad \Leftrightarrow \quad \exists j \in \{1, ..., N\} \text{ s.t. } \mathbf{PA}_{u_{t-1}=u}^{(j)}(t-1) \neq \mathbf{PA}_{u_{t-1}=u'}^{(j)}(t-1). \tag{30}$$
288
+
289
+ Assume a bijective mapping $\varphi:\{1,\ldots,K\}^N\to\{1,\ldots,K^N\}$ such that we can map a specific state configuration $(k_1,\ldots,k_N)$ to an assigned value $k_{1:N}\in\{1,\ldots,K^N\}$, that is, $k_{1:N}=\varphi(k_1,\ldots,k_N)$. With this view, we can write the state variables in terms of one global state variable with $K^N$ regimes. We connect assumption (a1) to assumption (b1) defined for MSMs as follows.
290
+
291
+ **Proposition A.4.** Assumption (a1) implies assumption (b1).
292
+
293
+ *Proof.* Recall the definition of $PA_{s_{t-1}}^{(j)}$ :
294
+
295
+ $$\mathbf{PA}_{s_{t-1}}^{(j)}(t-1) := \{ \mathbf{x}_{t-1}^{(i)} \mid j \in \mathcal{C}_k^{(i)}, k = s_{t-1}^{(i)}, 1 \le i \le N \}.$$
296
+ (31)
297
+
298
+ Given two state configurations $(k_1,\ldots,k_N), (k'_1,\ldots,k'_N) \in \{1,\ldots,K\}^N$ that differ in at least one $k_i, i \in \{1,\ldots,N\}$, under (a1) we have $\mathcal{C}_{k_i}^{(i)} \neq \mathcal{C}_{k'_i}^{(i)}$. Since these two sets differ, there exists (w.l.o.g.) $j \in \{1,\ldots,N\}$ such that $j \in \mathcal{C}_{k_i}^{(i)}$ and $j \notin \mathcal{C}_{k'_i}^{(i)}$. From Eq. (31), and by writing $\mathbf{PA}_{\boldsymbol{s}_{t-1}}^{(j)}(t-1)$ as an MSM with one global state $(u_t := \varphi(\boldsymbol{s}_t))$, we have
299
+
300
+ $$\exists j \in \{1, \dots, N\} \ s.t. \ \boldsymbol{x}_{t-1}^{(i)} \in \mathbf{PA}_{u_{t-1} = \varphi(k_1, \dots, k_N)}^{(j)}(t-1), \text{ and } \boldsymbol{x}_{t-1}^{(i)} \notin \mathbf{PA}_{u_{t-1} = \varphi(k'_1, \dots, k'_N)}^{(j)}(t-1) \\ \Longrightarrow \exists j \in \{1, \dots, N\} \ s.t. \ \mathbf{PA}_{u_{t-1} = \varphi(k_1, \dots, k_N)}^{(j)}(t-1) \neq \mathbf{PA}_{u_{t-1} = \varphi(k'_1, \dots, k'_N)}^{(j)}(t-1)$$
301
+ (32)
302
+
303
+ Therefore, assumption (b1) also holds.
304
+
305
+ We now revisit the functional assumptions introduced in the main text, and establish connections to MSMs.
306
+
307
+ - (a2) Unique indexing of function types. Under the model assumptions (m1-m5), we assume the following:
308
+ - (a2.1): For any $j \in \{1, \dots, N\}$, and given $g(\boldsymbol{x}^{(j)}, \{\boldsymbol{h}^{(i,j)}|i \neq j\})$, where $\boldsymbol{h}^{(i,j)} = f_e(\boldsymbol{x}^{(i)}, \boldsymbol{x}^{(j)})$ for some $e \in \{0, \dots, n_\epsilon - 1\}$, the partial derivative $\frac{\partial g}{\partial \boldsymbol{h}^{(i,j)}}$ is non-zero almost everywhere for any $i \in \{1, \dots, N\}$.
309
+ - (a2.2): Edge-types differ almost everywhere: $f_0 := \mathbf{0}$, and for $e \neq e'$, $e, e' \in \{0, \ldots, n_{\epsilon} - 1\}$, the following set has zero measure:
310
+
311
+ $$\mathcal{X}_{e,e'} := \left\{ \boldsymbol{x}_1 \in \mathbb{R}^d : \exists \boldsymbol{x}_2 \in \mathbb{R}^d, \quad \frac{\partial f_e(\boldsymbol{x}_1, \boldsymbol{x}_2)}{\partial \boldsymbol{x}_1} = \frac{\partial f_{e'}(\boldsymbol{x}_1, \boldsymbol{x}_2)}{\partial \boldsymbol{x}_1} \right\}. \tag{33}$$
312
+
313
+ - (b2) Functional faithfulness. This condition considers the functional properties of $f_{s_{t-1}}^{(j)}$ in terms of its faithfulness to the graph structure. We require the following sense of faithfulness in terms of the functional behaviour for all $s_{t-1} \in \{1, ..., K\}^N$ :
314
+ - (b2.1) $f_{s_{t-1}}^{(j)}$ is differentiable w.r.t. $\mathbf{PA}_{s_{t-1}}^{(j)}(t-1)$ almost everywhere. Also all the entries of the Jacobian matrix $\frac{df_{s_{t-1}}^{(j)}}{d\mathbf{PA}_{s_{t-1}}^{(j)}(t-1)}$ , when well defined, are non-zero almost everywhere w.r.t. $\mathbf{PA}_{s_{t-1}}^{(j)}(t-1)$ .
315
+ - (b2.2) If $x_{t-1}^{(i)} \notin PA_{s_{t-1}}^{(j)}(t-1)$ , then $f_{s_{t-1}}^{(j)}$ is constant w.r.t. $x_{t-1}^{(i)}$.
316
+
317
+ Intuitively, this faithfulness condition requires the output of the function $f_{s_{t-1}}^{(j)}$ to vary if and only if $\mathbf{PA}_{s_{t-1}}^{(j)}(t-1)$ varies. We also connect (a2) to (b2).
318
+
319
+ **Proposition A.5.** (a2) implies (b2).
320
+
321
+ To see this, note that (a2.1) and (a2.2) force the derivatives w.r.t. $\mathbf{PA}_{s_{t-1}}^{(j)}(t-1)$ to be non-zero almost everywhere (which implies (b2.1)). Furthermore, this is only violated on a zero-measure set of points, or when variable i interacts with j via $f_0$, but this is only possible when $\mathbf{x}_{t-1}^{(i)} \notin \mathbf{PA}_{s_{t-1}}^{(j)}(t-1)$, which implies (b2.2).
322
+
323
+ With the above assumptions we can now state the following identifiability results:
324
+
325
+ **Theorem A.6.** A Markov Switching Model (m1-m3) under assumptions (b1) and (b2) is partially identifiable with respect to its regime-dependent graph (Def. A.1).
326
+
327
+ *Proof.* Assume there exist two MSMs satisfying $p(\boldsymbol{x}_{1:T}) = \hat{p}(\boldsymbol{x}_{1:T})$. Since $p(\boldsymbol{x}_{1:t}) = p(\boldsymbol{x}_t|\boldsymbol{x}_{1:t-1})p(\boldsymbol{x}_{1:t-1})$, the equality $p(\boldsymbol{x}_{1:T}) = \hat{p}(\boldsymbol{x}_{1:T})$ and the fact that $p(\boldsymbol{x}_t|\boldsymbol{x}_{1:t-1})$ is a probability distribution imply that
328
+
329
+ $$p(\mathbf{x}_t|\mathbf{x}_{1:t-1}) = \hat{p}(\mathbf{x}_t|\mathbf{x}_{1:t-1}), \quad \forall t = 2, ..., T.$$
330
+ (34)
331
+
332
+ Now since the model assumes (m1) a conditional first-order Markov structure controlled by the previous state $u_{t-1}$ (Eq. (21)), we can show that the conditional distribution is a finite mixture distribution:
333
+
334
+ $$p(\mathbf{x}_t|\mathbf{x}_{1:t-1}) = \sum_{u=1}^{U} p(u_{t-1} = u|\mathbf{x}_{1:t-1}) p(\mathbf{x}_t|\mathbf{x}_{t-1}, u_{t-1} = u).$$
335
+ (35)
336
+
337
+ We assume (b1) and functional faithfulness (b2). Since (m3) $p(\boldsymbol{x}_{t}|\boldsymbol{x}_{t-1},u_{t-1}=u)$ is Gaussian (Eq. (23)), using the identifiability result for finite Gaussian mixtures (Yakowitz & Spragins, 1968) we have $U=\hat{U}$ , and for almost any given $\boldsymbol{x}_{t-1}=\boldsymbol{\alpha}\in\mathbb{R}^{Nd}$ (modulo some zero-measure sets) we have the following result: for any $u\in\{1,...,U\}$ , there exists $\hat{u}(\boldsymbol{\alpha},u)\in\{1,...,U\}$ such that
338
+
339
+ $$p(u_{t-1} = u | \mathbf{x}_{1:t-2}, \mathbf{x}_{t-1} = \boldsymbol{\alpha}) = \hat{p}(u_{t-1} = \hat{u}(\boldsymbol{\alpha}, u) | \mathbf{x}_{1:t-2}, \mathbf{x}_{t-1} = \boldsymbol{\alpha}),$$
340
+
341
+ $$p(\mathbf{x}_t | \mathbf{x}_{t-1} = \boldsymbol{\alpha}, u_{t-1} = u) = \hat{p}(\mathbf{x}_t | \mathbf{x}_{t-1} = \boldsymbol{\alpha}, u_{t-1} = \hat{u}(\boldsymbol{\alpha}, u)).$$
342
+ (36)
343
+
344
+ To clarify, Proposition 2 in Yakowitz & Spragins (1968) establishes that finite mixtures of multivariate Gaussian distributions are identifiable; mixture components can be distinguished whenever their means or covariances are distinct. Our assumptions (b1) and (b2) ensure distinct means for almost any $x_{t-1} = \alpha \in \mathbb{R}^{Nd}$.
345
+
346
+ The (m3) factorised Gaussian structure (Eq. (23)) and the above identification result further indicate that
347
+
348
+ $$p(\mathbf{x}_{t}^{(j)}|\mathbf{x}_{t-1} = \alpha, u_{t-1} = u) = \hat{p}(\mathbf{x}_{t}^{(j)}|\mathbf{x}_{t-1} = \alpha, u_{t-1} = \hat{u}(\alpha, u)), \quad \forall j = 1, ..., N.$$
349
+ (37)
350
+
351
+ Again wlog., let us denote the mean function of $p(\boldsymbol{x}_t^{(j)}|\boldsymbol{x}_{t-1}=\boldsymbol{\alpha},u_{t-1}=u)$ as $f_u^{(j)}$ for j=1,...,N and u=1,...,U, and we also use the short notation $\mathbf{PA}_u^{(j)}(t-1)$ to denote the input variables of $f_u^{(j)}$ . Under (b2.1) of the (b2) functional faithfulness assumption, and given that $U,N<+\infty$ , there exists an open set $\mathcal{X}\subset\mathbb{R}^{Nd}$ such that $\mu(\mathcal{X})=\mu(\mathbb{R}^{Nd})$ , where $\mu(\cdot)$ denotes the Lebesgue measure of a Euclidean space, and both Eq. (37) and the following condition hold (note that $\mathbf{PA}_u^{(j)}(t-1)\subset \boldsymbol{x}_{t-1}$ ):
352
+
353
+ $$\text{all the entries of } \frac{df_u^{(j)}}{d\mathbf{PA}_u^{(j)}(t-1)}\bigg|_{\boldsymbol{x}_{t-1}=\boldsymbol{\alpha}} \text{ are non-zero}, \quad \forall \boldsymbol{\alpha} \in \mathcal{X}, \ \forall j \in \{1,...,N\}. \tag{38}$$
356
+
357
+ To see this: under (b2.1) we have for each $u \in \{1,...,U\}$ there exists an open set $\mathcal{X}_u \subset \mathbf{PA}_u^{(j)}(t-1)$ with $\mu(\mathcal{X}_u) = \mu(\mathbf{PA}_u^{(j)}(t-1))$ such that the partial derivatives computed within this set are non-zero. Then we can construct $\mathcal{X}$ as follows:
358
+
359
+ $$\mathcal{X} = \tilde{\mathcal{X}} \cap \hat{\mathcal{X}}, \quad \tilde{\mathcal{X}} = \bigcap_{u=1}^{U} \left(\mathcal{X}_{u} \times \{\boldsymbol{x}_{t-1}^{(i)} \in \mathbb{R}^{d} \,|\, \boldsymbol{x}_{t-1}^{(i)} \notin \mathbf{PA}_{u}^{(j)}(t-1)\}\right), \quad \hat{\mathcal{X}} = \bigcap_{u=1}^{U} \left(\hat{\mathcal{X}}_{u} \times \{\boldsymbol{x}_{t-1}^{(i)} \in \mathbb{R}^{d} \,|\, \boldsymbol{x}_{t-1}^{(i)} \notin \widehat{\mathbf{PA}}_{u}^{(j)}(t-1)\}\right). \tag{39}$$
362
+
363
+ We can show wlog. that $\mu(\mathcal{X}_u \times \{\boldsymbol{x}_{t-1}^{(i)} \in \mathbb{R}^d | \boldsymbol{x}_{t-1}^{(i)} \notin \mathbf{PA}_u^{(j)}(t-1)\}) = \mu(\mathbb{R}^{Nd})$ since the Lebesgue measure of $\mathbb{R}^{Nd}$ is a product measure. Therefore using the union bound we also have $\mu(\mathcal{X}) = \mu(\mathbb{R}^{Nd})$ .
364
+
365
+ Now we show that $\hat{u}(\boldsymbol{\alpha},u)$ is a constant for $\boldsymbol{\alpha} \in \mathcal{X}$ almost everywhere, and those $\boldsymbol{\alpha}$ values satisfy $\hat{u}(\boldsymbol{\alpha},u) = \hat{u}(u)$ and $\mathbf{PA}_{u}^{(j)}(t-1) = \widehat{\mathbf{PA}}_{\hat{u}(u)}^{(j)}(t-1)$ for all $j \in \{1,...,N\}$ .
366
+
367
+ (i) By model definition (m3) (Eq. (23)) and assumption (b2.1), within $\boldsymbol{x}_{t-1} = \boldsymbol{\alpha} \in \mathcal{X}$ we have $p(\boldsymbol{x}_t^{(j)}|\boldsymbol{x}_{t-1},u_{t-1} = u) = p(\boldsymbol{x}_t^{(j)}|\mathbf{P}\mathbf{A}_u^{(j)}(t-1),u_{t-1} = u)$ . Similarly for $\hat{p}$ we have $\hat{p}(\boldsymbol{x}_t^{(j)}|\boldsymbol{x}_{t-1},u_{t-1} = \hat{u}(\boldsymbol{\alpha},u)) = \hat{p}(\boldsymbol{x}_t^{(j)}|\widehat{\mathbf{P}}\mathbf{A}_{\hat{u}(\boldsymbol{\alpha},u)}^{(j)}(t-1),u_{t-1} = \hat{u}(\boldsymbol{\alpha},u))$ .
368
+
369
+ Then from Eq. (37) and assumption (b2.2), there exists at most a zero-measure set $\mathcal{X}_0 \subset \mathcal{X}$ of $\alpha$ values such that $\widehat{\mathbf{PA}}_{\hat{u}(\alpha,u)}^{(j)}(t-1) \neq \mathbf{PA}_{u}^{(j)}(t-1)$. Otherwise, we can show that there exist two $\alpha \neq \alpha' \in \mathcal{X}_0$ with $\widehat{\mathbf{PA}}_{\hat{u}(\alpha,u)}^{(j)}(t-1) = \widehat{\mathbf{PA}}_{\hat{u}(\alpha',u)}^{(j)}(t-1) \neq \mathbf{PA}_{u}^{(j)}(t-1)$, which contradicts Eq. (37) and (b2.2). This means, by Eq. (23) again, and noting that $\mu(\mathcal{X}\setminus\mathcal{X}_0) = \mu(\mathcal{X})$ :
370
+
371
+ $$\widehat{\mathbf{PA}}_{\hat{u}(\boldsymbol{\alpha},u)}^{(j)}(t-1) = \mathbf{PA}_{u}^{(j)}(t-1), \quad \forall \boldsymbol{\alpha} \in \mathcal{X} \backslash \mathcal{X}_{0}, \ \forall j \in \{1,...,N\}.$$
372
+
373
+ (ii) Under (b1) unique indexing of causal graph structure, we have $\hat{u}(\boldsymbol{\alpha},u)$ as a constant w.r.t. $\boldsymbol{\alpha} \in \mathcal{X} \setminus \mathcal{X}_0$. To see this, if there exist $\boldsymbol{\alpha}, \boldsymbol{\alpha}' \in \mathcal{X} \setminus \mathcal{X}_0$ such that $\hat{u}(\boldsymbol{\alpha},u) \neq \hat{u}(\boldsymbol{\alpha}',u)$, then from (b1), there exists $j \in \{1,...,N\}$ such that $\widehat{\mathbf{PA}}_{\hat{u}(\boldsymbol{\alpha},u)}^{(j)}(t-1) \neq \widehat{\mathbf{PA}}_{\hat{u}(\boldsymbol{\alpha}',u)}^{(j)}(t-1)$, a contradiction to point (i) above. We now write $\hat{u}(\boldsymbol{\alpha},u) = \hat{u}(u)$ w.l.o.g., then we have
374
+
375
+ $$\mathbf{PA}_{u}^{(j)}(t-1) = \widehat{\mathbf{PA}}_{\hat{u}(u)}^{(j)}(t-1), \quad \forall \alpha \in \mathcal{X} \backslash \mathcal{X}_{0}, \ \forall j \in \{1,...,N\}.$$
376
+
377
+ In summary, we have shown that there exists a permutation $\tau \in S_U$ such that $\hat{u} = \tau(u)$ and the following equivalence holds for all $j \in \{1, ..., N\}$ and almost everywhere for $x_{t-1} \in \mathbb{R}^{Nd}$ :
378
+
379
+ $$\begin{gathered} p(\boldsymbol{x}_{t}^{(j)}|\boldsymbol{x}_{t-1}, u_{t-1} = u) = \hat{p}(\boldsymbol{x}_{t}^{(j)}|\boldsymbol{x}_{t-1}, u_{t-1} = \tau(u)), \quad \mathbf{PA}_{u}^{(j)}(t-1) = \widehat{\mathbf{PA}}_{\tau(u)}^{(j)}(t-1), \\ \forall j \in \{1, ..., N\}, \quad \forall \boldsymbol{x}_{t-1} \in \mathcal{X} \setminus \mathcal{X}_{0}, \quad \mu(\mathcal{X} \setminus \mathcal{X}_{0}) = \mu(\mathbb{R}^{Nd}). \end{gathered} \tag{40}$$
384
+
385
+ We now introduce assumption (m4) and establish identifiability of the conditional summary graph under unique indexing of outgoing edges (a1) and functional faithfulness (b2).
386
+
387
+ **Theorem A.7.** The multi-state dependent model (m1-m4) is partially identifiable w.r.t. the outgoing edge structure (Def. A.2) up to permutation equivalence of the states, if it satisfies the assumptions of (a1) unique indexing of the outgoing structure and (b2) functional faithfulness.
388
+
389
+ *Proof.* Assume there exist two models under (m1-m4), satisfying $p(\boldsymbol{x}_{1:T}) = \hat{p}(\boldsymbol{x}_{1:T})$. For simplicity, we first want to view the above as a mixture model of $K^N$ regimes with one global state. Assume again a bijective mapping $\varphi: \{1,\ldots,K\}^N \to \{1,\ldots,K^N\}$ such that we can map a specific state configuration $(k_1,\ldots,k_N)$ to an assigned value $k_{1:N} \in \{1,\ldots,K^N\}$, that is, $k_{1:N} = \varphi(k_1,\ldots,k_N)$. Therefore, for any $t \in \{1,\ldots,T\}$, we can set $u_t = \varphi(\boldsymbol{s}_t)$ to reduce the multi-state model to a general MSM. Given that assumption (a1) implies (b1), from Theorem A.6 there exists a permutation $\tau \in S_{K^N}$ such that $\hat{k}_{1:N} = \tau(k_{1:N})$ and the following equivalence holds for all $j \in \{1,\ldots,N\}$ and almost everywhere for $\boldsymbol{x}_{t-1} \in \mathbb{R}^{Nd}$ :
390
+
391
+ $$\begin{gathered} p(\boldsymbol{x}_{t}^{(j)}|\boldsymbol{x}_{t-1}, u_{t-1} = k_{1:N}) = \hat{p}(\boldsymbol{x}_{t}^{(j)}|\boldsymbol{x}_{t-1}, u_{t-1} = \tau(k_{1:N})), \quad \mathbf{PA}_{k_{1:N}}^{(j)}(t-1) = \widehat{\mathbf{PA}}_{\tau(k_{1:N})}^{(j)}(t-1), \\ \forall j \in \{1, ..., N\}, \quad \forall \boldsymbol{x}_{t-1} \in \mathcal{X} \backslash \mathcal{X}_{0}, \quad \mu(\mathcal{X} \backslash \mathcal{X}_{0}) = \mu(\mathbb{R}^{Nd}). \end{gathered} \tag{41}$$
396
+
397
+ We can revert the above back to the multi-state model notation (m1-m4), obtaining the following equivalence for the causal effects:
398
+
399
+ $$\mathbf{PA}_{(k_1,\dots,k_N)}^{(j)}(t-1) = \widehat{\mathbf{PA}}_{(\varphi^{-1} \circ \tau \circ \varphi)(k_1,\dots,k_N)}^{(j)}(t-1), \tag{42}$$
400
+
401
+ where the permutation $\tau$ is general and can permute both state indices and values. From (m4), recall the definition of $\mathbf{PA}_{(k_1,\ldots,k_N)}^{(j)}(t-1)$ for some $(k_1,\ldots,k_N)\in\{1,\ldots,K\}^N$ , and consider the equivalence on $\widehat{\mathbf{PA}}_{(\hat{k}_1,\ldots,\hat{k}_N)}^{(j)}(t-1)$ for some $(\hat{k}_1,\ldots,\hat{k}_N)\in\{1,\ldots,K\}^N$ :
402
+
403
+ $$\mathbf{PA}_{s_{t-1}=(k_1,\dots,k_N)}^{(j)}(t-1) = \bigcup_{i=1}^{N} \left\{ x_{t-1}^{(i)} \mid j \in \mathcal{C}_{k_i}^{(i)} \right\} = \bigcup_{i=1}^{N} \left\{ x_{t-1}^{(i)} \mid j \in \widehat{\mathcal{C}}_{\hat{k}_i}^{(i)} \right\} = \widehat{\mathbf{PA}}_{(\hat{k}_1,\dots,\hat{k}_N)}^{(j)}(t-1). \tag{43}$$
404
+
405
+ Then for a given $i \in \{1,\ldots,N\}$ and by fixing $k=k_i \in \{1,\ldots,K\}$ , we can recover the outgoing edge structure $\mathcal{C}_k^{(i)}$ from $\mathbf{PA}_{(k_1,\ldots,k_N)}^{(j)}(t-1)$ for all $j \in \{1,\ldots,N\}$ . Equivalently, by fixing $\hat{k}=\hat{k}_i \in \{1,\ldots,K\}$ , we can recover $\widehat{\mathcal{C}}_{\hat{k}}^{(i)}$ from $\widehat{\mathbf{PA}}_{(\hat{k}_1,\ldots,\hat{k}_N)}^{(j)}(t-1)$ for all $j \in \{1,\ldots,N\}$ :
406
+
407
+ $$\mathcal{C}_k^{(i)} = \bigcup_{j=1}^N \left\{ j \mid x_{t-1}^{(i)} \in \mathbf{PA}_{(k_1, \dots, k_{i-1}, k, k_{i+1}, \dots, k_N)}^{(j)}(t-1) \right\} = \bigcup_{j=1}^N \left\{ j \mid x_{t-1}^{(i)} \in \widehat{\mathbf{PA}}_{(\hat{k}_1, \dots, \hat{k}_{i-1}, \hat{k}, \hat{k}_{i+1}, \dots, \hat{k}_N)}^{(j)}(t-1) \right\} = \widehat{\mathcal{C}}_{\hat{k}}^{(i)}.$$
408
+
409
+ Given the unique indexing assumption (a1), every k necessarily maps to a different $\hat{k}$. Hence $\hat{k}$ is obtained from a permutation of k, which may differ across variables: there exists $\sigma^{(i)} \in S_K$ such that $\mathcal{C}_k^{(i)} = \widehat{\mathcal{C}}_{\sigma^{(i)}(k)}^{(i)}$ for all $i \in \{1, \dots, N\}$ and $k \in \{1, \dots, K\}$. This implies that the multi-state dependent model (m1-m4) is partially identifiable with respect to its outgoing edge structure.
410
+
411
+ **Corollary A.8.** Conditionally stationary time series (m1-m5) are partially identifiable w.r.t. the labels of conditional effects (Def. A.3) up to permutations, if they satisfy: (a1) unique indexing of outgoing structure, and (a2) unique indexing of function types.
412
+
413
+ *Proof.* Under (m1-m4) and assuming (a1-a2), from Theorem A.7 the following equivalence holds for all $j \in \{1, ..., N\}$ and almost everywhere for $x_{t-1} \in \mathbb{R}^{Nd}$ :
414
+
415
+ $$\begin{gathered} p\left(\boldsymbol{x}_{t}^{(j)}|\boldsymbol{x}_{t-1},\boldsymbol{s}_{t-1}=(k_{1},\ldots,k_{N})\right)=\hat{p}\left(\boldsymbol{x}_{t}^{(j)}|\boldsymbol{x}_{t-1},\boldsymbol{s}_{t-1}=\left(\sigma^{(1)}(k_{1}),\ldots,\sigma^{(N)}(k_{N})\right)\right), \\ \forall j \in \{1,\ldots,N\}, \quad \forall \boldsymbol{x}_{t-1} \in \mathcal{X} \setminus \mathcal{X}_{0}, \quad \mu(\mathcal{X} \setminus \mathcal{X}_{0})=\mu(\mathbb{R}^{Nd}). \end{gathered} \tag{44}$$
420
+
421
+ From Gaussian identifiability (Yakowitz & Spragins, 1968), we have $f_{s_{t-1}}^{(j)}(\boldsymbol{x}_{t-1}) = \hat{f}_{\hat{s}_{t-1}}^{(j)}(\boldsymbol{x}_{t-1})$ , for each $j \in \{1, \dots, N\}$ and any $\boldsymbol{x}_{t-1} \in \mathcal{X} \setminus \mathcal{X}_0$ . From (m5), we have
422
+
423
+ $$f_{\mathbf{s}_{t-1}}^{(j)}(\mathbf{x}_{t-1}) = g\left(f_{e_{t-1}^{(1,j)}}\left(\mathbf{x}_{t-1}^{(1)}, \mathbf{x}_{t-1}^{(j)}\right), \dots, f_{e_{t-1}^{(N,j)}}\left(\mathbf{x}_{t-1}^{(N)}, \mathbf{x}_{t-1}^{(j)}\right)\right) = \hat{g}\left(\hat{f}_{\hat{e}_{t-1}^{(1,j)}}\left(\mathbf{x}_{t-1}^{(1)}, \mathbf{x}_{t-1}^{(j)}\right), \dots, \hat{f}_{\hat{e}_{t-1}^{(N,j)}}\left(\mathbf{x}_{t-1}^{(N)}, \mathbf{x}_{t-1}^{(j)}\right)\right) = \hat{f}_{\hat{\mathbf{s}}_{t-1}}^{(j)}(\mathbf{x}_{t-1}), \tag{45}$$
424
+
425
+ where $e_{t-1}^{(i,j)} := w_{i,j,s_{t-1}^{(i)}}$ . We can directly establish a relation from $e_{t-1}^{(i,j)}$ to $\hat{e}_{t-1}^{(i,j)}$ using the partial derivative of the mean of $x_t^{(j)}$ with respect to $x_{t-1}^{(i)}$ :
426
+
427
+ $$\frac{\partial f_{\boldsymbol{s}_{t-1}}^{(j)}(\boldsymbol{x}_{t-1})}{\partial \boldsymbol{x}_{t-1}^{(i)}} = \frac{\partial g}{\partial \boldsymbol{h}^{(i,j)}} \frac{\partial f_{e_{t-1}^{(i,j)}}}{\partial \boldsymbol{x}_{t-1}^{(i)}} = \frac{\partial \hat{g}}{\partial \hat{\boldsymbol{h}}^{(i,j)}} \frac{\partial \hat{f}_{\hat{e}_{t-1}^{(i,j)}}}{\partial \boldsymbol{x}_{t-1}^{(i)}} = \frac{\partial \hat{f}_{\hat{\boldsymbol{s}}_{t-1}}^{(j)}(\boldsymbol{x}_{t-1})}{\partial \boldsymbol{x}_{t-1}^{(i)}}, \tag{46}$$
428
+
429
+ where the derivative of g is independent of $s_{t-1}$ , and also notably j (which will be of great importance later). Under assumption (a2.2), the set
430
+
431
+ $$\mathcal{X}_{func} = \bigcup_{e \neq e'} \left( \mathcal{X}_{e,e'} \cup \hat{\mathcal{X}}_{e,e'} \right)$$
432
+
433
+ has zero measure, where $\hat{\mathcal{X}}_{e,e'}$ denotes the corresponding sets in (a2.2) for another model $\hat{p}$ under (m1-m5) that satisfies (a1-a2). Assume $\mathcal{X}_0$ also contains the points where the partial derivative of g with respect to $h^{(i,j)}$ is zero; we know this set has measure zero from (a2.1). Therefore, for any $\mathbf{x}_0 \in \mathcal{X} \setminus (\mathcal{X}_0 \cup (\times^N \mathcal{X}_{func}))$ ,
434
+
435
+ $$\frac{\partial g}{\partial \boldsymbol{h}^{(i,j)}} \frac{\partial f_{e_{t-1}^{(i,j)}}}{\partial \boldsymbol{x}_{t-1}^{(i)}} \bigg|_{\boldsymbol{x}_{t-1}^{(i)} = \boldsymbol{x}_{0}^{(i)}} = \frac{\partial \hat{g}}{\partial \hat{\boldsymbol{h}}^{(i,j)}} \frac{\partial \hat{f}_{\hat{e}_{t-1}^{(i,j)}}}{\partial \boldsymbol{x}_{t-1}^{(i)}} \bigg|_{\boldsymbol{x}_{t-1}^{(i)} = \boldsymbol{x}_{0}^{(i)}}.$$
436
+ (47)
437
+
438
+ Note $\mathcal{X}\setminus(\mathcal{X}_0\cup(\times^N\mathcal{X}_{func}))$ has full measure. Given that $e_{t-1}^{(i,j)}\in\{0,\dots,n_\epsilon-1\}$ , from (a2.1-a2.2) we know that if $e_{t-1}^{(i,j)}=0$ then $\hat{e}_{t-1}^{(i,j)}=0$ , since $\hat{f}_e$ has non-zero derivatives for any $e\neq 0$ . Furthermore, from (a2.2), for any $e\neq e'$ the derivatives of $f_e$ and $f_{e'}$ are not equal. Then, any $e_{t-1}^{(i,j)}\neq 0$ must map uniquely to another $\hat{e}_{t-1}^{(i,j)}\neq 0$ , otherwise under
439
+
440
+ assumptions (a2.1-a2.2) we have a contradiction with Eq. (47), as it would imply a repetition of edge-types. Therefore, for each i and each j, there exists a permutation $\pi^{(i,j)} \in S_{n_{\epsilon}-1}$ such that $\hat{e}_{t-1}^{(i,j)} = \pi^{(i,j)}(e_{t-1}^{(i,j)})$ for any $e_{t-1}^{(i,j)} \in \{1,\dots,n_{\epsilon}-1\}$. This does not imply any direct identifiability of the set of functions $f_1,\dots,f_{n_{\epsilon}-1}$, but of the labels of conditional effects $\mathcal{W}$. Due to the permutation invariance of the aggregation g, the partial derivative terms $\frac{\partial g}{\partial h^{(i,j)}}$ are equal across i and j for fixed $x_0 \in \mathcal{X} \setminus (\mathcal{X}_0 \cup (\times^N \mathcal{X}_{func}))$. This forces the permutations $\pi^{(i,j)}$ to be equal, as otherwise, edge-types where $e_{t-1}^{(i,j)} \neq 0$ mapping to different $\hat{e}_{t-1}^{(i,j)}$ would violate Eq. (47) under assumptions (a2.1-a2.2). Therefore, there exists $\pi \in S_{n_{\epsilon}-1}$ such that $\hat{e}_{t-1}^{(i,j)} = \pi(e_{t-1}^{(i,j)})$ for any $e_{t-1}^{(i,j)} \in \{1,\dots,n_{\epsilon}-1\}$. Now, recall the definition of $\mathcal{W}$ :
441
+
442
+ $$W := \left\{ w_{i,j,k} \in \{0, \dots, n_{\epsilon} - 1\} : i, j \in \{1, \dots, N\}, k \in \{1, \dots, K\}, w_{i,j,k} = 0 \iff j \notin \mathcal{C}_k^{(i)} \right\}. \tag{48}$$
443
+
444
+ From Theorem A.7, indexing w.r.t. $k$ in the above equation is equivalent up to a permutation $\sigma^{(i)}$ on $\widehat{\mathcal{W}}$. Considering $e_{t-1}^{(i,j)} := w_{i,j,s_{t-1}^{(i)}}$, we have the following equivalence for $\widehat{\mathcal{W}}$ when $w_{i,j,k} \neq 0$, for any $i,j \in \{1,\ldots,N\}$ and $k \in \{1,\ldots,K\}$ :
445
+
446
+ $$w_{i,j,k} = \pi\left(\hat{w}_{i,j,\sigma^{(i)}(k)}\right), \quad w_{i,j,k} \in \{1,\dots,n_{\epsilon}-1\}.$$
447
+ (49)
448
+
449
+ This implies partial identifiability with respect to the labels of conditional effects $\mathcal{W}$ (Def. A.3).
2110.08350/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-10-11T19:03:23.675Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.40 Safari/537.36" version="15.4.3" etag="nFu-MmmF8soE27bJXd33" type="google"><diagram id="HUHqVs9BoDJKp1FiwfPC">7Vpdl5o4GP41XtZDAoJcWmfcXrTbOTsX3b3MSARaJDSGUffXbyKJEBL8qDh6ujOcM5I338/z5E3y6sCdLjd/UFQkX0iEswF0os3AfRhA6IeA/xeGbWVwx6PKENM0qkygNjyn/2JpdKS1TCO80goyQjKWFrpxTvIcz5lmQ5SStV5sQTK91wLFskenNjzPUYaNYt/SiCWVdQyD2v4Jp3GiegZ+WOUskSosG14lKCLrhsl9HLhTSgir3pabKc4EdgqXqt6sI3c/MIpzdkoFWFV4RVkp5ybHxbZqsjElZSGLYcrwxgYxesnaiNVDAPuJcUFgssSMbnkR2dDIlVWkGGAo0+saWuhIW9KAFXqSUclmvG+6njB/kXO2z989Pn8+/TzCorwzcD+uk5Th5wLNRe6aq5vbErbk7T8A/rrnUyQWJGdSvEDU7QSxCRa0g/UWYHgGGFNCsQEIHz1rzZpR8gNPSUYot+Qkx2L2aZa1TChL45wnM7wQLQgkUr6qJtK8TKNIdGIFWafhIM67tBwvuAR05a+Grq89ml59U60dFZp8AXg5XyOLeP2MSQQ00vyfJVEZH1Y7rCa8gF9s6jz+FovPzwRFYkK70a5Uk3wwVatVoSuoYs7pwLQnXYhEhFbJLgfIxBNivIe8koXj6dLxL1yhyp3p3swz1REMYdB8LOpQXvQSdfi9qGMnC7tA0rwo2bs8zpMHcOBQYx/eiVyC6ziTb5RzwjNJyRpqeaG1UN71c5Z+7sa/jC8WDIA2xYBd5rsqzlGF2z5xaBoJDY2A8RCMm4/lgNKDRMIrSQS+S+RsiYQH952bSUS10btG3HeNnL+5uOHQCesH3IlGgEUjN45SuK4ZpQgt93LQx1arLou9hSnaN2btOn2SkEAHXMfg6EUPZtRmlvElcu/rup9YRRfysoaKCqhomqFSzxY96oMVM3z0Jy4pEmVyzNaE/mhGFXqJsjk6omGLzAF0Z4F4LlW1rBEMW+Ca6AYjC7rjHtA1gz0zipa4gvV3AHPkKsMROEHQA5xmdORrgSliaR4PxGqo76nHDyBhR5xktV0xvPw92Am8E9mBfbBjC0a88Ybve7+44feyw9ku1zfe8IMbbvjmRfL5r8mX/8l+3wH8rfd7aN7cJnOWvnIfSvLefOgSMZrO8d0eGA6T84sHBq+HAwO0XZrOu1h3cfKVJViQ+lIuFpie/A3R1RnrXLPXoDJ4u+3Q4vxxFGPlRghlCYlJjrLH2trySXWZz4QUEuPvmLGtxBWVjOgMcGDo9m9Rn89cJv9p5j1sZONVamsnS/fAnK6P0OF/Jo14BDyn/ppczFDjakVKOlcmiQhDNMYS6dHJOzzFGXdSr3rzNoJ2VSeUom2jQEHSvLrAyJafhKERKQg9/eAgL0Y12VWLNfX7oZ2mBvfIyrojrh3Hf5zMTK7BCwIYiuY2Kav6dQIg06LjD9zgqAJ15yKxbSSeME05dGLHfjhVOmNTOv6VpGNow/f0yLyrIvWqiWqkslbLQ5wpE++g05DnnsuVsydw1KAPHGauobbQb+ptR/sx0e1SbeZbwVCLEkeBzes4zhid6nUCUzlKTX1Kp2Nb4Mn6d2GVIOof17mP/wE=</diagram></mxfile>
2110.08350/main_diagram/main_diagram.pdf ADDED
Binary file (26.9 kB). View file
 
2110.08350/paper_text/intro_method.md ADDED
@@ -0,0 +1,103 @@
1
+ # Introduction
2
+
3
+ ::: table*
4
+ | **Device** | **Cores** | **Freq. (GHz)** | **RAM (GB)** | **Storage (GB)** | **Power (W)** | **Price (\$)** |
+ |---|---|---|---|---|---|---|
+ | NVIDIA A100 GPU | 6912 | 1.4 | 40 | --- | 400 | 12,500 |
+ | Galaxy S22 Smartphone | 8 | $<$ 2.8 | 8 | 128 | $\approx$ 3.6 | 750 |
+ | Raspberry Pi 3B+ | 4 | 1.4 | 1 | 32--512 | 2.3 | 40 |
+ | [NUCLEO-F767ZI](https://os.mbed.com/platforms/ST-Nucleo-F767ZI/) (M7) | 1 | 0.216 | 0.000512 | 0.002 | 0.3 | 9 |
+ | [NUCLEO-F446RE](https://os.mbed.com/platforms/ST-Nucleo-F446RE/) (M4) | 1 | 0.180 | 0.000128 | 0.000512 | 0.1 | 3 |
11
+
12
+ []{#tbl:devices-comp label="tbl:devices-comp"}
13
+ :::
14
+
15
+ There is increasing interest in bringing on-device neural networks to what is arguably the smallest-scale viable compute platform: microcontroller-powered IoT devices. The roadblock, however, is their limited compute power. Microcontroller units (MCUs) are single-chip systems, typically consisting of a single-core processor (such as an ARM M-series core), on-chip persistent Flash and temporary SRAM memories, and other peripherals (e.g. sensors, microphones, radio). Table [\[tbl:devices-comp\]](#tbl:devices-comp){reference-type="ref" reference="tbl:devices-comp"} compares technical specifications of MCUs against data center, mobile and micro-computer hardware. MCUs are designed with cost and power-efficiency in mind, which is primarily achieved by reducing the on-board memory and compute resources. The data shows that, even when compared to micro-computers like the Raspberry Pi, MCUs have orders of magnitude less storage and SRAM (GBs vs 100s of KB) and a slower processor (GHz vs 100s of MHz).
16
+
17
+ <figure id="fig:mcu-comp" data-latex-placement="t">
18
+ <embed src="figures/mcu-comp.pdf" />
19
+ <figcaption>Deep learning inference on an MCU illustrated using relevant components. A flat memory hierarchy and its limited capacity impose significant constraints on the network architecture.</figcaption>
20
+ </figure>
21
+
22
+ For many practitioners, it is not trivial to work around the resource scarcity of microcontrollers. As a result, applications choose to offload the execution of neural networks to a more capable device, such as a smartphone or a remote "cloud" backend server. This brings numerous disadvantages, such as a lowered degree of privacy, complete reliance on a working communication link, increased response latency, and increased power usage for radio communication.
23
+
24
+ To see what properties the resource scarcity imposes on neural networks, let us take a more detailed look at how a neural network's execution may be mapped to a microcontroller (Figure [1](#fig:mcu-comp){reference-type="ref" reference="fig:mcu-comp"}), such as via the TensorFlow Lite Micro runtime [@david2020tensorflow]:
25
+
26
+ - Each layer of a neural network will be executed in some predefined order, one at a time and in its entirety.
27
+
28
+ - A layer is executed by loading its parameters (weights) from Flash, loading its input from SRAM and writing the output back to SRAM.
29
+
30
+ - All constant data, such as weights of a neural network and program code, are stored in the read-only Flash memory. Therefore, the size of a neural network is limited by Flash (e.g. $\leq$ 64KB).
31
+
32
+ - All temporary intermediate data, including activation matrices, will be stored in SRAM. Therefore, a neural network's peak activation size / memory usage is limited by the amount of SRAM (e.g. $\leq$ 64 KB). A small sketch of both budget checks follows this list.
33
+
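+ To make the last two constraints concrete, the following sketch estimates the Flash and peak SRAM requirements of a purely sequential model from per-layer parameter and activation sizes; the layer list and byte counts are made-up numbers used only for illustration:
+
+ ```python
+ # Hypothetical per-layer sizes (bytes) for a small int8 CNN; numbers are illustrative only.
+ layers = [
+     {"name": "conv1", "params": 4_320,  "in_act": 9_216,  "out_act": 16_384},
+     {"name": "conv2", "params": 18_464, "in_act": 16_384, "out_act": 8_192},
+     {"name": "fc",    "params": 20_490, "in_act": 8_192,  "out_act": 40},
+ ]
+
+ # Model size: all weights live in read-only Flash.
+ flash_bytes = sum(l["params"] for l in layers)
+
+ # Peak SRAM: layers run one at a time, so at any point SRAM must hold the
+ # current layer's input and output activations simultaneously.
+ peak_sram_bytes = max(l["in_act"] + l["out_act"] for l in layers)
+
+ print(f"Flash: {flash_bytes} B, peak SRAM: {peak_sram_bytes} B")
+ assert flash_bytes <= 64 * 1024 and peak_sram_bytes <= 64 * 1024  # example 64 KB budgets
+ ```
+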
34
+ As a compute platform for deep learning, microcontrollers pose unique challenges that do not similarly manifest on other platforms. The need to constrain SRAM usage, the relatively slow processor, and the orders-of-magnitude smaller resource budget require *further specialist research* than what is typically explored in GPU/mobile-scale neural network design and compression. In fact, the need for MCU-specific methodology has spawned a research subfield called *TinyML*, which includes runtime design [@david2020tensorflow; @liberis2019neural], optimised layer implementations [@lai2018cmsis] and task-specific MCU-sized model discovery [@fedorov2019sparse; @liberis2021munas; @banbury2020micronets].
35
+
36
+ In this work (Figure [2](#fig:overall-diag){reference-type="ref" reference="fig:overall-diag"}), we tackle this problem through the use of *budgeted differentiable network pruning*. We employ bi-level gradient descent optimisation to learn the sizes of each layer in the network, resulting in iterative compression that is interleaved with network training and has negligible overhead. Our methodology formulates model size, peak memory usage and latency constraints as resource budget requirements and incorporates them into pruning as differentiable objectives. This achieves MCU specialisation currently absent from pruning literature and, compared to other MCU-level compression methods, the use of pruning and accurate objectives achieves faster compression while correctly targeting resource bottlenecks.
37
+
38
+ Differentiable pruning is evaluated on benchmark audio and image classification tasks, suitable for microcontrollers, and similar to real-world sensing applications: Speech Commands keyword spotting [@warden2018speech], Visual Wake Words [@chowdhery2019visual] person detection, CIFAR-10 [@krizhevsky09learning] and ImageNet [@deng2009imagenet] image classification. Additionally, a range of investigative studies is carried out to establish that: (a) the resource objectives are required for finding microcontroller-compatible models; (b) an accurate minimal peak memory usage computation is essential for correctly allocating the SRAM budget; (c) pruning allocates resources better than popular manual uniform scaling on MobileNet-v2 [@sandler2018mobilenetv2]; (d) training can be accelerated by terminating pruning early compared to training without pruning; (e) pruning prioritises computationally-expensive layers of the network; (f) pruned models have viable latency and energy consumption on a microcontroller to be deployed in practice.
39
+
40
+ In summary, this work contributes to microcontroller-aware model compression by:
41
+
42
+ 1. *Proposing a structured differentiable pruning method* for finding microcontroller-compatible models. Compared to related microcontroller-specialist work, the method accurately targets resource usage for neural networks and offers fast compression with negligible overhead during training (or even accelerates it).
43
+
44
+ 2. *Conducting ablation, comparative and analysis studies* to show that each aspect of this methodology is essential for achieving viable and performant microcontroller-compatible models.
45
+
46
+ 3. *Exercising differentiable pruning on fourteen architecture and dataset combinations* to produce state-of-the-art models at a variety of microcontroller-friendly resource usage *vs* accuracy trade-off points. Results show an improved key resource usage of up to 40% compared to related work and up to 80$\times$ compared to original models.
47
+
48
+ # Method
49
+
50
+ The pruning of neural network weights is implemented by applying channel-wise binary masks $\mathbf{M}$ to the output of each layer. This effectively sets specific output channels of a layer all to $\mathbf{0}$, allowing the parameters responsible for computing and consuming the masked-out channels to be safely removed from the network. An unpruned neural network is denoted as $f$ and its pruned counterpart as $f^\mathbf{M}$.
51
+
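+ As a minimal illustration (with made-up shapes, using plain NumPy rather than the actual training framework), applying a channel-wise binary mask to a layer's output looks like:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ batch, channels, h, w = 1, 8, 16, 16
+
+ activations = rng.normal(size=(batch, channels, h, w))   # output of some layer
+ mask = np.array([1, 1, 0, 1, 0, 1, 1, 0])                 # binary channel mask M (0 = pruned)
+
+ pruned = activations * mask[None, :, None, None]           # zero out pruned channels
+
+ # The zeroed channels carry no information, so the filters that produce them and the
+ # input channels of the next layer that consume them can be physically removed.
+ assert np.all(pruned[:, mask == 0] == 0)
+ ```
+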
52
+ **Definition of $\mathcal{P}_\text{TSK}$.** Pruning seeks to minimise (or improve upon) the difference between the classification loss of the pruned and the unpruned network on unseen data while only adjusting elements of the binary mask $\mathbf{M}$: $$\begin{multline}
53
+ \text{argmin}_\mathbf{M} [\mathcal{L}_{ce}(f^\mathbf{M}, \mathcal{D}) - \mathcal{L}_{ce}(f, \mathcal{D})] = \\ \text{argmin}_\mathbf{M} \mathcal{L}_{ce}(f^\mathbf{M}, \mathcal{D})
54
+ \end{multline}$$
55
+
56
+ where $\mathcal{L}_{ce}$ stands for classification loss (cross-entropy) and $\mathcal{D}$ for a validation dataset. Therefore, the classification loss can be used as task-specific feedback for pruning: $$\begin{equation}
57
+ \mathcal{P}_\textsc{TSK} \stackrel{\text{def}}{=} \mathcal{L}_{ce}(f^\mathbf{M}, \mathcal{D})
58
+ \end{equation}$$
59
+
60
+ **Channel importance.** Pruning algorithms typically use a parameter importance (*salience*) criterion to determine which parameters to keep. Both for sparse (unstructured) and structured pruning, popular choices include magnitude-based [@han2015learning], norm-based ($L_1$ or $L_2$) [@lin2020dynamic], Hessian-based [@theis2018faster], batch normalisation (BN)-specific [@liu2017learning], or gradient flow-based [@lee2018snip; @wang2020picking] salience metrics.
61
+
62
+ The methodology focuses on convolutional neural networks (CNNs), assembled out of "Conv-BatchNorm (BN)-ReLU" layer sequences, fully connected (FC) layers, and other parameter-free layers. In the former case, the salience $\vec{s}$ is chosen to be scaling coefficients $\vec{\gamma}$ within batch normalisation, and in the latter case, the $L_2$ norm of weights of a neuron. More specifically, the importance of a particular channel/neuron $i$ is $s_i$, given by: $$\begin{align*}
63
+ \text{Conv-BN-ReLU}(\mathbf{x}) & = \text{ReLU}[ \vec{\gamma}\:\text{Norm}(\mathbf{K} * \mathbf{x}) + \vec{\beta}] \\
64
+ s_i & = |\gamma_i| \\
65
+ \text{FC}(\mathbf{x}) & = \mathbf{Wx} + \vec{b} \\
66
+ s_i & = ||\mathbf{W}_{:,i}||_2
67
+ \end{align*}$$ where $\mathbf{x}$ is the input tensor, $\mathbf{K}$, $\mathbf{W}$ and $\vec{b}$ are parameter tensors, $*$ is a convolution operation and "Norm" is a mean-zero variance-one normalisation function, and $\vec{\gamma}$ and $\vec{\beta}$ are learned batch normalisation scaling and offset parameters.
68
+
69
+ **Pruning by salience.** Suppose that within a layer $L$ with $C$ output channels, a proportion $\pi_L$ is set to be kept rather than pruned. During pruning, all channels are ranked according to their salience ($\vec{s_L}$): the top $\pi_L$ fraction are kept (their entries in $\textbf{M}$ are set to 1) and the bottom $1 - \pi_L$ fraction are discarded ($\textbf{M}$ set to 0).
70
+
71
+ Therefore, for a given layer $L$, $\textbf{M}$ is computed by *a hard threshold* function of channel salience, denoted $M$, with the salience threshold $\tau_L$ picked to satisfy the constraint that only a $\pi_L$ proportion of channels have salience $s_{L,i} > \tau_L$. The remaining text refers to $\vec{s_L}$ and $s_{L, i}$ as $\vec{s}$ and $s_i$, respectively, making the layer implicit for brevity.
72
+
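+ A sketch of this keep-top-$\pi_L$ rule (BN-scale salience for conv channels, $L_2$-norm salience for FC neurons; shapes and values are illustrative, and the quantile-based thresholding is just one way to pick $\tau_L$):
+
+ ```python
+ import numpy as np
+
+ def conv_bn_salience(gamma):
+     """Salience of conv output channels: absolute BN scale, s_i = |gamma_i|."""
+     return np.abs(gamma)
+
+ def fc_salience(W):
+     """Salience of FC output neurons: L2 norm of each neuron's weight vector
+     (assumes weights stored as (out_features, in_features); adapt to the actual layout)."""
+     return np.linalg.norm(W, axis=1)
+
+ def hard_mask(salience, keep_ratio):
+     """Keep roughly the top keep_ratio fraction of channels by salience (hard threshold M)."""
+     tau = np.quantile(salience, 1.0 - keep_ratio)
+     return (salience > tau).astype(np.float32)
+
+ gamma = np.array([0.9, 0.05, 0.4, 0.01, 0.7, 0.2])
+ print(hard_mask(conv_bn_salience(gamma), keep_ratio=0.5))  # [1. 0. 1. 0. 1. 0.]
+ ```
+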
73
+ **Differentiable pruning.** Gradient update to $\pi_L$ is calculated assuming a continuous relaxation of the hard threshold function on the backwards pass only. Following DSA [@ning2020dsa], we use a sigmoid function in the log-domain of saliences, which is differentiable with respect to $\tau_L$: $$\begin{equation}
74
+ \label{eqn:smooth-mask}
75
+ M(s_i, s_0) = \frac{1}{1 + e^{-[\ln s_i - \ln s_0]}} = \frac{1}{1 + s_0 / s_i}
76
+ \end{equation}$$ and enforce a constraint that the proportion of kept channels should equal the desired channel width multiplier $\pi_L$: $$\begin{equation}
77
+ \label{eqn:pi-constraint}
78
+ \frac{1}{C} \sum_j^C M(s_j, s_0) = \pi_i
79
+ \end{equation}$$
80
+
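+ A sketch of the soft mask and of solving the constraint in Eqn. [\[eqn:pi-constraint\]](#eqn:pi-constraint){reference-type="ref" reference="eqn:pi-constraint"} for the threshold $s_0$ (here by simple bisection, an illustrative choice rather than the exact solver used in practice):
+
+ ```python
+ import numpy as np
+
+ def soft_mask(s, s0):
+     """Log-domain sigmoid mask: M(s_i, s_0) = 1 / (1 + s_0 / s_i)."""
+     return 1.0 / (1.0 + s0 / s)
+
+ def solve_threshold(s, pi, iters=100):
+     """Find s_0 such that mean_j M(s_j, s_0) = pi; mean M is decreasing in s_0."""
+     lo, hi = 1e-12, float(s.max()) * 1e6
+     for _ in range(iters):
+         mid = np.sqrt(lo * hi)             # bisect in log-space since s_0 > 0
+         if soft_mask(s, mid).mean() > pi:
+             lo = mid                       # mask too "open": increase the threshold
+         else:
+             hi = mid
+     return np.sqrt(lo * hi)
+
+ salience = np.array([0.9, 0.05, 0.4, 0.01, 0.7, 0.2])
+ s0 = solve_threshold(salience, pi=0.5)
+ print(soft_mask(salience, s0).mean())      # approximately 0.5
+ ```
+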
81
+ Differentiating Eqn. [\[eqn:pi-constraint\]](#eqn:pi-constraint){reference-type="ref" reference="eqn:pi-constraint"} w.r.t. $\pi_i$ and rearranging gives: $$\begin{align}
82
+ \frac{1}{C} \sum_j^C \frac{\partial M(s_j, s_0)}{\partial s_0} \frac{\partial s_0}{\partial \pi_i} = 1 \\
83
+ \frac{\partial s_0}{\partial \pi_i} = \frac{C}{\sum_j^C \frac{\partial M(s_j, s_0)}{\partial s_0}} \label{eqn:pi-constraint-deriv}
84
+ \end{align}$$
85
+
86
+ This is used to connect task loss gradients with respect to the mask $M$ back to each $\pi_L$: $$\begin{multline}
87
+ \label{eqn:deriv-convert}
88
+ \frac{\partial \mathcal{P}_\textsc{TSK}}{\partial \pi_i} = [\sum_j^C \frac{\partial \mathcal{P}_\textsc{TSK}}{\partial M(s_j, s_0)}\frac{\partial M(s_j, s_0)}{\partial s_0}] \frac{\partial s_0}{\partial \pi_i} \\
89
+ \stackrel{{\text{Eqn. \ref{eqn:pi-constraint-deriv}}}}{=} C \sum_j^C \frac{\partial \mathcal{P}_\textsc{TSK}}{\partial M(s_j, s_0)}\frac{\partial M(s_j, s_0)/\partial s_0}{\sum_k^C \partial M(s_k, s_0)/\partial s_0}
90
+ \end{multline}$$
91
+
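+ A sketch of the resulting chain rule (Eqn. [\[eqn:deriv-convert\]](#eqn:deriv-convert){reference-type="ref" reference="eqn:deriv-convert"}), converting per-channel mask gradients into a gradient for the layer's width multiplier; the NumPy code and the gradient values are illustrative, and $\partial M/\partial s_0 = -M(1-M)/s_0$ follows from the sigmoid form above:
+
+ ```python
+ import numpy as np
+
+ def dpi_from_mask_grads(dL_dM, s, s0):
+     """dP_TSK/dpi = C * sum_j dL/dM_j * (dM_j/ds_0) / sum_k dM_k/ds_0."""
+     M = 1.0 / (1.0 + s0 / s)
+     dM_ds0 = -M * (1.0 - M) / s0           # derivative of the log-domain sigmoid mask
+     C = len(s)
+     return C * np.sum(dL_dM * dM_ds0 / np.sum(dM_ds0))
+
+ salience = np.array([0.9, 0.05, 0.4, 0.01, 0.7, 0.2])
+ dL_dM = np.array([-0.2, 0.1, -0.05, 0.02, -0.3, 0.0])   # made-up task-loss gradients w.r.t. the mask
+ print(dpi_from_mask_grads(dL_dM, salience, s0=0.3))
+ ```
+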
92
+ **Focus regime for bottleneck objectives.** One out of three resource usage objectives considered is a *bottleneck objective*: only a few layers (the bottleneck) contribute to peak memory usage at a time. This differs from model size and MACs, to which all layers contribute to a certain extent.
93
+
94
+ As per Equation [\[eqn:ch5:pruning-loss\]](#eqn:ch5:pruning-loss){reference-type="ref" reference="eqn:ch5:pruning-loss"}, gradients from both $\mathcal{P}_\textsc{TSK}$ and $\mathcal{P}_\textsc{RES}$ are combined to update $\vec{\pi}$, which may needlessly shrink layers that are not in the bottleneck when other resource constraints are already satisfied. To prevent this, $\pi_L$ is updated only for bottleneck layers, *i.e.* layers for which $\frac{\partial \mathcal{P}_\textsc{RES}}{\partial \pi_L} > 0$.
95
+
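+ In code, one reading of this focus regime is a per-layer gate on the update, applied only where the resource gradient is positive (hypothetical gradient arrays, purely for illustration):
+
+ ```python
+ import numpy as np
+
+ d_task = np.array([0.02, -0.01, 0.03])   # dP_TSK/dpi_L for three layers (illustrative)
+ d_res = np.array([0.0, 0.5, 0.0])        # dP_RES/dpi_L: only layer 1 sits in the memory bottleneck
+ lam, lr = 0.1, 0.01                      # lam: weight on the resource objective (illustrative)
+
+ in_bottleneck = d_res > 0                # focus: only bottleneck layers receive this update
+ grad = np.where(in_bottleneck, d_task + lam * d_res, 0.0)
+ pi = np.array([0.8, 0.8, 0.8]) - lr * grad
+ print(pi)
+ ```
+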
96
+ <figure id="fig:res-pru" data-latex-placement="h">
97
+ <embed src="figures/res-pru.pdf" />
98
+ <figcaption>Three examples of pruning over <code>Add</code>, such as residual connections. All summands share the same pruning mask.</figcaption>
99
+ </figure>
100
+
101
+ **Pruning residual connections.** Many modern CNNs feature multiple processing paths aggregated using addition. To correctly preserve the addition of pruned feature maps, their surviving (unpruned) channels must have matching positions, *i.e.* their pruning masks have to be equal [@ning2020dsa; @van2020single]. It is also possible to work around this requirement with padding and channel rearrangement, albeit at extra implementation overhead [@lemaire2019structured].
102
+
103
+ Figure [3](#fig:res-pru){reference-type="ref" reference="fig:res-pru"} shows examples of tying pruning masks in practice, including residual connections [@he2016deep]. Groups of layers that share the pruning mask are pre-computed in advance using connected components search; the combined salience is the maximum of saliences of each layer.
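+
+ A sketch of mask tying across an `Add` (shapes and salience values are illustrative; in practice the groups come from the connected-components search mentioned above):
+
+ ```python
+ import numpy as np
+
+ def shared_mask(saliences, keep_ratio):
+     """All layers feeding the same Add share one mask, built from the element-wise max salience."""
+     combined = np.maximum.reduce(saliences)
+     tau = np.quantile(combined, 1.0 - keep_ratio)
+     return (combined > tau).astype(np.float32)
+
+ s_branch = np.array([0.9, 0.1, 0.6, 0.2])   # salience of the residual branch's last conv
+ s_skip = np.array([0.3, 0.8, 0.1, 0.4])     # salience of the layer producing the skip connection
+ mask = shared_mask([s_branch, s_skip], keep_ratio=0.5)
+ print(mask)  # the same mask is applied to both summands, keeping channel positions aligned
+ ```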
2112.01036/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2021-11-18T04:43:12.649Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.7.3 Chrome/91.0.4472.164 Electron/13.6.1 Safari/537.36" etag="13zrn_r6YoMfNDs7i9OB" version="15.7.3" type="device"><diagram id="WlfBNp3svzzV1grW7QyO" name="Page-1">lHzXkqy6tuXX7MdzA28ehYfEm8S8dOC9SZKEhK+/qGrtY7pvR3RnxKrKEkIS0pxjjjEl1l84P37lNV0aYy7K4S8MKb5/4cJfGIaSCHX/giXnb8k/aAz5LanXtvhT618FXnuVfwr/rvZpi/L9HxW3eR62dvnPwnyepjLf/qMsXdf5+M9q1Tz8Z69LWv/pEflXgZenQ/l/VAvbYmt+Sxny32orZVs3f/eMIn+ujOnflf808W7SYj7+rS9c/Avn13nefr+NX74c4Oz9PS+/DUn/l6v/HNhaTtv/yw1vDdH91Cn/11K8Kb+K9nfH/INGWOy3oT0dPn+e+S+MGu4muWq+W74Hvp1/ZoN6fea/L/zj/bNW4K5AMMv3XxfvbzX8bc/tz+32/G63dp7efzd7D/G35d96f2bnn51gW/mF5c02DncBen9Nh7ae7u/5/ajlehfs5bq19yKBPxfGtijg7dxa3sNKs5+mkPvvBQ7ixwRI7i9SgG19tvl36D9Nw5H8sToM+edo/n1W/0w07LL8/lvRn1mWy3kst/W8q/y5+g8MY/+L/L3rj9mjLIP9F8X+lh3/siMc/2Mtzb/ZEPGnLP1juvU/e/jX6t5f/izw/99ik//DYv9v03/b6QK/tuOPa/xzrvU0K4e/F/O+ns3bNo93hQFe4NK8r9f5MxX8PMzrT1N49fP5H9Zrmxe4FO/l12Wr9lveA+d+ugR/lyJ/l9zfi3RLb2P7/ROTlqn+C+PbJ2e5B/KQ6xncH9MLGjGo4Tf4pyDzIL5/893joM/7Sw3EQXSeLgFKvOAz9y+MQ6taYcDhaq6bvF3Pbeu6Mfq0cTnjnm6pbgZfG4RTEx4S37tjoI1yYPbTpyCnuTKY6yzgimDS+kbv58Q4Fsf3Er8L8L3KCxp+2RIKv6bJxrDCJK/dLi5awK2SlaPUlus85aTIrOdvGGUgyLjvrHzqtNZqzwSIecxAmnkrRecZEYSAM7lmVpoPBwALnEveJPnI78qzYQLCOGJRnnk5RokZEzoHmO4wK1zLgwO9K/O7FCJ5CriXsd2VgaP8VJaIFet+KvezItU8IBAA7vWS0j+VB0AwsHIN5Fkj1/CnstMTd4knwsoIZ5uwMse9wHjfLjg4X4OwVu/KbecYpiMislyXMiECAeEEI0Xgc96V6wMR6ournbRWky5qu9i6Fw6VwzqXVRU+r2A8mGfmfue78/OufHAgh5XrqN3T38penYdAhc8r5DLzXP9UDoTupzIAaT2N229l3ofzrfqwsiMyZ+Kyyu/6QVOAi7XCtWNf948d3++fDNuWSSoze+8A2zGaHnuV1+8t5e3N0of8vZl9VEUCL9BYSEOb39iNpGlcv6+9YMtsOmhFaiHGod6rBOTYVJH3dF+YjIR+2VX57XAX/+Rq9RHgGAjnut2Zo0SjwiacPFHmQyJI55fWosvndc9v41NpyczoXj1sGrk99n4E4W4hhuZY7bIekf5dZljB/fMus0T7j7Ey5MXQhL128OEZgo0oTJj8Tf+Sk+e0cRaVFEJiFMbS2LbhlvV7f76XOXtbNNcDjoUjJ76/zVGw+4tFE5Jli6niea9AmK5M9bpMb/z9/M5QCWdhXaM9JekT3Utkx1kaPul+dTiOW+TPH1vJ0tbtSFJ69W8fTvDHsiiJvmquQBV7f7h5/VWFwL9Cofv2R1q3mcVUhNFCd5aIA5SPZZrWkGUoCY86hvxeVzdJ7G0ocOwGFX/SIs/j5K7tbe+XazdwZV2zZQressvN6/iWj1PYGkuJ1eQLVUppru8vPb0eSn83YpH4bR1447FMXPZyOFPf/AG4lP++oyc1XDqY7HLQOM/ZK0vUSYZvh6a8W3tKxiGGRL2x1GBV1E7fK3G3FzFk+WNDUUFe/7THbKgdZHjbX7uDI/xKXbGfSm2Vr3mmF3y3jAIv/ljfmvPDxxHlT5b9Ws9xlxKTfX8dtH79VL+tXhMelX9uQc91P+7lkB90cw/XyHI2ig/LuV1axCwURzOjkykkiDPJ4beQLu915Bj2t6HBPrHK0sTWn7mgswAl7H+6oPNDVvYaQ7+69INeunwUu/lQbMtpcHcvGTU+g2p9PeFyUOWfu3CUefwOLIAtcf2vze2yfaFBcIi102d7uKX0U4lswwieNJ3dnw3/Z7/xhzalsFbE+07pxkdOxEwI4b0P55aJ8WLxSPRVEgqclYZjQvm3jxLdf4Caxkv28X0glsLXZiDFI8+/4kCOPlNEU1Tw7Kpq9aeTWKW9kS8dleEYb1Rhhb4UsF3z0jTX/RqkM9rZLo1bRgmj9OOnaXynWBpdp/JZWJEpU9KfYa8R7v5ZESP6XBMca05XmrbsxoMtrO/7c+Oq8GWmysXqgPfVM9XFpw7idldZ1xRPxbU0ayNn1rub6X4mL6BwJo+z63gTg7pvCXb84KCM7VF0V5IzfPdG+iPaFV5zdT24S7M/bJPH93uKy4Aj6HtaQ/zMgdiVn+YAY1NSF45/EPgjZwvYib1R7/oH/40htqJG2fIMX6P0vnSlpv8dw7K03GJ7I10QbyLV8vw98Zwn0AhC0FlKNKqxtRCbAOe9skIatreR5PffA90XnRpA/O/icqceVNDs1i6QnC5zL9dBVzYpKqwRcGwGokp8ruzZb/DOZWDM92zY9trQLL1XUVisX8xuof+MXEuH+tMwdUIPbqtacbYADwCcqa+B4JIO+oPeW0lBB7pG8xoL7hRQsJDqpPfDLM9IBB9YxaL92F4b9QC1CTkC190ccr8x8FtdXTVK+36vNFmqYT1zMm0bduvu0mI+fsyNu/H6+uCI5soieai16rEygUUwRhMQCvZ6eyXZWRaFqefFQX3KHF2HMSA3EB6JFXL1pln2PvVl8uolhDM+1FbQ4SL1R//6Ag+xXxROq7q1qV5jFTcVEhydo9ibCXH1yquge31Z9e7pgBZoAcVDvW19YRXvFHW0Or5u3Yxe4r0l5s/iAQHGd128pGld019VlVw4iRkV82bCHrF/58Pe7vmwd5ogmM1T8g0FfOve7AOaSb5EV204tc0tjV6syk10A38lay/e8TNSD/4b/HFgmr5XgPuANMXkGVP+nhFlrYfX9iw2FEVxHRepEGMdUH5Uf7pq2wFDJ+IEaBcIGnEtnDfdch4SBWd6scnZMO74cdFfHtpHUJB15oDHgtxmKmUHSG3YO43u3XPFKZQD45kOc+8HznaMqHaRP/BixaTCtFGpNRivcrNcuyp9UzqLfkeQ
2112.01036/main_diagram/main_diagram.pdf ADDED
Binary file (59.1 kB). View file
 
2112.01036/paper_text/intro_method.md ADDED
@@ -0,0 +1,165 @@
1
+ # Introduction
2
+
3
+ This paper tackles the problem of unsupervised part segmentation. Discovering object parts in images is a fundamental problem in computer vision as parts provide an intermediate representation that is robust to object appearance and pose variation [17, 48]. Many high-level tasks benefit from part representations, such as 3D reconstruction [31, 68], pose estimation [27, 42], and image editing [14, 67]. Keypoints and part segmentation maps are among the most commonly used forms of such representations. However, their supervised training [12, 43, 56] requires pixel-level annotations for every new application domain, since labels hardly generalize to other object categories and the number of parts and their granularity vary across tasks.
4
+
5
+ On the side of keypoint detection, several unsupervised detectors exist [20, 65], but segmentation methods are still in their infancy [17, 32]. Segmenting parts without pixel-level annotation is difficult because it requires disentangling parts from other parts and the foreground from the background. Existing unsupervised<sup>1</sup> methods mainly follow the same strategy as applied for unsupervised keypoint detection [52]. Real images are transformed by an affine map or a thin-plate spline to find those parts that are equivariant under the known deformation. For precise reconstruction they require additional information, such as saliency maps [17], or assume the objects to be consistently centered [32], which is constraining. For example, when applied to face datasets [33], the neck and shoulders are often ignored although they are part of almost every image.
6
+
7
+ Our goal is to improve unsupervised part segmentation. We propose to first train a generative adversarial network (GAN) [10] to generate images that are internally conditioned on *latent masks*. This GAN formulation alleviates the dependency on image pairs and pre-defined image transformations used in existing autoencoder networks. In this way, the network learns the part distribution from the dataset instead of from pre-defined image transformations. Subsequently, we use the generator to synthesize virtually infinite mask-image pairs for training a segmentation network. Figure 1 provides an overview of our model.
8
+
9
+ The key question we address is how to design a GAN that generates images with part segmentation masks that are meaningful, i.e., group pixels into regions that typically move together and have shared appearance across images. We start from a backbone architecture that is borrowed from supervised segmentation networks [3, 43] and the GAN strategy is inspired by its recent application to unsupervised keypoint detection [14]. Our innovation is the hierarchical generation of the image via multiple abstraction levels, including the use of masks. In the first level, we use Gaussian noise to generate part appearance embeddings and a set of *2D latent points*. Unlike [14] which continues straight from points to image generation, we first group points to define the position and the scale of each part. In the second abstraction level, we use a part-relative positional encoding to
10
+
11
+ <sup>1</sup>Most existing literature uses *unsupervised* for training on single images without annotation, and *self-supervised* for training on auxiliary tasks using multi-views or videos. We follow this convention.
12
+
13
+ ![](_page_1_Figure_0.jpeg)
14
+
15
+ Figure 1. GANSeg. A segmentation network (right) is trained on mask-image pairs generated by a new hierarchical image generator (left; from points, to mask, over foreground to the image). It is unsupervised and applies to faces, persons, birds, and flowers.
16
+
17
+ generate 2D feature maps and then generate the mask with a CNN. In the third level, the foreground image is generated from a combination of feature maps with corresponding appearance embeddings. Independently, a background image is generated with a randomized position to disentangle foreground and background. Finally, the foreground is blended with the background. The generated masks are used here a second time to define the blend weight.
18
+
19
+ Key to the success of our GAN framework are several design choices that preserve the translational equivariance of parts [22]; these are not applicable to traditional autoencoder approaches, as explained in Appendix E. As a result, since it does not know the absolute location, the convolutional network is forced to condition the image purely on the spatial extent of the masks; moving a part mask will move the corresponding image part. This is a crucial inductive bias in our unsupervised learning. Our contributions are threefold:
20
+
21
+ - 1. An unsupervised GAN approach to generate mask-image pairs for training part segmentation;
22
+ - 2. A novel hierarchical image generator, which encourages the segmentation of meaningful parts;
23
+ - 3. Alleviating prior assumptions on saliency maps and object position.
24
+
25
+ **Ethics.** *Risks.* GANs can be abused for creating deep fakes. However, our method is aimed at scene understanding rather than image editing or improved image quality. Our final output is a detector, which cannot be abused to generate new images, but unwanted surveillance applications remain a risk. *Benefits.* Since our method is entirely unsupervised, it could be applied to objects, animals, or situations that have not yet been labeled.
26
+
27
+ # Method
28
+
29
+ We train a Generative Adversarial Network (GAN) [10] to generate points, part masks, foreground, background, and the image in a hierarchy. Figure 2 gives an overview of our method. In a second stage, we generate mask-image pairs to train the Deeplab V3 [3] segmentation network, thereby enabling unsupervised part segmentation. The core design principle of our network architecture is to build a hierarchy that preserves the translation equivariance of its part representations at each of the following three stages.
30
+
31
+ In the first level, we utilize independent noise vectors to generate the locations and appearances of K parts. We found training to be most stable by first predicting $n_{\rm per} \times K$ points separated into K groups of $n_{\rm per}$ points each. The part location and scale are computed from the mean and standard deviation of the corresponding $n_{\rm per}$ points, which regularizes training. Figure 3 gives an overview of the underlying **Point Generator** module. It takes two noise vectors as input, $\mathbf{z}_{\rm point}, \mathbf{z}_{\rm app} \sim \mathcal{N}(\mathbf{0}^{D_{\rm noise}}, \mathbf{I}^{D_{\rm noise}})$, where $D_{\rm noise}$ is the noise dimension. We use a 3-layer multilayer perceptron (MLP) to map $\mathbf{z}_{\rm point}$ to $n_{\rm per} \times K$ points $\{\mathbf{x}_k^1,...,\mathbf{x}_k^{n_{\rm per}}\}_{k=1}^K$. Then we calculate the part locations $\{\mathbf{x}_1,...,\mathbf{x}_K\}$ and part scales $\{\sigma_1,...,\sigma_K\}$,
32
+
33
+ $$\mathbf{x}_{k} = \frac{1}{n_{\text{per}}} \sum_{i=1}^{n_{\text{per}}} \mathbf{x}_{k}^{i}, \quad \sigma_{k} = \frac{\sqrt{\sum_{i=1}^{n_{\text{per}}} \|\mathbf{x}_{k}^{i} - \mathbf{x}_{k}\|^{2}}}{n_{\text{per}} - 1},$$
34
+
35
+ $$\text{with } \{\mathbf{x}_{k}^{1}, ..., \mathbf{x}_{k}^{n_{\text{per}}}\}_{k=1}^{K} = \text{MLP}_{\text{point}}(\mathbf{z}_{\text{point}}), \tag{1}$$
36
+
37
+ where $k = 1, ..., K$.
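To make the grouping concrete, a minimal PyTorch-style sketch of Equation 1 is given below; tensor shapes and the function name are illustrative assumptions, not the authors' implementation.

```python
import torch

# Sketch of Eq. (1): n_per * K generated points are grouped into K parts and
# reduced to a per-part location (mean) and scale, exactly as written above.
def part_locations_and_scales(points, K, n_per):
    # points: (B, K * n_per, 2), e.g. the output of MLP_point(z_point)
    B = points.shape[0]
    pts = points.view(B, K, n_per, 2)
    x_k = pts.mean(dim=2)                                  # (B, K, 2) part locations
    sq_norm = ((pts - x_k.unsqueeze(2)) ** 2).sum(dim=-1)  # (B, K, n_per) squared distances
    sigma_k = sq_norm.sum(dim=2).sqrt() / (n_per - 1)      # (B, K) part scales
    return x_k, sigma_k
```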
38
+
39
+ ![](_page_3_Picture_0.jpeg)
40
+
41
+ Figure 3. Level 1. The Point Generator uses two Gaussian noise vectors to generate part location, and part appearance embeddings, respectively.
42
+
43
+ We use another 3-layer MLP to map $\mathbf{z}_{\text{app}}$ to a part appearance vector $\mathbf{w}^{\text{dynamic}} \in \mathbb{R}^{D_{\text{emb}}}$ . Following [14], we define a constant embedding vector $\mathbf{w}_k^{\text{const}} \in \mathbb{R}^{D_{\text{emb}}}$ for each part. We then perform an elementwise multiplication between $\mathbf{w}^{\text{dynamic}}$ and $\mathbf{w}_k^{\text{const}}$ to obtain the final part embedding $\mathbf{w}_k \in \mathbb{R}^{D_{\text{emb}}}$ . That is,
44
+
45
+ $$\mathbf{w}^{\text{dynamic}} = \text{MLP}_{\text{app}}(\mathbf{z}_{\text{app}}) \tag{2}$$
46
+
47
+ $$\mathbf{w}_k = \mathbf{w}^{\text{dynamic}} \otimes \mathbf{w}_k^{\text{const}} \tag{3}$$
48
+
49
+ where $\otimes$ is the elementwise product. It is important that the noise sources for appearance and position are independent, to prevent the appearance from interfering with the location information.
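A small sketch of Equations 2 and 3 follows, assuming a 3-layer MLP and a learnable constant embedding per part; layer widths and names are assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Sketch of Eqs. (2)-(3): a dynamic appearance vector modulates a learned,
# per-part constant embedding through an elementwise product.
class PartAppearance(nn.Module):
    def __init__(self, d_noise, d_emb, K):
        super().__init__()
        self.mlp_app = nn.Sequential(
            nn.Linear(d_noise, d_emb), nn.ReLU(),
            nn.Linear(d_emb, d_emb), nn.ReLU(),
            nn.Linear(d_emb, d_emb),
        )
        self.w_const = nn.Parameter(torch.randn(K, d_emb))  # w_k^const, one per part

    def forward(self, z_app):                         # z_app: (B, d_noise)
        w_dynamic = self.mlp_app(z_app)               # (B, d_emb)
        return w_dynamic.unsqueeze(1) * self.w_const  # (B, K, d_emb) elementwise product
```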
50
+
51
+ In the second level, the mask generation, we use Gaussian heatmaps to model the local independence and positional encoding [39] to generate masks relative to the predicted part location. We encode relative instead of absolute position between the points and the image pixels to keep long-distance relations and to prevent leaking the absolute coordinate information, which would violate the translation equivariance. To further preserve the translation equivariance, we initialize the positional encoding in a larger grid than the real image range and crop to a fixed margin size after each 2x upsampling [22], which prevents convolutional layers from passing on boundary information (see Section 3.3 for additional details).
52
+
53
+ These operations are implemented with the **Mask Generator** illustrated in Figure 4. It takes the $n_{\text{per}} \times K$ points $\{\mathbf{x}_k^1,...,\mathbf{x}_k^{n_{\text{per}}}\}_{k=1}^K$, the part locations $\mathbf{x}_k$ and part scales $\sigma_k$, and the part embeddings $\mathbf{w}_k$ as input. We generate a Gaussian heatmap for each part using the mean and standard deviation of each part defined in Equation 1. The embedding $\mathbf{w}_k$ is then multiplied with every pixel of the corresponding heatmap, generating a spatially localized embedding map. We assume the additivity of feature maps (see supplementary for more details). All K part-specific embeddings are summed to form a single feature map $\mathbf{W}_{\text{mask}} \in$
54
+
55
+ ![](_page_3_Picture_9.jpeg)
56
+
57
+ Figure 4. Level 2. The Mask Generator uses points, part locations, part scales, and part appearance embeddings to generate masks.
58
+
59
+ $\mathbb{R}^{D_{\text{emb}} \times H \times W}$ . Formally we write,
60
+
61
+ $$\mathbf{H}_{k}(\mathbf{p}) = \exp\left(-\|\mathbf{p} - \mathbf{x}_{k}\|_{2}^{2} / \sigma_{k}^{2}\right),$$
62
+
63
+ $$\mathbf{W}_{\text{mask}}(\mathbf{p}) = \sum_{k=1}^{K} \mathbf{H}_{k}(\mathbf{p}) \mathbf{w}_{k}.$$
64
+ (4)
65
+
66
+ Note that we use $\sigma_k^2$ instead of $2\sigma_k^2$ to obtain sharper heatmaps, making it easier for the network to generate sharp masks.
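The heatmap and embedding-map computation of Equation 4 can be sketched as follows; the pixel-grid layout and tensor shapes are assumptions for illustration.

```python
import torch

# Sketch of Eq. (4): per-part Gaussian heatmaps (note sigma^2 rather than
# 2*sigma^2) broadcast the part embeddings into a single spatial map W_mask.
def mask_embedding_map(x_k, sigma_k, w_k, grid):
    # x_k: (B, K, 2), sigma_k: (B, K), w_k: (B, K, D_emb), grid: (H, W, 2) pixel coords p
    diff = grid[None, None] - x_k[:, :, None, None, :]            # (B, K, H, W, 2)
    sq_dist = (diff ** 2).sum(dim=-1)                             # (B, K, H, W)
    H_k = torch.exp(-sq_dist / (sigma_k[:, :, None, None] ** 2))  # Gaussian heatmaps
    W_mask = torch.einsum('bkhw,bkd->bdhw', H_k, w_k)             # (B, D_emb, H, W)
    return H_k, W_mask
```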
67
+
68
+ The generated embedding map $\mathbf{W}_{\text{mask}}$ will subsequently be used to generate masks, together with the mask starting tensor $\mathbf{M}^{(0)} \in \mathbb{R}^{D_{\text{emb}} \times H \times W}$ . To avoid leaking absolute position information, we do not use a constant tensor [23] or a linearly mapped noise [39]. Instead, we use low frequency positional encoding [51] of the difference between the pixel position and the $n_{\text{per}} \times K$ points. That is,
69
+
70
+ $$\mathbf{M}^{(0)}(\mathbf{p}) = \left[ \sin(\pi FC([\mathbf{p} - \mathbf{x}_1^1, ..., \mathbf{p} - \mathbf{x}_K^{n_{\text{per}}}])), \\ \cos(\pi FC([\mathbf{p} - \mathbf{x}_1^1, ..., \mathbf{p} - \mathbf{x}_K^{n_{\text{per}}}])) \right]$$
71
+ (5)
72
+
73
+ where FC stands for a fully connected layer without a subsequent activation function (i.e., a linear projection).
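A sketch of the relative positional encoding in Equation 5 is shown below; the projection width is an assumption, and only relative offsets ever enter the network.

```python
import math
import torch
import torch.nn as nn

# Sketch of Eq. (5): the starting tensor encodes only relative positions between
# each pixel and the n_per * K generated points, via a linear layer and sin/cos.
class RelativePositionalEncoding(nn.Module):
    def __init__(self, n_points, d_out):
        super().__init__()
        self.fc = nn.Linear(2 * n_points, d_out)  # linear projection, no activation

    def forward(self, grid, points):
        # grid: (H, W, 2) pixel coords; points: (B, n_points, 2), n_points = n_per * K
        rel = grid[None, :, :, None, :] - points[:, None, None, :, :]  # (B, H, W, n_points, 2)
        proj = self.fc(rel.flatten(start_dim=3))                       # (B, H, W, d_out)
        m0 = torch.cat([torch.sin(math.pi * proj), torch.cos(math.pi * proj)], dim=-1)
        return m0.permute(0, 3, 1, 2)  # (B, 2 * d_out, H, W): mask starting tensor M^(0)
```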
74
+
75
+ With the mask starting tensor $\mathbf{M}^{(0)}$ and mask embedding map $\mathbf{W}_{\text{mask}}$ defined, we generate masks $\mathbf{M} = [\mathbf{M}_{\text{bg}}, \mathbf{M}_1, ..., \mathbf{M}_K] \in \mathbb{R}^{(K+1) \times H \times W}$ with SPADE Res-Blocks [39],
76
+
77
+ $$\mathbf{M}^{(i)} = \text{SPADE ResBlock}(\mathbf{M}^{(i-1)}, \mathbf{W}_{\text{mask}})$$
78
+
79
+ $$\mathbf{M} = \text{softmax}(\mathbf{M}^{(T_{\text{mask}})})$$
80
+ (6)
81
+
82
+ where $i = 1, ..., T_{\text{mask}}$, $T_{\text{mask}}$ is the number of blocks, and an additional channel is reserved for the background. For more details on the SPADE ResBlock, we refer the reader to our supplementary document and the original paper [39]. In theory, Batch Normalization [18] may leak absolute position information and break the translation equivariance. However, in practice, experiments have shown that SPADE provides strong local disentanglement [14, 39, 67].
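Equation 6 can be read as the following sketch, where `SPADEResBlock` stands in for the block of [39]; the final 1x1 projection to K+1 channels is an assumption about how the feature width is reduced to the mask channels.

```python
import torch.nn as nn

# Sketch of Eq. (6): the starting tensor is refined by T_mask SPADE ResBlocks,
# each conditioned on W_mask, then turned into K+1 softmax-normalized masks.
class MaskHead(nn.Module):
    def __init__(self, spade_blocks, d_emb, K):
        super().__init__()
        self.blocks = nn.ModuleList(spade_blocks)   # T_mask SPADE ResBlocks (see [39])
        self.to_masks = nn.Conv2d(d_emb, K + 1, 1)  # assumed projection to K+1 channels

    def forward(self, M0, W_mask):
        m = M0
        for block in self.blocks:
            m = block(m, W_mask)                    # spatially conditioned on W_mask
        return self.to_masks(m).softmax(dim=1)      # (B, K+1, H, W); one channel is background
```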
83
+
84
+ ![](_page_4_Figure_0.jpeg)
85
+
86
+ Figure 5. Level 3 - Part I. The foreground Generator uses part locations, part appearance embeddings, and masks.
87
+
88
+ In the third level, we generate the foreground and the background separately and blend them linearly by reusing the masks from the previous level. The **Foreground Generator** is illustrated in Figure 5. It takes the K+1 masks M, the K part locations $\mathbf{x}_k$, and the K part appearance embeddings $\mathbf{w}_k$ as input. Similar to the procedure that generates masks, we first broadcast each embedding with its corresponding mask to generate the foreground embedding map $\mathbf{W}_{\mathrm{fg}} \in \mathbb{R}^{D_{\mathrm{emb}} \times H \times W}$,
89
+
90
+ $$\mathbf{W}_{fg}(\mathbf{p}) = \sum_{k=1}^{K} \mathbf{M}_k(\mathbf{p}) \mathbf{w}_k. \tag{7}$$
91
+
92
+ We then use K part locations to generate the foreground starting tensor $\mathbf{F}^{(0)}$ with low frequency positional encoding similar to Equation 5. Finally we use SPADE ResBlocks to generate the Foreground feature map $\mathbf{F} \in \mathbb{R}^{D_{\text{emb}} \times H \times W}$ ,
93
+
94
+ $$\mathbf{F}^{(i)} = \text{SPADE ResBlock}(\mathbf{F}^{(i-1)}, \mathbf{W}_{\text{fg}}),$$
95
+
96
+ $$\mathbf{F} = \mathbf{F}^{(T_{\text{fg}})}.$$
97
+ (8)
98
+
99
+ where $i=1,...,T_{\rm fg}$ and $T_{\rm fg}$ is the number of SPADE ResBlocks. Independent of this, the **Background Generator** takes two random vectors as input, $\mathbf{z}_{\rm bg\_app} \sim \mathcal{N}(\mathbf{0}^{D_{\rm noise}}, \mathbf{I}^{D_{\rm noise} \times D_{\rm noise}})$ and $\mathbf{u}_{\rm bg\_pos} \sim \mathcal{U}([-1,1]^2)$. We first use a 3-layer MLP to map $\mathbf{z}_{\rm bg\_app}$ to a background appearance vector $\mathbf{w}_{\rm bg} \in \mathbb{R}^{D_{\rm emb}}$,
100
+
101
+ $$\mathbf{w}_{bg} = MLP_{bg\_app}(\mathbf{z}_{bg\_app}). \tag{9}$$
102
+
103
+ Positional encoding is used on the difference between the background center $\mathbf{u}_{\rm bg\_pos}$ and the pixel position to generate the background starting tensor $\mathbf{B}^{(0)}$, similar to Equation 5.
104
+
105
+ Finally we use AdaIN ConvBlocks [15] to generate the background feature map $\mathbf{B} \in \mathbb{R}^{D_{\text{emb}} \times H \times W}$ ,
106
+
107
+ $$\mathbf{B}^{(i)} = \text{AdaIN ConvBlock}(\mathbf{B}^{(i-1)}, \mathbf{w}_{\text{bg}})$$
108
+ (10)
109
+
110
+ ![](_page_4_Picture_12.jpeg)
111
+
112
+ Figure 6. Level 3 - Part II. The Background Generator uses a Gaussian noise vector and a random position to generate a translation-invariant background.
113
+
114
+ where $i = 1, ..., T_{bg}$ and $T_{bg}$ is the number of AdaIN ConvBlocks. For more detail on AdaIN ConvBlocks, we refer readers to our supplementary material or the original paper [15].
115
+
116
+ **Combining Foreground and Background.** Recall that we generate the background mask $\mathbf{M}_{bg} \in \mathbb{R}^{H \times W}$ along with the part masks. This mask is used to blend the foreground and the background. The final image is generated by feeding the blended feature map into a two-layer CNN. That is,
117
+
118
+ $$\mathbf{I} = \operatorname{Conv}((1 - \mathbf{M}_{\operatorname{bg}}) \otimes \mathbf{F} + \mathbf{M}_{\operatorname{bg}} \otimes \mathbf{B}) \tag{11}$$
119
+
120
+ where $\otimes$ is the pixel-wise product.
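A minimal sketch of the blending step in Equation 11 follows; the kernel sizes of the two-layer CNN are assumptions.

```python
import torch.nn as nn

# Sketch of Eq. (11): the background mask linearly blends foreground and
# background feature maps before a small CNN renders the final image.
class Blender(nn.Module):
    def __init__(self, d_emb):
        super().__init__()
        self.to_rgb = nn.Sequential(
            nn.Conv2d(d_emb, d_emb, 3, padding=1), nn.ReLU(),
            nn.Conv2d(d_emb, 3, 3, padding=1),
        )

    def forward(self, F_fg, B_bg, M_bg):
        # F_fg, B_bg: (B, D_emb, H, W); M_bg: (B, 1, H, W)
        blended = (1.0 - M_bg) * F_fg + M_bg * B_bg
        return self.to_rgb(blended)
```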
121
+
122
+ **Upsampling and Cropping.** For simplicity, we used size $H \times W$ for all feature maps above. For mask, foreground, and background generation, the starting tensor has a 10-pixel-wide margin at each border, as in [22], so that the central feature map does not interfere with the boundary. For example, instead of generating an $H_0 \times W_0$ grid in the range $[-1,1]^2$, we generate a $(H_0+20)\times(W_0+20)$ grid in the range
123
+
124
+ $$[-1 - 20/H_0, 1 + 20/H_0] \times [-1 - 20/W_0, 1 + 20/W_0].$$
125
+ (12)
126
+
127
+ We use this grid to calculate the starting tensors for masks, foreground, and background. After each SPADE ResBlock and each AdaIN ConvBlock, we use 2x upsampling on the feature maps, so the margin becomes 20 pixels wide. To bound the otherwise growing margin, we subsequently crop the feature map back to a 10-pixel margin. The Gaussian heatmaps are calculated on a grid with a 10-pixel-wide margin, separately for each resolution.
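The margin bookkeeping can be sketched as below; the interpolation mode is an assumption, and `margin=10` follows the description above.

```python
import torch
import torch.nn.functional as F

# Sketch of Eq. (12) and the upsample-then-crop rule: grids carry a 10-pixel
# margin, which doubles after 2x upsampling and is then cropped back to 10.
def margin_grid(H0, W0, margin=10):
    ys = torch.linspace(-1 - 2 * margin / H0, 1 + 2 * margin / H0, H0 + 2 * margin)
    xs = torch.linspace(-1 - 2 * margin / W0, 1 + 2 * margin / W0, W0 + 2 * margin)
    return torch.stack(torch.meshgrid(ys, xs, indexing='ij'), dim=-1)  # (H0+20, W0+20, 2)

def upsample_and_crop(feat, margin=10):
    # feat: (B, C, H + 2*margin, W + 2*margin); after 2x upsampling the margin is
    # 2*margin pixels wide, so cropping `margin` pixels per side restores it.
    up = F.interpolate(feat, scale_factor=2, mode='nearest')
    return up[:, :, margin:-margin, margin:-margin]
```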
128
+
129
+ Our hierarchical GAN is trained end-to-end on image collections using the following loss functions.
130
+
131
+ **Adversarial Loss.** We denote $\mathcal{G}$ as the generator and $\mathcal{D}$ as the discriminator. We use the non-saturating loss [10],
132
+
133
+ $$\mathcal{L}_{GAN}(\mathcal{G}) = \mathbb{E}_{\mathbf{z} \sim \mathcal{N}} \log(\exp(-\mathcal{D}(\mathcal{G}(\mathbf{z}))) + 1)$$
134
+ (13)
135
+
136
+ for the generator, and logistic loss,
137
+
138
+ $$\mathcal{L}_{GAN}(\mathcal{D}) = \mathbb{E}_{\mathbf{z} \sim \mathcal{N}} \log(\exp(\mathcal{D}(\mathcal{G}(\mathbf{z}))) + 1) + \\ \mathbb{E}_{\mathbf{x} \sim \mathcal{D}_{data}} \log(\exp(-\mathcal{D}(\mathbf{x})) + 1)$$
139
+ (14)
140
+
141
+ for the discriminator, with gradient penalty [35] applied only on real data,
142
+
143
+ $$\mathcal{L}_{gp}(\mathcal{D}) = \mathbb{E}_{\mathbf{x} \sim p_{data}} \|\nabla_{\mathbf{x}} \mathcal{D}(\mathbf{x})\|_2^2. \tag{15}$$
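Using softplus(x) = log(exp(x) + 1), Equations 13-15 can be sketched as follows; the penalty is written in the usual R1 form (squared gradient norm on real samples), which is our reading of Equation 15.

```python
import torch
import torch.nn.functional as F

# Sketch of Eqs. (13)-(15); softplus(x) = log(exp(x) + 1) matches the losses above.
def generator_loss(d_fake):
    return F.softplus(-d_fake).mean()                               # non-saturating loss, Eq. (13)

def discriminator_loss(d_fake, d_real):
    return F.softplus(d_fake).mean() + F.softplus(-d_real).mean()   # logistic loss, Eq. (14)

def r1_penalty(d_real, x_real):
    # x_real must have requires_grad=True; penalty applied only on real data, Eq. (15)
    grad, = torch.autograd.grad(d_real.sum(), x_real, create_graph=True)
    return grad.flatten(1).pow(2).sum(dim=1).mean()
```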
144
+
145
+ **Geometric Concentration Loss.** Pixels from the same segment are usually connected and concentrated around its center, as assumed by [17]. We enforce each mask to lie in an area around its center with the geometric concentration loss
146
+
147
+ $$\mathcal{L}_{\text{con}}(\mathcal{G}) = \sum_{k=1}^{K} \sum_{\mathbf{p}} \frac{\mathbf{M}_k(\mathbf{p})}{\sum_{\mathbf{p}'} \mathbf{M}_k(\mathbf{p}')} \|\mathbf{p} - \mathbf{x}_k\|_2^2.$$
148
+ (16)
149
+
150
+ Note that the background mask is not constrained by $\mathcal{L}_{\text{con}}(\mathcal{G})$. We found that this loss alone can cause parts to collapse, since it encourages a small mask area. To mitigate this problem, we introduce the Area Loss below.
151
+
152
+ **Area Loss.** We force the mask area to be larger than the area of the Gaussian heatmap, which is not predefined but predicted from the generated points in level 1. If the part is visible, this loss encourages the visibility of the mask; otherwise, it encourages a smaller (close-to-zero-area) Gaussian heatmap.
153
+
154
+ $$\mathcal{L}_{\text{area}}(\mathcal{G}) = \sum_{k=1}^{K} \max \left( 0, \sum_{\mathbf{p}} \mathbf{H}_{k}(\mathbf{p}) - \sum_{\mathbf{p}} \mathbf{M}_{k}(\mathbf{p}) \right). \tag{17}$$
155
+
156
+ We will empirically show that this loss makes the masks more consistent.
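Both regularizers can be sketched directly from Equations 16 and 17; tensor shapes and the small epsilon are assumptions.

```python
import torch

# Sketch of Eqs. (16)-(17). M: part masks (B, K, H, W), H_k: Gaussian heatmaps
# (B, K, H, W), x_k: part locations (B, K, 2), grid: pixel coordinates (H, W, 2).
def concentration_loss(M, x_k, grid, eps=1e-8):
    sq_dist = ((grid[None, None] - x_k[:, :, None, None, :]) ** 2).sum(dim=-1)  # (B, K, H, W)
    weights = M / (M.sum(dim=(2, 3), keepdim=True) + eps)   # normalize each part mask
    return (weights * sq_dist).sum(dim=(1, 2, 3)).mean()    # Eq. (16), background excluded

def area_loss(H_k, M):
    gap = H_k.sum(dim=(2, 3)) - M.sum(dim=(2, 3))           # heatmap area minus mask area
    return torch.clamp(gap, min=0.0).sum(dim=1).mean()      # Eq. (17)
```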
157
+
158
+ The final loss for the discriminator is
159
+
160
+ $$\mathcal{L}(\mathcal{D}) = \mathcal{L}_{GAN}(\mathcal{D}) + \lambda_{gp} \mathcal{L}_{gp}(\mathcal{D}), \tag{18}$$
161
+
162
+ and the final loss for the generator is
163
+
164
+ $$\mathcal{L}(\mathcal{G}) = \mathcal{L}_{GAN}(\mathcal{G}) + \lambda_{con}\mathcal{L}_{con}(\mathcal{G}) + \lambda_{area}\mathcal{L}_{area}(\mathcal{G}).$$
165
+ (19)
2112.04728/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2112.04728/main_diagram/main_diagram.pdf ADDED
Binary file (19 kB). View file
 
2112.04728/paper_text/intro_method.md ADDED
@@ -0,0 +1,60 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Lifelong machine learning, otherwise known as continual and never-ending learning [@chen2016lifelong], stands as one of the greatest challenges facing artificial intelligence research, especially with respect to models parameterized by deep neural networks (DNNs). In this problem context, an agent must attempt to learn not just one single prediction task using one single dataset, but rather, it must learn online *across* several task datasets, much as human agents do, aggregating and transferring its knowledge as new pattern vectors are encountered. DNNs particularly struggle to learn continually due to the well-known classical fact that they tend to *catastrophically forget* [@french1999catastrophic], or rather, they completely erase the knowledge acquired during the learning of earlier tasks when processing samples from new tasks.
4
+
5
+ While catastrophic forgetting has been explored extensively in DNNs, very little work has focused on the occurrence and reduction of forgetting in less popular architectures, such as self-organizing maps (SOMs) [@kohonen1982self], especially in the context of unsupervised problem scenarios. SOMs are a type of competitive neural system where internal neurons compete for the right to activate in the presence of a data pattern and synaptic parameters are adjusted through Hebbian learning [@hebb1949organization; @martinetz1993competitive]. Neuronal units can be arranged in a topological fashion, i.e., a neighbourhood based on Euclidean distance between the activation values of units, as well as a spatial fashion, i.e., units arranged in Cartesian plane. Given that units in a competitive learning system like the SOM tend to specialize for certain types of patterns (serving as data prototypes), such a system would appear to be suited for combating catastrophic forgetting. This would especially appear to be the case, since, in theory, competing units would lead to reduced neural cross-talk [@ororbia2021continual], which has been shown to be a key source of forgetting [@mccloskey_catastrophic_1989; @ratcliff_connectionist_1990; @french1999catastrophic]. Furthermore, approaches, such as that presented by Gepperth *et al.*  [@7346155], have advocated the use of SOMs for continual learning. However, the SOM in its purest form, as we observe in our experiments, is itself prone to forgetting [@frontiersForgetting].
6
+
7
+ In this work (Figure [1](#fig:SOM_replay){reference-type="ref" reference="fig:SOM_replay"}), we propose the continual SOM (c-SOM), an SOM neural model that actively reduces the amount of forgetting that it experiences when incrementally learning from data. Specifically, we modify the SOM's decay function to be task-dependent and extend its units to self-induce a form of generative rehearsal/replay to improve memory retention.
8
+
9
+ <figure id="fig:SOM_replay" data-latex-placement="t">
10
+ <img src="images/replay_som.png" style="width:67.5%" />
11
+ <figcaption>The proposed continual SOM (c-SOM) neural system processing a sequence of tasks, each containing two different classes of digits. <span class="math inline">𝒟<sup><em>t</em></sup></span> denotes the <span class="math inline"><em>t</em></span>th dataset associated with task <span class="math inline"><em>t</em></span> and <span class="math inline">sample(𝒟<sup><em>t</em> − <em>k</em></sup>)</span> (where <span class="math inline"><em>k</em> = 1, 2, ...</span>) indicates that the c-SOM samples its units to synthesize patterns for any task <span class="math inline"><em>t</em> − <em>k</em></span> in order to refresh its memory and mitigate forgetting while learning from the current task <span class="math inline"><em>t</em></span>.</figcaption>
12
+ </figure>
13
+
14
+ # Method
15
+
16
+ Consider a sequence of $T$ tasks, denoted by $\mathcal{S} = \bigcup_{t=1}^T \mathcal{T}_t$. Each task $\mathcal{T}_t$ has a training dataset (with $C$ classes), $\mathcal{D}_{train}^{(t)} = \bigcup_{i=1}^{N_t} \{(\mathbf{x}_i^{(t)}, \mathbf{y}_i^{(t)})\}$, where $\mathbf{x}_i^{(t)} \in \mathcal{R}^{D \times 1}$ is a data pattern and $\mathbf{y}_i^{(t)} \in \{0,1\}^{C \times 1}$ is its label vector ($N_t$ is the number of data points in task $\mathcal{T}_t$). Similarly, $\mathcal{D}_{test}^{(t)}$ is the test dataset for task $\mathcal{T}_t$. When a lifelong learning model is finished training on task $\mathcal{T}_t$ using $\mathcal{D}_{train}^{(t)}$, $\mathcal{D}_{train}^{(t)}$ is lost as soon as the model proceeds to tasks $\mathcal{T}_{t+1}$ through $\mathcal{T}_T$ ($\mathcal{D}_{test}^{(t)}$ is only used for evaluation). The objective is to maximize the agent's performance on task $\mathcal{T}_t$ while minimizing how much its performance on prior tasks $\mathcal{T}_1$ to $\mathcal{T}_{t-1}$ degrades.
17
+
18
+ For our c-SOM (Algorithm [\[alg:som_training\]](#alg:som_training){reference-type="ref" reference="alg:som_training"}), $\sigma$, $\lambda$ are, respectively, the initial radius and learning rate values. $\sigma_t$, $\lambda_t$ are their values for task $\mathcal{T}_t$. Both values decay as follows: $$\begin{align}
19
+ \label{eqn:decay_lr}
20
+ \lambda_t &= \lambda (1 + t * \exp{(t / \tau_\lambda)})^{-1} \\
21
+ \sigma_t &= \sigma (1 + t * \exp{(t / \tau_\sigma)})^{-1} \mbox{.}
22
23
+ \end{align}$$ $\tau_\lambda = \tau_\sigma = 1000$ are the time constants. $u$ is the best matching unit (BMU) while $v$ is any other unit in the SOM (with $m$ units). $h(u, v, t)$ represents the neighbourhood function between units $u$ and $v$ for task $\mathcal{T}_t$. Each unit $v$ in our SOM is composed of two coupled vectors -- its prototype weights $\mathbf{w}_v \in \mathcal{R}^{D \times 1}$ and its running variance weights $\mathbf{w}^\sigma_v \in \mathcal{R}^{D \times 1}$, where $\mathbf{w}_v$ is updated via a Hebbian update [@martinetz1993competitive], while the running mean $\mathbf{w}^\mu_v$ and variance $\mathbf{w}^\sigma_v$ are updated via Welford's algorithm for calculating variance [@welford1962note]. $\phi(c)$ returns the number of units trained on class $c$ (out of $C$), while $\rho_v$ stores the class that unit $v$ maps to. At each simulation step, the c-SOM generates $K$ samples from $m_r$ randomly chosen trained units via: $\mathbf{s}^v = \mathbf{w}_v + \sqrt{\mathbf{w}^\sigma_v} \odot \mathbf{\epsilon}$ ($\mathbf{\epsilon} \sim \mathcal{N}(0,1)$). The $K \times m_r$ samples are then used to refresh the c-SOM (via the same update rules), as sketched below.
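A minimal NumPy sketch of this replay step is given below; array layouts and the function name are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Sketch of the c-SOM replay step: draw K synthetic patterns from each of m_r
# randomly chosen trained units, s = w_v + sqrt(w_v_sigma) * eps, eps ~ N(0, 1),
# then feed them back through the usual SOM update rules.
def generate_replay_samples(prototypes, variances, trained_idx, K=2, m_r=1, rng=None):
    # prototypes, variances: (num_units, D); trained_idx: indices of already-trained units
    rng = np.random.default_rng() if rng is None else rng
    chosen = rng.choice(trained_idx, size=m_r, replace=False)
    samples = []
    for v in chosen:
        eps = rng.standard_normal((K, prototypes.shape[1]))
        samples.append(prototypes[v] + np.sqrt(variances[v]) * eps)
    return np.vstack(samples)  # (K * m_r, D) patterns replayed into the c-SOM
```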
24
+
25
+ :::: algorithm
26
+ **Input**: Task input, $\mathbf{x}_i^{(t)}$, c-SOM weight parameters $\{\mathbf{w}_v, \mathbf{w}^\sigma_v \}$\
27
+ **Parameter**: $\sigma, \lambda$
28
+
29
+ ::: algorithmic
30
+ // (Eqn 1 & 2) Update $\mathbf{w}_v$ on ($\mathbf{x}_i^{(t)}$, $\sigma_t$, $\lambda_t$) // (Hebbian update for the prototypes) Update $\mathbf{w}^\sigma_v$ (for all units) // (Welford update for the variance parameters) Generate $K$ samples from $m_r$ randomly chosen (trained) units & retrain c-SOM on these
31
+ :::
32
+ ::::
33
+
34
+ <figure id="fig:som_comparison" data-latex-placement="!ht">
35
+ <p><img src="images/best_no_decay.jpg" style="width:38.5%" alt="image" /> <img src="images/best_double_resample.jpg" style="width:38.5%" alt="image" /></p>
36
+ <figcaption>Classical SOM (Left) versus the proposed c-SOM (Right). Both models were trained on Split-MNIST in a class incremental fashion (yielding <span class="math inline">10</span> tasks in total, one per unique digit).</figcaption>
37
+ </figure>
38
+
39
+ Figure [2](#fig:som_comparison){reference-type="ref" reference="fig:som_comparison"} (Left) displays the final output after training a classical SOM on the benchmark Split-MNIST ($C = 1$ per task) with a grid shape of $10 \times 10$ (this small grid size was chosen to exacerbate forgetting). Note that Split-MNIST contains $28\times28$ images with gray-scale pixel values, i.e., range is $[0,255]$, each coming from one of ten possible classes (digits $0$ through $9$). As seen among its prototypes, this SOM remembered only classes $6$, $8$ and $9$. Except for $6$, classes $8$ and $9$ are the ones encountered in the last task of Split-MNIST. In contrast, Figure [2](#fig:som_comparison){reference-type="ref" reference="fig:som_comparison"} (Right) shows our proposed c-SOM with the same shape. Desirably, our model contains units that represent digits in all classes encountered across all tasks of Split-MNIST, exhibiting far less forgetting. Note that all SOMs were trained in a *class incremental* fashion.
40
+
41
+ :::: algorithm
42
+ **Input**: $\mathbf{X}_{test}$, where $\mathbf{X}^{c}_{test}$ is all $\mathbf{x}_i$ w/ $\arg\max(\mathbf{y}_i) \equiv c$\
43
+ **Parameter**: Weights $\{\mathbf{w}_v, \mathbf{w}^\sigma_v\}$
44
+
45
+ ::: algorithmic
46
+ ) // RMS = "root-mean-square"
47
+ :::
48
+ ::::
49
+
50
+ In order to measure the performance of the incrementally learned SOMs, we designed a novel metric, $\alpha_{\text{mem}}$ (see Algorithm [\[alg:alpha_metric\]](#alg:alpha_metric){reference-type="ref" reference="alg:alpha_metric"}). An ideal lifelong learning SOM, assuming equal variance among classes and tasks, would have $\alpha_{\text{mem}}=0$, which would indicate an equal representation of classes across units. Therefore, lower $\alpha_{\text{mem}}$ values represent better SOM performance. We experimented with different parameter settings of ($\sigma, \lambda$) for three different SOM versions -- a classical SOM and two c-SOM variants. The $\sigma$ value was selected out of {2,3,4,5} while the $\lambda$ was chosen from {0.001, 0.001, 0.007, 0.01}. Table [1](#tab:results){reference-type="ref" reference="tab:results"} presents the best $\alpha_{\text{mem}}$ value along with their corresponding mean and standard deviations over $10$ trials.
51
+
52
+ ::: {#tab:results}
53
+ Model $(\sigma, \lambda)$ $\alpha_{mem}$ $\mu_{\alpha_{mem}}$ $\sigma_{\alpha_{mem}}$
54
+ ----------------- --------------------- ---------------- ---------------------- -------------------------
55
+ SOM (2,0.001) 12.75 17.97 3.02
56
+ c-SOM (K=1) (3,0.01) 11.85 12.49 0.45
57
+ **c-SOM (K=2)** **(3,0.01)** **9.9** **12.15** **1.25**
58
+
59
+ : Parameters with best (minimum) $\alpha_{\text{mem}}$ values for each model ($m_r = 1$). The mean and standard deviation for $\alpha_{mem}$, i.e., $\mu_{\alpha_{mem}}$ and $\sigma_{\alpha_{mem}}$, over $10$ trials are reported for the Split-MNIST benchmark.
60
+ :::
2112.07917/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2112.07917/paper_text/intro_method.md ADDED
@@ -0,0 +1,93 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Over the last decades, modern Optical Character Recognition (OCR) algorithms have become able to read textual content from pictures of complex scenes, a remarkable development that has attracted enormous interest from both academia and industry. The limitations of existing methods, particularly their poorer performance on arbitrarily shaped scene text, have been repeatedly identified [@liao2020masktext; @liu2020abcnet; @chng2019icdar2019]. This can be seen in the trend of worse predictions for instances with curved shapes, varied fonts, distortions, etc.
4
+
5
+ In recent years, the focus of research in the OCR community has moved from horizontal [@tian2016detecting; @liao2017textboxes] and multi-oriented text [@zhou2017east; @yao2012detecting; @liu2017deep] to arbitrarily shaped text [@liu2020abcnet; @lyu2018mask], accompanied by a shift in annotation format from horizontal rectangles to quadrilaterals and then to polygons. The fact that regular bounding boxes are prone to include noise has been well studied in previous works (see Fig. [6](#fig:trn_tgt){reference-type="ref" reference="fig:trn_tgt"}), which have shown that character-level and polygonal annotations can effectively lift model performance [@lyu2018mask; @liao2020masktext; @xing2019convolutional]. Furthermore, many efforts have been made to develop more sophisticated representations to fit arbitrarily shaped text instances [@feng2019textdragon; @wang2020textray; @liu2020abcnet; @zhu2021fourier; @long2018textsnake] (see Fig. [10](#fig:novel_representations){reference-type="ref" reference="fig:novel_representations"}). For example, Text Dragon [@feng2019textdragon] utilizes character-level bounding boxes to generate centerlines that enable the prediction of local geometry attributes, ABCNet [@liu2020abcnet] converts polygon annotations to Bezier curves for representing curved text instances, and Text Snake [@long2018textsnake] describes text instances by a series of ordered disks centered at symmetric axes. However, these novel representations are carefully designed by experts based on prior knowledge, rely heavily on highly customized network architectures (*e.g.*, specified Region of Interest (RoI) modules), and consume more expensive annotations (*e.g.*, character-level annotations), limiting their generalization ability for practical applications.
6
+
7
+ <figure id="fig:trn_tgt" data-latex-placement="t!">
8
+ <figure id="fig:tgt_rect">
9
+ <embed src="figs/gt_rect.pdf" />
10
+ <figcaption>Rectangle (55s).</figcaption>
11
+ </figure>
12
+ <figure id="fig:tgt_quad">
13
+ <embed src="figs/gt_quad.pdf" />
14
+ <figcaption>Quadrilateral (96s).</figcaption>
15
+ </figure>
16
+ <figure id="fig:tgt_char">
17
+ <embed src="figs/gt_char.pdf" />
18
+ <figcaption>Character (581s).</figcaption>
19
+ </figure>
20
+ <figure id="fig:tgt_poly">
21
+ <embed src="figs/gt_polygon.pdf" />
22
+ <figcaption>Polygon (172s).</figcaption>
23
+ </figure>
24
+ <figure id="fig:tgt_point">
25
+ <embed src="figs/gt_point_acmmm.pdf" />
26
+ <figcaption>Single-Point (11s).</figcaption>
27
+ </figure>
28
+ <figcaption>Different annotation styles and their time cost (for all the text instances in the sample image) measured with the LabelMe tool. Green areas are positive samples, while red dashed boxes are noise that may be included. Single-point annotation is more than 50 times faster than character-level annotation.</figcaption>
29
+ </figure>
30
+
31
+ <figure id="fig:novel_representations" data-latex-placement="t!">
32
+ <figure id="fig:textdragon">
33
+ <embed src="figs/textdragon.pdf" style="width:2.4cm;height:1.6cm" />
34
+ <figcaption>Text Dragon <span class="citation" data-cites="feng2019textdragon"></span></figcaption>
35
+ </figure>
36
+ <figure id="fig:bzcurve">
37
+ <embed src="figs/bezier_curve.pdf" style="width:2.4cm;height:1.6cm" />
38
+ <figcaption>ABCNet <span class="citation" data-cites="liu2020abcnet"></span></figcaption>
39
+ </figure>
40
+ <figure id="fig:textsnake">
41
+ <embed src="figs/textsnake.pdf" style="width:2.4cm;height:1.6cm" />
42
+ <figcaption>Text Snake <span class="citation" data-cites="long2018textsnake"></span></figcaption>
43
+ </figure>
44
+ <figcaption>Some recent representations of text instances.</figcaption>
45
+ </figure>
46
+
47
+ To reduce the cost of data labeling, some researchers [@tian2017wetext; @bartz2018see; @hu2017wordsup; @baek2019character] have explored training the OCR models with coarse annotations in a weakly-supervised manner. These methods can mainly be separated into two categories, *i.e.*, (1) bootstrapping labels to finer granularity and (2) training with partial annotations. The former usually derives character-level labels from word- or line-level annotations; thus, the models could enjoy the well-understood advantage of character-level supervision without introducing overhead costs. The latter is committed to achieving competitive performance with fewer training samples. However, both methods still rely on the costly bounding box annotations.
48
+
49
+ One of the underlying problems that prevent replacing the bounding box with a simpler annotation format, such as a single-point, is that most text spotters rely on RoI-like sampling strategies to extract the shared backbone features. For example, Mask TextSpotter requires mask prediction inside a RoI [@liao2020masktext]; ABCNet [@liu2020abcnet] proposes BezierAlign, while TextDragon [@feng2019textdragon] introduces RoISlide to unify the detection and recognition heads. In this paper, inspired by the recent success of the sequence-based object detector Pix2Seq [@chen2021pix2seq], we show that a text spotter can be trained with a single-point, also termed the indicated point (see Fig. [5](#fig:tgt_point){reference-type="ref" reference="fig:tgt_point"}). Thanks to such a concise form of annotation, labeling time can be significantly reduced; *e.g.*, it takes less than one-fiftieth of the time to label single-points for the sample image shown in Fig. [6](#fig:trn_tgt){reference-type="ref" reference="fig:trn_tgt"} compared with annotating character-level bounding boxes, which is extremely tortuous, especially for small and vague text instances. Another motivating factor in selecting point annotation is that a clean and efficient OCR pipeline can be developed, discarding complex post-processing modules and sampling strategies; thus, the ambiguity introduced by RoIs (see red dashed regions in Fig. [6](#fig:trn_tgt){reference-type="ref" reference="fig:trn_tgt"}) can be alleviated. To the best of our knowledge, this is the first attempt to simplify the bounding box to a single-point supervision signal in the OCR community. The main contributions of this work are summarized as follows:
50
+
51
+ - For the first time, we show that the text spotters can be supervised by a simple yet effective single-point representation. Such a straightforward annotation paradigm can considerably reduce the labeling costs, making it possible to access large-scale OCR data in the future.
52
+
53
+ - We propose a new Transformer-based [@vaswani2017attention] scene text spotter, which forms the text spotting as a language modeling task. Given an input image, our method predicts a discrete token sequence that includes both detection and recognition results. Benefiting from such a concise pipeline, the complex post-processing and sampling strategies designed based on prior knowledge can be discarded, showing great potential in terms of flexibility and generality.
54
+
55
+ - To evaluate the effectiveness of proposed methods, extensive experiments and ablations are conducted on four widely used OCR datasets, *i.e.*, ICDAR 2013 [@karatzas2013icdar], ICDAR 2015 [@karatzas2015icdar], Total-Text [@ch2017total], and SCUT-CTW1500 [@liu2019curved], involving both horizontal and arbitrarily shaped texts. The results show that the proposed SPTS can achieve state-of-the-art performance compared with existing approaches.
56
+
57
+ <figure id="fig:method" data-latex-placement="t!">
58
+ <embed src="figs/method_acmmm.2.pdf" style="width:95.0%" />
59
+ <figcaption>Overall framework of the proposed SPTS. The visual and contextual features are first extracted by a series of CNN and Transformer encoders. Then, the features are auto-regressively decoded into a sequence that contains both localization and recognition information, which is subsequently translated into point coordinates and text transcriptions. Only a point-level annotation is required for training.</figcaption>
60
+ </figure>
61
+
62
+ # Method
63
+
64
+ Most of the existing text spotting algorithms treat the problem as two sub-tasks, *i.e.*, text detection and recognition, albeit the entire network might be end-to-end optimized. Customized modules such as BezierAlign [@liu2020abcnet], RoISlide [@feng2019textdragon], and RoIMasking [@liao2020masktext] are required to bridge the detection and recognition modules, where backbone features are cropped and shared between detection and recognition heads. Under such types of design, the recognition and detection modules are highly coupled. For example, the features fed to the recognition head are usually cropped from the ground-truth bounding box at the training stage since detection results are not good enough in the first iterations; thus, the recognition result is susceptible to interference from the detected bounding box during the test phase.
65
+
66
+ Recently, Pix2Seq [@chen2021pix2seq] pioneered casting the generic object detection problem as a language modeling task, based on the intuitive assumption that if a deep model knows what and where the target is, it can be taught to report the results as a desired sequence. Thanks to the concise pipeline, labels with different attributes such as location coordinates and object categories can be integrated into a single sequence, enabling an end-to-end trainable framework without task-specific modules (*e.g.*, Region Proposal Networks and RoI pooling layers), which can thus be adapted to the text spotting task. Inspired by this, we propose Single-Point Text Spotting (SPTS). Unlike Pix2Seq, which is designed for object detection only and still requires bounding boxes for all instances, our SPTS tackles both text detection and recognition as an end-to-end sequence prediction task, using only the single-point location and text annotations. Compared with existing text spotting approaches, SPTS follows a much simpler and more concise pipeline in which the input images are translated into a sequence containing both location and recognition results, genuinely performing text detection and recognition simultaneously.
67
+
68
+ Specifically, as shown in Fig. [11](#fig:method){reference-type="ref" reference="fig:method"}, each input image is first sequentially encoded by a CNN and a Transformer encoder [@vaswani2017attention] to extract visual and contextual features. Then, the captured features are decoded by a Transformer decoder [@vaswani2017attention], where tokens are predicted in an auto-regressive manner. Unlike previous algorithms, we further simplify the bounding box to the center of the text instance, the corner point located at the top left of the first character, or the random point within the text instance, as described in Fig. [18](#fig:pts_pos){reference-type="ref" reference="fig:pts_pos"}. Benefiting from such a simple yet effective representation, the modules carefully designed based on prior knowledge, such as grouping strategies utilized in segmentation-based methods and feature sampling blocks equipped in box-based text spotters, can be eschewed. Therefore, the recognition accuracy will not be limited by poor detection results, significantly improving the model robustness.
69
+
70
+ The fact that a sequence can carry information with multiple attributes naturally enables the text spotting task, where text instances are simultaneously localized and recognized. To express the target text instances by a sequence, it is required to convert the continuous descriptions (*e.g.*, bounding boxes) to a discretized space. To this end, as shown in Fig. [12](#fig:seq_constrcut){reference-type="ref" reference="fig:seq_constrcut"}, we follow Pix2Seq [@chen2021pix2seq] to build the target sequence; what distinguishes our methods is that we further simplify the bounding box to a single-point and use the variable-length transcription instead of the single-token object category.
71
+
72
+ Specifically, the continuous coordinates of the central point of the text instance are uniformly discretized into integers in $[1, n_{bins}]$, where $n_{bins}$ controls the degree of discretization. For example, an image with a long side of 800 pixels requires only $n_{bins}=800$ to achieve zero quantization error. Note that the central point of the text instance is obtained by averaging the upper and lower midpoints, as shown in Fig. [15](#fig:ctr){reference-type="ref" reference="fig:ctr"}. Thus, a text instance can be represented by a sequence of three parts, *i.e.*, $[x, y, t]$, where $(x, y)$ are the discretized coordinates and $t$ is the transcription text. Notably, the transcriptions are inherently discrete, *i.e.*, each character represents a category, and thus can be easily appended to the sequence. However, unlike generic object detection, which has a relatively fixed vocabulary (each $t$ represents an object category, such as pedestrian), $t$ can be any natural language text of any length in our task, resulting in a variable-length target sequence, which may further cause misalignment issues and can consume more computational resources. To eliminate such problems, we pad or truncate the texts to a fixed length $l_{tr}$, where the $<$PAD$>$ token is used to fill the vacancy for shorter text instances. In addition, like other language modeling methods, $<$SOS$>$ and $<$EOS$>$ tokens are inserted at the head and tail of the sequence, indicating the start and the end of a sequence, respectively. Therefore, given an image that contains $n_{ti}$ text instances, the constructed sequence will include $(2+l_{tr})\times n_{ti}$ discrete tokens, where the text instances are randomly ordered, following previous works [@chen2021pix2seq]. Supposing there are $n_{cls}$ categories of characters (*e.g.*, 97 for English characters and symbols), the vocabulary size of the dictionary used to tokenize the sequence can be calculated as $n_{bins} + n_{cls} + 3$, where the extra three classes are for the $<$PAD$>$, $<$SOS$>$, and $<$EOS$>$ tokens. Empirically, we set $l_{tr}$ and $n_{bins}$ to 25 and 1,000, respectively, in our experiments. Moreover, the maximum value of $n_{ti}$ is set to 60, which means that sequences containing more than 60 text instances will be truncated.
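A compact sketch of this construction is shown below; the concrete token-id layout (coordinate bins first, then characters, then the three special tokens) and the stand-in character set are assumptions chosen only to make the vocabulary size $n_{bins} + n_{cls} + 3$ explicit.

```python
# Sketch of the target-sequence construction described above (not the authors' code).
CHARS = list("abcdefghijklmnopqrstuvwxyz0123456789")  # stand-in character set (n_cls classes)
N_BINS, L_TR = 1000, 25
PAD, SOS, EOS = N_BINS + len(CHARS), N_BINS + len(CHARS) + 1, N_BINS + len(CHARS) + 2

def build_target_sequence(instances, img_w, img_h):
    # instances: list of (x, y, text), with (x, y) the indicated point in pixels
    seq = [SOS]
    for x, y, text in instances:                          # instance order is randomized in training
        seq.append(int(x / img_w * (N_BINS - 1)))         # discretized x coordinate
        seq.append(int(y / img_h * (N_BINS - 1)))         # discretized y coordinate
        chars = [N_BINS + CHARS.index(c) for c in text.lower()[:L_TR] if c in CHARS]
        seq.extend(chars + [PAD] * (L_TR - len(chars)))   # pad/truncate transcription to l_tr
    seq.append(EOS)
    return seq                                            # (2 + l_tr) tokens per instance, plus SOS/EOS
```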
73
+
74
+ <figure id="fig:seq_constrcut" data-latex-placement="t!">
75
+ <embed src="figs/sequence_construction.2.drawio.pdf" />
76
+ <figcaption>Pipeline of the sequence construction.</figcaption>
77
+ </figure>
78
+
79
+ <figure id="fig:input_output_seq" data-latex-placement="t!">
80
+ <embed src="figs/input_output_seq.pdf" style="width:90.0%" />
81
+ <figcaption>Input and output sequences of the decoder.</figcaption>
82
+ </figure>
83
+
84
+ Based on the constructed sequence, the input and output sequences of the Transformer decoder are shown in Fig. [13](#fig:input_output_seq){reference-type="ref" reference="fig:input_output_seq"}. Since the SPTS is trained to predict tokens, it only requires to maximize the likelihood loss at training time, which can be written as: $$\begin{equation}
85
+ \label{eq_objective}
86
89
+ {\rm maximize} \sum_{i=1}^{L} {{\bf w}}_i \log
90
+ P(\tilde{{{\bf s}}}_i | {{\bf I}}, {{\bf s}}_{1:i}),
91
+ \end{equation}$$ where $\textbf{I}$ is the input image, $\tilde{\textbf{s}}$ is the output sequence, $\textbf{s}$ is the input sequence, $L$ is the length of the sequence, and $\textbf{w}_i$ is the weight of the likelihood of the $i$-th token, which is empirically set to 1.
92
+
93
+ At the inference stage, SPTS auto-regressively predicts the tokens until the end of the sequence token $<$EOS$>$ occurs. The predicted sequence will subsequently be divided into multiple segments, each of which contains $2 + l_{tr}$ tokens. Then, the tokens can be easily translated into the point coordinates and transcriptions, yielding the text spotting results. In addition, the likelihood of all tokens in the corresponding segment is averaged and assigned as a confidence score to filter the original outputs, which effectively removes redundant and false-positive predictions.
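The post-processing can be sketched as below, reusing the illustrative token layout from the construction sketch; it splits the predicted tokens into segments of $2 + l_{tr}$ and averages the token likelihoods as a confidence score.

```python
# Sketch of inference-time decoding (token layout as in the construction sketch above).
def decode_sequence(pred_tokens, pred_probs, img_w, img_h):
    # pred_tokens: predicted token ids after <SOS>; pred_probs: per-token likelihoods
    body = pred_tokens[:pred_tokens.index(EOS)] if EOS in pred_tokens else pred_tokens
    results, seg_len = [], 2 + L_TR
    for i in range(0, len(body) - seg_len + 1, seg_len):
        seg = body[i:i + seg_len]
        x = seg[0] / (N_BINS - 1) * img_w                 # recovered point coordinates
        y = seg[1] / (N_BINS - 1) * img_h
        text = "".join(CHARS[t - N_BINS] for t in seg[2:] if N_BINS <= t < N_BINS + len(CHARS))
        score = sum(pred_probs[i:i + seg_len]) / seg_len  # mean likelihood as confidence
        results.append((x, y, text, score))
    return results                                        # low-score results can then be filtered
```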
2203.07881/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff