Eric03 committed on
Commit f9e4653 · verified · 1 Parent(s): afd60aa

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.

Files changed (50)
  1. 2006.08085/main_diagram/main_diagram.drawio +1 -0
  2. 2006.08085/main_diagram/main_diagram.pdf +0 -0
  3. 2006.08085/paper_text/intro_method.md +44 -0
  4. 2006.08228/paper_text/intro_method.md +39 -0
  5. 2012.07489/main_diagram/main_diagram.drawio +1 -0
  6. 2012.07489/main_diagram/main_diagram.pdf +0 -0
  7. 2012.07489/paper_text/intro_method.md +24 -0
  8. 2108.08815/main_diagram/main_diagram.drawio +0 -0
  9. 2108.08815/paper_text/intro_method.md +115 -0
  10. 2109.10637/main_diagram/main_diagram.drawio +1 -0
  11. 2109.10637/main_diagram/main_diagram.pdf +0 -0
  12. 2109.10637/paper_text/intro_method.md +36 -0
  13. 2111.08919/main_diagram/main_diagram.drawio +0 -0
  14. 2111.08919/paper_text/intro_method.md +18 -0
  15. 2112.08907/main_diagram/main_diagram.drawio +0 -0
  16. 2112.08907/main_diagram/main_diagram.pdf +0 -0
  17. 2112.08907/paper_text/intro_method.md +67 -0
  18. 2201.11192/main_diagram/main_diagram.drawio +1 -0
  19. 2201.11192/main_diagram/main_diagram.pdf +0 -0
  20. 2201.11192/paper_text/intro_method.md +97 -0
  21. 2202.01079/main_diagram/main_diagram.drawio +1 -0
  22. 2202.01079/main_diagram/main_diagram.pdf +0 -0
  23. 2202.01079/paper_text/intro_method.md +33 -0
  24. 2202.11781/main_diagram/main_diagram.drawio +0 -0
  25. 2202.11781/paper_text/intro_method.md +42 -0
  26. 2203.15996/main_diagram/main_diagram.drawio +1 -0
  27. 2203.15996/main_diagram/main_diagram.pdf +0 -0
  28. 2203.15996/paper_text/intro_method.md +49 -0
  29. 2203.16421/main_diagram/main_diagram.drawio +0 -0
  30. 2203.16421/paper_text/intro_method.md +53 -0
  31. 2205.02455/main_diagram/main_diagram.drawio +1 -0
  32. 2205.02455/paper_text/intro_method.md +111 -0
  33. 2205.11775/main_diagram/main_diagram.drawio +0 -0
  34. 2205.11775/main_diagram/main_diagram.pdf +0 -0
  35. 2205.11775/paper_text/intro_method.md +28 -0
  36. 2206.11474/main_diagram/main_diagram.drawio +0 -0
  37. 2206.11474/paper_text/intro_method.md +140 -0
  38. 2207.01769/main_diagram/main_diagram.drawio +0 -0
  39. 2207.01769/paper_text/intro_method.md +111 -0
  40. 2207.09090/main_diagram/main_diagram.drawio +1 -0
  41. 2207.09090/main_diagram/main_diagram.pdf +0 -0
  42. 2207.09090/paper_text/intro_method.md +0 -0
  43. 2209.10492/main_diagram/main_diagram.drawio +1 -0
  44. 2209.10492/main_diagram/main_diagram.pdf +0 -0
  45. 2209.10492/paper_text/intro_method.md +61 -0
  46. 2210.06041/main_diagram/main_diagram.drawio +1 -0
  47. 2210.06041/main_diagram/main_diagram.pdf +0 -0
  48. 2210.06041/paper_text/intro_method.md +157 -0
  49. 2211.03524/main_diagram/main_diagram.drawio +0 -0
  50. 2211.03524/paper_text/intro_method.md +215 -0
2006.08085/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2021-06-03T01:25:34.466Z" version="14.7.4" etag="ffcvoPK0gQOlZjK7t4RH" type="device"><diagram id="tWJ2v2nmeRhB0n_LnIjX">…</diagram></mxfile>
2006.08085/main_diagram/main_diagram.pdf ADDED
Binary file (37.3 kB).
 
2006.08085/paper_text/intro_method.md ADDED
@@ -0,0 +1,44 @@
+ # Introduction
+
+ Parallelism is a ubiquitous method to accelerate model training [1, 2, 3, 4]. A parallel learning system usually consists of three layers (Table 1): an application to solve, a communication protocol deciding how parallel workers coordinate, and a network topology determining how workers are connected. Traditional designs for these layers usually follow a centralized setup: in the application layer, training data is required to be shuffled and shared among parallel workers, while in the protocol and network layers, workers communicate either via a fault-tolerant single central node (e.g. Parameter Server) [5, 6, 7] or over a fully-connected topology (e.g. AllReduce) [8, 9]. This centralized design limits the scalability of learning systems in two respects. First, in many scenarios, such as Federated Learning [10, 11] and the Internet of Things (IoT) [12], a shuffled dataset or a complete (bipartite) communication graph is not possible or affordable to obtain. Second, a centralized communication protocol can significantly slow down training, especially on a low-bandwidth or high-latency network [13, 14, 15].
+
+ Table 1: Design choices of centralization and decentralization in different layers of a parallel machine learning system. The protocol specifies how workers communicate. The topology refers to the overlay network that logically connects all the workers.
+
+ | Layer | Centralized | Decentralized |
+ |---------------------|-----------------------------------------|-----------------------------------------|
+ | Application | Shuffled Data | Unshuffled Data<br>(Federated Learning) |
+ | Protocol | AllReduce/AllGather<br>Parameter Server | Gossip |
+ | Network<br>Topology | Complete<br>(Bipartite) Graph | Arbitrary Graph |
+
+ <sup>∗</sup>Correspondence to: yl2967@cornell.edu <sup>†</sup>Correspondence to: cdesa@cs.cornell.edu
+
+ ![](_page_1_Figure_0.jpeg)
+
+ Figure 1: Illustration of how decentralization in different layers leads to different learning systems. From left to right: ①: a fully centralized system where workers sample from shared and shuffled data; ②: based on ①, workers maintain their own data sources, making the system decentralized in the application layer; ③: based on ②, workers are decentralized in the topology layer; ④: a fully decentralized system in all three layers, where the workers communicate via Gossip. Our framework and theory are applicable to all of these decentralized learning systems.
+
+ The rise of decentralization. To mitigate these limitations, decentralization comes to the rescue. Decentralizing the application and network layers allows workers to learn with unshuffled local datasets [16] and arbitrary topologies [17, 18]. Furthermore, the decentralized protocol, i.e. Gossip, helps to balance load and has been shown to outperform centralized protocols in many cases [19, 20, 21, 22].
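+
+ As a minimal illustration of the Gossip protocol, consider one step of decentralized SGD on a ring of four workers (a sketch in the style of D-PSGD [19]; the mixing matrix and all names are our own, not from this paper): each worker takes a local gradient step and then averages its parameters with its ring neighbours.
+
+ ```python
+ import numpy as np
+
+ # Doubly stochastic mixing matrix W for a 4-worker ring: each worker
+ # averages with itself and its two neighbours.
+ W = np.array([
+     [0.5,  0.25, 0.0,  0.25],
+     [0.25, 0.5,  0.25, 0.0 ],
+     [0.0,  0.25, 0.5,  0.25],
+     [0.25, 0.0,  0.25, 0.5 ],
+ ])
+
+ def gossip_sgd_step(params, grads, lr=0.1):
+     """One decentralized SGD step: local update, then neighbour
+     averaging x_i <- sum_j W[i, j] * x_j. params, grads: (n, d)."""
+     return W @ (params - lr * grads)
+
+ # Workers start from different points; repeated gossip rounds contract
+ # their parameters toward the global average.
+ params = np.random.randn(4, 3)
+ params = gossip_sgd_step(params, np.zeros_like(params))
+ ```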
+
+ Understanding decentralization with layers. Many decentralized training designs have been proposed, which can lead to confusion as the term "decentralization" is used inconsistently in the literature. Some works use "decentralized" to refer to approaches that can tolerate non-iid or unshuffled datasets [16], while others use it to mean gossip communication [19], and still others use it to mean a sparse topology graph [23]. To eliminate this ambiguity, we formulate Table 1, which summarizes the different "ways" a system can be decentralized. Note that the choices to decentralize different layers are independent; e.g., the centralized protocol AllReduce can still be implemented on a decentralized topology like the Ring graph [23].
+
+ The theoretical limits of decentralization. Despite the empirical success, the best convergence rates achievable by decentralized training, and how they interact with different notions of decentralization, remain an open question. Previous works often show the complexity of a given decentralized algorithm with respect to the number of iterations T or the number of workers n, ignoring other factors such as the network topology, function parameters, or data distribution. Although a series of decentralized algorithms have been proposed with theoretical improvements, such as variance reduction [24], acceleration [17], or matching [25], we do not know how close they are to an "optimal" rate or whether further improvement is possible.
+
+ In light of this, a natural question is: What is the optimal complexity in decentralized training? Has it been achieved by any algorithm yet? Previous works have made initial attempts on this question by analyzing this theoretical limit in a non-stochastic or (strongly) convex setting [17, 26, 27, 28, 29, 30]. These results provide valuable heuristics but still leave the central question open, since stochastic methods are usually used in practice and many real-world problems of interest are non-convex (e.g. deep learning). In this paper we give the first full answer to this question; our contributions are as follows.
+
+ - In Section 4, we prove the first (to our knowledge) tight lower bound for decentralized training in a stochastic non-convex setting. Our results reveal an asymptotic gap between our lower bound and the known convergence rates of existing algorithms.
+ - In Section 5, we prove our lower bound is tight by exhibiting an algorithm called DeFacto that achieves it, albeit while only being decentralized in the sense of the application and network layers.
+ - In Section 6, we propose DeTAG, a practical algorithm that achieves the lower bound up to only a logarithmic gap and that is decentralized in all three layers.
+ - In Section 7, we experimentally evaluate DeTAG on the CIFAR benchmark and show it converges faster than decentralized learning baselines.
+
+ Table 2: Complexity comparison among different algorithms in the stochastic non-convex setting on arbitrary graphs. The blue text marks the results from this paper. Definitions of all the parameters can be found in Section 3. Other algorithms like EXTRA [69] or MSDA [70] are not comparable since they are designed for (strongly) convex problems. Additionally, Liu and Zhang [71] provide an alternative complexity bound for algorithms like D-PSGD that improves upon the spectral gap. However, the new bound would compromise the dependency on $\epsilon$, which does not conflict with our comparison here.
33
+
34
+ | | Source | Protocol | Sample Complexity | Comm. Complexity | Gap to Lower Bound |
35
+ |-------------|---------------------|-----------|------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------|
36
+ | Lower Bound | Theorem 1 | Central | $\Omega\left(\frac{\Delta L\sigma^2}{nB\epsilon^4}\right)$ | $\Omega\left(\frac{\Delta LD}{\epsilon^2}\right)$ | / |
37
+ | | Corollary 1 | Decentral | $\Omega\left(\frac{\Delta L\sigma^2}{nB\epsilon^4}\right)$ | $\Omega\left(\frac{\Delta L}{\epsilon^2 \sqrt{1-\lambda}}\right)$ | / |
38
+ | Upper Bound | DeFacto (Theorem 2) | Central | $O\left(\frac{\Delta L \sigma^2}{nB\epsilon^4}\right)$ | $O\left(\frac{\Delta LD}{\epsilon^2}\right)$ | O(1) |
39
+ | | DeTAG (Theorem 3) | Decentral | $O\left(\frac{\Delta L \sigma^2}{nB\epsilon^4}\right)$ | $O\left(\frac{\Delta L \log\left(\frac{\varsigma_0 n}{\epsilon \sqrt{\Delta L}}\right)}{\epsilon^2 \sqrt{1-\lambda}}\right)$ | $O\left(\log\left(\frac{\varsigma_0 n}{\epsilon \sqrt{\Delta L}}\right)\right)$ |
40
+ | | D-PSGD [19] | Decentral | $O\left(\frac{\Delta L\sigma^2}{nB\epsilon^4}\right)$ | $O\left(\frac{\Delta L n \varsigma}{\epsilon^2 (1-\lambda)^2}\right)$ | $O\left(\frac{n\varsigma}{(1-\lambda)^{\frac{3}{2}}}\right)$ |
41
+ | | SGP [72] | Decentral | $O\left(\frac{\Delta L\sigma^2}{nB\epsilon^4}\right)$ | $O\left(\frac{\Delta L n \varsigma}{\epsilon^2 (1-\lambda)^2}\right)$ | $O\left(\frac{n\varsigma}{(1-\lambda)^{\frac{3}{2}}}\right)$ |
42
+ | | $D^2 [24]$ | Decentral | $O\left(\frac{\Delta L \sigma^2}{nB\epsilon^4}\right)$ | $O\left(\frac{\lambda^2 \Delta L n \varsigma_0}{\epsilon^2 (1-\lambda)^3}\right)$ | $O\left(\frac{\lambda^2 n \varsigma_0}{(1-\lambda)^{\frac{5}{2}}}\right)$ |
43
+ | | DSGT [42] | Decentral | $O\left(\frac{\Delta L\sigma^2}{nB\epsilon^4}\right)$ | $O\left(\frac{\lambda^2 \Delta L n \varsigma_0}{\epsilon^2 (1-\lambda)^3}\right)$ | $O\left(\frac{\lambda^2 n \varsigma_0}{(1-\lambda)^{\frac{5}{2}}}\right)$ |
44
+ | | GT-DSGD [44] | Decentral | $O\left(\frac{\Delta L\sigma^2}{nB\epsilon^4}\right)$ | $O\left(\frac{\lambda^2 \Delta L n \varsigma_0}{\epsilon^2 (1-\lambda)^3}\right)$ | $O\left(\frac{\lambda^2 n \varsigma_0}{(1-\lambda)^{\frac{5}{2}}}\right)$ |
2006.08228/paper_text/intro_method.md ADDED
@@ -0,0 +1,39 @@
+ # Method
+
+ For most of the MNIST and Fashion MNIST classification tasks, we use the standard LeNet-300-100 MLP and LeNet-5-Caffe CNN architectures, together with ReLU activations for hidden layers and a softmax cross-entropy loss on the logit outputs. One exception is the toy example reported in Sec. [\[subsec:probing\]](#subsec:probing){reference-type="ref" reference="subsec:probing"}, where we used two linear output neurons to perform regression.
+
+ For the CIFAR-10 and SVHN datasets, we used a CNN model consisting of 4 convolution layers followed by 2 feedforward layers with dropout (Table [1](#tab:keras_net){reference-type="ref" reference="tab:keras_net"}). This can be considered a slightly scaled-up version of the CNN from the Keras tutorial[^2], in which the only modification we made was to double the number of filters in each convolutional layer.
+
+ ::: {#tab:keras_net}
+   Operation         Filter size    \# Filters   Stride         Dropout rate   Activation
+   --------------- -------------- ------------ -------------- -------------- ------------
+   3x32x32 input    --             --           --             --             --
+   Conv             $3 \times 3$   64           $1 \times 1$   --             ReLU
+   Conv             $3 \times 3$   64           $1 \times 1$   --             ReLU
+   MaxPool          --             --           $2 \times 2$   0.25           --
+   Conv             $3 \times 3$   128          $1 \times 1$   --             ReLU
+   Conv             $3 \times 3$   128          $1 \times 1$   --             ReLU
+   MaxPool          --             --           $2 \times 2$   0.25           --
+   FC               --             512          --             0.5            ReLU
+   FC               --             10           --             --             Softmax
+
+   : The Conv-4 architecture used for the CIFAR-10 and SVHN tasks. The dropout rate is defined to be the fraction of the input units to drop.
+ :::
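+
+ For concreteness, a Keras-style sketch of this Conv-4 architecture follows (our reading of the table above, not the authors' code; the padding choices are assumptions carried over from the Keras tutorial):
+
+ ```python
+ from tensorflow import keras
+ from tensorflow.keras import layers
+
+ # Conv-4: the Keras-tutorial CNN with the filter counts doubled.
+ model = keras.Sequential([
+     layers.Input(shape=(32, 32, 3)),              # 3x32x32 input, channels-last
+     layers.Conv2D(64, 3, padding="same", activation="relu"),
+     layers.Conv2D(64, 3, activation="relu"),
+     layers.MaxPooling2D(2),
+     layers.Dropout(0.25),                         # drop 25% of the inputs
+     layers.Conv2D(128, 3, padding="same", activation="relu"),
+     layers.Conv2D(128, 3, activation="relu"),
+     layers.MaxPooling2D(2),
+     layers.Dropout(0.25),
+     layers.Flatten(),
+     layers.Dense(512, activation="relu"),
+     layers.Dropout(0.5),                          # drop 50% before the classifier
+     layers.Dense(10, activation="softmax"),
+ ])
+ ```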
+
+ In Table [\[tab:ntt_params\]](#tab:ntt_params){reference-type="ref" reference="tab:ntt_params"} we summarize the hyperparameters used in the NTT optimization stage. For each experiment, we initialized NTT teachers using the Glorot initialization scheme [@Glorot10]; we then performed gradient-based optimization using the Adam optimizer [@Kingma2015adam] with various learning rates and batch sizes (see Table [\[tab:ntt_params\]](#tab:ntt_params){reference-type="ref" reference="tab:ntt_params"}).
+
+ The toy example in Section [\[subsec:probing\]](#subsec:probing){reference-type="ref" reference="subsec:probing"} of the main text deserves some additional comments. For this task, we used 500 images from the MNIST dataset, containing 250 images each of the digits 0 and 1. We performed 5000 iterations of full-batch gradient descent for this task; for this reason, 5000 is also the total number of epochs.
+
+ Regarding the supervised learning experiments, we set aside 10% of the training data for model validation purposes and only used 90% for model training. We used the Adam optimizer with learning rate 1e-3, $\beta_1 = 0.9$, and $\beta_2 = 0.999$ for all supervised learning tasks except for the visualization task in Sec. [\[subsec:probing\]](#subsec:probing){reference-type="ref" reference="subsec:probing"}, in which the stochastic gradient descent optimizer with learning rate 0.01 was used. In addition, all experiments, except for the visualization task, used a minibatch size of 64. For the MNIST and Fashion MNIST experiments, we performed optimization for 50 epochs. On CIFAR-10, we trained for 600 epochs.
+
+ In this subsection, we first recap the SNIP pruning method [@Lee2018snip] and introduce two straightforward extensions of it, Layerwise-SNIP and Logit-SNIP, which were used as baselines for NTT. Finally, we point out some technicalities of the random pruning baselines.
+
+ Recall that SNIP [@Lee2018snip] assigns each neural network parameter $\theta$ a sensitivity score $S(\theta)$ defined as
+
+ $$S(\theta) = \left| \theta \cdot \frac{\partial L_{\bm{\theta}} }{\partial \theta} \right|,$$ where $L_{\bm{\theta}} = \sum_{i = 1}^{n_B} L( f(\bm{x}_i, \bm{\theta}), \bm{y}_i)$ is the loss evaluated over a batch of $n_B$ input-output data pairs $\{\bm{x}_i, \bm{y}_i\}_{i = 1}^{n_B}$ and $\bm{\theta}$ is the vector of randomly initialized parameters. @Lee2018snip proposed to remove the neural network parameters with the lowest sensitivity scores. That is, in its original formulation, SNIP is a global pruning method. To be consistent with [@Lee2018snip], we reserve the terminology SNIP for the global pruning context. A straightforward way to turn SNIP into a layerwise pruning method is to remove a fixed fraction of the parameters having the lowest sensitivity scores from each layer. We call this extension Layerwise-SNIP.
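+
+ As a minimal PyTorch sketch of this scoring (our illustration, not the reference implementation; `snip_scores` and `global_prune_mask` are hypothetical helpers):
+
+ ```python
+ import torch
+
+ def snip_scores(model, loss_fn, x, y):
+     """Sensitivity scores S(theta) = |theta * dL/dtheta| for one batch."""
+     loss = loss_fn(model(x), y)
+     params = [p for p in model.parameters() if p.requires_grad]
+     grads = torch.autograd.grad(loss, params)
+     return [(p.detach() * g).abs() for p, g in zip(params, grads)]
+
+ def global_prune_mask(scores, sparsity=0.9):
+     """SNIP (global): keep the top (1 - sparsity) fraction network-wide."""
+     flat = torch.cat([s.flatten() for s in scores])
+     k = max(1, int((1.0 - sparsity) * flat.numel()))
+     threshold = flat.topk(k).values.min()
+     return [s >= threshold for s in scores]
+ ```
+
+ Layerwise-SNIP would instead apply the top-k selection within each layer's score tensor separately.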
+
+ The SNIP and Layerwise-SNIP methods described above depend on labels. Here we provide a label-free extension: we modify the sensitivity score $S(\theta)$ into a logit-based sensitivity score $\tilde{S}(\theta)$ defined as
+
+ $$\tilde{S}(\theta) = \left| \theta \cdot \frac{\partial Z_{\bm{\theta}} }{\partial \theta} \right|,$$ where $Z_{\bm{\theta}} = \sum_{i = 1}^{n_B} \|f(\bm{x}_i, \bm{\theta})\|_2^2$. We can perform either global or layerwise pruning with respect to the scores $\tilde{S}(\theta)$. We refer to the global pruning criterion as Logit-SNIP and the layerwise criterion as Layerwise Logit-SNIP. When the context is clear, we may use the terminology Logit-SNIP to refer to either its layerwise or global variant.
+
+ In the main text we have explained two ways to randomly sample sparse neural networks. Note that for these random methods, the global pruning variant is equivalent to the respective layerwise variant: in either formulation, each weight parameter receives an identical chance of being removed, and therefore the expected fraction of pruned parameters is the same for each layer. In each run, we randomly remove this expected fraction of parameters from each layer.
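+
+ The label-free variant only changes the scored quantity; a sketch mirroring `snip_scores` above (again our illustration):
+
+ ```python
+ import torch
+
+ def logit_snip_scores(model, x):
+     """Label-free scores |theta * dZ/dtheta|, Z = sum_i ||f(x_i)||_2^2."""
+     z = model(x).pow(2).sum()
+     params = [p for p in model.parameters() if p.requires_grad]
+     grads = torch.autograd.grad(z, params)
+     return [(p.detach() * g).abs() for p, g in zip(params, grads)]
+ ```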
2012.07489/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2021-03-15T19:25:21.122Z" version="14.4.8" etag="DRrds0_-MnsHbF7Vog24" type="device"><diagram id="1PEgvlJM-QoFYbGOJfO5" name="Page-1">…</diagram></mxfile>
2012.07489/main_diagram/main_diagram.pdf ADDED
Binary file (43.2 kB).
 
2012.07489/paper_text/intro_method.md ADDED
@@ -0,0 +1,24 @@
+ # Introduction
+
+ With the advent of deep learning, significant progress has been made in various image understanding tasks, including image classification, object detection, and image segmentation. State-of-the-art methods can impressively classify images into 10k classes [\[15\]](#page-8-0) and detect 9k different objects [\[49\]](#page-10-0). In contrast, segmentation models have been trained for a fairly limited number of common classes. The ability to segment a greater variety of objects, including small and rare object classes, is critical to many real-life
+
+ ![](_page_0_Figure_8.jpeg)
+
+ <span id="page-0-0"></span>Figure 1. The left y-axis shows the maximum batch size that can fit in a single GPU for the DeepLabV3+ model vs. the number of classes in the dataset. The right y-axis (markers in yellow and green) shows pixel accuracy of our model and the baseline for the following datasets (number of classes): Cityscapes (19), ADE20k (150), COCO-Stuff10k (182) and COCO+LVIS (1284).
+
+ applications like autonomous driving [\[2\]](#page-8-1) and scene exploration [\[7\]](#page-8-2). The scaling of existing segmentation models has several unresolved challenges. One of the challenges is the unbalanced distribution of classes. As mentioned in [\[21\]](#page-8-3), due to the Zipfian distribution of classes in natural settings, there is a long tail of rare and small object classes that do not have a sufficient number of examples to train the model. The lack of segmentation datasets with a multitude of classes for learning and evaluation also limits our ability to develop scalable segmentation models. In fact, one can also argue from the other side: the reason for the limited number of classes in existing segmentation datasets is the discouraging computational demand, alongside the labor-intensive annotation. The task of semantic segmentation is essentially a pixel-level classification of an image. Typically, it is performed by predicting an output tensor of size H × W × C for an image of size H × W and C semantic classes [\[36\]](#page-9-0). This is convenient for pixel-wise classification, as a cross-entropy loss can be employed on the C-dimensional predictions. Unfortunately, the memory demand for such predictions happens to be a major bottleneck for a large number of classes, as illustrated in Figure [1.](#page-0-0)
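+
+ As a back-of-the-envelope illustration (our numbers, not the paper's): the float32 logit tensor alone for a single 512 × 512 image with C = 1284 classes already occupies over a gigabyte, before activations or gradients are counted.
+
+ ```python
+ # Memory of the H x W x C logit tensor for one image (float32).
+ H, W, C = 512, 512, 1284            # COCO+LVIS-scale class count
+ logit_bytes = H * W * C * 4         # 4 bytes per float32 value
+ print(f"{logit_bytes / 2**30:.2f} GiB per image")  # ~1.25 GiB
+ ```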
+
+ Most existing works [\[53,](#page-10-1) [62,](#page-10-2) [20,](#page-8-4) [8\]](#page-8-5) primarily focus on accuracy for datasets with a few hundred semantic classes using multiple GPUs. With the release of the LVIS dataset [\[21\]](#page-8-3), efforts are being made to scale instance segmentation models to a large number of classes. In contrast to semantic segmentation, the task of instance segmentation is performed by classification at the region level. However, for a rich and complete understanding of the scene, semantic segmentation followed by panoptic segmentation [\[29\]](#page-9-1) is the way forward. Therefore, it stands to reason that real-world semantic segmentation networks will eventually have to handle at least as many classes as classification, *i.e.* 10K. Unfortunately, the benchmark results on the ADE20k segmentation dataset with 150 classes require 4-8 GPUs during training [\[64\]](#page-10-3). This shows that models for semantic segmentation have been fueled by a large number of GPUs. Such demand for computational resources hinders researchers in emerging economies and small-scale industries from leveraging these models for research and developing further applications.
+
+ Naive approaches for training segmentation models on a large number of classes with limited GPU memory may be designed by reducing the image resolution or batch size. Such solutions regrettably compromise performance. As shown in [\[55\]](#page-10-4), lower resolutions (or higher strides) result in blurry boundaries and coarse predictions and miss small but essential regions, such as poles and traffic signs. On the other hand, [\[65\]](#page-10-5) has already demonstrated the need for a larger batch size to achieve state-of-the-art results. While techniques like gradient accumulation [\[24\]](#page-8-6) and group normalization [\[58\]](#page-10-6) help to reduce the effect of a small batch size, they fail to solve the problem completely when even a single sample does not fit into GPU memory. When more than one GPU is available, the authors in [\[62\]](#page-10-2) offer a promising synchronized multi-GPU batch normalization technique to increase the effective batch size. Such solutions allow scaling the number of classes at the cost of scaling the number of GPUs. However, the possibility of scaling training to many classes with a single GPU remains unexplored. Figure [1](#page-0-0) also illustrates an example case: the maximum batch size of 512×512 images versus the number of classes on one standard GPU (Titan Xp) while training the DeepLabV3+ model with a ResNet50 backbone. As expected, the batch size decreases sharply, leaving only one image per batch at 1320 classes.
+
+ In this work, we propose a novel training methodology for which the memory requirement does not increase with the number of semantic classes. To the best of our knowledge, this is the first work to study efficient training methods for semantic segmentation models beyond 1K classes. Such scaling is achieved by reducing the output channels of existing networks and learning a low-dimensional embedding of semantic classes. We also propose an efficient strategy to learn and exploit such an embedding for the task of semantic image segmentation. Our main motive is to improve the scalability of existing segmentation networks, instead of competing against them, by endowing them with the possibility of training on only one GPU for a very high number of semantic classes. The major contributions of this paper are summarized as follows:
+
+ - We propose a novel scalable approach for training semantic segmentation networks for a large number of classes using only one GPU's memory.
+ - We experimentally demonstrate that the proposed method achieves 2.7x better mIoU scores on a dataset with 1284 classes, when compared against its counterpart, while retaining competitive performance in the regime of a lower number of classes.
+ - For efficiency and generalization, we introduce an approximation to the cross-entropy measure and a semantic embedding space regularization term.
+ - Our method is theoretically grounded in terms of its probabilistic interpretation and underlying assumptions.
+
+ # Method
+
+ We summarize the loss computation part of the proposed method in Algorithm 1. The loss computation for the segmentation network $M_d$ uses images I with semantic masks S. Note that our algorithm requires an efficient GPU-compatible nearest neighbour search function, represented by kNN(), which takes a database and query vectors as inputs. Please refer to Figure 3 for a visual illustration of the algorithmic steps. The computed loss is then used to train our network, illustrated in Figure 2.
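+
+ Algorithm 1 is not reproduced here; as a hedged sketch of the general idea (pixel embeddings regressed toward class embeddings, with kNN() used to decode labels; all tensor layouts and names are our own assumptions, and the loss is a stand-in for the paper's approximate cross-entropy):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def embedding_seg_loss(pred_emb, mask, class_emb):
+     """pred_emb: (B, d, H, W) network output; mask: (B, H, W) class ids;
+     class_emb: (C, d) class embeddings. Pull each pixel's embedding
+     toward its ground-truth class embedding."""
+     target = class_emb[mask]                    # (B, H, W, d)
+     pred = pred_emb.permute(0, 2, 3, 1)         # (B, H, W, d)
+     return F.mse_loss(pred, target)
+
+ def knn_decode(pred_emb, class_emb):
+     """Label each pixel with its nearest class embedding (k = 1)."""
+     b, d, h, w = pred_emb.shape
+     queries = pred_emb.permute(0, 2, 3, 1).reshape(-1, d)
+     dist = torch.cdist(queries, class_emb)      # (B*H*W, C)
+     return dist.argmin(dim=1).view(b, h, w)
+ ```
+
+ The memory win comes from the embedding dimension d being much smaller than C: the network predicts d channels per pixel instead of C.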
2108.08815/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2108.08815/paper_text/intro_method.md ADDED
@@ -0,0 +1,115 @@
+ # Introduction
+
+ Recent years have witnessed several breakthroughs in the generation of high dimensional data such as images [\[6,](#page-8-0) [8,](#page-8-1) [25\]](#page-8-2) or videos [\[37,](#page-9-0) [41\]](#page-9-1). However, most practical and commercial applications require controlling the generated visual data based on inputs provided by the user. For instance, in image manipulation, photo editing software [\[1\]](#page-8-3) applies deep learning models to allow users to change portions of an image [\[28,](#page-8-4) [32,](#page-8-5) [49\]](#page-9-2).
+
+ Regarding videos, several possible ways to control the generated sequences have been considered. For instance, the generation of frames can be conditioned on simple categorical attributes [\[13\]](#page-8-6), short sentences [\[23\]](#page-8-7) or sound [\[36\]](#page-9-3). An interesting recent research direction comprises works that attempt to condition the video generation process by providing motion information as input [\[34,](#page-9-4) [35,](#page-9-5) [37,](#page-9-0) [44\]](#page-9-6). These approaches allow generating videos of moving faces
+
+ ![](_page_0_Figure_10.jpeg)
+
+ <span id="page-0-0"></span>Figure 1. Illustration of the video generation process of Click to Move (C2M): 1) the user selects the objects in a scene and specifies their movements. 2) Our network models the interactions between *all* objects through the GCN and 3) predicts their displacement. 4) The network produces a realistic and temporally consistent video.
+
+ [\[44\]](#page-9-6), human silhouettes and, in general, arbitrary objects [\[34,](#page-9-4) [35,](#page-9-5) [37\]](#page-9-0). However, these works mainly deal with videos depicting a single object. It is considerably more challenging to animate images and generate videos when multiple objects are present in the scene, as there is no simple way to disentangle the information associated with each object and easily model and control its movement.
+
+ This paper introduces Click to Move (C2M), the first approach that allows users to generate videos in complex scenes by conditioning the movements of specific objects through mouse clicks. Fig. [1](#page-0-0) illustrates the video generation process of C2M. The user only needs to select a few objects in the scene and to specify the 2D location where each object should move. Our proposed framework receives as inputs an initial frame with its segmentation map and synthesizes a video sequence depicting objects whose movements are coherent with the user inputs. The proposed deep architecture comprises three main modules: (i) an appearance encoder that extracts the feature representation from the first frame and the associated segmentation map, (ii) a motion module that predicts motion information from user inputs and image features, and (iii) a generation module that outputs the synthesized frame sequence. In complex scenes with multiple objects, modelling interactions is essential to generate coherent videos. To this aim, we propose to adopt a Graph Convolutional Network (GCN), which models object interactions and infers plausible displacements for all the objects in the video, while respecting the user's constraints. Experimental results show that our approach outperforms previous video generation methods on two publicly available datasets and demonstrate the effectiveness of the proposed GCN framework in modelling object interactions in complex scenes.
+
+ Our work is inspired by previous literature that generates videos from an initial frame and the associated segmentation maps [\[29,](#page-8-8) [33\]](#page-9-7). From these works, we inherit a two-stage procedure where we first estimate the optical flows between an initial frame and all the generated frames, and subsequently refine the image obtained by warping the initial frame according to the estimated optical flows. However, our framework improves over these previous works as it allows the user to directly control the video generation process with simple mouse clicks. Similarly to the work of Hao *et al.* [\[12\]](#page-8-9), we propose to control object movements via sparse motion inputs. However, thanks to the GCN, our approach can deal with scenes with multiple objects, while [\[12\]](#page-8-9) cannot. Furthermore, the method in [\[12\]](#page-8-9) does not explicitly consider the notion of *object*, as it does not use any instance segmentation information, and does not model the temporal relation between multiple frames. We instead work on multiple frames and in the semantic space, so the user can intuitively select the object of interest and move it in a temporally consistent way. The use of semantic information is motivated by recent findings in the area of image manipulation, where it has been shown that semantic maps are beneficial in complex scenes [\[2,](#page-8-10) [21\]](#page-8-11).
+
+ Contributions. Overall, the main contributions of our work are as follows:
+
+ - We propose Click to Move (C2M), a novel approach for video generation of complex scenes that permits user interaction by selecting objects in the scene and specifying their final locations through mouse clicks.
+ - We introduce a novel deep architecture that leverages the initial video frame and its associated segmentation map to compute the motion representations that enable the generation of the frame sequence. Our deep network incorporates a novel GCN that models the interactions between objects to infer the motion of all the objects in the scene.
+ - Through an extensive experimental evaluation, we demonstrate that the proposed approach outperforms its competitors [\[29,](#page-8-8)[33\]](#page-9-7) in terms of video quality metrics and can synthesize videos where object movements follow the user inputs.
+
+ # Method
+
+ We aim at generating a video from its initial frame $\mathbf{X}_0 \in \mathbb{R}^{H \times W \times 3}$ and a set of user-provided 2D vectors that specify the motion of the key objects in the scene. At test time, we assume that we also have at our disposal the instance segmentation map of the initial frame. Our system is trained on a dataset of videos composed of T frames with the corresponding instance segmentation maps at every frame. As we will see later, in practice, instance segmentation is obtained using a pre-trained model.
+
+ Considering a set of C classes, we assume that N objects are detected at time t in the frame $\mathbf{X}_t \in \mathbb{R}^{H \times W \times 3}$. The instance segmentation is represented via a segmentation map $\mathbf{S}_t \in \{0,1\}^{H \times W \times C}$, a class label map $\mathbf{C}_t \in \{1,...,C\}^{H\times W}$ and an instance map $\mathbf{I}_t \in \{1,...,N\}^{H\times W}$ that specifies the instance index for every pixel. At test time, the user provides the motion of M objects in the scene by drawing 2D arrows corresponding to the displacement between the barycenter of each object in $\mathbf{X}_0$ and the object's desired position at time T (see Fig. 1). Notably, the user is free to provide motion vectors for as many objects as desired. The motion vectors are therefore represented by a list $\mathcal{M} = \{(\boldsymbol{\delta}_m, i_m), 1 \leq m \leq M\}$, where $\boldsymbol{\delta}_m \in \mathbb{R}^2$ contains the barycenter displacement of the object with instance index $i_m$. At training time, the list $\mathcal{M}$ is obtained by randomly sampling objects in every video and estimating their corresponding $\boldsymbol{\delta}_m$, defined as the displacement of the instance segmentation's barycenter between the first and last frame.
+
+ The proposed framework is articulated in three main modules, as illustrated in Fig. 2. First, the Appearance encoding is in charge of encoding the initial frame. This module receives as input the concatenation of the initial frame $\mathbf{X}_0$, the segmentation $\mathbf{S}_0$ and the instance map $\mathbf{I}_0$, and outputs a feature map $z_a$ via an encoder $E_A$. Second, the Motion encoding predicts the video motion from the motion vectors provided by the user and the image features $z_a$. This module includes a novel Graph Convolutional Network (GCN) that infers the motion of all the objects in the scene by combining the object motion vectors in $\mathcal{M}$ and the image features $z_a$. This motion module is described in Sec. 3.2, while the details specific to our GCN are given in Sec. 3.1. Finally, the Generation module is in charge of combining the encoded appearance and the predicted motion to generate every frame of the output video.
+
+ Our GCN aims at inferring the motion of all the objects in the scene by combining the motion vectors provided by the user and the image features $z_a$. This section first describes the specific message-passing algorithm that we introduce to model the motion vectors. Then we show how our GCN is embedded into a Variational Auto-Encoder (VAE) framework to allow sampling the possible object motions that respect the user's constraints.
+
+ Handling user control with GCNs. We propose to use a graph to model the interactions between the objects in the scene. Each node corresponds to one of the N objects detected in $\mathbf{X}_0$. The graph is obtained by fully connecting all the objects with each other. Let us introduce the following notation: $f_n$ is the feature vector for the $n^{th}$ object and is extracted from $\mathbf{z}_a$ via region-wise average pooling. $\mathbf{d}_n \in \mathbb{R}^2$ is the estimated barycenter displacement for the $n^{th}$ object. Finally, $u_n \in \{0,1\}$ is a binary value that specifies whether the object motion has been provided by the user $(u_n = 1)$ or should be inferred $(u_n = 0)$.
+
+ In a standard GCN [46], the layer-wise propagation rule specifies how the features $f_n^{(k)}$ at iteration k of the node n are computed from the features of its neighbouring nodes at the previous iteration $f_j^{(k-1)}$:
+
+ <span id="page-2-1"></span>
+ $$\boldsymbol{f}_n^{(k)} = \sum_{j \in \mathbf{N}(n)} \frac{1}{\sqrt{\mathcal{D}_{nj}}} \boldsymbol{\theta}^{\top} \boldsymbol{f}_j^{(k-1)}$$
+ (1)
+
+ where $\mathbf{N}(n)$ denotes the neighbours of the node n, $\boldsymbol{\theta}$ are the trainable parameters and $\mathcal{D}_{nj}$ is a normalization factor equal to the sum of the degrees of the nodes n and j. In our context, we need to modify this update rule to take into account that
+
+ ![](_page_3_Picture_0.jpeg)
+
+ ![](_page_3_Picture_1.jpeg)
+
+ Figure 2. Our network is composed of three modules, namely (i) Appearance encoding, (ii) Motion encoding, and (iii) the Generation module. The Appearance encoding focuses on learning the visual appearance from $X_0$. The Motion encoding models the interactions between the objects, predicts their displacements, encodes the motion, and generates the optical flow and occlusion mask for the Generation module, which focuses on generating temporally consistent and realistic videos. On the right, we show our GCN module that models the objects' interactions.
+
+ <span id="page-3-0"></span>the object motion of each node is either known or unknown. Besides, we propose two different propagation rules for the node features $f_n$ and the motion vectors $d_n$, and make these rules depend on $u_n$. If $u_n = 1$, the node corresponds to an object with a motion controlled by the user and we update only the features:
+
+ $$f_n^{(k)} = f_n^{(k-1)} + \sum_{j \in \mathbf{N}(n)} \frac{1}{\sqrt{\mathcal{D}_{nj}}} \theta_f^{\top} (f_j^{(k-1)} \oplus d_j^{(k-1)})$$
+ (2)
+
+ $$d_n^{(k)} = d_n^{(k-1)}.$$
+ (3)
+
+ Here, $\theta_f$ denotes the trainable parameters and $\oplus$ is the concatenation operation. This formulation allows propagating feature information through the node while keeping the object motion constant for the nodes with known motion. Note that, in (2), we opt for a residual update since the messages from the neighbouring nodes are added to the current value $f_n^{(k-1)}$. Our preliminary results showed that the update rule in (1) ended up with all the nodes having the exact same features. On the contrary, the residual update helped objects converge to better features. Indeed, this residual update can be seen as a skip connection, similar to those of ResNet architectures, that allows gradient information to pass through the GCN updates and mitigates vanishing gradient problems.
+
+ If $u_n = 0$, the node corresponds to an object with unknown motion and we update both the features and the motion vector. The feature update remains identical to (2) and the motion vector is updated as follows:
+
+ $$\boldsymbol{d}_{n}^{(k)} = \boldsymbol{d}_{n}^{(k-1)} + \sum_{j \in \mathbf{N}(n)} \frac{1}{\sqrt{\mathcal{D}_{nj}}} \boldsymbol{\theta}_{d}^{\top} (\boldsymbol{f}_{j}^{(k-1)} \oplus \boldsymbol{d}_{j}^{(k-1)})$$
+ (4)
+
+ where $\theta_d$ denotes the trainable parameters for the motion estimation. This novel propagation rule allows aggregating the information contained in the neighbouring nodes to refine the motion estimation of nodes with unknown motion.
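+
+ A minimal sketch of these masked residual updates follows (our illustration of Eqs. (2)-(4), assuming a fully connected graph, uniform normalization $\mathcal{D}_{nj} = 2(N-1)$, and our own tensor layout):
+
+ ```python
+ import torch
+
+ def c2m_gcn_step(f, d, u, theta_f, theta_d):
+     """One propagation step over a fully connected object graph.
+     f: (N, F) node features, d: (N, 2) motion vectors,
+     u: (N,) 1 where motion is user-given, theta_*: linear layers."""
+     n = f.shape[0]
+     msg = torch.cat([f, d], dim=1)              # f_j (+) d_j for every node
+     # Sum messages from all other nodes (no self-loop), normalized by
+     # 1 / sqrt(D_nj) with D_nj = 2 * (n - 1) on a complete graph.
+     agg = (msg.sum(0, keepdim=True) - msg) / (2 * (n - 1)) ** 0.5
+     f_new = f + theta_f(agg)                    # residual feature update, Eq. (2)
+     d_new = d + theta_d(agg)                    # motion update, Eq. (4)
+     keep = u.float().unsqueeze(1)               # Eq. (3): freeze user-given motion
+     return f_new, keep * d + (1 - keep) * d_new
+
+ # Hypothetical usage with 5 objects and 64-d features:
+ theta_f = torch.nn.Linear(64 + 2, 64)
+ theta_d = torch.nn.Linear(64 + 2, 2)
+ f, d = torch.randn(5, 64), torch.randn(5, 2)
+ u = torch.tensor([1, 0, 0, 1, 0])
+ f, d = c2m_gcn_step(f, d, u, theta_f, theta_d)
+ ```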
+
+ In the next section, we detail how this GCN is embedded into a VAE framework in order to sample possible object motions.
+
+ <span id="page-3-1"></span>Overall architecture for motion sampling. Our GCN is embedded into a VAE framework composed of an encoder and a decoder network. At training time, we employ both the encoder and the decoder, while only the decoder is used at test time, as illustrated in Fig. 2-Right. Note that the features $f_n$ condition both the encoder and the decoder. The goal of the encoder network is to map the input value $d_n$ of every node to a latent variable $z_n$. This encoder is implemented using a GCN that employs the propagation rule described in Sec. 3.1 and receives as input $f_n \oplus d_n$ for every node. For every node, the latent variable $z_n$ is given by $f_n^{(k)}$ after the last message propagation update. We assume $z_n$ follows a unit Gaussian distribution ($z_n \sim \mathcal{N}(0,1)$). The decoder network receives as input the randomly sampled latent variable $z_n$ for the nodes with unknown motion (i.e. $u_n = 0$) and is trained to reconstruct the input motion $d_n$. The decoder is implemented with another GCN with the same propagation rules and with inputs $f_n^{(0)} \oplus d_n^{(0)}$, where $f_n^{(0)} = f_n$ and:
+
+ <span id="page-3-2"></span>
+ $$\boldsymbol{d}_{n}^{(0)} = \begin{cases} FC(\boldsymbol{z}_{n}) & \text{if } u_{n} = 0\\ \sum_{m=1}^{M} \mathbb{1}(i_{m} = n)\boldsymbol{\delta}_{m} & \text{if } u_{n} = 1. \end{cases}$$
+ (5)
+
+ where $\mathbb{1}$ denotes the indicator function and $FC(\cdot)$ denotes a fully-connected layer that projects the sampled latent variable $z_n$ to the space of $d_n$ (i.e. $\mathbb{R}^2$). Intuitively, the sum in (5) iterates over all the objects in $\mathcal{M}$ to select the corresponding motion vector provided by the user.
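+
+ In code, the decoder's input motion of Eq. (5) is just a per-node switch (a sketch reusing the layout of the GCN step above; names are ours):
+
+ ```python
+ import torch
+
+ def decoder_input_motion(z, u, user_delta, fc):
+     """d_n^(0): FC(z_n) for unknown-motion nodes, the user's delta otherwise.
+     z: (N, L) latents, u: (N,) flags, user_delta: (N, 2) holding delta_m in
+     the rows where u == 1, fc: torch.nn.Linear(L, 2)."""
+     keep = u.float().unsqueeze(1)
+     return keep * user_delta + (1 - keep) * fc(z)
+ ```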
+
+ At test time, the GCN encoder is not used. The latent variable $z_n$ is sampled according to our unit Gaussian prior distribution for every object with unknown motion and <span id="page-4-1"></span>forwarded to the decoder. The decoder outputs the 2D motion of every object in the scene.
+
+ This module is in charge of predicting the optical flows and the occlusion maps between the initial frame $\mathbf{X}_0$ and every frame that has to be generated. To this aim, for every time step t, we compute a binary tensor $\mathbf{B}_t \in \{0,1\}^{H \times W}$ that specifies the locations of the objects in the scene. At time t=0, the object-location map $\mathbf{B}_0$ is computed from the instance segmentation map $\mathbf{I}_0$:
+
+ $$\forall (i,j) \in H \times W, \quad \mathbf{B}_0[i,j] = \sum_{n=0}^{N} \mathbb{1}(\mathbf{I}_0[i,j] = n).$$
+ (6)
+
+ For t>0, $\mathbf{B}_t$ cannot be estimated with the previous equation since $\mathbf{I}_t$ is not known at test time. Instead, we consider a simple rigid model for every object and obtain $\mathbf{B}_t$ by warping $\mathbf{B}_0$ according to the object motion $d_t$. At training time, $d_t$ is estimated from the segmentation maps while, at test time, we employ $\hat{d}_t$, the displacement predicted by our GCN. Finally, this object-location tensor is mapped to a latent tensor $z_s$ via an encoder $E_s$.
+
+ Note that the output video cannot be fully encoded via the initial frame and the motion of each object, since there exist other sources of variability such as the appearance of new objects or changes in object size. Therefore, we introduce a latent motion variable $z_m$ that encodes all the motion information that cannot be described by $z_s$ and $z_a$. We employ an auto-encoder strategy at training time, estimating $z_m$ from the complete video sequence with an encoder $E_M$. More precisely, $E_M$ receives as input the concatenation of all the video frames, the instance segmentation maps $S_0$ and $I_0$, and the optical flow for every frame. At test time, the latent motion code $z_m$ is sampled according to the prior distribution (i.e. $z_m \sim \mathcal{N}(0, I)$).
+
+ Finally, we provide the latent variables $z_a$, $z_s$ and $z_m$ to the same decoder, which outputs the bi-directional optical flows and occlusion maps. More precisely, the decoder outputs the forward and backward optical flows at every time step, denoted by $\mathbf{F}_t^f$ and $\mathbf{F}_t^b$ respectively, and the corresponding occlusion maps $\mathbf{O}_t^f$ and $\mathbf{O}_t^b$. Note that the backward optical flows and occlusion maps are then provided to the generation module, while the forward optical flows and occlusion maps are used only for loss computation.
+
+ We employ a generation module inspired by [35]. After two down-sampling convolutional blocks applied on the initial frame $\mathbf{X}_0$, we obtain a feature map. We proceed independently for every frame to generate, warping the feature map according to the optical flow predicted by the motion module. Then we multiply the warped feature map by the occlusion map predicted by the occlusion estimator to diminish the impact of the features corresponding to the occluded parts. Finally, the masked feature maps are fed to a subsequent network to output the generated video. This network is composed of several residual blocks, followed by two up-sampling convolutional blocks.
+
+ **Objective functions.** Our GCN framework employs the evidence lower bound of the VAE framework. It is composed of a reconstruction term on the predicted motion vector and the Kullback-Leibler (KL) divergence between the conditional distribution of $z_n$ and its unit Gaussian prior:
+
+ $$\mathcal{L}_{VAE} = \frac{1}{N} \sum_{n=0}^{N} \| \boldsymbol{d}_{n} - \hat{\boldsymbol{d}}_{n} \|_{1} - \mathcal{D}_{KL}(\boldsymbol{z}_{n} \| \mathcal{N}(0, I)), \quad (7)$$
+
+ where $\hat{d}_n$ is the displacement predicted by the GCN.
+
+ **Forward-backward consistency.** Similarly to [32], we ensure the cycle consistency between the forward and backward optical flows. More precisely, for every non-occluded pixel location p, we minimize the $L_1$ distance between the corresponding optical flows:
+
+ $$\mathcal{L}_{Fc}(F^f, F^b) = \frac{1}{T} \sum_{t=1}^{T} \sum_{\boldsymbol{p}} \mathbf{O}_t^f(\boldsymbol{p}) |\mathbf{F}_t^f(\boldsymbol{p}) - \mathbf{F}_t^b(\boldsymbol{p} + \mathbf{F}_t^f(\boldsymbol{p}))|_1 + \mathbf{O}_t^b(\boldsymbol{p}) |\mathbf{F}_t^b(\boldsymbol{p}) - \mathbf{F}_t^f(\boldsymbol{p} + \mathbf{F}_t^b(\boldsymbol{p}))|_1$$
+ (8)
+
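+ As a sketch of this loss (our PyTorch illustration, assuming (T, 2, H, W) flows, (T, 1, H, W) occlusion maps with 1 marking non-occluded pixels, and bilinear warping via grid_sample):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def fb_consistency_loss(flow_f, flow_b, occ_f, occ_b):
+     """Cycle consistency between forward/backward flows, cf. Eq. (8)."""
+     t, _, h, w = flow_f.shape
+     ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
+                             torch.linspace(-1, 1, w), indexing="ij")
+     grid = torch.stack([xs, ys], dim=-1).expand(t, h, w, 2)
+
+     def warp(target, by):
+         # Sample `target` at p + by(p); pixel displacements are
+         # normalized to grid_sample's [-1, 1] coordinates.
+         disp = torch.stack([by[:, 0] / ((w - 1) / 2),
+                             by[:, 1] / ((h - 1) / 2)], dim=-1)
+         return F.grid_sample(target, grid + disp, align_corners=True)
+
+     term_f = occ_f * (flow_f - warp(flow_b, flow_f)).abs()
+     term_b = occ_b * (flow_b - warp(flow_f, flow_b)).abs()
+     return (term_f + term_b).sum(dim=(1, 2, 3)).mean()
+ ```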
+
+ **Smoothness.** Following [33], we employ a smoothness loss that penalizes high gradient values in the optical-flow map that do not correspond to high-gradient values in the image $X_0$ (for more details refer to [33]).
+
+ **Supervised flow.** To improve the quality of the generated videos in our multi-object setting, we take advantage of a pre-trained FlowNet2 [15] network for optical flow and occlusion estimation. FlowNet2 provides high-quality optical flow maps that we use as supervision for our motion decoder network via a standard L1 loss.
+
+ **Motion encoding uncertainty.** To allow the sampling of $z_m$ at test time, the output of the motion encoder $E_M$ is mapped to a unit Gaussian distribution via the KL divergence:
+
+ $$\mathcal{L}_m = -\mathcal{D}_{KL}(z_m \| \mathcal{N}(0, I)) \tag{9}$$
+
+ **Generation module.** The generation module is trained using state-of-the-art losses for video generation. Following [16, 26, 42] we adopt a PatchGAN discriminator trained with a least-squares loss. For the generator, we apply the structural similarity loss [43], the perceptual loss [17], the feature matching loss [42], and a standard pixel-level L1 reconstruction loss.
2109.10637/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2021-02-07T00:48:17.906Z" version="14.2.9" etag="o8sjAj7Zal6yPRytSuQC" type="device"><diagram id="8ROi6mTqjpBdapGSyzK2">…</diagram></mxfile>
2109.10637/main_diagram/main_diagram.pdf ADDED
Binary file (18.9 kB). View file
 
2109.10637/paper_text/intro_method.md ADDED
@@ -0,0 +1,36 @@
1
+ # Introduction
2
+
3
+ India is home to some of the world's most biodiverse regions, housing numerous endemic species [@bharucha2002biodiversity]. *Most forest areas in India are cohabited -- these are not protected areas* [@fsi2019]. Local communities maintain and take great care of these forests. High densities of carnivores and herbivores cohabiting with humans result in human-wildlife conflicts, leading to loss of crops and cattle, loss of wildlife, and, in some cases, loss of human life [@woodroffe2005people]. In the state of Maharashtra, India, the number of human-animal conflicts reported annually over the years 2014-2018 ranged between 4,496 and 8,311 cattle kills and between 22,568 and 41,737 crop-damage cases [@pinjakartimes].
4
+
5
+ One such region, which we focus on in this paper as an illustrative case study, is the Bramhapuri Forest Division in Chandrapur, Maharashtra, India, which is home to 2.8 tigers and 19000 humans per square kilometer. Studies by our on-field partner, a non-governmental organization (NGO), showed that more than fifty percent of the households in the Bramhapuri Forest Division had experienced crop depredation and livestock loss due to wildlife. Such conflicts impose an economic and psychological cost on the community. Additionally, the costs spill over to conservation efforts, as these conflict situations often prompt retaliatory killings of wildlife and the burning of forests. Figure [1](#fig:human_animal){reference-type="ref" reference="fig:human_animal"} shows a map of human-animal conflicts in the Bramhapuri Forest Division across 2014-17.
6
+
7
+ A big bottleneck in the mitigation of these conflicts is the lack of timely interventions. If one can predict these human-wildlife conflicts, governments and NGOs can execute timely interventions to reduce the loss of crops, livestock, and human life. We aim to build AI-based solutions to help with such interventions. To that end, the main objective of this paper is to predict the intensity of human-wildlife conflicts in a particular region as an illustrative case study, from which lessons can be carried over to other ecological domains that grapple with frequent human-wildlife conflicts. In order to make such predictions, the basic requirement is the presence of conflict data over the years. Through years of interactions with the government, and by conducting ground surveys, our partner NGO has collected a detailed human-animal conflict dataset since 2014.
8
+
9
+ **Contributions.** *To the best of our knowledge, this is the first effort at predicting human-wildlife conflicts in unprotected areas* and this results in three main challenges. The first and foremost challenge is the need to identify the "right" features that will assist in the accurate prediction of conflicts. Based on observations from the data and consultations with domain experts, conflicts tend to happen in certain types of areas (near water bodies, low elevation areas, etc.) depending on the time period. Second, the conflicts are very sparse and not evenly distributed temporally and spatially. For instance, the dataset used in this paper has only 0.38 conflicts per month per 100 km$^2$. This poses a major challenge while trying to apply traditional machine learning tools to predict conflicts. Thirdly, for predictions to be useful, they have to be at a spatial granularity of a large village or few small villages ($\approx$ 4 km $\times$ 4 km), which is challenging.
10
+
11
+ To address these challenges, we make the following key contributions: (i) We investigate a wide variety of features and conclude that simple features (like latitude, longitude, and terrain elevation) are insufficient in predicting conflicts successfully; (ii) Therefore, we move to more complex features such as satellite images and we provide a novel way of generating more training data for training Convolutional Neural Networks (CNNs) to make intensity predictions; (iii) To better handle sparse training data, we provide a way to apply curriculum learning and also provide a novel hierarchical classification approach. Finally, on the real test data set, our methods provide a prediction accuracy of 80.4% for a spatial granularity of 4 km $\times$ 4 km. In addition to the results on a real data set, we are also in the process of deploying interventions on the ground based on our predictions (details in Section [6](#sec:pilot){reference-type="ref" reference="sec:pilot"}).
12
+
13
+ <figure id="fig:human_animal" data-latex-placement="ht">
14
+ <img src="images/Human-Animal-Conflicts-With-Animals.png" style="width:3.3in;height:1.2in" />
15
+ <figcaption>GPS plot of human-animal conflicts in the Bramhapuri Forest Division, India from 2014 to 2017</figcaption>
16
+ </figure>
17
+
18
+ # Method
19
+
20
+ We have filtered past data of conflicts, ${\cal C}$ where each conflict $i \in {\cal C}$ occurs at a location, $l_i$ (latitude, longitude) and during a time period, $t_i$ (e.g., Feb 2015). Using the past conflict data for training, the objective is to predict conflict intensity, $y_r$ (e.g., low, medium, high), in a given region $r$ (defined as a continuous area of land covering a large village or few small villages) of size roughly $m$ km $\times$ $n$ km, *during a different time period than the one for which conflict data is provided*.
21
+
22
+ In order to predict conflict intensity, we first convert the given conflict data into a learning problem with $R$ training examples $\{(\langle x_r,t_r \rangle, y_r)\}_{r \in R}$ (a schematic sketch of this conversion follows the list below), where:\
23
+ **--** $x_r$ represents the features corresponding to a region $r$ of size $m$ km $\times$ $n$ km.\
24
+ **--** $t_r$ is time period of interest (e.g., February of 2015) in the training data.\
25
+ **--** $y_r$ is the intensity of conflicts given by $f(\sum_{i \in {\cal C}} I_{l_i, r})$, where $f(.)$ maps the number of conflicts to an intensity (e.g., low, medium, high in case of classification and the actual number of conflicts in case of regression) and $I$ is the indicator function that is 1 if $l_i \in r$ and 0 otherwise.
26
+
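+ A schematic sketch of this conversion (our illustration; the grid size and intensity thresholds are placeholders, not the paper's values):
+
+ ```python
+ from collections import Counter
+
+ def intensity(count):
+     # map a raw conflict count to a label; thresholds are placeholders
+     return "low" if count <= 1 else ("medium" if count <= 3 else "high")
+
+ def build_examples(conflicts, cell_deg=0.036):   # ~4 km per cell in latitude
+     """conflicts: iterable of (lat, lon, period) records.
+     Cells with no recorded conflicts are omitted in this toy version."""
+     counts = Counter(
+         (round(lat / cell_deg), round(lon / cell_deg), period)
+         for lat, lon, period in conflicts
+     )
+     return {key: intensity(n) for key, n in counts.items()}
+ ```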
27
+ Formally, the objective is to build a predictor, $\mathcal{P}$ so as to minimize the loss between the predicted intensity, $\mathcal{P}(x_r,t_r)$ and ground truth intensity, $y_r$ for a test data set (where $t_r$ will be for the time period of test dataset). There are three key challenges in building such a predictor:\
28
+ ***Challenge 1***: Identifying the features to be considered for each region, i.e., $x_r$ to accurately predict $y_r$.\
29
+ ***Challenge 2***: Predicting accurately with a few training examples and a significant class imbalance.\
30
+ ***Challenge 3***: Predicting conflict intensities for regions at the right spatial granularity ($\approx$ 4 km $\times$ 4 km).
31
+
32
+ Towards addressing the three challenges, we first investigate different types of features and identify the ones that provide the highest accuracy. We then provide our key technical contributions that handle the sparsity of training examples and provide predictions at the desired spatial granularity of regions. To address ***challenge 1***, we work through a progression of features from simple to complex. We begin with a fixed set of regions (obtained through clustering of conflicts) and use the region identifier (cluster number) as $x_r$. This is described in the first part of Section [5.2](#sec:clustering){reference-type="ref" reference="sec:clustering"}. However, since having just a region identifier does not capture the connectivity between regions, we compute an embedding for a region determined based on the connections to other nearby regions in the later part of Section [5.2](#sec:clustering){reference-type="ref" reference="sec:clustering"}. Then, to evaluate the importance of terrain properties, we use the elevation of the region (only data available at the right level of granularity) in Section [5.3](#sec:spatial){reference-type="ref" reference="sec:spatial"}. Finally, in Section [5.4](#sec:satimg){reference-type="ref" reference="sec:satimg"}, we employ satellite imagery for a given region to not only capture context (e.g., forests, water sources, croplands, settlements, roads, etc.), but also the neighborhood of the region. The features that are common for all sections (*train* and *test* datasets) are mentioned below and the specialized features for each prediction method are mentioned in the corresponding sections. Depending on whether we are training or testing, $t_r$ is different. $t_r$ is the month and year for a specific example in training/testing data.
33
+
34
+ | Split   | Years   |
+ |---------|---------|
+ | *train* | 2014-16 |
+ | *test*  | 2017    |
2111.08919/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2111.08919/paper_text/intro_method.md ADDED
@@ -0,0 +1,18 @@
1
+ # Introduction
2
+
3
+ Video Captioning [\[4\]](#page-8-0) aims to generate text describing the visual content of a given video. Driven by the neural encoder-decoder paradigm, research in video captioning has made significant progress [\[29,](#page-9-0) [36\]](#page-9-1). To make further advances in video captioning, it is essential to accurately evaluate generated captions. The ideal metric is human evaluation, but collecting human judgments is time-consuming and labor-intensive. Thus, various automatic metrics are applied for video caption evaluation.
4
+
5
+ However, most widely applied video caption metrics, like BLEU [\[19\]](#page-8-1), ROUGE [\[12\]](#page-8-2), CIDEr [\[28\]](#page-9-2), and BERTScore [\[35\]](#page-9-3), come from other tasks, such as machine translation, text summarization, and image captioning, and may therefore neglect the special characteristics of video captioning and limit its development. Furthermore, these automatic metrics require human-labeled references — and are thus called reference-based metrics — a requirement that causes three intrinsic drawbacks: (1) They cannot be used when the provided videos have no human-labeled references, which is not uncommon in an age when millions of reference-free videos are produced online every day. (2) They may over-penalize correct captions, since references hardly describe all details of a video due to the one-to-many nature [\[33\]](#page-9-4) of the captioning task, especially when the number of references is limited. Fig[.1](#page-0-0) (a) shows one such example, where a candidate caption correctly describes "a rock" while reference-based metrics punish this word because the references do not contain it. (3) As pointed out by [\[23\]](#page-9-5), reference-based metrics may under-penalize captions with "hallucinating" descriptions, since these metrics only measure similarity to references and cannot fully capture visual relevance. For example, as shown in Fig[.1](#page-0-0) (b), because the word "games" appears in the references, some reference-based metrics return higher scores for caption B than caption A, even though "different games" is a "hallucinating" phrase that is not related to the video.
6
+
7
+ These drawbacks inspire us to develop a reference-free metric. From a human evaluator's viewpoint, if a caption is *consistent* with the source video, *i.e.*, the visual contents of the video are comprehensively and accurately described by the caption, then the caption is a high-quality one, and it need not be similar to the reference literally or semantically. A promising evaluation metric should imitate the human evaluation process and introduce video content into the evaluation. Nowadays, thanks to the boom of large-scale vision-language pre-training models [\[11,](#page-8-3) [17,](#page-8-4) [21\]](#page-8-5), the gaps between visual and linguistic embeddings have been further narrowed, enabling us to judge whether a caption is consistent with a video.
8
+
9
+ Motivated by this progress, we propose a *reference-free* metric, EMScore (Embedding-Matching-based score), for evaluating video captions, which exploits a pre-trained large-scale vision-language model to extract visual and linguistic embeddings. Specifically, to compare the video and caption comprehensively, EMScore averages matching scores at both the coarse-grained (video and caption) and fine-grained (frames and words) levels. For the coarse-grained level, we compute the similarity between the global embeddings of the video and the candidate caption, which takes the overall understanding of the video into account and evaluates candidates from a global perspective. For the fine-grained level, we compute cosine similarities between frame and word embeddings, which accounts for the detailed characteristics of the video (visual elements change over time) and provides more interpretability for EMScore. Furthermore, considering the potential information gain from references (such as syntactic structure), and since embedding matching within the same language domain is easier than across modalities, we extend EMScore to the setting where human-labeled references are available and name the extended metric EMScore$_{ref}$.
10
+
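+ The matching itself can be sketched as follows (our hedged illustration of the idea, not the official implementation; the embeddings are assumed L2-normalized, and the exact aggregation is an assumption):
+
+ ```python
+ import torch
+
+ def emscore(video_emb, frame_embs, caption_emb, word_embs):
+     # coarse-grained: global video embedding vs. global caption embedding
+     coarse = video_emb @ caption_emb
+     # fine-grained: greedy matching between frame and word embeddings
+     sim = frame_embs @ word_embs.T                # (n_frames, n_words)
+     precision = sim.max(dim=0).values.mean()      # best frame per word
+     recall = sim.max(dim=1).values.mean()         # best word per frame
+     fine = 2 * precision * recall / (precision + recall)
+     return (coarse + fine) / 2
+ ```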
11
+ Currently, there is no available video caption quality dataset that can be used to evaluate metrics. To facilitate the development of video captioning evaluation metrics, we are the first to collect a video caption quality dataset, VATEX-EVAL, which contains 54,000 human ratings for video-caption pairs. Experiments on VATEX-EVAL show the following advantages of introducing the video into evaluation. First, EMScore has a higher human correlation than popular automatic metrics like BLEU, ROUGE, and CIDEr. Second, EMScore has low reference dependency; *e.g.*, EMScore's 0-reference Kendall correlation with humans is similar to BLEU-1's 4-reference correlation, and EMScore$_{ref}$ with 1 reference is similar to CIDEr with 9 references. EMScore can therefore significantly reduce the cost of manually annotating references. Third, EMScore is more robust to quality drift, achieving higher correlations than the other automatic metrics when evaluating captions of different qualities. Furthermore, we collect another dataset, ActivityNet-FOIL, which contains "hallucinating" captions, to verify the sensitivity of EMScore. Experimental results show that EMScore is more effective at identifying "hallucinating" captions than the other metrics.
12
+
13
+ Our contributions are summarized as follows:
14
+
15
+ - We propose a reference-free video captioning metric, EMScore, that directly measures consistency with video content at both coarse-grained and fine-grained levels, and we extend it to the reference-available condition.
16
+ - We collect two datasets VATEX-EVAL and ActivityNet-FOIL for researchers to study the metrics' correlation with human judgments and sensitivity in the "hallucinating" case, respectively.
17
+
18
+ - <span id="page-2-0"></span>Exhaustive experimental results verify that EMScore achieves higher human correlation and effectively identifies "hallucinating" captions.
2112.08907/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2112.08907/main_diagram/main_diagram.pdf ADDED
Binary file (89.3 kB). View file
 
2112.08907/paper_text/intro_method.md ADDED
@@ -0,0 +1,67 @@
1
+ # Introduction
2
+
3
+ ![image](figures/figure_1.pdf){width="46%"}
6
+
7
+ Explainable AI refers to artificial intelligence methods and techniques that provide human-understandable insights into how and why an AI system chooses actions or makes predictions. Such explanations are critical for ensuring reliability and improving trustworthiness by increasing user understanding of the underlying model. In this work we specifically focus on creating deep reinforcement learning (RL) agents that can explain their actions in sequential decision making environments through natural language.
8
+
9
+ In contrast to the majority of contemporary work in the area, which focuses on supervised machine learning problems requiring singular, instance-level *local explanations* [@you2016image; @xu2015show; @wang2017residual; @wiegreffe2021teach], such environments---in which agents need to reason causally about actions over a long series of steps---require an agent to take into account both environmentally grounded context and goals when producing explanations. Agents implicitly hold beliefs about the downstream effects---the changes to the world---that actions taken at the current timestep will have. Explanations in these environments therefore need an additional *temporally extended* component that takes the full trajectory's context into account---complementary to the *immediate* step-by-step explanations.
10
+
11
+ Interactive Fiction (IF) games (Fig. [\[fig:figure1\]](#fig:figure1){reference-type="ref" reference="fig:figure1"}) are partially observable environments where an agent perceives and acts upon a world using potentially incomplete textual natural language descriptions. They are structured as long puzzles and quests that require agents to reason about thousands of locations, characters, and objects over hundreds of steps, creating chains of dependencies that an agent must fulfill to complete the overall task. They provide ideal experimental test-beds for creating agents that can both reason in text and explain it.
12
+
13
+ We introduce an approach to game playing agents---**Hierarchically Explainable Reinforcement Learning (HEX-RL)**---that is designed to be *inherently* explainable, in the sense that its internal state representation---i.e. belief state about the world---takes the form of a symbolic, human-interpretable knowledge graph (KG) that is built as the agent explores the world. The graph is encoded by a Graph Attention network (GAT) [@velivckovic2017graph] *extended* to contain a hierarchical graph attention mechanism that focuses on different sub-graphs in the overall KG representation. Each of these sub-graphs contains different information such as attributes of objects, objects the player has, objects in the room, current location, etc. Using these encoding networks in conjunction with the underlying world KG, the agent is able to create *immediate explanations* akin to a running commentary that points to the facts within this knowledge graph that most influence its current choice of actions when attempting to achieve the tasks in the game on a step-by-step basis.
14
+
15
+ While graph attention can tell us which elements in the KG are attended to when maximizing expected reward from the current state, it cannot explain the intermediate, unrewarded dependencies that need to be satisfied to meet the long term task goals. For example, in the game *zork1*, the agent needs to pick up a lamp early on in the game---an unrewarded action---but the lamp is only used much later on to progress through a location without light. Thus, our agent additionally analyzes an overall episode trajectory---a sequence of knowledge graph states and actions from when the agent first starts in a world to either task completion or agent death---to find the intermediate set of states that are most important for completing the overall task. This information is used to generate a *temporally extended explanation* that condenses the *immediate step-by-step explanations* to only the most important steps required to fulfill dependencies for the task.
16
+
17
+ Our contributions are twofold: (1) we create an inherently explainable agent that uses an ever-updating knowledge-graph based state representation to generate step-by-step immediate explanations for executed actions as well as performing a post-hoc analysis to create temporal explanations; and (2) a thorough experimental study against strong baselines that shows that our agent generates significantly improved explanations for its actions when rated by human participants *unfamiliar with the domain* while not losing any task performance compared to the current state-of-the-art knowledge graph-based agents.
18
+
19
+ # Method
20
+
21
+ At each step, a total score $R_{t}$ and an observation $o_t$ are received---the latter consisting of $\left(o_{t_{\text{desc}}}, o_{t_{\text{game}}}, o_{t_{\text{inv}}}, a_{t-1}\right)$, corresponding to the room description, game feedback, inventory, and previous action---and processed using a GRU-based encoder with the hidden state from the previous step, combining them into a single observation embedding $\mathbf{o}_{t} \in \mathbb{R}^{d_{text} \times c}$ (bottom of Fig. [1](#fig:pipeline){reference-type="ref" reference="fig:pipeline"}).
22
+
23
+ The full knowledge graph $G_t$ is processed via Graph Attention Networks (GATs) [@velivckovic2017graph] followed by a linear layer to get the graph representation $\mathbf{g}_{\mathbf{t}} \in \mathbb{R}^{d_{text}}$ (middle of Fig. [1](#fig:pipeline){reference-type="ref" reference="fig:pipeline"}). We compute *LSTM attention* between $\mathbf{o}_{t}$ and $\mathbf{g}_{\mathbf{t}}$ as: $$\begin{align}
24
+ \boldsymbol{\alpha}_{\text{LSTM}}=\operatorname{softmax}\left(\boldsymbol{W}_{\text{l}} \boldsymbol{h}_{\text{LSTM}}+\boldsymbol{b}_{\text{l}}\right)\\
25
+ \boldsymbol{h}_{\text {LSTM}}=\tanh \left(\boldsymbol{W}_{\mathrm{o}} \boldsymbol{o}_{\text {t}} \oplus\left(\boldsymbol{W}_{\mathrm{g}} \mathbf{g}_{\text {t}}+\boldsymbol{b}_{\mathrm{g}}\right)\right)
26
+ \end{align}$$ where $\oplus$ denotes the addition of a matrix and a vector. $\boldsymbol{W}_{\mathrm{l}} \in \mathbb{R}^{d_{text} \times d_{text}}$, $\boldsymbol{W}_{\mathrm{g}} \in \mathbb{R}^{d_{text} \times d_{text}}$, $\boldsymbol{W}_{\mathrm{o}} \in \mathbb{R}^{d_{text} \times d_{text}}$ are weights and $\boldsymbol{b}_{\mathrm{l}} \in \mathbb{R}^{d_{text}}$, $\boldsymbol{b}_{\mathrm{g}} \in \mathbb{R}^{d_{text}}$ are biases. The overall representation vector is updated as: $$\begin{equation}
27
+ \mathbf{q}_{\mathbf{t}} = \mathbf{g}_{\mathbf{t}} + \sum_{i}^{c} \boldsymbol{\alpha}_{\mathrm{LSTM}, i} \odot \boldsymbol{o}_{\mathrm{t}, i}
28
+ \end{equation}$$ where $\odot$ denotes dot-product and $c$ is the number of $\boldsymbol{o}_{\mathrm{t}}$'s components.
29
+
30
+ Sub-graphs are also encoded by GATs to get the graph representation $\mathbf{g}'_{\mathbf{t}} \in \mathbb{R}^{d_{G_t,sub} \times m}$ (no. of subgraphs). The *Hierarchical Graph Attention* between $\mathbf{q}_{t} \in \mathbb{R}^{d_{G_t,sub}}$[^1] and $\mathbf{g}'_{\mathbf{t}}$ is calculated by: $$\begin{align}
31
+ \boldsymbol{\alpha}_{\text{Hierarchical}}=\operatorname{softmax}\left(\boldsymbol{W}_{\text{H}} \boldsymbol{h}_{\text{H}}+\boldsymbol{b}_{\text{H}}\right)\\
32
+ \boldsymbol{h}_{\text {H}}=\tanh \left(\boldsymbol{W}_{\mathrm{g}'} \boldsymbol{g}'_{\text {t}} \oplus\left(\boldsymbol{W}_{\mathrm{q}} \boldsymbol{q}_{\text {t}}+\boldsymbol{b}_{\mathrm{q}}\right)\right)
33
+ \end{align}$$ where $\boldsymbol{W}_{\mathrm{H}} \in \mathbb{R}^{d_{G_t,sub} \times d_{G_t,sub}}$, $\boldsymbol{W}_{\mathrm{g}'} \in \mathbb{R}^{d_{G_t,sub} \times d_{G_t,sub}}$, $\boldsymbol{W}_{\mathrm{q}} \in \mathbb{R}^{d_{G_t,sub} \times d_{G_t,sub}}$ are weights and $\boldsymbol{b}_{\mathrm{H}} \in \mathbb{R}^{d_{G_t,sub}}$, $\boldsymbol{b}_{\mathrm{q}} \in \mathbb{R}^{d_{G_t,sub}}$ are biases. We then obtain the state representation, which combines the textual observations, the full knowledge graph, and the sub-knowledge graphs: $$\begin{align}
34
+ \mathbf{v}_{\mathbf{t}} = \mathbf{q}_{\mathbf{t}} + \sum_{i}^{s} \boldsymbol{\alpha}_{\mathrm{Hierarchical}, i} \odot \boldsymbol{g}'_{\mathrm{t}, i}
35
+ \end{align}$$ where $s$ is the number of sub-graphs ($4$ in our paper). The full architecture can be found in Figure [1](#fig:pipeline){reference-type="ref" reference="fig:pipeline"}.
36
+
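+ Schematically, both attention stages mix a query vector with a set of component vectors in the same way (an illustrative sketch under our reading of the equations above, not the authors' code):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class MixAttention(nn.Module):
+     """Fuse a query vector with c component vectors, as in the equations above."""
+     def __init__(self, d):
+         super().__init__()
+         self.W_att = nn.Linear(d, d)               # plays the role of W_l / W_H
+         self.W_comp = nn.Linear(d, d, bias=False)  # W_o / W_g'
+         self.W_query = nn.Linear(d, d)             # W_g / W_q
+
+     def forward(self, components, query):
+         # components: (c, d) observation parts or sub-graph embeddings; query: (d,)
+         h = torch.tanh(self.W_comp(components) + self.W_query(query))
+         alpha = torch.softmax(self.W_att(h), dim=0)
+         return query + (alpha * components).sum(dim=0)
+ ```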
37
+ The agent is trained via the Advantage Actor Critic (A2C) [@mnih2016asynchronous] method to maximize long term expected reward in the game in a manner otherwise unchanged from @ammanabrolu2020avoid (See Appendix [\[app:a2c\]](#app:a2c){reference-type="ref" reference="app:a2c"}). These attention values thus reflect the portions of the knowledge graphs that the agent must focus on to best achieve this goal of maximizing reward.
38
+
39
+ The graph attention $\boldsymbol{\alpha}_\mathrm{Hierarchical}$ is used to capture the relative importance of game state observations and KG entities in influencing action choice. For each sub-graph, the graph attention, $\boldsymbol{\alpha}_\mathrm{Hierarchical, i} \in \mathbb{R}^{n_{\mathrm{nodes}} \times m}$ is summed over all the channels $m$ to obtain $\boldsymbol{\alpha}'_\mathrm{Hierarchical, i} \in \mathbb{R}^{n_{\mathrm{nodes}} \times 1}$, showing the importance of the KG nodes in the $i$th sub-graph. The top-$k$ valid entities (and corresponding edges) with highest absolute value of its attention form the set of knowledge graph triplets that best locally explain the action $a_t$.
40
+
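+ A small sketch of this selection step (names and shapes follow the description above; our illustration):
+
+ ```python
+ import torch
+
+ def top_k_triplets(alpha, triplets, k=3):
+     # alpha: (n_nodes, m) hierarchical attention for one sub-graph
+     scores = alpha.sum(dim=1).abs()        # collapse the m channels
+     idx = torch.topk(scores, k).indices    # most-attended KG nodes
+     return [triplets[i] for i in idx.tolist()]
+ ```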
41
+ In order to make the explanation more readable for a human reader, we further transform knowledge graph triplets to natural language by template filling.
42
+
43
+ We create templates for each type of sub-graphs $G^{atr}, G^{inv}, G^{obj}, G^{loc}$.
44
+
45
+ - $\langle object, is, attribute \rangle$$\rightarrow$"*Object is attribute*"
46
+
47
+ - $\langle player, has, object \rangle$$\rightarrow$"*I have object*"
48
+
49
+ - $\langle object, in, location \rangle$$\rightarrow$"*Object is in location*"
50
+
51
+ - $\langle location 1, direction, location 2 \rangle$$\rightarrow$"*location 1 is in the direction of location 2*", e.g. $\langle forest, north, house \rangle$ is converted to "*Forest is in the north of house*"
52
+
53
+ More examples can be found in Appendix [\[app:template\]](#app:template){reference-type="ref" reference="app:template"}.
54
+
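+ The template filling itself is straightforward; a toy sketch (the dictionary keys and the directional fallback are our assumptions):
+
+ ```python
+ TEMPLATES = {
+     "is":  "{0} is {2}",        # <object, is, attribute>
+     "has": "I have {2}",        # <player, has, object>
+     "in":  "{0} is in {2}",     # <object, in, location>
+ }
+
+ def verbalize(triplet):
+     subj, rel, obj = triplet
+     # directional relations fall back to the location template
+     template = TEMPLATES.get(rel, "{0} is in the {1} of {2}")
+     return template.format(subj, rel, obj).capitalize()
+
+ print(verbalize(("forest", "north", "house")))
+ # -> "Forest is in the north of house"
+ ```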
55
+ Graph attention tells us which entities in the KG are attended to when making a decision, but is not enough alone for explaining "why" actions are the right ones in the context of fulfilling dependencies that may potentially be unrewarded by the game---especially given the fact that there are potentially multiple ways of achieving the overall task. HEX-RL thus saves trajectories for hundreds of test time rollouts of the games, performed once a policy has been trained (Table [\[tab:log\]](#tab:log){reference-type="ref" reference="tab:log"} and Appendix [\[app:hexrl_hyper\]](#app:hexrl_hyper){reference-type="ref" reference="app:hexrl_hyper"}). The game trajectories consist of all the game states, actions taken, predicted critic values, game scores, the knowledge graphs, and the immediate step level explanations generated as previously described. HEX-RL produces a temporal explanation by performing a post-hoc analysis on these game trajectories. The agent then analyzes and filters these trajectories in an attempt to find the subset of states that are most crucial to achieving the task as summarized in Figure [\[fig:bayes\]](#fig:bayes){reference-type="ref" reference="fig:bayes"}---then using that subset of states to generate temporal trajectory level explanations.
56
+
57
+ We first train a Bayesian model to predict the conditional probability $\mathbb{P}(A \mid B_i)$ of a game step ($A$) given any other possible game step ($B_i$) in the game trajectories. More specifically, each game step is composed of three elements: the game state $o_t$, the action $a_t$, and the current knowledge graph $G_t$. The key intuition is that state-action pairs that appear in a certain ordering across multiple trajectories are more likely to depend on each other.
58
+
59
+ The set of game steps with the highest $\mathbb{P}(A\mid B_i)$ is used to explain taking the action associated with game state $A$. For example, "take egg" ($A$) is required to "open egg" ($B$), and $\mathbb{P}(A \mid B) = 1$, hence "open egg" is used as a reason why action "take egg" must be taken first. The initial set of game states $\mathbf{X}$ is filtered into $\mathbf{X}_1$ by working backwards from the final goal state by finding the set of states that form the most likely chain of causal dependencies that lead to it. Details can be found in Appendix [\[app:hexrl_hyper\]](#app:hexrl_hyper){reference-type="ref" reference="app:hexrl_hyper"} and [\[app:bays\]](#app:bays){reference-type="ref" reference="app:bays"}.
60
+
61
+ ![image](figures/xRL_bay.pdf){width="46%"}
64
+
65
+ Following this, we apply a GPT-2 [@Radford2019LanguageMA] language model trained to generate actions from transcripts of human text-game play-throughs---known as the Contextual Action Language Model (CALM) [@yao-etal-2020-keep]---to further filter out important states. As this language model is trained on human transcripts, we hypothesize that it can further narrow the set of important states by finding states whose corresponding actions a human player would be more likely to perform---thus potentially leading to more natural explanations. CALM takes as input an observation $o_t$, an action $a_t$, and the following observation $o_{t+1}$, and predicts the next valid actions $a_{t+1}$. In our work, we use CALM as a filter to look for relations between a game step $A$ and the explanation candidates $B_i \in \mathbf{X}_1$. We feed CALM the prompt $o_A,a_A,o_{B_i}$ to get an action candidate set. When the two game steps $A$ and $B_i$ are highly correlated, given $o_A$, $a_A$ and $o_{B_i}$, CALM should successfully predict $a_{B_i}$ with high probability. The game steps $B_i$ whose associated action $a_{B_i}$ is in this generated action candidate set are saved as the next set of filtered important candidate game states ($\mathbf{X}_2$).
66
+
67
+ To better account for the irregularities of the puzzle like environment, we adopt a *semantic filter* to obtain the final important state set $\mathbf{X}_3$. Here, given $A,B_i \in \mathbf{X}_2$, states are further filtered on the basis of whether one of these scenarios occurs: (1) $a_A$ and $a_{B_i}$ contain the same entities, e.g. "*take egg*" and "*open egg*". (2) $G_A$ and $G_{B_i}$ share the same entities, e.g. "lamp" occurs in both observations. (3) $A$ and $B_i$ occur in the same location, e.g. after taking action $a_A$, the player enters "kitchen" and $B$ occurs in "kitchen". (4) The state has a non-zero reward or a high absolute critic value, indicating that it is either a state important for achieving the goals of the game or it is a state to be avoided. The final set of important game states $\mathbf{X}_3$ is used to synthesize post-hoc temporal explanations for why an action was performed in a particular state---as seen in Figure [\[fig:figure1\]](#fig:figure1){reference-type="ref" reference="fig:figure1"}---taking into account the overall context of the dependencies required to be satisfied and building on the immediate step level explanations for each given state in $\mathbf{X}_3$. Ablation studies pin-pointing the relative contributions of the different filters are found in Section [4.4](#sec:ablation){reference-type="ref" reference="sec:ablation"}. We concluded that all three steps of the filtering process to identify important states are necessary for creating coherent temporal explanations that effectively take into account the context of the agent's goals.
2201.11192/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2021-09-08T19:29:34.356Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" etag="Wo7w9-LHW_Hll5-QM8IG" version="15.0.6" type="google"><diagram id="C5RBs43oDa-KdzZeNtuy" name="Page-1">7VjLdtowEP0an5Mu4PgBJiwxUNI0fQVa2u6ErdhqhAWyCDhfX8mW8UMkkAAnpM0GpOuRLM2duSNZs7rT1YCCWfCJeBBrpu6tNKunmaZhtvivAOIUaLXTvk+RJ01yYIjuoQR1iS6QB6OSISMEMzQrgy4JQ+iyEgYoJcuy2Q3B5bfOgA8VYOgCrKJj5LEgRc+zXQn8AiI/yN5s2HJ/U5AZy51EAfDIsgBZfc3qUkJY2pquuhAL12V+GX+Ix/jq1h5cfovm4LvzcfT5Ry2d7P1Thqy3QGHInj31/PL2ft6ejF3zdxCaljdz8aBmpVPfAbyQ/pJ7ZXHmQOhxf8ouoSwgPgkB7ueoQ8ki9KB4jc57uc0VITMOGhz8AxmLZXCABSMcCtgUy6fpO8WLKpxt2bC0i8iCuvCRXUoCGaA+ZI/YmWtWeTJAMoWMxnwchRgwdFdeHJBx6a/tct/zhnT/E6gwGgoXHYyTyZGrmTbmK3cmlLd80erPF3xRJNzI1xWY8CQu+Rhg5Ie87XKXQsqBO0gZ4nnSkQ+myPNSOmGE7sEkmU8QOiMoZMlum47W7K35EhPAlbYhieXgPHWKTD4ch6rn5ew1va63mulUUoUy7nfmRk7+VWymYEJubiIeE1Xy1mt4Pp+6QueIQiiEJWE04q0zWPfr/L/nXGhmV9A4gy7iapl0gkSW3ikEl9NtGSAGhzOQJMCSK/im1NpA1W7J9SAlxrle4sNoyP4yF9m2hIKCvtr6kdKn+V8ombmjkjX2VLJkaIdSEBcMpBCoySRjotGoxESrUpG22GdykYdBuoKDpqVhvAVK0R36npGyX80zFTKu+XmqNiK1YSAOVkrZuxZF79+oecbjCiuKnlkuenvWvKx0tsojjlcCVXY7E3IH/SR7+AMHkSmIIpXls87AUQvfllJXzsljFT6rKnIvXfhaJ6Fn3KU0/inHJ51folNvZt3eqviwF5doOp4ONnbUQfsQBVOpcLa9uSKup0jXJUcdPgHPldhYCUoOJZ9MBMeLaGdrm3QaFenMtOh07wvq9W9EGMBFmTxFPWxUQ7z9wnpoK37sAjoRF2V9yIh7u6HadL9kYLSYZKiZYXwVBRieaF1qtk+MB8N6q0PVc/ZrubmZJ3lzU7+oOBCT5Q7HSefVHCfN4x0neTf/WJ2Skn/wt/p/AQ==</diagram></mxfile>
2201.11192/main_diagram/main_diagram.pdf ADDED
Binary file (11.8 kB). View file
 
2201.11192/paper_text/intro_method.md ADDED
@@ -0,0 +1,97 @@
1
+ # Introduction
2
+
3
+ The degradation of the natural world is unprecedented in human history and a key driver of the climate crisis and the Holocene extinction (Ceballos and Ehrlich 2018). Forests play a significant role in the planet's carbon cycle, directly impacting local and global climate through its biogeophysical effects and as carbon sinks, sequestering and storing carbon through photosynthesis (Griscom et al. 2017).
4
+
5
+ However, since the year 2000, we have lost 361 million ha of forest cover, equivalent to the size of Europe, mainly in tropical areas (Hansen et al. 2013). This accounts for 18% of global anthropogenic emissions and contributes to driving up atmospheric carbon levels (IPCC 2019). Forests, especially tropical forests, also provide habitats for 80% of land-based biodiversity, and with the increasing risk and frequency of wildfires, droughts, and extreme weather, forest ecosystems are under severe pressure (Shi et al. 2021).
10
+
11
+ To avoid planetary tipping points (Rockström et al. 2009) and maintain a stable and livable climate, humankind urgently needs to reduce carbon emissions until 2050 and restore essential ecosystems (IPCC 2021). Forests and natural carbon sequestration are important climate change mitigation strategies (Canadell and Raupach 2008), with a biophysical mitigation potential of 5,380 MtCO2 per year on average until 2050 (IPCC 2019).
12
+
13
+ Forestry is a large industry, and the causes of deforestation are mostly economically driven (FAO 2020; Geist and Lambin 2001). For the last 20 years, major conservation efforts have been underway to mitigate and safeguard against these losses. One of the global financing strategies is carbon offsets (Blaufelder et al. 2021). This started as the Clean Development Mechanism (CDM) under the Kyoto Protocol, allowing governments and business organizations from industrialized countries to invest in forestry in developing countries by buying carbon credits to offset industrialized emissions (FAO 2020). Several other independent bodies have since developed official standards for verifying and certifying carbon offsetting projects, such as the Gold Standard (GS) and the Verified Carbon Standard (VERRA). The certification process for forest carbon offsetting projects is capital- and labour-intensive, especially due to the high cost of manual monitoring, verification and reporting (MVR) of the forest carbon stock.
14
+
15
+ The carbon offsetting market is growing rapidly and is expected to grow by a factor of 100 until 2050 due to high demand and available capital (Blaufelder et al. 2021). However, the main obstacle is the limited supply of offsetting projects, as forest owners lack upfront capital and market access (Kreibich and Hermwille 2021).
16
+
17
+ Recent research investigations (Badgley et al. 2021; West et al. 2020) have shown that current manual forest carbon stock practices systematically overestimate forestry carbon offsetting projects: up to 29% of the offsets analyzed were overestimated, totaling up to 30 million tCO2e (CO<sub>2</sub> equivalents) worth approximately \$410 million. The overestimation was identified to come from subjective estimation and modeling of the carbon stock baseline and of the projects' additionality and leakage reporting. There is thus a need for higher-quality carbon offsetting protocols and for greater transparency and accountability in the MVR of these projects [\(Haya et al. 2020\)](#page-5-10).
20
+
21
+ There are three key aspects that are important for the use of remote sensing in MVR of forest carbon stock. One aspect is financial; using available and accessible technology and sensors to lower the cost and upfront capital requirements for forest owners to get certified, especially in low and middle-income countries. The second aspect is reducing subjectivity in estimating carbon stock and increasing trustworthiness and transparency in the carbon offsetting certification protocols. And lastly, the solutions need to be scalable due to the urgency of financing forest restoration, especially in tropical regions.
22
+
23
+ Various verification bodies, new ventures, and academia are currently developing remote sensing technologies to automate parts of the certification process of forestry carbon offsetting projects [\(Narine, Popescu, and Malambo 2020;](#page-6-3) [Dao et al. 2019\)](#page-5-11). Satellite imagery is increasing in quality and availability and, combined with state-of-the-art deep learning and lidar, promises to soon map every tree on earth [\(Hanan and Anchang 2020\)](#page-5-12) and to enable forest aboveground biomass and carbon to be estimated at scale [\(Saatchi et al. 2011;](#page-6-4) [Santoro et al. 2021\)](#page-6-5). Compared to current manual estimates, these advancements reduce time and cost and increase transparency and accountability, thus lowering the threshold for forest owners and buyers to enter the market [\(Lütjens, Liebenwein, and Kramer 2019\)](#page-6-6). Nevertheless, these algorithms risk further contributing to the systematic overestimation of carbon stocks rather than reducing it, and they are not applicable to small-scale forests below 10,000 ha [\(White et al. 2018\)](#page-7-1), [\(Global Forest Watch 2019\)](#page-5-13).
24
+
25
+ Accurately estimating forest carbon stock, especially for small-scale carbon offset projects, presents several interesting machine learning challenges, such as high variance of species and occlusion of individual tree crowns. There are many promising approaches, such as hyperspectral species classification [\(Schiefer et al. 2020\)](#page-6-7), lidar-based height measurements [\(Ganz, Käber, and Adler 2019\)](#page-5-14), and individual tree crown segmentation across sites [\(Weinstein et al. 2020b\)](#page-7-2). However, these applications have been developed mainly on datasets from temperate forests and, to the knowledge of the authors, there is no publicly available dataset of tropical forests with both aerial imagery and ground truth field measurements.
26
+
27
+ Here, we present ReforesTree, a dataset of six tropical agroforestry reforestation project sites with individual tree crown bounding boxes of over 4,600 trees matched with their respective diameter at breast height (DBH), species, species group, aboveground biomass (AGB), and carbon stock. This dataset represents ground truth field data mapped with low-cost, high-resolution RGB drone imagery, to be used to train new models for carbon offsetting protocols and to benchmark existing models.
28
+
29
+ ![](_page_1_Figure_5.jpeg)
30
+
31
+ Figure 1: Drone imagery of each site of the ReforesTree dataset with a resolution of 2cm/px. The red dots are the locations of the trees measured in field surveys, plotted to make clear that the coverage of drone images were larger than the field measured area.
32
+
33
+ To summarize, with ReforesTree, we contribute the following: 1) the first publicly available dataset of tropical agro-forestry containing ground truth field data matched with high-resolution RGB drone imagery at the individual tree level, and 2) a methodology for reducing the current overestimation of forest carbon stock through deep learning and aerial imagery for carbon offsetting projects.
34
+
35
+ # Method
36
+
37
+ The following are three types of methods to estimate forest carbon stock remotely, adapted from [\(Sun and Liu 2019\)](#page-6-13): 1) Inventory-based models, based on national and regional forest inventories and regression models, which are known to overestimate due to over-representation of dense commercial forests in the data [\(Global Forest Watch 2019\)](#page-5-13). 2) Satellite-based models, leveraging datasets from optical remote sensing, synthetic aperture radar (SAR) satellites, and lidar (LiDAR) to create global aboveground biomass and carbon maps [\(Santoro et al. 2021;](#page-6-5) [Saatchi et al. 2011;](#page-6-4) [Spawn, Sullivan, and Lark 2020\)](#page-6-14). 3) Ecosystem-based models, using topography, elevation, slope, aspect, and other environmental factors to construct statistical models and quantitatively describe the forest carbon cycle in order to estimate forest carbon stock [\(Ma et al. 2021\)](#page-6-9).
38
+
39
+ The most scalable and affordable of these methods are, evidently, satellite-based models. Nevertheless, these models and global maps cannot yet estimate carbon stock at a local scale or provide accurate estimates for highly heterogeneous and dense forest areas, due to their low resolution of 30-300m [\(Bagheri, Shataee, and Erfanifard 2021\)](#page-5-16). An individual tree-based model that takes the individual overstory trees into account can provide this accuracy, especially if fused with geostatistical and satellite data.
40
+
41
+ In recent years, researchers have achieved high accuracy
42
+
43
+ | SITE<br>NO. | NO. OF<br>TREES | NO. OF<br>SPECIES | SITE<br>AREA | TOTAL<br>AGB | TOTAL<br>CO2E |
44
+ |-------------|-----------------|-------------------|--------------|--------------|---------------|
45
+ | 1 | 743 | 18 | 0.51 | 8 | 5 |
46
+ | 2 | 929 | 22 | 0.62 | 15 | 9 |
47
+ | 3 | 789 | 20 | 0.48 | 10 | 6 |
48
+ | 4 | 484 | 12 | 0.47 | 5 | 3 |
49
+ | 5 | 872 | 14 | 0.56 | 15 | 9 |
50
+ | 6 | 846 | 16 | 0.53 | 12 | 7 |
51
+ | TOTAL | 4463 | 28 | 3.17 | 66 | 40 |
52
+
53
+ Table 1: Overview of the six project sites in Ecuador, as gathered in field measurements. Aboveground biomass (AGB) is measured in metric tons and area in hectares.
54
+
55
+ for standard forestry inventory tasks such as individual tree crown detection [\(Weinstein et al. 2019\)](#page-7-5), lidar-based height estimation [\(Ganz, Käber, and Adler 2019\)](#page-5-14), and species classification [\(Miyoshi et al. 2020;](#page-6-15) [Schiefer et al. 2020;](#page-6-7) [Mäyrä et al. 2021\)](#page-6-16), using deep learning models and aerial imagery. This shows high potential for combining high-resolution imagery with deep learning models as a method for accurate carbon stock estimation for small-scale reforestation projects [\(Sun and Liu 2019\)](#page-6-13).
56
+
57
+ As most tropical forests are situated in low- to middle-income countries, without access to hyperspectral, lidar, and other more advanced sensors, the models need to be developed using available technologies. A trade-off between accuracy and data availability is basic high-resolution RGB drone imagery. Drone imagery (1-3cm/px resolution), combined with CNNs, has previously been used to directly estimate biomass and carbon stock in individual mangrove trees [\(Jones et al. 2020\)](#page-5-17), or indirectly by detecting species or tree metrics such as DBH or height [\(Nåfält 2018;](#page-6-17) [Omasa et al. 2003\)](#page-6-18), achieving an accuracy similar to manual field measurements. And by leveraging multi-fusion approaches [\(Du and Zare 2020;](#page-5-18) [Zhang 2010\)](#page-7-6), e.g. combining low-resolution satellite imagery, high-resolution drone imagery, field measurements, and contextual ecological or topological data, and multi-task learning [\(Crawshaw 2020\)](#page-5-19), e.g. tree metrics and carbon storage factors as auxiliary tasks, these models can replace and scale the existing manual forest inventory.
58
+
59
+ There are several datasets for tree detection and classification from drone imagery, such as the NEON dataset [\(Weinstein et al. 2020a\)](#page-6-19) or the Swedish Forest Agency dataset, mainly covering temperate forests in the US or Europe. To our knowledge, there are no publicly available datasets that include both field measurements and drone imagery of heterogeneous tropical forests.
60
+
61
+ The ReforesTree dataset consists of six agro-forestry sites in the central coastal region of Ecuador. The sites are of dry tropical forest type and eligible for carbon offsetting certification with forest inventory done and drone imagery captured in 2020. See Table 1 for information on each site.
62
+
63
+ Field measurements were done by hand for all live trees and bushes within the site boundaries and include GPS location, species, and diameter at breast height (DBH) per tree. Drone imagery was captured in 2020 by an RGB camera from a Mavic 2 Pro drone with a resolution of 2cm per pixel. Each site is around 0.5 ha, mainly containing banana trees (Musaceae) and cacao plants (Cacao), planted in 2016-2019.
64
+
65
+ $$AGB_{fruit} = 0.1466 * DBH^{2.223} \tag{1}$$
66
+
67
+ $$AGB_{musacea} = 0.030 * DBH^{2.13} \tag{2}$$
68
+
69
+ $$AGB_{cacao} = 0.1208 * DBH^{1.98} \tag{3}$$
71
+
72
+ $$AGB_{timber} = 21.3 - 6.95 * DBH + 0.74 * DBH^2 \tag{4}$$
74
+
75
+ The aboveground biomass (AGB) is calculated using published allometric equations for tropical agro-forestry, namely Eq[.1](#page-3-0) for fruit trees, including citrus fruits [\(Segura, Kanninen, and Suarez 2006\)](#page-6-20), Eq[.2](#page-3-1) for banana trees [\(Van Noordwijk et al. 2002\)](#page-6-21), Eq[.3](#page-3-2) for cacao [\(Yuliasmara, Wibawa, and Prawoto 2009\)](#page-7-7), and Eq[.4](#page-3-3) for shade trees (timber) [\(Brown and Iverson 1992\)](#page-5-20). These are commonly used in global certification standards. The carbon stock is calculated through the standard forest inventory methodology using a root-to-shoot ratio of 22%, which is standard for dry tropical reforestation sites [\(Ma et al. 2019\)](#page-6-8), as sketched below.
76
+
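+ A worked sketch of this computation (the 0.47 carbon fraction is our illustrative assumption; the text above specifies only the 22% root-to-shoot ratio):
+
+ ```python
+ def agb(species_group, dbh):
+     """Aboveground biomass from DBH via the allometric equations above."""
+     if species_group == "musacea":
+         return 0.030 * dbh ** 2.13                    # Eq. (2), banana
+     if species_group == "cacao":
+         return 0.1208 * dbh ** 1.98                   # Eq. (3)
+     if species_group == "timber":
+         return 21.3 - 6.95 * dbh + 0.74 * dbh ** 2    # Eq. (4), shade trees
+     return 0.1466 * dbh ** 2.223                      # Eq. (1), fruit trees
+
+ def carbon_stock(species_group, dbh, carbon_fraction=0.47):
+     # add 22% belowground biomass (root-to-shoot ratio), then convert
+     total_biomass = agb(species_group, dbh) * 1.22
+     return total_biomass * carbon_fraction            # assumed fraction
+ ```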
77
+ The raw data is processed in several steps, as seen in Figure 3. The goal of this process is a machine-learning-ready dataset that matches the drone image of each individual tree with that tree's labels, such as its AGB value. All drone images have been cropped to fit tightly to the boundaries of the field-measured areas. The details of this cropping process, and the code repository, are in the Appendix.
78
+
79
+ Initially, the RGB orthomosaics are cut into 4000×4000 tiles and sent through DeepForest, a Python package for predicting individual tree crowns from RGB imagery [\(Weinstein et al. 2019\)](#page-7-5), fine-tuned on manually labelled bounding boxes from the sites (a sketch of this step follows below). Afterwards, bounding boxes containing more than 80% white pixels were filtered out, e.g. bounding boxes lying on the border of the drone imagery, and the remainder were manually labeled as banana or non-banana, owing to the easily recognizable characteristics of banana trees, resulting in clean bounding boxes for all trees, as shown in Figure [4.](#page-3-4)
80
+
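+ A hedged sketch of this detection step with the DeepForest package (the file name and parameters are illustrative; the calls follow the package's documented API, which may change between versions):
+
+ ```python
+ from deepforest import main
+
+ model = main.deepforest()
+ model.use_release()                        # load the released RGB crown model
+ boxes = model.predict_tile(
+     "site_orthomosaic.tif",                # hypothetical input raster
+     patch_size=400,
+     patch_overlap=0.05,
+ )
+ # boxes: predicted crown bounding boxes, to be filtered for mostly-white
+ # borders and labeled banana / non-banana as described above
+ ```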
81
+ To fuse the tree information extracted from the ground measurements with the bounding boxes of the detected trees, we used OneForest, a recent machine learning approach for fusing citizen data with drone imagery (a simplified matching sketch follows below). To remove the noise introduced in both GPS locations, OneForest uses a greedy optimal transport algorithm. This is a known coupling method for mapping between two GPS positions (the center of a bounding box from drone imagery and the GPS location of a tree from field data). Following the formulation of [\(Villani 2003\)](#page-6-22), the method finds the minimal distance between two distributions via a convex linear program, optimizing for a matching that moves the mass from one distribution to the other with minimal cost. The cost is usually defined as the Euclidean distance or the Kullback-Leibler divergence between the distributions.
82
+
83
+ <span id="page-3-2"></span><span id="page-3-1"></span><span id="page-3-0"></span>![](_page_3_Figure_11.jpeg)
84
+
85
+ <span id="page-3-3"></span>Figure 3: The raw data and data processing pipeline for the ReforesTree dataset, resulting in labels matched to bounding boxes per tree.
86
+
87
+ <span id="page-3-4"></span>![](_page_3_Picture_13.jpeg)
88
+
89
+ Figure 4: Bounding box annotations per tree, as a result of fine-tuned DeepForest tree crown detection and manual cleaning. Red boxes represent banana trees and blue boxes represent other species.
90
+
91
+ The optimum, i.e., the minimal distance between the two distributions, is called the Wasserstein metric.
92
+
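+ As a simple stand-in for this coupling step, a minimum-cost assignment between detected box centers and field GPS points can be sketched as follows (our illustration; OneForest's actual algorithm is a greedy optimal-transport variant):
+
+ ```python
+ from scipy.optimize import linear_sum_assignment
+ from scipy.spatial.distance import cdist
+
+ def match_trees(box_centers, tree_gps):
+     # cost matrix of pairwise Euclidean distances between the two point sets
+     cost = cdist(box_centers, tree_gps)
+     rows, cols = linear_sum_assignment(cost)   # minimal-total-cost matching
+     return list(zip(rows, cols))
+ ```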
93
+ With a dataset of matched bounding boxes and tree labels, we fine-tuned a basic pre-trained CNN, ResNet18 [\(He et al. 2015\)](#page-5-21), with a mean-squared-error loss to estimate individual tree AGB. The results were satisfying despite the simple baseline model, demonstrating that individual-tree AGB estimation from drone imagery has potential.
94
+
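+ A minimal sketch of this baseline (hyperparameters are assumptions; assumes a recent torchvision):
+
+ ```python
+ import torch
+ import torch.nn as nn
+ from torchvision import models
+
+ model = models.resnet18(weights="IMAGENET1K_V1")
+ model.fc = nn.Linear(model.fc.in_features, 1)    # regress AGB per tree image
+
+ criterion = nn.MSELoss()
+ optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
+
+ def train_step(images, agb_targets):
+     optimizer.zero_grad()
+     loss = criterion(model(images).squeeze(1), agb_targets)
+     loss.backward()
+     optimizer.step()
+     return loss.item()
+ ```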
95
+ Fourteen images were identified as being larger than the expected crown size of a tree, and they were center cropped at 800×800. To preserve the crown size information, the smaller images were zero-padded up to 800×800, before all images were resized to fit the network architecture.
96
+
97
+ The dataset is unbalanced with regard to species: 43% of the trees are cacao and 32% are banana. Additionally, because the trees were planted between 2016 and 2019, many have similar size; half of the trees have a DBH between 7 and 10 cm. The training dataset was therefore constructed with an equal number of samples across species, DBH ranges, and project sites.
2202.01079/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2022-01-26T02:42:54.997Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36" version="16.4.3" etag="Of4avcq09CxhvM8RPR5-" type="onedrive"><diagram id="U-FGqGGSkjgMan5Ck0i1">7V1bb6M4FP41eRwEvmEep+1eHma1I420l6cVASdBQyEidNrOr18TLsHYFDfhFpqkUoNjDHzfsX3OhzlZwfvHl98Sd7/7I/ZZuAKm/7KCDysALEIp/5eVvOYlCIK8YJsEflHpVPAt+MmKQrMofQp8dhAqpnEcpsFeLPTiKGJeKpS5SRI/i9U2cSgede9uiyOap4JvnhsyqdrfgZ/u8lIK7FP57yzY7sojW8TJv3l0y8pFE4ed68fPtWPBX1bwPonjNP/0+HLPwgy8Epf8hH5t+bY6sYRFqc4OBe4/3PCpuLbivNLX8mKT+CnyWVbfWsG7512Qsm9718u+feb08rJd+hgWX2/iKC344mTBu0OaxN/ZfRzGybE1SMEaEpLVDMKwVu5jRn3Ey2PedpBmhgGzBorzY0nKXlqv0aqQ4ybH4keWJq+8SrEDcLBhknyvwuAwLrafT/RZFOdluxp1tDAAt7CYbdX6CVT+ocBVjTEcG2PiUbbeKDB2Gd14Q2EsAIwQHg9gJAF8H0cbPkREHEFgfk2YH3gpR6EddrMbdhHmKI5YA+GiKIlTNw3iqGi1ydZxuziFks0aR+ghe/dDCgTQQEjkBVKZF4gMC8jUwB6owUpqPrnP/CKO1MQpC6KPTBJwqEFtcXRCjqLzAMOCw5BEVCT9sB4uo+LyYWoXJ8FP3oobFkfspU+YRmOoIpYBoQQ4UAxWpAew7Y8EdgnYNFDTDwQ1MieF2pGg/hJEzL1wNJ8p1GhSqMsg6C1/kkX+5yzO4Vte6B4OgScCK7Kgmv1qYJrHlwr+xjdlLJQdgb0E6T+1z/9mx+Ijb7718FIc+rjxWm5EHIp/6hu1vbLN027HrVeBU+aXQVsZW8VPiceEKS51ky1LhYFYg/caqdiUSS3LEhZyn+KHeBIqposjfI0DfuDTYGmLZoXtk1mVreSXVOxYD+sabSHQaAvwtoBzeonN5rBIzR4tscJBzzitm3G+2zht2Tjp3IwT44bbhM3zzZObumHWXlZnyz1aqIbkcbPQhoVS2UKduVmo0599IsvsbKtHi5QFolH81E4pbpz4C9tgPN9JKRUtFWrSdChGBFop/CwU6GbsNSrQsngzSuQ1DdBoQqBl4ebmN3T5DeXUVnccyjF4Pp5DI/AiFJzpNjSiLgIbDfXoM8ja1s0aO60RKawRz80apUiLWGfa49thltRsj9Ypy4E36+y0TqywTjI363R6ss1miDWgNQINwXTLzW3f6qEU60jcdVnd1KalvFynoRmXElyNNqqgzerBcwEamlyHO+ge9vk6mk3wkvVJuT+KXmB1g1Tqj9bn7K3rDtbWxLQZuzUUamB01H4h2fvNUawx7uWvS9F8EVEbA1yNNTE9g1vo3m+B2weIYEQQZTFjaBCpmb0HBxGOCKIsVAwNIjm+BgcRjQiiLEIMDeIGZ+/Jxko8IrgawsN1jpVkRBA14uWeQbw7vgcH0R4RxOsI684M0c4JB1caYd3cQjhUzq5V5NUwDe0QrhHTEHOwEK485au0PMHuoGh4ANctzzCBI1gfoPBtOYJvfGVJwKFkSf1YzUINQy06d11+wHO7yds0OQwtg6DzzBdbuLOtHi14sQtjWkxrNjZT3SSqbuZDwyGta6G0zadpiiZSia0DWJKGMDG0nIWdxmMFimADKih1bANaNZDI5U4J1JASDjt3n330ntb8X9e9z3Xezb6sqwLX+749dr4/n9IwiFhR7rvJ9z+rR2lMw8RiITiWat1OPQUtbU5kvjfI6vhBwunLn15g7iHV9SRzy5FJLedvNTkdlPbxAALUUDKul8NTNDU8h9jmc1iNQzQehxpCyvVyeJJmh+cQ0cn6oYaOc70cnnS34TmEdLJ+qCEXXS+HJwF6eA4B59Cue3QjdkQNuep6SRxzQiyXIoxBmoY8dr2kjTkDOpN1PKQhNF0vh6fb8gNyOAZLVyWmlIrJv7XPRymlEv8eqlQMrYJe1tAnbhRWJTTm2owFO3XtM/VAqFq7CWU7WIaYDZtPzDUTU7QIONzG3NdatX1W4dB+wqSckKpMJOab5wWaMtN761OhPv+Qn/G56hPSUJ/KAfIQRNuQFb3wvSvO2+7eys8Al2592x3iY8afsmOaBnRs4CCH8m4LbcvGVZXi6Jnq7lCbUAqhbWLIfTEhX8QnQHSHyI7MDxgbqJGLgCqmOmJgxZp20JCqzxpH36GdLZPM3vLfYNLkshKIR+JS1tC+xNsgH4smT6jSA8TN+zUIYoPI2WygCt8+fA5Z31oYvm3rw2vgEsVM3gu4KuEJZX/4PsuZ5rnhyr77srIf/uP/vTja8I9FjaXAj8UUZZl5O/WXnoTUCxsqCamdjfCwQDJaVqePgf4itJ81xQir1u1Rj3leTyFox5wMxAENEygI65YWpb3Eq4tQhnyyJljlm202YCpKMRR09vEoxYsQijrd7Q5KL1pi+06qEZqo92INtWn+VDvEhq6Camb5mNnT9F6Ipuq971Ax5kvpHOdYgATtfkRKF7EOqDOtwgSUWnIq38FIXMRCoGuaVU3M++t4/C5ikVBn1snxOylyxrtHitvEmp+ZHLCNonnJAUIiGqsv+GEzi5WjkCJVK2B7wF+ZQrjKdz4DyHtAGDdzKpnyQo6h1Ei82OQq+WXMeK0+cAyKT5qnmBcdUShKomLz2iv3s4MQyUetRlJqUEvWXftfyI8XoQpdOhluwmD/V9lN+Off+xulpaQbWM7SPNQkSW760BvkDujhNieOUUlfhFJ0aQw6YI/G1oTk3jSjgckFE5K7CPXoUo13SHLhhOTeVKWJ5uLmL2WMSfoipKZL77kO2aPxhOQu4kG1Oc/FzV9uGpNcjSfYxs7XiKl8E8RS5dnpBYBFrACaVBjIbWiSh1PIImQd33bWqkeINhtGBh+b2thrT1Q5FJlldq3rJnPSiL6DTEXCzMHIXIQ8M6nX0EGmInHnYGQuQo6ZNGLvIFMzp08vZC5CfpnzMKtIWDoYmYuQWyaNvDvIVCROHYzMRcgoc54zFQlcByNTlk1K96tGJ7+WVKTMDYNthobHryl7hPwuu+LAc8PPxRePge9nu98ljCNZxN0ZWMXj0LxdfLfCWeoB9ymNC7Tfs+LkrCUmHciXv/NTpgoZ7wkrWyFxEDNakSWz0XWDwHrrN0FJlhRrPH4UCswH54dYWdrMEz9i30FUbxFpL+TIAotV5ve+NmJqC+kQ7qkjleuiqgdJ5XENDUQNVcglH7zfWA41EKiNa1Ac2ADRXYDdC0GyBHLrOzUxCmBhEjIbz2
QrUjr01JX4ZhLHaX0xIfecd3/EPstq/A8=</diagram></mxfile>
2202.01079/main_diagram/main_diagram.pdf ADDED
Binary file (22.8 kB). View file
 
2202.01079/paper_text/intro_method.md ADDED
@@ -0,0 +1,33 @@
1
+ # Introduction
2
+
3
+ As \"life machines,\" proteins play vital roles in almost all cellular processes, such as transcription, translation, signaling, and cell cycle control. Understanding the relationship between protein structures and their sequences brings significant scientific impacts and social benefits in many fields, such as bioenergy, medicine, and agriculture [@huo2011conversion; @williams2019plasma]. While AlphaFold2 [@jumper2021highly] has tentatively solved protein folding from 1D sequences to 3D structures, its reverse problem, i.e., protein design raised by [@pabo1983molecular] that aims to predict amino acid sequences from known 3D structures, has fewer breakthroughs in the ML community. The main reasons hindering the research progress include: (1) The lack of large-scale standardized benchmarks; (2) The difficulty in improving protein design accuracy; (3) Many methods are neither efficient nor open source. Therefore, a crucial question grips our heart: How to benchmark protein design and develop an effective and efficient open-sourced method?
4
+
5
+ Previous benchmarks may suffer from biased testing and unfair comparisons. Since SPIN [@li2014direct] introduced the TS50 consisting of 50 native structures, it has beening served as a common test set for evaluating different methods [@o2018spin2; @wang2018computational; @chen2019improve; @jing2020learning; @zhang2020prodconn; @qi2020densecpd; @strokach2020fast]. However, such a few proteins do not cover the vast protein space and are more likely to lead to biased tests. Besides, there are no canonical training and validation sets, which means that different methods may use various training sets. If the training data is inconsistent, how can we convince researchers that the performance gain comes from the difference of methods rather than data? Especially when the test set is small, adding training samples that match the test set distribution may dramatic performance fluctuations. For fair comparisions, how to establish a large-scale standardized benchmark?
6
+
7
+ Extracting expressive residue representations is a key challenge for accurate protein design, where both sequential and structural properties need to be considered. For general 3D points, structural features should be rotationally and translationally invariant in the classification task. Regarding proteins, we should further consider the regular structure, number, and order of amino acids. Previous studies [@o2018spin2; @wang2018computational; @ingraham2019generative; @jing2020learning] may have overlooked important protein features and better model designs; thus, none of them exceeds 50% accuracy except DenseCPD [@qi2020densecpd]. For higher accuracy, how can we construct better protein features and neural models to learn expressive residue representations?
8
+
9
+ Improving model efficiency is necessary for rapid iteration in research and applications. Current advanced GNN and CNN methods have severe speed limitations due to the sequential prediction paradigm. For example, GraphTrans [@ingraham2019generative] and GVP [@jing2020learning] predict residues one by one during inference rather than in parallel, which means the model is called multiple times to obtain the entire protein sequence. Moreover, DenseCPD [@qi2020densecpd] takes 7 minutes to predict a 120-length protein on their server [^1]. How can we improve model efficiency while ensuring accuracy?
10
+
11
+ To address these problems, we establish a new protein design benchmark -- AlphaDesign, and develop a graph model called ADesign to achieve SOTA accuracy and efficiency. Firstly, AlphaDesign compares various graph models on consistent training, validation, and testing sets, where all these datasets come from the AlphaFold Protein Structure Database [@varadi2021alphafold]. In contrast to previous studies [@ingraham2019generative; @jing2020learning] that use limited-length proteins and do not distinguish species, we extend the experimental setups to length-free and species-aware. Secondly, we improve the model accuracy by introducing protein angles as new features, using a simplified graph transformer encoder (SGT), and proposing a constraint-aware protein decoder (CPD). Thirdly, SGT and CPD also improve model efficiency by simplifying the training and testing procedures. Experiments show that ADesign significantly outperforms previous methods in both accuracy (+8%) and efficiency (40+ times faster than before).
12
+
13
+ # Method
14
+
15
+ Structure-based protein design aims to find the amino acid sequence $\mathcal{S} = \{s_i: 1 \leq i \leq n\}$ that folds into a known 3D structure $\mathcal{X}=\{\boldsymbol{x}_{i} \in \mathbb{R}^3: 1 \leq i \leq n \}$, where $n$ is the number of residues and natural proteins are composed of 20 types of amino acids, i.e., $1\leq s_i \leq 20$. Formally, that is to learn a function $\mathcal{F}_{\theta}$: $$\begin{align}
16
+ \label{eq:protein_dedign}
17
+ \mathcal{F}_{\theta}: \mathcal{X} \mapsto \mathcal{S}.
18
+ \end{align}$$ Because homologous proteins always share similar structures [@pearson2005limits], the problem itself is underdetermined, i.e., the valid amino acid sequence may not be unique. In addition, the need to consider both 1D sequential and 3D structural information further increases the difficulty of algorithm design. As a result, the published algorithms are still not accurate enough.
19
+
20
+ MLP-based methods use a **m**ulti-**l**ayer **p**erceptron (MLP) to predict the type of each residue. The MLP outputs the probability of 20 amino acids for each residue, and the input feature construction is the main difference between the various methods. SPIN [@li2014direct] integrates torsion angles ($\phi$ and $\psi$), fragment-derived sequence profiles, and structure-derived energy profiles to predict protein sequences. SPIN2 [@o2018spin2] adds backbone angles ($\theta$ and $\tau$), local contact number, and neighborhood distance to improve the accuracy from 30% to 34%. [@wang2018computational] uses backbone dihedrals ($\phi$, $\psi$ and $\omega$), solvent accessible surface area of backbone atoms ($C_{\alpha}, N, C,$ and $O$), secondary structure types (helix, sheet, loop), $C_{\alpha}-C_{\alpha}$ distance, and unit direction vectors of $C_{\alpha}-C_{\alpha}$, $C_{\alpha}-N$ and $C_{\alpha}-C$, which achieves 33.0% accuracy on 50 test proteins. MLP methods have a high inference speed, but their accuracy is relatively low because they only partially consider structural information. The complex feature engineering requires multiple databases and computational tools, limiting widespread usage.
21
+
22
+ CNN methods extract protein features directly from the 3D structure and can be further classified as 2D CNN-based and 3D CNN-based, with the latter achieving better results. The 2D CNN-based SPROF [@chen2019improve] extracts structural features from the distance matrix and improves the accuracy to 39.8%. In contrast, 3D CNN-based methods extract residue features from the atom distribution in a three-dimensional grid box. For each residue, the atomic density distribution is computed after being translated and rotated to a standard position so that the model can learn translation- and rotation-invariant features. ProDCoNN [@zhang2020prodconn] designs a nine-layer 3D CNN with multi-scale convolution kernels to predict the corresponding residue at each position, achieving 42.2% accuracy. DenseCPD [@qi2020densecpd] further uses the DenseNet architecture [@huang2017densely] to boost the accuracy to 55.53%. Although 3D CNN-based models improve accuracy, their inference is slow, probably because they require separate pre-processing and prediction for each residue. In addition, none of the above 3D CNN models are open source.
23
+
24
+ Graph methods represent the 3D structure as a $k$-NN graph, then use graph neural networks [@defferrard2016convolutional; @kipf2016semi; @velivckovic2017graph; @zhou2020graph; @zhang2020deep] to extract residue features while considering structural constraints. The protein graph encodes the residue information in node vectors and constructs edges and edge features between neighboring residues. GraphTrans [@ingraham2019generative] achieves 36.4% accuracy using graph attention and autoregressive generation strategies. GVP [@jing2020learning] increases the accuracy to 40.2% by proposing the geometric vector perceptron, which learns both scalar and vector features in an equivariant and invariant manner with respect to rotations and reflections. Another related work is ProteinSolver [@strokach2020fast], but it was mainly developed for scenarios where partial sequences are known and does not report results on standard datasets. Firstly, since both GraphTrans and GVP use rotation- and translation-invariant features, there is no need to rotate each residue separately as in CNNs, which improves training efficiency. However, during testing, GraphTrans and GVP use an autoregressive mechanism to generate residues one by one rather than in parallel, which weakens their advantage in inference speed. Secondly, the well-exploited structural information helps GNNs obtain higher accuracy than MLPs. In summary, GNNs can achieve a good balance between efficiency and accuracy.
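+
+ To make the $k$-NN graph construction concrete, below is a minimal sketch (PyTorch; the function name and defaults are ours for illustration, not taken from any of the cited codebases) of turning $C_{\alpha}$ coordinates into a residue graph with distance-based, rotation- and translation-invariant edge features:
+
+ ```python
+ import torch
+
+ def knn_protein_graph(coords: torch.Tensor, k: int = 30):
+     """Build a k-NN graph over residues from C-alpha coordinates.
+
+     coords: [n, 3] C-alpha positions for n residues (assumes n > k).
+     Returns edge_index [2, n*k] and per-edge distance features.
+     """
+     dist = torch.cdist(coords, coords)            # [n, n] pairwise distances
+     dist.fill_diagonal_(float("inf"))             # exclude self-loops
+     knn = dist.topk(k, largest=False).indices     # [n, k] nearest neighbors
+     src = torch.arange(coords.size(0)).repeat_interleave(k)
+     dst = knn.reshape(-1)
+     edge_index = torch.stack([src, dst])
+     # Distances are invariant to global rotations and translations.
+     edge_attr = dist[src, dst].unsqueeze(-1)      # [n*k, 1]
+     return edge_index, edge_attr
+ ```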
25
+
26
+ <figure id="fig:overview" data-latex-placement="t">
27
+ <embed src="fig/overview.pdf" style="width:6.5in" />
28
+ <figcaption> Overview of ADesign. Compared with GraphTrans, StructGNN <span class="citation" data-cites="ingraham2019generative"></span> and GVP <span class="citation" data-cites="jing2020learning"></span>, we add new protein features, simplify the graph transformer, and propose a confidence-aware protein decoder to improve accuracy. </figcaption>
29
+ </figure>
30
+
31
+ There are several challenges that hinder research progress: (C1) As shown in Table. [\[tab:dataset\]](#tab:dataset){reference-type="ref" reference="tab:dataset"}, different methods may use inconsistent datasets, which affects the researcher's judgment of a model's ability. (C2) The reported prediction accuracy is still low, i.e., none of the existing models exceeds 50% accuracy except DenseCPD. (C3) The most accurate CNN model has poor efficiency, e.g., DenseCPD takes 7 minutes to predict a 120-length protein on their server. Finally, (C4) accessibility is poor: because most protein design models are not open source, subsequent studies become relatively difficult.
32
+
33
+ For C1, we establish the *new AlphaDesign benchmark* in Section [4](#sec:experiment){reference-type="ref" reference="sec:experiment"}. For C2 and C3, we propose the *ADesign method to achieve SOTA accuracy and efficiency*; see Section [3](#sec:method){reference-type="ref" reference="sec:method"} and Section [4](#sec:experiment){reference-type="ref" reference="sec:experiment"}. As for C4, our model will be open-sourced to accelerate research progress.
2202.11781/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2202.11781/paper_text/intro_method.md ADDED
@@ -0,0 +1,42 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Method
2
+
3
+ The global-focal block in the RadioTransformer architecture is detailed in Figure [1](#fig_1){reference-type="ref" reference="fig_1"}. The global and focal blocks are cascaded in parallel. The shifting window for each block is shown with the window in red color. High contrast patterns are learned by the focal blocks, shown in the orange box in Figure [1](#fig_1){reference-type="ref" reference="fig_1"} and low contrast patterns are learned by global blocks, shown in the blue box in Figure [1](#fig_1){reference-type="ref" reference="fig_1"}. The TWL connection averages the features between the intermediate global and focal blocks.
4
+
5
+ RSNA Pneumonia Detection challenge[@rsna], and Cell Pneumonia[@cell] are pneumonia classification datasets consisting of radiographs with presence and absence of pneumonia. SIIM-FISABIO-RSNA COVID-19 Detection[@siim] dataset categorizes radiographs as negative for pneumonia, and typical, indeterminate, or atypical for COVID-19. COVID-19 Radiography database[@rad; @rad_1] comprises chest radiographs with COVID-19, normal, lung opacity and viral pneumonia classes. NIH Chest X-rays[@nih] and VinBigData Chest X-ray Abnormalities Detection[@vbd] datasets comprise 14 common thorax diseases. We further include the more recent large-scale RSNA-MIDRC[@midrc; @midrc_1; @clark2013cancer] and TCIA-SBU COVID-19 datasets [@sbu; @clark2013cancer] that contain only COVID-19 chest radiographs.
6
+
7
+ Figure [2](#fig_aug){reference-type="ref" reference="fig_aug"} illustrates the various augmentations for different blocks of *RadioTransformer*. The images in the first and second rows are the inputs to the student focal and global blocks, respectively. The images in the third and fourth rows are the inputs to the teacher focal and global blocks, respectively. As seen in the images, the teacher network implements harder augmentations than the student network. The focal block has a higher contrast value than the global block. For stateless augmentations, we use tf.image.stateless_random_contrast(.), tf.image.stateless_random_brightness(.), tf.image.stateless_random_hue(.), and tf.image.stateless_random_saturation(.).
8
+ More details on the augmentation parameters are provided in Supplementary table [\[tab_aug_supp\]](#tab_aug_supp){reference-type="ref" reference="tab_aug_supp"}.
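+
+ For reference, a minimal sketch of how these four TensorFlow ops compose (the bounds and the seed here are illustrative placeholders; the paper's actual parameter values are in the supplementary table):
+
+ ```python
+ import tensorflow as tf
+
+ def stateless_augment(image: tf.Tensor, seed: tf.Tensor) -> tf.Tensor:
+     """Apply the four stateless augmentations; seed is a shape-[2] int tensor."""
+     image = tf.image.stateless_random_contrast(image, 0.8, 1.2, seed)
+     image = tf.image.stateless_random_brightness(image, 0.2, seed)
+     image = tf.image.stateless_random_hue(image, 0.1, seed)
+     image = tf.image.stateless_random_saturation(image, 0.8, 1.2, seed)
+     return image
+
+ # Stateless ops are deterministic given the seed, so student and teacher
+ # branches can be fed reproducibly different augmentations, e.g.:
+ # aug = stateless_augment(image, seed=tf.constant([42, 0]))
+ ```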
9
+
18
+ <figure id="fig_aug" data-latex-placement="t">
19
+ <embed src="aug.pdf" style="width:100.0%" />
20
+ <figcaption><strong>Augmentations</strong>: The different augmentations to input images of student global-focal, and teacher global-focal blocks are shown.</figcaption>
21
+ </figure>
22
+
28
+ <figure id="fig_rsna_cam" data-latex-placement="t">
29
+ <embed src="cam_1.pdf" style="width:100.0%" />
30
+ <figcaption><strong>Qualitative Comparison on RSNA dataset:</strong> Comparison of class activation maps from RadioTransformer w/o (HVAT+VAL) and RadioTransformer.</figcaption>
31
+ </figure>
32
+
33
+ <figure id="fig_rad_cam" data-latex-placement="t">
34
+ <embed src="cam_0.pdf" style="width:100.0%" />
35
+ <figcaption><strong>Qualitative Comparison on Radiography dataset:</strong> Comparison of class activation maps from RadioTransformer w/o (HVAT+VAL) and RadioTransformer.</figcaption>
36
+ </figure>
37
+
38
+ In addition to the AUC and F1 scores provided in the main paper, here we show the accuracy, precision, and recall values for the classification tasks in the 8 datasets. Supplementary table [\[tab_1_supp\]](#tab_1_supp){reference-type="ref" reference="tab_1_supp"} shows the performance metrics for the pneumonia classification datasets (Cell and RSNA Pneumonia Challenge) and the COVID-19 classification datasets (SIIM-FISABIO-RSNA COVID-19 challenge and Radiography). Supplementary table [\[tab_2_supp\]](#tab_2_supp){reference-type="ref" reference="tab_2_supp"} shows the performance metrics for the 14 thoracic disease classification tasks (NIH and VinBigData datasets) and the COVID-19 classification task (MIDRC and TCIA-SBU datasets).
39
+
40
+ We supplement our qualitative results (in Section 5.2 of the main paper) with additional class activation maps for both datasets, i.e., RSNA and Radiography. In Figure [3](#fig_rsna_cam){reference-type="ref" reference="fig_rsna_cam"}, the RadT w/o (HVAT+VAL) and RadT class activation maps are shown for Normal and Pneumonia cases. Similarly, in Figure [4](#fig_rad_cam){reference-type="ref" reference="fig_rad_cam"}, the RadT w/o (HVAT+VAL) and RadT class activation maps are shown for Normal and COVID-19 cases. For both datasets, the maps of RadT w/o (HVAT+VAL) show discrete patterns, and those of RadT show comparatively continuous patterns. In addition to the previous discussions, we highlight another interesting finding. In the fourth row of Figure [3](#fig_rsna_cam){reference-type="ref" reference="fig_rsna_cam"}, we observe that apart from clear attention on the white/fluid regions, there are some extraneous attention regions in the shoulders. This phenomenon is not observed in the fourth row of Figure [4](#fig_rad_cam){reference-type="ref" reference="fig_rad_cam"}. This is explainable from the ablation study in the main paper. For the RSNA dataset, the global block shows better performance, hence it is the block activated in this case. The global block focuses on high-level features and here appears to pick up features from non-relevant regions (such as the shoulders) in addition to the white/fluid regions in the lungs. In the Radiography dataset, by contrast, the focal block is activated and the attention regions intersect well with the white/fluid regions.
41
+
42
+ Parvo, Magno, and Konio cells are ganglion cells that transfer information generated by the photoreceptors in the retina to the visual cortex in the brain. Structurally, Magno cells are larger, and have thick axons with more myelin while Parvo cells are smaller, and have less myelin and thinner axons. Functionally, the Magno cells have a large receptive field; they respond rapidly to changing stimuli and detect robust/global details like luminance, motion, stereopsis, and depth. Parvo cells, on the other hand, have a smaller receptive field, respond slowly to stimuli, and detect finer/local details like chromatic modulation and the form of an object. The Global-Focal blocks in *RadioTransformer* are inspired by these cellular pathways.
2203.15996/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2022-03-28T09:12:40.675Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.8.4 Chrome/91.0.4472.164 Electron/13.6.2 Safari/537.36" version="15.8.4" etag="7UKq2AL1WNPtoEwywwlX" type="device"><diagram id="BSuxRIVWLPs8-VsYV-6p">7VtRc6soFP41PjYjImgem7a7+7A7k5ne2bv7SBQT5xrJImnT++sXIkZFsjGtsZlu7EyrB0Q43+HjcDh14MN69ysnm9UfLKaZ47nxzoGPjucFLpK/leCtFCAXlIIlT+NS5NaC5/QnLYWgkm7TmBZaVooEY5lIN21hxPKcRqIlI5yz13a1hGVxS7AhS9oRPEck60q/p7FYldLQC2r5bzRdrqovAzwtS9akqqxHUqxIzF4bIvjkwAfOmCjv1rsHmindtfXyy5HSQ8c4zUWfF7zyhReSbfXYdL/EWzVYzrZ5TFV914Gz11Uq6POGRKr0VaIrZSuxzuQTkLcJy4XGC0D5XAjOfhyUpGqQLF3m8j6jiezh7IVykUrN3mvxggnB1qqlNMseWMb4vhcwmSaB6oRusVHyNJM/j6qk0qX6jB6XbJ3ujuoGHDQuLZWyNRX8TVbRL/hYg6StFEJ3An23vmBZ/FqbAAj0K6sG/JWMaKtbHr5UAyNvNDZ2nOBpnOTwN+o2yejuXhm51ALNY337GGWkKNKoDZeBbQUTmASeD2AwhcCFvh/6ZVMaWDQJseficIoRArKiV5YeQMYTF3kQhBiHHvLc0GYWTahMpJH6sSGN99cBXBp3ZulJaBtYIQtUlYzTjIj0pd28DT/9hTlL5YcPloOnbcvpmEDBtjyi+q3m9DQaCgAyTbDdkCB8SUWnob05HYbdy8J8i4XhTE3RTcvO8D9bRU6zLM3pXaW7e1kFTGBdKu+W6u83uhNzvs0pr1pb8KrsHIns/aaWGZZ/JidlZEGzOStSkTJFOZE0Gtm/mot+Nyqs0zhW3zpwV+eNir0ONU2bTiiOIptNx8F04brmlKjn5UBcBmHbkDzXm6AOffnQNieCquZHCAzdFpqj4GD/ehYafMOpt0NQsXwTl/BCuAQdXL4zHqtVcL2gcZzmy2JYoHpTG4kDYqW2hY+R6w+k+qCtegAtuscW3XsD6D7s6P4bJ3mRML6m/KNqb2szJjRMrNrEUUgXyTlADaB25Bpqdy1qdy+k9ulpKrr5vFfj8/ruQD4vDEbzeat4wgVZ9WyyHHN6y2+2VR2MyKoA3FyNo8h44fW4hKAbJLqw7/EpLoYX9pgM3qUmQ48Az4eIJ0m8IxtQvMAIW0AYgl/c0yr1L6XRbkDj/+K1QTNq5I/otYEeO/3R3DY8qUu8AIXYcNvARG0toFwIAUQIweHdNgkZf/tL9XqCpEerBX9LwZ07cUFQSR53emTl01vzaU55KnFQEae90LG7gqVfpJHsbTPjeIcIdbxD63p2rrNotgs9I1Y1oLPYDY3MOb0TnKQ5VavhujzmMmxdTmPRtuS2seQsN3fVWtQ/2mhjq/Z8aVi15w5E7kY0BFlIxhYNMd35d5FMNxziPHnO9MEJu2zTRaC/Zjkt0p9ksW9K6W2jDGnfcTRzkPL8yFawQvPFGeCaNMOZIDrcfDcdCCFoHEPA8KQTaQsAD7IqdGMoHZyW0mQ3R4etz3I1Fs6hV+8PZoy7LPaIZlylgxf+p07Drgov5SJXbXxNh+6Imht6tXHpIHq17MNvXNohDzNUgqT71rzwaFxaMcdncik2g3RjcmmfbIhr5NKym1fBpV95c3xMzWNwaXfz+xVDZdj/xFDZ8EfU1xAqC3qE4i8VKrMcLt88gC5CZqjh83ZTlhPpG2BdwMy0mk902Wy7vzKzrdiQvAVYI4XOSKb7k0Vksc3IfpAqq04uJ40EubKlIzly7YyyE4y4z5CbkejHcv/aCezO588BwIXGEuTZEkKsS9AAyWywuxV9B5wNj+uGpx/0wNMWoB4ET9tRdBPPEoR5uqEq7fWGFjbTSUedfT32v4OeMMHW1T5hghPfx80rdMY+YioHf72ZQeYZDcCGEbz3sAdMhzrskY/1/9yU1et/XIJP/wI=</diagram></mxfile>
2203.15996/main_diagram/main_diagram.pdf ADDED
Binary file (55.9 kB). View file
 
2203.15996/paper_text/intro_method.md ADDED
@@ -0,0 +1,49 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Large pre-trained language models (PLMs) [@devlin-etal-2019-bert; @liu2019roberta] have achieved great success in a variety of NLP tasks. However, it is difficult to deploy them for real-world applications where computation and memory resources are limited. Reducing the pre-trained model size and speeding up the inference have become a critical issue.
4
+
5
+ Pruning is a common technique for model compression. It identifies and removes redundant or less important neurons from the networks. From the view of the model structure, pruning methods can be categorized into *unstructured pruning* and *structured pruning*. In the unstructured pruning, each model parameter is individually removed if it reaches some criteria based on the magnitude or importance score [@DBLP:journals/corr/HanPTD15; @DBLP:conf/iclr/ZhuG18; @DBLP:conf/nips/Sanh0R20]. The unstructured pruning results in sparse matrices and allows for significant model compression, but the inference speed can hardly be improved without specialized devices. While in the structured pruning, rows or columns of the parameters are removed from the weight matrices [@DBLP:journals/corr/abs-1910-06360; @DBLP:conf/nips/MichelLN19; @voita-etal-2019-analyzing; @lagunas-etal-2021-block; @DBLP:conf/nips/HouHSJCL20]. Thus, the resulting model speeds up on the common CPU and GPU devices.
6
+
7
+ Pruning methods can also be classified into optimization-free methods [@DBLP:conf/nips/MichelLN19] and the ones that involve optimization [@frankle2018the; @lagunas-etal-2021-block]. The latter usually achieves higher performance, but the former runs faster and is more convenient to use.
8
+
9
+ Pruning PLMs has been of growing interest. Most of the works focus on reducing transformer size while ignoring the vocabulary [@abdaoui-etal-2020-load]. Pruning vocabulary can greatly reduce the model size for multilingual PLMs.
10
+
11
+ In this paper, we present TextPruner, a model pruning toolkit for PLMs. It combines both transformer pruning and vocabulary pruning. The purpose of TextPruner is to offer a universal, fast, and easy-to-use tool for model compression. We expect it to be accessible to users with little model-training experience. Therefore, we implement structured optimization-free pruning methods for convenient use and fast computation. Pruning a base-sized model only requires several minutes with TextPruner. TextPruner can also be a useful analysis tool for inspecting the importance of the neurons in the model.
12
+
13
+ TextPruner has the following highlights:
14
+
15
+ - TextPruner is designed to be easy to use. It provides both Python API and Command Line Interface (CLI). Working with either of them requires only a couple of lines of simple code. Besides, TextPruner is non-intrusive and compatible with Transformers [@wolf-etal-2020-transformers], which means users do not have to change their models that are built on the Transformers library.
16
+
17
+ - TextPruner works with different models and tasks. It has been tested on tasks like text classification, machine reading comprehension (MRC), and named entity recognition (NER). TextPruner is also designed to be extensible to other models.
18
+
19
+ - TextPruner is flexible. Users can control the pruning process and explore pruning strategies via tuning the configurations to find the optimal configurations for the specific tasks.
20
+
21
+ # Method
22
+
23
+ We briefly recall the multi-head attention (MHA) and the feed-forward network (FFN) in the transformers [@DBLP:conf/nips/VaswaniSPUJGKP17]. Then we describe how we prune the attention heads and the FFN based on the importance scores.
24
+
25
+ Suppose the input to a transformer is $\bm{X}\in \mathbb{R}^{n\times d}$ where $n$ is the sequence length and $d$ is the hidden size. The MHA layer with $N_h$ heads is parameterized by $\bm{W}^Q_i,\bm{W}^K_i,\bm{W}^V_i, \bm{W}^O_i \in \mathbb{R}^{d_h\times d}$: $$\begin{equation}
26
+ \mathrm{MHA}(\bm{X}) = \sum_i^{N_h} \mathrm{Att}_{\bm{W}^Q_i,\bm{W}^K_i,\bm{W}^V_i, \bm{W}^O_i}(\bm{X})
27
+ \end{equation}$$ where $d_h= d / N_h$ is the hidden size of each head. $\mathrm{Att}_{\bm{W}^Q_i,\bm{W}^K_i,\bm{W}^V_i, \bm{W}^O_i}(\bm{X})$ is the bilinear self-attention $$\begin{flalign}
28
+ & \mathrm{Att}_{\bm{W}^Q_i,\bm{W}^K_i,\bm{W}^V_i, \bm{W}^O_i}(\bm{X}) = \nonumber \\
29
+ & \mathrm{softmax}(\frac{\bm{X}(\bm{W}^Q_i)^\top\bm{W}^K_i\bm{X}^\top}{\sqrt{d}})\bm{X}(\bm{W}^V_i)^\top \bm{W}^O_i
30
+ \end{flalign}$$
31
+
32
+ Each transformer contains a fully connected feed-forward network (FFN) following MHA. It consists of two linear transformations with a GeLU activation in between $$\begin{flalign}
33
+ \textrm{FFN}&_{\bm{W}_1,\bm{b}_1,\bm{W}_2,\bm{b}_2}(\bm{X}) = \nonumber \\
34
+ & \textrm{GeLU}(\bm{X} \bm{W}_1 + \bm{b}_1)\bm{W_2} + \bm{b}_2
35
+ \end{flalign}$$ where $\bm{W}_1 \in \mathbb{R}^{d\times d_{ff}}$, $\bm{W}_2 \in \mathbb{R}^{d_{ff}\times d}$, $\bm{b}_1 \in \mathbb{R}^{d_{ff}}$, $\bm{b}_2 \in \mathbb{R}^{d}$. $d_{ff}$ is the FFN hidden size. The adding operations are broadcasted along the sequence length dimension $n$.
36
+
37
+ With the hidden size fixed, the size of a transformer can be reduced by removing attention heads or removing intermediate neurons in the FFN layer (decreasing $d_{ff}$, which is mathematically equivalent to removing columns from $\bm{W}_1$ and rows from $\bm{W}_2$). Following @DBLP:conf/nips/MichelLN19, we sort all the attention heads and FFN neurons according to their proxy importance scores and then remove them iteratively.
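+
+ The structural part of this operation is simple index selection; a minimal sketch (PyTorch; names are ours, not the TextPruner API):
+
+ ```python
+ import torch
+
+ def prune_ffn_neurons(W1, b1, W2, keep_idx):
+     """Drop FFN intermediate neurons not listed in keep_idx.
+
+     W1: [d, d_ff], b1: [d_ff], W2: [d_ff, d] as in the equations above.
+     Removing column i of W1 together with row i of W2 (and b1[i])
+     deletes intermediate neuron i without changing the hidden size d.
+     """
+     W1p = W1.index_select(1, keep_idx)   # keep selected columns
+     b1p = b1.index_select(0, keep_idx)
+     W2p = W2.index_select(0, keep_idx)   # keep the matching rows
+     return W1p, b1p, W2p
+ ```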
38
+
39
+ A commonly used importance score is the sensitivity of the loss with respect to the values of the neurons. We denote a set of neurons or their outputs as $\bm\Theta$. Its importance score is computed by $$\begin{equation}
40
+ \label{ISeq}
41
+ \mathrm{IS}(\bm\Theta) = \mathbb{E}_{x\sim X}\left\lvert \frac{\partial \mathcal{L}(x)}{\partial\bm\Theta}\bm\Theta \right\rvert
42
+ \end{equation}$$ The expression inside the absolute value is the first-order Taylor approximation of the loss $\mathcal{L}$ around $\bm\Theta=0$. Taking $\bm\Theta$ to be the output of an attention head $h_i$, $\mathrm{IS}(\bm\Theta)$ gives the importance score of head $i$; taking $\bm\Theta$ to be the set of the $i$-th column of $\bm{W}_1$, the $i$-th row of $\bm{W}_2$, and the $i$-th element of $\bm{b}_1$, $\mathrm{IS}(\bm\Theta)$ gives the importance score of the $i$-th intermediate neuron in the FFN layer.
43
+
44
+ A lower importance score means the loss is less sensitive to the neurons. Therefore, the neurons are pruned in the order of increasing scores. In practice, we use the development set or a subset of the training set to compute the importance score.
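+
+ A minimal sketch of estimating equation [\[ISeq\]](#ISeq){reference-type="eqref" reference="ISeq"} for FFN neurons (PyTorch; `run_forward` is an assumed helper, not part of TextPruner, that runs the model on a batch and returns the intermediate FFN activation together with the loss):
+
+ ```python
+ import torch
+
+ def ffn_neuron_importance(model, batches, run_forward):
+     """IS per neuron, averaged |dL/dh * h| over batches (first-order Taylor).
+
+     run_forward(model, batch) -> (h, loss), with h of shape
+     [batch, seq, d_ff] captured before the W2 projection.
+     """
+     scores = None
+     for batch in batches:
+         h, loss = run_forward(model, batch)
+         h.retain_grad()                  # h is a non-leaf tensor
+         loss.backward()
+         contrib = (h.grad * h).abs().sum(dim=(0, 1))   # -> [d_ff]
+         scores = contrib if scores is None else scores + contrib
+         model.zero_grad()
+     return scores / len(batches)   # prune the smallest-scoring neurons first
+ ```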
45
+
46
+ In equation [\[ISeq\]](#ISeq){reference-type="eqref" reference="ISeq"}, the loss $\mathcal{L}$ is usually the training loss. However, there can be other choices of $\mathcal{L}$. We propose to use the Kullback--Leibler divergence to measure the variation of the model outputs: $$\begin{equation}
47
+ \label{ISkl}
48
+ \mathcal{L}_{\mathrm{KL}}(x)=\mathrm{KL}(\mathrm{stopgrad}(q(x))||p(x))
49
+ \end{equation}$$ where $q(x)$ is the original model's prediction distribution and $p(x)$ is the to-be-pruned model's prediction distribution. The `stopgrad` operation stops back-propagating gradients. An increase in $\mathcal{L}_{\mathrm{KL}}$ indicates an increase in the deviation of $p(x)$ from the original prediction $q(x)$. Thus the gradient of $\mathcal{L}_{\mathrm{KL}}$ reflects the sensitivity of the model to the values of the neurons. Evaluation of $\mathcal{L}_{\mathrm{KL}}$ does not require label information; therefore the pruning process can be performed in a self-supervised way where the unpruned model provides the soft labels $q(x)$. We call this method *self-supervised pruning*. TextPruner supports both supervised pruning (where $\mathcal{L}$ is the training loss) and self-supervised pruning. We compare them in the experiments.
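+
+ A minimal sketch of the self-supervised objective (PyTorch; `detach()` plays the role of `stopgrad`):
+
+ ```python
+ import torch.nn.functional as F
+
+ def kl_pruning_loss(logits_pruned, logits_orig):
+     """L_KL = KL(stopgrad(q) || p); no labels are needed."""
+     q = F.softmax(logits_orig.detach(), dim=-1)    # soft labels q(x)
+     log_p = F.log_softmax(logits_pruned, dim=-1)   # to-be-pruned model p(x)
+     # F.kl_div(input=log p, target=q) computes sum q * (log q - log p)
+     return F.kl_div(log_p, q, reduction="batchmean")
+ ```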
2203.16421/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2203.16421/paper_text/intro_method.md ADDED
@@ -0,0 +1,53 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ There is increasing interest in training embodied AI agents that interact with the world based on visual perception. Recently, Batra et al. [2] introduced rearrangement as a challenge bringing together machine learning, vision, and robotics. Common household tasks such as "put dishes in the cupboard" or "get cup from the cabinet" can be viewed as object rearrangement. A key challenge in such tasks is identifying which parts can be opened and how they open.
4
+
5
+ To address this problem, we introduce the *openable part detection* (OPD) task, where the goal is to identify openable parts and their motion type and parameters (see Figure 1). We focus on predicting the openable parts ("door", "drawer", and "lid") of an object, and their motion parameters, from a single RGB image. More specifically, we focus on container objects (e.g., cabinets, ovens, etc.). Containers need to be opened to find and reach hidden objects, or to put objects away. Methods that can identify what can be opened and how offer a useful abstraction that can be used in more complex tasks [20, 25–27, 35].
6
+
7
+ Prior work [9, 13, 41] has looked at predicting motion types and parameters from 3D point clouds. Point-cloud based methods rely on depth information, often sampled from a reconstructed mesh that aggregates information from multiple views. In addition, Li et al. [13] assumes that the kinematic chain and the part structure of the objects are given. To handle different kinematic structures, they train a separate model for each structure. This approach is not scalable to the wide variety of structures found in the real world, even for objects in the same category. For example, cabinets may have two, three or four drawers.
8
+
9
+ We study the task of identifying openable parts and motion parameters from a single-view image, and investigate whether a structure-agnostic image-based approach can be effective at this task. Our approach OPDRCNN extends instance segmentation networks (specifically MaskRCNN) to identify openable parts and predict their motion parameters.
10
+
11
+ Table 1. Summary of prior work on motion parameter estimation (motion type, axis, and rotation origin). We indicate the input modality, whether the method has cross-category generalization (CC), whether part segmentations are predicted (Seg) as opposed to using ground-truth parts, whether object pose (OP) and part state (PS) are predicted. Most prior work takes point cloud (PC) inputs. In contrast, our input is single-view images (RGB, D, or RGB-D).
12
+
13
+ | Method | Input | CC | Seg | OP | PS | #cat | #obj | #part |
14
+ |-------------------------|---------------------|--------------|--------------|--------------|--------------|------|-------|-------|
15
+ | Snapshot [8] | 3D mesh | | | | | | 368 | 368 |
16
+ | Shape2Motion [34] | PC | $\checkmark$ | $\checkmark$ | | | 45 | 2440 | 6762 |
17
+ | RPMNet [41] | PC | $\checkmark$ | $\checkmark$ | | | | 969 | 1420 |
18
+ | DeepPartInduction [43] | Pair of PCs | | $\checkmark$ | | | 16 | 16881 | |
19
+ | MultiBodySync [10] | Multiple PCs | | $\checkmark$ | | | 16 | | |
20
+ | ScrewNet [11] | Depth video | $\checkmark$ | | | | 9 | 4496 | 4496 |
21
+ | Liu et al. [16] | RGB video | $\checkmark$ | $\checkmark$ | | | 3 | | |
22
+ | ANCSH [13] | Single-view PC | | $\checkmark$ | $\checkmark$ | $\checkmark$ | 5 | 237 | 343 |
23
+ | Abbatematteo et al. [1] | RGB-D | | $\checkmark$ | | | 5 | 6 | 8 |
24
+ | VIAOP [44] | RGB-D | $\checkmark$ | | | | 6 | 149 | 618 |
25
+ | OPDRCNN (ours) | Single-view RGB(-D) | $\checkmark$ | $\checkmark$ | $\checkmark$ | $\checkmark$ | 11 | 683 | 1343 |
26
+
27
+ This structure-agnostic approach allows us to more easily achieve cross-category generality. That is, a single model can tackle instances from a variety of object categories (e.g. cabinets, washing machines). In addition, we investigate whether depth information is helpful for this task.
28
+
29
+ In summary, we make the following contributions: i) we propose the OPD task for predicting openable parts and their motion parameters from single-view images; ii) we contribute two datasets of objects annotated with their openable parts and motion parameters: OPDSynth and OPDReal; and iii) we design OPDRCNN: a simple image-based, structure-agnostic neural architecture for openable part detection and motion estimation across diverse object categories. We evaluate our approach with different input modalities and compare against baselines to show which aspects of the task are challenging. We show that depth is not necessary for accurate part detection and that we can predict motion parameters from RGB images. Our approach significantly outperforms baselines especially when requiring both accurate part detection and part motion parameter estimation.
30
+
31
+ # Method
32
+
33
+ To address openable part detection, we leverage an instance segmentation network to identify openable parts by treating each part as an 'object instance'. Specifically, we use Mask-RCNN [7] and add additional heads for predicting the motion parameters. Mask-RCNN receives an image and uses a backbone network for feature extraction and a region proposal network (RPN) to propose regions of interest (ROI), which are fed into branches that refine the bounding box and predict the mask and category label. By training Mask-RCNN on our OPD dataset, we can detect and segment openable parts. We attach additional branches to the output of the RoiAlign module to predict motion parameters. We consider two variants: i) OPDRCNN-C, which directly predicts the motion parameters in camera coordinates; and ii) OPDRCNN-O, which predicts the extrinsic parameters (the object pose, in our single-object setting) and the motion parameters in world coordinates (the canonical object coordinates). Figure 3 shows the overall structure of our approach. For all models we use a cross-entropy loss for categorical predictions and a smooth L1 loss with β = 1 when regressing real values (see supplement).
34
+
35
+ OPDRCNN-C. For OPDRCNN-C, we add separate fully-connected layers to the RoiAlign module to directly predict the motion parameters. The original MaskRCNN branches predict the part label l<sup>i</sup> and part bounding box delta δ<sup>i</sup> for each part p<sup>i</sup> in the box head. The box delta δ<sup>i</sup> is combined with the box proposal from the RPN module to calculate the final output bounding box b<sup>i</sup>. We add additional branches to predict the motion parameters φ<sup>i</sup> = [c<sup>i</sup>, a<sup>i</sup>, o<sup>i</sup>] (motion category, motion axis, motion origin) in the camera coordinate frame.
36
+
37
+ ![](_page_4_Figure_0.jpeg)
38
+
39
+ Figure 3. Illustration of the network structure for our OPDRCNN-C and OPDRCNN-O architectures. We leverage a MaskRCNN backbone to detect openable parts. Additional heads are trained to predict motion parameters for each part.
40
+
41
+ We use the smooth L1 loss for the motion origin and motion axis, and the cross-entropy loss for the part joint type. Note that we only apply the motion origin loss for revolute joints, since the motion origin is not meaningful for prismatic joints.
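+
+ A minimal sketch of these per-part motion losses (PyTorch; the revolute label id and tensor shapes are assumptions for illustration):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ REVOLUTE = 0  # assumed label id for revolute joints
+
+ def motion_losses(type_logits, axis_pred, origin_pred,
+                   type_gt, axis_gt, origin_gt):
+     loss_type = F.cross_entropy(type_logits, type_gt)
+     loss_axis = F.smooth_l1_loss(axis_pred, axis_gt, beta=1.0)
+     rev = type_gt == REVOLUTE        # origin is meaningless for prismatic
+     loss_origin = (F.smooth_l1_loss(origin_pred[rev], origin_gt[rev], beta=1.0)
+                    if rev.any() else axis_pred.new_zeros(()))
+     return loss_type, loss_axis, loss_origin
+ ```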
42
+
43
+ OPDRCNN-O. For OPDRCNN-O, we add additional layers to predict the 6 DoF object pose parameters so that we can establish an object coordinate frame within which to predict motion axes and origins for openable parts. Following prior work [30], we define the object coordinate frame to have a consistent up and front orientation. The motivation is that motion origins and axes in object coordinates are more consistent than in camera coordinates, given that the same articulation can be observed from different camera viewpoints. We are only dealing with a single object per image and consistent poses are available for each annotated object, so predicting the object pose is equivalent to predicting the extrinsic parameters of the camera pose.
44
+
45
+ We regress these object pose parameters directly from the image features (see Figure 3). We add convolution layers and fully-connected layers to the backbone module and use the image features to regress the extrinsic parameters. For training OPDRCNN-O, we use the same loss function for motion parameters as OPDRCNN-C, but with predicted and ground-truth motion axes and origins in object coordinates. For the extrinsic parameters, we treat the matrix as a vector of length 12 (9 for rotation, 3 for translation) and use the smooth L1 loss.
46
+
47
+ Implementation details. We use Detectron2 [36] to implement our architecture. We initialize the network using weights from a ResNet50 pretrained on ImageNet, and train the network using SGD. Unless otherwise specified, we use a learning rate of 0.001 with linear warmup for 500 steps, then decay the learning rate by 0.1 at 0.6 and 0.8 of the total number of steps, following the recommended learning rate schedule for Detectron2.
50
+
51
+ We first train our model only on the detection and segmentation task with a learning rate of 0.0005 for 30000 steps. Then we pick the best weights for RGB, depth, and RGB-D independently. Because our OPDRCNN-C and OPDRCNN-O have the same structure for detection and segmentation, we load the weights from the best detection and segmentation models and fully train our network with all losses. In both OPDRCNN-C and OPDRCNN-O we use the ratios [1, 8, 8] for the weights of the individual motion loss components: motion category loss, motion axis loss, and motion origin loss, respectively. In OPDRCNN-O we use 15 as the weight for the object pose loss.
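+
+ Putting the stated weights together, the combined objective looks like the following sketch (`loss_det` stands for the summed standard MaskRCNN detection/segmentation losses; the function is ours for illustration):
+
+ ```python
+ def total_loss(loss_det, loss_type, loss_axis, loss_origin, loss_pose=None):
+     # Motion-loss ratios [1, 8, 8] as stated above.
+     loss = loss_det + 1.0 * loss_type + 8.0 * loss_axis + 8.0 * loss_origin
+     if loss_pose is not None:        # OPDRCNN-O only, weight 15
+         loss = loss + 15.0 * loss_pose
+     return loss
+ ```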
52
+
53
+ During training we employ image-space data augmentation to avoid overfitting (random flip, random brightness, and random contrast). During inference, we use greedy non-maximum suppression (NMS) with an IoU threshold of 0.5 and choose the predicted bounding box with the highest score. We use a confidence threshold of 0.05, and allow at most 100 part detections per image.
2205.02455/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-01-14T08:13:39.952Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36" etag="KT87oeylkAVZTpx2NOp-" version="16.2.6" type="onedrive"><diagram id="EjI-qRGQ9dULxblBy3h0" name="Page-1">7Z1tk9o4Esc/DVW7L5Ky5Cf55Twku3V1u5W63FX29s2WBwvwhcGc8SQz+fQrgQxY9gC2kbthlEkmWEjG2D+17O6/WiP37vH5lzxezn7LEj4fUSd5Hrn3I0qpG/jiP1nysikhvu9uSqZ5mmzKnF3B5/QHVxXL0qc04StVtikqsmxepMtq4ThbLPi4qJTFeZ59r1abZPOkUrCMp7xW8Hkcz+ulX9KkmKlS4ji7N37l6XRWfjT1VZPHeFt7U7CaxUn2fe87ux9G7l2eZcXm1ePzHZ/L01c9MR9feXd7ZDlfFKc0+Bos/0G//BmFd49/TqeM30d3n94RV+3nWzx/Ut9ZHW7xUp6EPHtaJFzuhozc2++ztOCfl/FYvvtdXHdRNise5+rtVZFnX7cnS3zL20k6n38pz5Ena8fj2VPOf5FN79mu4GYxXX+k56tWd9k8y0XBIltwueuvvBjLvTpyozydcmM6j1cr9TqJV7Ptwcq9fFbfJH4qMlmULQoFGpFHE8/T6UJsjMV55Lmq8DF+TOcS2l/5/Bsv0nFctlQ7K08bzwv+/OoVIdvrLLoIzx55kb+IKmUDR3WPsneoPXzfkUYjVTbbg8wr6YsV3dPtrnfXX7xQCLTCgTbg4Mm//p0k+mEyCm9HNJiLQ7l9yCucBP9/kiyvT9O71foM34gKhC2fd2+KV1P5/49ReD/yP4i9/bT6S/wmYvtn8U++Tt/JLfW5OoziW4jOz4+DGK+WG4swSZ8lD7c7jA/yU6Fsn8IRdSe+/JEN8jhJxWXee+9h/bPtAXvvBOs/GnrsdNLkWesPm6fB5tVp87wG2DxjrLlArNF91rCSNpnQ8biZtMkkidY81UhLgofA70Faf8zcskmJGfOBMfMbrmwiBnm1meXFLJtmi3j+YVd6O37Kv20Hkerl3DX4Z5YtVZX/8aJ4USdcjTN7gPDntPhDNZev/ytfv/fV1v3z3lv3L+XGQnz7P8odyI29VnJz12y9Vbarj7+VAVFufIoLMdItNlfcoXX4HIfFjtMEmOP44fqdzTmVJ/IwKOK8Z0/5mB80BOoKFXE+5cXx4akOX87ncZF+qx6LAZR8lCgRC1JrkMKeIK2b3uR5/LJXYZmli2K1t+dPsmBnHCmtjsHq5u7jidXLZ4sdw5sD2BG9/SZ9IA/wQL61kRbzbpgzWHsZokGJtAfJaQdSBZvmp+CLQCYAsYxhO8uoVaeRP4RlZChxPvlO8o0C7cLawKgDNGY4MUJJhe9rQcaHsIGu5g5U5LxmA/XqhEQj8zawPInADkJ3QLcNaeO2YWP+mtuGx8FDs9vmgfme70C6bShl7zX6lHcOzj3oA5FWcQ++o0gdhEnM2eQV0kKBGm80rMGYcXHiEDkIoxCaswADZyN6i5U0HiROwptJS/zQjZIm0qIokL5fRKTRENykhRgGT4Ea1vha4nOWeM2oRWHihGETaow+uH3ia+cPrgmzBo0aQ4LakHdqb2H81FFzCfQAGlgJCZiExMMnIQkoBstjJSTnl5B46CQkgZWQXJ+EJMAmIRFHUL+yVxv3b++k7RRCHZ3RsxuW0uCjnt2wbzCgk2fX1wL5pW/rNdeuXn+QwD8lXSIV8IH/6+bVA+HVd6rPGN4Q/JUzARDwZ14tcIRB2EhZCz4pCJ9hS3sKIRfYnW9cPJuSC1wJ0QGoXEDYWjTQOF2gsTebuidkcONIDuoCjtdXD9WGjSNSVf4QlHOfeM2U31LH6U45XyQ3cs7jaOvuFCUfU3lllIWO86KsMZZehHRcFqtqBKyzhCCaG9+twh8OoSMsT4pVNlxVZMavaWjAtQ0hlC/eahuG9ZHCaxtCKFe81TYYDjGi0zaEKDSoVttgHDV4bUMIJUKFkzu3CjFeqNxZBw1c7Bw2iVC1K201NGZgCNFpaLYTfzF4RyqhOOsfObN/RA1wx/0jDNgvjWdWuPk5bWfn8aLJC6Oe5HXyzLEg0KzyANPbxGCAE/O3OMW3BaGgaQ4EqyiheYNpDloMp05PZLoZNV24EB6Ozen1BxIuXKYQrOvd5wl3jhfMed8YdD/TWKYXxYCSBakXSK88RBs2mLpy9pjBBEmZFaKY12mnvJx/ygurxXThJ72EUFM77aQXc07ICNukl9LSWpVKM2iXGmPTQQPXqDAHA2dWo2KcNHiNCoMS3lmNiuEQHjqNCqNIUMMqHbjU8VNHDV6jwlAo76xGxTBo4BoV1iS706601agMk+fFo3WrM3SiF4ZDG4f1XuqCvV41eZwPPMZRt25qcISkrCDqvAEJplQfxwMSUd+ARE8i8UzUtYKoYcljIIvH+JogyneH0AK4DaMqggCuNbtg8KOYp+tX48Cm4Eckw96H3yZj6MAtiIpVT64Q0EpyheP1g3AIzvGIu82rdK6a8ghE1qjnqwvcIylHqJ51YZBbGTza7k6yxn737P05B7LbsEvYMRyrQyB1cV2o2KaeqSNwoFVdDEpDaOU2Qybq8Al4GAeHfNDqbUyjFjBwEQSUhNDqbYbNnxB40GrVCEpFaPU2w6IWutADaBlmt3qbq9Lb6KB5AfTwGdEG0LRLbQU3hpbZ0gQ3rGHaz9CCmwiH0A/rzdQFC270Rd2iENodEUHl2LN+r+FWRCUOAb9vt6s8X+Nde22V5wa96sCg4XDkW7eXcaNGHOAnROriSQhkRVPmRFPRyckzIpBl56gmmoqiIVb2cLFmD7L4g+HfV3vVzxx7OBeWO5HHfrKPq5RqtyAPRK1KNal25A2hVi09VNgwN5S70oJe99fAmdi6ZxoDe29QWdcCmQCDbSSEDiH+9BqetlHclVqRc3twYSWhEYpsf1bVcnZPEq0n+3MCcLUBlFzPClsMR5i1mB8hHriPHIVezypbDJPGGLSD3GlS62mX2ipbhkklg0DZQsvZF1bZcm3KFl1VB65skQ8uMKxZZctwcx/glS3UgVLrWWXLkKSBK1uohzNLhXVslYZAXfmjji3qgHhk9dn4pFzi4NTp+4QEg8S3cGapGCgbiylhwYlRrWPBMaAOA5OMqLZgbDWEcbyBw8gQHQZRwov9DmO7C1R36ZsysVt3AZGheYg0mNcjQ7ts/Pvmbex5n45HGGlehnaEri64XjJ5IPpfPWPoMDI0H6na0pAMzYJed66CmdjyQO0knyub5FPP2IVgmo8DNXfRCoGGTQVBHB84F4RyDSJgDas441LjCTXWwGVA6jkJmjUrAzJMGgIZUJO2UbvUVgZkaHkxjDIgHPpDrLdTFywD0hezg5
cBERzLDiMl7UJlQLVV1OFlQARK2mhlQIMuow4uAyIUA2jW92XeqMF7vgiKTITW82V80WEEni8fT1oDK/syGWUiJ0eZSN9wak8i614KFERaHoF4dEDWlmMguiofz7q3VleFA3/SV4Xb0xzjSe0Brqs6E0ftqQciD0TQymB0VXim+SDQVb010EHW9WTR4Yk+xxs47hDr1/p4ZgaZzz5mO0bd6zl0x4hqM4DY4RlA9QaDZD0rzyO4nw6rluNSww+sQQ8JrhsiKPSQVjdk2CcMrxvym3RDG6xWy3hxElhBE1gVXMfxXPD0r5IrMQjHe2TtKC6bbkrEN9ocQ7XY5HHxHPy4BF2FPKj1qJstVms6BBzU+RSv5Dj4H+nxiBdjvjp4rEd39/GpeMp51x3+OytiidfvT48PXBqgbCJ+7T5G7slZnwY5kcGRDdNHLj/+p89//f6zsDnr4ranWrNCbW1PxdpUb//cqmGgZHNJx+liqm4OZdcXdmF+o5Rmj2mSrO9/Z1me/hBN4/Jz5/EDn4t734TnmvbtHEaE0ooR8dzofV2a4YZ+3Y5sC89vSAI8c5U6OTOrjw5HHhyq2kQdmvM8NjQLMNs9SJhPNeue+nihag79eLGNearO4tKDDxeUsD7V/SG8VwGiKG7/sIHtaSf2tFLmebynhTC5CbSeVs3LUa8esj7VS+GO4a6GJx7SSTDRfVBr5hlxBzxrVzs5f3rYNw9CN2ey7gIL3IOdp1a/6ko+Gg4ng7ieA5yu527qJGI724mdjQWndjYGk9OtXDtcdYaAHu5stfoqfc6r9bUQqFbfVGdDmtCq08hmO9upna3FyiAgI9t2wuwuvdvhaFCtASFHHtn0sU1rYKq/Yc2h1SW3qO1vJ/a3NuleQNwj+kMbKR8zXx2tmNO3QTUlqqn+hidDWKf+Zsc302oHkP7G9P5WHvGrj2LU6dlA01O07W9iM89kpGpXPY+Xs9+yhMsafwM=</diagram></mxfile>
2205.02455/paper_text/intro_method.md ADDED
@@ -0,0 +1,111 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Emotions are intrinsic to humans; they guide behavior and are indicative of the underlying thought process [\(Minsky,](#page-10-0) [2007\)](#page-10-0). Consequently, understanding and recognizing emotions is vital for developing AI technologies (e.g., personal digital assistants) that interact directly with humans. During a conversation between a number of people, there is a constant ebb and flow of emotions experienced and expressed by each person. The task of multimodal emotion recognition addresses the problem of monitoring the emotions expressed (via various modalities, e.g., video (face), audio (speech)) by individuals in different settings such as conversations.
4
+
5
+ Emotions are physiological, behavioral, and communicative reactions to cognitively processed stimuli [\(Planalp et al.,](#page-10-1) [2018\)](#page-10-1). Emotions are often a result of internal physiological changes, and these physiological reactions may not be noticeable by others and are therefore intra-personal.
6
+
7
+ <span id="page-0-0"></span>![](_page_0_Figure_9.jpeg)
8
+
9
+ Figure 1: An example conversation between two speakers, with corresponding emotions evoked for each utterance.
10
+
11
+ For example, in a conversational setting, an emotion may be a communicative reaction that has its origin in a sentence spoken by another person, acting as a stimulus. The emotional states expressed in utterances correlate directly with the context; for example, if the underlying context is about a happy topic like celebrating a festival or describing a vacation, there will be more positive emotions like joy and surprise. Consider the example shown in Figure [1,](#page-0-0) where the context depicts an exciting conversation. Speaker-1 being excited about his admission affects the flow of emotions in the entire context. The emotion states of Speaker-2 show the dependency on Speaker-1 in U<sub>2</sub>, U<sub>4</sub> and U<sub>6</sub>, and maintain the intra-personal state depicted in U<sub>8</sub> and U<sub>10</sub> by being curious about the responses of Speaker-1. The example conversation portrays the effect of global information as well as the inter and intra dependency of speakers on the emotional states of the utterances. Moreover, emotions are a multimodal phenomenon; a person takes cues from different modalities (e.g., audio, video) to infer the emotions of others, since, very often, the information in different modalities complements each other. In this paper, we leverage these intuitions and propose COGMEN: a COntextualized Graph neural network based Multimodal Emotion recognitioN architecture that addresses both the effects of context on the utterances and the inter and intra dependency of speakers, for predicting the per-utterance emotion of each speaker during the conversation. There has been a lot of work on unimodal (text only) prediction, but our focus is on multimodal emotion prediction. As is done in the literature on multimodal emotion prediction, we do not focus on comparison with unimodal models. As shown via experiments and ablation studies, our model leverages both sources of information (i.e., local and global) to give state-of-the-art (SOTA) results on the multimodal emotion recognition datasets IEMOCAP and MOSEI. In a nutshell, we make the following contributions in this paper:
12
+
13
+ - We propose a Contextualized Graph Neural Network (GNN) based Multimodal Emotion Recognition architecture for predicting per utterance per speaker emotion in a conversation. Our model leverages both local and global information in a conversation. We use GraphTransformers [\(Shi et al.,](#page-11-0) [2021\)](#page-11-0) for modeling speaker relations in multimodal emotion recognition systems.
14
+ - Our model gives SOTA results on the multimodal Emotion recognition datasets of IEMO-CAP and MOSEI.
15
+ - We perform a thorough analysis of the model and its different components to show the importance of local and global information along with the importance of the GNN component. We release the code for models and experiments: [https://github.](https://github.com/Exploration-Lab/COGMEN) [com/Exploration-Lab/COGMEN](https://github.com/Exploration-Lab/COGMEN)
16
+
17
+ # Method
18
+
19
+ In a conversation involving different speakers, there is a continuous ebb and flow in the emotions of each of the speakers, usually triggered by the context and reactions of other speakers. Inspired by this intuition, we propose a multimodal emotion prediction model that leverages contextual information, inter-speaker and intra-speaker relations in a conversation.
20
+
21
+ In our model, we leverage both the context of dialogue and the effect of nearby utterances. We model these two sources of information via two means: 1) Global Information: How to capture the impact of underlying context on the emotional state of an utterance? 2) Local information: How to establish relations between the nearby utterances that preserve both inter-speaker and intra-speaker dependence on utterances in a dialogue?
22
+
23
+ Global Information: We want a unified model that can capture the underlying context and handle its effect on each utterance present in the dialogue. A transformer encoder [\(Vaswani et al.,](#page-11-15) [2017\)](#page-11-15) architecture is a suitable choice for this goal. Instead of following the conventional sequential encoding by adding positional encodings to the input, in our approach a simple transformer encoder without any positional encodings leverages the entire context to efficiently generate distributed representations (features) corresponding to each utterance. The transformer facilitates the flow of information from all utterances when predicting the emotion for a particular utterance.
24
+
25
+ Local Information: The emotion expressed in an utterance is often triggered by the information in neighboring utterances. We establish relations between the nearby utterances in a way that is capable of capturing both inter-speaker and intra-speaker effects of a stimulus on the emotion state of an utterance. Our approach comes close to DialogueGCN [\(Ghosal et al.,](#page-9-6) [2019\)](#page-9-6), and we define a graph where each utterance is a node, and directed edges represent various relations. We define relations (directed edges) between nodes as $R_{ij} = u_i \rightarrow u_j$, where the direction of the arrow represents the spoken order of utterances. We categorize the directed relations into two types: self-dependent relations between the utterances spoken by the same speaker, $\mathcal{R}_{intra}$, and inter-relations between the utterances spoken by different speakers, $\mathcal{R}_{inter}$. We propose to use a Relational GCN [\(Schlichtkrull et al.,](#page-11-16) [2018\)](#page-11-16) followed by a GraphTransformer [\(Shi et al.,](#page-11-0) [2021\)](#page-11-0) to capture the dependency defined by the relations.
26
+
27
+ Figure [2](#page-3-0) shows the detailed architecture. The input utterances go as input to the Context Extractor module, which is responsible for capturing the global context. The features extracted for each utterance by the context extractor form a graph based on interactions between the speakers. The graph goes as input to a Relational GCN, followed by a GraphTransformer, which uses the formed graph to capture the inter- and intra-relations between the utterances. Finally, two linear layers acting as an emotion classifier use the features obtained for all the utterances to predict the corresponding emotions.
28
+
29
+ <span id="page-3-0"></span>![](_page_3_Figure_0.jpeg)
30
+
31
+ Figure 2: The proposed model (COGMEN) architecture.
32
+
35
+ **Context Extractor:** Context Extractor takes concatenated features of multiple modalities (audio, video, text) as input for each dialogue utterance $(u_i; i = 1, ..., n)$ and captures the context using a transformer encoder. The feature vector for an utterance $u_i$ with the input features corresponding to available modalities, audio $(u_i^{(a)} \in \mathbb{R}^{d_a})$ , text $(u_i^{(t)} \in \mathbb{R}^{d_t})$ and video $(u_i^{(v)} \in \mathbb{R}^{d_v})$ is:
36
+
37
+ $$\mathbf{x}_{i}^{(atv)} = [u_{i}^{(a)} \oplus u_{i}^{(t)} \oplus u_{i}^{(v)}] \in \mathbb{R}^{d}$$
38
+
39
+ where $d = d_a + d_t + d_v$ . The combined features matrix for all utterances in a dialogue is given by:
40
+
41
+ $$\mathbf{X} = \mathbf{x}^{(atv)} = [\mathbf{x}_1^{(atv)}, \mathbf{x}_2^{(atv)}, \dots, \mathbf{x}_n^{(atv)}]^T$$
42
+
43
+ We define a Query, a Key, and a Value vector for encoding the input features $\mathbf{X} \in \mathbb{R}^{n \times d}$ as follows:
44
+
45
+ $$Q^{(h)} = \mathbf{X}W_{h,q},$$
46
+
47
+ $$K^{(h)} = \mathbf{X}W_{h,k},$$
48
+
49
+ $$V^{(h)} = \mathbf{X}W_{h,v},$$
50
+
51
+ where, $W_{h,q}, W_{h,k}, W_{h,v} \in \mathbb{R}^{d \times k}$
52
+
53
+ The attention mechanism captures the interaction between the Key and Query vectors to output an attention map $\alpha^{(h)}$ , where $\sigma_j$ denotes the softmax function over the row vectors indexed by j:
54
+
55
+ $$\alpha^{(h)} = \sigma_j \left( \frac{Q^{(h)} (K^{(h)})^T}{\sqrt{k}} \right)$$
56
+
57
+ where $\alpha^{(h)} \in \mathbb{R}^{n \times n}$ represents the attention weights for a single attention head (h). The obtained attention map is used to compute a weighted sum of the values for each utterance:
58
+
59
+ $$\begin{aligned} \operatorname{head}^{(h)} &= \alpha^{(h)}(V^{(h)}) \in \mathbb{R}^{n \times k} \\ \mathbf{U}' &= [\operatorname{head}^{(1)} \oplus \operatorname{head}^{(2)} \oplus \ldots \operatorname{head}^{(H)}] W^o \end{aligned}$$
60
+
61
+ where, $W^o \in \mathbb{R}^{kH \times d}$ and $H$ represents the total number of heads in multi-head attention. Note $\mathbf{U}' \in \mathbb{R}^{n \times d}$. We add a residual connection from $\mathbf{X}$ and apply LayerNorm, followed by a feed-forward and Add & Norm layer:
62
+
63
+ $$\mathbf{U} = \text{LayerNorm} \left( \mathbf{X} + \mathbf{U}'; \gamma_1, \beta_1 \right);$$
64
+
65
+ $$\mathbf{Z}' = \text{ReLU} \left( \mathbf{U}W_1 \right) W_2;$$
66
+
67
+ $$\mathbf{Z} = \text{LayerNorm} \left( \mathbf{U} + \mathbf{Z}'; \gamma_2, \beta_2 \right);$$
68
+
69
+ where, $\gamma_1, \beta_1 \in \mathbb{R}^d$ , $W_1 \in \mathbb{R}^{d \times m}$ , $W_2 \in \mathbb{R}^{m \times d}$ , and $\gamma_2, \beta_2 \in \mathbb{R}^d$ . The transformer encoder provides features corresponding to every utterance in a dialogue $([\mathbf{z}_1, \mathbf{z}_2, \dots, \mathbf{z}_n]^T = \mathbf{Z} \in \mathbb{R}^{n \times d})$ .
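+
+ Since the encoder is a stock transformer applied *without* positional encodings, the whole Context Extractor can be sketched in a few lines (PyTorch; the sizes are illustrative, not the paper's hyperparameters):
+
+ ```python
+ import torch.nn as nn
+
+ d, H, num_layers = 768, 8, 2    # d = d_a + d_t + d_v (example value)
+ context_extractor = nn.TransformerEncoder(
+     nn.TransformerEncoderLayer(d_model=d, nhead=H, batch_first=True),
+     num_layers=num_layers)
+
+ # X: [batch, n, d] concatenated utterance features; no positional
+ # encoding is added, so attention sees the dialogue as an unordered
+ # context. Z = context_extractor(X) has shape [batch, n, d].
+ ```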
70
+
71
+ **Graph Formation:** A graph captures inter and intra-speaker dependency between utterances. Every utterance acts as a node of a graph that is connected using directed relations (past and future relations). We define relation types as speaker to speaker. Formally, consider a conversation between M speakers defined as a dialogue $\mathcal{D} = \{\mathcal{U}^{S_1}, \mathcal{U}^{S_2}, \dots, \mathcal{U}^{S_M}\}$, where $\mathcal{U}^{S_1} = \{u_1^{(S_1)}, u_2^{(S_1)}, \dots, u_n^{(S_1)}\}$ represents the set of utterances spoken by speaker-1. We define intra relations between the utterances spoken by the same speaker, $\mathcal{R}_{intra} \in \{\mathcal{U}^{S_i} \rightarrow \mathcal{U}^{S_i}\}$, and inter relations between the utterances spoken by different speakers, $\mathcal{R}_{inter} \in \{\mathcal{U}^{S_i} \rightarrow \mathcal{U}^{S_j}\}_{i \neq j}$. We further consider a window size and use $\mathcal{P}$ and $\mathcal{F}$ as hyperparameters to form relations between the past $\mathcal{P}$ utterances and future $\mathcal{F}$ utterances for every utterance in a dialogue. For instance, $\mathcal{R}_{intra}$ and $\mathcal{R}_{inter}$ for utterance $u_i^{(S_1)}$ (spoken by speaker-1) are defined as:
74
+
75
+ $$\begin{split} \mathcal{R}_{intra}(u_i^{(S_1)}) &= \{ \, u_i^{(S_1)} \leftarrow u_{i-\mathcal{P}}^{(S_1)} \ldots u_i^{(S_1)} \leftarrow u_{i-1}^{(S_1)}, \\ u_i^{(S_1)} \leftarrow u_i^{(S_1)}, u_i^{(S_1)} \rightarrow u_{i+1}^{(S_1)} \ldots u_i^{(S_1)} \rightarrow u_{i+\mathcal{F}}^{(S_1)} \} \\ \mathcal{R}_{inter}(u_i^{(S_1)}) &= \{ \, u_i^{(S_1)} \leftarrow u_{i-\mathcal{P}}^{(S_2)}, \ldots, u_i^{(S_1)} \leftarrow u_{i-1}^{(S_2)}, \\ u_i^{(S_1)} \rightarrow u_{i+1}^{(S_2)}, \ldots, u_i^{(S_1)} \rightarrow u_{i+\mathcal{F}}^{(S_2)} \} \end{split}$$
76
+
77
+ where ← and → represent the past and future relation type respectively (example in Appendix [F\)](#page-15-0).
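+
+ A minimal sketch of the edge construction (plain Python; for simplicity it collapses the relation set to four types, intra/inter crossed with past/future, rather than enumerating speaker pairs):
+
+ ```python
+ def dialogue_edges(speakers, P, F):
+     """(src, dst, relation_id) triples for one dialogue.
+
+     speakers: speaker id per utterance; P, F: past/future window sizes.
+     Each utterance i receives an edge from every utterance j in its
+     window, with the relation id encoding intra/inter x past/future.
+     """
+     n, edges = len(speakers), []
+     for i in range(n):
+         for j in range(max(0, i - P), min(n, i + F + 1)):
+             intra = int(speakers[i] == speakers[j])
+             past = int(j <= i)            # j == i is the self-edge
+             edges.append((j, i, 2 * intra + past))
+     return edges
+ ```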
78
+
79
+ **Relational Graph Convolutional Network (RGCN):** The vanilla RGCN [\(Schlichtkrull](#page-11-16) [et al.,](#page-11-16) [2018\)](#page-11-16) helps accumulate relation-specific transformations of neighboring nodes, depending on the type and direction of the edges present in the graph, through a normalized sum. In our case, it captures the inter-speaker and intra-speaker dependency over the connected utterances.
80
+
81
+ $$\mathbf{x}_{i}' = \Theta_{\text{root}} \cdot \mathbf{z}_{i} + \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{N}_{r}(i)} \frac{1}{|\mathcal{N}_{r}(i)|} \Theta_{r} \cdot \mathbf{z}_{j}$$
82
+
83
+ where $\mathcal{N}_r(i)$ denotes the set of neighbor indices of node $i$ under relation $r \in \mathcal{R}$, $\Theta_{\text{root}}$ and $\Theta_r$ denote the learnable parameters of the RGCN, $|\mathcal{N}_r(i)|$ is the normalization constant, and $\mathbf{z}_j$ is the utterance-level feature coming from the transformer.
84
+
85
+ **GraphTransformer:** For extracting rich representations from the node features, we use a GraphTransformer [\(Shi et al.,](#page-11-0) [2021\)](#page-11-0). The GraphTransformer adapts vanilla multi-head attention to graph learning by taking into account nodes connected via edges. Given node features $\mathbf{H} = \{\mathbf{x}'_1, \mathbf{x}'_2, \dots, \mathbf{x}'_n\}$ obtained from the RGCN,
86
+
87
+ $$\mathbf{h}_{i}' = \mathbf{W}_{1}\mathbf{x}_{i}' + \sum_{j \in \mathcal{N}(i)} \alpha_{i,j}\mathbf{W}_{2}\mathbf{x}_{j}'$$
88
+
89
+ where the attention coefficients $\alpha_{i,j}$ are computed via multi-head dot-product attention:
90
+
91
+ $$\alpha_{i,j} = \operatorname{softmax}\left(\frac{\left(\mathbf{W}_{3}\mathbf{x}_{i}'\right)^{\top}\left(\mathbf{W}_{4}\mathbf{x}_{j}'\right)}{\sqrt{d}}\right)$$
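+
+ Both graph stages have off-the-shelf counterparts; a minimal sketch with PyTorch Geometric (assuming the four relation types from the edge sketch above; sizes illustrative):
+
+ ```python
+ from torch_geometric.nn import RGCNConv, TransformerConv
+
+ d = 768
+ rgcn = RGCNConv(d, d, num_relations=4)      # relation-aware aggregation
+ graph_tf = TransformerConv(d, d, heads=1)   # attention over graph edges
+
+ def graph_stage(z, edge_index, edge_type):
+     """z: [n, d] utterance features from the Context Extractor."""
+     x = rgcn(z, edge_index, edge_type)      # x'_i from the RGCN equation
+     h = graph_tf(x, edge_index)             # h'_i from the GraphTransformer
+     return h
+ ```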
92
+
93
+ <span id="page-4-0"></span>
94
+
95
+ | Dataset | train / valid | test |
96
+ |---------|---------------|------|
97
+ | IEMOCAP | 120 [5810 (5146+664)] | 31 [1623] |
98
+ | MOSEI | 2249 [16327] / 300 [1871] | 646 [4662] |
100
+
101
+ Table 1: Dataset statistics as number of dialogues [utterances]. For IEMOCAP, train and valid are combined: 120 dialogues whose 5810 utterances split into 5146 train + 664 valid.
102
+
103
+ **Emotion Classifier:** A linear layer over the features extracted by the GraphTransformer ($\mathbf{h}_i'$) predicts the emotion corresponding to the utterance.
104
+
105
+ $$h_i = \text{ReLU}(W_1 \mathbf{h}_i' + b_1)$$
106
+
107
+ $$\mathcal{P}_i = \text{softmax}(W_2 h_i + b_2)$$
108
+
109
+ $$\hat{y}_i = \arg \max(\mathcal{P}_i)$$
110
+
111
+ where $\hat{y}_i$ is the emotion label predicted for the utterance $u_i$.
2205.11775/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2205.11775/main_diagram/main_diagram.pdf ADDED
Binary file (74.6 kB). View file
 
2205.11775/paper_text/intro_method.md ADDED
@@ -0,0 +1,28 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Deep Learning has witnessed widespread adoption in many critical real-world domains such as finance, healthcare, etc. [\(LeCun et al.,](#page-9-0) [2015\)](#page-9-0). Incorporating prior knowledge such as monotonicity in trained models helps in improving the performance and generalization ability of the trained models [\(Mitchell,](#page-9-1) [1980;](#page-9-1) [Dugas et al.,](#page-8-0) [2000\)](#page-8-0). The introduction of structural biases such as monotonicity also makes models more data-efficient, enabling a leap in predictive power on smaller datasets [\(Veličković,](#page-10-0) [2019\)](#page-10-0). Apart from the requirement of having models with high accuracy, there is also a need for transparency and interpretability, and monotonicity helps in partially achieving these requirements [\(Gupta](#page-8-1) [et al.,](#page-8-1) [2016\)](#page-8-1). Due to legal, ethical and/or safety concerns, monotonicity of predictive models with respect to some or all of the inputs is required in numerous domains such as finance (house pricing, credit scoring, insurance risk), healthcare (medical diagnosis, patient medication) and law (criminal sentencing), to list just a few. For example, when using machine learning to predict admission decisions, it may seem unfair to select student X over student Y if Y has a higher score than X, while all other aspects of the two are identical [\(Liu et al.,](#page-9-2) [2020\)](#page-9-2). In another example, one would expect an individual with a higher salary to have a higher loan amount approved, all else being equal [\(Sivaraman et al.,](#page-9-3) [2020\)](#page-9-3). A model without such a monotonic property would not, and certainly should not, be trusted by society to provide a basis for such important decisions.
8
+
9
+ Monotonicity has been an active area of research and the existing methods on the subject can be broadly categorized into two types:
10
+
11
+ - 1. Monotonic architectures by construction: neural architectures guaranteeing monotonicity by construction [\(Archer & Wang,](#page-8-2) [1993;](#page-8-2) [Sill,](#page-9-4) [1997;](#page-9-4) [Daniels & Velikova,](#page-8-3) [2010;](#page-8-3) [Milani Fard et al.,](#page-9-5) [2016;](#page-9-5) [You et al.,](#page-10-1) [2017\)](#page-10-1).
12
+ - 2. Monotonicity by regularization: enforcing monotonicity in neural networks during training by employing a modified loss function or a heuristic regularization term [\(Sill & Abu-Mostafa,](#page-9-6) [1996;](#page-9-6) [Gupta et al.,](#page-8-4) [2019\)](#page-8-4).
13
+
14
+ The simplest method to achieve monotonicity by construction is to constrain the weights of a fully connected neural network to be only non-negative (for non-decreasing variables) or only non-positive (for non-increasing variables), in conjunction with a monotonic activation function; this technique has been known for 30 years [\(Archer & Wang,](#page-8-2) [1993\)](#page-8-2). When used with saturated (bounded) activation functions such as the sigmoid and hyperbolic tangent, these models are difficult to train, i.e. they do not converge to a good solution. On the other hand, when used with non-saturated (unbounded) convex activation functions such as ReLU [\(Nair & Hinton,](#page-9-7) [2010\)](#page-9-7), the resulting models are always convex [\(Liu et al.,](#page-9-2) [2020\)](#page-9-2), severely limiting the applicability of the method in practice.
19
+
20
+ Our main contribution is a modification of the method above which, in conjunction with non-saturated activation functions, is capable of approximating non-convex functions as well: when the original activation function is combined with two additional monotonic activation functions constructed from it in a neural network with constrained weights, the network can approximate any monotone continuous function.
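+
+ A minimal sketch of this idea is given below; constraining weights via an absolute value and splitting the units evenly between the convex activation rho(x) and its concave reflection -rho(-x) are our illustrative choices, not necessarily the paper's exact construction:
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class MonotoneDense(nn.Module):
+     """Weight-constrained dense layer mixing a convex monotone activation
+     rho(x) = ReLU(x) with its concave (still monotone) reflection -rho(-x),
+     so that stacked layers can also model non-convex monotone functions."""
+     def __init__(self, in_dim, out_dim):
+         super().__init__()
+         self.w = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
+         self.b = nn.Parameter(torch.zeros(out_dim))
+
+     def forward(self, x):
+         # |w| >= 0 guarantees monotonicity in every input coordinate
+         y = F.linear(x, self.w.abs(), self.b)
+         half = y.shape[-1] // 2
+         convex = F.relu(y[..., :half])       # rho(x): convex part
+         concave = -F.relu(-y[..., half:])    # -rho(-x): concave part
+         return torch.cat([convex, concave], dim=-1)
+ ```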
21
+
22
+ The resulting model is guaranteed to be monotonic, can be used in conjunction with popular convex monotonic non-saturated activation functions, has no additional parameters compared to a non-monotonic fully-connected network for the same task, and can be trained without any additional requirements on the learning procedure. Experimental results show it exceeds the performance of all other state-of-the-art methods, while being both simpler (in the number of parameters) and easier to train.
23
+
24
+ Our contributions can be summarized as follows:
25
+
26
+ - 1. A modification to an existing constrained neural network layer enabling it to model arbitrary monotonic functions when used with non-saturated monotone convex activation functions such as ReLU, ELU, SELU, and the like.
+ - 2. Experimental comparisons with other recent works showing that the proposed architecture can yield equal or better results than the previous state-of-the-art, with significantly fewer parameters.
+ - 3. A proof showing that the proposed architecture can approximate any monotone continuous function on a compact subset of $\mathbb{R}^n$ for a large class of non-saturated activation functions.
2206.11474/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2206.11474/paper_text/intro_method.md ADDED
@@ -0,0 +1,140 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Conditional image generation, usually class-conditional, aims to generate high-quality images of a specified class. Many generative models are able to perform high-quality conditional generation based on a joint-training scheme, such as Generative Adversarial Networks (GAN) [7,2] or Variational Autoencoders (VAE)
4
+
5
8
+
9
+ ![](_page_1_Figure_2.jpeg)
10
+
11
+ Fig. 1: Visualization of the denoising sampling process. The classifier gradient, a form of class information in conditional generation, quickly converges to 0 under the previous method, causing conditional generation to collapse into unconditional generation, while our method recovers the gradient guidance and succeeds in generating fine-grained features in the subsequent iterations.
12
+
13
+ [27,38]. However, when the condition requirement changes, the generative model must be retrained, which is very inconvenient.
14
+
15
+ Denoising Diffusion Probabilistic Models (DDPM) [10,23,33] are a class of iterative generative models that have recently achieved remarkable performance in unconditional image generation. The flexibility of DDPM [6,32] is that it can easily be extended to conditional variants by introducing an independent noise-aware classifier. Recent works model the prior denoising distribution by training an unconditional DDPM, following the training scheme of Denoising Score Matching [39], and compute the likelihood score by backpropagating the classifier gradient. Dhariwal et al. [6] further proposed a fixed scaling factor to improve the predicted probability of generated samples for DDPM, achieving performance superior to GANs on several image generation benchmarks.
16
+
17
+ In the conditional generation process of DDPM, by backpropagating the gradient of the classification probability to the image, the classifier provides high-level semantic information in the early iterations and gradually strengthens fine-grained features in the subsequent iterations, both of which are indispensable. However, there exists a huge gap between discriminating the class of an image and generating a specific class of image with fine-grained textures. As shown in Fig. 1, the predicted distribution of the classifier for noisy images tends to quickly converge to the desired class distribution, which is a one-hot distribution, leading to the early vanishing of conditional gradient guidance. This is because the incompletely generated image, which is still noisy and lacks fine-grained features, can already be classified easily in the middle of the denoising process. The image is thus considered to have been completely generated and will no longer be guided by the classifier gradient containing class information. As a result, the conditional generation process degrades into an unconditional generation process in the later stages.
24
+
25
+ Therefore, our motivation is to enable the classifier to continuously provide conditional guidance throughout the entire denoising process. We propose two simple but effective schemes, one from the perspective of the sampling procedure and one from the design of classifier training.
26
+
27
+ From the perspective of the sampling procedure, we focus on how to detect gradient vanishing and rescale the gradient, either to avoid gradient vanishing or to recover the gradient when vanishing does happen. We propose the Entropy-Driven conditional Sampling (EDS) method, which adaptively measures the level of gradient vanishing and rescales the gradient guidance to an appropriate level. In the design of classifier training, we propose Entropy-Constraint Training (ECT), which penalizes the classifier when it assigns an overconfident classification probability to a generated noisy image, thus constraining the classifier to provide more gentle guidance.
28
+
29
+ Our contributions can be summarized as follows:
30
+
31
+ - We are the first to identify the problem of vanishing gradient guidance in DDPM-based conditional generation methods, and we point out that class-information guidance should be provided continuously throughout the entire generation process.
+ - We propose EDS to alleviate the vanishing guidance by dynamically measuring and rescaling the gradient guidance. At the training stage of the classifier, to alleviate the vanishing gradient caused by one-hot label supervision, we utilize the discrete uniform distribution to build an entropy-aware optimization term, the Entropy-Constraint Training (ECT) scheme.
+ - We conduct experiments on ImageNet1000 and achieve state-of-the-art FID (Fréchet Inception Distance) results at various resolutions. On ImageNet1000 256×256, with our proposed sampling scheme and trained classifier, pretrained conditional and unconditional DDPM models achieve FID improvements of 10.89% (4.59 to 4.09) and 43.5% (12 to 6.78), respectively.
34
+
35
+ # Method
36
+
37
+ We start by introducing the conditional generation process for diffusion models with classifier guidance (Sect. 3.1). For ease of description, we first introduce our dynamic scaling technique, which recovers gradient guidance adaptively during the sampling process, together with its motivation (Sect. 3.2). Then, utilizing the uniform distribution, which is a denser distribution, we describe the entropy-aware optimization loss that alleviates vanishing conditional guidance from the training perspective (Sect. 3.3).
38
+
39
+ The goal of conditional image generation is to model the probability density p(x|y). To be specific, in class-conditional image generation, y represents the desired class label, which tends to be a one-hot distribution, and x represents the image sample. For diffusion models, this can be implemented by introducing an independent classifier, as shown in Fig. 2.
40
+
41
+ Specifically, the goal is converted into modeling the conditional transition distribution $p(\mathbf{x}_{t-1}|\mathbf{x}_t,\mathbf{y})$ , derived from Markov chain sampling scheme:
42
+
43
+ $$p_{\varphi}(\mathbf{x}_{0}|\mathbf{y}) = \int p_{\varphi}(\mathbf{x}_{0:T}|\mathbf{y}) d\mathbf{x}_{1:T},$$
44
+
45
+ $$p_{\varphi}(\mathbf{x}_{0:T}|\mathbf{y}) = p(\mathbf{x}_{T}) \prod_{t=1}^{T} p_{\varphi}(\mathbf{x}_{t-1}|\mathbf{x}_{t}, \mathbf{y}),$$
46
+ (6)
47
+
48
+ where $\varphi$ represents the model and T is the total length of the Markov chain, which tends to be large. Then, we decompose the conditional transition distribution into two independent terms:
49
+
50
+ $$p_{\varphi}(\mathbf{x}_{t-1}|\mathbf{x}_t, \mathbf{y}) = Zp_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t)p_{\phi}(\mathbf{y}|\mathbf{x}_t), \tag{7}$$
51
+
52
+ where Z is a normalizing constant independent of $\mathbf{x}_{t-1}$, and $\varphi$ can be seen as the combination of the models $\theta$ and $\phi$, as proven theoretically [32,6]. Furthermore, the log density of Eq.(7) can be approximated as a Gaussian distribution [6]:
53
+
54
+ $$\log(p_{\varphi}(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{y})) \approx \log p(\mathbf{z}) + \log Z$$
55
+
56
+ $$\mathbf{z} \sim \mathcal{N}(\frac{1}{\sqrt{\alpha_{t}}}(\mathbf{x}_{t} - \frac{1 - \alpha_{t}}{\sqrt{1 - \bar{\alpha}_{t}}} \boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t})) + \sigma_{t}^{2} \mathbf{g}, \sigma_{t}^{2} \mathbf{I}),$$
57
+ (8)
58
+
59
+ where $\mathbf{g} = s \nabla_{\mathbf{x}_t} \log p_{\phi}(\mathbf{y}|\mathbf{x}_t)$ and $s$ is the gradient scale. $\mathbf{g}$ can be derived by backpropagating the gradient of a pretrained classifier with respect to the noisy data $\mathbf{x}_t$. Usually, $s$ is set to a constant [6] to improve the predicted probability. $\epsilon_{\theta}(\mathbf{x}_t)$ is the pretrained noise estimator of the diffusion model, and $\sigma_t^2$ is the hyperparameter of the unconditional DDPM that controls the variance.
60
+
61
+ In this way, unconditional diffusion models with a parameterized noise estimator $\epsilon_{\theta}$ can be extended to conditional generative models by introducing the condition-aware guidance $\nabla_{\mathbf{x}_t} \log p_{\phi}(\mathbf{y}|\mathbf{x}_t)$, as shown in Fig. 2. Specifically, we start with the prior noise $\mathbf{x}_T \sim \mathcal{N}(0, \mathbf{I})$ and iteratively apply the DDPM-based sampler (Eq.(8)) to generate samples conditioned on the desired class $\mathbf{y}$. The sampler can also be extended to the DDIM iteration method (Eq.(3)). For details of the extension to the conditional process with classifier guidance, refer to Dhariwal *et al.* [37,6].
62
+
63
+ During the conditional generation process, we observe that the guidance provided by the noise-aware classifier tends to vanish prematurely, which can be attributed to the discrepancy between the generative pattern and the discriminative pattern. For example, noisy samples carrying only high-level semantic information, such as contours or colors, may already be assigned a nearly one-hot distribution by the classifier, in which case the gradient guidance becomes weak or vanishes while the samples still lack condition-aware semantic details. In this case, the condition-aware textures of the generated samples are left to the unconditional denoising process (Eq.(3)).
64
+
65
+ ![](_page_6_Figure_2.jpeg)
66
+
67
+ Fig. 2: Pipeline of the entropy-driven sampling process. Sampler denotes a non-parametric iteration method (DDPM or DDIM). All models are pretrained, with no gradient updates during the sampling process.
68
+
69
+ ![](_page_6_Figure_4.jpeg)
70
+
71
+ Fig. 3: The varying gradient vanishing points of different samples. We randomly generate 5000 samples and define the gradient vanishing point as the time step at which the gradient norm drops below 0.15. Premature vanishing of gradients occurs in almost every sample.
72
+
73
+ An intuitive solution is to manually select a time step during the sampling process, after which the semantic details tend to vanish in most instances, and to rescale the conditional gradient by an empirical constant after the selected time step. However, owing to the stochasticity of the generation process of diffusion models [4], the denoising trajectories of generated samples differ from each other, leading to varying initial vanishing points, as shown in Fig. 3. At the same time, considering the learning bias of the classifier for different conditional classes, the appropriate recovery factor for each class may also differ. In summary, an effective scaling factor should depend on the current time step, the class condition, and the stochasticity of the generation process. The experimental design and results for these more intuitive approaches are given in Sect. 4.3.
74
+
75
+ Motivated by the above, we propose a dynamic scaling technique for conditional diffusion generation that recovers the semantic details adaptively for each sample. We notice that when the time step is close to T, the predicted distribution tends to be dense and can be nearly approximated as a uniform distribution. The reason is that the noisy data derived from Eq.(1) can be approximated as random noise, N(0, I), in which state the gradient guidance is evident. As the time step declines towards 0, the noise hidden in the sample is removed
76
+
77
+ ```
+ Require: a pretrained diffusion model \epsilon_\theta(\mathbf{x}_t), classifier p_\phi(\hat{\mathbf{y}}|\mathbf{x}_t),
+          and desired class condition \mathbf{y}
+ 1: \mathbf{x}_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I})
+ 2: for t = T, ..., 1 do
+ 3:   s \leftarrow \gamma * \mathcal{H}(\mathcal{U}(\hat{\mathbf{y}})) / \mathcal{H}(p_\phi(\hat{\mathbf{y}}|\mathbf{x}_t)) if EDS, else s \leftarrow \gamma
+ 4:   if use DDPM then
+ 5:     \mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}) if t > 1, else \mathbf{z} \leftarrow \mathbf{0}
+ 6:     \mathbf{g} \leftarrow s \cdot \nabla_{\mathbf{x}_t} \log p_\phi(\mathbf{y}|\mathbf{x}_t)
+ 7:     \mathbf{x}_{t-1} \leftarrow \frac{1}{\sqrt{\alpha_t}} (\mathbf{x}_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}} \epsilon_\theta(\mathbf{x}_t)) + \sigma_t^2 \mathbf{g} + \sigma_t \mathbf{z}
+ 8:   else if use DDIM then
+ 9:     \hat{\epsilon} \leftarrow \epsilon_\theta(\mathbf{x}_t) - s \cdot \nabla_{\mathbf{x}_t} \log p_\phi(\mathbf{y}|\mathbf{x}_t)
+ 10:    \mathbf{x}_{t-1} \leftarrow \sqrt{\bar{\alpha}_{t-1}} \frac{\mathbf{x}_t - \sqrt{1-\bar{\alpha}_t} \hat{\epsilon}}{\sqrt{\bar{\alpha}_t}} + \sqrt{1-\bar{\alpha}_{t-1}} \hat{\epsilon}
+ 11:  end if
+ 12: end for
+ 13: return \mathbf{x}_0
+ ```
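+
+ Translated into code, one guided DDPM transition (lines 4-7 above) might look like the following sketch; `eps_model` and `classifier_grad` are placeholder names for the pretrained networks, and the schedule terms are assumed to be tensors:
+
+ ```python
+ import torch
+
+ def guided_ddpm_step(x_t, t, s, eps_model, classifier_grad,
+                      alpha_t, alpha_bar_t, sigma_t):
+     """One DDPM transition with scaled classifier guidance (cf. Eq. (8));
+     classifier_grad(x_t, t) returns the gradient of log p_phi(y | x_t)."""
+     g = s * classifier_grad(x_t, t)  # scaled guidance term
+     mean = (x_t - (1 - alpha_t) / torch.sqrt(1 - alpha_bar_t)
+             * eps_model(x_t, t)) / torch.sqrt(alpha_t)
+     z = torch.randn_like(x_t) if t > 1 else torch.zeros_like(x_t)
+     return mean + sigma_t ** 2 * g + sigma_t * z
+ ```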
78
+
79
+ gradually, and the predicted distribution tends towards one-hot, in which case the classifier gradient can no longer provide semantic details for generation. Statistically, entropy captures the sparsity of the predicted distribution, inspiring us to take it into consideration:
80
+
81
+ $$\mathcal{H}(p_{\phi}(\tilde{\mathbf{y}}|\mathbf{x}_{t})) = -\mathbb{E}_{\tilde{\mathbf{y}}|\mathbf{x}_{t}} \log p_{\phi}(\tilde{\mathbf{y}}|x_{t})$$
82
+
83
+ $$= -\sum_{i=1}^{|Y|} p_{\phi}(\tilde{\mathbf{y}}_{i}|\mathbf{x}_{t}) \log p_{\phi}(\tilde{\mathbf{y}}_{i}|\mathbf{x}_{t}),$$
84
+ (9)
85
+
86
+ where $p_{\phi}(\tilde{\mathbf{y}}|\mathbf{x}_t)$ represents the predicted distribution from the classifier, and Y represents the set of all class conditions.
87
+
88
+ In this paper, we utilize $\mathcal{H}(p_{\phi}(\tilde{\mathbf{y}}|\mathbf{x}_t))$ to adaptively fit the varying gradient vanishing time steps. Furthermore, the entropy $\mathcal{H}(p_{\phi}(\tilde{\mathbf{y}}|\mathbf{x}_t))$ can also capture the bias caused by different conditions, owing to its sample-aware rescaling effect. Thus, when we sample conditionally from a pretrained noise estimator $\epsilon_{\theta}$ in Eq.(8), we reformulate the gradient term $\mathbf{g}$ as follows, as shown in Fig. 2:
89
+
90
+ $$\mathbf{g}' = s(x_t, \phi) * \nabla_{\mathbf{x}_t} \log p_{\phi}(\mathbf{y}|\mathbf{x}_t),$$
91
+
92
+ $$s(x_t, \phi) = \gamma * \frac{\mathcal{H}(\mathcal{U}(\tilde{\mathbf{y}}))}{\mathcal{H}(p_{\phi}(\tilde{\mathbf{y}}|\mathbf{x}_t))}$$
93
+ (10)
94
+
95
+ where $\gamma$ is a hyper-parameter balancing the guiding gradient and the entropy-aware scaling factor $s(\mathbf{x}_t, \phi)$. To maintain the numerical range, we renormalize the entropy by its theoretical upper bound $\mathcal{H}(\mathcal{U}(\tilde{\mathbf{y}}))$, where $\mathcal{U}(\tilde{\mathbf{y}})$ denotes the uniform distribution over the class variable, so that the gradients are left almost unscaled when t is close to T.
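+
+ The scaling factor itself is cheap to compute from the classifier's log-probabilities, e.g. (a sketch with our own names):
+
+ ```python
+ import math
+ import torch
+
+ def entropy_scale(log_probs, gamma):
+     """EDS factor s(x_t, phi) = gamma * H(U(y)) / H(p_phi(y|x_t)) (Eq. (10));
+     log_probs: (batch, |Y|) log-softmax output of the noise-aware classifier."""
+     probs = log_probs.exp()
+     entropy = -(probs * log_probs).sum(dim=-1)  # H(p_phi(y|x_t))
+     max_entropy = math.log(log_probs.size(-1))  # H(U(y)) = log |Y|
+     return gamma * max_entropy / entropy        # add a small eps in practice
+ ```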
96
+
97
+ ```
+ Require: training set \mathbb{D}, a neural classifier \phi, total time steps T
+ 1: repeat
+ 2:   (\mathbf{x}_0, \mathbf{y}) \leftarrow sample from \mathbb{D}
+ 3:   t \sim \mathcal{U}(\{1, \ldots, T\})
+ 4:   \mathbf{x}_t \sim q(\mathbf{x}_t|\mathbf{x}_0)   (Eq. 2)
+ 5:   L_{CE} \leftarrow -\mathbf{y} \log p_\phi(\tilde{\mathbf{y}}|\mathbf{x}_t)
+ 6:   if use ECT then
+ 7:     L_{ECT} \leftarrow -\mathcal{H}(p_\phi(\tilde{\mathbf{y}}|\mathbf{x}_t))
+ 8:     Take gradient descent step on \nabla_\phi(L_{CE} + \eta L_{ECT})
+ 9:   else
+ 10:    Take gradient descent step on \nabla_\phi L_{CE}
+ 11:  end if
+ 12: until converged
+ ```
122
+
123
+ From the training perspective, the vanishing gradient can be partly attributed to the label supervision pattern of the noise-aware classifier. Since the one-hot distribution is very sparse and is used to supervise noisy data, the predicted distributions for noisy samples are inclined to converge to one-hot during sampling, so the gradient guidance becomes too weak to generate condition-aware semantic details at the sampling stage.
124
+
125
+ Specifically, given the dataset $(\mathbf{x}_0, \mathbf{y}) \sim \mathbb{D}$ and a prior diffusion process (Eq.(1)), the classifier is trained on noisy data $\mathbf{x}_t$ to build the gradient field of Eq.(8) at each time step. To alleviate the weak guidance caused by this sparsity, we utilize the discrete uniform distribution, which is a dense distribution with maximum entropy, as a perturbing distribution, and introduce an optimization term at the training stage of the classifier to constrain its predicted distribution as follows:
126
+
127
+ $$\mathcal{L}_{ECT}(\mathbf{x}_{t}, \mathbf{y}) = D_{KL}(p_{\phi}(\tilde{\mathbf{y}}|\mathbf{x}_{t})||\mathcal{U}(\tilde{\mathbf{y}}))$$
128
+
129
+ $$= \mathbb{E}_{\tilde{\mathbf{y}}|\mathbf{x}_{t}} \log p_{\phi}(\tilde{\mathbf{y}}|\mathbf{x}_{t}) - \mathbb{E}_{\tilde{\mathbf{y}}|\mathbf{x}_{t}} \log \mathcal{U}(\tilde{\mathbf{y}})$$
130
+
131
+ $$= -\mathcal{H}(p_{\phi}(\tilde{\mathbf{y}}|\mathbf{x}_{t})) + \mathbf{C},$$
132
+ (11)
133
+
134
+ where **C** is a constant term independent of the parameters $\phi$. This loss term is equivalent to maximizing the entropy of the predicted distribution $p(\tilde{\mathbf{y}}|\mathbf{x}_t)$. The whole training loss of the guiding classifier is composed of the normal cross-entropy loss and the entropy-constraint training (ECT) loss, formally given by:
135
+
136
+ $$\mathcal{L}_{tot}(\mathbf{x}_t, \mathbf{y}) = \mathcal{L}_{CE}(\mathbf{x}_t, \mathbf{y}) + \eta \mathcal{L}_{ECT}(\mathbf{x}_t, \mathbf{y}), \tag{12}$$
137
+
138
+ where $\eta$ is a hyper-parameter adjusting the divergence between the predicted label distribution and the uniform distribution.
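+
+ As a sketch, the combined objective of Eq.(12) can be written as follows (names are ours; the constant **C** is dropped since it does not affect gradients):
+
+ ```python
+ import torch.nn.functional as F
+
+ def ect_total_loss(logits, target, eta):
+     """L_tot = L_CE + eta * L_ECT with L_ECT = -H(p_phi(y|x_t))."""
+     log_probs = logits.log_softmax(dim=-1)
+     ce = F.nll_loss(log_probs, target)                       # cross-entropy
+     entropy = -(log_probs.exp() * log_probs).sum(-1).mean()  # H(p_phi)
+     return ce + eta * (-entropy)  # penalizes overconfident predictions
+ ```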
139
+
140
+ Unlike entropy-driven sampling, which adjusts the gradient scale, the proposed training scheme alleviates the vanishing guidance by adjusting the gradient direction in the sampling process. The entropy-constraint training scheme is thus complementary to entropy-driven sampling.
2207.01769/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2207.01769/paper_text/intro_method.md ADDED
@@ -0,0 +1,111 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Approaches that generate saliency or importance maps based on the decision of deep neural networks (DNNs) are critical in several machine learning application areas including explainable AI and weakly supervised object detection and semantic segmentation. High-quality saliency maps increase the understanding and interpretability of a DNN's decision-making process, and can increase the accuracy of segmentation and detection results.
4
+
5
+ Since the development of DNNs, numerous approaches have been proposed to efficiently produce high-quality saliency maps. However, most methods have limited transferability and versatility. Existing methods are designed for DNN models with specific structures (e.g., a global average pooling layer), for certain types of visualisation (for details refer to Sec. ), or to address a specific limitation. For instance, CAM requires a network with global average pooling. Guided backpropagation (Guided-BP) is restricted to gradient-based approaches. Score-CAM seeks to reduce the method's running time, while SmoothGrad aims to generate saliency maps with lower noise.
6
+
7
+ In this work, we propose Saliency Enhancing with Scaling and Sliding (SESS), a model- and method-agnostic black-box extension to existing saliency visualisation approaches. SESS is applied only to the input and output spaces, and thus does not need access to the internal structure and features of DNNs, nor is it sensitive to the design of the base saliency method. It also addresses multiple limitations that plague existing saliency methods. For example, in Fig. , SESS shows improvements when applied to three different saliency methods. The saliency map extracted with the gradient-based approach (Guided-BP) is discriminative but noisy. The activation-based method Grad-CAM and the perturbation-based method RISE generate smooth saliency maps, but lack detail around the target object and fail to precisely separate the target object from the scene. With SESS, the results of all three methods become less noisy and a more discriminative boundary around the target is obtained.
8
+
9
+ [!th]
10
+ \centering
11
+ \includegraphics[width=\linewidth]{images/fig1/movtivation}
12
+ \caption{Example results of three well-known deep neural network visualisation methods with and without SESS. Each of these methods represents one type of saliency map extraction technique. With SESS, all methods generate less noisy and more discriminative saliency maps. The results are extracted with ResNet50, and layer4 is used for Grad-CAM. Target ImageNet class ID is 444 (bicycle-built-for-two).}
13
+
14
+ SESS addresses the following limitations of existing approaches:
15
+
16
+ - {\bf{Weak scale invariance:}} Several studies claim that generated saliency maps are inconsistent when there are scale differences, and we also observe that generated saliency maps are less discriminative when the target objects are comparatively small (see Fig. and Fig. ).
+ - {\bf{Inability to detect multiple occurrences:}} Some deep visualisation methods (e.g., Grad-CAM) fail to capture multiple occurrences of the same object in a scene (see Fig. ).
+ - {\bf{Impacted by distractors:}} Extracted saliency maps frequently highlight incorrect regions when distractors exist. This is especially true when the class of the distractor returns a high confidence score, or is correlated with the target class.
+ - {\bf{Noisy results:}} Saliency maps extracted with gradient-based visualisation approaches appear visually noisy, as shown in Fig. .
+ - {\bf{Less discriminative results:}} Activation-based approaches (e.g., Grad-CAM) tend to be less discriminative, often highlighting large regions around the target such that background regions are incorrectly captured as salient.
+ - {\bf{Fixed input size requirements:}} Neural networks with fully-connected layers like VGG-16 require a fixed input size. Moreover, models perform better when the input size at inference matches the input size during training. As such, most visualisation methods resize the input to a fixed size. This impacts the resolution and aspect ratio, and may cause poor visualisation results.
22
+
23
+ SESS is a remedy for all of the limitations mentioned above. SESS extracts multiple equally sized (i.e., $224 \times 224$) patches from different regions of multiple scaled versions of an input image through resizing and sliding window operations. This step ensures that it is robust to scale variance and multiple occurrences.
24
+ Moreover, since each extracted patch is equal in size to the default input size of the model, SESS takes advantage of high-resolution inputs and respects the aspect ratio of the input image. Each extracted patch will contribute to the final saliency map, and the final saliency map is the fusion of the saliency maps extracted from patches. In the fusion step, SESS considers the confidence score of each patch, which serves to reduce noise and the impact of distractors while increasing SESS's discriminative power.
25
+
26
+ The increased performance of SESS is achieved countered a reduction in efficiency due to the use of multiple patches. Quantitative ablation studies show using more scales and denser sliding windows are beneficial, but increase computational costs. To reduce this cost, SESS uses a pre-filtering step that filters out background regions with low target class activation scores. Compared to saliency extraction, the inference step is efficient as it only requires a single forward pass and can exploit parallel computation and batch processing. As such, SESS obtains improved saliency masks with a small increase in run-time requirements. Ablation studies show that the proposed method outperforms its base saliency methods when using pre-filtering with a high pre-filter ratio. In a Pointing Game experiment all methods with SESS achieved significant improvements, despite of a pre-filter ratio of $99\%$ that excludes the majority of extracted patches from saliency generation.
27
+
28
+ We quantitatively and qualitatively evaluate SESS and conduct ablation studies regarding multiple scales, pre-filtering and fusion. All experimental results show that SESS is a useful and versatile extension to existing saliency methods.
29
+
30
+ To summarize, the main contributions of this work are as follows:
31
+
32
+ - We propose, SESS, a model and method agnostic black-box extension to existing saliency methods which is simple and efficient.
33
+ - We demonstrate that SESS increases the visual quality of saliency maps, and improves their performance on object recognition and localisation tasks.
34
+
35
+ # Method
36
+
37
+ In this section, we introduce SESS. A system diagram is shown in Figure , and the main steps are described in Algorithm . The implementation of SESS is simple and includes six steps: multi-scaling, sliding window, pre-filtering, saliency extraction, saliency fusion, and smoothing. The first four steps are applied to the input space, and the last two steps are applied at the output space. SESS is therefore a black-box extension and a model and method agnostic approach. Each of these steps will be discussed in detail in this section.
38
+ [t]
39
+ \centering
40
+ \includegraphics[width=\linewidth]{images/SESS_v3}
41
+ \caption{The SESS Process: SESS includes six major steps: multi-scaling, sliding window, pre-filtering, saliency extraction, saliency fusion and smoothing.}
42
+
43
+ \renewcommand{\algorithmicrequire}{Input:}
44
+ \renewcommand{\algorithmicensure}{Output:}
45
+ [!ht]
46
+ \caption{SESS}
47
+
48
+ [1]
49
+ \Require Image $I$, Model $f$, Target class $c$, Scale n, Window size $(w, h)$, Pre-filtering ratio $r$
50
+ \Ensure Saliency map $L^c_{sess}$
51
+ \State $M, P \gets []$
52
+ \For{$i \in [1, \dots, n]$}
53
+ \Comment{Scaling}
54
+ \State $M$.append(resize(I, 224 + 64 $\times$ $(i - 1)$))
55
+ \EndFor
56
+
57
+ \For{$m \in M$}
58
+ \Comment{Extracting patches}
59
+ \State $P$.append(sliding-window(m, w, h))
60
+ \EndFor
61
+
62
+ \State $B$ $\gets$ batchify($P$)
63
+ \State $S^c$ $\gets$ $f(B, c)$
64
+ \Comment{$S^c$ as activation scores of class $c$ }
65
+ \State $S_{fil.}^c, P_{fil.} \gets$ pre-filtering($S^c$, $P$, $r$)
66
+ \Comment{filter out patches whose class $c$ activation score is lower than top $(100 - r)\%$}
67
+ \State $A$ $\gets$ saliency\_extraction($P_{fil.}$, $f$, $c$)
68
+ \Comment{get saliency maps of patches after pre-filtering}
69
+ \State $L$ $\gets$ calibration($P_{fil.}$, $A$) \Comment{$L$ is a tensor with shape $n \times w \times h$ }
70
+ \State $L' \gets L \otimes S_{fil.}^c$ \Comment{Apply channel-wise weight}
71
+ \State $L^c_{sess} =$ weighted\_average($L'$) \Comment{Apply binary weights to obtain the average of the non-zero values}
72
+ \State \Return $L^c_{sess}$
73
+
74
+ \noindent{\bfseries{Multi-scaling:}} Generating multiple scaled versions of the input image $I$ is the first step of SESS. In this study, the number of scales, $n$, ranges from 1 to 12. The set of sizes of all scales is equal to $\{224 + 64*(i-1) | i \in \{1, 2, \dots,n\}\}$. The smallest size is equal to the default size of pre-trained models, and the largest size is approximately four times the smallest size. The smaller side of $I$ is resized to the given scale, while respecting the original aspect ratio. $M$ represents the set of all $I$s at different scales.
75
+ Benefits of multi-scaling include:
76
+
77
+ - Most saliency extraction methods are scale-variant. Thus saliency maps generated at different scales are inconsistent. By using multiple scales and combining the saliency results from these, scale-invariance is achievable.
78
+ - Small objects will be distinct and visible in saliency maps after scaling.
79
+
80
+ \newcommand{\ceil}[1]{\lceil {#1} \rceil}
81
+ \noindent{\bfseries{Sliding window:}} For efficiency, the sliding window step occurs after multi-scaling, which requires only $n$ resizing operations. A sliding window is applied to each image in $M$ to extract patches. The width $w$ and height $h$ of the sliding window are set to $224$, so patch sizes are equal to the default input size of pre-trained models in PyTorch\footnote{https://pytorch.org}. The sliding operation starts from the top-left corner of the given image and slides from top to bottom and left to right. By default, for efficiency, the step-size of the sliding window is set to 224; in other words, there is no overlap between neighbouring windows. However, patches at image boundaries are allowed to overlap with their neighbours to ensure that the entire image is sampled. The minimum number of generated patches is $\sum_{i=1}^{n}\ceil{0.25i +0.75}^2$. When $I$ has equal width and height and $n=1$, only one patch of size $224 \times 224$ will be extracted, and SESS will return the same result as its base saliency visualisation method. Thus, SESS can be viewed as a generalisation of existing saliency extraction methods.
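+
+ A minimal sketch of the multi-scaling and sliding-window steps (our own function names; boundary windows overlap as described above):
+
+ ```python
+ from PIL import Image
+
+ def extract_patches(img, n, win=224):
+     """Return (patch, scale_index, x, y) tuples for later calibration."""
+     patches = []
+     for i in range(1, n + 1):
+         scale = 224 + 64 * (i - 1)
+         w, h = img.size
+         ratio = scale / min(w, h)  # resize the shorter side, keep aspect ratio
+         m = img.resize((round(w * ratio), round(h * ratio)))
+         mw, mh = m.size
+         xs = list(range(0, mw - win, win)) + [mw - win]  # boundary overlap
+         ys = list(range(0, mh - win, win)) + [mh - win]
+         for x in xs:
+             for y in ys:
+                 patches.append((m.crop((x, y, x + win, y + win)), i, x, y))
+     return patches
+ ```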
82
+
83
+ \noindent{\bfseries{Pre-filtering:}} To increase the efficiency of SESS, a pre-filtering step is introduced. Generating saliency maps for each extracted patch is computationally expensive. Generally, only a few patches contain objects which belong to the target class, and these have comparatively large target-class activation scores. Calculating target-class activation scores requires only a forward pass, and can be sped up by exploiting batch operations. After sorting the patches based on activation scores, only patches with a score in the top $(100 - r)\%$ of patches are selected to generate saliency maps. Here, we denote by $r$ the pre-filter ratio. When $r=0$, no pre-filter is applied. As shown in Fig. , when $r$ increases, only the region which covers the target object remains, and the number of patches is greatly reduced. For instance, only four patches from an initial set of 303 patches are retained after applying a pre-filter with $r=99$, and these patches are exclusively focussed on the target object. Of course, a large pre-filter ratio, i.e., $r > 50$, will decrease the quality of the generated saliency maps, as shown in Fig. . Note we use the notation $S^c_{fil}$ to represent the class ``c" activation scores of the remaining patches after filtering.
85
+
86
+ [!th]
87
+ \centering
88
+ \includegraphics[width=\linewidth]{images/pre-filter/pre-filter-saliency.pdf}
89
+ \caption{Visualisation of regions and saliency maps after pre-filtering, when computing saliency maps for the target classes ``tiger cat" (top row) and ``bull mastiff" (bottom row). All patches that overlap with the red region are removed after pre-filtering.}
90
+
91
+ \noindent{\bfseries{Saliency extraction:}} The saliency maps for the patches retained after pre-filtering are extracted with a base saliency extraction method. Any saliency extraction method is suitable; however real-time saliency extraction methods including Grad-CAM, Guided-BP and Group-CAM are recommended for efficiency. Each extracted saliency map is normalised with Min-Max normalisation.
92
+
93
+ \noindent{\bfseries{Saliency fusion:}} Since each patch is extracted from a different position or a scaled version of $I$, a calibration step is applied before fusion. Each saliency map is overlaid on a zero mask image which has the same size as the scaled $I$ from which it was extracted. Then all masks are resized to the same size as $I$. Here, the notation $L$ represents the channel-wise concatenation of all masks. $L$ has $n$ channels of size $w \times h$. Before fusion, a channel-wise weight is applied. $S^c_{fil}$, the activation scores of patches after filtering, is used as the weight. The weighted $L'$ is then obtained using,
94
+
95
+ L' = L \otimes S^c_{fil}.
97
+
98
+ Finally, a weighted average that excludes zero values is applied at each spatial position for fusion. The modified weighted average is used instead of a uniform average to ignore the zero saliency values introduced during the calibration step. Thus, the saliency value at $(i, j)$ of the final saliency map becomes,
99
+
100
+ L_{sess}(i,j) = \frac{\sum_{k=1}^{n}L'(k,i, j)*\sigma(L'(k,i, j))}{\sum_{k=1}^{n}\sigma(L'(k,i, j))},
101
+
102
+ where $\sigma(x)=1$ if $x>\theta$, else $\sigma(x)=0$, and $\theta = 0$. A Min-Max normalisation is applied after fusion.
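+
+ A NumPy sketch of this weighted fusion (our own names; `L` holds the calibrated masks and `scores` the activation scores $S^c_{fil}$):
+
+ ```python
+ import numpy as np
+
+ def fuse(L, scores):
+     """L: (n, H, W) calibrated saliency masks; scores: (n,) activation scores."""
+     Lw = L * scores[:, None, None]       # channel-wise weighting
+     support = (Lw > 0).astype(Lw.dtype)  # sigma(x) with theta = 0
+     fused = (Lw * support).sum(0) / np.maximum(support.sum(0), 1)
+     rng = fused.max() - fused.min()
+     return (fused - fused.min()) / (rng + 1e-8)  # Min-Max normalisation
+ ```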
103
+
104
+ \noindent{\bfseries{Smoothing:}} Visual artefacts typically remain between patches after fusion, as shown in Fig. . Gaussian filtering is applied to eliminate these artefacts. This paper sets the kernel size to 11 and $\sigma$ = 5.
105
+
106
+ [!htb]
107
+ \centering
108
+ \subfloat[input]{\includegraphics[width=0.25\linewidth]{images/smooth/both.png}}\quad
109
+ \subfloat[before smoothing]{\includegraphics[width=0.25\linewidth]{images/smooth/1_false}}\quad
110
+ \subfloat[after smoothing]{\includegraphics[width=0.25\linewidth]{images/smooth/1_true}}
111
+ \caption{An example of the effect of the smoothing step. After the smoothing step, edge artefacts are removed and the generated saliency is more visually pleasing.}
2207.09090/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-02-16T13:55:36.499Z" agent="5.0 (X11)" etag="mUJyPy2gXo11UII99Csg" version="14.3.1" type="device"><diagram id="pEGLPcy5Alr7zN_z4LwX" name="Page-1">5Vpbj+o2EP41kbYPi+I4Nx5Z2J6qPSuthKp2n458iJe4NTHHMQv019chdu5hgQBBh33ZeGxPxvN9Y884GHC82HzhaBm+sABTwzKDjQEnhmUB6JjyXyLZphLftFPBnJMgFZm5YEr+w2qmlq5IgGMlS0WCMSrIsiycsSjCM1GSIc7ZujzsndGgJFiiOa4JpjNE69K/SCBCtQq9rET+GybzUL8ZmKpngfRgpSIOUcDWhSXDZwOOOWMifVpsxpgmziv75deW3swwjiNxyITf3eBRQDSd4yiYjn6Qmf+n9wgsZZzY6hXjQDpANRkXIZuzCNHnXPrE2SoKcKLWlK18zFfGllIIpPAfLMRWoYlWgklRKBZU9UqL+fZvNX/XeEsaA0c3J5ti52SrWqmtiYGtPtDOZis+w/sWrriE+ByLPeOsDClJccwWWNoj53FMkSAfZTuQ4to8G5fDIR8UIsegk+r9QHSl3lRDq4zFOiQCT5dot/S1jMiy35U6zAXe7PdgfcV6AlBTVDwDx0/b6zw6LFONCQuRMfQv5SSzVwqD3ihsHUhhu08KW7dHYXto3hiF4TFOAldyklt2ku9JblfdBPxh3U1weKlIBzW3XPWwyoP7rdh3+Ui3D4z0FkgPjvTd1BHnaFsYsGQkEnFB82siyJli6RxCMcWGTjkP+WQ8VBblzEgtyHmSLaUDdWAv1NkQkTLHclTzTXNFPufESRqX4k1XPtQBdP0SgJZbUZHyVM2qxPwZsLRv70iBZvlIgb7b85HiNjjJpfKtTwH5kI9zsVt5KnpnO3xzB7o/Vkx3PMY7eo/kAOAvN3mn1jLdxgIvtC5pbaqu/Aopbnjx2W15eJm8/nKkKRXuSBaIMkFiwdm/eMwo41ISsSjZHN4JpRURomQeyeZMsgZL+VPCKSJryZHqWJAg2O0sTYwsc/YcpPRNfWTrrdmqH+G2WWelLsnPzkq/lZXdYB/LQZxRinncCv7PCrPjVWEGfcM87A6z2wSz1kK04I8C2qQ6Kl591yJQGFcQG9b4ZJVWq8rBYLBTbJ6s+6VN971RG4KBV+H20KtXISCrTa5Cb9B0uFazyygYJdePiTMpimMya60qrAuVFafebRU86zTsGlrWMYvMboazMgCWVaTZbS2LrCmyK7mX5VUUtaSjx9Y5jt1s8GXrFm9/Htd1J+22H3/FiEdynzg59evX/IcpexcLJHvM1y9Z1vidf7rf3lceCRznsF0YXHUX1pvu2bOMOGXFg+GahjNOnMkpkcB4T7IpQiyQ4U2+CdndXmn8rFxwrAoR3IZLQejXaaDPzPPToOk64Bw1xefw3x360KteCoN+Cw3NxkuAH3+TkEsVsnjwJneJtlOOdejYPaPdfqnVGW1+nxENXO/GMG5PeDtjjO4TY2d4a3E8rDn7qh9jit9idM/Fv8XozevzqtxtRvM63+uh2RB/7XcbivEBisPs43TB50mA6eAwkq8kyd8Rfu3rcgLYlZCx7bKKwy8n4H5FJ38rk838t2Pp8PwXePD5fw==</diagram></mxfile>
2207.09090/main_diagram/main_diagram.pdf ADDED
Binary file (17.1 kB). View file
 
2207.09090/paper_text/intro_method.md ADDED
The diff for this file is too large to render. See raw diff
 
2209.10492/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-09-14T18:17:46.520Z" agent="5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36" etag="iJzu_lOic4fNE3SLqsq-" version="20.3.0" type="device"><diagram id="kUZxto-48bg4xqBv0s7G" name="Page-1">7V1Zd6o8F/4t30XX+56LupiHS1BxHqk43LwLBRVlksHp13+JghWhrW2lao+esyokIWD29GTvnfCEZ411wZHtac1SVP0JQ5T1E557wjCapMFfWLDZFxAEtS+YOJqyL0JeC0Rtq+4L0bDU1xTVDcr2RZ5l6Z5mRwtHlmmqIy9SJjuOtYo2G1u6Eimw5YkaeQxYII5kXY0162qKN92XMuRR66KqTabhnVEkqDHksHFQ4E5lxVodFeH5JzzrWJa3PzLWWVWHYxcdF+GN2sODOarpnXNBEVGGCj78byxNlsiCKlKShzyjCL7vZynrfvCTg8f1NuEYOJZvKirsBnnC+dVU81TRlkewdgWIDsqmnqGDMxQcji3TE2RD0yHBs5ahjUBnomy64KsmBg0COqNYcJ61dMvZ3Qzff0C56znW/DDou641XT9qmSfynEAeWoY1pmWCvvn48AQjtlQdT10fFQXDVVAtQ/WcDWgS1DIB5QLWZdGgh9UrI1Ahf0+PmAAP+S/gvcmh51fygIOAQp+iFnZtan1IFSEnCKlSBaewCFlIkoqRBafJBLKQWGp0Qa9Nly9L0ZiE/35OipLIhdHUj5IrRqymoz57jqyZgD5ggC3F14HROaUg+MFelEyJY3Y8wEGRrGsTE5yOwDCqoJyHw6cBK8MFFYamKPA2iXwR5ZwLkARFmUwoIe8oN5REM0lyhKREF4aNDbmqAGscnAZjGR0Ny/Gm1sQyZb1qWXZAlpnqeZtAOmTfs6JEU9ea14OXZ8jgrB90Bo9z6+OTTXDygbghu8+OIWQnbAjFTzWVo7M3SedavjNS3xmawMrA4XiXwI6qy562jIKXJEIFlzYtDTzKgTEwlIhwBY2x0S7Az5uoXnDVCbkPj/F1DqBjkik2n0VVdkbTJ4ySDSgN5tC1d2NJ6VAahw44msAjTp9YjuZNjXdUL3qG6o3qxxyZZ3JETNZBDbX7vKN7LyGpOB4lCIHFxTQ0iMdCSqUlpChG3ayUgnF2NkcXwdN+2B88eb1sd3Ys3QdrW1T1pQqV84nYI1cQe2bfbi917xKFvLSC+J4mj8Mh+Bd5InlADAx0iuSw4BsPvongm9x/ZzKZJzIXl3rknKLwZmJwMxFL7OvX2vdwnhBoDYYh4pgLI+JaA6XT0hr4B3j4xzXEZ/RDWuIdOjnOkO+bEu/wuY/kO4fGCAw60mz3LaY/IqTs2nvXzVhbQ4Y4NcGKrDLjUaIJHjHqcJyOqaXoOCLGEuBwWHb5USZu39CinxGjqNicCtU3xCh0DN6bGMUnorm43+DexIjGM9Gp5dUF6eCBvWFB+pQ9OoKlbJpyhZ0rV+xtyVXcS1pK0TypqEKqdJJcsRSNy9SF5OrVGRPKFU5e2UCRcZgfzMohh0aGm1r4Vljx7O64kwMNWHv9WhdO5Me+q1lm2BV4tH1v+8pfjNpD9B3Sl6QycQrjRJJPDk1tuh93yTxo/B3rGPO8vkHmRNdrqNkubyIT9OOtmchPYc2fMpFne2huzETGPTQ58u6hJ0vHTCR+ZRPJxsZZRP9Jb6DH4zE2ShxohRoCTZMWxifZhKjej2L8OOZ72KnvOEOoU1EKIcaxkWKSjBSGp0Vj4kHjS9KYJWNYBI07iolkwJmaIN9uEPg74aUjLJKWNzkMzH2IRbCLR5O/R3EqjkXi4YI7wyIYwUQli2SubCATYuvY3SOR2DCjV8YheDw0chEbNbIM21Hdv9JQARgfJTIRdyiTeIKNSiuaiceTZsXmc5D7/FGayr/5tefII+9ZNpVn3td05U+MdN9JWHkjoe+HElbw0Nl/SFiJw8afTVhh7yCM9nVAcZVsNJw4E2lQt5WWgidMIODfS+Sl/FblixPR/F2GSbCwibkkaU0D8The/KyJRTFgY3fK+cTMHthhPyf890DnXfeQR4LzwORCjkhuiiU1PWlDRtv8+bPjLHh7QDLZnjqyq572i59ck8iM9wQQjpUZcSGeZaPTWhqJuwHxsNsILEzLCIUw9aaN0HdMUGqmhj3T1BC3NanF447f+0+RImgsCsTxK8+2iISYxa8Sq7TiVsS5mYcEcVNiRSRkHmJ3L1bsSeA/FLOridUdpPh/NRqcrljhdypWcYfG/YeDydA63YxY3UFo4zbF6lx/w42BQCLub7j/REQSPc2xuHIaIpEQ2bj/HAuCjWJtkmWvO8xkSpGNvzT6TjDRpbYUFXeUYwmbS6SW6Ukm7WHwoO+X1SQRld+kADCe5DZNyxiF2yk86HsZ+aWigS4KPc8MhhmBl6dv0k4+D/p+WX6xk6AzljB7YOP0Tc38JmSS3n/+BsmcStGV14pRKVnB14DO3ydJLBlLy47bwhB+/MhyByo+MftEAkcZbn/ym5I2SCqKRW8gaSMhAfBGXCXXTr4Icyo+Tr4I3F834gwJn/tI5mqyN5qK/nc37DklyNnOxlC0LgEH0ShcYIh4JjXNxCUotc20QvfBDUrQvaY9UecmWIf5gbcieQkJM+8kPJF/d35TCDo+zG9i0pLcO1iLeWeSS58b7qZuKy5HJ/pck/e6eiQwvhUGPLHMZwt4Wp4a+g7C698U8LQigfS54Dd0l9yKIMfB7/2vcYo5b8grr7Zmf2ajn5StWdqbvJJENC/iMNsPu9hLYWyT1487OiVsyrvFMh/uPzM6UOXVHxfSJ+aia7TvyzV3BhsKwoUkHWVOXHh0gqyH3qCfmd8m+MNf9wq+V4IhFwI9FHq6tJtMiD+iSQHI1Ch2eDVGotf1b6cYTuA3R68P934aywFaC3Xp6+bJvyKO+N3XMFyUQU72Zk98HclPamAUSdjm+ev8Ea5wfnDJt/zQOHNrXBI31N/gknuNo94Sj5AUdWM8krChdGpLOj+xsvP9JZ1/PrGm87FA88RzEGFAKmH3sJ9dn4liWAIHnpDDnco2PHQ1c6KrHHy7HBgORXPUkQeNFqixfPj8n3q709feN/HRu9HQHJmPp4IfPCCeHDwx9JmkEkehCCZDJL2ZK05VMFmhUtMsH1qfjzUL+oZmcX1bdZbanQOWt/jvPSN0xEAXQ7dslH3oeAoKnZBuh1IZJq2NBlD8Z/yJX81CuclX1hxeRXPGbvyBFbgRH/3hyY90xYtlP1diTHDR9BRSZRQipggunJ6CnniDGCLuvvvR9BT0l+7zd6Myee42xbe1fu7w3N/KXHm8COoo9n39F0F9P8F6N9WLobG963/3Ak/RNwwZhpa4kPr/hOT/51dMx76lZS5iTqKRXzIBraHhcuufsSd40gT
uEnzVdFRFG3kPxvoJxiIR4mPGohJ3JE6Rta6ZjB7WfGMacI2Mu9CofLwZ8dUy7rhGYcJ2tnbVc1jpPy6rNHE54d3QMdqHbiDNkOEjH0SuKg9VvWm5WjAzHlqeZxmggQ4reHk0n+x4JCSDoo5lf6cfToXWgxwTT/LZ3ZALS5GwBHYle4Do3P4UE2xzAtSSJvGN9gqpFCYWBz51sTPNdybgKNuC5zrP1cBXrj/S+w1YUBP4mpTvwfHb/We4b3yWuMzT8CCHt8UXvcaVTHmwhQ+n+JRfrk15Q8q3sKLq47BVtseXuj34PHQe/GmsuYJUWRFDcMy/zPV8S2pZQq/bbiJMttXOCfgqr23EeT+7rtFZ3Zv1ARfxLN/rdwVdleW8MJNblbLe7mCS7XD1FT8sbGau+9LpMBNkTmSLoDl0wfMDfIiw1jIvqXBCItRol66tVAs1RzPNIpYY1q9OgbLnqTphL3Oc65h1iVCF/tatsO5gS+cnHD4ZzpcTqTbbDDnUYjqLmcFUqVZTKo2FhiKXakyrJOiu1MUGrW5Wq7PEQFfcgdBfczg1VRfldkdX2qKSxQwruwT3mhZ1UubbyHDtIbIiVMiyLymlrNSSQGW3VGvYHVsyjbrTKc+hfuQbqtISmsXeApgvoZUlsTL2IkxZz21IFa6x8EGTyrrZMlpOTnVoWuvboJ2OekjZmvO86RFLyq77LKU36oQ06jMvEvXSKoGr6mXTUMk5D+61IYpjPSfbTpkTmhWPbQ/B0wjijBkUXJOoF0XG7DfL+WIbXGYWfNNhDELEVkh3ZI1duzLagtbLIbLtYtuGLVA9ZN0EJT7PFidlwHP8S607LO4YkwcDQuadeXkymUCJg//5NwxJgnJ6BwyfrHJCkEzCljBoAmxBsUwYer+4xnkbt7i2bCbiliCtCYIQZzL8F9ljkPDrz37WswM348A2wKbH5iGsD/qG1ablGLJ+VLeUHU0G30ApyZ7vqO4H7Uay/VaTVTCWsJIIsAKiA2uoOs/gV440oLBiV1qOPYUBsl0Fti+DSOk5AD2w+IB7wjoN2GAzuBMS+G33NZ4DOhuD/sM77az2nvbWKnqbleUo0Qc79AV+y3Cuge5gn3uY9hxwT6Td8KDvn0/ItduzHFIKgxsC7Q7IkGi7XhV1ZDk7/+qzN9VGc1N1g8fTTGBcwvE5bXtEy3fbHT1OpN1Yt2TvdHAUzbV1eRM21zVQgSH/0wwbIBzZDIDyacjtYrE78njj80tuhXoo3gvZ+8D8Ex6+s2K9l1Bmp0s2kXisBU2a2YeA5uJ6LCkZ84GcHsjpgZzuHjnhDH2ibOhD1O2KyCkpPfHXaZw+PNcCjcNPHLmavsapGUDjKPhyQej5kuNhEp+tVhuQSB9pnA4SaJwOJi4ROlfjc4XqumvZ6nyM8uPSdlLKbctIq0/qQAOVxL6WH5YQqZSr5rjCulVmrVK/oHJ1xOpKSmOTFTWh3BKhGqGN9Xg1wc0tEAd+PO6pmDZqywvU5RAL2dbW9V6zK03o3rKNGFZTbo/Lqjbrr7ozrDUToBz327npjBGpgtJr80K3nWO2zYa8Zhjfn4ljbN0sEiVtMZx2RrXcuJ4d5NvsrDjrTjsuwVVYjNU6XL7GdrnuiM4rEleZ1IclRjSkNmlIcw2rb7yqp20G6zJHYTVbnAwlDqrWATrvahOnMi/mV50Fx/oyXSKl9oKoDxzTNbqD2nw2xAfaumZmuUqT2iJMxXCdaa9TE3qGgS01rL3B9HEP1UumQSIuPaq466qNLagcKpo2VMfKFs0zaM12aEy1abyPvPBjjCbLbdcyaK1YHE5F0GxY9dm1LRQYeVnt0xufkEmuLregMikVNsyq5BngmM8vuwV+UQOgSBgsFajAgWISSA5BuYVtoxWLXey4Jq8LL3PRbxnZ7GW0DEtm0KhbmWHoTDxggeIZKilkgWXSgjZJOSS/TtHsiDrfKxpOXou9dvqKpl0y67jKaPmquxnobWxBmnZXmUkfqZlyqGakbttH7FyrnS2Ah8NrnRytW1nD4xmulxfXiNMlfI5uEyVWK2GSX3qpCHKXay2rgK+5oYwpVrnuKhWRFOblvrNW8AGO+ttB3xwbBjBwQt9v5daDgkcri0ataPLzSmG8ecErjfoapzpzM2dyRcHrTDp4EyX6yGTYqg2Kw81aAEpEnKCD2kJ6GRAO0RT4LQnHz0Q65e6iUqo06+a8XS5a+GaAyN1s1qz3sxW8grUbqqlVxZ7gD3I9PSfW5xW14/HIwK617bZu9wihpKBorQx+gd9ectlxxademr5S7o46WxXMQ0xtY20Yna5ZolyRBxtt1WIkTkDwTQfCoK3L97d5SiOqKgcgkDAh500z2/BaEtdsdnWxMNOqDQhzOHo+2votrp7r4R4FBlczJHq28KXWwqYVfbCsDxTrBSuNXpTxENDR3F01hVeObM6yqJrThJyVGizBSDRDsDGNkTARQjNIMjZJC5qgf4Uf+YFNHtjk78AmOEVnTrYTvxVwEs66HqrmoWoequb+VQ1JYTFVg5Ipqxpw6ljQ9X6oK8B1Vvu1uXj+/w==</diagram></mxfile>
2209.10492/main_diagram/main_diagram.pdf ADDED
Binary file (33.8 kB). View file
 
2209.10492/paper_text/intro_method.md ADDED
@@ -0,0 +1,61 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Recent progress in pre-trained language models has led to state-of-the-art abstractive summarization models that are capable of generating highly fluent and concise summaries [@lewis2020bart; @zhang2020pegasus; @raffel2020exploring]. Abstractive summarization models do not suffer from the restrictive nature of extractive summarization systems that only copy parts of the source document. However, their ability to generate non-factual content [@cao2018faithful; @maynez2020faithfulness] and their lack of clear interpretability makes it harder to debug their errors and deploy them in real-world scenarios. Towards interpretable summarization models, @jing1999decomposition [@jing2000cut] show that human summaries typically follow a cut-and-paste process. This further motivated them to propose a modular architecture involving separate operations that perform sentence extraction, sentence reduction, sentence fusion, etc. Most recent efforts on explainable abstractive summarization follow an extractive-abstractive framework that only provides supporting evidence or 'rationales' for the summary [@hsu2018unified; @gehrmann2018bottom; @liu2019text; @zhao2020seal; @li2021ease]. These models are good at highlighting words or sentences from the source document but are not able to explicitly capture the generative process of a summary, i.e., the reasoning steps performed in order to generate each summary sentence from the source document sentence(s), like sentence compression, fusion, etc. In this work, we seek to bridge this gap by proposing a novel *Summarization Program* framework for explaining abstractive summarization, which views summarization as a systematic reasoning process over source document sentences.
4
+
5
+ <figure id="fig:main_fig" data-latex-placement="t">
6
+ <embed src="figures/main_fig.pdf" style="width:95.0%" />
7
+ <figcaption>Example of a Summarization Program showing the generative process of the two summary sentences (marked with labels S1’ and S2’ in yellow) from the three document sentences (marked with labels D1, D2, and D3 in blue) using compression, paraphrase and fusion neural modules. The Summarization Program consists of two trees, one for each summary sentence. The summary sentences are the tree roots while the document sentences are the leaves and the edges are directed from the document sentences towards the summary sentences. The intermediate generations are labeled with I1 and I2.</figcaption>
8
+ </figure>
9
+
10
+ A Summarization Program (SP), described formally in [3](#sec:summ_trees){reference-type="ref+label" reference="sec:summ_trees"}, is a modular executable program that consists of an (ordered) list of binary trees, each encoding the generative process of an abstractive summary sentence from the source document. Fig. [1](#fig:main_fig){reference-type="ref" reference="fig:main_fig"} shows an example. The leaves in a Summarization Program are the source document sentences (typically, only a small subset that are relevant for generating the summary). Each intermediate node in an SP represents a generation from a neural module (shown with labeled edges) which are then composed to derive the final summary sentences at the roots of the trees. We consider three neural modules for building SPs -- sentence compression, paraphrasing, and fusion [@jing1999decomposition; @jing2000cut]. The modules are trained via finetuning a pre-trained language model on task-specific data. We evaluate the promise of Summarization Programs by asking the following two research questions (see Fig. [2](#fig:pipeline_fig){reference-type="ref" reference="fig:pipeline_fig"} for an overview).
11
+
12
+ 1. **RQ1.** *Given a human-written abstractive summary, can we develop an algorithm for identifying a Summarization Program that effectively represents the generative process of the summary?*
13
+
14
+ 2. **RQ2.** *Using the SPs identified in RQ1 as supervision, can we develop a model that generates Summarization Programs as interpretable intermediate representations for generating summaries?*
15
+
16
+ We answer the first research question by automatically identifying SPs for human-written summaries ([4](#sec:search_algo){reference-type="ref+label" reference="sec:search_algo"}). Specifically, we develop an efficient best-first search algorithm, [SP-Search]{.smallcaps} that iteratively applies different neural modules to a given set of extracted document sentences in order to generate newer sentences, such that the ROUGE [@lin-2004-rouge] score with respect to the gold summary is maximized by these new sentences. [SP-Search]{.smallcaps} achieves efficiency through important design choices including maintaining a priority queue that scores, ranks, and prunes intermediate generations ([4.2](#sec:design_choices){reference-type="ref+label" reference="sec:design_choices"}). [SP-Search]{.smallcaps} outputs Summarization Programs that effectively reproduce human summaries, significantly outperforming several baselines and obtaining a high ROUGE-2 of 40 on the CNN/DailyMail dataset [@hermann2015teaching] ([6.1](#sec:eval_rq1){reference-type="ref+label" reference="sec:eval_rq1"}). Moreover, human evaluation of the Summarization Programs shows that our neural modules are highly faithful,[^2] performing operations they are supposed to and also generating outputs that are mostly factual to their inputs ([6.2](#sec:module_faithfulness_eval){reference-type="ref+label" reference="sec:module_faithfulness_eval"}).
17
+
18
+ We leverage [SP-Search]{.smallcaps} to identify oracle programs for human-written summaries that also serve as automatic training data for answering our second research question. In particular, we propose two seq2seq models for Summarization Program generation, given a source document as input ([5](#sec:summ_tree_model){reference-type="ref+label" reference="sec:summ_tree_model"}, Fig. [2](#fig:pipeline_fig){reference-type="ref" reference="fig:pipeline_fig"}). In our first Extract-and-Build SP generation model, a state-of-the-art extractive summarization model first selects a set of document sentences, which are then passed to another program-generating model. In our second Joint SP generation model, sentence extraction and Summarization Program generation happen as part of a single model. So that a seq2seq model can generate Summarization Programs, we represent them as linear bracketed strings composed of sentence identifiers and neural modules (See Fig [2](#fig:pipeline_fig){reference-type="ref" reference="fig:pipeline_fig"}). Next, we finetune a pre-trained language model, BART [@lewis2020bart] to generate SPs which are then executed to obtain the final summaries.
19
+
20
+ As a way to evaluate whether Summarization Programs improve the interpretability of abstractive summarization, we conduct a small-scale *simulation study* [@doshi2017towards; @hase-bansal-2020-evaluating; @zhou-etal-2022-exsum] where we ask humans to simulate model's *reasoning* by writing Summarization Programs for model-generated summaries ([6.4](#sec:simulatability){reference-type="ref+label" reference="sec:simulatability"}). We observe that after seeing model-generated SPs, humans are able to better predict model SPs for unseen samples, such that the executed summaries match more closely with the model summaries. Overall, our contributions are:
21
+
22
+ - We introduce *Summarization Program*, an interpretable modular framework for explainable abstractive summarization. Summarization Programs consist of an ordered list of binary trees, each encoding the step-by-step generative process of an abstractive summary sentence through the use of different neural modules (sentence compression, paraphrasing, and fusion).
23
+
24
+ - We propose an efficient best-first search method, [SP-Search]{.smallcaps} that identifies Summarization Programs for human-written summaries, obtaining a high ROUGE-2 of 40 on the CNN/DailyMail dataset with neural modules that are highly faithful to their intended behavior.
25
+
26
+ - We present initial Summarization Program generation models that generate Summarization Programs from a source document, which are then executed to obtain final summaries. We demonstrate that SPs improve the interpretability of summarization models by allowing humans to better *simulate* model behavior.
27
+
28
+ # Method
29
+
30
+ At a high level, [SP-Search]{.smallcaps} operates as follows (see Algorithm [\[algo\]](#algo){reference-type="ref" reference="algo"}). For each summary sentence, it initializes a priority queue, each element $(s_1, s_2, m, h)$ of which represents a module $m$, the sentences $s_1$ and $s_2$ on which $m$ is defined ($s_2$ is empty for 'compression' and 'paraphrase' modules) and the corresponding tree height $h$. The initial elements in the queue represent possible operations only on the document sentences. Next, at each step of the search, [SP-Search]{.smallcaps} pops an element from the queue and executes the operation over the corresponding operands to generate a new sentence. Each module generates top-5 outputs using beam search and the one with the highest R-L score with the corresponding summary sentence is chosen. Using this new sentence, [SP-Search]{.smallcaps} creates new elements for the priority queue by considering all potential operations involving the new sentence and other available sentences. Then, these new elements may be explored in the next steps of the search.
31
+
32
+ As a best-first search method, [SP-Search]{.smallcaps} ranks the elements in the queue according to the following scoring function:
+ $$\begin{equation}
+ \textrm{Score}(s_1, s_2, m, h) = \max\big(\textrm{R-L}(s_1, S), \ \textrm{R-L}(s_2, S)\big)
+ \label{eqn:score}
+ \end{equation}$$
38
+
39
+ The scoring function considers the maximum of the R-L scores of the operands $s_1$ and $s_2$ such that, if the module $m$ is executed, it can potentially lead to a sentence with an even higher R-L score than its children. Since the objective of [SP-Search]{.smallcaps} is to maximize the R-L score with respect to the summary sentence, this scoring function prioritizes elements in the queue that can potentially lead to higher scores. Whenever a new sentence is generated, its R-L score with the gold summary sentence is computed; accordingly, the maximum score and the corresponding best possible root node of the tree are updated. Upon completion of the search (when the queue becomes empty), the node with the maximum score is chosen as the root node and the entire tree is constructed by backtracking from it. Since searching exhaustively for the best Summarization Program is prohibitive, [SP-Search]{.smallcaps} achieves efficiency through the important design choices discussed below.[^7]
40
+
41
+ **Top-k Document Sentences.** The search space grows exponentially at each depth (because of the fusion operation), but one way to limit the growth is to ensure that [SP-Search]{.smallcaps} starts with a small number of document sentences. A summary is typically constructed by using information from only a small number of sentences from the document. Hence, we rank each document sentence by the fraction of unigram overlap with the summary and build the SP with only the top-k document sentences ('k' being a small number) as eligible source sentences.
42
+
43
+ **Filtering Queue Elements.** The search space is also dependent on how many elements are added to the queue at each step and how many of those are expanded further. Note that whenever a new output is generated via a module, it can potentially fuse with all previous generations to create new queue elements. Since doing this exhaustively increases the search space, [SP-Search]{.smallcaps} defines certain heuristics for choosing the elements that will be added to the queue: (1) a sentence that has been compressed once is not compressed again, (2) a sentence that has been paraphrased once is not paraphrased again, (3) each document sentence is used at most once in a tree, (4) two sentences are not fused if they are intermediate generations from the same sentence, and (5) since fusion is not a symmetric operation and can lead to different generations based on the order, sentence fusion happens keeping the temporal order of the sentences in the document intact.
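+
+ These heuristics can be summarized as a single admissibility predicate. The sketch below is illustrative only and assumes each candidate sentence carries hypothetical provenance fields: `compressed`, `paraphrased`, `sources` (the set of document-sentence ids in its subtree), and `position` (the index of its earliest source sentence in the document).
+
+ ```python
+ def admissible(op, s1, s2=None):
+     if op == "compression" and s1.compressed:
+         return False  # (1) a sentence is compressed at most once
+     if op == "paraphrase" and s1.paraphrased:
+         return False  # (2) a sentence is paraphrased at most once
+     if op == "fusion":
+         if s1.sources & s2.sources:
+             return False  # (3)+(4) no document sentence reused within a tree,
+                           # and no fusing two generations of the same sentence
+         if s1.position > s2.position:
+             return False  # (5) fuse in document order (fusion is asymmetric)
+     return True
+ ```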
44
+
45
+ **Priority Queue and Pruning.** [SP-Search]{.smallcaps} performs an additional step of pruning for the elements that are added to the queue. It maintains a fixed queue size and while expanding the elements in the queue in a best-first manner, it maintains a ranked list of the elements according to Eq. [\[eqn:score\]](#eqn:score){reference-type="ref" reference="eqn:score"} such that only the top-ranked elements are kept in the queue and the rest are pruned.
46
+
47
+ **Parent has a higher ROUGE than children.** Each module generates multiple outputs through beam search and [SP-Search]{.smallcaps} ensures that the best output has a higher R-L score than the nodes on which the module is defined. If this is not the case, the corresponding branch of the search is not expanded further. This constraint generalizes to the property that every node in a Summarization Program will have a higher R-L score (with respect to the summary sentence) than all other nodes in the subtree rooted at that node. Conceptually, this greedy approach ensures that every reasoning step in a Summarization Program is a step closer to the summary (according to a scoring function).
48
+
49
+ **Maximum Tree Height.** [SP-Search]{.smallcaps} chooses a maximum height of the trees, beyond which the nodes are not expanded further during the search.
50
+
51
+ **Batching Module Executions.** Instead of executing each module separately, which requires a forward pass over a neural model and can be time-consuming, [SP-Search]{.smallcaps} executes all operations in the queue together by batching at each depth of the search.
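+
+ A rough sketch of this batching step, with a hypothetical `generate_batch` wrapper around each neural module's batched beam search and a hypothetical `format_input` helper:
+
+ ```python
+ from collections import defaultdict
+
+ def execute_depth(pending_ops, modules):
+     # Group the pending operations by module so that each module runs a
+     # single batched forward pass per search depth.
+     by_module = defaultdict(list)
+     for i, (s1, s2, name) in enumerate(pending_ops):
+         by_module[name].append((i, s1, s2))
+     outputs = {}
+     for name, ops in by_module.items():
+         inputs = [format_input(s1, s2) for _, s1, s2 in ops]
+         beams = modules[name].generate_batch(inputs, num_beams=5)
+         for (i, _, _), candidates in zip(ops, beams):
+             outputs[i] = candidates  # 5 candidate outputs per operation
+     return outputs
+ ```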
52
+
53
+ Given that we have proposed [SP-Search]{.smallcaps} for identifying SPs for human summaries, we now leverage the algorithm to generate training data for building SP generation models, as part of RQ2. In particular, we use [SP-Search]{.smallcaps} to identify Summarization Programs for all training samples in the CNN/DailyMail dataset [@hermann2015teaching].[^8] Our second research question asks whether these identified SPs can be used as supervision for developing abstractive summarization models via the generation of Summarization Programs (see the right part of Fig. [2](#fig:pipeline_fig){reference-type="ref" reference="fig:pipeline_fig"}). For generating a Summarization Program, we define a supervised learning problem $f : \mathcal{D} \rightarrow \mathcal{P}$ that takes a document $\mathcal{D}$ as input and generates a Summarization Program $\mathcal{P}$. The challenges involved in developing such a model are (1) extracting a subset of relevant document sentences, and (2) learning meaningful compositions of those sentences through the appropriate modules.
54
+
55
+ Given the above setup, we propose two initial models for our task. The SPs are encoded as linear strings such that they can be generated by a sequence-to-sequence generative model. We first label each sentence in the document with unique identifiers like '$<$D1$>$', '$<$D2$>$', '$<$D3$>$', etc. The Summarization Program is then represented using a nested bracketing structure composed of the modules and the sentence identifiers as shown in Fig. [2](#fig:pipeline_fig){reference-type="ref" reference="fig:pipeline_fig"}.
56
+
57
+ We develop this model based on the hypothesis that a document can contain a large number of sentences but only a small fraction of those are typically useful for generating a summary. We build a training corpus consisting of samples of the form ($\mathcal{D}_k$, $\mathcal{P}$) where $\mathcal{D}_k$ represents the set of $k$ document sentences on which the Summarization Program $\mathcal{P}$ is built. We finetune a pre-trained language model, BART [@lewis2020bart] on this corpus such that it can generate a Summarization Program given a small number of document sentences. During inference, given a document, we first extract a set of relevant sentences using a state-of-the-art extractive summarization model, MatchSum [@zhong2020extractive], which are then used to generate the program. While this model simplifies the learning problem by separating out the sentence extraction phase from the program building phase, it is worth noting that relevant sentence extraction is an essential first *blackbox* step towards generating good Summarization Programs.
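+
+ As an illustration, inference in the Extract-and-Build pipeline might look like the following Hugging Face sketch. The checkpoint path and the `extract_top_sentences` wrapper around MatchSum are assumptions, and the identifier tokens `<D1>`, `<D2>`, ... are assumed to have been added to the tokenizer vocabulary during finetuning.
+
+ ```python
+ from transformers import BartForConditionalGeneration, BartTokenizer
+
+ tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
+ model = BartForConditionalGeneration.from_pretrained("path/to/sp-generator")  # hypothetical
+
+ def generate_sp_candidates(document_sents, k=5):
+     sents = extract_top_sentences(document_sents)  # blackbox extraction (MatchSum)
+     src = " ".join(f"<D{i + 1}> {s}" for i, s in enumerate(sents))
+     inputs = tokenizer(src, return_tensors="pt", truncation=True)
+     beams = model.generate(**inputs, num_beams=k, num_return_sequences=k,
+                            max_length=128)
+     return [tokenizer.decode(b, skip_special_tokens=True) for b in beams]
+ ```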
58
+
59
+ Our next model aims to solve sentence extraction and SP generation within a single model. We finetune BART on the parallel corpus of documents and Summarization Programs, and the model learns to directly generate the program given an entire document as input. This approach makes the sentence extraction step implicit in the program generation, and as in the Extract-and-Build model, sentence extraction can still be considered a *blackbox* operation.
60
+
61
+ Once an SP is generated, we parse and execute it to obtain the final summary. Program execution only requires iterative inference over the corresponding neural modules. We check the well-formedness of the Summarization Program by ensuring that the generated sequence (1) has balanced parentheses, (2) does not contain any out-of-vocabulary token (the vocabulary being defined by the module names, sentence identifiers, and parentheses), and (3) has a consistent number of operands for each operation. During inference, we decode up to top-k SPs using beam search and execute the first well-formed one. In cases when none of the top-k outputs is well-formed,[^9] we generate the corresponding extractive summary.
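+
+ A sketch of the well-formedness check, assuming a whitespace-tokenizable linearization such as `fusion ( paraphrase ( <D1> ) <D3> )`; the exact surface format is an assumption.
+
+ ```python
+ import re
+
+ ARITY = {"fusion": 2, "compression": 1, "paraphrase": 1}
+
+ def is_well_formed(sp):
+     tokens = sp.replace("(", " ( ").replace(")", " ) ").split()
+     if any(t not in ARITY and t not in "()" and not re.fullmatch(r"<D\d+>", t)
+            for t in tokens):
+         return False  # (2) out-of-vocabulary token
+     pos = 0
+     def parse():  # one subtree: either a leaf or module ( operand ... )
+         nonlocal pos
+         if pos < len(tokens) and re.fullmatch(r"<D\d+>", tokens[pos]):
+             pos += 1
+             return True  # leaf: document sentence identifier
+         if pos < len(tokens) and tokens[pos] in ARITY:
+             op = tokens[pos]; pos += 1
+             if pos >= len(tokens) or tokens[pos] != "(":
+                 return False  # (1) unbalanced parentheses
+             pos += 1
+             for _ in range(ARITY[op]):  # (3) operand count must match
+                 if not parse():
+                     return False
+             if pos >= len(tokens) or tokens[pos] != ")":
+                 return False
+             pos += 1
+             return True
+         return False
+     return parse() and pos == len(tokens)
+ ```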
2210.06041/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2022-05-17T20:55:13.442Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36" etag="mcTbtuL8vOv6IgPct4q1" version="18.0.6" type="google"><diagram id="TBEmJzI77ZtA-Zt-R4Br" name="Page-1">7V1pc5u6Gv41nmk/JMO+fMza9k5y2mk795zeL3ewwQkn2PhgsvXXH2EWg3jFZiTkROkSW4AQ0qN30/uImXqxevkUOZv729D1gpkiuS8z9XKmKLKs6OhXUvKaluiGnRbcRb6bnbQv+OH/9rJCKSt99F1vWzkxDsMg9jfVwkW4XnuLuFLmRFH4XD1tGQbVu26cu+yO0r7gx8IJvNppf/pufJ+VGrq2P/DZ8+/us1srqmqkR1ZOcXZasL133PC5dDP1aqZeRGEYp59WLxdekPRe3jFpi64JR4uWRd467nLB5fx6/rT64/of9+LJenLN65ub/53IRvYgT07wmD1z1tz4Ne+EKHxcu15SjTxTz5/v/dj7sXEWydFnNOyo7D5eBdnhZbiOs3FEg6ie15uZtfzJi2LvpVSUNfuTF668OHpFp+RHpRwOGYpkO4PVc2lMtOyc+9JwqGZW6GQ4uCsq3/cU+pB1Vq+OU9o7znMRmLKvYRTfh3fh2gmu9qXn+65N+mp/zk0YbrIO/duL49esR53HOBzY3UlbGjs78gIn9p+qwIf6Lbv0W+ijW5QGyVJOVaUyToqlV6vZho/RwsuuLGMVrMyqVqZilcVOdOfFtcrOosh5LZ22SU7YtjZcKn6wZzD1Dk3tejn6kLZvj7xiCA4Ao24KMNYHxS79VAZF1Q/BpWZUK5Mo4lKr3krrgMXGSyjhTxP4AwaiBMCqKNPsQ/CnV+vSaMKvCibN6oK/xmsOBqCsPPx/tXh4coy//+P5n65/3v7lnxTKn2czRpOqVoyiSzUrRgGtGHkEKwbuN9Xmv99khbt+M4S4K897BUO2pmEd31W+1SpSsIpGEm61+6hSv3ZVz6cj1CAHwwjQyJ5v0Ie7eDfiaUGChMS9TWclKv3nMXErz3/6K+Q9K9If3jP6/3u4ctb7gxW85oVJTSfbHajO0AmytnkpX5HfVkv+6hfz5cw8387My6wkaw563LRF1Vai4lLLD5Uy187KD5Lx+OwFT17sL5x8SmSVQlMkkTrozOAs8O/WqHDlu+5uMjpZwQLNHS8aSXBZuKquCy4VkFs46EcTW+qRIMoRiIIRlXuT3CAKih6lQzPvBAYJgWE3nDU8VGDYraJaLRfh+gkdvFovQhcNQSuW5iywVMJODSJEMC39ILgIgzDaNUddWgtvsUDl2zgKH7zSkbmla/pYhpeBaT2pjjaLJdo6GKvv2uySRjK7VJ2N2ZXfp2+76PqS+Typy7TtJtF2gDRapBMwkUTR3dz5gJqJbi5hvz6mKnInupaZfEguWYXrcJtKk+I4ICOlubN4uNvB96R6ww/Iy07voWh29kHPbofLxOuSvEufhyDyEvTeOHMvqKK/u8yKPPQQznxXXzJPMnSgyvXzmX7ZbSY1iQFckBXLQNlNZ+WFFkjAnUinsixXw3knWVUHR4CqtarVCsLlcuvFGJhHCoU0GXmTq+QfsRN7iU5ezT3X9dd37Cy8qgp1Hc9agirUWFgeMkOPxSaU9apNqKp1LW2z1NKyLtR0RRTY9qllwWM0QFOfykq1LjzKNVYAGGi23Kyv2y+hpLKheByPjm0kHFuCY6tipuzUjq0MrWhNpkahis4WcdjBp6WjPZd68gfUnrsfNsBLrizdOv0ZSauaVUAWKrQESIMlIBXILcG16to9S5Kekj4KnO3WXzSrRNQ90etfmYrdffmVfDnV86+XL+WDl6/ZN2L/pqqn6Sm0jjq31M060M152aEetIbZToMXLjC41CoiKOXR1gqgxYKe6NijYQ+AX+VjBDTUUPXix6Vq0Ldf+XXo876S5EsrotIBaHhy9TiRVwDk0CUzvKKaaUkbeY2LCjwoymsniGNvLVQlLVVpYQg066oSWp6npyq5CoFAFV2E64UTC0RSQqRitIdEINlOb+GVHFPmBZGRj8ZNIJIaIjlzJ1RIbQ82GPu5D4WBWDYPC2MRNhA7R/NaDcY8QN5qMap8WYyyap+qEjGlGUdKZ8/FxgxRPDd1rHAibqdqLbFE7HxdZpBHqpItB9d/AgUsEgDxSSZKEgkbeMvkKWbEpTyiDJ9HgLzd3fY4khLmzsJyVUiIo8HWdJeOtalZdUkqQ6HCMahAMGi6rHf0jMwQO4oTYYT81VOrzK/AEtyNodKopV6LrXOrke22w5ZB+2dJ1VdSz5J1/ptwu4UzmAiWHBMZQpYAJJnRXeqMYY3JOIuiML7K9pjO0B4zu6SUMwrvsg3g5SstrfaYxpkIxOkcg2WejlVEaUG31mCTwdKs2YUZNhTXMs+4VruuiXCGazygPBjXKu7zynQ8C9yDYYJrWXqPVp9ln5p6yTpTq9bZYAnYUi9jq0/OM2xEBtN+fEbLYLKZZTBBzW7JYGq/hFbScZd43DuCnGbapw1+4NDV0pZqKdENNTxm0UI3bDmfFgQbl804yqIThEOCX6sZnNHDZDAXmEdMCcohAVO6zh2myNm+B4TTxonr3d58mwnC4WD5hSdFTU04lME0YGGWkeynoQTE5mpp0RFxM6uNjthyPi2zzCKKO8FH5IKPuJcSgpEIdg/E2xaUxLdLSdRU3iiJec1Cj+8VrmXtN62U4fHqrcetRH4Rc5QoxfnShwFv1WRwTBLnA2k8PDrEgqpIcogxRTq9Q5zLMj70q+AqgsijmFys4QkEUycXyyApCFe33JMVZcXoqI3ZLL/qitbkJQ9dE9Ex+LDmLsogX6cnWo6TvFjEx48Oi4ZC0IJ90YdXxJq/WHQsv9pTsMWoqk8dJ4FPzRaTFe43nxB0McqQ5M6ig6LQg3V0PwtuYr5YHr4s6+iVo55/iZz7P3/8fP569fXhyneeTkyuNLQmU+GLafimoHY3fd07qIPbBS18Mfx8Jnyxgmj0nrJAEzekwtGp4mFoEmhLtaxzQEGCrKD+HDv1R7O5o/7IIOt0omgJY/83f9J2/zefj5wIQTwXYDBLQsd2vqPF/qk1mAlLggY39jjoP4UTd3TQxiMyg6Ft4F4MJQKQju/VxwbaFBibRLhxAg3dsE/NanbKcHsPqIu5kSfS+2pjQiEtAE3QCdIC0ocBb9WEyUnSAho2GqjZ9Z3dD3CV/8vKufNO5s4WAfaA/ORk25MZtUS8ZIpsswlR8ynW4drD3Im8qOs86rHdCEYu1aHtRiRA4tJzFzSFFVb+6yXr9jlYpA+3N98+CtB0cDKxFQUQNLSWFOA3tLbrualf82jgDgrT1zyCvQZNtHdsHZg6FU5mS7WUOJkmHsdu4WS2nH+wXQAC8Fh
e2CcYmQShZmrTsedARB0Lx1fwMQmIslTOEEVm+Ao25lGzMU08xYshGxNEmnjrdh+7aSgXs7laWlxM3Lxq42K2nE/HHCOnZQkmJmsmZqOEeO88TLBzyDxiwcJ8iyxMU56OhQkCULy+uaZoKSy2IM04wWJL+jDgrZoMDcqLLXAsFMrZ49EHFhRMkmTD31kwtQ/c8M5wThL2BQOTZr6+iSdxMczXhwHZZemCJwIm/BR6R03MJvvFkqjwLy0MPBT5l3Avj/oqIE7pl03B8GPDoS0R1F9f5OEVUeRewv3auBoxbexYkC7pakz8dVAMXxrZOMf5NeEEIqki0lKmowHDiOSfmC5YwHQRyZtXMcLLXLgnATfFzlspwLlFw4mlaNj2qYrFFYcSf3GNTYv4a+JGaQvxFz9/bOIvPBO6xLLfGPkjcX8p8H5bqqVHCYFTPqHAiaD9Hjvt1zSnpP3CSONojzS2YRe5a/wv7xFOBCCeczKYGGlhOQC0OL+1Bo9MjIQHjSabnRfKb6PDdmy4xsOAg3Ft4w4LJcKvhW94yQTX7/CFf5Y2Ht8Xqou1cScySGtDQiEDBU3PCTJQ0ocBb9UEySkyUMD9/aY2/XrEVYiAbV9J4ys+UqPn4NnN3YMjWPAYl5JjWXQy3GBiu/AcQIsFvMmZpocRlKEYdM5R7kXvoMKWzlnS8smN85rwTaRLb731BGl6GP8XIk0zXhIEtwdhwrTXBIaGYMg2uMNQfjfus00Fh5cAKlkyTCx4qOckzMkSTps3jBG0y+OlXcqShseqNbsON6bMS9nosIEIA8+Z2L3tsbl815wOyX5dfZTOzseBU11p0iAiSw6akG8kAwSpnposMOqygLFFw+PC1YxN9ELpKBrYhC8gy0TDQu5dIxhAXZpOJ3wvSyqp2eTW1S7RTBZ5HpoIGWMDYSmnmkIiQRSGcX/4NddLaSUpva1avZfShsXWayiBUSf7ksLof3NGvzK10a93Mfr7KvoBCnsK4hlfW3LvRA5uBw5cqpAlvaZKJWaKPm92H0UvMZGtkJf17hW9TFLIhXQapuixRVeTpnZXqg2Xu2j3xmtoIbBxt0fh578vP18FYn6MCXFkdqawNt+atanqk1ubNF4Vw4O1ya8NqeIUnh7BolpduOdPz4bMm93DhsxbR1mDj8HYFBr8rWhwwIFnrMEbE7GEBn9TGlyxJtfgFEiRQoM3a/ChbF5AXhX0NwYaXOmtwfPW0dXgBjnCLjT4u9PgCuAUsdXgxhgxodH2ABGb9jJINsTe71Bw6Bjs2gtuadE3CoRtQI6aJRWbnUvFfudKqnDrW56jipzVZndQVbVk0Iuhwo9UK8GxWjmWtjI5uA6jlRMUhwMvRmN3kmyynsAXOiVJnj7xETSTQU2Ol6vfHYwjZ71doqvy69deccJzGLnV6muXu94ijJBhEK7L19OZsWePL37gOzvIYTsJDM0qb5mpNGaJgu+AZ3abJDijbbRJQiYKuP4TOFq7kc/65mwHxGXcDGjijhIRMGq72x6Hx1TsCEHeQ2IEyGj4+7ugd/JBOdwmLcz0dc6FYBWClbZgxd8SP7lgbdgiVEhWPiRrbWOwySVrw16eQrQK0TqNaK29QHRy0UpereolWolwboo7dF9/gGr5st48JoC6CryVtws+SjP94gP6t3LieyQZZ+b5lx2fMSn/CCPjGCW6J7u6Z0IS3TZM1TFGgmotQ1AH3v2XJ6iUwYqnaY8HVrKH1Uu40MPk142HBGLyko0Ui2hOdMAeD1v1FdAhgY3tVn118ClFYLYFfmMEwZafH75/DzY/Nze3v52bh6/R99+X4PvThah8G6ISeFtgDcBkh8mcUlSCWFWEpDwaSTk29qaXlOQFKyEp37WklG3uJCU5tiQk5ZuSlAD2aEpK9DUKk4Esjn1CXXt/G7rJO5Ov/gU=</diagram></mxfile>
2210.06041/main_diagram/main_diagram.pdf ADDED
Binary file (41.8 kB). View file
 
2210.06041/paper_text/intro_method.md ADDED
@@ -0,0 +1,157 @@
1
+ # Introduction
2
+
3
+ Reinforcement learning (RL) has achieved remarkable progress in games [@mnih2013dqn; @vinyals2019starcraft; @guan2022perfectdou], financial trading [@fang2021universal] and robotics [@gu2017deep]. At their core, however, without designs tailored to specific tasks, general RL paradigms still learn implicit representations from the critic loss (value prediction) and the actor loss (maximizing cumulative reward). In many real-world scenarios where observations are complicated (e.g., images) or incomplete (e.g., partially observable), training an agent that can extract informative signals from those inputs becomes incredibly sample-inefficient.
4
+
5
+ Therefore, many recent works have been devoted to obtaining good state representations, which is believed to be one of the key solutions for improving the efficacy of RL [@laskin2020curl; @laskin2020reinforcement]. One of the main streams is adding auxiliary losses to update the state encoder. Under the hood, this resorts to informative and dense learning signals in order to encode various forms of prior knowledge and regularization [@shelhamer2016loss] and obtain better latent representations. Over the years, a series of works have attempted to figure out the form of the most helpful auxiliary loss for RL. Quite a few advances have been made, including observation reconstruction [@yarats2019sac_ae], reward prediction [@DBLP:conf/iclr/JaderbergMCSLSK17], environment dynamics prediction [@shelhamer2016loss; @de2018integrating; @ota2020ofe], etc. But we note two problems in this evolving process: (i) each of the loss designs listed above is obtained through empirical trial and error based on expert designs, thus heavily relying on human labor and expertise; (ii) few works have used the final performance of RL as an optimization objective to directly search for the auxiliary loss, indicating that these designs could be sub-optimal. To resolve these issues with existing handcrafted solutions, we automate the process of designing auxiliary loss functions for RL and propose a principled solution named Automated Auxiliary Loss Search (A2LS). A2LS formulates the problem as a bi-level optimization in which we try to find the auxiliary loss that best helps train a good RL agent. The outer loop searches for auxiliary losses based on RL performance to ensure the searched losses align with the RL objective, while the inner loop performs RL training with the searched auxiliary loss function. Specifically, A2LS utilizes an evolutionary strategy to search the configuration of auxiliary losses over a novel search space of size $7.5 \times 10^{20}$ that covers many existing solutions. By searching on a small set of simulated *training environments* of continuous control from the DeepMind Control Suite (DMC) [@tassa2018deepmind], [A2LS]{.smallcaps} finalizes a loss, namely `A2-winner`.
6
+
7
+ To evaluate the generalizability of the discovered auxiliary loss `A2-winner`, we test it on a wide set of *test environments*, including both image-based and vector-based tasks (the latter with proprioceptive features such as positions, velocities and accelerations as inputs). Extensive experiments show the searched loss function is highly effective and largely outperforms strong baseline methods. More importantly, the searched auxiliary loss generalizes well to unseen settings such as (i) different robots of control; (ii) different data types of observation; (iii) partially observable settings; (iv) different network architectures; and (v) even a totally different discrete-control domain (Atari 2600 games [@bellemare2013arcade]). Finally, we perform detailed statistical analyses of the relation between RL performance and auxiliary loss patterns based on data from the whole evolutionary search process, providing useful insights for future studies of auxiliary loss design and representation learning in RL.
8
+
9
+ <figure id="fig:auxi_flow" data-latex-placement="t">
10
+ <embed src="figs/auxi_rl_flow.drawio_8.pdf" style="width:100.0%" />
11
+ <figcaption>Overview of <span class="smallcaps">A2LS</span>. <span class="smallcaps">A2LS</span> contains an inner loop (left) and an outer loop (right). The inner loop performs an RL training procedure with searched auxiliary loss functions. The outer loop searches auxiliary loss functions using an evolutionary algorithm to select better auxiliary losses.</figcaption>
12
+ </figure>
13
+
14
+ # Method
15
+
16
+ We consider the standard Markov Decision Process (MDP) $\mathcal{E}$ where the state, action and reward at time step $t$ are denoted as $(s_t, a_t, r_t)$. The sequence of rollout data sampled by the agent in the episodic environment is $( {s_0, \ldots, s_t, a_t, r_t, s_{t+1}, \cdots, s_{T}} )$, where $T$ represents the episode length. Suppose the RL agent is parameterized by $\omega$ (either the policy $\pi$ or the state-action value function $Q$), with a state encoder $g_\theta$ parameterized by $\theta\subseteq\omega$ which plays a key role for representation learning in RL. The agent is required to maximize its cumulative rewards in environment $\mathcal{E}$ by optimizing $\omega$, noted as $\mathcal{R}(\omega; \mathcal{E})=\mathbb{E}_{\pi}[\sum_{t=0}^{T-1} r_t]$.
17
+
18
+ In this paper, we aim to find the optimal auxiliary loss function $\mathcal{L}_{\text{Aux}}$ such that the agent can reach the best performance by optimizing $\omega$ under a combination of an arbitrary RL loss function $\mathcal{L}_{\text{RL}}$ together with an auxiliary loss $\mathcal{L}_{\text{Aux}}$. Formally, our optimization goal is: $$\begin{equation}
19
+ \max_{\mathcal{L}_{\text{Aux}}} \quad \mathcal{R}(\min_{\omega}\mathcal{L}_{\text{RL}}(\omega; \mathcal{E}) + \lambda \mathcal{L}_{\text{Aux}}(\theta; \mathcal{E}); \mathcal{E})~,
20
+ \label{eq:nested_optimization}
21
+ \end{equation}$$ where $\lambda$ is a hyper-parameter balancing the relative weight of the auxiliary loss. The left part (inner loop) of [1](#fig:auxi_flow){reference-type="ref+Label" reference="fig:auxi_flow"} illustrates how data and gradients flow in RL training when an auxiliary loss is enabled. Some instances of $\mathcal{L}_{\text{RL}}$ and $\mathcal{L}_{\text{Aux}}$ are given in [\[appendix: loss_example\]](#appendix: loss_example){reference-type="ref+Label" reference="appendix: loss_example"}. Unfortunately, existing auxiliary losses $\mathcal{L}_{\text{Aux}}$ are handcrafted, which heavily rely on expert knowledge, and may not generalize well in different scenarios as shown in the experiment part. To find better auxiliary loss functions for representation learning in RL, we introduce our principled solution in the following section.
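+
+ Concretely, the inner-loop update can be sketched as a standard PyTorch training step in which the auxiliary term is simply added to the RL loss; all names below are illustrative, not the paper's implementation.
+
+ ```python
+ def inner_loop_update(agent, batch, optimizer, lam=0.5):
+     rl_loss = agent.rl_loss(batch)    # e.g. actor + critic losses of SAC
+     aux_loss = agent.aux_loss(batch)  # L_Aux(theta): touches the encoder only
+     loss = rl_loss + lam * aux_loss   # lambda balances the two objectives
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return float(rl_loss), float(aux_loss)
+ ```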
22
+
23
+ To meet our goal of finding top-performing auxiliary loss functions without expert involvement, we turn to the help of automated loss search, which has shown promising results in the automated machine learning (AutoML) community [@li2019lfs; @li2021autoloss_zero; @wang2020loss]. Correspondingly, we propose Automated Auxiliary Loss Search (A2LS), a principled solution to the bi-level optimization problem in [\[eq:nested_optimization\]](#eq:nested_optimization){reference-type="ref+Label" reference="eq:nested_optimization"}. A2LS resolves the inner problem as a standard RL training procedure; for the outer one, A2LS defines a finite and discrete search space ([3.1](#sec:search_space){reference-type="ref+Label" reference="sec:search_space"}) and designs a novel evolution strategy to efficiently explore the space ([3.2](#sec:search_algorithm){reference-type="ref+Label" reference="sec:search_algorithm"}).
24
+
25
+ []{#tab:existing_in_search_space label="tab:existing_in_search_space"}
26
+
27
+ We have argued that almost all existing auxiliary losses require expert knowledge, and we expect to search for a better one automatically. To this end, it is clear that we should design a search space that satisfies the following desiderata.
28
+
29
+ - **Generalization**: the search space should cover most of the existing handcrafted auxiliary losses to ensure the searched results can be no worse than handcrafted losses;
30
+
31
+ - **Atomicity**: the search space should be composed of several independent dimensions to fit into any general search algorithm [@configspace] and support an efficient search scheme;
32
+
33
+ - **Sufficiency**: the search space should be large enough to contain the top-performing solutions.
34
+
35
+ Given the criteria, we survey and list some existing auxiliary losses in [\[tab:existing_in_search_space\]](#tab:existing_in_search_space){reference-type="ref+Label" reference="tab:existing_in_search_space"} and find their commonalities as well as their differences. We realize that these losses share similar components and computation flow. As shown in [2](#fig:auxi_computation_graph){reference-type="ref+Label" reference="fig:auxi_computation_graph"}, when training the RL agent, the loss first selects a sequence $\{s_{t}, a_{t}, r_{t}\}_{t=i}^{i+k}$ from the replay buffer, where $k$ is called the *horizon*. The agent then tries to predict some elements of the sequence (called the *target*) based on another picked set of elements from the sequence (called the *source*). Finally, the loss calculates and minimizes the prediction error (rigorously defined with an *operator*). To be more specific, the encoder part $g_\theta$ of the agent first encodes the *source* into latent representations, which are further fed into a predictor $h$ to get a prediction $y$; the auxiliary loss is computed from the prediction $y$ and the target $\hat{y}$, which is translated from the *target* by a target encoder $g_{\hat{\theta}}$, using an *operator* $f$. The target encoder is updated in a momentum manner as shown in [2](#fig:auxi_computation_graph){reference-type="ref+Label" reference="fig:auxi_computation_graph"} (details are given in [\[sec:ablation_target_encoders\]](#sec:ablation_target_encoders){reference-type="ref+Label" reference="sec:ablation_target_encoders"}). Formally, $$\begin{equation}
36
+ \mathcal{L}_{\text{Aux}}(\theta; \mathcal{E}) = f\Big(h\big(g_\theta(\text{seq}_\text{source})\big), g_{\hat{\theta}}(\text{seq}_{\text{target}})\Big)~,
37
+ \label{eq:computational_graph}
38
+ \end{equation}$$ where $\text{seq}_\text{source},\text{seq}_\text{target}\subseteq\{s_{t}, a_{t}, r_{t}\}_{t=i}^{i+k}$ are both subsets of the candidate sequence. And for simplicity, we will denote $g_\theta(s_t, a_t, r_t, s_{t+1}, \cdots)$ as short for $[g_\theta(s_t), a_t, r_t, g_\theta(s_{t+1}), \cdots]$ for the rest of this paper (the encoder $g$ only deals with states $\{s_i\}$). Thereafter, we observe that these existing auxiliary losses differ in two dimensions, i.e., *input elements* and *operator*, where *input elements* are further combined by *horizon*, *source* and *target*. These differences compose our search dimensions of the whole space. We then illustrate the search ranges of these dimensions in detail.
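+
+ In PyTorch-like pseudocode, the computation graph above reads as below; the module definitions are illustrative, and following the paper's shorthand the encoders are assumed to embed only the state entries of a sequence while passing actions and rewards through.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def aux_loss(encoder, target_encoder, predictor, seq_source, seq_target):
+     y = predictor(encoder(seq_source))      # y = h(g_theta(source))
+     with torch.no_grad():
+         y_hat = target_encoder(seq_target)  # y_hat = g_theta_hat(target)
+     return F.mse_loss(y, y_hat)             # operator f (MSE shown here)
+
+ @torch.no_grad()
+ def momentum_update(encoder, target_encoder, tau=0.99):
+     # The target encoder tracks the online encoder with momentum tau.
+     for p, p_t in zip(encoder.parameters(), target_encoder.parameters()):
+         p_t.mul_(tau).add_((1.0 - tau) * p)
+ ```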
39
+
40
+ **Input elements.**
41
+
42
+ <figure id="fig:auxi_computation_graph" data-latex-placement="t">
43
+ <embed src="figs/AutoRRL-Auxi.drawio_9.pdf" style="width:90.0%" />
44
+ <figcaption>Overview of the search space <span class="math inline">{ℐ, <em>f</em>}</span> and the computation graph of auxiliary loss functions. <span class="math inline">ℐ</span> selects a candidate sequence <span class="math inline">{<em>s</em><sub><em>t</em></sub>, <em>a</em><sub><em>t</em></sub>, <em>r</em><sub><em>t</em></sub>}<sub><em>t</em> = <em>i</em></sub><sup><em>i</em> + <em>k</em></sup></span> with <em>horizon</em> <span class="math inline"><em>k</em></span>; then determines a <em>source</em> and a <em>target</em> as arbitrary subsets of the sequence; an encoder <span class="math inline"><em>g</em><sub><em>θ</em></sub></span> first encodes the <em>source</em> into latent representations, which are fed into a predictor <span class="math inline"><em>h</em></span> to get a prediction <span class="math inline"><em>y</em></span>; the auxiliary loss is computed over the prediction <span class="math inline"><em>y</em></span> and the ground truth <span class="math inline"><em>ŷ</em></span> that is translated from the <em>target</em> by a target encoder <span class="math inline"><em>g</em><sub><em>θ̂</em></sub></span>, using an operator <span class="math inline"><em>f</em></span>. </figcaption>
45
+ </figure>
46
+
47
+ The *input elements* denote all inputs to the loss functions, which can be further disassembled into *horizon*, *source* and *target*. Different from previous automated loss search works, the *target* here is not "ground-truth" because auxiliary losses in RL have no labels beforehand. Instead, both *source* and *target* are generated via interaction with the environment in a self-supervised manner. Specifically, the *input elements* first determine a candidate sequence $\{s_{t}, a_{t}, r_{t}\}_{t=i}^{i+k}$ with *horizon* $k$, and then choose two subsets of the candidate sequence as the *source* and the *target*, respectively. For example, the subsets can be $\{s_t\}, \{s_t,s_{t+1}\}$, or $\{s_t,r_{t+1},a_{t+2}\}, \{s_t,s_{t+1},a_{t+1}\}$, etc.
48
+
49
+ **Operator.** Given a prediction $y$ and its target $\hat{y}$, the auxiliary loss is computed by an operator $f$, which is often a similarity measure. In our work, we cover all different operators $f$ used by the previous works, including inner product (Inner) [@he2020moco; @stooke2021decoupling], bilinear inner product (Bilinear) [@laskin2020curl], cosine similarity (Cosine) [@chen2020simCLR], mean squared error (MSE) [@ota2020ofe; @de2018integrating] and normalized mean squared error (N-MSE) [@schwarzer2020spr]. Additionally, other works also utilize contrastive objectives, e.g., InfoNCE loss [@oord2018cpc], incorporating the trick to sample un-paired predictions and targets as negative samples and maximize the distances between them. This technique is orthogonal to the five similarity measures mentioned above, so we make it optional and create $5 \times 2 = 10$ different operators in total.
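+
+ The ten operators can be sketched as follows for predictions `y` and targets `y_hat` of shape `(batch, dim)`; the contrastive variant pairs each of the five similarity measures with an InfoNCE-style objective that treats the other batch items as negatives (the InfoNCE sketch below uses scaled dot products for concreteness). Implementations are illustrative.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def inner(y, y_hat):
+     return -(y * y_hat).sum(-1).mean()
+
+ def bilinear(y, y_hat, W):  # W is a learnable (dim, dim) matrix
+     return -((y @ W) * y_hat).sum(-1).mean()
+
+ def cosine(y, y_hat):
+     return -F.cosine_similarity(y, y_hat, dim=-1).mean()
+
+ def mse(y, y_hat):
+     return F.mse_loss(y, y_hat)
+
+ def n_mse(y, y_hat):  # MSE between L2-normalized vectors
+     return F.mse_loss(F.normalize(y, dim=-1), F.normalize(y_hat, dim=-1))
+
+ def info_nce(y, y_hat, temperature=0.1):
+     logits = (y @ y_hat.t()) / temperature              # pairwise similarities
+     labels = torch.arange(y.size(0), device=y.device)   # positives on diagonal
+     return F.cross_entropy(logits, labels)
+ ```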
50
+
51
+ **Final design.** In light of the preceding discussion, with the definitions of *input elements* and *operator*, we complete the design of the search space, which satisfactorily meets the desiderata mentioned above. Specifically, the space is **generalizable**, covering most of the existing handcrafted auxiliary losses; additionally, its **atomicity** is embodied in the compositionality that any *input elements* work with any *operator*; most importantly, the search space is **sufficiently** large, with a total size of $7.5 \times 10^{20}$ (detailed calculation can be found in [\[sec:calculation_complexity\]](#sec:calculation_complexity){reference-type="ref+Label" reference="sec:calculation_complexity"}), to contain better solutions.
52
+
53
+ The success of evolution strategies in exploring large, multi-dimensional search spaces has been proven in many works [@DBLP:conf/nips/HouthooftEPG18; @co2020evolvingRL]. Similarly, A2LS adopts an evolutionary algorithm [@real2019regularized] to search for top-performing auxiliary loss functions over the designed search space. In essence, the evolutionary algorithm (i) keeps a population of loss function candidates; (ii) evaluates their performance; (iii) eliminates the worst and evolves into a new, better population. Note that the "evaluating" step (ii) is very costly because it needs to train RL agents with dozens of different auxiliary loss functions. Therefore, our key technical contributions concern how to further reduce the search cost ([3.2.1](#sec:pruning){reference-type="ref+Label" reference="sec:pruning"}) and how to design an efficient search procedure ([3.2.2](#sec:evolution){reference-type="ref+Label" reference="sec:evolution"}).
54
+
55
+ In our preliminary experiments, we find that the *operator* dimension of the search space can be simplified. In particular, MSE outperforms all the other operators by significant margins in most cases, so we effectively prune all *operator* choices except MSE. See [\[Ablation Study on Search Space Pruning\]](#Ablation Study on Search Space Pruning){reference-type="ref+Label" reference="Ablation Study on Search Space Pruning"} for complete comparative results and an ablation study on the effectiveness of search space pruning.
56
+
57
+ Our evolution procedure contains four important components: (i) **evaluation and selection**: a population of candidate auxiliary losses is evaluated through an inner loop of RL training, and we then select the top candidates for the next evolution stage (i.e., generation); (ii) **mutation**: the selected candidates mutate to form a new population for the next stage; (iii) **loss rejection**: invalid auxiliary losses are filtered out and skipped rather than evaluated; and (iv) **bootstrapping the initial population**: prior knowledge is used to assign more probability to initial auxiliary losses that may contain useful patterns, for higher efficiency. The step-by-step evolution algorithm is provided in [\[alg:method\]](#alg:method){reference-type="ref+Label" reference="alg:method"} in the appendix, and an overview of the [A2LS]{.smallcaps} pipeline is illustrated in [1](#fig:auxi_flow){reference-type="ref+Label" reference="fig:auxi_flow"}. We next describe these components in detail.
58
+
59
+ **Evaluation and selection.** At each evolution stage, we first train a population of candidates with a population size $P=100$ by the inner loop of RL training. The candidates are then sorted by computing the approximated *area under learning curve* (AULC) [@ghiassian2020improving; @stadie2015incentivizing], which is a single metric reflecting both the convergence speed and the final performance [@viering2021shape] with low variance of results. After each training stage, the top-25% candidates are selected to generate the population for the next stage. We include an ablation study on the effectiveness of AULC in [\[Effectiveness of AULC scores\]](#Effectiveness of AULC scores){reference-type="ref+Label" reference="Effectiveness of AULC scores"}.
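+
+ As a small sketch of this step (the AULC approximation follows the paper's use of averaged evaluation returns at fixed intervals; names are illustrative):
+
+ ```python
+ def aulc(eval_returns):
+     # Approximate area under the learning curve: mean of evaluation returns
+     # logged at fixed intervals (e.g. every 100k steps).
+     return sum(eval_returns) / len(eval_returns)
+
+ def select_top(population, scores, frac=0.25):
+     # Keep the top-25% candidates to seed the next evolution stage.
+     ranked = sorted(zip(scores, range(len(population))), reverse=True)
+     keep = max(1, int(len(population) * frac))
+     return [population[i] for _, i in ranked[:keep]]
+ ```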
60
+
61
+ **Mutation.** To obtain a new population of auxiliary loss functions, we propose a novel mutation strategy. First, we represent both the *source* and the *target* of the input elements as a pair of binary masks, where each bit of the mask represents *selected* by 1 or *not selected* by 0. For instance, given a candidate sequence $\{s_t,a_t,r_t,s_{t+1},a_{t+1},r_{t+1}\}$, the binary mask of this subset sequence $\{s_t,a_t,r_{t+1}\}$ is denoted as $110001$. Afterward, we adopt four types of mutations, also shown in [3](#fig:evolution_operations){reference-type="ref+Label" reference="fig:evolution_operations"}: (i) replacement (50% of the population): flip the given binary mask with probability $p = \frac{1}{2 \cdot (3k + 3)}$ with the horizon length $k$; (ii) crossover (20%): generate a new candidate by randomly combining the mask bits of two candidates with the same horizon length in the population; (iii) horizon decrease and horizon increase (10%): append new binary masks to the tail or delete existing binary masks at the back. (iv) random generation (20%): every bit of the binary mask is generated from a Bernoulli distribution $\mathcal{B}(0.5)$.
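+
+ A sketch of the four mutation types over a single binary mask (one of the source/target pair), where a mask has $3(k+1)$ bits covering $\{s, a, r\}$ at each of the $k+1$ timesteps:
+
+ ```python
+ import random
+
+ def replacement(mask, k):
+     p = 1.0 / (2 * (3 * k + 3))  # per-bit flip probability from the paper
+     return [b ^ int(random.random() < p) for b in mask]
+
+ def crossover(mask_a, mask_b):
+     # Requires both parents to share the same horizon (equal mask lengths).
+     return [random.choice(pair) for pair in zip(mask_a, mask_b)]
+
+ def change_horizon(mask, grow):
+     # Append one timestep's bits at the tail, or drop the last timestep.
+     return mask + [random.randint(0, 1) for _ in range(3)] if grow else mask[:-3]
+
+ def random_mask(k):
+     return [random.randint(0, 1) for _ in range(3 * (k + 1))]  # Bernoulli(0.5)
+ ```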
62
+
63
+ <figure id="fig:evolution_operations" data-latex-placement="t">
64
+ <embed src="figs/AutoRRL-Evolution-Mutations-v2.drawio_2.pdf" style="width:100.0%" />
65
+ <figcaption> Four types of mutation strategy for evolution. We represent both the <em>source</em> and the <em>target</em> of the input elements as a pair of binary masks, where each bit of the binary mask represents <em>selected</em> (green block) by 1 or <em>not selected</em> (white block) by 0. </figcaption>
66
+ </figure>
67
+
68
+ **Loss rejection protocol.** Since the auxiliary loss needs to be differentiable with respect to the parameters of the state encoder, we perform a gradient flow check on randomly generated loss functions during evolution and skip evaluating invalid auxiliary losses. Concretely, the following conditions must be satisfied to make a valid loss function: (i) having at least one state element in $\text{seq}_{\text{source}}$ to make sure the gradient of auxiliary loss can propagate back to the state encoder; (ii) $\text{seq}_{\text{target}}$ is not empty; (iii) the horizon should be within a reasonable range ($1\leq k\leq10$ in our experiments). If a loss is rejected, we repeat the mutation to fill the population.
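+
+ With the mask layout above (bits grouped per timestep as $(s, a, r)$), the rejection check reduces to a few lines:
+
+ ```python
+ def is_valid(source_mask, target_mask, k):
+     has_state_in_source = any(source_mask[3 * t] for t in range(k + 1))  # (i)
+     non_empty_target = any(target_mask)                                  # (ii)
+     return has_state_in_source and non_empty_target and 1 <= k <= 10     # (iii)
+ ```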
69
+
70
+ **Bootstrapping initial population.** To improve the computational efficiency so that the algorithm can find reasonable loss functions quickly, we incorporate prior knowledge into the initialization of the search. Particularly, before the first stage of evolution, we bootstrap the initial population with a prior distribution that assigns high probability to auxiliary loss functions containing useful patterns like dynamics and reward prediction. More implementation details are provided in [\[sec:evolution_details\]](#sec:evolution_details){reference-type="ref+Label" reference="sec:evolution_details"}.
71
+
72
+ <figure id="fig:image_evolution_process" data-latex-placement="htbp">
73
+ <embed src="figs/new_pixel_evolution_process-crop.pdf" />
74
+ <figcaption> Evolution process in the three training (image-based) environments. Every white dot represents a candidate of auxiliary loss, and y-axis shows its corresponding approximated AULC score <span class="citation" data-cites="ghiassian2020improving stadie2015incentivizing"></span>. The horizontal lines show the scores of the baselines. The AULC score is approximated with the average evaluation score at 100k, 200k, 300k, 400k, 500k time steps. </figcaption>
75
+ </figure>
76
+
77
+ As mentioned in [1](#sec:introduction){reference-type="ref+Label" reference="sec:introduction"}, we expect to find auxiliary losses that align with the RL objective and generalize well to unseen *test environments*. To do so, we use A2LS to search over a small set of *training environments*, and then test the searched results on a wide range of *test environments*. In this section, we first introduce the evolution on *training environments* and search results.
78
+
79
+ The *training environments* are chosen as three image-based (observations for agents are images) continuous control tasks in the DMC benchmark [@tassa2018deepmind]: Cheetah-Run, Reacher-Easy, and Walker-Walk. For each environment, we set the total budget to 16k GPU hours (on NVIDIA P100) and terminate the search when the resource is exhausted. Due to computational cost, we only run one seed for each inner-loop RL training, but we mitigate such randomness through cross-validation (see [4.2](#sec:search-reasults){reference-type="ref+Label" reference="sec:search-reasults"}). We use the same network architecture and hyperparameter configuration as CURL [@laskin2020curl] (see [\[sec:hyper_image\]](#sec:hyper_image){reference-type="ref+Label" reference="sec:hyper_image"} for details) to train the RL agents. To evaluate the population during evolution, we compare [A2LS]{.smallcaps} against SAC, SAC-wo-aug, and CURL, where we randomly crop images from $100 \times 100$ to $84 \times 84$ as data augmentation (the same technique used in CURL [@laskin2020curl]) for all methods except SAC-wo-aug. The whole evolution process on the three environments is demonstrated in [4](#fig:image_evolution_process){reference-type="ref+Label" reference="fig:image_evolution_process"}. Even in the early stages (e.g., stage 1), some of the auxiliary loss candidates already surpass the baselines, indicating the high potential of automated loss search. The overall AULC scores of the population continue to improve as more stages come in (detailed numbers are summarized in [\[appendix:trend\]](#appendix:trend){reference-type="ref+Label" reference="appendix:trend"}). Judging from the trend, we believe the performance could improve even more if we further increased the budget.
80
+
81
+ ::: wrapfigure
82
+ r0.37 ![image](figs/bagging-image-crop.pdf){width="37%"}
83
+ :::
84
+
85
+ Although some candidates in the population achieved remarkable AULC scores during the evolution ([4](#fig:image_evolution_process){reference-type="ref+Label" reference="fig:image_evolution_process"}), they were only evaluated with one random seed in one environment, calling their robustness into question. To ensure that we find a consistently useful auxiliary loss, we conduct a cross-validation. We first choose the top 5 candidates of stage 5 of the evolution on Cheetah-Run (the top candidates throughout the whole evolution procedure are provided in [\[sec:top_losses\]](#sec:top_losses){reference-type="ref+Label" reference="sec:top_losses"}). For each of the five candidates, we repeat the RL training on all three *training environments*, as shown in [\[fig:bagging-image\]](#fig:bagging-image){reference-type="ref+Label" reference="fig:bagging-image"}. Finally, we mark the best of the five (green bar in [\[fig:bagging-image\]](#fig:bagging-image){reference-type="ref+Label" reference="fig:bagging-image"}) as our final searched result. We call it `A2-winner`, which has the following form: $$\begin{equation}
86
+ \mathcal{L}_{\text{Aux}}(\theta; \mathcal{E}) =
87
+ \|h\big(g_\theta(s_{t+1}, a_{t+1}, a_{t+2}, a_{t+3})\big) - g_{\hat{\theta}}(r_t, r_{t+1}, s_{t+2}, s_{t+3})\|_2~.
88
+ \label{eq:method-image}
89
+ \end{equation}$$
90
+
91
+ []{#sec:experiments label="sec:experiments"} To verify the effectiveness of the searched results, we conduct in-depth generalization experiments on a wide range of *test environments*. Implementation details and more ablation studies are given in [\[sec:implementation_details\]](#sec:implementation_details){reference-type="ref+Label" reference="sec:implementation_details"} and [\[Further Experiment Results\]](#Further Experiment Results){reference-type="ref+Label" reference="Further Experiment Results"}.
92
+
93
+ **Generalize to unseen image-based tasks.** We first investigate the generalizability of `A2-winner` to unseen image-based tasks by training agents with `A2-winner` on common DMC tasks and compare with model-based and model-free baselines that use different auxiliary loss functions (see [\[sec:baselines\]](#sec:baselines){reference-type="ref+Label" reference="sec:baselines"} for details about baseline methods). The results are summarized in [\[tab:image_generalization\]](#tab:image_generalization){reference-type="ref+Label" reference="tab:image_generalization"} where `A2-winner` greatly outperforms other baseline methods on most tasks, including unseen *test environments*. This implies that `A2-winner` is a robust and effective auxiliary loss for image-based continuous control tasks to improve both the efficiency and final performance.
94
+
95
+ ::: center
96
+ []{#tab:image_generalization label="tab:image_generalization"}
97
+ :::
98
+
99
+ :::: wraptable
100
+ r0.6
101
+
102
+ []{#tab:atari label="tab:atari"}
103
+
104
+ ::: center
105
+ :::
106
+ ::::
107
+
108
+ **Generalize to totally different benchmark domains.** To further verify the generalizability of `A2-winner` on totally different benchmark domains other than DMC tasks, we conduct experiments on the Atari 2600 Games [@bellemare2013arcade], where we take Efficient Rainbow [@van2019effrainbow] as the base RL algorithm and add `A2-winner` to obtain a better state representation. Results are shown in [\[tab:atari\]](#tab:atari){reference-type="ref+Label" reference="tab:atari"} where `A2-winner` outperforms all baselines, showing strong evidence of the generalization and potential usages of `A2-winner`. Note that the base RL algorithm used in Atari is a value-based method, indicating that `A2-winner` generalizes well to both value-based and policy-based RL algorithms.
109
+
110
+ r
111
+
112
+ []{#tab:state_generalization label="tab:state_generalization"}
113
+
114
+ **Generalize to different observation types.** To see whether `A2-winner` (searched in image-based environments) is able to generalize to the environments with different observation types, we test `A2-winner` on vector-based (inputs for RL agents are proprioceptive features such as positions, velocities and accelerations) tasks of DMC and list the results in [\[tab:state_generalization\]](#tab:state_generalization){reference-type="ref+Label" reference="tab:state_generalization"}. Concretely, we compare `A2-winner` with SAC-Identity, SAC and CURL, where SAC-Identity does not have state encoder while the others share the same state encoder architecture (See [\[sec:implementation_encoder_architectures\]](#sec:implementation_encoder_architectures){reference-type="ref+Label" reference="sec:implementation_encoder_architectures"} and [\[sec:ablation_encoder_architectures\]](#sec:ablation_encoder_architectures){reference-type="ref+Label" reference="sec:ablation_encoder_architectures"} for detailed implementations). To our delight, `A2-winner` still outperforms all baselines in 12 out of 18 environments, showing `A2-winner` can also benefit RL performance in vector-based observations. Moreover, the performance gain is particularly significant in more complex environments like Humanoid, where SAC barely learns anything at 1000K time steps. In order to get a deeper understanding of this phenomenon, we additionally visualize the Q loss landscape for both methods in [\[sec:landscape\]](#sec:landscape){reference-type="ref+Label" reference="sec:landscape"}.
115
+
116
+ ::: wrapfigure
117
+ r0.45 ![image](figs/ablation_pixel_encoder-crop.pdf){width="45%"}
118
+ :::
119
+
120
+ The architecture of a neural network defines a hypothesis space of functions to be optimized. During the evolutionary search in [4.1](#sec: Evolution on Image-based RL){reference-type="ref+Label" reference="sec: Evolution on Image-based RL"}, the encoder architecture was kept fixed as a 4-layer convolutional neural network. Since the encoder architecture may have a large impact on the RL training process [@ota2021training; @bjorck2021towards], we test `A2-winner` with three encoders of different depths. The result is shown in [\[fig:method_different_encoder_depth_image\]](#fig:method_different_encoder_depth_image){reference-type="ref+Label" reference="fig:method_different_encoder_depth_image"}. Note that even though the auxiliary loss was searched with a 4-layer encoder, the 6-layer convolutional encoder performs better in both environments. This shows that the auxiliary loss function of `A2-winner` can improve RL performance with a deeper and more expressive image encoder. Moreover, the ranking of RL performance (6-layer \> 4-layer \> 2-layer) is consistent across the two environments, indicating that the auxiliary loss function of `A2-winner` does not overfit one specific encoder architecture.
121
+
122
+ ::: wrapfigure
123
+ r0.45 ![image](figs/new_ablation_pomdp-crop.pdf){width="45%"}
124
+ :::
125
+
126
+ Claiming the generality of a method based on conclusions drawn only from fully observable environments like DMC is dangerous. Therefore, we conduct an ablation study in the Partially Observable Markov Decision Process (POMDP) setting to see whether `A2-winner` still performs well. We randomly mask 20% of the state dimensions (e.g., 15 dimensions -\> 12 dimensions) to form a POMDP environment in DMC. As demonstrated in [\[fig:method_pomdp\]](#fig:method_pomdp){reference-type="ref+Label" reference="fig:method_pomdp"}, `A2-winner` consistently outperforms CURL and SAC-DenseMLP in the POMDP setting on Hopper-Hop and Cheetah-Run, showing that `A2-winner` is effective not only in fully observable environments but also in partially observable ones.
127
+
128
+ **To search or not?** As shown above, the searched result `A2-winner` generalizes well to all kinds of different settings. A natural question, however, is: for a new type of domain, why not perform a new evolutionary search instead of simply reusing the previously searched result? To compare these two solutions, we conduct, from scratch, another evolutionary search similar to [4.1](#sec: Evolution on Image-based RL){reference-type="ref+Label" reference="sec: Evolution on Image-based RL"} but replace the three image-based tasks with three vector-based ones (marked by $\dag$ in [\[tab:state_generalization\]](#tab:state_generalization){reference-type="ref+Label" reference="tab:state_generalization"}). More details are summarized in [\[Evolution on Vector-based RL\]](#Evolution on Vector-based RL){reference-type="ref+Label" reference="Evolution on Vector-based RL"}. We name the searched result "`A2-winner-v`". As shown in [\[tab:state_generalization\]](#tab:state_generalization){reference-type="ref+Label" reference="tab:state_generalization"}, `A2-winner-v` is a very strong-performing loss for vector-based tasks, even stronger than `A2-winner`. Indeed, `A2-winner-v` outperforms the baselines in 16 out of 18 environments (with 15 unseen *test environments*), while `A2-winner` only outperforms the baselines in 12 out of 18 environments. However, note that it costs another 5k GPU hours (on NVIDIA P100) to search for `A2-winner-v`, while there is no additional cost to directly use `A2-winner`. It is a trade-off between lower computational cost and better performance.
129
+
130
+ In this section, we analyze all the loss functions we have evaluated during the evolution procedure as a whole dataset in order to gain some insights into the role of auxiliary loss in RL performance. By doing so, we hope to shed light on future auxiliary loss designs. We will also release this "dataset" publicly to facilitate future research.
131
+
132
+ **Typical patterns.** We say that an auxiliary loss candidate has a certain pattern if the pattern's *source* is a subset of the candidate's *source*, and the pattern's *target* is a subset of the candidate's *target*. For instance, a loss candidate of $\bm{\{s_t, a_t\} \rightarrow \{s_{t+1}, s_{t+2}\}}$ has the pattern $\{s_t, a_t\} \rightarrow \{s_{t+1}\}$, and does not have the pattern $\{a_t, s_{t+1}\} \rightarrow \{s_t\}$. We then try to analyze whether a certain pattern is helpful to representation learning in RL in expectation.
133
+
134
+ Specifically, we analyze the following patterns: (i) forward dynamics $\{s_t, a_t\} \rightarrow \{s_{t+1}\}$; (ii) inverse dynamics $\{a_t, s_{t+1}\} \rightarrow \{s_t\}$; (iii) reward prediction $\{s_t, a_t\} \rightarrow \{r_t\}$; (iv) action inference $\{s_t, s_{t+1}\} \rightarrow \{a_t\}$ and (v) state reconstruction in the latent space $\{s_t\} \rightarrow \{s_{t}\}$. For each of these patterns, we categorize all the loss functions we have evaluated into (i) *with* or (ii) *without* this pattern. We then calculate the average RL performances of these two categories, summarized in [\[tab:analyze_losses\]](#tab:analyze_losses){reference-type="ref+Label" reference="tab:analyze_losses"}. Some interesting observations are as follows.
135
+
136
+ - Forward dynamics is helpful in most tasks and improves RL performance on Reacher-Easy (image) and Cheetah-Run (vector) significantly (p-value $< 0.05$).
137
+
138
+ - State reconstruction in the latent space improves RL performance in image-based tasks but undermines vector-based tasks. The improvements in image-based tasks could be attributed to the combination of augmentation techniques, which, combined with reconstruction loss, enforces the extraction of meaningful features. In contrast, no augmentation is used in the vector-based setting, and thus the encoder learns no useful representations. This also explains why CURL performs poorly in vector-based experiments.
139
+
140
+ - In the vector-based setting, some typical human-designed patterns (e.g., reward prediction, inverse dynamics, and action inference) can be very detrimental to RL performance, implying that some renowned techniques in loss designs might not work well under atypical settings.
141
+
142
+ :::: subtable
143
+ ::: center
144
+ []{#tab:typical_patterns label="tab:typical_patterns"}
145
+ :::
146
+ ::::
147
+
148
+ :::: subtable
149
+ ::: center
150
+ :::
151
+
152
+ []{#tab:prediction_targets label="tab:prediction_targets"}
153
+ ::::
154
+
155
+ []{#tab:analyze_losses label="tab:analyze_losses"}
156
+
157
+ **Number of Sources and Targets.** We further investigate whether it is more beneficial to use a small number of sources to predict a large number of targets ($n_{target} > n_{source}$, e.g., using $s_t$ to predict $s_{t+1},s_{t+2}, s_{t+3}$), or the other way around ($n_{target} < n_{source}$, e.g., using $s_t, s_{t+1}, s_{t+2}$ to predict $s_{t+3}$). Statistical results are shown in [\[tab:analyze_losses\]](#tab:analyze_losses){reference-type="ref+Label" reference="tab:analyze_losses"}, where we find that auxiliary losses with more states on the *target* side have a significant advantage over losses with more states on the *source* side. This result echoes recent works [@stooke2021decoupling; @schwarzer2020spr]: predicting more states leads to strong performance gains.
2211.03524/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2211.03524/paper_text/intro_method.md ADDED
@@ -0,0 +1,215 @@
1
+ # Introduction
2
+
3
+ Current e-commerce sites such as Amazon and eBay build review platforms to collect user feedback concerning their products. These platforms play a fundamental role in online transactions since they help future consumers collect useful reviews that assist them in deciding whether to make a purchase. Unfortunately, the number of user-generated reviews is nowadays overwhelming, raising doubts about the relevance and veracity of reviews. Therefore, there is a need to verify the quality of reviews before publishing them to prospective customers. As a result, this has inspired a recent surge of interest in the Review Helpfulness Prediction (RHP) problem.
4
+
5
+ The Cooks Standard 6-Quart Stainless Steel Stockpot with Lid is made with 18/10 stainless steel with an aluminum disc layered in the bottom. The aluminum disc bottom provides even heat distribution and prevents hot spots. Tempered glass lid with steam hole vent makes viewing food easy. Stainless steel riveted handles offer durability. Induction compatible. Works on gas, electric, glass, ceramic, etc. Oven safe to 500F, glass lid to 350F. Dishwasher safe.
6
+
7
+ ![](_page_0_Picture_12.jpeg)
8
+
9
+ I needed a stainless steel pot for canning my tomatoes. I learned the hard way that you have to use a non-reactive pot or else your end result will be inedible (I thought I was using stainless steel but quickly realized it wasnt) I headed to Amazon and came across this Cooks Standard SS Cookpot with cover and bought it after reading the reviews. I have had it for just under a year and it still looks just as good as the day I bought it. I couldn't be happier with my purchase! Oh, and by the way, this one actually is stainless steel unlike the other pot I bought that said it was and wasn't.
10
+
11
+ I ordered it on May 21st. What a waste of time and money.
12
+
13
+ ![](_page_0_Picture_17.jpeg)
14
+
15
+ | | Review 1 | Review 2 |
16
+ |-----------------|----------|----------|
17
+ | Label score | 4 | 1 |
18
+ | MCR score | 0.168 | 3.637 |
19
+ | Our Model score | 4.651 | 0.743 |
20
+
21
+ Table 1: Example of unreasonable predictions in the Multimodal Review Helpfulness Prediction task.
22
+
23
+ Two principal groups of early efforts focus on purely textual data. The first group follows feature engineering techniques, retrieving argument-based features [(Liu et al., 2017)](#page-9-0), lexical features [(Krishnamoorthy, 2015)](#page-9-1), and semantic features [(Kim et al., 2006)](#page-9-2), as input to their classifier. Inherently, their methods are labor-intensive and vulnerable to the typical issues of conventional machine learning methods. Instead of relying on manual features, the second group leverages deep neural models, for instance, RNN (Alsmadi et al., 2020) and CNN (Chen et al., 2018), to learn rich features automatically. Nonetheless, their approach is ineffective because the helpfulness of a review is contingent not only upon textual information but also upon other modalities.
24
+
25
+ <sup>∗</sup>Corresponding Author
28
+
29
+ To cope with the above issues, recent works (Liu et al., 2021b; Han et al., 2022) proposed to utilize multi-modality via the Multi-perspective Coherent Reasoning (MCR) model. Hypothesizing that a review is helpful if its text and images are coherent with the product information, these works take into account both the textual and visual modalities of the inputs, then estimate their coherence level to discern whether a review is helpful or unhelpful. However, the MCR model has a detrimental drawback. In particular, it aims to maximize the scores $s_p$ of positive (helpful) product-review pairs while minimizing the scores $s_n$ of negative (unhelpful) pairs. It was thus assumed that this training scheme would project features with similar semantics close together and features with disparate semantics far apart. Unfortunately, in multimodal learning this was shown not to be the case, causing the model to learn ad-hoc representations (Zolfaghari et al., 2021). This is one reason for the unreasonable predictions of MCR in Table 1. As can be seen, even though Review 1 closely relates to the product "6-Quart Stainless Steel Stockpot", the model classifies it as unhelpful. In addition, the target of Review 2's text content is vague because it does not specifically correspond to the "Stockpot"; in fact, it could refer to any product. Moreover, the image does not clearly show any hint of the "Stockpot" either. Despite such vagueness, the output of MCR for Review 2 is still helpful.
30
+
31
+ As a remedy to this problem, we propose Cross-modal Contrastive Learning to mine the mutual information of cross-modal relations in the input and capture more sensible representations. Nonetheless, plainly applying a symmetric gradient pattern that, similar to MCR, assigns equivalent penalties to $s_n$ and $s_p$ is inflexible: in cases where $s_p$ is small and $s_n$ is already negatively skewed, or where both $s_p$ and $s_n$ are positively skewed, it is irrational to assign equivalent penalties to both $s_p$ and $s_n$.
32
+
33
+ Last but not least, MCR directly leverages Coherent Reasoning, repeatedly enforcing alignment among modalities in the input. This ignores the unaligned nature of multimodal input: images might refer to only a particular section of the text and hence do not completely align with the textual content. In consequence, strictly enforcing alignment can make the model learn inefficient multimodal representations (Tsai et al., 2019).
34
+
35
+ To overcome the above problems, we propose an adaptive scheme that achieves flexibility in the optimization of our contrastive learning stage. Finally, we adopt a multimodal attention module that reinforces one modality's high-level features with the low-level features of the other modalities. This not only relaxes the alignment assumption but also informs each modality of the information carried by the others, encouraging refined representation learning.
+
+ In sum, our contributions are three-fold:
+
+ - We propose an Adaptive Cross-modal Contrastive Learning approach for the Review Helpfulness Prediction task, polishing cross-modal relation representations.
+ - We propose a Multimodal Interaction module which correlates modalities' features without depending on the alignment assumption.
+ - We conduct extensive experiments on two datasets for the RHP problem and find that our method outperforms both text-only and multimodal baselines, obtaining state-of-the-art results on those benchmarks.
+
+ # Method
+
+ In this section, we delineate the overall architecture of our MRHP model. The particular modules of our system are depicted in Figure 1.
+
+ Given a product item $p$, which consists of a description $T^p$ and images $I^p$, and a set of reviews $R = \{r_1, \ldots, r_N\}$, where each review $r_i$ is composed of user-generated text $T^r_i$ and images $I^r_i$, the RHP model's task is to generate the scores
+
+ $$s_i = f(p, r_i), \quad 1 \le i \le N \tag{1}$$
+
+ where $N$ is the number of reviews for product $p$ and $f$ is the scoring function of the RHP model. Empirically, each score estimated by $f$ indicates the helpfulness level of the corresponding review, and the ground truth is the descending sort order of the helpfulness scores.
+
+ <span id="page-2-0"></span>![](_page_2_Figure_0.jpeg)
+
+ Figure 1: Diagram of our Multimodal Review Helpfulness Prediction model.
+
+ Our model accepts the product description $T^p$, the product images $I^p$, the review text $T^r_i$, and the review images $I^r_i$ as input. The encoding process of these elements is described as follows.
+
+ **Text Encoding** The product description and the review text are sequences of words. Each sequence is indexed into the word embedding layer and then passed into the respective LSTM layer for the product or the review.
+
+ $$K^p = LSTM^p(\mathbf{W_{emb}}(T^p)) \tag{2}$$
+
+ $$K^r = LSTM^r(\mathbf{W_{emb}}(T^r)) \tag{3}$$
+
+ where $K^p \in \mathbb{R}^{l_p \times d}$ , $K^r \in \mathbb{R}^{l_r \times d}$ , $l_p$ and $l_r$ are the sequence lengths of product and review text respectively, and d is the hidden size.
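+
+ To make this step concrete, the following is a minimal sketch of Eqs. 2-3, assuming PyTorch; the vocabulary size, hidden size, and sequence lengths are illustrative placeholders rather than the paper's settings.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ d, vocab_size = 128, 30000                    # hidden size and vocab size (illustrative)
+ W_emb = nn.Embedding(vocab_size, d)           # shared word embedding layer
+ lstm_p = nn.LSTM(d, d, batch_first=True)      # LSTM^p for the product description
+ lstm_r = nn.LSTM(d, d, batch_first=True)      # LSTM^r for the review text
+
+ T_p = torch.randint(0, vocab_size, (1, 20))   # product token ids, l_p = 20
+ T_r = torch.randint(0, vocab_size, (1, 35))   # review token ids, l_r = 35
+ K_p, _ = lstm_p(W_emb(T_p))                   # K^p: (1, l_p, d), Eq. 2
+ K_r, _ = lstm_r(W_emb(T_r))                   # K^r: (1, l_r, d), Eq. 3
+ ```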
+
+ **Image Encoding** We follow Anderson et al. (2018) in taking detected objects as the embeddings of an image. In particular, a pre-trained Faster R-CNN is applied to extract ROI features for $m$ objects $\{\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_m\}$ from the product and review images. Subsequently, we encode the extracted features using a self-attention module (SelfAttn) (Vaswani et al., 2017)
+
+ $$A = SelfAttn(\{\mathbf{a}_1, \mathbf{a}_2, ..., \mathbf{a}_m\}) \tag{4}$$
+
+ where $A \in \mathbb{R}^{m \times d}$ and d is the hidden size. Here we use $A^p$ and $A^r$ to indicate product and review image features, respectively.
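+
+ A minimal sketch of Eq. 4, assuming the ROI features were already extracted by the pre-trained Faster R-CNN; the linear projection and `nn.MultiheadAttention` (standing in for SelfAttn) are illustrative choices, not the paper's exact modules.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ m, d_roi, d = 36, 2048, 128                  # number of ROIs, ROI dim, hidden size
+ rois = torch.randn(1, m, d_roi)              # {a_1, ..., a_m} from Faster R-CNN
+ proj = nn.Linear(d_roi, d)                   # assumed projection to hidden size d
+ self_attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
+
+ x = proj(rois)
+ A, _ = self_attn(x, x, x)                    # A: (1, m, d), Eq. 4
+ ```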
+
+ **Cross-modal Interaction** We consider two components $\gamma$ and $\eta$ with inputs $X_{\gamma}$ and $X_{\eta}$, where $X_{\eta}$ is the concatenation of the input elements apart from $X_{\gamma}$. For instance, if $\gamma = K^p$, then $\eta = [K^r, A^p, A^r]$, where $[\cdot, \cdot]$ indicates the concatenation operation. We define each cross-modal attention block to have three components $Q$, $K$, and $V$:
+
+ $$Q_{\gamma} = X_{\gamma} \cdot W_{Q_{\gamma}} \tag{5}$$
+
+ $$K_{\eta} = X_{\eta} \cdot W_{K_{\eta}} \tag{6}$$
+
+ $$V_{\eta} = X_{\eta} \cdot W_{V_{\eta}} \tag{7}$$
+
+ where $W_{Q_{\gamma}} \in \mathbb{R}^{d_{\gamma} \times d_k}$ , $W_{K_{\eta}} \in \mathbb{R}^{d_{\eta} \times d_k}$ , and $W_{V_{\eta}} \in \mathbb{R}^{d_{\eta} \times d_v}$ are weight matrices. The interaction between $\gamma$ and $\eta$ is computed in the cross-attention manner
+
+ $$Z_{\gamma} = CM_{\gamma}(X_{\gamma}, X_{\eta}) = \operatorname{softmax}\left(\frac{Q_{\gamma} \cdot K_{\eta}^{T}}{\sqrt{d_{k}}}\right) \cdot V_{\eta} \tag{8}$$
+
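+ A sketch of a single cross-modal attention block (Eqs. 5-8), assuming PyTorch; all dimensions are illustrative, and `X_eta` plays the role of a concatenated context such as $[K^r, A^p, A^r]$.
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ d_gamma, d_eta, d_k, d_v = 128, 128, 128, 128       # illustrative dimensions
+ W_Q = nn.Linear(d_gamma, d_k, bias=False)           # W_{Q_gamma}
+ W_K = nn.Linear(d_eta, d_k, bias=False)             # W_{K_eta}
+ W_V = nn.Linear(d_eta, d_v, bias=False)             # W_{V_eta}
+
+ X_gamma = torch.randn(1, 20, d_gamma)               # e.g. K^p
+ X_eta = torch.randn(1, 107, d_eta)                  # e.g. [K^r, A^p, A^r] concatenated
+
+ Q, K, V = W_Q(X_gamma), W_K(X_eta), W_V(X_eta)      # Eqs. 5-7
+ attn = F.softmax(Q @ K.transpose(-2, -1) / d_k ** 0.5, dim=-1)
+ Z_gamma = attn @ V                                  # Eq. 8: (1, 20, d_v)
+ ```
+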
+ Our full module comprises $D$ layers of the above-mentioned attention block, as indicated in the right part of Figure 1. Formally, the computation is carried out as follows
+
+ $$Q_{\gamma}[0] = X_{\gamma} \tag{9}$$
+
+ $$T[i] = CM_{\gamma}[i](LN(Q_{\gamma}[i-1]), LN(X_{\eta})) \tag{10}$$
+
+ $$U_{\gamma}[i] = T[i] + Q_{\gamma}[i-1] \tag{11}$$
+
+ $$Q_{\gamma}[i] = \text{GeLU}(\text{Linear}(U_{\gamma}[i])) \tag{12}$$
+
+ where $LN$ denotes the layer normalization operator. We iteratively estimate cross-modal features for the product text, product images, review text, and review images to obtain $H^p$, $V^p$, $H^r$, and $V^r$:
+
+ $$H^p = Q_k^p[D], \quad V^p = Q_a^p[D] \tag{13}$$
+
+ $$H^r = Q_k^r[D], \quad V^r = Q_a^r[D] \tag{14}$$
+
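+ A sketch of the $D$-layer stack (Eqs. 9-12), assuming PyTorch; for brevity a single set of projections is reused across layers, whereas the model keeps a distinct $CM_{\gamma}[i]$ per layer $i$.
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ d, D = 128, 4                                       # hidden size, number of layers
+ W_Q, W_K, W_V = (nn.Linear(d, d, bias=False) for _ in range(3))
+ ln_q, ln_x = nn.LayerNorm(d), nn.LayerNorm(d)
+ ffn = nn.Sequential(nn.Linear(d, d), nn.GELU())     # Linear + GeLU of Eq. 12
+
+ X_gamma = torch.randn(1, 20, d)                     # e.g. K^p
+ X_eta = torch.randn(1, 107, d)                      # e.g. [K^r, A^p, A^r]
+
+ Q_cur = X_gamma                                     # Q_gamma[0], Eq. 9
+ for _ in range(D):
+     Q_, K_, V_ = W_Q(ln_q(Q_cur)), W_K(ln_x(X_eta)), W_V(ln_x(X_eta))
+     T_i = F.softmax(Q_ @ K_.transpose(-2, -1) / d ** 0.5, dim=-1) @ V_  # Eq. 10
+     U_i = T_i + Q_cur                               # residual connection, Eq. 11
+     Q_cur = ffn(U_i)                                # Q_gamma[i], Eq. 12
+ H_p = Q_cur                                         # e.g. H^p = Q_k^p[D], Eq. 13
+ ```
+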
+ After our cross-modal interaction module, we pass the resulting features through relation fusion along three paths: intra-modal, inter-modal, and intra-review.
+
+ **Intra-modal Fusion** The intra-modal alignment is calculated for two relation kinds: (1) product text - review text and (2) product image - review image. First, we learn the alignment among intra-modal features via self-attention modules
+
+ $$H^{\text{intraM}} = \text{SelfAttn}([H^p, H^r]) \tag{15}$$
+
+ $$V^{\text{intraM}} = \text{SelfAttn}([V^p, V^r]) \tag{16}$$
+
+ Then the intra-modal hidden representations are fed into a CNN and subsequently a max-pooling layer to attain the salient entries
+
+ $$\mathbf{z}^{\text{intraM}} = \text{MaxPool}(\text{CNN}([H^{\text{intraM}}, V^{\text{intraM}}])) \tag{17}$$
+
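+ A sketch of this intra-modal path (Eqs. 15-17); `nn.MultiheadAttention` stands in for SelfAttn, a 1-D convolution stands in for the CNN, and sharing one attention module across Eqs. 15-16 is a simplification.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ d = 128
+ self_attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
+ cnn = nn.Conv1d(d, d, kernel_size=3, padding=1)            # stand-in for the CNN
+
+ H_p, H_r = torch.randn(1, 20, d), torch.randn(1, 35, d)    # cross-modal text features
+ V_p, V_r = torch.randn(1, 36, d), torch.randn(1, 36, d)    # cross-modal image features
+
+ H_cat = torch.cat([H_p, H_r], dim=1)                       # [H^p, H^r]
+ V_cat = torch.cat([V_p, V_r], dim=1)                       # [V^p, V^r]
+ H_intraM, _ = self_attn(H_cat, H_cat, H_cat)               # Eq. 15
+ V_intraM, _ = self_attn(V_cat, V_cat, V_cat)               # Eq. 16
+
+ fused = torch.cat([H_intraM, V_intraM], dim=1).transpose(1, 2)   # (1, d, L)
+ z_intraM = cnn(fused).max(dim=-1).values                   # MaxPool over positions, Eq. 17
+ ```
+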
+ **Inter-modal Fusion** Similar to the intra-modal alignment, the inter-modal alignment is also calculated for two types of relations: (1) product text - review image and (2) product image - review text. The first step is likewise to relate the feature components using self-attention modules
+
+ $$H^{\text{prd\_txt-rvw\_img}} = \text{SelfAttn}([H^p, V^r]) \tag{18}$$
+
+ $$H^{\text{prd\_img-rvw\_txt}} = \text{SelfAttn}([V^p, H^r]) \tag{19}$$
+
+ We adopt a mean-pooling layer to aggregate the inter-modal features and then concatenate the pooled vectors to construct the final inter-modal representation
+
+ $$I^{\text{prd\_txt-rvw\_img}} = \text{MeanPool}(H^{\text{prd\_txt-rvw\_img}}) \tag{20}$$
+
+ $$I^{\text{prd\_img-rvw\_txt}} = \text{MeanPool}(H^{\text{prd\_img-rvw\_txt}}) \tag{21}$$
+
+ $$\mathbf{z}^{\text{interM}} = [I^{\text{prd\_txt-rvw\_img}}, I^{\text{prd\_img-rvw\_txt}}] \tag{22}$$
+
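+ A sketch of the inter-modal path (Eqs. 18-22); the intra-review path below (Eqs. 23-27) is identical except that it pairs $(H^p, V^p)$ and $(H^r, V^r)$. Dimensions are illustrative.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ d = 128
+ self_attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
+
+ H_p, H_r = torch.randn(1, 20, d), torch.randn(1, 35, d)
+ V_p, V_r = torch.randn(1, 36, d), torch.randn(1, 36, d)
+
+ pt_ri = torch.cat([H_p, V_r], dim=1)                # product text + review image
+ pi_rt = torch.cat([V_p, H_r], dim=1)                # product image + review text
+ H_pt_ri, _ = self_attn(pt_ri, pt_ri, pt_ri)         # Eq. 18
+ H_pi_rt, _ = self_attn(pi_rt, pi_rt, pi_rt)         # Eq. 19
+
+ I_pt_ri = H_pt_ri.mean(dim=1)                       # MeanPool, Eq. 20
+ I_pi_rt = H_pi_rt.mean(dim=1)                       # MeanPool, Eq. 21
+ z_interM = torch.cat([I_pt_ri, I_pi_rt], dim=-1)    # Eq. 22: (1, 2d)
+ ```
+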
+ **Intra-review Fusion** The intra-review module completely mimics the inter-modal computation. The only difference is that the estimation is taken over two other relations: (1) product text - product image and (2) review text - review image.
+
+ $$H^{\text{prd\_txt-prd\_img}} = \text{SelfAttn}([H^p, V^p]) \tag{23}$$
+
+ $$H^{\text{rvw\_txt-rvw\_img}} = \text{SelfAttn}([H^r, V^r]) \tag{24}$$
+
+ $$G^{\text{prd\_txt-prd\_img}} = \text{MeanPool}(H^{\text{prd\_txt-prd\_img}}) \tag{25}$$
+
+ $$G^{\text{rvw\_txt-rvw\_img}} = \text{MeanPool}(H^{\text{rvw\_txt-rvw\_img}}) \tag{26}$$
+
+ $$\mathbf{z}^{\text{intraR}} = [G^{\text{prd\_txt-prd\_img}}, G^{\text{rvw\_txt-rvw\_img}}] \tag{27}$$
+
+ Finally, we concatenate the intra-modal, inter-modal, and intra-review outputs, and feed the concatenated vector into a linear layer to obtain the ranking score:
+
+ $$\mathbf{z}^{\text{final}} = [\mathbf{z}^{\text{intraM}}, \mathbf{z}^{\text{interM}}, \mathbf{z}^{\text{intraR}}] \tag{28}$$
+
+ $$f(p, r_i) = \operatorname{Linear}(\mathbf{z}^{\text{final}}) \tag{29}$$
+
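+ A sketch of the scoring head (Eqs. 28-29); the widths of the three fused vectors are assumptions that depend on the fusion modules above.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ d = 128
+ z_intraM = torch.randn(1, d)                        # from Eq. 17 (width set by the CNN)
+ z_interM = torch.randn(1, 2 * d)                    # from Eq. 22
+ z_intraR = torch.randn(1, 2 * d)                    # from Eq. 27
+
+ score_head = nn.Linear(5 * d, 1)
+ z_final = torch.cat([z_intraM, z_interM, z_intraR], dim=-1)   # Eq. 28
+ score = score_head(z_final)                                   # f(p, r_i), Eq. 29
+ ```
+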
+ In this section, we explain the formulation of our Cross-modal Contrastive Learning, along with its adaptive weighting pattern and the corresponding derivation.
+
+ **Cross-modal Contrastive Learning** First, we extract the hidden states of helpful product-review pairs. Then, the hidden features are max-pooled to extract meaningful entries.
+
+ $$\mathbf{h}^p = \text{MaxPool}(H^p), \quad \mathbf{h}^r = \text{MaxPool}(H^r) \tag{30}$$
+
+ $$\mathbf{v}^p = \text{MaxPool}(V^p), \quad \mathbf{v}^r = \text{MaxPool}(V^r) \tag{31}$$
+
+ We formulate our contrastive learning framework by taking positive and negative pairs from the above-mentioned cross-modal features. In our framework, we hypothesize that pairs established by modalities of the same sample are positive, whereas those formed by modalities of distinct samples are negative.
+
+ $$\mathcal{L}_{CE} = -\sum_{i=1}^{B} \text{sim}(\mathbf{t}_{i}^{1}, \mathbf{t}_{i}^{2}) + \sum_{j=1, k=1, j \neq k}^{B} \text{sim}(\mathbf{t}_{j}^{1}, \mathbf{t}_{k}^{2}) \tag{32}$$
+
+ where $\mathbf{t}^1, \mathbf{t}^2 \in \{\mathbf{h}^p, \mathbf{h}^r, \mathbf{v}^p, \mathbf{v}^r\}$, and $B$ denotes the batch size in the training process.
+
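+ A sketch of Eq. 32 for a single choice of $(\mathbf{t}^1, \mathbf{t}^2)$, assuming cosine similarity for $\text{sim}$; in training, the loss would be accumulated over all cross-modal feature pairs.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ B, d = 8, 128
+ t1 = torch.randn(B, d)                              # e.g. h^p for each sample in the batch
+ t2 = torch.randn(B, d)                              # e.g. h^r for each sample in the batch
+
+ # Cosine similarity between every pair in the batch: (B, B)
+ sim = F.normalize(t1, dim=-1) @ F.normalize(t2, dim=-1).T
+ pos = sim.diagonal().sum()                          # same-sample pairs are positive
+ neg = sim.sum() - sim.diagonal().sum()              # cross-sample pairs are negative
+ loss_ce = -pos + neg                                # Eq. 32
+ ```
+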
+ **Adaptive Weighting** The standard contrastive objective suffers from inflexible optimization due to its irrational gradient assignment to positive and negative pairs. To tackle this problem, we propose an Adaptive Weighting strategy for our contrastive framework. We introduce weights $\epsilon^p$ and $\epsilon^n$ to represent the distances from the optimum, then integrate them into the positive and negative terms of our loss.
+
+ <span id="page-4-0"></span>
+ $$\begin{split} \mathcal{L}_{\text{AdaptiveCE}} = &-\sum_{i=1}^{B} \epsilon_{i}^{p} \cdot \text{sim}(\mathbf{t}_{i}^{1}, \mathbf{t}_{i}^{2}) \\ &+ \sum_{j=1, k=1, j \neq k}^{B} \epsilon_{j,k}^{n} \cdot \text{sim}(\mathbf{t}_{j}^{1}, \mathbf{t}_{k}^{2}) \end{split} \tag{33}$$
+
+ where $\epsilon_i^p = [o^p - \text{sim}(\mathbf{t}_i^1, \mathbf{t}_i^2)]_+$ and $\epsilon_{j,k}^n = [\text{sim}(\mathbf{t}_j^1, \mathbf{t}_k^2) - o^n]_+$. To gain intuition for choosing the values of $o^p$ and $o^n$, we conduct a derivation and arrive at the following theorem.
+
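+ A sketch of the adaptive loss (Eq. 33) with the settings $o^p = 2$ and $o^n = 0$ motivated by the theorem below; whether the weights $\epsilon$ are detached from the gradient is an implementation detail the text leaves open, and they are kept undetached here to match the quadratic form of Theorem 1.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ B, d = 8, 128
+ o_p, o_n = 2.0, 0.0                                 # margins (see Theorem 1)
+ t1, t2 = torch.randn(B, d), torch.randn(B, d)
+ sim = F.normalize(t1, dim=-1) @ F.normalize(t2, dim=-1).T
+
+ eps_p = (o_p - sim.diagonal()).clamp(min=0)         # epsilon^p_i = [o^p - sim]_+
+ eps_n = (sim - o_n).clamp(min=0)                    # epsilon^n_{j,k} = [sim - o^n]_+
+ off_diag = ~torch.eye(B, dtype=torch.bool)
+ loss = -(eps_p * sim.diagonal()).sum() + (eps_n * sim)[off_diag].sum()  # Eq. 33
+ ```
+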
+ <span id="page-4-1"></span>**Theorem 1** Adaptive Contrastive Loss (33) has the hyperspherical form:
+
+ $$\begin{split} \mathcal{L}_{AdaptiveCE} &= \sum_{i=1}^{B} \left( \textit{sim}(\mathbf{t}_{i}^{1}, \mathbf{t}_{i}^{2}) - \frac{o^{p}}{2} \right)^{2} \\ &+ \sum_{j=1, k=1, j \neq k}^{B} \left( \textit{sim}(\mathbf{t}_{j}^{1}, \mathbf{t}_{k}^{2}) - \frac{o^{n}}{2} \right)^{2} - C, \\ \textit{where } C > 0 \end{split}$$
+
+ We provide the proof of Theorem 1 in the Appendix. As a consequence, the contrastive objective theoretically reaches its optimum when $\text{sim}(\mathbf{t}_i^1,\mathbf{t}_i^2)=\frac{o^p}{2}$ and $\text{sim}(\mathbf{t}_j^1,\mathbf{t}_k^2)=\frac{o^n}{2}$. Based on this observation, we set $o^p=2$ and $o^n=0$ in our experiments, driving positive-pair similarities toward $1$ and negative-pair similarities toward $0$.
+
+ For the Review Helpfulness Prediction problem, the model's parameters are updated according to the pairwise ranking loss as follows
+
+ $$\mathcal{L}_{\text{ranking}} = \sum_{i} \max(0, \beta - f(p_i, r^+) + f(p_i, r^-)) \tag{34}$$
+
+ where $\beta$ is a margin hyperparameter, and $r^+$ and $r^-$ are randomly sampled reviews such that $r^+$ possesses a higher helpfulness level than $r^-$. We jointly combine the contrastive objective with the ranking objective of the Review Helpfulness Prediction problem to train our model
+
+ $$\mathcal{L} = \mathcal{L}_{\text{AdaptiveCE}} + \mathcal{L}_{\text{ranking}} \tag{35}$$
+
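+ A sketch of the ranking loss and the joint objective (Eqs. 34-35); the margin $\beta$, the scores, and the contrastive-loss value are illustrative stand-ins.
+
+ ```python
+ import torch
+
+ beta = 1.0                                          # margin hyperparameter
+ s_pos = torch.randn(16)                             # f(p_i, r^+) for sampled review pairs
+ s_neg = torch.randn(16)                             # f(p_i, r^-) for sampled review pairs
+ loss_ranking = torch.clamp(beta - s_pos + s_neg, min=0).sum()   # Eq. 34
+
+ loss_adaptive_ce = torch.tensor(0.5)                # stand-in for the value of Eq. 33
+ loss = loss_adaptive_ce + loss_ranking              # joint objective, Eq. 35
+ ```
+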
+ <span id="page-4-2"></span>
+
+ | Dataset | Split       | Clothing | Electronics | Home     |
+ |---------|-------------|----------|-------------|----------|
+ | Lazada  | Train & Dev | 8K/130K  | 5K/52K      | 4K/16K   |
+ |         | Test        | 2K/32K   | 1K/13K      | 1K/13K   |
+ | Amazon  | Train & Dev | 16K/349K | 13K/325K    | 18K/462K |
+ |         | Test        | 4K/87K   | 3K/80K      | 5K/111K  |
+
+ Table 2: Statistics of the MRHP datasets. Each cell reports the number of products / reviews per category.