Eric03 committed on
Commit e97120e · verified · 1 Parent(s): 2e47673

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. 2002.06478/main_diagram/main_diagram.drawio +1 -0
  2. 2002.06478/main_diagram/main_diagram.pdf +0 -0
  3. 2002.06478/paper_text/intro_method.md +37 -0
  4. 2004.13313/main_diagram/main_diagram.drawio +1 -0
  5. 2004.13313/main_diagram/main_diagram.pdf +0 -0
  6. 2004.13313/paper_text/intro_method.md +266 -0
  7. 2005.00571/main_diagram/main_diagram.drawio +1 -0
  8. 2005.00571/main_diagram/main_diagram.pdf +0 -0
  9. 2005.00571/paper_text/intro_method.md +64 -0
  10. 2006.15057/main_diagram/main_diagram.drawio +0 -0
  11. 2006.15057/paper_text/intro_method.md +60 -0
  12. 2101.08314/main_diagram/main_diagram.drawio +1 -0
  13. 2101.08314/main_diagram/main_diagram.pdf +0 -0
  14. 2101.08314/paper_text/intro_method.md +464 -0
  15. 2101.09868/main_diagram/main_diagram.drawio +0 -0
  16. 2101.09868/paper_text/intro_method.md +84 -0
  17. 2102.09337/main_diagram/main_diagram.drawio +1 -0
  18. 2102.09337/main_diagram/main_diagram.pdf +0 -0
  19. 2102.09337/paper_text/intro_method.md +124 -0
  20. 2103.06818/main_diagram/main_diagram.drawio +0 -0
  21. 2103.06818/paper_text/intro_method.md +87 -0
  22. 2103.13516/main_diagram/main_diagram.drawio +0 -0
  23. 2103.13516/paper_text/intro_method.md +55 -0
  24. 2104.00764/main_diagram/main_diagram.drawio +1 -0
  25. 2104.00764/main_diagram/main_diagram.pdf +0 -0
  26. 2104.00764/paper_text/intro_method.md +98 -0
  27. 2104.14207/main_diagram/main_diagram.drawio +0 -0
  28. 2104.14207/paper_text/intro_method.md +90 -0
  29. 2105.05912/main_diagram/main_diagram.drawio +0 -0
  30. 2105.05912/paper_text/intro_method.md +97 -0
  31. 2106.02960/main_diagram/main_diagram.drawio +0 -0
  32. 2106.02960/main_diagram/main_diagram.pdf +0 -0
  33. 2106.02960/paper_text/intro_method.md +82 -0
  34. 2107.04086/main_diagram/main_diagram.drawio +1 -0
  35. 2107.04086/paper_text/intro_method.md +19 -0
  36. 2108.01806/main_diagram/main_diagram.drawio +1 -0
  37. 2108.01806/main_diagram/main_diagram.pdf +0 -0
  38. 2108.01806/paper_text/intro_method.md +152 -0
  39. 2108.02388/main_diagram/main_diagram.drawio +0 -0
  40. 2108.02388/paper_text/intro_method.md +190 -0
  41. 2108.10949/main_diagram/main_diagram.drawio +1 -0
  42. 2108.10949/main_diagram/main_diagram.pdf +0 -0
  43. 2108.10949/paper_text/intro_method.md +126 -0
  44. 2108.13393/main_diagram/main_diagram.drawio +0 -0
  45. 2108.13393/paper_text/intro_method.md +138 -0
  46. 2109.04853/main_diagram/main_diagram.drawio +1 -0
  47. 2109.04853/main_diagram/main_diagram.pdf +0 -0
  48. 2109.04853/paper_text/intro_method.md +17 -0
  49. 2110.13578/main_diagram/main_diagram.drawio +1 -0
  50. 2110.13578/main_diagram/main_diagram.pdf +0 -0
2002.06478/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile modified="2019-08-12T21:41:56.213Z" host="www.draw.io" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.90 Safari/537.36" etag="k6pO9ytTh3fpvXXDDLt6" version="11.1.4" type="google"><diagram name="Page-1" id="c7558073-3199-34d8-9f00-42111426c3f3">7Vrdc9o4EP9rmLl7ICPb2MBjIEnbubaXG9ppey8dYQusi7BcWSSkf/2tbPlThjhfQNr4AaPV6mt3f7sryT1nutq8ETgOP/CAsJ6Ngk3POevZ8Lg2vBTlNqMMx15GWAoaZCSrJMzoT6KJSFPXNCBJjVFyziSN60SfRxHxZY2GheA3dbYFZ/VRY7wkBmHmY2ZSv9BAhhl1ZHsl/S2hyzAf2fLGWc0c+1dLwdeRHq9nO4v0yapXOO9LLzQJccBvKiTnvOdMBecy+7faTAlTss3FlrW72FJbzFuQSHZpgNGn2eI99z9//Pf7JHiD2OLqn75lD/Ts5G0uERKAgHSRCxnyJY8wOy+pk3TVRPWLoFTyvOc8BqIFxP+IlLda23gtOZBCuWK6lmyo/Kqan7i69K1Sc7bRPaeF27wQSXH7NWdThW9lD6pYNktLebsFj6SeiAVCn5hy06JM+Fr4WgZaKhKLJZE75edknEpold60Ct4QviIwGWAQhGFJr+tWh7XxLgu+UoHwR+uwXZ96iteYrXWnl1hIoFwKHvMEAy/YsGMpGfyxWitEgfIUDHG0ZlhQqabFyDVhyZ+GEdyEVJJZjFOB3AD0oRfM6DKCog+SIwII10RIClA61RUrGgSpgaQy150pdSVS8KsCX47ioIxNOeMiHc65gGc63aoaNRDZ7JSvrh06LlhE2kg7pIGloXxTwtsaaRCHFWh76PFKaTcSZB8SZFYFYiXg7gAZ2gPIWoWVh4+7oYesQ0Fvy3xMRAIEmVT2H+OobO/9WCunPwGUfo8ZQCyEoAWAKipqppITtfmeQiWKN/CbMeYW3KRf04TOKVMoT+tCwCaJiuoViJdGZrMYBwGNlpWKclLeUr9Z9r7IlpWR/waMMhzHqrHmmIu8crae93PvhKkw0ADglnXjFSShP/E8ZUCprGgkU5W5k557di9vxPCcsEkRrituRwfsLva7G9mGgyoSFr2GWtBvc1x9dIKsgXZLnS1Wd3eppFNh4YtFAuBpmnQx6sOtPIdnxciryi1Cz68STgZ3xpIx2mcocZAh/3eQcaJPAoqvQaZjkCliR4co4x1ZlLEMAyj8be5uJzTC6Zw+EFgevD8qZ1C47XnpyhsGU5qD0mcbaCuqz1BYQR1Kn3Z8NlTXwGvgklEw2KXS7pB1co3tyv/svYLW2q6z3yAzeF3q61Jf9FI/R5IqkEdc/ZAbtQbIeeIs54nbEto9RN7O0fDpIlcj29w0nGnucIdevYssyOpW1ROqRkdDZOzdB42usshudHUqBL6tsOndwtYpe6NR6zjbZtbkd63d/E2R3Jc/F2EZdbIVPmkqbzlmWHpxueRT5oQtJw+DZ4HQvc11jLYAY6vB3tXimUwKmdtDIzvVe8UzmviCriBVlVz8NtmpaxuK2Wd+un4X+Gd/LeYxe7v+gj8n7o8rpz88JNjvcwXwCHDnAu1wom8f14bPNnf8JqSof6UGjwL4nfIVpF/EPIWbq1nPCBZ+CH9O2fLE5PltgOg4dRi63sG3ibnlVTR9Ae5RNXwXJRJHPkkVuFzBssH0eGRo5qWeszluPQHyRsjQxn4P2mz3oF7xHinQXi9Gd9+CdvCu6Mi869jA3Mznwsx/X+wVRW7Ij7+iQCdo0NgI9bXij/fCorDN15S0c0p6DLHwoEnpC7vNsLvemY8PdpnRuvHwDhpiH/Lt0T11vA+dHl9IbbkiDnGsctfL6S+TsfaHww4b+efKWVvh1Pb9SRNfUXCqPpqEUsQjJcEAJ2ERwKowKUy+tPJv1botJt8OL7QTXnvJPDtf5FbU57aoL6c98tS8P6rveIZOwyy2HHWbHTlbtk53HL8/IMFqtbo8vayhXYBJHV0GLbjeNTtn/XG73e3E1VN842N73rCeQdtPY052vVfLqffwkAQbiuXnyRl7+Q24c/4/</diagram></mxfile>
2002.06478/main_diagram/main_diagram.pdf ADDED
Binary file (22.6 kB)
 
2002.06478/paper_text/intro_method.md ADDED
@@ -0,0 +1,37 @@
1
+ # Introduction
2
+
3
+ [Formulating every step according to the problem statement. We can do this when everything else is done.]{style="color: red"}\
4
+ Our bottom-up strategy starts by initializing a pool of sub-parts from the input point sets. We use a part proposal network as shown in Figure [\[fig:subpart\]](#fig:subpart){reference-type="ref" reference="fig:subpart"}. Then, we use a MergeNet to determine whether a pair of proposals should be merged. If the answer is yes, we put the merged, larger sub-part into the sub-part pool, replacing the input pair of sub-parts. We repeat the process until no new sub-part can be merged. The final pool is the segmentation result.
5
+
6
+ [The partness score will be used for the merging policy.]{style="color: red"}\
7
+ [Given a sub-part, this module predicts the partness score. For the MergeNet, the input is a pair of sub-parts. The two modules seem somewhat repetitive; we could replace the MergeNet with the partness score and a threshold. The reason we have two modules is that the regression task is too difficult to perform well, so I train the MergeNet as a binary classifier to alleviate this problem.]{style="color: red"}\
8
+ The binary mask prediction task is very difficult. The sub-part proposal may contain points from multiple instances, and we cannot guarantee that the small ball does not cross an instance boundary in the inference phase. In our experiments, we found that the binary mask prediction becomes much worse when the small ball crosses a boundary. All in all, we need a module to evaluate the quality of the proposals. Similar to the objectness score defined for 2D image object detection [@alexe2012measuring], we propose the partness score to evaluate the quality of the proposals. The partness score is defined as $S(P) = \max_{j \in Labels} \frac{|\{p_i \in P : Label(p_i) = j\}|}{|P|}$. We feed the part proposal into a PointNet and directly optimize the parameters with an $\ell_2$ loss. In the next step, we use this score to guide our merging policy.
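+ As a concrete illustration, the following is a minimal NumPy sketch of the partness score for a single proposal; the input array name (the ground-truth instance label of every point in the proposal) is hypothetical and used only for this example.
+
+ ```python
+ import numpy as np
+
+ def partness_score(proposal_labels: np.ndarray) -> float:
+     """Fraction of a proposal's points that belong to its dominant instance.
+
+     proposal_labels: (N,) ground-truth instance label of each point in P.
+     A pure proposal (all points from one part instance) scores 1.0.
+     """
+     _, counts = np.unique(proposal_labels, return_counts=True)
+     return counts.max() / proposal_labels.size
+
+ # Example: 8 points, 6 of them from instance 3 -> partness 0.75
+ print(partness_score(np.array([3, 3, 3, 3, 3, 3, 1, 2])))
+ ```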
9
+
10
+ <figure id="fig:purity" data-latex-placement="h">
11
+ <img src="fig/puritynet.png" style="width:80.0%" />
12
+ <figcaption>Partness score regression module.</figcaption>
13
+ </figure>
14
+
15
+ [Formulating a clustering operation on a set.]{style="color: red"}\
16
+ $$\text{stochastic process?} \quad \{G_0, G_1, \cdots, G_n\}$$ $$\text{Ultimate goal}: \max \; mIoU,\quad \text{Current objective}: \|G_i - G_i^{gt}\|$$
17
+
18
+ Given a pair of part proposals, MergeNet aims to predict whether they can be merged. In the early stage, we only consider a pair when the two proposals are very close in Euclidean distance. We put the merged proposal into the sub-part proposal pool and remove the input proposal pair from the pool. We repeat this process iteratively until no new proposal is generated. The final pool is the segmentation result.
19
+
20
+ When the patches are small, the relations between patches are confined to a small local area. When the patches become large, the relations actually span the whole space, such as
21
+
22
+ <figure id="fig:mergenet" data-latex-placement="h">
23
+ <img src="fig/mergenet.png" style="width:110.0%" />
24
+ <figcaption>MergeNet Structure.</figcaption>
25
+ </figure>
26
+
27
+ [Formulating the merging policy.]{style="color: red"}\
28
+ [The score for every pair before Softmax will be the partness score $\times$ MergeNet prediction.]{style="color: red"}\
29
+ [On-policy gradient descent? Hao used this term.]{style="color: red"}\
30
+ [The policy $\mathcal{\pi}_{purity}$ simply feeds the purity score $\times$ MergeNet prediction into a Softmax layer and chooses the argmax term. However, we might not always choose the argmax term but also consider other information such as the relative size. So we use a small network to learn the policy $\mathcal{\pi}_{strategy}$.]{style="color: red"}\
31
+ [Meta Learning: training on class A for the policy $\mathcal{\pi}_{purity}$, finetuning the small network on class B for the policy $\mathcal{\pi}_{strategy}$, and finally testing on class C.]{style="color: red"}
32
+
33
+ $$\max \sum_j \mathcal{\pi}_{strategy}(G_j)\mathcal{\pi}_{purity}(G_j) R(G_j)$$
34
+
35
+ In the inference phase, at each iteration we choose the pair with the highest partness score and feed it into MergeNet to determine whether to merge it. For training, we train all modules online, and the training samples are generated by running inference with our model.
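+ A minimal sketch of this greedy inference loop is shown below, assuming a trained `partness` regressor and `merge_net` classifier; the `Proposal` objects and their `merge` method are hypothetical placeholders, not the actual implementation.
+
+ ```python
+ def greedy_merge(proposals, partness, merge_net, max_iters=10_000):
+     """Iteratively merge the best-scoring pair until MergeNet refuses."""
+     pool = list(proposals)
+     for _ in range(max_iters):
+         # Score every candidate pair by the partness of its hypothetical merge.
+         candidates = [(partness(a.merge(b)), a, b)
+                       for i, a in enumerate(pool) for b in pool[i + 1:]]
+         if not candidates:
+             break
+         score, a, b = max(candidates, key=lambda t: t[0])
+         if not merge_net(a, b):      # MergeNet rejects the best pair: stop.
+             break
+         pool.remove(a)
+         pool.remove(b)
+         pool.append(a.merge(b))      # replace the pair by the merged sub-part
+     return pool                      # final pool = segmentation result
+ ```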
36
+
37
+ [Currently, we have a very simple strategy that refines the sub-proposals between the parts in the final stage.]{style="color: red"}
2004.13313/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="www.draw.io" modified="2020-04-17T19:18:20.538Z" agent="5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36" etag="15AKLmRcQeNuOVlUyumr" version="12.9.14" type="device"><diagram id="9USpvWsKpHugUz3UrNLn" name="Page-1">7V1dl6M2Ev01ffap+4D4fpyeTrLJbvbMbM/uJk85NKhtTjByMJ52z69fCYPBEh4LLJCw1fMwILCw615JVaWq4s76uNr9lIfr5a8ohukdMOLdnfV0B/CfY+H/SMv7vsUMTHvfssiTuGprGp6Tb7BqNKrWbRLDzdGNBUJpkayPGyOUZTAqjtrCPEdvx7e9ovT4qetwAZmG5yhM2db/JXGx3Lf6wGva/w6TxbJ+sukG+yursL65+iWbZRijt1aT9cOd9TFHqNgfrXYfYUqkV8tl/7kfT1w9fLEcZgXPB778d/EP5+eXzS+P4c/fXOtfxZff/rj39718DdNt9YOrL1u81xLAvWBh45PHt2VSwOd1GJErbxhw3LYsVik+M/Fh1RXMC7g7+R3Nwy/HnIFoBYv8Hd+yO0a9Yotdy+6tEb1bNS1bUrertrACe3HouJEHPqhE0kM8gVriCRQTTw2XKvIxPdUEZKolIOCoJiBLLQHZqs1ApqOWgCj5WJZs+bhqySdQTT6eWvKhp2j5AlJMBaKnaPkCUkwJoqdo6QICPIt8Fn8g1gg+i9Jws0miY7lsihz9ebAwSAvcJcVv+Nh4cKqz38lZdfy0a5+81ycZ/j2tD5HT3+v+yEnzsfKs/tz+68KYMYUoTPBPQts8gt8RRjWaijBfwOLsvM2C3ALR6QCxbsthGhbJ1+Pv24Vs9YRPKMG/5MAhC1DLvEmRY/87q0+1jSq6I3o6c6mO9oJgOiqJdvjZF3APaO71556nuSeCe7bmXn/uBZp7IrjHYxZp7nWpupp7l3KPx+TU3OvyY2juXcg9S897jHdec28i7ul5bwD3tJ0rhHs8bkzNPW3njsE9Hg+x5p62c8fgHo/z/Va4V4OoFb5pyFfvmGjy9SKf1viEkE/vqg0hn1b5hJAPaPINIJ/2LwshH09MXH/yjU4kxW0CYJwI7evLD9um+OFNzA+978pG2GqbYCLy6Q0INnpZk28i8ukdiCEznzZIhZBPb0EMmfk0+YSQT+9BDJn5tDdECPn0JsSQmU+TTwT5anlo8vWa+bQrTgj59CbEkJlPk08I+cAo5BtIpCGklTLzaSezEPLpTYgx+EHDOlt+gA6j4DlCOWRYkqNtFsO4QvJMfvAryoqqCJBJGPKapOlHlKK87Mt6DeMgdg7Ual0JoW1a4E5IhrFDZRjfmzX6Z1KMLeM0Hy5LMebJtesn6GPB/uiQf12CNco/fGWRh3GCJVhfy1AGmUFu4RaEH5oURHauIQaQe+sYENCBhwM6ABkxK57B4wv68w64KX7y40uOjxbkyGRQ2izDNTlMEVr/M1klxXmsNtWAACxusRFBCLpwc1zH9KAY+VPj4SDqcxn3zljDgU/47BCZo/ADxYTPKgadwreuQvh0uRLp0mfXgU7p21chfboWinTps/uPndJ/ugrp04VWpEvf5FCCWjZJpaDE4WZZakXm96wTq7rzU1gUMM/Ke4DRKJTnbQ9p/g5KOXJo+XP7O9wzHY1tcnLA28avrqdptEEmJ2GaLAiCEQYI4oHxSIZBEoXph+rCKonjFO77KIflarcg9Usf9hVDwf5/0m1pgD7YNjkmYgLlw5IcRkWCspJkOcH4MUNFtKyeT49WB/qx3TVaffBiua6olerBOYLP9h1muNaITmK0OByhQhrPk2qfenhyRN9oPE+PT/UA5Yho0YCeVk7VA5QjSkQDelrfVQ9Q1sl6xW4m2u8q3eCoH3YbjiZHNU+Ty+lq+nwV4veVYz+PuT1h4Vd6epBem9tVrDY3PYDlC0ix2sr0EJMvoH6RlTfi0fKAII+W68v1aHms/qDUDCq9dLbHE18ncQaVLyDF1mB6BpUvIJ2PyGpsZ4OgPKnzuy8qQtOXHKHp6ZSwIeSzNflEkE9n5Qwhn6HJJ4B8Po9mezPkczT5JiWfzsoZQj6t8wkh3ziJEddOPq3ziSBfoJdd1rusl92JyAc0+QaQTy+7Qsg3TgEU+fmIkldGUfmIdEdT5yOaBhud+FSHCmy2L61ImX0bfkqrmeFSk0xnnness1mKXhh1xQu82K5j2GJc8ZQnHtQCl/cyeHaIdiEArgaBQDUETFY97ELAuhoE6PQsBSBgfRNdENhXAwGdo6UABKye2kCwDrMjCbt/bRG5QBLP7/cxYB/wDaax3jUXG3gYIONTQDat5SNni69tACoqVgGE+9UBvZGwGToR7LA+XZoIxnQ0ti7XEfjJ4isxbsb0ZY8A5WIzmXQlBWSkWHgmHTujgIR45tEJJaSegHgCPCYUUKCcgBSL8KUNAgUkxBOmMaGEaH1dvoRUi5Kl6x4oICHVwmQpCbnSBcSafHIFxKhD8kWkmE5Na0PSBcS1sY+Nv+fqFKYv6O2HpuGxbMAX6lzZctdg4u0wEFjtfQzjIfD8M3sZ5dknmCdYjiTXV3jlzlrL5KiYLdV+tmmzl07r5bWfT5qLU9nPPo91qDyVgU9R2ZkRk4FmshAm89jwyjPZpydlbFfOh8rWVVDZB+5RR3VAwWRM5rEDx2dyw0o3MGlWnq0BfhkrVVcAgH9mtuN2oFOeeJMm7dhk44oTnJJsHs01b+TFXPUl+oq4xuOcGJ1rvMyYPd5+XSZR0jrGFQWqpEYmVH2S+4oTesw7Q/X7+l3TdUfG1JPHNThdHP9odRtZi+Ilp/proDAaW/TkOjmNr8Hh4jRW6URKmlAqSzVTr4jKanhcGlrahjMxLVW3U00wlu0wOdl44ismJVv5DqHLySZdP/XmoQIwZtHgeVM6lXkiYaalcqDevCl1kb6aeRMYioU2Mm9Mk10WDHSkg8mVUKCchBSLbaQDQhSQ0Dg5p0omPJ+du6sRJWvudqj8Ujr9uHce82RT9Q2V6eIgkadJNIBEHYmdt0wiR5NoCIluqGyW6iRywVxJBDSJVNGJ5kuiG6qhprpONF8S8ey53Q6JpC5ndPWf+ZDohsrHq76czZdE/cpRzKc2meSlSRghqH7oGnij80PULucS5ck3lBWXxoe0J6XDFDXBpqTcTAJ6q3FoeRTHpCvicGbFYLjC99Zta3LD5jvfmE73r3jc8HPfo2C2itrIVJit0nxHAVVKid445OYgk2A8EgcdjyrrMg0HRSXLaA6yq6FBU2cwC+ktu5E46PlMcaFJWAjUiE6SHEokt3o2vWoz273cM6ZrHnVUh+yKni8NGWs2UCP46BKzt1cIvXh+qx8qp0cC10hQI8322keCUpacHgldI8EeZ5NloFdKshZziJg87/Oypeo7dCTlcAu
RKpzARFaP7fWyx9mfmSv/eAPygS1Vy7gi/o2ztTNX/vHWewG2XA/Z1fCv3lf/Hv+a8u8E737l3+MQ+q+d5d+N8o9cWYZxSW5yEqEVoXd5zJY8R/ixSUFk5TbUuyie+55Kkrg/ZEm0mBSADipZ44V02ywon/u8Jaa+c103/e3ozvWMC/0zORwBC9e08fc2G7DYBRa4RbDodBL5YAEusP66QbCYzBb5YHFELLTVoPY6cniNBjkJ02RB3pERYVGQjMHH2m3yobqwSuK49KvgPtak59VugVFZPoRE79qA/f+k21LrebCJzGMiOFA+LMlhVCSIdJWhnEjrMUNFtKyeT+P5+gqiTjxj98V1XFEzJb0fcW8Cm4G0VkCPlrbRILU4zP05Qho70I/tLkh98GK5giAFDrMp1pF+NjGi7IyqEe3xJgAFEeXIwNCInkKUCXlQAFAOG08DenqIKgco4HAajW20T2Cb0zsSB+9NS/BW0KGSjvlSN1aB6WWaz84sYFxd8t+rx7rsexnc84NAvVe4shNQLzN6dhDQxrF0CPDMd34NuL23S3o0TkPDp136nb0Tv10SWBwbk3NU2mQ5P+RrbRaH1qYRPb0IKoiodlBetKaqh6ijvVkXvZNQOUQDjkjLOQIqaxm163cdyQL0sD+vERWzjCqA6JVuCslaRqUjilnWgejeFbC0ajfAv2EKv4YZNvOB8cs2XiTZouUvaO5juNDLtXkmuqgGpbKPWTu4jWl1E6ZMnGB0qOZXlBXPyTfyNUwSPifeIWpRDlGbXVwP2W1toOmUdHFAu12TMQ30E4q2K1ha1//BwOWbIsxijXbvnSRLAby79gZpvD9vYfk7NNg91mVnOqzxaY5Q0XZYkUXwVxRDcsf/AQ==</diagram></mxfile>
2004.13313/main_diagram/main_diagram.pdf ADDED
Binary file (76.1 kB)
 
2004.13313/paper_text/intro_method.md ADDED
@@ -0,0 +1,266 @@
1
+ # Introduction
2
+
3
+ Neural rankers based on Transformer architectures [\(Vaswani et al.,](#page-9-0) [2017\)](#page-9-0) fine-tuned from BERT [\(Devlin et al.,](#page-8-0) [2019\)](#page-8-0) achieve current state-of-the-art (SOTA) ranking effectiveness [\(Nogueira and](#page-8-1) [Cho,](#page-8-1) [2019;](#page-8-1) [Craswell et al.,](#page-8-2) [2019\)](#page-8-2). The power of the Transformer comes from self-attention, the process by which all possible pairs of input tokens interact to understand their connections and contextualize their representations. Self-attention provides detailed, token-level information for matching, which is critical to the effectiveness of Transformer-based rankers [\(Wu et al.,](#page-9-1) [2019\)](#page-9-1).
4
+
5
+ When used for ranking, a Transformer ranker takes in the concatenation of a query and document, applies a series of self-attention operations, and outputs from its last layer a relevance prediction [\(Nogueira and Cho,](#page-8-1) [2019\)](#page-8-1). The entire ranker runs like a black box and hidden states have no explicit meanings. This represents a clear distinction from earlier neural ranking models that keep separate text representation and distance (interaction) functions. Transformer rankers are slow [\(Nogueira et al.,](#page-9-2) [2019\)](#page-9-2), and the black-box design makes it hard to interpret their behavior.
6
+
7
+ We hypothesize that a Transformer-based ranker simultaneously performs text representation and query-document interaction as it processes the concatenated pair. Guided by this hypothesis, we decouple representation and interaction with a *MO*dularized *RE*ranking *S*ystem (MORES). MORES consists of three *Transformer* modules: the Document Representation Module, the Query Representation Module, and the Interaction Module. The two Representation Modules run independently of each other. The Document Representation Module uses self-attention to embed each document token conditioned on all document tokens. The Query Representation Module embeds each query token conditioned on all query tokens. The Interaction Module performs attention from *query* representations to *document* representations to generate match signals and aggregates them through self-attention over query tokens to make a relevance prediction.
8
+
9
+ By disentangling the Transformer into modules for representation and interaction, MORES can take advantage of the indexing process: while the interaction must be done online, document representations can be computed offline. We further propose two strategies to pre-compute document representations that can be used by the Interaction Module for ranking.
10
+
11
+ Our experiments on a large supervised ranking dataset demonstrate the effectiveness and efficiency of MORES. It is as effective as a state-of-the-art BERT ranker and can be up to 120× faster at ranking. A domain adaptation experiment shows that the modular design does not affect the model transfer capability, so MORES can be used under low-resource settings with simple adaptation techniques. By adapting individual modules, we discovered differences between representations and interaction in adaptation.
12
+
13
+ <span id="page-0-0"></span><sup>1</sup>Open source code at [https://github.com/luyug/MORES](https://github.com/luyug/MORES)
14
+
15
+ The modular design also makes MORES more interpretable, as shown by our attention analysis, providing new understanding of black-box Transformer rankers.
16
+
17
+ # Method
18
+
19
+ In this section, we introduce the Modularized Reranking System (MORES), how MORES can speed up retrieval, and how to effectively train and initialize MORES.
20
+
21
+ A typical Transformer ranker takes in the *concatenation* of a query qry and a document doc as input. At each layer, the Transformer generates a new contextualized embedding for each token based on its attention to all tokens in the concatenated text. This formulation poses two challenges. First, in terms of speed, the attention consumes time quadratic to the input length. As shown in Table [1,](#page-3-0) for a query of q tokens and a document of d tokens, the Transformer would require assessments of $(d + q)^2$ pairs of tokens. Second, as query and document attention is entangled from the first layer, it is challenging to interpret the model.
22
+
23
+ MORES aims to address both problems by disentangling the Transformer ranker into document representation, query representation, and interaction, each with a dedicated Transformer, as shown in [Figure 1.](#page-2-0) The document representation is query-agnostic and can be computed off-line. The interaction uses query-to-document attention, which further reduces online complexity. This separation also assigns roles to each module, making the model more transparent and interpretable.
24
+
25
+ <span id="page-2-0"></span>Figure 1: An illustration of the attention within a MORES model using two layers of Interaction Blocks ( $2 \times$ IB). Representation Modules only show 1 layer of attention due to space limits. In a real model, Document Representation Module and Query Representation Module are deeper than shown here.
26
+
27
+ ![](_page_2_Figure_1.jpeg)
28
+
29
+ The two **Representation Modules** use Transformer encoders (Vaswani et al., 2017) to embed documents and queries respectively and independently. In particular, for documents,
30
+
31
+ $$H_l^{doc} = \operatorname{Encoder}_l^{doc}(H_{l-1}^{doc}) \tag{1}$$
32
+
33
+ $$H_1^{doc} = \text{Encoder}_1^{doc}(\text{lookup}(doc)) \tag{2}$$
35
+
36
+ and for queries,
37
+
38
+ $$H_l^{qry} = \text{Encoder}_l^{qry}(H_{l-1}^{qry}) \tag{3}$$
39
+
40
+ $$H_1^{qry} = \text{Encoder}_1^{qry}(\text{lookup}(qry)) \tag{4}$$
42
+
43
+ where lookup represents word² and position embeddings, and Encoder represents a Transformer encoder layer. Query and document Representation Modules can use different numbers of layers. Let M and N denote the number of layers for document and query representations respectively. The hidden states from the last layers are used as the Representation Modules' output. Formally, for a document of length d, query of length q, and model dimension n, let matrix $D = H_M^{doc} \in \mathbb{R}^{d \times n}$ be the output of the Document Representation Module and $Q = H_N^{qry} \in \mathbb{R}^{q \times n}$ be the output of the Query Representation module.
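+ To make the decoupling concrete, here is a minimal PyTorch-style sketch of the two Representation Modules (Eq. 1-4); the layer counts, vocabulary size, and the simplified embedding lookup are illustrative assumptions, not the authors' exact configuration.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class RepresentationModule(nn.Module):
+     """Stack of Transformer encoder layers producing contextualized token embeddings."""
+     def __init__(self, num_layers: int, dim: int = 768, heads: int = 12):
+         super().__init__()
+         layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
+         self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
+         self.embed = nn.Embedding(30522, dim)  # word (+ position) lookup, simplified
+
+     def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
+         return self.encoder(self.embed(token_ids))  # (batch, length, dim)
+
+ doc_module = RepresentationModule(num_layers=12)   # M layers, can run offline
+ qry_module = RepresentationModule(num_layers=10)   # N layers, runs once per query
+ D = doc_module(torch.randint(0, 30522, (1, 128)))  # document representation, d x n
+ Q = qry_module(torch.randint(0, 30522, (1, 16)))   # query representation, q x n
+ ```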
44
+
45
+ The **Interaction Module** uses the Representation Modules' outputs, Q and D, to make a relevance judgement. The module consists of a stack of Interaction Blocks (IB), a novel attentive block
46
+
47
+ that performs query-to-document cross-attention, followed by query self-attention<sup>3</sup>, as shown in Figure 1. Here, we write cross-attention from X to Y as Attend(X,Y), self-attention over X as Attend(X,X) and layer norm as LN. Let,
48
+
49
+ <span id="page-2-4"></span><span id="page-2-3"></span>
50
+ $$Q_{\mathbf{x}} = \mathsf{LN}(\mathsf{Attend}(Q, D) + Q) \tag{5}$$
51
+
52
+ $$Q_{\text{self}} = \text{LN}\left(\text{Attend}(Q_{x}, Q_{x}) + Q_{x}\right) \tag{6}$$
54
+
55
+ Equation 5 models interactions from query tokens to document tokens. Each query token in Q attends to document embeddings in D to produce relevance signals. Then, Equation 6 collects and exchanges signals among query tokens by having the query tokens attend to each other. The output of the first Interaction Block (IB) is then computed with a feed-forward network (FFN) on the query token embeddings with residual connections,
56
+
57
+ $$\text{IB}(Q, D) = \text{LN}\left(\text{FFN}(Q_{\text{self}}) + Q_{\text{self}}\right) \tag{7}$$
59
+
60
+ We employ multiple Interaction Blocks to iteratively repeat this process and refine the hidden query token representations, modeling multiple rounds of interactions, producing a series of hidden states, while keeping document representation D unchanged,
61
+
62
+ $$H_l^{IB} = \text{IB}_l(H_{l-1}^{IB}, D) \tag{8}$$
64
+
65
+ $$H_1^{IB} = \text{IB}_1(Q, D) \tag{9}$$
66
+
67
+ The Interaction Block (IB) is a core component of MORES. As shown in Table 1, its attention avoids the heavy full-attention over the concatenated query-document sequence, i.e. $(d+q)^2$ terms, saving online computation.
68
+
69
+ To induce relevance, we project the [CLS] token's embedding in the last $(K^{\rm th})$ IB's output to a score,
70
+
71
+ $$score(qry, doc) = \mathbf{w}^T \text{CLS}(H_K^{IB}) \tag{10}$$
73
+
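+ Continuing the sketch above, a minimal PyTorch-style Interaction Block (Eq. 5-7) with the final scoring projection (Eq. 10); the FFN width and the assumption that [CLS] is the first query position are illustrative choices, not the paper's exact implementation.
+
+ ```python
+ class InteractionBlock(nn.Module):
+     """Query-to-document cross-attention, query self-attention, then an FFN,
+     each followed by a residual connection and layer norm (Eq. 5-7)."""
+     def __init__(self, dim: int = 768, heads: int = 12):
+         super().__init__()
+         self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
+         self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
+         self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
+         self.ln1, self.ln2, self.ln3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)
+
+     def forward(self, Q: torch.Tensor, D: torch.Tensor) -> torch.Tensor:
+         Qx, _ = self.cross_attn(Q, D, D)    # query tokens attend to document tokens (Eq. 5)
+         Qx = self.ln1(Qx + Q)
+         Qs, _ = self.self_attn(Qx, Qx, Qx)  # query tokens exchange match signals (Eq. 6)
+         Qs = self.ln2(Qs + Qx)
+         return self.ln3(self.ffn(Qs) + Qs)  # refined query-side hidden states (Eq. 7)
+
+ ib1, ib2 = InteractionBlock(), InteractionBlock()
+ H = ib2(ib1(Q, D), D)                       # Eq. 8-9: D stays fixed across blocks
+ w = nn.Linear(768, 1, bias=False)
+ score = w(H[:, 0])                          # Eq. 10, assuming [CLS] is the first query token
+ ```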
74
+ MORES's modular design allows us to precompute and reuse representations. The Query Representation Module runs once when receiving a new query; the representation is then repeatedly used to rank the candidate documents. More importantly, the document representations can be built offline.
75
+
76
+ <span id="page-2-1"></span><sup>2</sup>We use WordPiece tokens, following BERT.
77
+
78
+ <span id="page-2-2"></span><sup>3</sup>We use multi-head version of attention in the Interaction Blocks (IB).
79
+
80
+ <span id="page-3-0"></span>Table 1: Time complexity of MORES and a typical Transformer ranker, e.g., a standard BERT ranker. We write $q$ for query length, $d$ for document length, $n$ for the Transformer's hidden layer dimension, and $N_{doc}$ for the number of candidate documents to be ranked for each query. For interaction, Reuse-S1 corresponds to the document representation reuse strategy, and Reuse-S2 to the projected document representation reuse strategy.
81
+
82
+ | | Total, 1 Query-Document Pair | Online, 1 Query-Document Pair | Online, $N_{doc}$ Documents |
+ |----------------------------|------------------------------|-------------------------------|-----------------------------|
+ | Typical Transformer Ranker | $n(d+q)^2 + n^2(d+q)$ | $n(d+q)^2 + n^2(d+q)$ | $(n(d+q)^2 + n^2(d+q)) \cdot N_{doc}$ |
+ | Document Representation | $nd^2 + n^2 d$ | 0 | 0 |
+ | Query Representation | $nq^2 + n^2 q$ | $nq^2 + n^2 q$ | $nq^2 + n^2 q$ |
+ | Interaction w/ Reuse-S1 | $n(qd + q^2) + n^2(q+d)$ | $n(qd + q^2) + n^2(q+d)$ | $(n(qd + q^2) + n^2(q+d)) \cdot N_{doc}$ |
+ | Interaction w/ Reuse-S2 | $n(qd + q^2) + n^2(q+d)$ | $n(qd + q^2) + n^2 q$ | $(n(qd + q^2) + n^2 q) \cdot N_{doc}$ |
106
+ We detail two representation reuse strategies with different time vs. space trade-offs: 1) a document representation reuse strategy that stores the Document Representation Module's output, and 2) a projected document representation reuse strategy that stores the Interaction Module's intermediate transformed document representations. These strategies compute the same quantities, produce *the same* ranking results, and only differ in time/space efficiency.
107
+
108
+ Document Representation Reuse Strategy (Reuse-S1) runs the Document Representation Module offline, pre-computing document representations D for all documents in the collection. When receiving a new query, MORES looks up document representations D for candidate documents, runs the Query Representation Module to get a query's representation Q, and feeds both to the Interaction Module to score. This strategy reduces computation by not running the Document Representation Module at query time.
109
+
110
+ Projected Document Representation Reuse Strategy (Reuse-S2) further moves document-related computation performed in the Interaction Module offline. In an IB, the cross-attention operation first projects the document representation D with key and value linear projections [\(Vaswani](#page-9-0) [et al.,](#page-9-0) [2017\)](#page-9-0)
111
+
112
+ $$D_k = DW_k, \ D_v = DW_v \tag{11}$$
113
+
114
+ where $W_k$, $W_v$ are the projection matrices. For each IB, Reuse-S2 pre-computes and stores $D_{proj}$[4](#page-3-1),
115
+
116
+ $$D_{proj} = \{DW_k, DW_v\} \tag{12}$$
117
+
118
+ Using Reuse-S2, the Interaction Module no longer needs to compute the document projections at online evaluation time. Reuse-S2 takes more storage: for each IB, both key and value projections of D are stored, meaning that an Interaction Module with l IBs will store 2l projected versions of D. With this extra pre-computation, Reuse-S2 trades storage for further speed-up.
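+ Continuing the PyTorch sketch above, the offline projection step (Eq. 11-12) can be written as follows; pulling `W_k`/`W_v` out of `nn.MultiheadAttention`'s packed projection weight is a detail of this sketch (bias terms omitted for brevity), not necessarily how the authors store $D_{proj}$.
+
+ ```python
+ def precompute_projections(ib: InteractionBlock, D: torch.Tensor):
+     """Reuse-S2: store D W_k and D W_v for one IB's cross-attention offline."""
+     dim = D.size(-1)
+     W = ib.cross_attn.in_proj_weight       # packed (3*dim, dim): W_q, W_k, W_v
+     W_k, W_v = W[dim:2 * dim], W[2 * dim:]
+     return D @ W_k.T, D @ W_v.T            # D_proj = {D W_k, D W_v} for this IB
+
+ # One (key, value) pair per IB, i.e. 2l stored matrices for l Interaction Blocks.
+ D_proj = [precompute_projections(ib, D) for ib in (ib1, ib2)]
+ ```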
119
+
120
+ [Table 1](#page-3-0) analyzes the online time complexity of MORES and compares it to the time complexity of a standard BERT ranker. We note that MORES can move all document-only computation offline. Reuse-S1 avoids the document self-attention term $d^2$, which is often the most expensive part due to long document length. Reuse-S2 further removes from online computation the document transformation term $n^2 d$, one that is linear in document length and quadratic in model dimension.
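+ To see the scale of these terms, here is a quick back-of-envelope count using the experimental setting ($n = 768$, $d = 512$, $q = 16$); it counts only the per-layer attention and projection terms named in Table 1, not measured runtime.
+
+ ```python
+ n, d, q = 768, 512, 16   # hidden dim, document length, query length
+
+ doc_self_attention = n * d * d                        # d^2 term, avoided by Reuse-S1
+ doc_projection     = n * n * d                        # n^2 d term, also avoided by Reuse-S2
+ online_per_ib      = n * (q * d + q * q) + n * n * q  # what Reuse-S2 still pays per IB
+
+ print(f"document self-attention : {doc_self_attention:.2e}")
+ print(f"document projections    : {doc_projection:.2e}")
+ print(f"online cost per IB      : {online_per_ib:.2e}")
+ ```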
121
+
122
+ MORES needs to learn three Transformers: two Representation Modules and one Interaction Module. The three Transformer modules are *coupled* during training and *decoupled* when used. To train MORES, we connect the three Transformers and enforce module coupling with end-to-end training using the pointwise loss function [\(Dai and Callan,](#page-8-7) [2019\)](#page-8-7). When training is finished, we store the three Transformer modules separately and apply each module at the desired offline/online time.
123
+
124
+ We would like to use pre-trained LM weights to ease optimization and improve generalization. However, there is no existing pre-trained LM that involves cross-attention interaction that can be used to initialize the Interaction Module. To avoid expensive pre-training, we introduce BERT weight assisted initialization. We use one copy of BERT weights to initialize the Document Representation Module. We split another copy of BERT weights between Query Representation and Interaction Modules. For MORES with l IBs, the first 12−l layers of the BERT weights initialize the Query Representation Module, and the remaining
125
+
126
+ <span id="page-3-1"></span><sup>4</sup>We pre-compute for all attention heads in our multi-head implementation
127
+
128
+ l layers' weights initialize the Interaction Module. This initialization scheme ensures that the Query Representation Module and the IBs use consecutive layers from BERT. As a result, upon initialization, the output of the Query Representation Module and the input of the first IB will live in the same space. In addition, for IBs, query-to-document attention is initialized with the same BERT attention weights as query self-attention. In practice, we found initializing the query-to-document attention weights important; random initialization leads to substantially worse performance. Details can be found in subsection 4.2.
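+ A schematic sketch of this weight split, continuing the PyTorch example above; the `load_attn` helper and the decision to leave output projections untouched are assumptions made for this illustration, and only the layer slicing follows the description here.
+
+ ```python
+ from transformers import BertModel
+
+ l = 2                                                    # number of Interaction Blocks
+ bert_a = BertModel.from_pretrained("bert-base-uncased")  # copy 1 -> Document Module
+ bert_b = BertModel.from_pretrained("bert-base-uncased")  # copy 2 -> Query Module + IBs
+
+ doc_layers = bert_a.encoder.layer                        # all 12 layers
+ qry_layers = bert_b.encoder.layer[: 12 - l]              # first 12 - l layers
+ ib_layers  = bert_b.encoder.layer[12 - l:]               # remaining l layers seed the IBs
+
+ def load_attn(mha: nn.MultiheadAttention, bert_self_attn):
+     """Copy a BERT self-attention layer's Q/K/V weights into an nn.MultiheadAttention."""
+     q, k, v = bert_self_attn.query, bert_self_attn.key, bert_self_attn.value
+     with torch.no_grad():
+         mha.in_proj_weight.copy_(torch.cat([q.weight, k.weight, v.weight], dim=0))
+         mha.in_proj_bias.copy_(torch.cat([q.bias, k.bias, v.bias], dim=0))
+
+ for ib, bert_layer in zip([ib1, ib2], ib_layers):
+     load_attn(ib.self_attn, bert_layer.attention.self)
+     load_attn(ib.cross_attn, bert_layer.attention.self)  # same weights seed cross-attention
+ ```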
129
+
130
+ The first experiment compares the effectiveness and efficiency of MORES to a state-of-the-art BERT ranker for supervised ranking.
131
+
132
+ We use the MS MARCO passage ranking collection (MS MARCO) (Nguyen et al., 2016) and evaluate on two query sets with distinct characteristics: Dev Queries have a single relevant document with a binary relevance label. Following Nguyen et al. (2016), we used MRR@10 to evaluate the ranking accuracy on this query set. TREC2019 DL Queries is the evaluation set used in the TREC 2019 Deep Learning Track. Its queries have multiple relevant documents with graded relevance. Following Craswell et al. (2019), we used MRR, NDCG@10, and MAP@1000 as evaluation metrics. All methods were evaluated in a reranking task to re-rank the top 1000 documents of the MS MARCO official BM25 retrieval results.
133
+
134
+ We test MORES effectiveness with a varied number of Interaction Blocks (IB) to study the effects of varying the complexity of query-document interaction. Models using 1 layer of IB ( $1 \times$ IB) up to 4 layers of IB ( $4 \times$ IB) are tested.
135
+
136
+ We compare MORES with the BERT ranker, a state-of-the-art ranker fine-tuned from BERT, which processes concatenated query-document pairs. Both rankers are trained with the MS MARCO training set, which consists of single-relevance queries. We train MORES on a 2M subset of MS MARCO's training set. We use stochastic gradient descent to train the model with a batch size of 128. We use the AdamW optimizer with a
137
+
138
+ learning rate of 3e-5, a warm-up of 1000 steps and a linear learning rate scheduler for all MORES variants. Our baseline BERT model is trained with a similar training setup to match the performance reported by Nogueira and Cho (2019). Our BERT ranker re-implementation has better performance than that reported by Nogueira and Cho (2019). The BERT ranker and all MORES models are implemented with PyTorch (Paszke et al., 2019) based on the huggingface implementation of Transformers (Wolf et al., 2019).
139
+
140
+ We aim to test that MORES' accuracy is equivalent to the original BERT ranker (while achieving higher efficiency). To establish equivalence, statistical significance testing was performed with a non-inferiority test commonly used in the medical field to test that two treatments have similar effectiveness (Jayasinghe et al., 2015). In this test, rather than testing to reject the null hypothesis $H_0$ : $\mu_{\rm BERT} = \mu_{\rm MORES}$ , we test to reject $H_0'$ : $\mu_{\rm BERT} - \mu_{\rm MORES} > \delta$ for some small margin $\delta$ . By rejecting $H_0'$ we accept the alternative hypothesis, which is that any reduction of performance in MORES compared to the original BERT ranker is inconsequential. We set the margin $\delta$ to 2% and 5% of the mean of the BERT ranker.
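+ One way to carry out such a non-inferiority test (an illustrative choice of procedure, not necessarily the authors' exact test): shift the paired per-query differences by the margin and run a one-sided test.
+
+ ```python
+ import numpy as np
+ from scipy import stats
+
+ def non_inferiority_test(bert_scores, mores_scores, rel_margin=0.02):
+     """One-sided paired test of H0': mean(BERT - MORES) > delta,
+     with delta set to a fraction (2% or 5%) of the BERT mean."""
+     bert, mores = np.asarray(bert_scores), np.asarray(mores_scores)
+     delta = rel_margin * bert.mean()
+     diff = bert - mores
+     # Reject H0' (declare non-inferiority) if mean(diff) is significantly below delta.
+     _, p = stats.ttest_1samp(diff, popmean=delta, alternative="less")
+     return p
+
+ # Per-query metric values for both rankers (toy random numbers for illustration).
+ rng = np.random.default_rng(0)
+ p = non_inferiority_test(rng.random(100), rng.random(100), rel_margin=0.02)
+ print("non-inferior at the 2% margin:", p < 0.05)
+ ```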
141
+
142
+ Table 2 reports the accuracy of MORES and the baseline BERT-based ranker. The experiments show that MORES with $1 \times$ IB can achieve 95% of BERT performance. MORES with $2 \times$ IB can achieve performance comparable to the BERT ranker with a 2% margin. Three IBs do not improve accuracy and four hurt accuracy. We believe that this is due to increased optimization difficulties, which outweigh the improved model capacity. Recall that for MORES we have one set of artificial cross-attention weights per IB not initialized with real pre-trained weights. Performance results are consistent across the two query sets, showing that MORES can identify strong relevant documents (Dev Queries), and can also generalize to ranking multiple, weaker relevant documents (TREC2019 DL Queries).
143
+
144
+ The results show that MORES can achieve ranking accuracy competitive with state-of-the-art ranking models, and suggest that the entangled and computationally expensive full-attention Transformer can be replaced by MORES's lightweight, modularized design.
145
+
146
+ <span id="page-5-0"></span>Table 2: Effectiveness of MORES models and baseline rankers on the MS MARCO Passage Corpus. \* and $\dagger$ indicate non-inferiority (Section 4.1) with p < 0.05 to the BERT ranker using a 5% or 2% margin, respectively.
147
+
148
+ | Model | Dev Queries MRR | TREC2019 DL MRR | TREC2019 DL NDCG@10 | TREC2019 DL MAP |
+ |---------------------|--------------------|--------------------|--------------------|--------------------|
+ | BERT ranker | 0.3527 | 0.9349 | 0.7032 | 0.4836 |
+ | MORES $1 \times$ IB | $0.3334^{*}$ | $0.8953^{*}$ | $0.6721^{*}$ | $0.4516^{*}$ |
+ | MORES $2 \times$ IB | $0.3456^{\dagger}$ | $0.9283^{\dagger}$ | $0.7026^{\dagger}$ | $0.4777^{\dagger}$ |
+ | MORES $3 \times$ IB | $0.3423^{\dagger}$ | $0.9271^{\dagger}$ | $0.6980^{\dagger}$ | $0.4687^{*}$ |
+ | MORES $4 \times$ IB | $0.3307^{*}$ | $0.9322^{\dagger}$ | $0.6565^{*}$ | $0.4559^{*}$ |
157
+
158
+ <span id="page-5-1"></span>Table 3: Ranking Accuracy of MORES when using / not using attention weights copied from BERT to initialize Interaction Module. The models were tested on the MS MARCO dataset with the Dev Queries.
159
+
160
+ | Initialization | Dev Queries MRR@10 | TREC2019 DL MRR | TREC2019 DL NDCG@10 | TREC2019 DL MAP |
+ |--------|--------|--------|--------|--------|
+ | copy | 0.3456 | 0.9283 | 0.7026 | 0.4777 |
+ | random | 0.2723 | 0.8430 | 0.6059 | 0.3702 |
165
+
166
+ Document and query representations can be computed independently without seeing each other. With the contextualized representations, 2 layers of lightweight interaction are sufficient to estimate relevance.
167
+
168
+ We also investigate IB initialization and compare MORES $2\times$ IB initialized by our proposed initialization method (copying BERT's self-attention weights into the IB cross-attention weights) with a random initialization method (cross-attention weights randomly initialized). Table 3 shows that random initialization leads to a substantial drop in performance, likely due to difficulty in optimization.
169
+
170
+ Section 3.2 introduces two representation reuse strategies for MORES with different time vs. space trade-offs. This experiment measures MORES' real-time processing speed with these two strategies and compares it with measurements for the BERT ranker. We test MORES $1 \times$ IB and MORES $2 \times$ IB. Additional IB layers incur more computation but do not improve effectiveness, and are hence not considered. We record the average time for ranking one query with 1000 candidate documents on an 8-core CPU and a single GPU. We measured ranking speed with documents of length 128 and 512 with a fixed query length of 16. Tables 4 (a) and (b) show the speed tests for the
171
+
172
+ <span id="page-5-3"></span>Table 4: Average time in seconds to evaluate one query with 1,000 candidate documents, and the space used to store pre-computed representations for each document. Len: input document length.
173
+
174
+ (a) Document Representation Reuse (Reuse-S1)
175
+
176
+ | Len | Model | CPU Time | CPU Speedup | GPU Time | GPU Speedup | Space (MB) |
+ |-----|-------------|------|-----|--------|------|-------|
+ | 128 | BERT ranker | 161s | - | 2.70s | - | 0 |
+ | 128 | MORES 1×IB | 4s | 40x | 0.04s | 61x | 0.4 |
+ | 128 | MORES 2×IB | 8s | 20x | 0.12s | 22x | 0.4 |
+ | 512 | BERT ranker | 698s | - | 13.05s | - | 0 |
+ | 512 | MORES 1×IB | 11s | 66x | 0.14s | 91x | 1.5 |
+ | 512 | MORES 2×IB | 20s | 35x | 0.32s | 40x | 1.5 |
185
+
186
+ (b) Projected Document Representation Reuse (Reuse-S2)
187
+
188
+ | Len | Model | CPU Time | CPU Speedup | GPU Time | GPU Speedup | Space (MB) |
+ |-----|-------------|------|------|--------|------|-------|
+ | 128 | BERT ranker | 161s | - | 2.70s | - | 0 |
+ | 128 | MORES 1×IB | 2s | 85x | 0.02s | 118x | 1.5 |
+ | 128 | MORES 2×IB | 5s | 36x | 0.05s | 48x | 3.0 |
+ | 512 | BERT ranker | 698s | - | 13.05s | - | 0 |
+ | 512 | MORES 1×IB | 3s | 170x | 0.08s | 158x | 6.0 |
+ | 512 | MORES 2×IB | 6s | 124x | 0.10s | 124x | 12.0 |
197
+
198
+ two reuse strategies, respectively. We also include the per-document data storage size<sup>6</sup>.
199
+
200
+ We observe a substantial speedup in MORES compared to the BERT ranker, and the gain is consistent across CPUs and GPUs. The original BERT ranker took hundreds of seconds – several minutes – to generate results for one query on a CPU machine, which is impractical for real-time use. Using Reuse-S1, MORES with $1 \times$ IB was 40x faster than the BERT ranker on shorter documents (d=128); the more accurate $2 \times$ IB model also achieved a 20x speedup. The difference is more pronounced on longer documents. As the length of the document increases, a larger portion of compute in the BERT ranker is devoted to performing self-attention over the document sequence. MORES pre-computes document representations
201
+
202
+ <span id="page-5-2"></span><sup>5</sup>Details are in Appendix A.1.
203
+
204
+ <span id="page-5-4"></span><sup>6</sup>We report un-compressed values. Compression can further reduce data storage.
205
+
206
+ <span id="page-6-1"></span>Table 5: Domain adaptation on ClueWeb09-B. adapt-interaction and adapt-representation use MORES 2× IB. ∗ and † indicate non-inferiority (Section [4.1\)](#page-4-1) with p < 0.05 to the BERT ranker using a 5% or 2% margin, respectively.
207
+
208
+ | Model | NDCG@20 (Title) | MAP (Title) | Prec@20 (Title) | NDCG@20 (Desc.) | MAP (Desc.) | Prec@20 (Desc.) |
+ |----------------------|---------|---------|---------|---------|---------|---------|
+ | BERT ranker | 0.3294 | 0.1882 | 0.3755 | 0.3597 | 0.2075 | 0.3881 |
+ | MORES 1× IB | 0.3059 | 0.1753 | 0.3407 | 0.3472 | 0.2009 | 0.3705 |
+ | MORES 2× IB | 0.3317† | 0.1872† | 0.3662† | 0.3571† | 0.2039† | 0.3816† |
+ | MORES 3× IB | 0.3299† | 0.1841† | 0.3679† | 0.3476∗ | 0.2008∗ | 0.3763∗ |
+ | MORES 4× IB | 0.3164∗ | 0.1824∗ | 0.3515 | 0.3472∗ | 0.2012∗ | 0.372∗ |
+ | adapt-interaction | 0.3179∗ | 0.1849† | 0.3548 | 0.3385 | 0.1976∗ | 0.3652 |
+ | adapt-representation | 0.3319† | 0.1865† | 0.3657∗ | 0.3557† | 0.2072† | 0.3828† |
219
+
220
+ <span id="page-6-2"></span>Table 6: Domain adaptation on Robust04. adapt-interaction and adapt-representation use MORES 2× IB. ∗ and † indicate non-inferiority (Section [4.1\)](#page-4-1) with p < 0.05 to the BERT ranker using a 5% or 2% margin, respectively.
221
+
222
+ | Model | NDCG@20 (Title) | MAP (Title) | Prec@20 (Title) | NDCG@20 (Desc.) | MAP (Desc.) | Prec@20 (Desc.) |
+ |----------------------|---------|---------|---------|---------|---------|---------|
+ | BERT ranker | 0.4632 | 0.2225 | 0.3958 | 0.5065 | 0.245 | 0.4147 |
+ | MORES 1× IB | 0.4394∗ | 0.2097 | 0.3741∗ | 0.4683 | 0.2263 | 0.3835 |
+ | MORES 2× IB | 0.4599† | 0.2194† | 0.3940† | 0.4846∗ | 0.2323∗ | 0.4008∗ |
+ | MORES 3× IB | 0.4551† | 0.2135∗ | 0.3934† | 0.4854∗ | 0.2334∗ | 0.4006∗ |
+ | MORES 4× IB | 0.4553† | 0.2177† | 0.3938† | 0.4802 | 0.2309 | 0.3980∗ |
+ | adapt-interaction | 0.4389 | 0.2117∗ | 0.3723 | 0.4697 | 0.2249 | 0.3896 |
+ | adapt-representation | 0.4564† | 0.2182† | 0.3926† | 0.4884∗ | 0.2327∗ | 0.4042∗ |
233
+
234
+ and avoids document-side self-attention, yielding 35x to 90x speedups on longer documents (d = 512).
235
+
236
+ Reuse-S2 – the projected document reuse strategy – further enlarges the gain in speed, leading to up to 170x speedup using 1× IB, and 120x speedup using 2× IB. Recall that Reuse-S2 precomputes the document projections that will be used in MORES' Interaction Module, which is of $n^2 d$ time complexity where n is the model hidden dimension (details can be found in the complexity analysis in Table [1\)](#page-3-0). In practice, n is often large, e.g., our experiment used n = 768[7](#page-6-0). Reuse-S2 avoids the expensive $n^2 d$ term at evaluation time. Note that Reuse-S2 *does not affect accuracy*; it trades space to save more time.
237
+
238
+ The second experiment uses a domain-adaptation setting to investigate whether the modular design of MORES affects adaptation and generalization ability, and how the individual Interaction and Representation Modules behave across domains.
239
+
240
+ This experiment trains MORES using the MS MARCO dataset, and adapts the model to two datasets: ClueWeb09-B and Robust04. ClueWeb09-B is a standard document retrieval collection with 50M web pages crawled in 2009. Evaluation queries come from the TREC 2009-2012 Web Tracks. We used two variants of the queries: *Title Queries* consists of 200 short, keyword-style queries. *Description Queries* consists of 200 queries that are natural language statements or questions. Robust04 is a news corpus with 0.5M documents. Evaluation queries come from the TREC 2004 Robust Track, including 250 *Title Queries* and 250 *Description Queries*. We evaluate ranking performance with NDCG@20, MAP, and Prec@20.
241
+
242
+ Domain adaptation is done by taking a model trained on MS MARCO and fine-tuning the model on relevant labels from the target dataset. Due to the small query sets in ClueWeb09-B and Robust04, we use 5-fold cross-validation for finetuning and testing. Data split, initial ranking, and document pre-processing follow [Dai and Callan](#page-8-7)
243
+
244
+ <span id="page-6-0"></span><sup>7</sup>This follows model dimension in BERT
245
+
246
+ <span id="page-7-0"></span>![](_page_7_Figure_0.jpeg)
247
+
248
+ Figure 2: Visualization of attention in MORES's Representation and Interaction Modules.
249
+
250
+ [\(2019\)](#page-8-7). The domain adaptation fine-tuning procedures use a batch size of 32 and a learning rate of 5e-6 while having other training settings same as supervised ranking training.
251
+
252
+ The top 5 rows of [Table 5](#page-6-1) and [Table 6](#page-6-2) examine the effectiveness of adapting the full MORES model. The adapted MORES models behave similarly to how they do on MS MARCO: using two to three layers of Interaction Blocks (IB) achieves performance very close to the BERT ranker on both datasets for both types of queries, while using a single layer of IB is less effective. Importantly, our results show that the modular design of MORES does not hurt domain transfer, indicating that new domains and low-resource domains can also use MORES through simple adaptation.
253
+
254
+ With separate representation and interaction components in MORES, we are interested to see how each is affected by adaptation. We test two extra adaptation settings on MORES 2× IB: fine-tuning only the Interaction Module (adapt-interaction) or only the Representation Modules (adapt-representation) on the target domain. Results are shown in the bottom two rows of [Table 5](#page-6-1) and [Table 6](#page-6-2) for the two data sets.
255
+
256
+ We observe that only adapting the Interaction Module to the target domain is less effective compared to adapting the full model (MORES 2× IB), suggesting that changing the behaviour of interaction is not enough to accommodate language changes across domains. On the other hand, freezing the Interaction Module and only fine-tuning the Representation Modules (adapt-representation) produces performance on par with full model adaptation. This result shows that it is more important to have domain-specific representations, while interaction patterns are more general and not totally dependent on representations.
257
+
258
+ The modular design of MORES allows Representation and Interaction to be inspected separately, providing better interpretability than a black-box Transformer ranker. [Figure 2](#page-7-0) examines the attention with MORES for a hard-to-understand query *"what is paranoid sc"* where "sc" is ambiguous, along with a relevant document *"Paranoid schizophrenia is a psychotic disorder. In-depth information on symptoms...."* [8](#page-7-1)
259
+
260
+ In the Document Representation Module [\(Fig](#page-7-0)[ure 2a](#page-7-0)), we can see that *"disorder"* uses *"psychotic"* and *"schizophrenia"* for contextualization, making itself more specific. In the Query Representation Module [\(Figure 2b](#page-7-0)), because the query is short and lacks context, *"sc"* exhibits broad but less meaningful attention. The query token *"sc"* is further contextualized in the Interaction Module [\(Figure 2c](#page-7-0)) using information from the document side – *"sc"* broadly attends to the document tokens in the first IB to disambiguate itself. With the extra context, *"sc"* is able to correctly attend to "schizophrenia" in the second IB to produce relevance signals [\(Figure 2d](#page-7-0)).
261
+
262
+ This example explains why MORES 1× IB performs worse than MORES with multiple IBs – ambiguous queries need to gather context from the document in the first IB before making relevance estimates in the second. More importantly, the example indicates that the query to document
263
+
264
+ <span id="page-7-1"></span><sup>8</sup>We only show the first 16 tokens due to space limitation.
265
+
266
+ attention has two distinct contributions: understanding query tokens with the extra context from the document, and matching query tokens to document tokens, with the former less noticed in the past. We believe MORES can be a useful tool for better interpreting and understanding SOTA black-box neural rankers.
2005.00571/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2020-04-25T22:56:43.239Z" agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.122 Safari/537.36" etag="e0TO-V-ZcxtLlm-CpZs0" version="12.9.2" type="google"><diagram name="页-1" id="页-1">7V1bd6q8Fv01HeN8D9sRknB7rFqt3b3Y2ovtGwIqFcEi1suvP0HAQkAKCmq76cPekmCAZGYy18pa8QzVxoumJU2GN6ai6mcQKIszVD+DUGAQ+dcpWLoFmGfdgoGlKW4R81XQ0VaqVwi80pmmqNPQibZp6rY2CRfKpmGosh0qkyzLnIdP65t6+KoTaaBGCjqypEdLXzTFHnqPxYKv8ktVGwz9KzPAqxlL/slewXQoKeY8UIQuzlDNMk3b/TRe1FTd6Tu/X9zvNbbUbm7MUg07zRe0QW2Fmss76ZF7/3jqD1v6o/IH8oJ3d/bSf+TPqbJo1clnlowVqvY1Xa+ZummREsM0yEnVgSUpGrksVUyecOK0MbVVQ9b0/xlv1eu6eT6oXzy0wQp1Wst3dQ46Tw+DDrlO9Xomjm7ezO71cH4r3pECezY6163bydWA6azql/j+hTwNOfv2dfD2OHo/x0+j+dXToNk1BuRsrt5lVtf9sbkcK12T4RdDdjV+uOLuMay+1PoyNK/6yrlFznz9JP+ct8Vaj7+7QpekUXjVZC5vmyvyUcAj/XHYfq7J5M7/I0+hSNOh6vQZ4x20JdtWLcMpqTijC7DzH6rqUk/Vq5I8GljmzFCozliXrdtxzh3aY91rcj7UbLUzkWSnq+Zk0pAyw7x2GvPO6JuRro0Otjf+n6plq4tAkTf4TdUcq7a1JKcsNtB0v+LNRBZi93j+hWskQLdsGMC0IHrTyZtKg03TX2gjHzzAZQIfnwV8ZxA11n8lAn8HAhEPogjkQAwCQUEIZDC3HYGYVGahvyyjMbUtc6RSp3wzRHmMAMYUBwiREWD46ABwqLABSKAATCrzoICH+rQ1uCAUMHVmWwt91F9qaFBrfdjTh+bAlD67AIzY27HBY9AjE1UZP/bf3m4G4LMnKyPmTVyqRvMaXTZGq/E9M2yJb0uDadl3HamDJ8LqxXhAj/eGBEfPk3a9PkFKp2nrpGMbo9ElltDzqv4w++gbpOUmc3476n/OJDh4B7X64rzL1Vb9al+s3eK7idiWXsXpVediUB93ydndD9KGNG7wAOFqhBjSwo3ZBVnJsyUF4I4HqARBg0llBFAiVAAL0gPq8/nyeg2o9lx61+5b3EP/iQzWUOjNILyqNR+qziNw4qjJaNJqwQp1aXVDSj5b7+9Ve6zVNFCb6LV3Ve69CcarzNYk8vWpcXU1vb74eH2cDEENP70PcfdzfCF9rAgVY2YwuhY5QzKV7l+r2eBeO/17+0kDd91u/Vn/aPevbtvcbGraT8PX3lXr6WZ8T4DcaDat88XkQX9/Xsr16kxmhn9v4LWsdx1w8pei0eiTsa8KsKZg9pHcdnUsTydy52E8w43/8n3BFIAxrxaGKA1GCS1G0/hlBeBPTMKfWOLvl+PPZ7ujAdA3TWMBuAYaBcB+XwagBOBvASA6NgOyTBIAmRKAvxyA+MgMCIH3Cv6U9JkHwAgiVYNY5tbaH9fTTXkU7j51odld19yGonf86vRyBXOsd1xfeN2+PlgGDtqqpZFHUS2/zFAamr5p2/DdnGiHQVGVkL8yUXizMaa0X2apumRrn2EvZ9xQeFdomxq5wS9DkquIIgL+H8NxYQ5iKgJ512z+IB++wNScWbLqtRn0WVKXwZTDgIKMLVkD1Y40s0bNpkv2YTL+JwOJOS3EsEIFByHDh0mDESoIs7vBhBEYurHDAsV3WiUCRde1yVTdQs4BzEjTibuS0dcWDrfHuaC2jnYGX5Dv6910WtQbx+AY3sZFCQcOxvQip9ueQzLUndzHzPQr/kzX0+CcnED6ZPFVST4N3P+rfkPkzty2/BpqlEjn2dRw6NrAcb3KpKudmVh1uliTJf3cqxhriqKv38YquRGpt27Kma4TB2/rXmKrZ2zduRxpbmab7v2uW89jIPmwUw8DLjKQKGZiw8K8qr53J9avz9EmyG/wqlZYaggYv+QbRxgqzBHG4aRBiJHhpWf17GieVX/G7O9ZLRBQbBKgYOnZOnG7LgPGTtKzyiWs1bEcKvH3y/F3dM8qF2cOZtOnDOPoU7DuNbCu60tjTV+6teRr0tjpUu/LkqVJOl24+brsjpn7Rc+w2lQqmkWMCM003BN029pU6aoTTfBnSkZRMwZuPQjclq4Z6h+/R9c37VyA3dSbzhdt75aZOLXt9knPL2ibuiYvN8WWX36r2nPTGkUrAmK9F1OWKOC3qvOojv8KNYlyB8efV8VMYqjhcMfciTTBS7DUnqpaVzYbdxeNi/r1Ravz9jImt37Btl+G6kS5aWovymi14l/B+TNkVu/3FzV7JuPJk9q7vql/3qv6XX3417iYNmaDN8V4udOIIlKmty00mZz/reKX6ervgn/5aFxNlneOmFHu4YeI2nBujl4gv0C6SEprJrHY5bcrvY0HH3fyfM0d49bnoD0b1mSjQ44ao5YwWojs9WfjlhwaY/WBrYlkwBv3ZGqs41FcWe2Hv8GiFVJm/Q0ZMUQUSIxGNrFilCl8n1oBTCGk8AekchyBisDxIcfRv+U2ItRUARHvTmVHRxGNFKcpTmQFBrGcKIADe434BAsNsbSgPq6ZbJm25L5O6iLIZ9piP/gowQEV57dgCpu2fIKFg9gYhVmazGdxUYrYjVKE2aIUU7wsvkdh8kzLZl4fGHwJ5g1icRR8APRAvzRvijZvDoA5rzb8bvoT40A8qLmTFJ2N2Jjo7BKQvxqQ4rHxuL+oThDCpPOspaO4/xDBiQS/JKC5ncMv0b0+WgaPaNl98tKap/UXleiRWlWLsCKw+Gu5Hye3W7iujov/KMRPQ0gP4Yijxi/9eZ6a687jzUm6YxASxWxCu3TH7KSwN1mKORl5iA0b70jEMSujcSxXnHdGTBPWc3LRGlTmjt+vR4vWEONex2W0xvfRGuFQORZHU7AOG60hHux1WS5rHPs9Wi5r/KBlDRh+dbIo5sV5yGUNCNKEOZbLGqmWNSC9rMGy4s7LGjRSSFNHW9aAgN0bJb4tvrGt05rhOyDC7Y6kB/Lt11PBjp8OHxhtnoUMAAIGCHG7YQjR20NE4Vg4cLgY4JR2e2m3l3rj8Ha7iCjxwR3bbocgt3ScL7nxGlQb8dIj6BiGPPVGWi/MZXYNF/SiAqeVJIaoxASWY3ZVOJusZq8pTqTCnIt+N/l+lH8kpcFx6dMpDTAy/Q8af04wkDQEZULDKSU0bObLCSc0bGI5yoSG01rgLgBjp5jQsEF7mdDwb+Lv2AkNkMktTNn/nMqX94P9dogBtC7aNWqCXhklDVUQE
BkOIMAi7IT8HVbhJm1chX7dVoiIDUe/sP7WiEeTI5uNgmMHoNS3p6VvxdSAOx6gkjZiQjH6ttyI6bT0RXqMnaS+jVETAfzF6NsSf78Kf0fXt9DPOM7ZYezX5KhvXdWYxto4ESXMUw5ZQMUnpZbBIAwalqMaKlr3ov1Bslmu/lqhfg3WHXS5ehPnfyJAwdQARzbnSo0UQaxwgU3lqJgHnqmIAhI3fwdG0T+ey0lbU2zUmjpoOh1EZS7nr8zl3My0E87lhKjM5TxJ5XsAzJ1kLidEZS5nCcgAII+cy0m03O8S3ZsBORXRjcPb67IcNZSpd2BGOEF08wx7VNGd30bfRa9ffW/fnxiCEKLigekf4UoduiVShhqXLqyYjJq0DJzm5Wdthzwfjlj0PRJp74s6n3xw7yBfyPo7mueWxL6bgyonl9RpBcYjLkx6HLOrT4puiHZuFU1smCkIJdloLheUnJw/iqO3PdiV2AQxuaG8iA2HiQ0JycSGIMh0Pv0c1PlFEaGvYMtkj6x5pxbpe4m0qkRTPh5mpMsguCG11tETQjCM+bmzRqPK8hkMy5/5E645mojRtJFNrkdOKel0KAz2l2AChMzFRC5giu/yMw9xDimE4c19gm9Dj+D9nIyOP3PD6R6gInIhw7IC4PfG5R77AKUxM93f7DiawKJ34tzVJkDU6mGkocyvzvwl2P7JRgkQDPwuEKJsT05IlmUxmdC7i7PNa3gHLLq67lTsUyzuiEVMNyQcOLUI5+AL+x5pu6e8wSALggrH7JTylicL8n4HpbRQT8UgZekVp9QY5UByQzmZGttuuFhTgI2zdrOZAkJAcAeEvDXo/W8t4mHtDILAp/+KUfZgJ2XvPWRfksPP+KxaimRIKUwCzZjMnCbUcU9VFHKz00xSP8FOyD9JPFuu5O8wB2KjNAqN6Mu+LxVgqcwG1t9yLECWIhslS0h5/fMzA1hYEkNJDCUxHJsYwvoYsjE7QRyYGOL8Axmdh4hM2HjPoeso3EIZHl94ZLGNKRJpItEtGEcQ64m3fapneFi6FdVvZjrbryE7QA3rtqIUskfrB3ZcMqDkovy5yBklz0xlEpeUMnguIbVLDR+7Sw0X5aYEW25Patp/W9T1NChiXePQqmfnhY60fZRMZ27ZNDdmKmmppKWUkommpeiCShwpFZYJ6m8TWpLSEUjJiiGlj5KUSlI6MilhmFIrFUZLvle7pKWSlkpa+ldpif61oDj30mFpadf1qJJ6MlJPHBmkaxdUBIQSiSbAX984p7bcRcl9JfcVy32QSllAfkrC0Zhv/wU3sdRjqbqIbuFSm9qmtcxGN7EtlWzzu9hGzIdsMMD01oUw7rfY+JiVPD9ELn/G4fc3AUvGKRmnZJxTZBwUzitgIRPHNzHxlwXyzf4KZ512FM82gciBHxAU4D/HN+FHuzTk82g9WIj6fWKzoTgmcq7zoM4lS9nKG5nMsO8erWSq38FUTE4pUBhTKVD+r5sej6hOLT/yp2gd96lz4bSdCc1Lw2zONEUyZLUAciuZrWS2lMwW9jFt0t8DzCYclNmE/U0+Bv5LwZtwi5i5MGznOhCcD1TDzkgi2xo9MI8I2bbG/PdoZP2Wcf724RKYD5cggd5iJ86BJMawSWE/Qw+F/ZPH1nOh9CCl6KOIzlnnR5Kb352DSgIqCSh9sAATzk9FMTtVHJh+vvcnuV3wBfRCnLkNzXBIiXxbnewzg8hA2OGhjU6dbZPMUslNSr11Uw4+vGxj0i5bPWPrzuVIczPb9LqEiQGJc0zhMgfcQJEy7+N2OInBDcwugsmhZTpDs6lrkukwvDEV1Tnj/w==</diagram></mxfile>
2005.00571/main_diagram/main_diagram.pdf ADDED
Binary file (42.8 kB)
 
2005.00571/paper_text/intro_method.md ADDED
@@ -0,0 +1,64 @@
1
+ # Introduction
2
+
3
+ While knowledge graphs (KGs) are widely adopted in natural language processing applications, a major bottleneck hindering their usage is the sparsity of facts (Min et al., 2013), leading to extensive studies on KG completion (or reasoning) (Trouillon et al., 2016; Dettmers et al., 2018; Das et al., 2017; Xiong et al., 2017; Lin et al., 2018; Meilicke et al., 2019). Many traditional approaches to the KG reasoning task are based on logic rules (Landwehr et al., 2007, 2010; Galárraga et al., 2013, 2015). These methods are referred to as *symbolic-based methods*. Although they showed good performance (Meilicke et al., 2019, 2020), they are inherently limited by their representations and by the generalizability of the associated relations of the given rules.
4
+
5
+ To ameliorate such limitations, *embedding-based methods* (Bordes et al., 2013; Socher et al., 2013; Wang et al., 2014; Yang et al., 2014; Trouillon et al., 2016; Dettmers et al., 2018, 2017; Sun et al., 2019; Zhang et al., 2019) were proposed. They learn distributed representations for entities and relations and make predictions using the representations. Despite their superior performance, they fail to provide human-friendly interpretations.
8
+
9
+ To improve the interpretability, many recent efforts formulate the task as a multi-hop reasoning problem using reinforcement learning (RL) techniques (Xiong et al., 2017; Das et al., 2017; Shen et al., 2018; Chen et al., 2018; Lin et al., 2018), referred to as *walk-based methods*. A major issue of these methods is the reward function. A "hit or not" reward is too sparse, while a shaped reward using an embedding-based distance measurement (Lin et al., 2018) may not always result in desirable paths.
10
+
11
+ In this paper, we propose *RuleGuider* to tackle the aforementioned reward issue in walk-based methods with the help of symbolic rules. We aim to improve the performance of walk-based methods without losing their interpretability. RuleGuider is composed of a symbolic-based model that fetches logic rules and a walk-based agent that searches for reasoning paths under the guidance of the rules. We also introduce a way to separate the walk-based agent into two sub-agents to allow for further efficiency. We experimentally show the efficiency of our model without losing interpretability.
12
+
13
+ # Method
14
+
15
+ In this section, we review the KG reasoning task. We also describe the symbolic-based and walk-based methods used in RuleGuider.
16
+
17
+ **Problem Formulation.** A KG consisting of fact triples is represented as $\mathcal{G} = \{(e_i, r, e_j)\} \subseteq \mathcal{E} \times \mathcal{R} \times \mathcal{E}$, where $\mathcal{E}$ and $\mathcal{R}$ are the sets of entities and relations, respectively. Given a query $(e_s, r_q, ?)$ where $e_s$ is a subject entity and $r_q$ is a query relation,
18
+
19
+ <span id="page-0-0"></span><sup>\*</sup>Equal contributions.
20
+
21
+ <sup>1</sup>https://github.com/derenlei/KG-RuleGuider
23
+
24
+ <span id="page-1-0"></span>![](_page_1_Figure_0.jpeg)
25
+
26
+ Figure 1: Rule quality difference between datasets. There exist high-quality rules on WN18RR.
27
+
28
+ the task of KG reasoning is to find a set of object entities $E_o$ such that $(e_s, r_q, e_o)$, where $e_o \in E_o$, is a fact triple missing in $\mathcal{G}$. We denote the queries $(e_s, r_q, ?)$ as *tail queries*. We note that we can also perform *head queries* $(?, r_q, e_o)$. To be consistent with most existing works, we only consider tail queries in this paper.
29
+
30
+ Symbolic-based Methods. Some previous methods mine Horn rules from the KG and predict missing facts by grounding these rules. A recent method, AnyBURL [\(Meilicke et al.,](#page-4-4) [2019\)](#page-4-4), showed comparable performance to the state-of-the-art embedding-based methods. It first mines rules by sampling paths from $\mathcal{G}$, and then makes predictions by matching queries to the rules. Rules are in the format $r(X, Y) \leftarrow b_1(X, A_2) \wedge \ldots \wedge b_n(A_n, Y)$, where upper-case letters represent variables. A *rule head* is denoted by $r(\cdots)$ and a *rule body* by the conjunction of atoms $b_1(\cdots), \ldots, b_n(\cdots)$. We note that $r(c_i, c_j)$ is equivalent to the fact triple $(c_i, r, c_j)$.
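+ To illustrate the rule format, here is a tiny sketch of grounding one such rule against a set of triples; the example relations and entities are made up, and this is a simplified stand-in for what a rule miner such as AnyBURL actually does.
+
+ ```python
+ # Rule: bornIn(X, Y) <- livesIn(X, A2) ^ locatedIn(A2, Y)
+ triples = {("alice", "livesIn", "berlin"), ("berlin", "locatedIn", "germany")}
+
+ def ground_rule(x, body_relations, triples):
+     """Return all Y reachable from x by following the rule body's relations in order."""
+     frontier = {x}
+     for rel in body_relations:
+         frontier = {o for (s, r, o) in triples if r == rel and s in frontier}
+     return frontier
+
+ # Predicts the rule head bornIn(alice, germany).
+ print(ground_rule("alice", ["livesIn", "locatedIn"], triples))   # {'germany'}
+ ```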
31
+
32
+ However, these methods have limitations. For example, rules mined from different KGs may have different qualities, which makes it hard for the reasoner to select rules. Figure [1](#page-1-0) shows such a difference. Rules are sorted by their accuracy in predicting the target entities. The top rules from WN18RR are much more valuable than those from FB15K-237.
33
+
34
+ **Walk-based Methods.** Given a query $(e_s, r_q, ?)$, walk-based methods train an RL agent to find a path from $e_s$ to the desired object entity $e_o$ that implies the query relation $r_q$. At step $t$, the current state is represented by a tuple $s_t = (e_t, (e_s, r_q))$, where $e_t$ is the current entity. The agent then samples the next relation-entity pair to visit from the possible actions $A_t = \{(r', e') \mid (e_t, r', e') \in \mathcal{G}\}$. The agent receives a reward when it reaches $e_o$.
35
+
36
+ RuleGuider consists of a symbolic-based method (see Section [2\)](#page-0-1), referred to as *rule miner*, and a walk-based method, referred to as *agent*. The rule
37
+
38
+ <span id="page-1-1"></span>![](_page_1_Figure_8.jpeg)
39
+
40
+ Figure 2: The architecture of two agents. The relation and entity agent interact with each other to generate a path. At each step, the entity agent first selects an entity from valid entities. The relation agent then samples a relation based on the selected entity. At the final step, they receive a hit reward based on the last selected entity and a rule guidance reward from the pre-mined rule set based on the selected path.
41
+
42
+ miner first mines logic rules, and the agent traverses the KG to learn the probability distribution of reasoning paths under the guidance (via the reward) of the rules. As the agent walks through relations and entities alternately, we propose to separate the agent into two sub-agents: a relation agent and an entity agent. After the separation, the search space is significantly pruned. Figure [2](#page-1-1) shows the structure of these two agents in detail.
43
+
44
+ **Relation Agent.** At step $t$ ($t = 1, \cdots, T$, where $T$ is the number of hops), the relation agent selects a single relation $r_t$ which is incident to the current entity $e_{t-1}$, where $e_0 = e_s$. Given a query $(e_s, r_q, ?)$ and a set of rules $R$, this process can be formulated as $r_t = P^R(r_q, e_{t-1}, R, \boldsymbol{h}_t^R)$, where $\boldsymbol{h}_t^R$ is the relation history. The agent first filters out rules whose heads are not the same as $r_q$, and then it selects $r_t$ from the $t$-th atoms of the remaining rule bodies, i.e., $b_t(\cdots)$ in the rule pattern.
45
+
46
+ Since the rule miner provides confidence scores of rules, we first use RL techniques to pre-train this agent using the scores. During training, the agent applies the pre-trained strategy (distribution) and keeps tuning the distribution by utilizing semantic information provided by embeddings. In other words, the relation agent leverages both the confidence scores of pre-mined rules and the embedding-shaped hit rewards.
47
+
48
+ **Entity Agent.** At step $t$, the agent generates the distribution over all candidate entities based on $e_s$, $r_q$, and the entity history $\boldsymbol{h}_t^E$. Given the current relation $r_t$, this process can formally be represented as $e_t = P^E(e_s, r_q, r_t, \boldsymbol{h}_t^E)$. The agent selects an entity from all entities incident on $r_t$. In this way, the entity and relation agents can reason independently.
49
+
50
+ In experiments, we have also tried to let the entity agent generate its distribution over the entity space pruned by the relation agent. In this way, the entity agent takes in the selected relation and can leverage the information from the relation agent. However, the pruned entity space may be extremely small and hard to learn from, which makes the entity agent less effective, especially on large and dense KGs.
51
+
52
+ **Policy Network.** The relation agent's search policy is parameterized by the embedding of $r_q$ and $\boldsymbol{h}_t^R$. The relation history is encoded using an LSTM (Hochreiter and Schmidhuber, 1997): $\boldsymbol{h}_t^R = \text{LSTM}(\boldsymbol{h}_{t-1}^R, \boldsymbol{r}_{t-1})$, where $\boldsymbol{r}_{t-1} \in \mathbb{R}^d$ is the embedding of the last relation. We initialize $\boldsymbol{h}_0^R = \text{LSTM}(\boldsymbol{0}, \boldsymbol{r}_s)$, where $\boldsymbol{r}_s$ is a special start relation embedding that forms an initial relation-entity pair with the source entity embedding $\boldsymbol{e}_s$. The relation space embeddings $\boldsymbol{R}_t \in \mathbb{R}^{|R_t| \times d}$ consist of the embeddings of all the relations in the relation space $R_t$ at step $t$. Finally, the relation agent outputs a probability distribution $\boldsymbol{d}_t^R = \sigma(\boldsymbol{R}_t \times \boldsymbol{W}_1 \, \text{ReLU}(\boldsymbol{W}_2[\boldsymbol{h}_t^R; \boldsymbol{r}_q]))$ and samples a relation from it, where $\sigma$ is the softmax operator and $\boldsymbol{W}_1$ and $\boldsymbol{W}_2$ are trainable parameters. We design the relation agent's history-dependent policy as $\boldsymbol{\pi}^R = (\boldsymbol{d}_1^R, \boldsymbol{d}_2^R, \dots, \boldsymbol{d}_T^R)$.
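+
+ To make the policy step concrete, the following is a minimal sketch (assuming PyTorch, not the authors' released code) of how $\boldsymbol{d}_t^R$ can be computed and sampled; the module name `RelationPolicy` and the dimensions are illustrative.
+
+ ```
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class RelationPolicy(nn.Module):
+     def __init__(self, rel_dim, hidden_dim):
+         super().__init__()
+         self.lstm = nn.LSTMCell(rel_dim, hidden_dim)           # encodes the relation history h_t^R
+         self.W2 = nn.Linear(hidden_dim + rel_dim, hidden_dim)  # W_2 applied to [h_t^R ; r_q]
+         self.W1 = nn.Linear(hidden_dim, rel_dim)               # W_1, projects back to relation space
+
+     def forward(self, h_c, r_prev, r_q, R_t):
+         """h_c: previous (h, c) LSTM state or None; r_prev, r_q: relation embeddings [d];
+         R_t: embeddings of the candidate relations at step t, shape [|R_t|, d]."""
+         h, c = self.lstm(r_prev.unsqueeze(0), h_c)             # h_t^R = LSTM(h_{t-1}^R, r_{t-1})
+         scores = R_t @ self.W1(F.relu(self.W2(torch.cat([h.squeeze(0), r_q]))))
+         d_t = torch.softmax(scores, dim=-1)                    # distribution over candidate relations
+         r_idx = torch.multinomial(d_t, 1)                      # sample the next relation
+         return d_t, r_idx, (h, c)
+ ```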
53
+
54
+ Similarly, the entity agent's history-dependent policy is $\boldsymbol{\pi}^E = (\boldsymbol{d}_1^E, \boldsymbol{d}_2^E, \dots, \boldsymbol{d}_T^E)$. The entity agent uses its embedding of the last step $\boldsymbol{e}_{t-1}$, the entity space embeddings $\boldsymbol{E}_t$, and its history $\boldsymbol{h}_t^E = \text{LSTM}(\boldsymbol{h}_{t-1}^E, \boldsymbol{e}_{t-1})$ to compute the probability distribution over entities $\boldsymbol{d}_t^E = \sigma(\boldsymbol{E}_t \times \boldsymbol{W}_3\,\text{ReLU}(\boldsymbol{W}_4[\boldsymbol{h}_t^E; \boldsymbol{r}_q; \boldsymbol{e}_s; \boldsymbol{e}_t]))$, where $\boldsymbol{W}_3$ and $\boldsymbol{W}_4$ are trainable parameters. Note that the entity agent uses a different LSTM to encode the entity history.
55
+
56
+ We train the model by letting the two aforementioned agents to start from specific entities and traverse through the KG in a fixed number of hops. The agents receive rewards at their final step.
57
+
58
+ **Reward Design.** Given a query, the relation agent prefers paths which direct the way to the correct object entity. Thus, given a relation path, we give reward according to its confidence retrieved from the rule miner, referred to as *rule guidance reward*
59
+
60
+ $R_r$ . We also add a Laplace smoothing $p_c=5$ to the confidence score for the final $R_r$ .
61
+
62
+ In addition to $R_r$, the agent will also receive a hit reward $R_h$, which is 1 if the predicted triple $\epsilon = (e_s, r_q, e_T) \in \mathcal{G}$. Otherwise, we use the embedding of $\epsilon$ to measure reward as in Lin et al. (2018): $R_h = \mathbb{I}(\epsilon \in \mathcal{G}) + (1 - \mathbb{I}(\epsilon \in \mathcal{G})) f(\epsilon)$, where $\mathbb{I}(\cdot)$ is an indicator function and $f(\epsilon)$ is a composition function for reward shaping using embeddings.
63
+
64
+ **Training Procedure.** We train the model in four stages. 1) Train relation and entity embeddings using an embedding-based method. 2) Apply a rule miner to retrieve rules and their associated confidence scores. 3) Pre-train the relation agent by freezing the entity agent and asking the relation agent to sample a path. We only use the rule miner to evaluate the path and compute $R_r$ based on the pre-mined confidence score. 4) Jointly train the relation and entity agent to leverage the embeddings to compute $R_h$ . The final reward R involves $R_r$ and $R_h$ with a constant factor $\lambda$ : $R = \lambda R_r + (1 - \lambda) R_h$ . The policy networks of two agents are trained using the REINFORCE (Williams, 1992) algorithm to maximize R.
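+
+ As an illustration of the combined training signal, here is a minimal sketch in Python; the exact smoothing form and the default $\lambda$ are illustrative assumptions, since the paper only specifies the smoothing constant $p_c = 5$ and a constant factor $\lambda$.
+
+ ```
+ def final_reward(path_confidence, in_kg, embed_score, lam=0.65, p_c=5.0):
+     """path_confidence: rule-miner confidence of the walked relation path (0 if no rule matches);
+     in_kg: whether (e_s, r_q, e_T) is in G; embed_score: the embedding-based shaping term f(eps)."""
+     r_rule = (path_confidence + p_c) / (1.0 + p_c)   # one possible Laplace-style smoothing with p_c = 5
+     r_hit = 1.0 if in_kg else embed_score            # hit reward with embedding-based shaping
+     return lam * r_rule + (1.0 - lam) * r_hit        # R = lambda * R_r + (1 - lambda) * R_h
+ ```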
2006.15057/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2006.15057/paper_text/intro_method.md ADDED
@@ -0,0 +1,60 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ In this section we briefly review variational autoencoders and Watson's perceptual model.
4
+
5
+ Samples from VAEs [@kingma2013auto] are drawn from $p(\ensuremath{\mathbf{x}}) = \int p(\ensuremath{\mathbf{x}}\vert \ensuremath{\mathbf{z}}) p(\ensuremath{\mathbf{z}})\,\text{d}\ensuremath{\mathbf{z}}$, where $p(\ensuremath{\mathbf{z}})$ is a prior distribution that can be freely chosen and $p(\ensuremath{\mathbf{x}}\vert \ensuremath{\mathbf{z}})$ is typically modeled by a deep neural network. The model is trained using a variational lower bound on the likelihood $$\begin{equation}
6
+ \label{eq:varbound}
7
+ \log p(\ensuremath{\mathbf{x}}) \geq \ensuremath{\mathbb{E}}_{q(\ensuremath{\mathbf{z}}| \ensuremath{\mathbf{x}})}\left\{\log p(\ensuremath{\mathbf{x}}| \ensuremath{\mathbf{z}})\right\} - \beta\ensuremath{\text{KL}(q(\ensuremath{\mathbf{z}}| \ensuremath{\mathbf{x}}) \Vert p(\ensuremath{\mathbf{z}}))} \enspace,
8
+ \end{equation}$$ where $q(\ensuremath{\mathbf{z}}| \ensuremath{\mathbf{x}})$ is an encoder function designed to approximate $p(\ensuremath{\mathbf{z}}| \ensuremath{\mathbf{x}})$ and $\beta$ is a scaling factor. We choose $p(\ensuremath{\mathbf{z}})=\mathcal{N}(0,I)$ and $q(\ensuremath{\mathbf{z}}| \ensuremath{\mathbf{x}})=\mathcal{N}(\mu_{\ensuremath{\mathbf{z}}}(\ensuremath{\mathbf{x}}),\Sigma_{\ensuremath{\mathbf{z}}}(\ensuremath{\mathbf{x}}))$, where the covariance matrix $\Sigma_{\ensuremath{\mathbf{z}}}(\ensuremath{\mathbf{x}})$ is restricted to be diagonal and both $\mu_{\ensuremath{\mathbf{z}}}$ and $\Sigma_{\ensuremath{\mathbf{z}}}(\ensuremath{\mathbf{x}})$ are modelled by deep neural networks.
9
+
10
+ It is possible to incorporate a wide range of loss functions into VAE-training. If we choose $p(\ensuremath{\mathbf{x}}| \ensuremath{\mathbf{z}})\propto \exp(-L(\ensuremath{\mathbf{x}}, \mu_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}}))$, where $\mu_\ensuremath{\mathbf{x}}$ is a neural network and we ensure that $L$ leads to a proper probability function, the first term of [\[eq:varbound\]](#eq:varbound){reference-type="eqref" reference="eq:varbound"} becomes $$\begin{equation}
11
+ \ensuremath{\mathbb{E}}_{q(\ensuremath{\mathbf{z}}| \ensuremath{\mathbf{x}})}\left\{\log p(\ensuremath{\mathbf{x}}| \ensuremath{\mathbf{z}})\right\}
12
+ = - \ensuremath{\mathbb{E}}_{q(\ensuremath{\mathbf{z}}| \ensuremath{\mathbf{x}})}\left\{L(\ensuremath{\mathbf{x}}, \mu_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}}))\right\} + \text{const}\enspace.
13
+ \end{equation}$$ Choosing $L$ freely comes at the price that we typically lose the ability to sample from $p(\ensuremath{\mathbf{x}})$ directly. If the loss is a valid unnormalized log-probability, Markov Chain Monte Carlo methods can be applied. In most applications, however, it is assumed that $\mu_\ensuremath{\mathbf{x}}(\ensuremath{\mathbf{z}}), \ensuremath{\mathbf{z}}\sim{}p(\ensuremath{\mathbf{z}})$ is a good approximation of $p(\ensuremath{\mathbf{x}})$ and most articles present means instead of samples. Typical choices for $L$ are the squared loss $L_2(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{x}}')=\lVert \ensuremath{\mathbf{x}}- \ensuremath{\mathbf{x}}'\rVert^2$ and $p$-norms $L_p(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{x}}')=\lVert \ensuremath{\mathbf{x}}- \ensuremath{\mathbf{x}}'\rVert_p$. A generalization of $p$-norm based losses is the "General and Adaptive Robust Loss Function" [@barron2019adaptive], which we refer to as *Adaptive-Loss*. When used to train VAEs for image generation, the Adaptive-Loss is applied to 2D DCT transformations of entire images. Roughly speaking, it then adapts one shape parameter (similar to a $p$-value) and one scaling parameter per frequency during training, simultaneously learning a loss function and a generative model. A common visual similarity metric based on image fidelity is given by Structured Similarity (SSIM) [@wang2004image], which bases its calculation on the covariance of patches. We refer to section [6](#sec:ssim){reference-type="ref" reference="sec:ssim"} in the supplementary material for a description of SSIM.
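+
+ For concreteness, a minimal sketch (assuming PyTorch; not taken from the paper) of how a custom reconstruction loss $L$ plugs into this training objective with the diagonal-Gaussian encoder described above:
+
+ ```
+ import torch
+
+ def vae_loss(x, x_hat, mu, logvar, recon_loss, beta=1.0):
+     """recon_loss(x, x_hat) plays the role of L above; the KL term assumes
+     q(z|x) = N(mu, diag(exp(logvar))) and p(z) = N(0, I)."""
+     kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0, dim=1).mean()
+     return recon_loss(x, x_hat) + beta * kl
+
+ # e.g. a p-norm reconstruction loss, averaged over the batch
+ def lp_loss(x, x_hat, p=2):
+     return torch.norm((x - x_hat).flatten(1), p=p, dim=1).mean()
+ ```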
14
+
15
+ Another approach to define loss functions is to extract features using a deep neural network and to measure the differences between the features from original and reconstructed images [@hou2017deep]. In [@hou2017deep], it is proposed to consider the first five layers $\mathcal{L}= \{1, \dots, 5\}$ of VGGNet [@VGG17]. In [@zhang2018perceptual], different feature extraction networks, including AlexNet [@krizhevsky2012imagenet] and SqeezeNet [@iandola2016squeezenet], are tested. Furthermore, the metrics are improved by weighting each feature based on data from human perception experiments (see Section [4.1](#sec:2afc){reference-type="ref" reference="sec:2afc"}). With adaptive weights $\omega_{lc} \geq 0$ for each feature map, the resulting loss function reads $$\begin{equation}
16
+ \label{eq:vgg-loss}
17
+ L_\text{fcw}(\ensuremath{\mathbf{x}}, \ensuremath{\mathbf{x}}') = \sum_{l\in\mathcal{L}} \frac{1}{H_l W_l} \sum_{h,w,c=1}^{H_l, W_l, C_l}
18
+ \omega_{lc} (y^l_{hwc} - \hat{y}^l_{hwc})^2\enspace,
19
+ \end{equation}$$ where $H_l$, $W_l$ and $C_l$ are the height, width and number of channels (feature maps) in layer $l$. The normalized $C_l$-dimensional feature vectors are denoted by $y^l_{hw}=\ensuremath{\mathcal{F}}^l_{hw}(\ensuremath{\mathbf{x}})/\lVert \ensuremath{\mathcal{F}}^l_{hw}(\ensuremath{\mathbf{x}})\rVert$ and $\hat{y}^l_{hw}=\ensuremath{\mathcal{F}}^l_{hw}(\ensuremath{\mathbf{x}}')/\lVert \ensuremath{\mathcal{F}}^l_{hw}(\ensuremath{\mathbf{x}}')\rVert$, where $\ensuremath{\mathcal{F}}^l_{hw}(\ensuremath{\mathbf{x}})\in\mathbb R^{C_l}$ contains the features of image $\ensuremath{\mathbf{x}}$ in layer $l$ at spatial coordinates $h, w$ (see [@zhang2018perceptual] for details).
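+
+ A minimal sketch (assuming PyTorch) of this weighted feature comparison; the feature lists and the per-channel weights `w` stand in for the VGG-style activations and the learned $\omega_{lc}$, and the batch reduction is an illustrative choice:
+
+ ```
+ import torch
+
+ def feature_comparison_loss(feats_x, feats_y, weights, eps=1e-10):
+     """feats_x, feats_y: per-layer activations [B, C_l, H_l, W_l]; weights: per-layer channel weights [C_l]."""
+     loss = 0.0
+     for fx, fy, w in zip(feats_x, feats_y, weights):
+         fx = fx / (fx.norm(dim=1, keepdim=True) + eps)   # unit-normalize each feature vector y_hw
+         fy = fy / (fy.norm(dim=1, keepdim=True) + eps)
+         per_pixel = ((fx - fy).pow(2) * w.view(1, -1, 1, 1)).sum(dim=1)  # sum_c w_lc (y - y_hat)^2
+         loss = loss + per_pixel.mean(dim=(1, 2)).mean()                   # (1 / H_l W_l) sum over h,w, then batch mean
+     return loss
+ ```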
20
+
21
+ Watson's perceptual model of the human visual system [@watsonmodel] describes an image as a composition of base images of different frequencies. It accounts for the perceptual impact of luminance masking, contrast masking, and sensitivity. Input images are first divided into $K$ disjoint blocks of $B\times B$ pixels, where $B=8$. Each block is then transformed into frequency-space using the DCT. We denote the DCT coefficient $(i, j)$ of the $k$-th block by $\ensuremath{\mathbf{C}}_{ijk}$ for $1 \leq i, j \leq B$ and $1\le k\leq K$.
22
+
23
+ The Watson model computes the loss as weighted $p$-norm (typically $p=4$) in frequency-space $$\begin{equation}
24
+ D_{\text{Watson}} (\ensuremath{\mathbf{C}}, \ensuremath{\mathbf{C}}') = \sqrt[p]{\sum_{i,j,k=1}^{B,B,K} \bigg| \frac{\ensuremath{\mathbf{C}}_{ijk} - \ensuremath{\mathbf{C}}'_{ijk}}{\ensuremath{\mathbf{S}}_{ijk}} \bigg| ^{p}} \enspace,
25
+ \end{equation}$$ where $\ensuremath{\mathbf{S}}\in \mathbb{R}^{K \times B\times B}$ is derived from the DCT coefficients $\ensuremath{\mathbf{C}}$. The loss is not symmetric as $\ensuremath{\mathbf{C}}'$ does not influence $\ensuremath{\mathbf{S}}$. To compute $\ensuremath{\mathbf{S}}$, an image-independent sensitivity table $\textbf{T}\in \mathbb{R}^{B\times B}$ is defined. It stores the sensitivity of the image to changes in its individual DCT components. The table is a function of a number of parameters, including the image resolution and the distance of an observer to the image. It can be chosen freely dependent on the application, a popular choice is given in [@watermarkingbook]. Watson's model adjusts $\textbf{T}$ for each block according to the block's luminance. The luminance-masked threshold $\ensuremath{\mathbf{T}_{\mathbf{L}_{ijk}}}$ is given by $$\begin{equation}
26
+ \label{eq:lum-mask}
27
+ \ensuremath{\mathbf{T}_{\mathbf{L}_{ijk}}}= T_{ij} \bigg( \frac{\ensuremath{\mathbf{C}}_{00k}}{\bar{\ensuremath{\mathbf{C}}}_{00}} \bigg) ^{\alpha}\enspace,
28
+ \end{equation}$$ where $\alpha$ is a constant with a suggested value of $0.649$, $\ensuremath{\mathbf{C}}_{00k}$ is the d.c. coefficient (average brightness) of the $k$-th block in the original image, and $\bar{\ensuremath{\mathbf{C}}}_{00}$ is the average luminance of the entire image. As a result, brighter regions of an image are less sensitive to changes.
29
+
30
+ Contrast masking accounts for the reduction in visibility of one image component by the presence of another. If a DCT frequency is strongly present, an absolute change in its coefficient is less perceptible compared to when the frequency is less pronounced. Contrast masking gives $$\begin{equation}
31
+ \label{eq:contrast-mask}
32
+ \ensuremath{\mathbf{S}}_{ijk} = \max( \ensuremath{\mathbf{T}_{\mathbf{L}_{ijk}}}, \ \lvert \ensuremath{\mathbf{C}}_{ijk} \rvert^{r} \ \ensuremath{\mathbf{T}_{\mathbf{L}_{ijk}}}^{(1-r)}) \enspace,
33
+ \end{equation}$$ where the constant $r\in[0,1]$ has a suggested value of $0.7$.
34
+
35
+ To make the loss function differentiable we replace the maximization in the computation of $\ensuremath{\mathbf{S}}$ by a smooth-maximum function $\text{smax}(x_1,x_2,\dots)=\frac {\sum_i x_i e^{x_i}}{\sum_j e^{x_j}}$ and the equation for $\ensuremath{\mathbf{S}}$ becomes $$\begin{equation}
36
+ \label{eq:contrast-mask-mod}
37
+ \tilde{\ensuremath{\mathbf{S}}}_{ijk} = \text{smax}( \ensuremath{\mathbf{T}_{\mathbf{L}_{ijk}}}, \ \lvert \ensuremath{\mathbf{C}}_{ijk} \rvert^{r} \ \ensuremath{\mathbf{T}_{\mathbf{L}_{ijk}}}^{(1-r)}) \enspace.
38
+ \end{equation}$$ For numerical stability, we introduce a small constant $\epsilon=10^{-10}$ and arrive at the trainable Watson-loss for the coefficients of a single channel $$\begin{equation}
39
+ \label{eq:wat-dct}
40
+ L_{\text{Watson}} (\ensuremath{\mathbf{C}}, \ensuremath{\mathbf{C}}') = \sqrt[p]{\epsilon + \sum_{i,j,k=1}^{B,B,K}\bigg| \frac{\ensuremath{\mathbf{C}}_{ijk} - \ensuremath{\mathbf{C}}'_{ijk}}{\tilde{\ensuremath{\mathbf{S}}}_{ijk}} \bigg| ^{p}} \enspace.
41
+ \end{equation}$$
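+
+ Putting the pieces together, a minimal sketch (assuming PyTorch) of the single-channel loss above, operating on per-block DCT coefficients; the batching and numerical safeguards are illustrative choices:
+
+ ```
+ import torch
+
+ def watson_loss(C, C_prime, T, alpha=0.649, r=0.7, p=4.0, eps=1e-10):
+     """C, C_prime: DCT coefficients of shape [K, B, B] for K blocks; T: sensitivity table [B, B]."""
+     # Luminance masking: scale T by each block's d.c. coefficient relative to the image mean.
+     c00 = C[:, 0, 0]
+     lum = (c00 / (c00.mean() + eps)).clamp_min(eps) ** alpha
+     T_L = T.unsqueeze(0) * lum.view(-1, 1, 1)
+     # Contrast masking with the smooth maximum smax(a, b) = (a*e^a + b*e^b) / (e^a + e^b).
+     masked = C.abs().clamp_min(eps) ** r * T_L ** (1.0 - r)
+     stacked = torch.stack([T_L, masked], dim=0)
+     S = (stacked * torch.softmax(stacked, dim=0)).sum(dim=0)
+     # Weighted p-norm over all blocks and frequencies.
+     return (eps + ((C - C_prime).abs() / S) ** p).sum() ** (1.0 / p)
+ ```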
42
+
43
+ Watson's perceptual model is defined for a single channel (i.e., greyscale). To make the model applicable to color images, we aggregate the loss calculated on multiple separate channels to a single loss value.[^1] We represent color images in the YCbCr format, consisting of the luminance channel Y and chroma channels Cb and Cr. We calculate the single-channel losses separately and weight the results. Let $L_{\text{Y}}$, $L_{\text{Cb}}$, $L_{\text{Cr}}$ be the loss values in the luminance, blue-difference and red-difference components for any greyscale loss function. Then the corresponding multi-channel loss $L$ is calculated as $$\begin{equation}
44
+ \label{eq:watson-color}
45
+ L = \lambda_{\text{Y}} L_{\text{Y}} + \lambda_{\text{Cb}} L_{\text{Cb}} + \lambda_{\text{Cr}} L_{\text{Cr}}\enspace,
46
+ \end{equation}$$ where the weighting coefficients are learned from data, see below.
47
+
48
+ In order to be less sensitive to small translational shifts, we replace the DCT with a discrete Fourier Transform (DFT), which is in accordance with Watson's original work (e.g., [@watson85; @watson87]). The later use of the DCT was most likely motivated by its application within JPEG [@wallace1992jpeg; @watson1994image]. The DFT separates a signal into amplitude and phase information. Translation of an image affects phase, but not amplitude. We apply Watson's model on the amplitudes while we use the cosine-distance for changes in phase information. Let $\ensuremath{\mathbf{A}}\in \mathbb{R}^{B \times B}$ be the amplitudes of the DFT and let $\Phi \in \mathbb{R}^{B \times B}$ be the phase-information. We then obtain $$\begin{equation}
49
+ \label{eq:wat-dft}
50
+ L_{\text{Watson-{DFT}}} (\ensuremath{\mathbf{A}}, \Phi, \ensuremath{\mathbf{A}}', \Phi') = L_{\text{Watson}}(\ensuremath{\mathbf{A}},\ensuremath{\mathbf{A}}')
51
+ + \sum_{i,j,k=1}^{B,B,K} w_{ij}
52
53
+ \arccos\left[\cos( \Phi_{ijk} - \Phi'_{ijk} )\right]\enspace,
54
+ \end{equation}$$ where $w_{ij} > 0$ are individual weights of the phase-distances that can be learned (see below).
55
+
56
+ The change of representation going from DCT to DFT disentangles amplitude and phase information, but does not increase the number of parameters as the DFT of real images results in a Hermitian complex coefficient matrix (i.e., the element in row $i$ and column $j$ is the complex conjugate of the element in row $j$ and column $i$) .
57
+
58
+ Computing the loss from disjoint blocks works for the original application of Watson's perceptual model, lossy compression. However, a powerful generative model can take advantage of the static blocks, leading to noticeable artifacts at block boundaries. We solve this problem by randomly shifting the block-grid in the loss-computation during training. The offsets are drawn uniformly in the interval $\llbracket -4,4\rrbracket$ in both dimensions. In expectation, this is equivalent to computing the loss via a sliding window as in SSIM.
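+
+ A minimal sketch of this random grid shift (assuming a PyTorch block-based loss `loss_fn` on 4-D image tensors; names are illustrative):
+
+ ```
+ import random
+ import torch.nn.functional as F
+
+ def randomly_shifted_loss(loss_fn, x, x_hat, max_shift=4):
+     """x, x_hat: image tensors [B, C, H, W]; loss_fn: a block-based loss such as the Watson loss."""
+     dy = random.randint(-max_shift, max_shift)
+     dx = random.randint(-max_shift, max_shift)
+     pad = (max_shift, max_shift, max_shift, max_shift)
+     x, x_hat = F.pad(x, pad, mode="replicate"), F.pad(x_hat, pad, mode="replicate")
+     h, w = x.shape[-2:]
+     crop = lambda t: t[..., max_shift + dy:h - max_shift + dy, max_shift + dx:w - max_shift + dx]
+     return loss_fn(crop(x), crop(x_hat))   # the same random offset for both images shifts the block grid
+ ```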
59
+
60
+ When benchmarking Watson's perceptual model with the suggested parameters on data from a Two-Alternative Forced-Choice (2AFC) task measuring human perception of image similarity, see Subsection [4.1](#sec:2afc){reference-type="ref" reference="sec:2afc"}, we found that the model underestimated differences in images with strong high-frequency components. This allows compression algorithms to improve compression ratios by omitting noisy image patterns, but does not model the full range of human perception and can be detrimental in image generation tasks, where the underestimation of errors in these frequencies might lead to the generation of an unnatural amount of noise. We solve this problem by training all parameters of all loss variants, including $p, \textbf{T}, \alpha, r, w_{ij}$ and for color images $\lambda_{\text{Y}}, \lambda_{\text{Cb}}$ and $\lambda_{\text{Cr}}$, on the 2AFC dataset (see Section [4.1](#sec:2afc){reference-type="ref" reference="sec:2afc"}).
2101.08314/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="www.draw.io" modified="2020-06-25T15:40:24.320Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" etag="rs5DxvFRVDdMaJ6J5zvp" version="13.3.1" type="device"><diagram id="C5RBs43oDa-KdzZeNtuy" name="Page-1">7Ztbd6IwEIB/jY96CAGUx9babnfbnt3T3bp9pJICWyQW4q2/foMEkJAeUbnUlifNBCY430wyk2AHDqerK9+Y2bfYRG5HlsxVB150ZBko0oB+hJJ1JOkrTGD5jskuSgX3zhtiQolJ546JgsyFBGOXOLOscII9D01IRmb4Pl5mL3vGbnbUmWGhnOB+Yrh56dgxiR1JB3I/lX9DjmXHIwNNj3qmRnwx+yWBbZh4uSWCow4c+hiT6Nt0NURuaLzYLuPr9di9edGuvv8KXo0/5z9+3z10I2WX+9yS/AQfeeRg1Xf4XyAvxt0HfXwGry9GZLAeslukheHOmb06subSQc6fMR2L/miyZpbUXuc47ugGG85n9AL6XKu0k36zws9rzyGO4TpvBnGwF+ukDxepjS5iZk1GkH0890wUPq9Eu5e2Q9D9zJiEvUvqnVRmk6lLWyC5e4F8glYc6h12Agk86vUITxHx1/Q+pgUqzCjM4XXWXKbeA2KXsLc8R2MygzmslWhOodAvjMsejJSjGQFVxOgGLTax3g2viFU++XH3FaUx2+4qTg80Rw/0s/RkNY9vIKCnVEVPbZaefFL0EjQfhZ7WLL3b7mlFn6J8MH79hvmdFD1V/WD0dAE9znDIM8/CLJG2POyhrKGoMfz13zCd6Klx83G772LFco2otWataBRk5jLLIual+a3hW4jsyrryGLbMrArMHMt85NKsapF9NpHt2Qg/sbNx9ThGJY4y4PAFeO5PELtrO6HkFYGsIsBnQJEdcoo2rpD87MO9A1SVvPZ6vaJxSyOQZH0uID5+QUPsYj/1yWfHdTkRTY4tjzYn1KUQlZ+H8ezQuuWMdUwd0wyHEc4G2Uy5qgkB8suxlp8QFIGnylVNCEA+ckY4KrKbilgZlBSxqr5DUdURKypl3sc3cY0gcCbF5nSp4Tld+RQewidwsO45XVQunYKHNEUeAr2nlsNeoEqtmb6o3Grpv09f6ZdGX6CqbvqiYq2l/z59dVAafYGquukP9qL/SVM7CA8EmMsA9JrxHVurnyS+/JJ5MEDB6lszwrhq+1oI8+vewQgFS2jdCMFXRJhfvA5GKFgH60bYbnEcBVDfoahqfHAvfG0SC/XyltC8qn7N9Os4q5fz500ndcyU7CnxkJo6ZpLbTadjY1Y5eNMpr6rmwlOu6ox/iD0aPBbyaHzR3BpeFo5SG0+f5sHuCBUcK1UWtFASU9p+LUp0FjSoLGpL2C46yegD3DE94E1cuF6RdiiqOvK+5JYPj0/hC4yi+Hb6QdX4yns9I7NipgtoQ0d50Ypw+v7BlUIKn0BV7B/xilH6wnqH29czNkUz966r0vTrGXC/7ac2kc5nv2p5xa9ac/EL5Yri/REFbcCLEjZVaTrgRbtdZSAfeWYlfxqps2LSuHDUkgDdwtUX4OKngAK4aDP9I1IU0OnfueDoPw==</diagram></mxfile>
2101.08314/main_diagram/main_diagram.pdf ADDED
Binary file (12.7 kB). View file
 
2101.08314/paper_text/intro_method.md ADDED
@@ -0,0 +1,464 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Strategic interactions among interconnected agents are commonly modeled using the network, or graphical, game formalism (Kearns, Littman, and Singh, 2001; Jackson and Zenou, 2015). In such games, the utility of an agent depends on his own actions as well as those by its network neighbors. Many variations of games on networks have been considered, with applications including the provision of public goods (Allouch, 2015; Buckley and Croson, 2006; Khalili, Zhang, and Liu, 2019; Yu et al., 2020), security (Hota and Sundaram, 2018; La, 2016; Vorobeychik and Letchford, 2015), and financial markets (Acemoglu et al., 2012).
4
+
5
+ ![](_page_0_Figure_14.jpeg)
6
+
7
+ Figure 1: An illustration of a multi-scale (3-level) network.
8
+
9
+ While network games are a powerful modeling framework, they fail to capture a common feature of human organization: groups and communities. Indeed, investigation of communities, or close-knit groups, in social networks is a major research thread in network science. Moreover, such groups often have a hierarchical structure (Clauset, Moor, and Newman, 2008; Girvan and Newman, 2002). For example, strategic interactions among organizations in a marketplace often boil down to interactions among their constituent business units, which are, in turn, comprised of individual decision makers. In the end, it is those lowest-level agents who ultimately accrue the consequences of these interactions (for example, corporate profits would ultimately benefit individual shareholders). Moreover, while there are clear interdependencies among organizations, individual utilities are determined by a combination of individual actions of some agents, together with *aggregate* decisions by the groups (e.g., business units, organizations). For example, an employee's bonus is determined in part by their performance in relation to their co-workers, and in part by how well their employer (organization) performs against its competitors in the marketplace.
10
+
11
+ We propose a novel *multi-scale game model* that generalizes network games to capture such hierarchical organization of individuals into groups. Figure 1 offers a stylized example in which three groups (e.g., organizations) are comprised of 2-3 subgroups each (e.g., business units), which are in turn comprised of 2-5 individual agents. Specifically, our model includes an explicit hierarchical network structure that organizes agents into groups across a series of levels. Further, each group is associated with an action which deterministically aggregates the decisions by its constituent agents. The game is grounded at the lowest level, where the agents are associated with scalar actions and utility functions that have modular structure in the strategies taken at each level of the game. For example, in Figure 1, the utility function of an individual member $a_j$ of level-3 group $a_3^{(3)}$ is a function of the strategies of (i) $a_j$'s immediate neighbors (represented by links between pairs of filled-in circles), (ii) $a_j$'s level-2 group and its network neighbor (the small hollow circles), and (iii) $a_j$'s level-3 group, $a_3^{(3)}$ (large hollow circle), and its network neighbors, $a_1^{(3)}$ and $a_2^{(3)}$.
12
+
13
+ Our next contribution is a series of iterative algorithms for computing pure strategy Nash equilibria that explicitly leverage the proposed multi-scale game representation. The first of these simply takes advantage of the compact game representation in computing equilibria. The second algorithm we propose offers a further innovation through an iterative procedure that alternates between game levels, treating groups themselves as pseudo-agents in the process. We present sufficient conditions for the convergence of this algorithm to a pure strategy Nash equilibrium through a connection to Structured Variational Inequalities (He, Yang, and Wang, 2000), although the result is limited to games with two levels. To address the latter limitation, we design a third iterative algorithm that now converges even in games with an arbitrary number of levels.
14
+
15
+ Our final contribution is an experimental evaluation of the proposed algorithms compared to best response dynamics. In particular, we demonstrate orders of magnitude improvements in scalability, enabling us to solve games that cannot be solved using a conventional network game representation.
16
+
17
+ Related Work: Network games have been an active area of research; see, e.g., surveys by Jackson and Zenou (2015) and Bramoullé and Kranton (2016). We now review the most relevant papers. Conditions for the existence, uniqueness and stability of Nash equilibria in network games under general best responses are studied in (Parise and Ozdaglar, 2019; Naghizadeh and Liu, 2017; Scutari et al., 2014; Bramoullé, Kranton, and D'amours, 2014). Variational inequalities (VI) are used in these works to analyze the fixed point and contraction properties of the best response mappings. It is identified in Parise and Ozdaglar (2019); Naghizadeh and Liu (2017); Scutari et al. (2014) that when the Jacobian matrix of the best response mapping is a P-matrix or is positive definite, a feasible unique Nash equilibrium exists and can be obtained by best-response dynamics (Scutari et al., 2014; Parise and Ozdaglar, 2019). In this paper, we extend the analysis of equilibrium and best responses for a conventional network game to a multi-scale network game, where the utility functions are decomposed into separable utility components to which best responses are applied separately. This is similar to the generalization from a conventional VI problem to an SVI problem (He, Yang, and Wang, 2000; He, 2009; He and Yuan, 2012; Bnouhachem, Benazza, and Khalfaoui, 2013).
18
+
19
+ Previous works on network games that involve group or community structure focus on finding such structures; e.g., community detection in networks using game theoretic methods have been studied in (Mcsweeney, Mehrotra, and Oh, 2017; Newman, 2004; Alvari, Hajibagheri, and Sukthankar, 2014). By contrast, our work focuses on analyzing a network game with a given group/community structure, and using the structure as an analytical tool for the analysis of equilibrium and best responses.
20
+
21
+ # Method
22
+
23
+ A general *normal-form game* is defined by a set of agents (players) $I = \{1, \ldots, N\}$, with each agent $a_i$ having an action/strategy space $K_i$ and a utility function $u_i(x_i, \boldsymbol{x}_{-i})$ that $i$ aims to maximize; $x_i \in K_i$ and $\boldsymbol{x}_{-i}$ denotes the actions by all agents other than $i$. We term the collection of strategies of all agents $\boldsymbol{x}$ a strategy profile. We assume $K_i \subset \mathbb{R}$ is a compact set.
24
+
25
+ We focus on computing a *Nash equilibrium (NE)* of a normal-form game, which is a strategy profile with each agent maximizing their utility given the strategies of others. Formally, $x^*$ is a *Nash equilibrium* if for each agent i,
26
+
27
+ $$x_i^* \in \underset{x_i \in K_i}{\operatorname{argmax}} \ u_i(x_i, \boldsymbol{x}_{-i}^*). \tag{1}$$
28
+
29
+ A *network game* encodes structure in the utility functions such that they only depend on the actions by network neighbors. Formally, a network game is defined over a weighted graph (I, E), with each node an agent and E is the set of edges; the agent's utility $u_i(x_i, \boldsymbol{x}_{-i})$ reduces to $u_i(x_i, \boldsymbol{x}_{I_i})$ , where $I_i$ is the set of network neighbors of i, although we will frequently use the former for simplicity.
30
+
31
+ An agent's best response is its best strategy given the actions taken by all the other agents. Formally, the best response is a set defined by
32
+
33
+ $$BR_i(\boldsymbol{x}_{-i}, u_i) = \underset{x_i}{\operatorname{argmax}} \ u_i(x_i, \boldsymbol{x}_{-i}). \tag{2}$$
34
+
35
+ Whenever we deal with games that have a unique best response, we will use the singleton best response set to also refer to the player's best response strategy (the unique member of this set).
36
+
37
+ Clearly, a NE of a game is a fixed point of this best response correspondence. Consequently, one way to compute a NE of a game is through *best response dynamics (BRD)*, which is a process whereby agents iteratively and asynchronously (that is, one agent at a time) take the others' actions as fixed values and play a best response to them.
38
+
39
+ We are going to use this BRD algorithm as a major building block below. One important tool that is useful for analyzing BRD convergence is *Variational Inequalities (VI)*. To establish the connection between NE and VI we assume the utility functions $u_i, \forall i=1,\ldots,N$ , are continuously twice differentiable. Let $K=\prod_{i=1}^N K_i$ and define $F:\mathbb{R}^N\to\mathbb{R}^N$ as follows:
40
+
41
+ $$F(\boldsymbol{x}) := \left(-\nabla_{x_i} u_i(\boldsymbol{x})\right)_{i=1}^{N}.$$
42
+ (3)
43
+
44
+ Then $\boldsymbol{x}^*$ is said to be a solution to VI(K, F) if and only if
45
+
46
+ $$(\boldsymbol{x} - \boldsymbol{x}^*)^T F(\boldsymbol{x}^*) \ge 0, \ \forall \boldsymbol{x} \in K.$$
47
+
48
+ In other words, the solution set to VI(K, F) is equivalent to the set of NE of the game. Now, we can define the condition that will guarantee the convergence of BRD.
49
+
50
+ **Definition 1.** The $P_{\Upsilon}$ condition: The $\Upsilon$ matrix generated from $F: \mathbb{R}^N \to \mathbb{R}^N$ is given as follows
51
+
52
+ $$\Upsilon(F) = \begin{bmatrix} \alpha_1(F) & -\beta_{1,2}(F) & \cdots & -\beta_{1,N}(F) \\ -\beta_{2,1}(F) & \alpha_2(F) & \cdots & -\beta_{2,N}(F) \\ \vdots & \vdots & \ddots & \vdots \\ -\beta_{N,1}(F) & -\beta_{N,2}(F) & \cdots & \alpha_N(F) \end{bmatrix},$$
53
+
54
+ $\alpha_i(F) = \inf_{\boldsymbol{x} \in K} ||\nabla_i F_i||_2$ , $\beta_{i,j}(F) = \sup_{\boldsymbol{x} \in K} ||\nabla_j F_i||_2$ , $i \neq j$ . If $\Upsilon(F)$ is a P-matrix, that is, if all of its principal components have a positive determinant, then we say F satisfies the $P_\Upsilon$ condition.
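+
+ As a concrete illustration, a minimal sketch (assuming NumPy) of the brute-force $P$-matrix test on a given $\Upsilon$ matrix; enumerating all principal minors is only practical for small games:
+
+ ```
+ import itertools
+ import numpy as np
+
+ def is_p_matrix(upsilon, tol=1e-12):
+     """Return True if every principal minor of `upsilon` has a positive determinant."""
+     n = upsilon.shape[0]
+     for size in range(1, n + 1):
+         for idx in itertools.combinations(range(n), size):
+             if np.linalg.det(upsilon[np.ix_(idx, idx)]) <= tol:
+                 return False
+     return True
+ ```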
55
+
56
+ **Theorem 1.** (Scutari et al., 2014) If F satisfies the $P_{\Upsilon}$ condition, then F is strongly monotone on K, and VI(K,F) has a unique solution. Moreover, BRD converges to the unique NE from an arbitrary initial state.
57
+
58
+ Consider a conventional network (graphical) game with the set I of N agents situated on a network G = (I, E), each with a utility function $u_i(x_i, \boldsymbol{x}_{I_i})$ , with $I_i$ the set of i's neighbors, I the full set of agents/nodes and E the set of edges connecting them. Suppose that this network G exhibits the following structure and feature of the strategic dependence among agents: agents can be partitioned into a collection of groups $\{S_k\}$ , where k is a group index, and an agent $a_i$ in the kth group (i.e., $a_i \in S_k$ ) has a utility function that depends (i) on the strategies of its network neighbors in $S_k$ , and (ii) only on the aggregate strategies of groups other than k (see, e.g., Fig. 1). Further, these groups may go on to form larger groups, whose aggregate strategies impact each other's agents, giving rise to a multi-scale structure
59
+
60
+ <sup>&</sup>lt;sup>1</sup>The edges are generally weighted, resulting in a weighted adjacency matrix on which the utility depends.
61
+
62
+ of the network. This kind of structure is very natural in a myriad of situations. For example, members of criminal organizations take stock of individual behavior by members of their own organization, but their interactions with other organizations (criminal or otherwise) are perceived in group terms (e.g., how much another group has harmed theirs). A similar multi-level interaction structure exists in national or ethnic conflicts, organizational competition in a market place, and politics. Indeed, a persistent finding in network science is that networks exhibit a multi-scale interaction structure (i.e., communities, and hierarchies of communities) (Girvan and Newman, 2002; Clauset, Moor, and Newman, 2008).
63
+
64
+ We present a general model to capture such multi-scale structure. Formally, an L-level structure is given by a hierarchical graph structure $\{G^{(l)}\}$ for each level $l, 1 \leq l \leq L$ , where $G^{(l)} = (\{S_k^{(l)}\}_k, E^{(l)})$ represents the level-l structure. The first component, $\{S_k^{(l)}\}_k$ prescribes a partition, where agents in level l-1 form disjoint groups given by this partition; each group is viewed as an agent in level l, denoted as $a_k^{(l)}$ . Notationally, while both $a_k^{(l)}$ and $S_k^{(l)}$ bear the superscript (l), the former refers to a level-l agent, while the latter is the group (of level-(l-1) agents) that the former represents. The set of level-l agents is denoted by $I^{(l)}$ and their total number $N^{(l)}$ . The second component, $E^{(l)}$ , is a set of edges that connect level-l agents, encoding the dependence relationship among the groups they represent. This structure is anchored in level 1 (the lowest level), where sets $S_k^{(1)}$ are singletons, corresponding to agents $a_k$ in the game, who constitute the set I.
65
+
66
+ To illustrate, the multi-scale structure shown in Fig. 1 is given by $G^{(1)} = G = (\{S_k^{(1)}\}_k = I, E^{(1)} = E)$ , as well as how level-1 agents are grouped into level-2 agents, how level-2 agents are further grouped into level-3 agents, and the edges connecting these groups at each level.
67
+
68
+ It should be obvious that the above multi-scale representation of a graphical game is a generalization of a conventional graphical game, as any such game essentially corresponds to a L=1 multi-scale representation. On the other hand, not all conventional graphical games have a meaningful L>1 multi-scale representation (with non-singleton groups of level-1 agents); this is because our assumption that an agent's utility only depends on the aggregate decisions by groups other than the one they belong to implies certain properties of the dependence structure. For the remainder of this paper we will proceed with a given multi-scale structure defined above, while in Appendix G we outline a set of conditions on a graphical game G that allows us to represent it in a (non-trivial) multi-scale fashion.
69
+
70
+ Since the resulting multi-scale network is strictly hierarchical, we can define a *direct supervisor* of agent $a_i^{(l)}$ in level-l to be the agent $a_k^{(l+1)}$ corresponding to the level-(l+1) group k that the former belongs to. Similarly, two agents who belong in the same level-l group k are (level-l) group mates. Finally, note that any level-l agent $a_i$ belongs to exactly one group in each level l. We index a level-l group to which $a_i$ belongs by $k_{il}$ .
71
+
72
+ In order to capture the agent dependence on aggregate actions, we define an aggregation function $\sigma_k^{(l)}$ for each level-l group k that maps individual actions of group members to $\mathbb{R}$ (a group strategy). Specifically, consider a level-l group $S_k^{(l)}$ with level-(l-1) agents in this group playing a strategy profile $\boldsymbol{x}_{S_k^{(l)}}$ . The (scalar) group strategy, which is also the strategy for the corresponding level-(l+1) agent, is determined by the aggregation function,
73
+
74
+ $$x_k^{(l)} = \sigma_k^{(l)}(\mathbf{x}_{S_k^{(l)}}).$$
75
+ (5)
76
+
77
+ A natural example of this is linear (e.g., agents respond to total levels of violence by other criminal organizations): $\sigma_k^{(l)}(\boldsymbol{x}_{S_k^{(l)}}) = \sum_{i \in S_k^{(l)}} x_i^{(l)}.$
78
+
79
+ The L-level structure above is captured strategically by introducing structure into the utility functions of agents. Let $I_{kil}$ denote the set of neighbors of level-l group k to which level-1 agent $a_i$ belongs; i.e., this is the set of level-l groups that interact with agent $a_i$ 's group. This level-1 agent's utility function can be decomposed as follows:
80
+
81
+ $$u_i(x_i, \boldsymbol{x}_{-i}) = \sum_{l=1}^{L} u_{k_{il}}^{(l)} \left( x_{k_{il}}^{(l)}, \boldsymbol{x}_{I_{k_{il}}}^{(l)} \right). \tag{6}$$
82
+
83
+ In this definition, the level-l strategies $x_k^{(l)}$ are implicitly functions of the level-1 strategies of agents that comprise the group, per a recursive application of Eqn. (5). Consequently, the utility is an additive function of the hierarchy of group-level components for increasingly (with l) abstract group of agents. Note that conventional network games are a special case with only a single level (L=1).
84
+
85
+ To illustrate, if we consider just two levels (a collection of individuals and groups to which they directly belong), the utility function of each agent $a_i$ is a sum of two components:
86
+
87
+ $$u_i(x_i, \boldsymbol{x}_{-i}) = u_{k_{i1}}^{(1)} \left( x_{k_{i1}}^{(1)}, \boldsymbol{x}_{I_{k_{i1}}}^{(1)} \right) + u_{k_{i2}}^{(2)} \left( x_{k_{i2}}^{(2)}, \boldsymbol{x}_{I_{k_{i2}}}^{(2)} \right).$$
88
+
89
+ In the first component, $x^{(1)}_{k_{i1}} = x_i$, since level-1 groups correspond to individual agents, whereas $\boldsymbol{x}^{(1)}_{I_{k_{i1}}}$ is the strategy profile of $i$'s neighbors *belonging to the same group as* $i$, given by $E^{(1)}$. The second utility component now depends only on the aggregate strategy $x^{(2)}_{k_{i2}}$ of the group to which $i$ belongs, as well as the aggregate strategies of the groups with which $i$'s group interacts, given by $E^{(2)}$.
90
+
91
+ Consider the BRD algorithm (formalized in Algorithm 1) in which we iteratively select an agent who plays a best response to the strategy of the rest from the previous iteration.
92
+
93
+ ```
94
+ Initialize the game, t = 0, xi(0) = (x0)i, i = 1, · · · , N;
95
+ while not converged do
96
+ for i = 1:N do
97
+ xi(t + 1) = BRi(x−i(t), ui)
98
+ end
99
+ t ← t + 1
100
+ end
101
+ ```
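+
+ A minimal runnable sketch of this loop (assuming NumPy and a user-supplied best-response oracle; the tolerance and iteration cap are illustrative):
+
+ ```
+ import numpy as np
+
+ def best_response_dynamics(best_response, x0, max_iters=1000, tol=1e-8):
+     """best_response(i, x) returns agent i's best response to the current profile x."""
+     x = np.asarray(x0, dtype=float).copy()
+     for _ in range(max_iters):
+         x_prev = x.copy()
+         for i in range(len(x)):            # asynchronous updates, one agent at a time
+             x[i] = best_response(i, x)
+         if np.max(np.abs(x - x_prev)) < tol:
+             break                          # numerically a fixed point of the BR map, i.e., a NE
+     return x
+ ```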
102
+
103
+ The conventional BRD algorithm operates on the "flattened" utility function which evaluates utilities explicitly as functions of the strategies played by all agents $a_i \in I$. Our goal henceforth is to develop algorithms that take advantage of the special multi-scale structure and enable significantly better scalability than standard BRD, while preserving the convergence properties of BRD.
104
+
105
+ The simplest way to take advantage of the multi-scale representation is to directly leverage the structure of the utility function in computing best responses. Specifically, the multi-scale utility function is more compact than one that explicitly accounts for the strategies of all neighbors of i (which includes *all* of the players in groups other than the one i belongs to). This typically results in a direct computational benefit to computing a best response. For example, in a game with a linear best response, this can result in an exponential reduction in the number of linear operations.
106
+
107
+ The resulting algorithm, *Multi-Scale Best-Response Dynamics (MS-BRD)*, which takes advantage of our utility representation is formalized as Algorithm 2. The main difference from BRD is that it explicitly uses the multi-scale utility representation: in each iteration, it updates the aggregated strategies at all levels for the groups to which the most recent best-responding agent belongs. Since MS-BRD simply performs operations identical to BRD but efficiently, its convergence is guaranteed under the same conditions (see Theorem 1). Next, we present iterative algorithms for computing NE that take further advantage of the multi-scale structure, and study their convergence.
108
+
109
+ In order to take full advantage of the multi-scale game structure, we now aim to develop algorithms that treat groups explicitly as agents, with the idea that iterative interactions among these can significantly speed up convergence. Of course, in our model groups are not actual agents in the game: utility functions are only defined for agents in level 1. However, note that we already have well-defined group strategies – these are just the aggregations of agent strategies at the level immediately below, per the aggregation function (5). Moreover, we have natural utilities for groups as well: we can use the corresponding group-level component of the utility of any agent in the group (note that these are identical for all group members in Eqn. (6)). However, using these as group utilities will in fact not work: since ultimately the game is only among the agents in level 1, equilibria of all of the games at more abstract levels *must be consistent with equilibrium strategies in level 1*. On the other hand, we need to enforce consistency only between neighboring levels, since that fully captures the across-level interdependence induced by the aggregation function.
110
+
111
+ ```
+ Initialize the game, t = 0, x_i^(1)(0) = (x_0)_i, i = 1, ..., N
+ for l = 2:L do
+     for k = 1:N^(l) do
+         x_k^(l)(0) = σ_k^(l)(x_{S_k^(l)}(0));
+     end
+ end
+ while not converged do
+     for i = 1:N (Level-1) do
+         x_i^(1)(t + 1) = BR_i(x_{-i}^(1)(t), u_i)
+     end
+     for l = 2:L do
+         for k = 1:N^(l) do
+             x_k^(l)(t + 1) = σ_k^(l)(x_{S_k^(l)}(t + 1));
+         end
+     end
+     t ← t + 1;
+ end
+ ```
157
+
158
+ Therefore, we define the following *pseudo-utility functions* for agents at levels other than 1, with agent k in level l corresponding to a subset of agents from level l − 1:
159
+
160
+ $$\hat{u}_{k}^{(l)} = u_{k}^{(l)} \left( x_{k}^{(l)}, \boldsymbol{x}_{I_{k}}^{(l)} \right) - L_{k}^{(l,l-1)} \left( x_{k}^{(l)}, \sigma_{k}^{(l)}(\boldsymbol{x}_{S_{k}^{(l)}}) \right) - L_{k}^{(l,l+1)} \left( \sigma_{k}^{(l+1)}(\boldsymbol{x}_{S_{k}^{(l+1)}}), x_{k}^{(l+1)} \right).$$
161
+
162
+ $$(7)$$
163
+
164
+ The first term is the level-$l$ component of the utility of any level-1 agent in group $k$. The second and third terms model the inter-level inconsistency loss that penalizes a level-$l$ agent $a_k^{(l)}$, where $L_k^{(l,l+1)}$ and $L_i^{(l,l-1)}$ penalize its inconsistency with the level-$(l+1)$ and level-$(l-1)$ entities respectively. In general, $L_k^{(l,l+1)}$ is a different function from $L_k^{(l+1,l)}$; we elaborate on this further below.
165
+
166
+ The central idea behind the second algorithm we propose is simple: in addition to iterating best response steps at level 1, we now interleave them with best response steps taken by agents at higher levels, which we can since strategies and utilities of these pseudo-agents are well defined. This algorithm is similar to the augmented Lagrangian method in optimization theory, where penalty terms are added to relax an equality constraint and turn the problem into one with separable operators. We can decompose this type of problem into smaller subproblems and solve the subproblems sequentially using the alternating direction method (ADM) (Yuan and Li, 2011; Bnouhachem, Benazza, and Khalfaoui, 2013). The games at adjacent levels are coupled through the equality constraints on their action profiles given by Eqn (5), and the penalty functions are updated before starting a new iteration. The full algorithm, which we call *Separated Hierarchical BRD (SH-BRD)*, is provided in Algorithm (3).
167
+
168
+ The penalty updating rule in iteration t of Algorithm (3) is:
169
+
170
+ 1. For
171
+ $$l=2,\ldots,L, i=1,\ldots,N^{(l)}$$
172
+
173
+ $$L_i^{(l,l-1)}\left(x_i^{(l)},\sigma_i^{(l)}(\boldsymbol{x}_{S_i^{(l)}}(t+1))\right)$$
174
+
175
+ $$=h_i^{(l)}\left[x_i^{(l)}-\sigma_i^{(l)}(\boldsymbol{x}_{S_i^{(l)}}(t+1))+\lambda_i^{(l)}(t)\right]^2.$$
176
+ (8)
177
+ 2. For $l=1,\ldots,L-1; i=1,\ldots,N^{(l)},$ where $a_i^{(l)}\in S_k^{(l+1)}$
178
+ $$L_k^{(l,l+1)}\left(\sigma_k^{(l+1)}(\boldsymbol{x}_{S_k^{(l+1)}}),x_k^{(l+1)}(t)\right)$$
179
+
180
+ $$=h_k^{(l+1)}\left[\sigma_k^{(l+1)}(\boldsymbol{x}_{S_k^{(l+1)}})-x_k^{(l+1)}(t)-\lambda_k^{(l+1)}(t)\right]^2.$$
181
+ (9)
182
+
183
+ 3. For
184
+ $$l = 2, ..., L, i = 1, ..., N^{(l)}$$
185
+
186
+ $$\lambda_i^{(l)}(t+1)$$
187
+
188
+ $$= \lambda_i^{(l)}(t) - h_i^{(l)} \left[ \sigma_i^{(l)}(\boldsymbol{x}_{S_i^{(l)}}(t+1)) - x_i^{(l)}(t+1) \right]. \tag{10}$$
189
+
190
+ When updating, all other variables are treated as fixed, and $\lambda_i^{(l)}(0)$, $h_i^{(l)} > 0$ are chosen arbitrarily.
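+
+ As a concrete illustration, a minimal sketch (assuming NumPy) of the multiplier update in Eq. (10) for a single group; all names are illustrative:
+
+ ```
+ import numpy as np
+
+ def update_multiplier(lmbda, h, sigma, x_group_members, x_group):
+     """Eq. (10): lambda(t+1) = lambda(t) - h * [sigma(x_{S}(t+1)) - x_group(t+1)]."""
+     return lmbda - h * (sigma(x_group_members) - x_group)
+
+ # e.g., with the linear aggregation sigma = sum
+ new_lmbda = update_multiplier(lmbda=0.0, h=0.5, sigma=np.sum,
+                               x_group_members=np.array([0.2, 0.3, 0.1]), x_group=0.55)
+ ```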
191
+
192
+ ```
+ Initialize the game, t = 0, x_i^(1)(0) = (x_0)_i, i = 1, ..., N^(0)
+ for l = 2:L do
+     for k = 1:N^(l) do
+         x_k^(l)(0) = σ_k^(l)(x_{S_k^(l)}(0));
+     end
+ end
+ while not converged do
+     for l = 1:L do
+         for i = 1:N^(l) (l to l − 1 Penalty Update, if l > 1) do
+             Update L_i^(l,l−1)
+         end
+         for i = 1:N^(l) (l to l + 1 Penalty Update, if l < L) do
+             Update L_k^(l,l+1), where a_i^(l) ∈ S_k^(l+1)
+         end
+         for i = 1:N^(l) (Best Response) do
+             x_i(t + 1) = BR_i(σ_i(x_{S_i}(t + 1)), x_{I_i}^(l)(t), x_k^(l+1)(t), û_i)
+         end
+     end
+     t ← t + 1;
+ end
+ ```
258
+
259
+ Unlike MS-BRD, the convergence of the SH-BRD algorithm is non-trivial. To prove it, we exploit a connection between this algorithm and Structured Variational Inequalities (SVI) with separable operators (He, 2009; He and Yuan, 2012; Bnouhachem, Benazza, and Khalfaoui, 2013). To formally state the convergence result, we need to make several explicit assumptions.
260
+
261
+ **Assumption 1.** *The functions $u_i^{(l)}$, $\forall l = 1, \ldots, L$, $\forall i = 1, \ldots, N^{(l-1)}$, are twice continuously differentiable.*
262
+
263
+ **Assumption 2.** *$-\nabla_x u_i^{(l)}$ are monotone $\forall l = 1, \ldots, L$, $\forall i = 1, \ldots, N^{(l-1)}$. The solution set of $\nabla_x u_i^{(l)} = 0$, $\forall l = 1, \ldots, L$, $\forall i = 1, \ldots, N^{(l-1)}$, is nonempty, with solutions in the interior of the action spaces.*
264
+
265
+ Let $F^{(l)}$ be defined as in Equation (3) for each level-$l$ pseudo-utility.
266
+
267
+ **Assumption 3.** *$F^{(l)}$ satisfy the $P_\Upsilon$ condition.*
268
+
269
+ Note that these assumptions directly generalize the conditions required for the convergence of BRD to our multi-scale pseudo-utilities. The following theorem formally states that SH-BRD converges to a NE for *2-level games*.
270
+
271
+ **Theorem 2.** *Suppose $L = 2$. If Assumptions 1 and 3 hold,* SH-BRD *converges to a NE, which is unique.*
272
+
273
+ The full proof of this theorem, which makes use of the connection between SH-BRD and SVI, is provided in the Supplement due to space constraint. The central issue, however, is that there are no established convergence guarantees for ADM-based algorithms for SVI with 3 or more separable operators. Alternative algorithms for SVI can extend to the case of 3 operators using parallel operator updates with regularization terms, but no approaches exist that can handle more than 3 operators (He, 2009). We thus propose an algorithm for iteratively solving multi-scale games that uses the general idea from SH-BRD, but packs all levels into two *meta-levels*. The two meta-levels each has to be
274
+
275
+ comprised of consecutive levels. For example, if we have 5 levels, we can have {1, 2, 3} and {4, 5} combinations, but not {1, 2, 4} and {3, 5}. Upon grouping levels together to obtain a meta-game with only two meta-levels, we can apply what amounts to a 2-level version of the SH-BRD. This yields an algorithm, which we call *Hybrid Hierarchical BRD (HH-BRD)*, that now provably converges to a NE for an arbitrary number of levels L given assumptions 1-3.
276
+
277
+ As presenting the general version of HH-BRD involves cumbersome notation, we illustrate the idea by presenting it for a 4-level game (Algorithm 4). The fully general version is deferred to the Supplement. In this example, the objectives of the meta-levels are defined as
278
+
279
+ $$\begin{array}{lcl} \hat{u}_{i}^{(sl_{1})} & = & u_{i}^{(1)} + u_{k_{i2}}^{(2)} - L_{k_{i3}}^{(sl_{1},sl_{2})} \bigg( \sigma_{k_{i3}}^{(3)}(\pmb{x}_{S_{k_{i3}}^{(3)}}), x_{k_{i3}}^{(3)} \bigg), \\ \hat{u}_{k_{i3}}^{(sl_{2})} & = & u_{k_{i3}}^{(3)} + u_{k_{i4}}^{(4)} - L_{k_{i3}}^{(sl_{2},sl_{1})} \bigg( x_{k_{i3}}^{(3)}, \sigma_{k_{i3}}^{(3)}(\pmb{x}_{S_{k_{i3}}^{(3)}}) \bigg) \,. \end{array}$$
280
+
281
+ ```
+ Initialize the game, t = 0, x_i^(1)(0) = (x_0)_i, i = 1, ..., N^(0)
+ for l = 2:4 do
+     for k = 1:N^(l) do
+         x_k^(l)(0) = σ_k^(l)(x_{S_k^(l)}(0));
+     end
+ end
+ while not converged do
+     for k = 1:N^(3) (Meta-Level-1 Penalty Update) do
+         Update L_k^(sl1,sl2)
+     end
+     for i = 1:N^(1) (Level-1) do
+         x_i^(1)(t + 1) = BR_i(x_{I_i}^(1)(t), x_{I_{k_i2}}^(2)(t), x_{k_i3}^(3)(t), û_i^(sl1))
+     end
+     for j = 1:N^(2) (Level-2) do
+         x_j^(2)(t + 1) = σ_j^(2)(x_{S_j^(2)}(t + 1))
+     end
+     for k = 1:N^(3) (Meta-Level-2 Penalty Update) do
+         Update L_k^(sl2,sl1)
+     end
+     for k = 1:N^(3) (Level-3) do
+         x_k^(3)(t + 1) = BR_k(σ_k^(3)(x_{S_k^(3)}(t + 1)), x_{I_k}^(3)(t), x_{−p}^(4)(t), û_k^(sl2)), (a_k^(3) ∈ S_p^(4))
+     end
+     for p = 1:N^(4) (Level-4) do
+         x_p^(4)(t + 1) = σ_p^(4)(x_{S_p^(4)}(t + 1))
+     end
+     t ← t + 1;
+ end
+ ```
393
+
394
+ **Theorem 3.** *Suppose Assumptions 1-3 hold. Then* HH-BRD *finds the unique NE.*
395
+
396
+ *Proof Sketch.* We first "flatten" the game within each meta-level to obtain an effective 2-level game. We then use Theorem 2 to show this 2-level game converges to the unique NE of the game under SH-BRD. Finally, we prove that SH-BRD and HH-BRD have the same trajectory given the same initialization, thus establishing the convergence for HH-BRD. For full proof see Supplement, Appendix D.
397
+
398
+ HH-BRD combines the advantages of both MS-BRD and SH-BRD: not only does it exploit the sparsity embedded in the network topology, but it also avoids the convergence problem of SH-BRD when the number of levels is higher than three. Indeed, there is a known challenge in the related work on structured variational inequalities that convergence is difficult when we involve three or more operators (He, 2009), which we leverage for our convergence results, with operators mapping to levels in our multi-scale game representation. One may be concerned that HH-BRD pseudocode appears to involve greater complexity (and more steps) than SH-BRD. However, this does not imply greater algorithmic complexity, but is rather due to our greater elaboration of the steps within each super level. Indeed, as our experiments below demonstrate, the superior theoretical convergence of HH-BRD also translates into a concrete computational advantage of this algorithm.
399
+
400
+ In this section, we numerically compare the three algorithms introduced in Section 4, as well as the conventional BRD. We only consider settings which satisfy Assumptions 1-3; consequently, we focus the comparison on computational costs. We use two measures of computational cost: floating-point operations (FLOPs) in the case of games with a linear best response (a typical measure for such settings), and CPU time for the rest. All experiments were performed on a machine with a 6-core 2.60/4.50 GHz CPU with hyperthreaded cores, 12MB cache, and 16GB RAM.
401
+
402
+ Games with a Linear Best Response (GLBRs) GLBRs (Bramoullé, Kranton, and D'amours, 2014; Candogan, Bimpikis, and Ozdaglar, 2012; Miura-Ko et al., 2008) feature utility functions such that an agent's best response is a linear function of its neighbors' actions. This includes quadratic utilities of the form
403
+
404
+ $$u_i(x_i, x_{I_i}) = a_i + b_i x_i + \left(\sum_{j \in I_i} g_{ij} x_j\right) x_i - c_i x_i^2, \tag{11}$$
405
+
406
+ since an agent's best response is:
407
+
408
+ $$BR_i(x_{I_i}, u_i) = \frac{b_i + \sum_{j \in I_i} g_{ij} x_j}{2c_i}.$$
409
+
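+ To make the linear best-response dynamics concrete, the following minimal sketch iterates synchronous best responses for a single-level game with the quadratic utilities of Equation (11). The names (`G` for the matrix of $g_{ij}$, `b`, `c`) and the synchronous update order are illustrative assumptions, not the exact implementation used in our experiments.
+
+ ```python
+ import numpy as np
+
+ def glbr_best_response_dynamics(G, b, c, x0, tol=1e-8, max_iter=10_000):
+     """Synchronous best-response dynamics for a game with a linear best response.
+
+     G[i, j] = g_ij (0 when j is not a neighbor of i); b and c are the per-agent
+     coefficients of the quadratic utility in Eq. (11); x0 is the initial profile.
+     """
+     x = x0.astype(float).copy()
+     for _ in range(max_iter):
+         x_new = (b + G @ x) / (2.0 * c)      # BR_i from the first-order condition
+         if np.max(np.abs(x_new - x)) < tol:  # stop once no agent wants to deviate
+             return x_new
+         x = x_new
+     return x
+
+ # Example: 5 agents on a random sparse interaction graph
+ rng = np.random.default_rng(0)
+ G = rng.uniform(0, 1, (5, 5)) * (rng.random((5, 5)) < 0.3)
+ np.fill_diagonal(G, 0.0)
+ x_star = glbr_best_response_dynamics(G, b=rng.uniform(0, 1, 5),
+                                      c=np.full(5, 2.0), x0=np.zeros(5))
+ ```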
410
+ We consider a 2-level GLBR and compare three algorithms: BRD (baseline), MS-BRD, and SH-BRD (note that in 2-level games, HH-BRD is identical to SH-BRD, and we thus do not include it here). We construct random 2-level games with utility functions based on Equation (11). Specifically, we generalize this utility so that Equation (11) represents only the level-1 portion, $u_i^{(1)}$, and let the level-2 utilities be
411
+
412
+ $$u_k^{(2)}(x_k, \mathbf{x}_{I_k}) = x_k^{(2)} \sum_{p \neq k} v_{kp} x_p^{(2)}$$
413
+
414
+ for each group k. At every level, the existence of a link between two agents follows a Bernoulli distribution with $P_{exist}=0.1$. If a link exists, we then generate a parameter for it. The parameters of the utility functions are sampled uniformly in [0,1] without requiring symmetry. Please refer to Appendix E and E.1 for further details. Results comparing BRD, MS-BRD, and SH-BRD are shown in Table 1. We observe a dramatic improvement in scalability when using MS-BRD compared to conventional BRD. This improvement stems from the representational advantage provided by multi-scale games compared to conventional graphical games (since without the multi-scale representation, we have to use the standard version of BRD for equilibrium computation). We see further improvement going from MS-BRD to SH-BRD, which makes algorithmic use of the multi-scale representation.
415
+
416
+ | Size | BRD | MS-BRD | SH-BRD |
+ |------|-----|--------|--------|
+ | $30^2$ | $(2.51 \pm 0.18) \times 10^6$ | $(1.03 \pm 0.07) \times 10^{5}$ | **$(9.81 \pm 0.81) \times 10$** |
+ | $50^2$ | $(2.53 \pm 0.18) \times 10^{7}$ | $(5.33 \pm 0.04) \times 10^{5}$ | **$(4.35 \pm 0.07) \times 10$** |
+ | $100^2$ | $(4.46 \pm 0.32) \times 10^{8}$ | $(4.36 \pm 0.31) \times 10^{6}$ | $(3.56 \pm 0.29) \times 10$ |
+ | $200^2$ | $(6.73 \pm 0.58) \times 10^{9}$ | $(3.48 \pm 0.29) \times 10^{7}$ | $(2.79 \pm 0.21) \times 10$ |
+ | $500^2$ | $(2.84 \pm 0.21) \times 10^{1}$ | $(5.69 \pm 0.41) \times 10^{8}$ | $(4.04 \pm 0.29) \times 10$ |
423
+
424
+ Table 1: Convergence and complexity (FLOPs) comparison with linear best response under multiple initializations.
425
+
426
+ **Games with a Non-Linear Best Response** Next, we study the performance of the proposed algorithms in 2- and 3-level games, with the same number of groups in each level (we systematically vary the number of groups). Since SH-BRD and HH-BRD are identical in 2-level games, the latter is only used in 3-level games. All results are averaged over 30 generated sample games. The non-linear best response fits a much broader class of utility functions than the linear best response. The best responses generally do not have closed-form representations; in this case, we cannot solve a linear system for the best response and instead have to apply gradient-based methods. In our instances, the utility with non-linear best responses is generated by adding an exponential cost term to the utility function used in GLBRs. Please refer to Appendix E and E.2 for further details.
429
+
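+ For completeness, the sketch below illustrates the kind of gradient-based inner loop we have in mind when a best response lacks a closed form; the utility gradient `grad_u_i` is a hypothetical callable, and the step size and iteration count are illustrative placeholders rather than the settings used in our experiments.
+
+ ```python
+ def best_response_gradient_ascent(grad_u_i, x_i_init, x_neighbors, lr=0.05, steps=500):
+     """Approximate BR_i by gradient ascent on u_i with neighbors' actions held fixed.
+
+     grad_u_i(x_i, x_neighbors) should return du_i/dx_i; with the exponential cost
+     term added to Eq. (11), this derivative is no longer linear in x_i.
+     """
+     x_i = float(x_i_init)
+     for _ in range(steps):
+         x_i += lr * grad_u_i(x_i, x_neighbors)  # ascend agent i's own utility
+     return x_i
+ ```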
430
+ | Size | BRD | MS-BRD | SH-BRD |
+ |------|------------|------------|------------|
+ | $30^2$ | 1.50±0.05 | 1.02±0.02 | 0.54±0.01 |
+ | $50^2$ | 26.70±0.36 | 3.70±0.14 | 1.81±0.04 |
+ | $100^2$ | 1512±9 | 23.81±0.69 | 12.10±0.13 |
+ | $200^2$ | > 18000 | 287.2±5.4 | 133.6±2.5 |
+ | $500^2$ | nan | 5485±13 | 2524±10 |
437
+
438
+ Table 2: CPU times on a single machine on 2-Level games with general best response functions; all times are in seconds.
439
+
440
+ Table 2 shows the CPU time comparison between all algorithms. The scalability improvements from our proposed algorithms are substantial, with orders of magnitude speedup in some cases (e.g., from ∼ 25 minutes for the BRD baseline, down to ∼ 12 seconds for SH-BRD for games with 10K agents). Furthermore, BRD fails to solve instances with 250K agents, which can be solved by SH-BRD in ∼ 42 min. Again, we separate here the representational advantage of multi-scale games, illustrated by MS-BRD, and algorithmic advantage that comes from SH-BRD. Note that SH-BRD, which takes full advantage of the multi-scale structure, also exhibits significant improvement over MS-BRD, yielding a factor of 2-3 reduction in runtime.
441
+
442
+ | Size | BRD | MS-BRD | SH-BRD |
+ |------|------------|------------|-------------|
+ | $30^2$ | 1.21±0.04 | 0.63±0.01 | 0.037±0.003 |
+ | $50^2$ | 23.88±0.16 | 1.99±0.04 | 0.079±0.004 |
+ | $100^2$ | 1461±14 | 15.49±0.24 | 0.304±0.006 |
+ | $200^2$ | > 18000 | 192.0±1.2 | 1.87±0.05 |
+ | $500^2$ | nan | 4258±56 | 28.79±0.37 |
449
+
450
+ Table 3: CPU times on a single machine for 2-Level, linear/nonlinear best-response games; all times are in seconds.
451
+
452
+ Our next set of experiments involves games in which level-1 utility has a linear best response, but level-2 utility has a non-linear best response. The results are shown in Table 3. We see an even bigger advantage of SH-BRD over the others: it is now typically orders of magnitude faster than even MS-BRD, which is itself an order of magnitude faster than BRD. For example, in games with 250K agents, in which BRD fails to return a solution, MS-BRD takes more than 1 hour to find a solution, whereas SH-BRD finds a solution in under 30 seconds.
453
+
454
+ | Size | BRD | MS-BRD | SH-BRD | HH-BRD |
+ |------|------------|------------|------------|------------|
+ | $10^3$ | 1.23±0.03 | 0.59±0.01 | 0.76±0.03 | 0.43±0.02 |
+ | $20^3$ | 696.0±8.7 | 3.78±0.09 | 6.05±0.08 | 3.35±0.09 |
+ | $30^3$ | > 18000 | 15.70±0.11 | 25.13±0.14 | 13.39±0.11 |
+ | $50^3$ | nan | 68.59±0.75 | 138.8±1.1 | 57.98±0.69 |
+ | $100^3$ | nan | 1126±6 | 2343±21 | 877.1±11.5 |
461
+
462
+ Table 4: CPU times on a single machine on 3-Level games with general best response functions; all times are in seconds.
463
+
464
+ Finally, Table 4 presents the results of HH-BRD in games with > 2 levels compared to SH-BRD, which does not provably converge in such games. In this case, HH-BRD outperforms the other alternatives, with up to 22% improvement over MS-BRD; indeed, we find that SH-BRD is considerably worse even than MS-BRD.
2101.09868/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2101.09868/paper_text/intro_method.md ADDED
@@ -0,0 +1,84 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ The record-breaking performance of modern deep neural networks (DNNs) comes at a prohibitive training cost due to the required massive training data and parameters, limiting the development of the highly demanded DNN-powered intelligent solutions for numerous applications [\(Liu et al., 2018;](#page-10-0) [Wu](#page-12-0) [et al., 2018\)](#page-12-0). As an illustration, training ResNet-50 involves 10<sup>18</sup> FLOPs (floating-point operations) and can take 14 days on one state-of-the-art (SOTA) GPU [\(You et al., 2020b\)](#page-12-1). Meanwhile, the large DNN training costs have raised increasing financial and environmental concerns. For example, it is estimated that training one DNN can cost more than \$10K US dollars and emit carbon as high as a car's lifetime emissions. In parallel, recent DNN advances have fueled a tremendous need for intelligent edge devices, many of which require on-device in-situ learning to ensure the accuracy under dynamic real-world environments, where there is a mismatch between the devices' limited resources and the prohibitive training costs [\(Wang et al., 2019b;](#page-11-0) [Li et al., 2020;](#page-10-1) [You et al., 2020a\)](#page-12-2).
4
+
5
+ To address the aforementioned challenges, extensive research efforts have been devoted to developing efficient DNN training techniques. Among them, low-precision training has gained significant attention as it can largely boost the training time/energy efficiency [\(Jacob et al., 2018;](#page-10-2) [Wang et al.,](#page-11-1) [2018a;](#page-11-1) [Sun et al., 2019\)](#page-11-2). For instance, GPUs can now perform mixed-precision DNN training with 16-bit IEEE Half-Precision floating-point formats [\(Micikevicius et al., 2017b\)](#page-10-3). Despite their promise, existing low-precision works have not yet fully explored the opportunity of leveraging recent findings in understanding DNN training. In particular, existing works mostly fix the model precision during the whole training process, i.e., adopt a static quantization strategy, while recent works in DNN training optimization suggest dynamic hyper-parameters along DNNs' training trajectory. For example, [\(Li et al., 2019\)](#page-10-4) shows that a large initial learning rate helps the model to memorize easier-to-fit and more generalizable patterns, which aligns with the common practice to start from a large learning rate for exploration and anneal to a small one for final convergence; and [\(Smith, 2017;](#page-11-3) [Loshchilov & Hutter,](#page-10-5) [2016\)](#page-10-5) improve DNNs' classification accuracy by adopting cyclical learning rates.
8
+
9
+ In this work, we advocate dynamic precision training, and make the following contributions:
10
+
11
+ - We show that DNNs' precision seems to have a similar effect as the learning rate during DNN training, i.e., low precision with large quantization noise helps DNN training exploration while high precision with more accurate updates aids model convergence, and dynamic precision schedules help DNNs converge to a better minima. This finding opens up a design knob for simultaneously improving the optimization and efficiency of DNN training.
12
+ - We propose Cyclic Precision Training (CPT) which adopts a cyclic precision schedule along DNNs' training trajectory for pushing forward the achievable trade-offs between DNNs' accuracy and training efficiency. Furthermore, we show that the cyclic precision bounds can be automatically identified at the very early stage of training using a simple precision range test, which has a negligible computational overhead.
13
+ - Extensive experiments on five datasets and eleven models across a wide spectrum of applications (including classification and language modeling) validate the consistent effectiveness of the proposed CPT technique in boosting the training efficiency while leading to a comparable or even better accuracy. Furthermore, we provide loss surface visualization for better understanding CPT's effectiveness and discuss its connection with recent findings in understanding DNNs' training optimization.
14
+
15
+ # Method
16
+
17
+ In this section, we first introduce the hypothesis that motivates us to develop CPT using visualization examples in Sec. [3.1,](#page-2-0) and then present the CPT concept in Sec. [3.2](#page-3-0) followed by the Precision Range Test (PRT) method in Sec. [3.3,](#page-4-0) where PRT aims to automate the precision schedule for CPT.
18
+
19
+ <span id="page-2-1"></span>Table 1: The test accuracy of ResNet-38/74 trained on CIFAR-100 with different learning rate and precision combinations in the first stage. Note that the last two stages of all the experiments are trained with full precision and a learning rate of 0.01 and 0.001, respectively.
20
+
21
+ | | | ResNet-38 | | | | ResNet-74 | | |
22
+ |----------------|-------|-------|-------|-------|-------|-------|-------|-------|
23
+ | First-stage LR | 0.1 | 0.06 | 0.03 | 0.01 | 0.1 | 0.06 | 0.03 | 0.01 |
24
+ | 4-bit Acc (%) | 69.45 | 68.63 | 67.69 | 65.90 | 70.96 | 69.54 | 68.26 | 67.19 |
25
+ | 6-bit Acc (%) | 70.22 | 68.87 | 67.15 | 66.10 | 71.62 | 70.28 | 68.84 | 66.16 |
26
+ | 8-bit Acc (%) | 69.96 | 68.66 | 66.75 | 64.99 | 71.60 | 70.67 | 68.45 | 65.85 |
27
+ | FP Acc (%) | 70.45 | 69.53 | 67.47 | 64.50 | 71.66 | 70.00 | 68.69 | 65.62 |
28
+
29
+ **Hypothesis 1: DNN's precision has a similar effect as the learning rate.** Existing works (Grandvalet et al., 1997; Neelakantan et al., 2015) show that noise can help DNN training theoretically or empirically, motivating us to rethink the role of quantization in DNN training. We conjecture that low precision with large quantization noise helps DNN training exploration with an effect similar to a high learning rate, while high precision with more accurate updates aids model convergence, similar to a low learning rate.
30
+
31
+ **Validating Hypothesis 1.** Settings: To empirically justify our hypothesis, we train ResNet-38/74 on the CIFAR-100 dataset for 160 epochs following the basic training setting as in Sec. 4.1. In particular, we divide the training of 160 epochs into three stages: [0-th, 80-th], [80-th,120-th], and [120-th, 160-th]: for the first training stage of [0-th, 80-th], we adopt different learning rates and precisions for the weights and activations, while using full precision for the remaining two stages with a learning rate of 0.01 for the [80-th,120-th] epochs and 0.001 for the [120-th, 160-th] epochs in all the experiments in order to explore the relationship between the learning rate and precision in the first training stage.
32
+
33
+ Results: As shown in Tab. 1, we can observe that as the learning rate is sufficiently reduced for the first training stage, adopting a lower precision for this stage will lead to a higher accuracy than training with full precision. In particular, with the standard initial learning rate of 0.1, full precision training achieves a 1.00%/0.70% higher accuracy than the 4-bit one on ResNet-38/74, respectively; whereas as the initial learning rate decreases, this accuracy gap gradually narrows and then reverses, e.g., when the initial learning rate becomes 1e-2, training with [0-th, 80-th] of 4-bit achieves a 1.40%/1.57% higher accuracy than the full precision ones.
34
+
35
+ <u>Insights:</u> This set of experiments show that (1) when the initial learning rate is low, training with lower initial precisions consistently leads to a better accuracy than training with full precision, indicating that lowering the precision introduces a similar effect of favoring exploration as that of a high learning rate; and (2) although a low precision can alleviate the accuracy drop caused by a low learning rate, a high learning rate is in general necessary to maximize the accuracy.
36
+
37
+ **Hypothesis 2: Dynamic precision helps DNN generalization.** Recent findings in DNN training have motivated us to better utilize DNN precision to achieve a win-win in both DNN accuracy and efficiency. Specifically, it has been discussed that (1) DNNs learn to fit different patterns at different training stages, e.g., (Rahaman et al., 2019; Xu et al., 2019) reveal that DNN training first learns lower-frequency components and then high-frequency features, with the former being more robust to perturbations and noises; and (2) dynamic learning rate schedules help to improve the optimization in DNN training, e.g., (Li et al., 2019) points out that a large initial learning rate helps the model to memorize easier-to-fit and more generalizable patterns while (Smith, 2017; Loshchilov & Hutter, 2016) show that cyclical learning rate schedules improve DNNs' classification accuracy. These works inspire us to hypothesize that dynamic precision might help DNNs to reach a better optimum in the optimization landscape, especially considering the similar effect between the learning rate and precision validated in our Hypothesis 1.
+
+ <span id="page-2-2"></span>![](_page_2_Figure_9.jpeg)
+
+ Figure 1: Test accuracy evolution of ResNet-74 on CIFAR-100 under different schedules.
44
+
45
+ <span id="page-3-1"></span>![](_page_3_Figure_1.jpeg)
46
+
47
+ Figure 2: Loss landscape visualization after convergence of ResNet-74 on CIFAR-100 trained with different precision schedules, where wider contours with larger intervals indicate a better local minima and a lower generalization error as analyzed in (Li et al., 2018).
48
+
49
+ **Validating Hypothesis 2.** Our Hypothesis 2 has been consistently confirmed by various empirical observations. For example, a recent work (Fu et al., 2020) proposes to progressively increase the precision during the training process, and we follow their settings to validate our hypothesis.
50
+
51
+ Settings: We train a ResNet-74 on CIFAR-100 using the same training setting as (Wang et al., 2018b) except that we quantize the weights, activations, and gradients during training; for the **progressive** precision case we uniformly increase the precision of weights and activations from 3-bit to 8-bit in the first 80 epochs and adopt static 8-bit gradients, while the static precision baseline uses 8-bit for all the weights/activations/gradients.
52
+
53
+ Results: Fig. 1 shows that training with progressive precision schedule achieves a slightly higher accuracy (+0.3%) than its static counterpart, while the former can reduce training costs. Furthermore, we visualize the loss landscape (following the method in (Li et al., 2018)) in Fig. 2(b): interestingly the progressive precision schedule helps to converge to a better local minima with wider contours, indicating a lower generalization error (Li et al., 2018) over the static 8-bit baseline in Fig. 2(a).
54
+
55
+ The progressive precision schedule in (Fu et al., 2020) relies on manual hyper-parameter tuning. As such, a natural following question would be: what kind of dynamic schedules would be effective while being simple to implement for different tasks/models? In this work, we show that a simple cyclic schedule consistently benefits the training convergence while boosting the training efficiency.
56
+
57
+ The key concept of CPT draws inspiration from (Li et al., 2019) which demonstrates that a large initial learning rate helps the model to learn more generalizable patterns. We thus hypothesize that a lower precision that leads to a short-term poor accuracy might actually help the DNN exploration during training thanks to its associated larger quantization noise, while it is well known that a higher precision enables the learning of higher-complexity, fine-grained patterns that are critical to better convergence. Together, this combination could improve the achieved accuracy as it might better balance coarse-grained exploration and fine-grained optimization during DNN training, which leads to the idea of CPT. Specifically, as shown in Fig. 3, CPT varies the precision cyclically between two bounds instead of fixing the precision during training, letting the models explore the optimization landscape with different granularities.
+
+ <span id="page-3-2"></span>![](_page_3_Figure_9.jpeg)
+
+ <span id="page-3-3"></span>Figure 3: Static vs. Cyclic Precision Training (CPT), where CPT cyclically schedules the precision of weights and activations during training.
64
+
65
+ While CPT can be implemented using different cyclic scheduling methods, here we present as an example an implementation of CPT in a cosine manner:
66
+
67
+ $$B_t^n = \left[ B_{min}^n + \frac{1}{2} (B_{max}^n - B_{min}^n) \left(1 - \cos\left(\frac{t \% T_n}{T_n} \pi\right)\right) \right] \tag{1}$$
69
+
70
+ where $B_{min}^n$ and $B_{max}^n$ are the lower and upper precision bound, respectively, in the *n*-th cycle of the precision schedule, $[\cdot]$ and % denote the rounding operation and the remainder operation, respectively, and $B_t^n$ is the precision at the t-th global epoch, which falls into the n-th cycle with a cycle length of $T_n$. Note that the cycle length $T_n$ is equal to the total number of training epochs divided by the total number of cycles, denoted as N, where N is a hyper-parameter of CPT. For example, if N=2, then a DNN training with CPT will experience two cycles of the cyclic precision schedule during training. As shown in Sec. 4.3, we find that the benefits of CPT are maintained when adopting different total numbers of cyclic precision schedule cycles during training, i.e., CPT is not sensitive to N. A visualization example for the precision schedule can be found in Appendix A. Additionally, we find that CPT is generally effective when using different dynamic precision schedule patterns (i.e., not necessarily the cosine schedule in Eq. (1)). We implement CPT following Eq. (1) in this work and discuss the potential variants in Sec. 4.3.
73
+
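+ A minimal sketch of the cosine schedule in Eq. (1) is given below; the function and variable names are ours, and only the rounding-plus-cosine rule itself comes from the equation.
+
+ ```python
+ import math
+
+ def cyclic_precision(t, b_min, b_max, cycle_length):
+     """Eq. (1): precision (in bits) at global epoch t, for cycles of length
+     T_n = cycle_length; the [.] operation is implemented as rounding."""
+     phase = (t % cycle_length) / cycle_length
+     return round(b_min + 0.5 * (b_max - b_min) * (1.0 - math.cos(phase * math.pi)))
+
+ # Example: 160 training epochs, N = 2 cycles -> T_n = 80, bounds 3..8 bits
+ schedule = [cyclic_precision(t, 3, 8, 160 // 2) for t in range(160)]
+ ```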
74
+ We visualize the training curve of CPT on ResNet-74 with CIFAR-100 in Fig. 1 and find that it achieves a 0.91% higher accuracy paired with a 36.7% reduction in the required training BitOPs (bit operations), as compared to its **static** fixed precision counterpart. In addition, Fig. 2 (c) visualizes the corresponding loss landscape, showing the effectiveness of CPT, i.e., such a simple and automated precision schedule leads to a better convergence with lower sharpness.
75
+
76
+ The concept of CPT is simple enough to be plugged into any model or task to boost the training efficiency. One remaining question is how to determine the precision bounds, i.e., $B_{min}^i$ and $B_{max}^{i}$ in Eq. (1), which we find can be automatically decided in the first cycle (i.e., $T_i = T_0$) of the precision schedule using a simple PRT at a negligible computational cost. Specifically, PRT starts from the lowest possible precision, e.g., 2-bit, and gradually increases the precision while monitoring the difference in the training accuracy averaged over several consecutive iterations; once this training accuracy difference is larger than a preset threshold, indicating that the training can at least partially converge, PRT would claim that the lower bound is identified. While the upper bound can be similarly determined, there exists an alternative which suggests simply adopting the precision of CPT's static precision counterpart. The remaining cycles use the same precision bounds.
+
+ <span id="page-4-2"></span>![](_page_4_Figure_5.jpeg)
+
+ Figure 4: Illustrating the precision range test for ResNet-152 and MobileNetV2 on CIFAR-100, where the switching point which exceeds the preset threshold is denoted by red circles.
83
+
84
+ Fig. 4 visualizes the PRT for ResNet-152/MobileNetV2 trained on CIFAR-100. We can see that the lower precision bound identified when the model experiences a notable training accuracy improvement for ResNet-152 is 3-bit while that for MobileNetV2 is 4-bit, aligning with the common observation that ResNet-152 is more robust to quantization than the more compact model MobileNetV2.
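+
+ As a rough illustration of how such a test could be scripted, the sketch below sweeps precisions from low to high and stops once the averaged training accuracy improves by more than a preset threshold; `train_accuracy_at` is a hypothetical callable (a few training iterations at the given bit-width), and the candidate list and threshold are placeholders rather than the settings used above.
+
+ ```python
+ def precision_range_test(train_accuracy_at, candidate_bits=(2, 3, 4, 5, 6, 7, 8),
+                          threshold=0.02):
+     """Return a lower precision bound B_min: the first bit-width whose averaged
+     training accuracy exceeds that of the previous bit-width by `threshold`."""
+     prev_acc = None
+     for bits in candidate_bits:
+         acc = train_accuracy_at(bits)  # averaged over several consecutive iterations
+         if prev_acc is not None and acc - prev_acc > threshold:
+             return bits
+         prev_acc = acc
+     return candidate_bits[-1]  # fall back to the highest candidate precision
+ ```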
2102.09337/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-05-27T10:53:03.619Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36" version="14.7.1" etag="5k0alCIHUnRMLvI5eOzj" type="device"><diagram id="u5Sja7m0wlJslkgoRvU-">7Vxtc6M2EP41/hiP3nj7mOQubWd67U1zM20/cli26WHLxfhs99dXgGSQkGOZAMGT40ssCRZ49OxqV7tkgh9Xh5/ScLP8xGY0mSAwO0zwhwniB8T8T95zLHu8vJV3LNJ4VnbBquM5/o+KTiB6d/GMbpUTM8aSLN6onRFbr2mUKX1hmrK9etqcJepdN+GCNjqeozBp9v4Zz7Kl6IUAVAM/03ixFLf2HTHwNYy+LVK2W4v7rdmaliOrUIoRp26X4Yzta1344wQ/poxl5a/V4ZEmOawSsfK6pzOjp0dO6TqzuUDMyPcw2Ym3Fs+VHSUMdD27z9HkrSgJt9s4muCHZbZKeAfkP/md0uNfvAFk4++8MXVk88OhPvjhKFrljehMm4QsTBdUPDtym+8DTyhx4lG2olwmP2VfzZCchmVtbmRfSpMwi7+rtwwFURYncac7fGYxvzECgtR3xBOCBKeRp8nYsl0aUXFZHfiGJKJKIlJdpKQSiIYkPhfhsXbaJj9h+8IjO8D8yOefzL/yAgdeewF58QL+o3xL2arNdNVVMNnMatwBqw9xlpP6DkwBJKKjJLbvynbF7LxxrDU+0zTmD0zTOvVzeXAKkKcqik/8C6pStHSR8zhJHlnC0uKF8NyPaJS/xDZL2beTxUKnntq5T0+AH+d1sCSxnJumXkI0Or2EjsIoxwUt9RK7miSoSTqjly1oSq6iabGKmC1vTlIAVVa5YyOVYti9sREoQKArAmnWkOiSuiOQM7rVezwTqq/UxGk7oQRdkNTXSi1vZL1SX7zAOfMqXS28rgUhk4Q78Lkl477vhha8ZDt+/cN+GWf0eRMWK8+exxQqU2fhdklnotG0R+LONM3o4VpKSnh8V4UnEM0aZbFr4izWGFHnp6LfL2DnNaAqQojilXONY2m2ZAu2DpNfGdsIHP6hWXYUoVO4y5jZiZG/a6p93ns5GYQpCDRPxYWozaJi4WX4zTXC3vd/ranwpRMvtQL1ZrF9g4K4SZavvKx4pmr63X93TA7cbYspvucnYHdzqAb5r0X+936RgyRE8YcopZVjL/AKXla7LvQKAt3hB03NgsigWfra2UaxgqYN4lR8Fk3hVr25rrlI1TUCe1I16Z70rGu2syMf572rBEYDqgSEDQRGqBMQqyrheX2pBBqXSqAfKlG4VYOqhM2m0cl3vQBHuN2UO9Lz+JBD2Ak+ga/uTBDgNeExoKNvYLRCx2avYlzokAHRsQnEx4WOPyA6pqiwC3P2vI+zaHm9PQPD2DNP33h38YDmzLtMyeERQZcR6Q0QU+g1NkAwtAOkE7UMbgAQW4Z0AQgyRSKjA8QfEBB4A4BgMCAghmW9FjZVEdLHqreDKKqKiEhQj4juwBTjU4zUQUyk5PgC66l57f6bp+ePkZYKP7NV32L/zZgavs4Xgb7JF3n65el3Pviwm885ypYuCVeDTJ1rNasm4vB6Uk50hUm8WPNmxOcmn9WHXKniKEzuxcAqns2Sc96mSslOtBKrUwiDplYSg1bqO62ttHL0sYnraDaL+A10+vK+rZKEo0KHoAHR6Ss2+YNGlJtEa1Mw+G6LntVEBDZRNyW7unDGkSk6eZewE9MeV2+wW8RAb2sLMFH3KbDcPBjCFlgERCNDxxkOHWwRHY0MnWBAdEyhUhd+pSYlOtVbVSfi+byovGoawy9f5OVfU9n5iYbbXUpXL2UC9GveoT8bqGbaCVCDS335sxhZaNqFsq6JOcAbT3mWp5VnOZ6vyrAuz+KSppiA00FUuUST21WxVuMFxI06q2G+rohZ6JFSJKUVT30OM65b66KH+08TUxVVba8hIFpFgodb5V9lEpjPElITwT6aXFtH/apSZrEctMvyDqUXeg5ULy2z1otGNWGgSequqgm3qmTukqvI0bhaldO05GpJ9oqr5AdZm2TFug1sXTQtt7Wr/OQU1A7UG3WdJlN7rIQBU+TUefUypyqCewq/L5WGWdALG/gF37DmBd9gklj7WOKpOLpyQHXr7Rr2CfrKleJbSB6bEOkNkFtIHp++VBogE4ZvIXlsy5AuAJEr37gB8QcE5BaSx46hDr03QC69/6vcChD4Nb8CTgH3yS19Cx9pdbYAuG2d5zbFtwZPBA/38Yer7fRgfQvHPuAKnCnwgurQqEam2GmM9vBV6egLWV2iQoP94RJ+xBSqXudnIqOf+RvN9iz9xse/sA13ChfH29o1VaLxlGVcqVgu4i7oak810L94cJv78xgb5r3FpipvVv9Eo1Si6p+U4I//Aw==</diagram></mxfile>
2102.09337/main_diagram/main_diagram.pdf ADDED
Binary file (33.1 kB). View file
 
2102.09337/paper_text/intro_method.md ADDED
@@ -0,0 +1,124 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Modern datacenters consist of multiple servers jointly communicating at ultra high speeds. As the joint transmission rate of the servers surpasses the internal connection limitations (e.g., router processing rate) the communication may become congested. Congested networks suffer from reduced bandwidth utilization, the appearance of packet loss and increased application latency. Hence, avoiding and preventing congestion is an important task called congestion control.
4
+
5
+ Previous work on congestion control has mainly focused on rule-based methods. As these schemes are rule based, they are usually optimized for a single set of tasks (as we show in [\[table: comparison\]](#table: comparison){reference-type="ref+label" reference="table: comparison"}). Machine learning methods, as opposed to rule-based ones, are capable of learning and generalizing based on data and experience. Specifically, reinforcement learning (RL) automatically learns a control policy (transmission rate control in the context of congestion prevention) given an environment to interact with and a reward signal. Thus, an RL algorithm is (1) capable of solving the task without tedious manual tuning of control parameters and (2) able to operate successfully in a vast set of tasks, e.g., to generalize, when provided with diverse training environments.
6
+
7
+ Most RL algorithms were designed under the assumption that the world can be adequately modeled as a Markov Decision Process (MDP). Unfortunately, this is seldom the case in realistic applications, and specifically in datacenter congestion control. As we show below, in contrast to the standard assumptions, this problem is partially observable and involves multiple agents. These challenges prevent standard RL methods from coping with such a complex environment and, we believe, are one of the major reasons such methods are yet to be deployed in real-world datacenters.
8
+
9
+ From an RL point of view, each agent controls the transmission of a single application. As there are multiple applications across multiple servers, this means multiple agents that, due to security reasons, are unaware of one another and unable to communicate -- hence multi-agent and partially observable. We observe that popular RL algorithms such as DQN [@mnih2015human], REINFORCE [@williams1992simple] and PPO [@schulman2017proximal] fail on such tasks ([\[tab: many-to-one comparison\]](#tab: many-to-one comparison){reference-type="ref+label" reference="tab: many-to-one comparison"}).
10
+
11
+ To overcome these challenges, we present the Analytic Deterministic Policy Gradient (ADPG), a scheme that makes use of domain knowledge to estimate the gradient for a deterministic policy update. As this task lacks a ground truth reward function, we present a fitting reward function and show that this reward function, in a multi-agent partially-observable setting, leads to convergence to a global optimum.
12
+
13
+ :::: table*
14
+ ::: center
15
+ **Many to One** **All to All** **Long Short** **Real World**
16
+ -------------------------- ----------------- ---------------- ---------------- ----------------
17
+ Aurora / PPO / REINFORCE
18
+ DCQCN
19
+ HPCC / SWIFT
20
+ **ADPG** (this paper)
21
+ :::
22
+ ::::
23
+
24
+ To validate our claims, we develop an RL environment, based on a realistic networking simulator, and perform extensive experiments. The simulator, based on OMNeT++ [@{varga2002omnet++}], emulates the behavior of state of the art hardware deployed in current datacenters: ConnectX-6Dx Network Interface Card (NIC). In addition, we further test the generalization and robustness of the agent by porting it to *real hardware* and evaluating it there. Our experiments show that our method, Analytic Deterministic Policy Gradient (ADPG), learns a robust policy, in the sense that it is competitive in all the evaluated scenarios, both in simulation and when tested in the real world. Often, it outperforms the current state-of-the-art methods deployed in real datacenters.
25
+
26
+ # Method
27
+
28
+ In datacenters, traffic contains multiple concurrent data streams transmitting at high rates. The servers, also known as *hosts*, are interconnected through a topology of *switches*. A directional connection between two hosts that continuously transmits data is called a *flow*. We assume, for simplicity, that the path of each flow is fixed.
29
+
30
+ Each host can hold multiple flows whose transmission rates are determined by a *scheduler*. The scheduler iterates in a cyclic manner between the flows, also known as round-robin scheduling. Once scheduled, the flow transmits a burst of data. The burst's size generally depends on the requested transmission rate, the time it was last scheduled, and the maximal burst size limitation.
31
+
32
+ A flow's transmission is characterized by two primary values. *Bandwidth*: the average amount of data transmitted, measured in Gbit per second; and *latency*: the time it takes for a packet to reach its destination. *Round-trip-time (RTT)* measures the latency of source$\rightarrow$destination$\rightarrow$source. While the latency is often the metric of interest, most systems are only capable of measuring RTT.
33
+
34
+ *Congestion* occurs when multiple flows cross paths, transmitting data through a single congestion point (switch or receiving server) at a rate faster than the congestion point can process. In this work, we assume that all connections have equal transmission rates, as typically occurs in most datacenters. Thus, a single flow can saturate an entire path by transmitting at the maximal rate.
35
+
36
+ Each congestion point in the network has an inbound buffer enabling it to cope with short periods where the inbound rate is higher than it can process. As this buffer begins to fill, the time (latency) it takes for each packet to reach its destination increases. When the buffer is full, any additional arriving packets are dropped.
37
+
38
+ CC can be seen as a multi-agent problem. Assuming there are N flows, this results in N CC algorithms (agents) operating simultaneously. Assuming all agents have an infinite amount of traffic to transmit, their goal is to optimize the following metrics (where $\uparrow$/$\downarrow$ mean higher/lower is better, respectively):
39
+
40
+ 1. Switch bandwidth utilization ($\uparrow$) -- the % from maximal transmission rate.
41
+
42
+ 2. Packet latency ($\downarrow$) -- the amount of time it takes for a packet to travel from the source to its destination.
43
+
44
+ 3. Packet-loss ($\downarrow$) -- the amount of data (% of maximum transmission rate) dropped due to congestion.
45
+
46
+ 4. Fairness ($\uparrow$) -- a measure of similarity in the transmission rate between flows sharing a congested path. We consider $\frac{\min_\text{flows} BW}{\max_\text{flows} BW} \in [0, 1]$.
47
+
48
+ These objectives may be contradictory. Minimizing latency often comes at the expense of maximizing throughput. Hence, multi-objective schemes present a Pareto-front [@liu2014multiobjective] for which optimality w.r.t. one objective may result in sub-optimality of another. However, while the metrics of interest are clear, the agent does not necessarily have access to signals representing them. For instance, fairness is a metric that involves all flows, yet the agent is unaware of how many other transmissions are active and what data they transmit. The agent only observes signals relevant to the flow it controls. As such, it is impossible for a flow to obtain an estimate of the current fairness in the system. Instead, we reach fairness by setting each flow's individual target adaptively, based on known relations between its current RTT and rate. More details on this are given in Sec. [5.1](#sec: method){reference-type="ref" reference="sec: method"}.
49
+
50
+ We model the task of congestion control as a multi-agent partially-observable Markov decision process (POMDP) with multiple objectives and continuous actions, where all agents share the same policy. Each agent observes statistics relevant to itself and does not observe the entire global state.
51
+
52
+ A POMDP is defined as the tuple $(\mathcal{O}, \mathcal{S}, \mathcal{A},P,R)$ [@puterman1994markov; @spaan2012partially]. An agent interacting with the environment at state $\mathop{\mathrm{\mathbf{s}}}\in \mathcal{S}$ observes an observation $o(\mathop{\mathrm{\mathbf{s}}}) \in \mathcal{O}$. After observing $o$, the agent selects a continuous action $\mathop{\mathrm{\mathbf{a}}}\in \mathcal{A}$. In a POMDP, the observed state does not necessarily contain sufficient statistics for determining the optimal action. After performing an action, the environment transitions to a new state $\mathop{\mathrm{\mathbf{s}}}'$ based on the transition kernel $P(\mathop{\mathrm{\mathbf{s}}}' | \mathop{\mathrm{\mathbf{s}}}, \mathop{\mathrm{\mathbf{a}}})$, and the agent receives a reward $r(\mathop{\mathrm{\mathbf{s}}}, \mathop{\mathrm{\mathbf{a}}}) \in R$.
53
+
54
+ Let $\Pi$ be the set of stationary deterministic policies on $\mathcal{A}$, i.e., if $\pi\in\Pi$ then $\pi: \mathcal{O}\rightarrow \mathcal{A}$. In this work, we focus on the *average reward* performance metric, also known as the gain of the policy $\pi$, $\rho^\pi(\mathop{\mathrm{\mathbf{s}}}) \equiv \lim_{T \rightarrow \infty} \frac{1}{T} {\mathbb{E}}^\pi[\sum_{t=0}^T r(\mathop{\mathrm{\mathbf{s}}}_t,\mathop{\mathrm{\mathbf{a}}}_t)\mid \mathop{\mathrm{\mathbf{s}}}_0=\mathop{\mathrm{\mathbf{s}}}]$, where ${\mathbb{E}}^\pi$ denotes the expectation w.r.t. the distribution induced by $\pi$. The goal is to find a policy $\pi^*,$ yielding the optimal gain $\rho^*$, i.e., for all $\mathop{\mathrm{\mathbf{s}}}\in \mathcal{S}$, $\pi^*(o(\mathop{\mathrm{\mathbf{s}}})) \in \mathop{\mathrm{arg\,max}}_{\pi\in \Pi} \rho^\pi (\mathop{\mathrm{\mathbf{s}}})$ and the optimal gain is $\rho^{*}(\mathop{\mathrm{\mathbf{s}}}) = \rho^{\pi^*}(\mathop{\mathrm{\mathbf{s}}})$.
55
+
56
+ The agent, a congestion control algorithm, controls the data transmission rate at the source. Specifically, the algorithm runs within the network-interface-card (NIC). At each decision point, the agent observes statistics correlated with the specific flow it controls. The agent then acts by determining a new transmission rate for that flow and observes the outcome of this action. We define the four elements in $(\mathcal{O}, \mathcal{A},P,R)$ ([5](#sec: rl){reference-type="ref+label" reference="sec: rl"}).
57
+
58
+ **Observations.** The agent can only observe information relevant to the flow it controls. In this work, we consider the flow's transmission rate and the RTT measurement.
59
+
60
+ **Actions.** The optimal transmission rate depends on the number of agents simultaneously interacting in the network and on the network itself (bandwidth limitations and topology). As such, the optimal transmission rate will vary greatly across scenarios. To ensure the agent is agnostic to the specifics of the network and can easily generalize, we define the next transmission rate $\text{rate}_{t+1}$ as a multiplication of the previous rate with the action. I.e., $\text{rate}_{t+1} = \mathop{\mathrm{\mathbf{a}}}_t \cdot \text{rate}_t$, where in our experiments $\mathop{\mathrm{\mathbf{a}}}_t \in [0.8, 1.2]$.
61
+
62
+ **Transitions.** The transition $\mathop{\mathrm{\mathbf{s}}}_t \rightarrow \mathop{\mathrm{\mathbf{s}}}_{t}'$ depends on the dynamics of the environment and on the frequency at which the agent is polled to provide an action. Here, the agent acts (is asked to provide an updated transmission rate) once an RTT packet is received. This is similar to the definition of a monitor interval by @dong2018pcc, but while they considered fixed time intervals, we consider event-triggered (RTT) intervals.
63
+
64
+ **Reward.** As the task is a multi-agent partially observable problem, the reward must be designed such that there exists a single fixed-point equilibrium.
65
+
66
+ Based on @appenzeller2004sizing, a good approximation of the RTT inflation ($\text{RTT-inflation} = \frac{\text{RTT}}{\text{base-RTT}}$) in a bursty system, where all flows transmit at the ideal rate, behaves like $\sqrt{N}$, where $N$ is the number of flows. In this case, the combined transmission rate of all flows saturates the congestion point, the system is on the verge of congestion, and the major latency increase is due to the packets waiting in the congestion point's buffer. This latency is orders of magnitude higher than the empty-system routing latency. As such, we can assume that all flows sharing a congested path will observe a similar RTT inflation. We define $$\begin{equation}
67
+ r_t^i = -\left( \text{\bf{target}} - \frac{\text{RTT}^i_t}{\text{base-RTT}^i} \cdot \sqrt{\text{rate}^i_t} \right)^2 \,,
68
+ \end{equation}$$ where **target** is a constant value shared by all flows, $\text{base-RTT}^i$ is defined as the RTT of flow $i$ in an empty system, and $\text{RTT}^i_t$ and $\text{rate}^i_t$ are respectively the RTT and transmission rate of flow $i$ at time $t$. $\frac{\text{RTT}^i_t}{\text{base-RTT}^i}$ is also called the RTT inflation of agent $i$ at time $t$. The ideal reward is obtained when $\textbf{target} = \frac{\text{RTT}^i_t}{\text{base-RTT}^i} \cdot \sqrt{\text{rate}^i_t}$. Hence, when the **target** is larger, the ideal operation point is obtained when $\frac{\text{RTT}^i_t}{\text{base-RTT}^i} \cdot \sqrt{\text{rate}^i_t}$ is larger. As increasing the transmission rate increases network utilization and thus the observed RTT, the two grow together. Such an operation point is less latency sensitive (RTT grows) but enjoys better utilization (higher rate). As [1](#prop: reward is good){reference-type="ref+label" reference="prop: reward is good"} shows, maximizing this reward results in a fair solution.
69
+
70
+ ::: {#prop: reward is good .proposition}
71
+ **Proposition 1**. *The fixed-point rate (solution) for all $N$ flows sharing a congested path is $\frac{\text{max rate}}{N}$.*
72
+ :::
73
+
74
+ Informally, the optimal reward for all agents is 0. An agent for which $\frac{\text{RTT}^i_t}{\text{base-RTT}^i} \cdot \sqrt{\text{rate}^i_t} > target$ needs to reduce the transmission rate, which in turn will also reduce the RTT. On the other hand, an agent below the target will act in the opposite direction. As all agents sharing the same congestion point observe approximately the same RTT, the fixed point solution is a fair solution. A formal proof is provided in the supplementary material, in addition to experiments showing how the target affects the behavior.
75
+
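+ For concreteness, the per-flow reward above can be computed as in the short sketch below; the function name and argument layout are our own, and the inputs are the flow-local signals only.
+
+ ```python
+ import math
+
+ def adpg_reward(rtt, base_rtt, rate, target):
+     """Per-flow reward: penalize the squared deviation of
+     rtt_inflation * sqrt(rate) from the shared constant target."""
+     rtt_inflation = rtt / base_rtt
+     return -(target - rtt_inflation * math.sqrt(rate)) ** 2
+ ```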
76
+ In this section, we present the intricate combination of challenges arising in our setup. We explain why existing popular approaches are expected to fail, as we indeed observe and show later in experiments. We then introduce our algorithm that leverages the unique properties of the problem to overcome those challenges.
77
+
78
+ **The challenge.** We address three challenges arising when learning in multi-agent partially-observable domains. The first is *non-stationarity*. When multiple agents are trained in parallel while interacting in a shared environment, it becomes non-stationary from the point of view of each agent, as other agents continually change. As a result, one is restricted to on-policy methods to ensure agents are trained on relevant data. Even if the environment changed only slowly (e.g., due to a small learning rate), off-policy methods would still not be applicable: such methods require access to the policies of other agents, which are out of reach in our case.
79
+
80
+ Second, *partial-observability* hinders value-function estimation. Specifically, policy gradient methods that utilize value functions require direct access to states rather than observations to generate correct gradient estimations [@azizzadenesheli2018policy]\[Theorem 3.2\]. Instead, we shall directly use episodic reward trajectories as done in REINFORCE [@williams1992simple].
81
+
82
+ The third challenge is *instability* of stochastic policies. While REINFORCE avoids value-function estimation, it requires the policy to be stochastic. However, stochastic policies in our multi-agent partially-observable setup lead to highly unstable behavior. The multiple agents operate at the same time with the common goal of fairness. Thus, it is essential that they stabilize together to an equilibrium. We observe empirically that this is achievable only with deterministic policies. This also explains the failure of REINFORCE in our experiments later. Such instability was also observed for PPO by [@touati2020stable].
83
+
84
+ The combination of the above three challenges creates a unique set of limitations for a learning algorithm. Namely, it should be on-policy, should not depend on value-function estimation, and needs to support deterministic policies. We now propose an efficient approach that combines all these properties. It is achievable thanks to access to the derivative of the reward function.
85
+
86
+ **Our algorithm.** To generate deterministic policies, as a first choice one might consider Deterministic Policy Gradient [@silver2014deterministic DPG]. However, it relies on value function estimation, as do its successors such as DDPG [@lillicrap2015continuous]. Instead, we work around this by directly estimating the gradient via derivation of the reward function.
87
+
88
+ For $s\in {\mathcal S}$, the Analytic Deterministic Policy Gradient is defined as $$\begin{align*}
89
+ &\nabla_{\theta} \rho^{\pi_\theta} (s) = \nabla_{\theta} \lim_{T \rightarrow \infty} \frac{1}{T} {\mathbb{E}}\left[ \sum_{t=0}^T r\big(o(\mathop{\mathrm{\mathbf{s}}}_t), \pi_\theta(o(\mathop{\mathrm{\mathbf{s}}}_t))\big) \right] \\
90
+ &= \lim_{T \rightarrow \infty} \frac{1}{T}{\mathbb{E}}\left[ \sum_{t=0}^T \nabla_{\mathop{\mathrm{\mathbf{a}}}} r(o(\mathop{\mathrm{\mathbf{s}}}_t), \mathop{\mathrm{\mathbf{a}}})|_{\mathop{\mathrm{\mathbf{a}}}=\mathop{\mathrm{\mathbf{a}}}_t} \cdot \nabla_{\theta} \pi_\theta(o(\mathop{\mathrm{\mathbf{s}}}_t)) \right] \, . \addtocounter{equation}{1}\tag{\theequation}\label{eqn: on policy gradient}
91
+ \end{align*}$$ Similarly to DPG, it is on-policy. Moreover, despite the partial observability, the gradient estimation is unbiased since it relies on rollouts [@azizzadenesheli2018policy]\[Eq. (2)\].
92
+
93
+ The gradient estimator used here is different from common estimators [@sutton2000policy; @silver2014deterministic] as it requires access to $\nabla_{\mathop{\mathrm{\mathbf{a}}}} r(o(\mathop{\mathrm{\mathbf{s}}}_t), \mathop{\mathrm{\mathbf{a}}})$.
94
+
95
+ ::: {#claim: approximate gradient .claim}
96
+ **Claim 2**. *The following is an analytical approximation of the deterministic gradient $$\begin{align*}
97
+ \nabla_{\theta} \rho^{\pi_\theta} (\mathop{\mathrm{\mathbf{s}}}) \approx \Bigg[ &\lim_{T \rightarrow \infty} \frac{1}{T} \sum_{t=0}^{T} \Big(\text{{\bf target}} \addtocounter{equation}{1}\tag{\theequation}\label{eqn: approximate gradient} \\
98
+ &- \text{rtt-inflation}_t \cdot \sqrt{\text{rate}_t} \Big) \Bigg] \nabla_\theta \pi_\theta (o(\mathop{\mathrm{\mathbf{s}}})) \, .
99
+ \end{align*}$$*
100
+ :::
101
+
102
+ In the supplementary material, we provide an extensive derivation of [2](#claim: approximate gradient){reference-type="ref+label" reference="claim: approximate gradient"}.
103
+
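+ A minimal PyTorch-style sketch of the resulting update is shown below: the bracketed scalar from Claim 2 is computed from the rollout signals and, detached from the graph, scales the gradient of the deterministic policy output. The module and optimizer names and the batch mean over observations are our own simplifications, not the exact training loop.
+
+ ```python
+ import torch
+
+ def adpg_update(policy, optimizer, observations, rtt_inflations, rates, target):
+     """One ADPG step: theta <- theta + lr * coeff * grad_theta pi_theta(o),
+     with coeff = mean_t(target - rtt_inflation_t * sqrt(rate_t))."""
+     coeff = (target - rtt_inflations * torch.sqrt(rates)).mean().detach()
+     actions = policy(observations)            # deterministic actions pi_theta(o_t)
+     loss = -coeff * actions.mean()            # descending this loss ascends rho
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return coeff.item()
+ ```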
104
+ ::: table*
105
+ +:------------+:-------:+:------:+:------:+:-------:+:------:+:------:+:------:+:------:+:------:+:------:+:------:+:------:+
106
+ | | **128 to 1** | **1024 to 1** | **4096 to 1** | **8192 to 1** |
107
+ | +---------+--------+--------+---------+--------+--------+--------+--------+--------+--------+--------+--------+
108
+ | | SU | FR | QL | SU | FR | QL | SU | FR | QL | SU | FR | QL |
109
+ +-------------+---------+--------+--------+---------+--------+--------+--------+--------+--------+--------+--------+--------+
110
+ | Aurora | packet loss | packet loss | packet loss | packet loss |
111
+ +-------------+---------+--------+--------+---------------------------+--------------------------+--------------------------+
112
+ | PPO | 1 | 26 | 3 | packet loss | packet loss | packet loss |
113
+ +-------------+---------+--------+--------+---------+--------+--------+--------+--------+--------+--------------------------+
114
+ | REINFORCE | 51 | 100 | 3 | **74** | **70** | **7** | 53 | 45 | 22 | packet loss |
115
+ +-------------+---------+--------+--------+---------+--------+--------+--------+--------+--------+--------+--------+--------+
116
+ | DCQCN | **100** | **56** | **11** | **100** | **50** | **13** | **95** | **65** | **12** | **95** | **64** | **12** |
117
+ +-------------+---------+--------+--------+---------+--------+--------+--------+--------+--------+--------+--------+--------+
118
+ | HPCC | **83** | **96** | **5** | 59 | 48 | 27 | packet loss | packet loss |
119
+ +-------------+---------+--------+--------+---------+--------+--------+--------+--------+--------+--------------------------+
120
+ | SWIFT | 97 | 94 | 26 | **89** | **96** | **27** | **88** | **85** | **77** | packet loss |
121
+ +-------------+---------+--------+--------+---------+--------+--------+--------+--------+--------+--------+--------+--------+
122
+ | ADPG (ours) | **92** | **95** | **8** | **90** | **70** | **15** | 91 | 44 | 26 | 92 | 29 | 42 |
123
+ +-------------+---------+--------+--------+---------+--------+--------+--------+--------+--------+--------+--------+--------+
124
+ :::
2103.06818/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2103.06818/paper_text/intro_method.md ADDED
@@ -0,0 +1,87 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Estimating the geographic location of an image is a fundamental problem in computer vision with applications in autonomous driving, robotics, and augmented reality. Originally, the problem was cast as an image retrieval task [@schindler2007CVPR; @hays2008CVPR; @zamir2010ECCV; @chen2011CVPR; @arandjelovic2014ACCV; @torii2015CVPR; @zamir2014image; @sattler2016CVPR], where the goal is to determine the geographic location of a query street view image by comparing it against a database of GPS-tagged street images. The main limitation of this approach is that, even though there are large databases available for this type of imagery, the coverage varies a lot between different regions of the world, and it is generally sparse in rural areas.
4
+
5
+ Satellite imagery, on the other hand, is broadly available for most parts of the world with services like Google maps. This encouraged researchers to focus on cross-view image-based geo-localization [@workman2015ICCV; @lin2015CVPR; @vo2016ECCV; @liu2019CVPR; @shi2019NeurIPS; @cai2019ICCV; @shi2020AAAI; @shi2020CVPR] as a more general and inclusive alternative. The overall idea is to predict the latitude and longitude of a street-level image by matching it against a GPS-tagged satellite database. Even though this approach helps to cover vast parts of the world, the significant domain gap between a pair of street view and top-view satellite images, shown in [\[fig:teaser\]](#fig:teaser){reference-type="ref+Label" reference="fig:teaser"}, makes cross-view image based geo-localization extremely challenging. For instance, the appearance of the two images can vary significantly as they are typically taken at different times and with different cameras, leading to illumination changes. The biggest challenge, however, comes from the dramatically different viewpoints of street and satellite images -- even for human eyes, it is far from obvious that two images show the same location. Satellite images cover a broader area in comparison to the ego-centric viewpoint of the street images. On the other hand, there are a lot of additional features in street view images, like facades, that are not visible in the top-view satellite images which would otherwise be extremely useful for precise location retrieval.
6
+
7
+ In order to alleviate the difficulty of learning cross-view features,  [@shi2019NeurIPS; @shi2020CVPR] use a simple polar coordinate transformation as a preprocessing step for image retrieval. Intuitively, this mimics the real viewpoint transformation from the over-head view to the ground-view. Nevertheless, there is still a significant appearance gap between polar-transformed and real street images. The two views do not overlap perfectly, which limits the retrieval performance. In the last few years, Generative Adversarial Networks (GANs) [@goodfellow2014NeurIPS] have proven to be a powerful tool for generating realistic looking images. Recent works [@regmi2018CVPR; @regmi2019CVIU; @zhu20183DV] applied them for cross-view image synthesis between aerial and ground-level images but they do not evaluate their effectiveness for the geo-localization task. [@regmi2019ICCV] is the first to use pre-trained synthesized images [@regmi2018CVPR] to train a retrieval network for geo-localization. However, this is done in two stages and therefore does not allow for end-to-end training. They obtained less accurate retrieval results than methods based on polar transformations [@shi2019NeurIPS; @shi2020CVPR]. This suggests that, while GANs create images that *look more realistic*, polar-transformation is more suitable to map the *content* of the images across the two domains.
8
+
9
+ In this work, our goal is to address the drastic viewpoint difference of the two domains by synthesizing realistic-looking and content-preserving street images from their satellite counterparts for geo-localization. To that end, we integrate a cross-view synthesis module and a geo-localization branch in a single architecture. The main insight here is that these two network components mutually reinforce each other: Learning to generate street images from satellite inputs naturally helps the image retrieval branch, since our network learns to extract local features that are useful across the two input domains. Vice versa, the retrieval branch incentivizes our network to create realistic street views that replicate the content of a given satellite image. Additionally, our network uses polar transformed satellite images as a starting point (i.e., as an input to the GAN). This makes the image generation easier, since the spatial layout of the polar transformed image and the street view is approximately the same.
10
+
11
+ We propose a novel geo-localization method that is trained jointly for the multi-task setup of both synthesizing ground images from satellite images and retrieving cross-view image matches. We devise a single network for both of these tasks which can be trained in an end-to-end manner. Our method shows strong empirical results, both in terms of the retrieval accuracy and synthesis quality. For geo-localization, we obtain state-of-the-art performance on standard large-scale cross-view retrieval benchmarks. Moreover, our pipeline generates highly realistic street views that strongly resemble real, panoramic street images. Remarkably, our method outperforms existing cross-view synthesis approaches that use semantic labels as supervision during training.
12
+
13
+ # Method
14
+
15
+ <figure id="fig:network_overview">
16
+ <div class="center">
17
+ <img src="figures/network_overview.png" style="width:107.0%" />
18
+ </div>
19
+ <figcaption>An overview of our network. We convert the pixel coordinates of the top-view satellite image <span class="math inline"><em>I</em><sub>s</sub></span> to <span class="math inline"><em>I</em><sub>ps</sub></span>. Then our generative network <span class="math inline"><em>G</em></span> synthesizes the street image <span class="math inline"><em>G</em>(<em>I</em><sub>ps</sub>)</span>. In the same forward pass, the network feeds the projected satellite features <span class="math inline"><em>G</em><sub><em>E</em></sub>(<em>I</em><sub>ps</sub>)</span> and the corresponding ground image <span class="math inline"><em>I</em><sub>g</sub></span> to the retrieval branch. Network <span class="math inline"><em>R</em><sub><em>E</em></sub></span> extracts the local features from the real street view analogous to <span class="math inline"><em>G</em><sub><em>E</em></sub></span>. <span class="math inline"><em>S</em><em>A</em></span> is a spatial-aware attention module that aggregates the extracted local features into global image descriptors. <span class="math inline">ℒ<sub><em>c</em><em>G</em><em>A</em><em>N</em></sub></span>, <span class="math inline">ℒ<sub><em>L</em><sub>1</sub></sub></span>, <span class="math inline">ℒ<sub><em>r</em><em>e</em><em>t</em></sub></span> are the loss functions that we used for learning, see <a href="#subsec:training" data-reference-type="ref+Label" data-reference="subsec:training">3.4</a>. </figcaption>
20
+ </figure>
21
+
22
+ In this section, we describe our proposed multi-task approach to geo-localization, see [1](#fig:network_overview){reference-type="ref+Label" reference="fig:network_overview"} for an overview. The main idea is to jointly address the cross-view image retrieval and satellite-to-street view synthesis in a single framework. Specifically, we project a given pair of satellite and street images into their latent feature space and use those features simultaneously for both tasks. On one hand, the retrieval branch makes sure that the *content* of the generated images is true to the real scene depicted. At the same time, the image synthesis biases our model to learn features that are consistent across the two input domains which, in turn, benefits the localization.
23
+
24
+ Initially, we apply a polar transformation to the satellite inputs [@shi2020CVPR; @shi2019NeurIPS], which maps their content to an approximate street view, see [3.1](#subsec:polar){reference-type="ref+Label" reference="subsec:polar"}. We then synthesize a realistic street view from the polar-transformed images, see [3.2](#subsec:gan){reference-type="ref+Label" reference="subsec:gan"}. At the same time, the network learns to set satellite-street pairs in correspondence in the image retrieval branch, which we outline in [3.3](#subsec:retrieval){reference-type="ref+Label" reference="subsec:retrieval"}. Finally, we provide details on the learning procedure in [3.4](#subsec:training){reference-type="ref+Label" reference="subsec:training"}. Also, see our supplementary material for more technical implementation details.
25
+
26
+ As shown in earlier work [@shi2019NeurIPS; @shi2020CVPR], we can partially bridge the domain gap of our input pairs with a simple polar coordinate transformation of the top-view satellite inputs: $$\begin{equation}
27
+ \label{eq:polartransform}
28
+ \begin{aligned}
29
+ x_{i}^{\mathrm{s}} = \frac{W_{\mathrm{s}}}{2} + \frac{W_s}{2} \frac{y_{i}^{\mathrm{ps}}}{H_{\mathrm{ps}}} \sin\left({\frac{2\pi}{W_{\mathrm{ps}}}x_{i}^{\mathrm{ps}}}\right)\\
30
+ y_{i}^{\mathrm{s}} = \frac{H_{\mathrm{s}}}{2} - \frac{H_s}{2} \frac{y_{i}^{\mathrm{ps}}}{H_{\mathrm{ps}}}
31
+ \cos\left({\frac{2\pi}{W_{\mathrm{ps}}}x_{i}^{\mathrm{ps}}}\right)
32
+ \end{aligned}
33
+ \end{equation}$$ Here, $(x_{i}^{\mathrm{s}},y_{i}^{\mathrm{s}})$ and $(x_{i}^{\mathrm{ps}},y_{i}^{\mathrm{ps}})$ are pixel coordinates of the satellite and polar transformed images, respectively. The dimensions are specified by $W_{\mathrm{s}}\times H_{\mathrm{s}}$ and $W_{\mathrm{ps}}\times H_{\mathrm{ps}}$. In this formulation, circular lines in the top-view satellite images become horizontal lines in the ground view. Vice-versa, radial lines correspond to vertical lines in the new set of coordinates. In particular, the north-line, which is a vertical line originating from the center of the satellite image, corresponds to the vertical line at $\frac{W_{\mathrm{ps}}}{2}$ in the transformed image.
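+
+ For illustration, the sketch below (our own, not the authors' released code) applies the transform above by sampling the satellite image at the computed coordinates; the nearest-neighbour sampling and the concrete image sizes are illustrative assumptions.
+
+ ```python
+ import numpy as np
+
+ def polar_transform(sat, H_ps, W_ps):
+     """Map a top-view satellite image (H_s x W_s x 3) to an approximate
+     street-view panorama of size H_ps x W_ps via the polar transform."""
+     H_s, W_s = sat.shape[:2]
+     x_ps, y_ps = np.meshgrid(np.arange(W_ps), np.arange(H_ps))
+     theta = 2.0 * np.pi * x_ps / W_ps               # azimuth angle
+     r = y_ps / H_ps                                  # normalised radius
+     x_s = W_s / 2.0 + (W_s / 2.0) * r * np.sin(theta)
+     y_s = H_s / 2.0 - (H_s / 2.0) * r * np.cos(theta)
+     # nearest-neighbour sampling (bilinear interpolation would be smoother)
+     x_s = np.clip(np.round(x_s).astype(int), 0, W_s - 1)
+     y_s = np.clip(np.round(y_s).astype(int), 0, H_s - 1)
+     return sat[y_s, x_s]
+ ```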
34
+
35
+ Overall, this transformation produces image pairs that respect the content of the scene, i.e., they have roughly the same arrangement of objects. However, that alone is not sufficient to completely close the domain gap between the two views: the overlap is typically not perfect, and many features, e.g., the sky as seen from the ground view, simply cannot be recovered in this manner. Consequently, in the next step, we convert the polar-transformed images to street images using a generative model.
36
+
37
+ Generative Adversarial Networks (GANs) [@goodfellow2014NeurIPS] are nowadays broadly used for image synthesis tasks in computer vision. The main appeal of this class of architectures is that they are able to generate highly realistic images. This is typically done via adversarial training of two opposing networks, the generator $G$ and the discriminator $D$. We follow the recent conditional GAN method of [@isola2017CVPR], since our goal is to synthesize realistic street views that, at the same time, replicate the content of a reference satellite image.
38
+
39
+ The first component of our model is the *generator* $G$ which takes a polar-transformed satellite image $I_\mathrm{ps}$ as an input and translates it into a photo-realistic street panorama $G(I_\mathrm{ps})$. The polar-coordinate representation, in this context, is a highly useful preprocessing step since the general outline of the transformed image already resembles the actual street view. This takes some of the burden of bridging the satellite-street domain gap from the generator. The generated images $G(I_\mathrm{ps})$, as well as the ground-truth street views $I_\mathrm{g}$, are then fed to the *discriminator* $D$ which tries to determine whether the respective images are real or fake. The feedback from this discriminator in turn incentivizes the generator to create images that are indistinguishable from real street views.
40
+
41
+ In the remainder of this section, we briefly outline the architecture of the two network components $G$ and $D$. For further details, we refer the interested reader to our supplementary material.
42
+
43
+ Our generator network $G$ is designed as a U-Net [@ronneberger2015unet] architecture, which consists of residual blocks [@he2016CVPR]. The first few downsampling layers, together with the network bottleneck, are called the image encoder $G_{E}$. Specifically, $G_{E}$ consists of 3 residual downsampling blocks that reduce the spatial size by a factor of 4 each. On this reduced resolution, the bottleneck layers further refine the latent features with 6 residual blocks. In the remainder of the generator $G\setminus G_{E}$ we use 3 residual upsampling blocks to obtain a synthesized street-level image $G(I_\mathrm{ps})$ with the same resolution as the polar-transformed input image $I_\mathrm{ps}$. Between all downsampling and upsampling blocks we use skip connections as a standard trick to improve the network's convergence. Furthermore, we use instance normalization [@ulyanov2016instance] after each residual block and spectral normalization [@miyato2018spectral] after each convolution layer.
44
+
45
+ We construct the discriminator $D$ as a PatchGAN [@isola2017CVPR; @li2016ECCV] classifier. For a given $H_\mathrm{ps}\times W_\mathrm{ps}$ street-view image, the discriminator $D$ downscales the spatial size to smaller patches and classifies each patch as either real or fake. The patch-wise strategy is particularly beneficial for synthesizing street view images, which typically consist of recurring patterns of streets, trees, and buildings. Since the global coherency is secondary in this context, the classifier can place a higher emphasis on fine-scale details.
46
+
47
+ Having defined our image synthesis module, we now describe our retrieval branch $R$. The goal is to localize a given query street image $I_\mathrm{g}$ by matching it against a database of satellite images. $R$ consists of two parts: An encoder block $R_{E}$ for $I_\mathrm{g}$ and a spatial attention module $SA$ that converts obtained local features of street and satellite images into global descriptors. For $R_{E}$, we use a modified ResNet34 [@he2016CVPR] backbone which extracts local features for the street-view input. We do not, however, compute an analogous latent encoding of the satellite inputs $I_\mathrm{ps}$ here. Instead, we reuse the features from the generator encoder $G_{E}(I_\mathrm{ps})$.
48
+
49
+ This is the core idea of our *multi-task* setup: By using the learned features $G_{E}(I_\mathrm{ps})$ for both the synthesis and retrieval tasks, we allow these two aspects of the learning procedure to interact and reinforce each other. The retrieval part by itself is limited to detecting and identifying similar objects. The learned features from the image synthesis task, on the other hand, provide an explicit notion of domain transfer, since we learn to translate images across the two domains. In turn, the retrieval network compels the generator branch to learn features that are eventually useful for image matching -- this yields realistic generated images that also faithfully depict the content of the scene.
50
+
51
+ The generator and retrieval feature encoders $G_{E}$ and $R_{E}$ learn local feature representations on both the polar-transformed satellite and the street images. In order to convert these local features $F_{\mathrm{ps}}:=G_{E}(I_{\mathrm{ps}})$ into a global descriptor $\tilde{F}_{\mathrm{ps}}$, we use a spatial-aware feature aggregation [@shi2019NeurIPS] layer. For a given set of input features, this module predicts $k$ spatial attention masks $A_1,\dots,A_k\in\mathbb{R}^{H\times W}$. These masks $A_i$ are obtained by max-pooling $F_{\mathrm{ps}}\in\mathbb{R}^{H\times W\times C}$ along the channel dimension $C$ and refining the obtained features with two consecutive fully-connected layers. The global feature components $\tilde{F}_{\mathrm{ps},i}\in\mathbb{R}^{C}$ are then defined as a weighted combination of the input features and the attention masks $A_i$: $$\begin{equation}
52
+ \tilde{F}_{ps,i}:=\bigl\langle F_{ps}, A_i\bigr\rangle_F.
53
+ \end{equation}$$ Here, $\langle\cdot,\cdot\rangle_F$ denotes the Frobenius inner product. Finally, we obtain a global descriptor $\tilde{F}_{\mathrm{ps}}$ by stacking $\tilde{F}_{ps,1},\dots,\tilde{F}_{ps,k}$ into one $kC$-dimensional feature vector.
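+
+ For concreteness, a minimal PyTorch sketch of this spatial-aware aggregation is given below (our illustration; the hidden layer size is an assumption, not the exact configuration used in the paper).
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class SpatialAwareAggregation(nn.Module):
+     """Aggregates (B, C, H, W) local features into a (B, k*C) global descriptor."""
+     def __init__(self, height, width, k, hidden=256):
+         super().__init__()
+         # two consecutive fully-connected layers per attention mask A_i
+         self.mask_mlps = nn.ModuleList(
+             [nn.Sequential(nn.Linear(height * width, hidden),
+                            nn.Linear(hidden, height * width))
+              for _ in range(k)])
+
+     def forward(self, feats):                          # feats: (B, C, H, W)
+         B, C, H, W = feats.shape
+         pooled = feats.max(dim=1).values.view(B, -1)   # channel-wise max-pooling
+         parts = []
+         for mlp in self.mask_mlps:
+             mask = mlp(pooled).view(B, 1, H * W)       # attention mask A_i
+             f = feats.view(B, C, H * W)
+             parts.append((f * mask).sum(dim=2))        # Frobenius product <F, A_i>
+         return torch.cat(parts, dim=1)                 # stacked k*C descriptor
+ ```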
54
+
55
+ The goal of our method is to jointly retrieve the correct satellite match for a given query street view, as well as to synthesize the corresponding street view from the satellite image. To that end, we devise the following loss function: $$\begin{equation}
56
+ \label{eq:general}
57
+ \mathcal{L} = \lambda_{cGAN}\mathcal{L}_{cGAN} + \lambda_{L_1}\mathcal{L}_{L_1} + \lambda_{ret}\mathcal{L}_{ret}.
58
+ \end{equation}$$ During training, we then update the weights of the three components of our model $G$, $D$ and $R$ in an adversarial manner: $$\begin{equation}
59
+ \label{eq:minmax}
60
+ \min_{G,R}\max_D \mathcal{L}(G,R,D).
61
+ \end{equation}$$ In the remainder of this section, we describe in detail how the three components of our composite loss in [\[eq:general\]](#eq:general){reference-type="ref+Label" reference="eq:general"} are defined.
62
+
63
+ For the image generation task, we define a conditional GAN loss [@isola2017CVPR]: $$\begin{equation}
64
+ \label{eq:conditional_gan}
65
+ \begin{split}
66
+ \mathcal{L}_{cGAN}(G, D) &= \mathop{\mathrm{\mathbb{E}}}_{I_{\mathrm{ps}}, I_{\mathrm{g}}}\bigl[\log D(I_{\mathrm{ps}}, I_{\mathrm{g}})\bigr] + \\
67
+ & \mathop{\mathrm{\mathbb{E}}}_{I_{\mathrm{ps}}}\bigl[\log (1-D(I_{\mathrm{ps}}, G(I_{\mathrm{ps}})))\bigr].
68
+ \end{split}
69
+ \end{equation}$$ While the discriminator $D$ tries to classify images into real (for $I_\mathrm{g}$) and fake (for $G(I_{\mathrm{ps}})$), the generator $G$ tries to minimize the loss by creating realistic images. The corresponding satellite image $I_\mathrm{ps}$ is applied as a condition to both the discriminator and the generator.
70
+
71
+ The second component in [\[eq:general\]](#eq:general){reference-type="ref+Label" reference="eq:general"} is a $L_1$ reconstruction loss which minimizes the distance between the predicted $G(I_{ps})$ and the ground-truth street-level images $I_{g}$: $$\begin{equation}
72
+ \label{eq:l1}
73
+ \begin{split}
74
+ \mathcal{L}_{L_1}(G)= \mathop{\mathrm{\mathbb{E}}}_{I_{\mathrm{g}}, I_{\mathrm{ps}}} \bigl[ \|I_{\mathrm{g}} - G(I_{\mathrm{ps}}) \|_{1}\bigr].
75
+ \end{split}
76
+ \end{equation}$$ While, in principle, $\mathcal{L}_{cGAN}$ suffices to obtain meaningful translations, $\mathcal{L}_{L_1}$ still helps the network to capture low-level image features and thereby steers the image synthesis to convergence.
77
+
78
+ Finally, we use a supervised retrieval loss for the geo-localization task, which is specified as a weighted soft-margin ranking loss [@hu2018CVPR]: $$\begin{flalign}
79
+ \label{eq:weigthed_soft_margin}
80
+ \mathcal{L}_{ret}(G_{E}, R_{E}, SA)=& \\\mathop{\mathrm{\mathbb{E}}}_{I_{\mathrm{ps}}, I_{\mathrm{g}}} \mathop{\mathrm{\mathbb{E}}}_{\tilde{I_{\mathrm{g}}}\neq I_{\mathrm{g}}}&\bigl[\log (1+
81
+ e^{\alpha \mathrm{d}(I_{\mathrm{g}}, I_{\mathrm{ps}})-\alpha \mathrm{d}(\tilde{I_{\mathrm{g}}}, I_{\mathrm{ps}})})\bigr]\nonumber.
82
+ \end{flalign}$$ Here, the distance metric between a pair of ground and satellite images $I_g$ and $I_{ps}$ is defined as the squared $L_2$ distance between the learned features of both images: $$\begin{equation}
83
+ \label{eq:distancemetric}
84
+ \mathrm{d}(I_{\mathrm{g}}, I_{\mathrm{ps}}):=\|SA(R_{E}(I_{\mathrm{g}}))-SA(G_{E}(I_{\mathrm{ps}}))\|^2_2.
85
+ \end{equation}$$ Intuitively, $\mathcal{L}_{ret}$ aims at decreasing the distance of positive matches in the latent space and pushes negative pairs apart.
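+
+ As an illustration, the weighted soft-margin objective can be computed over a batch of matched street/satellite descriptor pairs as sketched below (the use of all in-batch negatives and the value of $\alpha$ are our assumptions).
+
+ ```python
+ import torch
+
+ def soft_margin_ranking_loss(street_desc, sat_desc, alpha=10.0):
+     """street_desc, sat_desc: (B, D) descriptors; row i of one matches row i of the other."""
+     d = torch.cdist(sat_desc, street_desc, p=2) ** 2   # squared L2 distances, (B, B)
+     pos = d.diag().unsqueeze(1)                         # distances of the positive matches
+     neg_mask = ~torch.eye(d.size(0), dtype=torch.bool, device=d.device)
+     # log(1 + exp(alpha * (d_pos - d_neg))) for every in-batch negative
+     loss = torch.log1p(torch.exp(alpha * (pos - d)))
+     return loss[neg_mask].mean()
+ ```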
86
+
87
+ ![Qualitative comparisons for cross-view image synthesis on the CVUSA benchmark. We compare the images generated by our method with the best baselines X-Fork and X-Seq [@regmi2018CVPR]. Note that, while they focus on synthesizing the first quarter of the street view (which is equivalent to the red, dashed boxes on the target street view), our method is able to create coherent full street-view panoramas.](figures/cvusa_qualitative.png){#fig:qualitative_cvusa_comparison width="\\linewidth"}
2103.13516/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2103.13516/paper_text/intro_method.md ADDED
@@ -0,0 +1,55 @@
1
+ # Introduction
2
+
3
+ Tracking multiple objects, especially humans, is a central problem in visual scene understanding. The intricacy of this task grows with the number of targets to be tracked, and it remains an open area of research. Like other subfields of Computer Vision, the task of Multiple Object Tracking (MOT) has, with the advent of Deep Learning, remarkably advanced its benchmarks [@dave_tao; @Geiger2013VisionMR; @KITTI; @MOTChallenge2015; @MOT16; @MOTS] since its inception [@pets_dataset]. In the recent past, the focus of the MOTChallenge benchmark [@MOT19_CVPR] has shifted towards tracking pedestrians in crowds of higher density. This has several applications in fields such as activity recognition, anomaly detection, robot navigation, visual surveillance, and safety planning.
4
+
5
+ <figure id="fig:bodyvshead" data-latex-placement="t">
6
+ <div class="center">
7
+ <img src="images/bodyvsHead.png" style="width:80.0%" />
8
+ </div>
9
+ <figcaption>Comparison between head detection and full body detection in a crowded scene from CroHD. HeadHunter detects 36 heads whereas Faster-RCNN <span class="citation" data-cites="FRCNN"></span> can detect only 23 pedestrians out of 37 present in this scene.</figcaption>
10
+ </figure>
11
+
12
+ Yet, the performance of trackers on these benchmarks suggests a trend of saturation[^1]. The majority of online tracking algorithms today follow the tracking-by-detection paradigm, and several works have established that the object detector's performance is crucial to the tracker's performance [@tracktor; @SORT; @tracking_survey]. As the pedestrian density in a scene increases, pedestrian visibility decreases due to increasing mutual occlusions, leading to fewer pedestrian detections, as visualized in Figure [1](#fig:bodyvshead){reference-type="ref" reference="fig:bodyvshead"}. To tackle these challenges while still tracking humans efficiently in densely crowded environments, we rekindle the task of MOT by tracking humans via their most distinctly visible part - their heads. To that end, we propose a new dataset, *CroHD, Crowd of Heads Dataset*, comprising 9 sequences of 11,463 frames with head bounding boxes annotated for tracking. We hope that this new dataset opens up opportunities for promising future research to better understand global pedestrian motion in dense crowds.
13
+
14
+ Supplementing this, we develop two new baseline methods on CroHD: a head detector, *HeadHunter*, and a head tracker, *HeadHunter-T*. We design HeadHunter specifically for head detection in crowded environments, distinct from standard pedestrian detectors, and demonstrate state-of-the-art performance on an existing head detection dataset. HeadHunter-T extends HeadHunter with a Particle Filter framework and a light-weight re-identification module for head tracking. To validate that HeadHunter-T is a strong baseline tracker, we compare it on CroHD with three published top-performing pedestrian trackers from the crowded MOTChallenge benchmark. We further compare tracking by head detection with tracking by body detection to illustrate the usefulness of our contribution.
15
+
16
+ To establish correspondence between a tracking algorithm and pedestrian motion, it is necessary to understand the adequacy of various trackers in successfully representing ground truth pedestrian trajectories. We thus propose a new metric, *IDEucl*, to evaluate tracking algorithms based on their consistency in maintaining the same identity for the longest length of a ground truth trajectory in the image coordinate space. *IDEucl* is compatible with our dataset and can be extended to any tracking benchmark recorded with a static camera.\
17
+ In summary, this paper makes the following contributions: **(i)** We present a new dataset, CroHD, with annotated pedestrian heads for tracking in dense crowds, **(ii)** We propose a baseline head detector for CroHD, HeadHunter, **(iii)** We develop HeadHunter-T, by extending HeadHunter, as the baseline head tracker for CroHD, **(iv)** We propose a new metric, IDEucl, to evaluate the efficiency of trackers in representing a ground truth trajectory, and finally, **(v)** We demonstrate HeadHunter-T to be a strong baseline by comparing it with three existing state-of-the-art trackers on CroHD.
18
+
19
+ # Method
20
+
21
+ In this section, we elucidate the design and working of HeadHunter and HeadHunter-T.
22
+
23
+ As detection is the pivotal step in object tracking, we designed HeadHunter differently from traditional object detectors [@DPM; @FRCNN; @SDP] by taking into account the nature and size of the objects we detect. HeadHunter is an end-to-end two-stage detector with three functional characteristics. First, it extracts features at multiple scales using a Feature Pyramid Network (FPN) [@FPN] with a ResNet-50 [@ResNet] backbone. Images of heads are homogeneous in appearance and often, in crowded scenes, resemble extraneous objects (typically background). For that reason, inspired by the head detection literature, we augment each individual FPN level with a Context-sensitive Prediction Module (CPM) [@PyramidBox]. This contextual module consists of 4 Inception-ResNet-A blocks [@inception_resnet] with 128 and 256 filters for $3\times3$ convolutions and 1024 filters for $1\times1$ convolutions. As detecting pedestrian heads in crowded scenes is a problem of detecting many small, adjacently placed objects, we use Transpose Convolutions on the features at all pyramid levels to upscale the spatial resolution of each feature map. Finally, we use a Faster R-CNN head, with a Region Proposal Network (RPN) generating object proposals while the regression and classification heads provide location offsets and confidence scores, respectively. The architecture of our proposed network is summarised in Figure [4](#fig:our_architecture){reference-type="ref" reference="fig:our_architecture"}.
24
+
25
+ <figure id="fig:our_architecture" data-latex-placement="htb">
26
+ <div class="center">
27
+ <img src="images/Architecture_final.png" />
28
+ </div>
29
+ <figcaption>An overview of the architecture of our proposed head detector, HeadHunter. We augment the features extracted using the FPN (C4…P4) with a Context-Sensitive feature extractor followed by a series of transpose convolutions to enhance the spatial resolution of the feature maps. Cls and Reg denote the Classification and Regression branches of Faster-RCNN <span class="citation" data-cites="FRCNN"></span> respectively.</figcaption>
30
+ </figure>
31
+
32
+ We extend HeadHunter with two motion models and a color-histogram-based re-identification module for head tracking. Our motion models consist of a Particle Filter to predict the motion of targets and Enhanced Correlation Coefficient Maximization [@ECC] to compensate for camera motion in the sequence. A Particle Filter is a Sequential Monte Carlo (SMC) process that recursively estimates the state of a dynamic system. In our implementation, we represent the posterior density function by a set of bounding box proposals for each target, referred to as particles. The use of a Particle Filter enables us to simultaneously model the non-linear motion arising from rapid head movements and pedestrian displacement across frames.
33
+
34
+ **Notation:** Given a video sequence $\mathcal{I}$, we denote the ordered set of frames in it as $\{I_{0},\cdots, I_{T-1}\}$, where $T$ is the total number of frames in the sequence. Throughout the paper, we use subscript notation to represent the time instance in a video sequence. In a frame $I_{t}$ at time $t$, the active tracks are denoted by $\mathbf{T}_{t}=\{\mathbf{b}_{t}^{1}, \mathbf{b}_{t}^{2}, \ldots, \mathbf{b}_{t}^{N}\}$, where $\mathbf{b}_{t}^{k}$ refers to the bounding box of the $k^{th}$ active track, denoted as $\mathbf{b}_{t}^{k}=\mathbf{\left(x_{t}^{k}, y_{t}^{k}, w_{t}^{k}, h_{t}^{k}\right)}$. At time $t$, the $i^{th}$ particle corresponding to the $k^{th}$ track is denoted by $\mathbf{p}_{t}^{k,i}$ and its respective importance weight by $\mathbf{w}_{t}^{k,i}$. $\mathbf{L}_{t}$ and $\mathbf{N}_{t}$ denote the sets of inactive tracks and newly initialized tracks, respectively.
35
+
36
+ **Particle Initialization:** New tracks are initialized at the start of the sequence, $I_{0}$, from the detections provided by HeadHunter, and at frame $I_{t}$ for detections that cannot be associated with an existing track. A plausible association of a new detection with an existing track is resolved by Non-Maximum Suppression (NMS). The importance weights of all particles are set to be equal at the time of initialization. Each particle is a 4-dimensional bounding box sample, with the state of each target modelled as $(\mathbf{x}_{c}, \mathbf{y}_{c}, \mathbf{w}, \mathbf{h}, \mathbf{\dot{x}}_{c}, \mathbf{\dot{y}}_{c}, \mathbf{\dot{w}}, \mathbf{\dot{h}})$, where $(\mathbf{x}_{c}, \mathbf{y}_{c}, \mathbf{w}, \mathbf{h})$ denote the centroid, width, and height of the bounding box and the dotted terms their respective velocities.
37
+
38
+ **Prediction and Update:** At time $t>0$, we perform RoI pooling on the current frame's feature map, $\mathbf{F}_{t}$, with the bounding boxes of the particles corresponding to active tracks. Each particle's location in the current frame is then adjusted using the regression head of HeadHunter, given its location in the previous frame. The importance weight of each particle is set to its foreground classification score from the classification head of HeadHunter. Our prediction step is similar to Tracktor [@tracktor], applied to particles instead of tracks. Given the new location and importance weight of each particle, the estimated position of the $k^{th}$ track is computed as the weighted mean of its particles,
39
+
40
+ $$\begin{align}
41
+ \mathbf{S}_{t}^{k} = \frac{1}{M} \sum_{i=1}^{M} \mathbf{w}_{t}^{k,i} \mathbf{p}_{t}^{k,i}
42
+ \end{align}$$
43
+
44
+ **Resampling:** Particle Filtering frameworks are known to suffer from degeneracy problems [@PF_tutorial], and as a result we resample to replace particles with low importance weights. The $M$ particles corresponding to the $k^{th}$ track are re-sampled when the effective number of particles that meaningfully contribute to the probability distribution over the head's location, $\hat{\mathbf{N}}_{\mathrm{eff}}^{k}$, falls below a threshold, where $$\begin{equation}
45
+ \hat{\mathbf{N}}_{\mathrm{eff}}^{k}=\frac{1}{\sum_{i=1}^{M}(\mathbf{w}^{k,i})^{2}}
46
+ \end{equation}$$
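+
+ For illustration, a small sketch of this degeneracy check followed by multinomial resampling (the threshold value and the resampling scheme are our assumptions):
+
+ ```python
+ import numpy as np
+
+ def resample_if_degenerate(particles, weights, threshold):
+     """particles: (M, 4) boxes; weights: (M,) normalised importance weights."""
+     n_eff = 1.0 / np.sum(weights ** 2)        # effective number of particles
+     if n_eff < threshold:
+         M = len(weights)
+         idx = np.random.choice(M, size=M, p=weights)   # multinomial resampling
+         particles = particles[idx]
+         weights = np.full(M, 1.0 / M)         # reset to uniform weights
+     return particles, weights
+ ```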
47
+
48
+ **Cost Matching:** []{#costmatching label="costmatching"} Tracks are set to inactive when the score of their estimated state $\mathbf{S}_{t}^{a}$ falls below a threshold $\lambda_{nms}^{reg}$. The positions of such tracks are predicted following a Constant Velocity Assumption (CVA), and their tracking is resumed if they have a convincing similarity with a newly detected track. The similarity $\mathbf{C}$ is defined as
49
+
50
+ $$\begin{equation}
51
+ \mathbf{C} = \alpha \cdot IoU(\mathbf{L}_{t}^{i}, \mathbf{N}_{t}^{j}) + \beta \cdot d^{1}(\mathbf{L}^{i}_{t}, \mathbf{N}^{j}_{t})
52
+ \label{eq:color_reid}
53
+ \end{equation}$$
54
+
55
+ where $\mathbf{L}_{t}^{i}$ and $\mathbf{N}_{t}^{j}$ are the $i^{th}$ lost track and the $j^{th}$ new track, respectively, and $d^{1}$ denotes the Bhattacharyya distance between their color histograms in HSV space [@numiaro_histogram]. Once a track is re-identified, we re-initialize its particles around its new position.
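+
+ A rough sketch of this re-identification cost is given below (our own illustration; the OpenCV histogram settings and the weights $\alpha$, $\beta$ are assumptions).
+
+ ```python
+ import cv2
+ import numpy as np
+
+ def iou(a, b):
+     """a, b: boxes (x, y, w, h); intersection-over-union."""
+     x1, y1 = max(a[0], b[0]), max(a[1], b[1])
+     x2, y2 = min(a[0] + a[2], b[0] + b[2]), min(a[1] + a[3], b[1] + b[3])
+     inter = max(0, x2 - x1) * max(0, y2 - y1)
+     return inter / float(a[2] * a[3] + b[2] * b[3] - inter)
+
+ def similarity(frame, lost_box, new_box, alpha=0.5, beta=0.5):
+     """Cost C between a lost track and a newly detected track."""
+     def hsv_hist(box):
+         x, y, w, h = [int(v) for v in box]
+         patch = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
+         hist = cv2.calcHist([patch], [0, 1], None, [30, 32], [0, 180, 0, 256])
+         return cv2.normalize(hist, hist).flatten()
+     d1 = cv2.compareHist(hsv_hist(lost_box), hsv_hist(new_box),
+                          cv2.HISTCMP_BHATTACHARYYA)
+     return alpha * iou(lost_box, new_box) + beta * d1
+ ```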
2104.00764/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2021-09-09T16:48:29.671Z" agent="5.0 (Macintosh; Intel Mac OS X 11_4_0) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/15.1.3 Chrome/89.0.4389.128 Electron/12.1.0 Safari/537.36" etag="soMWpHJPNzb47zHrVMUw" version="15.1.3" type="device"><diagram id="c9oWNbHQZnioFB_LdEoV" name="Page-1">7Vpbc6M2FP41PG7HgLnkMXEu3Znd6c54d9p9VEAGtQKxQsR2f30lEFeDTYIxOE0mk6CjC+J85/t0EFL0VbB7oiDyvxIXYkVbuDtFv1c07UbX+F9h2GcG07Azg0eRm5nU0rBG/0JpXEhrglwY1xoyQjBDUd3okDCEDqvZAKVkW2+2Ibh+1wh48o6L0rB2AIYHzf5ELvMzq21UWv8Okefnd1YXsiYAeWM5ROwDl2wr99IfFH1FCWHZVbBbQSx8l/slG+ixo7aYGIUh69Ph2fuM4s9GuFUd+uTe4y1++eOTHOUF4EQ+8FrRTMwHvIuTZzFrtpeuMH8lYqp3GxKyT3EK1C1voC4jjvVdWc+vPPFfzQfiU0rHyszSGcWwGiVJ6EIxSZVXb33E4DoCjqjd8pDiNp8FWFbHjJJ/CiQ0OZ1HECAsAuxL4iAX8PFXJIyJuENaL8NKXYoywnhFMKHp3XWY/hQjV2p0U7/R3WLGVUfnXoOUwV3FJB3/BEkAGd3zJrJWs2RU7PPoluVtGVN5E78STktpAzKKvWLkEmh+IbF+Be76Ae6KthS/xoo/ElOsu3XyvCE0CRTrXlY1gUsbtqCT+zAkIWw4XJoARl7IixhuxAjCj4hT7laaA+S6KXZtwVCGy+IN6J8BS7sBpdoPSm0sKFuQgS7XMFmULq+7jVDmE4+EAH8hJJLo/Q0Z20tngYSROrZwh9hfovtvliGLP+Vo4vp+Vy3sZaHT3TFJqAOPCZNcN8SjHEWFQgwYeqnLdZuTZddvBPG5lMy07TqcTcoxQD3IZK8GVMU03o5e/pw1Jr5egA/U93tf+eXLUiQuOUgAY4iJR0HAG0aQIv5wkDbrvpUVp9R6g3YwX9OvWL1Vs0F5a2L1zrWkdRW9JL+5T+k+62XkxZ/5gKJQdktLZT/3ViRnpT5xyyMSXhiqG3Jly2h7rKF1boEZBql+YRUfH+RLgHj2VWIQiGpbWiXFPALhoHT6R1XP08FyQZ8qXV+823RdbeR4hZJXBN+8qOCrBzjMJseb4Rqw7Csf5rzkY3kB+RiqE1qXTnSo0xD5mODtrvmmPj31tQtTX5ue+jFnLstbOBjEMXJys2ymDlMIo69C2PNSCOMKFEK/oEJcaYKhm3NTGePdvX6MmF+YfdVjXuJhdovHoK2mzpRgwq2mKVIHfW7bROYHqfuT2rpOUlsjkbpzFf9/kVpfzI3U1jWQ2noVq3vm+yOS375O8tsjkX85Q/Jfa6rf/PQ/uYC0BU392/93n0Lgfnz5P/Hl3+6H5Hhf/m866S+entdsgFOn/4GjKtx/q1o0o+dHDGlcCZ5CSbJJdUjJHGLq/DGiGRPHSP5O0sZ2cUTtecMRK7EL4ZYV5B+R/w73cLoIDFSAPO15bXaonQdttfHKV+B4Yh9nPLgPD/QdgxuFDk5iREKBtfHALTzD+UC+D/LNMyEtO3j2RZHXunPB2nZrf/JXAqIp5Ud3cK8zSM4QFEttbnJw5JznKTl4ZxpwjlT+5jTnzwQvL5Ynw7OzhuXxev3hPw==</diagram></mxfile>
2104.00764/main_diagram/main_diagram.pdf ADDED
Binary file (15.3 kB). View file
 
2104.00764/paper_text/intro_method.md ADDED
@@ -0,0 +1,98 @@
1
+ # Introduction
2
+
3
+ Crypto markets are *"online forums where goods and services are exchanged between parties who use digital encryption to conceal their identities"* [\(Martin,](#page-10-0) [2014\)](#page-10-0). They are typically hosted on the Tor network, which guarantees anonymization in terms of IP and location tracking. The identity of individuals on a crypto-market is associated only with a username; therefore, building trust on these networks does not follow conventional models prevalent in eCommerce. Interactions on these forums are facilitated by means of text posted by their users. This makes the analysis of textual style on these forums a compelling problem.
4
+
5
+ Stylometry is the branch of linguistics concerned with the analysis of authors' style. Text stylometry was initially popularized in the area of forensic linguistics, specifically for the problems of author profiling and author attribution [\(Juola,](#page-10-1) [2006;](#page-10-1) [Rangel et al.,](#page-10-2) [2013\)](#page-10-2). Traditional techniques for authorship analysis on such data rely upon the existence of long text corpora from which features such as the frequency of words, capitalization, punctuation style, word and character n-grams, and function word usage can be extracted and subsequently fed into any statistical or machine learning classification framework, acting as an author's 'signature'. However, such techniques find limited use in short text corpora in a heavily anonymized environment.
8
+
9
+ Advancements in using neural networks for character and word-level modeling for authorship attribution aim to deal with the scarcity of easily identifiable 'signature' features and have shown promising results on shorter text [\(Shrestha et al.,](#page-10-3) [2017\)](#page-10-3). [Andrews and Witteveen](#page-9-0) [\(2019\)](#page-9-0) drew upon these advances in stylometry to propose a model for building representations of social media users on Reddit and Twitter. Motivated by the success of such approaches, we develop a novel methodology for building authorship representations for posters on various darknet markets. Specifically, our key contributions include:
10
+
11
+ First, a *representation learning* approach that couples temporal content stylometry with access identity (by leveraging forum interactions via *meta-path graph context information*) to model and enhance user (author) representation;
12
+
13
+ Second, a novel framework for training the proposed models in a *multitask setting* across multiple darknet markets, using a small dataset of labeled migrations, to refine the representations of users within each individual market, while also providing a method to correlate users across markets;
14
+
15
+ Third, a detailed drill-down *ablation study* discussing the impact of various optimizations and highlighting the benefits of both graph context and multitask learning on forums associated with four darknet markets - *Black Market Reloaded*, *Agora Marketplace*, *Silk Road*, and *Silk Road 2.0* - when compared to the state-of-the-art alternatives.
16
+
17
+ # Method
18
+
19
+ Motivated by the success of social media user modeling using combinations of multiple posts by each user [\(Andrews and Bishop,](#page-9-8) [2019;](#page-9-8) [Noorshams et al.,](#page-10-12) [2020\)](#page-10-12), we model posts on darknet forums using *episodes*. Each *episode* consists of the textual content, time, and contextual information from multiple posts. A neural network architecture $f_{\theta}$ maps each episode to a combined representation $e \in \mathbb{R}^{E}$. The model used to generate this representation is trained on various metric learning tasks characterized by a second set of parameters $g_{\phi} : \mathbb{R}^{E} \rightarrow \mathbb{R}$. We design the metric learning task to ensure that episodes having the same author have *similar* embeddings. Figure [1](#page-2-0) describes the architecture of this workflow, and the following sections describe the individual components and corresponding tasks. Note that our base modeling framework is inspired by the social media user representations built by [Andrews and Bishop](#page-9-8) [\(2019\)](#page-9-8) for a single task. We add meta-path embeddings and multitask objectives to enhance the capabilities of SYSML. Our implementation is available at: [https://github.com/pranavmaneriker/SYSML](https://github.com/pranavmaneriker/SYSML).
+
+ <span id="page-1-0"></span><sup>1</sup> A single author can have multiple user accounts, which are considered as *sybils*.
24
+
25
+ <span id="page-2-0"></span>![](_page_2_Figure_1.jpeg)
26
+
27
+ Figure 1: Overall SYSML Workflow.
28
+
29
+ Each episode $e$ of length $L$ consists of multiple tuples of texts, times, and contexts, $e = \{(t_i, \tau_i, c_i) \mid 1 \le i \le L\}$. Component embeddings map individual components to vector spaces. All embeddings are generated from the forum data only; no pretrained embeddings are used.
30
+
31
+ Text Embedding First, we tokenize every input text post using either a character-level or byte-level tokenizer. A one-hot encoding layer followed by an embedding matrix $E_t$ of dimensions $|V| \times d_t$, where $V$ is the token vocabulary and $d_t$ is the token embedding dimension, embeds an input sequence of tokens $T_0, T_1, \dots, T_{n-1}$. We get a sequence embedding of dimension $n \times d_t$. Following this, we use $f$ sliding window filters, with filter sizes $F = \{2, 3, 4, 5\}$, to generate feature maps which are then fed to a max-over-time pooling layer, leading to a $|F| \times f$ dimensional output (one per filter). Finally, a fully connected layer generates the embedding for the text sequence, with output dimension $d_t$. A dropout layer prior to the final fully connected layer prevents overfitting, as shown in Figure [2.](#page-2-1)
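+
+ A condensed PyTorch sketch of this text CNN follows (our illustration; the vocabulary size, number of filters, and dropout rate are assumptions).
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class TextCNN(nn.Module):
+     """Token embedding -> conv filters of widths 2-5 -> max-over-time pooling -> FC."""
+     def __init__(self, vocab_size, d_t=128, f=64, widths=(2, 3, 4, 5), p_drop=0.5):
+         super().__init__()
+         self.embed = nn.Embedding(vocab_size, d_t)
+         self.convs = nn.ModuleList([nn.Conv1d(d_t, f, kernel_size=w) for w in widths])
+         self.drop = nn.Dropout(p_drop)
+         self.fc = nn.Linear(len(widths) * f, d_t)
+
+     def forward(self, tokens):                     # tokens: (B, n) token ids
+         x = self.embed(tokens).transpose(1, 2)     # (B, d_t, n)
+         pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
+         return self.fc(self.drop(torch.cat(pooled, dim=1)))   # (B, d_t)
+ ```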
32
+
33
+ Time Embedding The time information for each post corresponds to when the post was created and is available at different granularities across darknet market forums. To have a consistent time embedding across different granularities, we only consider the least granular date information (date) available on all markets. We use the day of the week for each post to compute the time embedding.
34
+
35
+ <span id="page-2-1"></span>![](_page_2_Figure_7.jpeg)
36
+
37
+ Figure 2: Text Embedding CNN [\(Kim,](#page-10-6) [2014\)](#page-10-6).
38
+
39
+ The time embedding is obtained by selecting the corresponding embedding vector of dimension $d_{\tau}$ from the matrix $E_w$.
40
+
41
+ Structural Context Embedding The context of a post refers to the threads that it may be associated with. Past work [\(Andrews and Bishop,](#page-9-8) [2019\)](#page-9-8) used the subreddit as the context for a Reddit post. In a similar fashion, we encode the subforum of a post as a one-hot vector and use it to generate a $d_c$ dimensional context embedding. In the previously mentioned work, this embedding is initialized randomly. We deviate from this setup and use an alternative approach based on a *heterogeneous graph* constructed from forum posts to initialize this embedding.
42
+
43
+ Definition 3.1 (Heterogeneous Graph). A heterogeneous graph $G = (V, E, T)$ is one where each node $v$ and edge $e$ are associated with a 'type' $T_i \in T$, where the association is given by mapping functions $\phi(v) : V \to T_V$, $\psi(e) : E \to T_E$, with $|T_V| + |T_E| > 2$.
44
+
45
+ The constraint on $T_V$ and $T_E$ ensures that at least one of them has more than one element (making the graph heterogeneous). Specifically, we build a graph in which there are four types of nodes: user (U), subforum (S), thread (T), and post (P), and each edge indicates either the posting of a new thread (U-T), a reply to an existing post (U-P), or an inclusion (T-P, S-T) relationship. To learn the node embeddings in such heterogeneous graphs, we leverage the metapath2vec [\(Dong et al.,](#page-9-6) [2017\)](#page-9-6) framework with specific meta-path schemes designed for darknet forums. Each meta-path scheme can incorporate specific semantic relationships into node embeddings. For example, Figure [3](#page-3-0) shows an instance of a meta-path 'UTSTU', which connects two users posting on threads in the same subforum and goes through the relevant threads and subforum. Our analysis is user focused; to capture user behavior, we consider *all* meta-paths starting from and ending at a user node. Thus, to fully capture the semantic relationships in the heterogeneous graph, we use seven meta-path schemes: UPTSTPU, UTSTPU, UPTSTU, UTSTU, UPTPU, UPTU, and UTPU. As a result, the learned embeddings will preserve the semantic relationships between each subforum, the included posts, and the relevant users (authors). Metapath2vec generates embeddings by maximizing the probability of heterogeneous neighbourhoods, normalizing it across typed contexts.
46
+
47
+ <span id="page-3-0"></span>![](_page_3_Figure_0.jpeg)
48
+
49
+ Figure 3: An instance of meta-path 'UTSTU' in a subgraph of the forum graph.
50
+
51
+ The optimization objective is:
52
+
53
+ $$\arg \max_{\theta} \prod_{v \in V} \prod_{t \in T_v} \prod_{c_t \in N_t(v)} p(c_t | v; \theta)$$
54
+
55
+ where $\theta$ denotes the learned embeddings and $N_t(v)$ denotes $v$'s neighborhood containing the $t^{th}$ type of node. In practice, this is equivalent to running a word2vec (Mikolov et al., 2013) style skip-gram model over the random walks generated from the meta-path schemes when $p(c_t|v;\theta)$ is defined as a softmax function. Further details of metapath2vec can be found in the paper by Dong et al. (2017).
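+
+ To make the walk generation concrete, a simplified sketch for one scheme (e.g., 'UTSTU') is shown below; the adjacency representation is our assumption, and the resulting walks would then be fed to a skip-gram model.
+
+ ```python
+ import random
+
+ def metapath_walk(adj, start_user, scheme="UTSTU"):
+     """adj[src_type][dst_type] maps a node id to its neighbours of dst_type,
+     e.g. adj['U']['T'][user] -> threads the user posted in."""
+     walk, node = [start_user], start_user
+     for cur_t, nxt_t in zip(scheme, scheme[1:]):
+         candidates = adj[cur_t][nxt_t].get(node, [])
+         if not candidates:            # dead end: abandon this walk
+             return None
+         node = random.choice(candidates)
+         walk.append(node)
+     return walk
+ ```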
56
+
57
+ The embeddings of each component of a post are concatenated into a $d_e = d_t + d_\tau + d_c$ dimensional embedding. An episode with $L$ posts therefore has an $L \times d_e$ matrix of post embeddings. We generate a final embedding for each episode from the post embeddings using two different models. In **Mean Pooling**, the episode embedding is the mean of the $L$ post embeddings, resulting in a $d_e$ dimensional episode embedding. For the **Transformer**, the post embeddings are fed as the inputs to a transformer model (Devlin et al., 2019; Vaswani et al., 2017), with each post embedding acting as one element in a sequence of total length $L$. We follow the architecture proposed by Andrews and Bishop (2019) and omit a detailed description of the transformer architecture for brevity (Figure 4 shows an overview). Note that we do not use positional embeddings within this pooling architecture. The parameters of the component-wise models and the episode pooling model together define the episode embedding function $f_{\theta}: \{(t, \tau, c)\}^{L} \to \mathbb{R}^{E}$.
58
+
59
+ An important element of our methodology is the ability to learn a distance function over user representations.
60
+
61
+ <span id="page-3-1"></span>![](_page_3_Figure_9.jpeg)
62
+
63
+ Figure 4: Architecture for Transformer Pooling.
64
+
65
+ We use the username as a label for each episode $e$ within the market $M$ and denote each username as a unique label $u \in U_M$. Let $W \in \mathbb{R}^{|U_M| \times d_E}$ be a matrix denoting the weights corresponding to a specific metric learning method, and let $x^* = \frac{x}{||x||}$. An example of a metric learning loss would be Softmax Margin, i.e., cross-entropy based softmax loss.
66
+
67
+ $$P(u|e) = \frac{e^{W_u d_e}}{\sum\limits_{j=1}^{|U_M|} e^{W_j d_e}}$$
68
+
69
+ We also explore alternative metric learning approaches such as Cosface (CF) (Wang et al., 2018), ArcFace (AF) (Deng et al., 2019), and MultiSimilarity (MS) (Wang et al., 2019).
70
+
71
+ The components discussed in the previous sections are combined together to generate an embedding, and the aforementioned tasks are used to train these models. Given an episode $e = \{(t_i, \tau_i, c_i) \mid 1 \le i \le L\}$, the component-wise embedding modules generate embeddings for the text, time, and context, respectively. The pooling module combines these embeddings into a single embedding $e \in \mathbb{R}^E$. We define $f_{\theta}$ as the combination of the transformations that generate an embedding from an episode. Using a final metric learning loss corresponding to the task-specific $g_{\phi}$, we can train the parameters $\theta$ and $\phi$. The framework, as defined in Figure 1, results in a model trainable for a single market $M_i$. Note that the first half of the framework (i.e., $f_{\theta}$) is sufficient to generate embeddings for episodes, making the module invariant to the choice of $g_{\phi}$. However, the embeddings learned this way may not be compatible for comparisons across different markets, which motivates our multi-task setup.
72
+
73
+ We use authorship attribution as the metric learning task for each market. Further, a majority of the embedding modules are shared across the different markets. Thus, in a multi-task setup, the model can share episode embedding weights (except context, which is market dependent) across markets. A shared BPE vocabulary allows weight sharing for text embedding on the different markets. However, the task-specific layers are not shared (different authors per dataset), and sharing $f_{\theta}$ does not guarantee alignment of embeddings across datasets (to reflect migrant authors). To remedy this, we construct a small, manually annotated set of labeled samples of authors known to have migrated from one market to another. Additionally, we add pairs of authors known to be distinct across datasets. The cross-dataset consists of all episodes of authors that were manually annotated in this fashion. The first step in the multi-task approach is to choose a market $(\mathcal{T}_M)$ or cross-market $(\mathcal{T}_{cr})$ metric learning task $\mathcal{T}_i \sim \mathcal{T} = \{\mathcal{T}_M, \mathcal{T}_{cr}\}$. Following this, a batch of $N$ episodes $\mathcal{E} \sim \mathcal{T}_i$ is sampled from the corresponding task. The embedding module generates the embedding for each episode, $f_{\theta}^{N}: \mathcal{E} \to \mathbb{R}^{N \times E}$. Finally, the task-specific metric learning layer $g_{\phi}^{\mathcal{T}_i}$ is selected and a task-specific loss is backpropagated through the network. Note that in the cross-dataset, new labels are defined based on whether different usernames correspond to the same author, and episodes are sampled from the corresponding markets. Figure 5 demonstrates the shared layers and the use of cross-dataset samples. The overall loss aggregates the losses across tasks: $\mathcal{L} = \mathbb{E}_{\mathcal{T}_i \sim \mathcal{T}, \ \mathcal{E} \sim \mathcal{T}_i} [\mathcal{L}_i(\mathcal{E})].$
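+
+ A schematic sketch of one iteration of this multi-task loop is shown below (uniform task sampling, the batch size, and the dataset/optimizer interfaces are our assumptions).
+
+ ```python
+ import random
+
+ def train_step(tasks, shared_encoder, task_heads, optimizer, batch_size=32):
+     """tasks: dict mapping each market (plus 'cross') to a dataset that
+     exposes sample_batch(batch_size) -> (episodes, author_labels)."""
+     name = random.choice(list(tasks))             # sample a task T_i ~ T
+     episodes, labels = tasks[name].sample_batch(batch_size)
+     embeddings = shared_encoder(episodes)         # shared f_theta
+     loss = task_heads[name](embeddings, labels)   # task-specific g_phi (metric loss)
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return loss.item()
+ ```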
74
+
75
+ Munksgaard and Demant (2016) studied the politics of darknet markets using structured topic models on the forum posts across six large markets. We start with this dataset and perform basic preprocessing to clean up the text for our purposes. We focus on four of the six markets - *Silk Road* (**SR**), *Silk Road* 2.0 (**SR2**), *Agora Marketplace* (**Agora**), and *Black Market Reloaded* (**BMR**). We exclude 'The Hub' as it is not a standard forum but an 'omniforum' (Munksgaard and Demant, 2016) for discussion of other marketplaces.
76
+
77
+ <span id="page-4-0"></span>![](_page_4_Figure_5.jpeg)
78
+
79
+ Figure 5: Multi-task setup. Shaded nodes are shared
80
+
81
+ 'The Hub' also has a significantly different structure, which is beyond the scope of this work. We also exclude 'Evolution Marketplace' since none of its posts had PGP information present in them and it was thus unsuitable for migration analysis.
82
+
83
+ **Pre-processing** We add simple regex and rule based filters to replace quoted posts (i.e., posts that are being replied to), PGP keys, PGP signatures, hashed messages, links, and images each with different special tokens ([QUOTE], [PGP PUBKEY], [PGP SIGNATURE], [PGP ENCMSG], [LINK], [IMAGE]). We retain the subset of users with sufficient posts to create at least two episodes' worth of posts. In our analysis, we focus on episodes of up to 5 posts. To avoid leaking information across time, we split the dataset into approximately equal-sized train and test sets with a chronologically midway splitting point such that half the posts on the forum are before that time point. Statistics for the data after pre-processing are provided in Table 1. Note that the test data can contain authors not seen during training.
84
+
85
+ <span id="page-4-1"></span>
86
+
87
+ | Market | Train Posts | Test Posts | #Users train | #Users test |
88
+ |--------|-------------|------------|--------------|-------------|
89
+ | SR | 379382 | 381959 | 6585 | 8865 |
90
+ | SR2 | 373905 | 380779 | 5346 | 6580 |
91
+ | BMR | 30083 | 30474 | 855 | 931 |
92
+ | Agora | 175978 | 179482 | 3115 | 4209 |
93
+
94
+ Table 1: Dataset Statistics for Darkweb Markets.
95
+
96
+ **Cross-dataset Samples** Past work has established PGP keys as strong indicators of shared authorship on darkweb markets [\(Tai et al.,](#page-10-16) [2019\)](#page-10-16). To identify different user accounts across markets that correspond to the same author, we follow a two-step process. First, we select the posts containing a PGP key, and then pair together users who have posts containing the same PGP key. Following this, we still have a large number of potentially incorrect matches (including scenarios such as information sharing posts by users sharing the PGP key of known vendors from a previous market). We manually check each pair to identify matches that clearly indicate whether the same author or different authors posted them, leading to approximately 100 reliable labels, with 33 pairs matched as migrants across markets.
2104.14207/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2104.14207/paper_text/intro_method.md ADDED
@@ -0,0 +1,90 @@
1
+ # Method
2
+
3
+ We propose a novel multi-task learning framework that leverages instance-level segmentation annotations, obtained via a *zero-shot* transfer mechanism, to effectively generate pixel-level groundings for the objects within a scene graph. Our approach, highlighted in Figure [2](#fig:model_overview){reference-type="ref" reference="fig:model_overview"}, builds on existing scene graph generation methods, but is *agnostic* to the underlying architecture and can be easily integrated with existing state-of-the-art approaches.
4
+
5
+ Let $\mathcal{D}^{g} = \{ (\mathbf{x}^{g}_i, \mathbf{G}^{g}_i) \}$ denote the dataset containing graph-level annotations $\mathbf{G}^{g}_i$ for each image $\mathbf{x}^{g}_i$. We represent the scene graph annotation $\mathbf{G}^{g}_i$ as a tuple of object and relations, $\mathbf{G}^{g}_i = (\mathbf{O}^{g}_i, \mathbf{R}^{g}_i)$, where $\mathbf{O}^{g}_i \in \mathbb{R}^{n_i \times d_g}$ represents object labels and $\mathbf{R}^{g}_i \in \mathbb{R}^{n_i \times n_i \times d_g'}$ represents relationship labels; $n_i$ is the number of objects in an image $\mathbf{x}^{g}_i$; $d_g$ and $d_g'$ are the total number of possible object and relation labels, respectively, in the dataset.
6
+
7
+ In addition, we assume availability of the dataset $\mathcal{D}^{m} = \{(\mathbf{x}^{m}_i, \mathbf{M}^{m}_i)\}$, where each image $\mathbf{x}^{m}_i$ has corresponding instance-level segmentation annotations $\mathbf{M}^{m}_i$. Finally, $d_m$ are the total number of possible object labels in $\mathcal{D}^{m}$.
8
+
9
+ As is the case with existing scene graph datasets like Visual Genome [@krishna2017visual], $\mathcal{D}^{g}$ does not contain any instance-level segmentation masks. Also, $\mathcal{D}^{m}$ can be any dataset (like MS COCO [@lin2014microsoft]). Note that, in general, the images in the two datasets, $\mathcal{D}^{g}$ and $\mathcal{D}^{m}$, are disjoint and the object classes in the two datasets may have minimal overlap (e.g., MS COCO provides segmentations for 80 object categories, while Visual Genome provides object bounding boxes for 150 object categories[^3]).
10
+
11
+ For brevity, we drop subscript $i$ for the rest of the paper.
12
+
13
+ []{#sec:sg-generation label="sec:sg-generation"} Given an image $\mathbf{x}^{g} \in \mathcal{D}^{g}$, a typical scene graph model defines the distribution over the scene graph $\mathbf{G}^{g}$ as follows, $$\begin{align}
14
+ \label{eq:sg-factor}
15
+ \text{Pr}\left(\mathbf{G}^{g}| \mathbf{x}^{g} \right) = \text{Pr}\left(\mathbf{B}^{g}|\mathbf{x}^{g} \right)\cdot
16
+ \text{Pr}\left(\mathbf{O}^{g} |\mathbf{B}^{g},\mathbf{x}^{g} \right)\cdot
17
+ \text{Pr}\left(\mathbf{R}^{g}|\mathbf{O}^{g},\mathbf{B}^{g},\mathbf{x}^{g} \right)
18
+ \end{align}$$ The bounding box network $\text{Pr}\left(\mathbf{B}^{g}|\mathbf{x}^{g} \right)$ extracts a set of boxes $\mathbf{B}^{g} = \{\mathbf{b}^{g}_{1}, \dots, \mathbf{b}^{g}_{n} \}$ corresponding to regions of interest. This can be achieved using standard object detectors such as Faster R-CNN [@ren2015faster] or Detectron [@wu2019detectron2]. Specifically, these detectors are pretrained on $\mathcal{D}^{g}$ with the objective of generating accurate bounding boxes $\mathbf{B}^{g}$ and object probabilities $\mathbf{L}^g = \{\mathbf{l}^g_1, \dots, \mathbf{l}^g_n\}$ for an input image $\mathbf{x}^{g}$. Note that this only requires access to the object (node) annotations in $\mathbf{G}^{g}$.
19
+
20
+ The object network $\text{Pr}\left(\mathbf{O}^{g} |\mathbf{B}^{g},\mathbf{x}^{g} \right)$, for each bounding box $\mathbf{b}^{g}_{j} \in \mathbf{B}^{g}$, utilizes feature representation $\mathbf{z}^{g}_{j}$, where $\mathbf{z}^{g}_{j}$ is computed as $\text{\tt RoIAlign}(\mathbf{x}^{g}, \mathbf{b}^{g}_{j})$, which extracts features from the area within the image corresponding to the bounding box $\mathbf{b}^{g}_{j}$. These features, alongside object label probabilities $\mathbf{l}^g_j$, are fed into a context aggregation layer such as Bi-directional LSTM [@zellers2018neural], Tree-LSTM [@tang2019learning], or Graph Attention Network [@yang2018graph], to obtain refined features $\mathbf{z}^{o,g}_j$. These refined features are used to obtain the object labels $\mathbf{O}^g$ for the nodes within the graph $\mathbf{G}^{g}$.
21
+
22
+ Similarly, for the relation network $\text{Pr}\left(\mathbf{R}^{g}|\mathbf{O}^{g},\mathbf{B}^{g},\mathbf{x}^{g} \right)$, features corresponding to union of object bounding boxes are refined using message passing layers and subsequently classified to produce predictions for relations.
23
+
24
+ Existing models ground the objects in the scene graph to rectangular regions in the image. While grounding with bounding boxes provides an approximate estimate of the object locations, the more granular pixel-level grounding achievable through segmentation masks is much more desirable. A major challenge is the lack of segmentation annotations in scene graph datasets like Visual Genome [@krishna2017visual]. Furthermore, manually labelling segmentation masks for such large datasets is both time consuming and expensive. As a solution, we derive segmentation masks via a *zero-shot* transfer mechanism from a segmentation head trained on an external dataset $\mathcal{D}^{m}$ (MS COCO [@lin2014microsoft]). This inferred segmentation mask is then used as additional input to the object and relation networks to generate better scene graphs. Our approach factorizes the distribution over $\mathbf{G}^{g}$ as, $$\begin{align}
25
+ \label{eq:sg-seg-factor}
26
+ \begin{split}
27
+ \text{Pr}\left(\mathbf{G}^{g}| \mathbf{x}^{g} \right) = \text{Pr}\left(\mathbf{B}^{g}|\mathbf{x}^{g} \right)\cdot &\text{Pr}\left(\mathbf{M}^{g}|\mathbf{x}^{g} \right)\cdot
28
+ \text{Pr}\left(\mathbf{O}^{g} |\mathbf{B}^{g},\mathbf{M}^{g},\mathbf{x}^{g} \right)\\
29
+ \cdot&\text{Pr}\left(\mathbf{R}^{g}|\mathbf{O}^{g},\mathbf{B}^{g},\mathbf{M}^{g},\mathbf{x}^{g} \right)
30
+ \end{split}
31
+ \end{align}$$ where $\mathbf{M}^{g} = \{\mathbf{m}^g_{1}, \dots, \mathbf{m}^g_{n}\}$ are the inferred segmentation masks corresponding to the bounding boxes $\mathbf{B}^g$. Such a factorization enables grounding scene graphs to segmentation masks and affords easy integration to existing architectures.
32
+
33
+ []{#sec:sg-transfer label="sec:sg-transfer"}
34
+
35
+ For each image $\mathbf{x}^{g} \in \mathcal{D}^{g}$, we derive segmentation masks $\mathbf{M}^{g}$ using annotations learned over classes in an external dataset $\mathcal{D}^m$. To facilitate this, as described in Section [\[sec:sg-generation\]](#sec:sg-generation){reference-type="ref" reference="sec:sg-generation"}, we pretrain a standard object detector (like Faster R-CNN [@ren2015faster]) on the scene graph dataset $\mathcal{D}^{g}$. However, instead of training the detector just on images in $\mathcal{D}^{g}$, we additionally jointly learn a segmentation head $f_{\mathbf{M}}$ on images in $\mathcal{D}^m$. Note that when training the object detector jointly on images in $\mathcal{D}^{g}$ and $\mathcal{D}^{m}$, the same backbone and proposal generators are used, thus reducing the memory overhead.
36
+
37
+ For an image $\mathbf{x}^{g} \in \mathcal{D}^{g}$, let $\mathbf{z}^{g}_{j}$ be the feature representation for a bounding box $\mathbf{b}^{g}_{j} \in \mathbf{B}^{g}$. Let, $$\begin{align}
38
+ \widetilde{\mathbf{m}}^{g}_{j} = f_{\mathbf{M}}(\mathbf{z}^{g}_{j})
39
+ \end{align}$$ where $\widetilde{\mathbf{m}}^{g}_{j} \in \mathbb{R}^{d_{m} \times m \times m}$, $d_{m}$ represents the number of classes in $\mathcal{D}^{m}$, and $m$ is the spatial resolution of the mask. Per class segmentation masks ${\mathbf{m}}^{g}_{j} \in \mathbb{R}^{d_{g} \times m \times m}$ are then derived from $\widetilde{\mathbf{m}}^{g}_{j}$ using a *zero-shot* transfer mechanism[^4]. Let $\mathbf{S} \in \mathbb{R}^{d_g \times d_m}$ be a matrix that captures linguistic similarities between classes in $\mathcal{D}^{g}$ and $\mathcal{D}^{m}$. For a pair of classes $c_g \in [1, d_g], c_m \in [1, d_m]$, the element $\mathbf{S}_{c_g,c_m}$ is defined as, $$\begin{align}
40
+ \mathbf{S}_{c_g,c_m} = \mathbf{g}_{c_g}^\top\mathbf{g}_{c_m}
41
+ \end{align}$$ where $\mathbf{g}_{c_g}$ and $\mathbf{g}_{c_m}$ are $300$-dimensional GloVe [@pennington2014glove] vector embeddings for classes $c_g$ and $c_m$ respectively[^5]. ${\mathbf{m}}^{g}_{j}$ is then obtained as a linear combination over $\widetilde{\mathbf{m}}^{g}_{j}$ as follows, $$\begin{align}
42
+ {\mathbf{m}}^{g}_{j} = \mathbf{S}^\top \widetilde{\mathbf{m}}^{g}_{j}
43
+ \end{align}$$ Note that such a transfer doesn't require *any* additional labelling cost as we rely on a publicly available dataset $\mathcal{D}^m$.
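+
+ A small sketch of this class-similarity transfer follows (our illustration; tensor shapes simply follow the notation above, and the einsum contraction is an assumption about the implementation).
+
+ ```python
+ import torch
+
+ def transfer_masks(mask_logits_m, glove_g, glove_m):
+     """mask_logits_m: (d_m, m, m) per-class masks for the external dataset's classes.
+     glove_g: (d_g, 300), glove_m: (d_m, 300) class word embeddings."""
+     S = glove_g @ glove_m.t()                     # (d_g, d_m) similarity matrix
+     # linear combination of the d_m masks, weighted by class similarity
+     return torch.einsum('gm,mhw->ghw', S, mask_logits_m)   # (d_g, m, m)
+ ```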
44
+
45
+ []{#sec:sg-object label="sec:sg-object"} As mentioned in Equation [\[eq:sg-seg-factor\]](#eq:sg-seg-factor){reference-type="ref" reference="eq:sg-seg-factor"}, we incorporate the inferred segmentation masks in the object network $\text{Pr}\left(\mathbf{O}^{g} |\mathbf{B}^{g},\mathbf{M}^{g},\mathbf{x}^{g}\right)$ to ground objects in $\mathcal{D}^g$ to pixel-level regions within the image.
46
+
47
+ Specifically, for a particular image $\mathbf{x}^g$, the model $\text{Pr}\left(\mathbf{B}^g|\mathbf{x}^g \right)$ outputs a set of bounding boxes $\mathbf{B}^g$. For each bounding box $\mathbf{b}^g_{j} \in \mathbf{B}^g$, it additionally also computes a feature representation $\mathbf{z}^g_{j}$ and object label probabilities $\mathbf{l}^g_{j} \in \mathbb{R}^{d_g + 1}$ (includes background as a possible label). Following the procedure described in Section [\[sec:sg-transfer\]](#sec:sg-transfer){reference-type="ref" reference="sec:sg-transfer"}, per-class segmentation masks $\mathbf{m}^g_{j}$ are inferred for each bounding box $\mathbf{b}^g_{j}$. We define a segmentation aware representation $\hat{\mathbf{z}}^g_{j}$ as, $$\begin{align}
48
+ \hat{\mathbf{z}}^g_{j} = f_{\mathbf{N}}\left(~[{\mathbf{z}}^g_{j}, \mathbf{m}^g_{j}]~\right)
49
+ \end{align}$$ where $f_{\mathbf{N}}$ is a learned network and $[.,.]$ represents concatenation. Contrary to existing methods like [@tang2019learning; @zellers2018neural] that use the segmentation agnostic representation $\mathbf{z}^g_{j}$, we feed $\hat{\mathbf{z}}^g_{j}$ and $\mathbf{l}^g_{j}$ as inputs to the object network ${\text{Pr}\left(\mathbf{O}^{g} |\mathbf{B}^{g},\mathbf{M}^{g},\mathbf{x}^{g}\right)}$.
50
+
51
+ []{#sec:sg-relation label="sec:sg-relation"} To facilitate better relation prediction, we leverage the inferred segmentation masks in the relation network $\text{Pr}\left(\mathbf{R}^{g}|\mathbf{O}^{g},\mathbf{B}^{g},\mathbf{M}^{g},\mathbf{x}^{g}\right)$. Specifically, for a pair of objects, we utilize a novel Gaussian attention mechanism to identify relation identifying pixel-level regions within an image.
52
+
53
+ Given a pair of bounding boxes $(\mathbf{b}^g_{j}, \mathbf{b}^g_{j'}) \in \mathbf{B}^g$ that contain a possible edge and their corresponding object label probabilities $(\mathbf{l}^g_j, \mathbf{l}^g_{j'})$, their respective segmentation masks $(\mathbf{m}^g_{j}, \mathbf{m}^g_{j'})$ are computed via the procedure described in Section [\[sec:sg-transfer\]](#sec:sg-transfer){reference-type="ref" reference="sec:sg-transfer"}. We define $\mathbf{z}^g_{j,j'}$ as the segmentation agnostic feature representation representing the union of boxes $(\mathbf{b}^g_{j}, \mathbf{b}^g_{j'})$, which is computed as $\texttt{RoIAlign}(\mathbf{x}^g, \mathbf{b}^g_{j} \cup \mathbf{b}^g_{j'} )$[^6].
54
+
55
+ Contrary to existing works that rely on this coarse rectangular union box, our approach additionally incorporates a union of the segmentation masks $(\mathbf{m}^g_{j}, \mathbf{m}^g_{j'})$ to provide more granular information. To this end, we define the attended union segmentation mask $\mathbf{m}^g_{j,j'}$ as, $$\begin{align}
56
+ \mathbf{m}^g_{j,j'} = (\mathbf{K}_j \circledast \mathbf{m}^g_{j}) \odot (\mathbf{K}_{j'} \circledast \mathbf{m}^g_{j'})
57
+ \end{align}$$ where $\circledast$ is the convolution operation, and $\odot$ computes an element-wise product. $\mathbf{K}_j, \mathbf{K}_{j'}$ are $\delta\times\delta$ sized Gaussian smoothing spatial convolutional filters parameterized by variances $\sigma_x^2$, $\sigma_y^2$ and correlation $\rho_{x,y}$. These parameters are obtained by learning a transformation over the object label probabilities $\mathbf{l}^g_j$. Specifically, $$\begin{align}
58
+ \sigma_x^2, \sigma_y^2, \rho_{x,y} = f_{\mathcal{N}}\left(\mathbf{l}^g_j\right)
59
+ \end{align}$$ where $f_{\mathcal{N}}$ is a learned network. $\mathbf{K}_{j'}$ is computed analogously using $\mathbf{l}^g_{j'}$. The attended union segmentation mask $\mathbf{m}^g_{j,j'}$ affords the computation of a segmentation aware representation $\hat{\mathbf{z}}^g_{j,j'}$ as follows, $$\begin{align}
60
+ \hat{\mathbf{z}}^g_{j,j'} = f_{\mathbf{E}}\left([\mathbf{z}^g_{j,j'}, \mathbf{m}^g_{j,j'}]\right)
61
+ \end{align}$$ where $f_{\mathbf{E}}$ is a learned network. $\hat{\mathbf{z}}^g_{j,j'}$ is then used as an input to the relation network ${\text{Pr}\left(\mathbf{R}^{g}|\mathbf{O}^{g},\mathbf{B}^{g},\mathbf{M}^{g},\mathbf{x}^{g}\right)}$.
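+
+ The sketch below illustrates the Gaussian-attended mask union (the kernel size $\delta$ and the discretisation of the 2D Gaussian are assumptions on our part).
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def gaussian_kernel(sigma_x, sigma_y, rho, delta=7):
+     """Discretised 2D Gaussian built from the predicted variances and correlation."""
+     ax = torch.arange(delta, dtype=torch.float32) - (delta - 1) / 2.0
+     x, y = torch.meshgrid(ax, ax, indexing='ij')
+     z = (x / sigma_x) ** 2 - 2 * rho * (x / sigma_x) * (y / sigma_y) + (y / sigma_y) ** 2
+     k = torch.exp(-z / (2 * (1 - rho ** 2) + 1e-6))   # small epsilon for stability
+     return (k / k.sum()).view(1, 1, delta, delta)
+
+ def attended_union(mask_j, mask_jp, params_j, params_jp, delta=7):
+     """mask_*: (1, 1, m, m) object masks; params_*: (sigma_x, sigma_y, rho)."""
+     smooth_j = F.conv2d(mask_j, gaussian_kernel(*params_j, delta), padding=delta // 2)
+     smooth_jp = F.conv2d(mask_jp, gaussian_kernel(*params_jp, delta), padding=delta // 2)
+     return smooth_j * smooth_jp                   # element-wise product
+ ```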
62
+
63
+ []{#sec:sg-seg-refine label="sec:sg-seg-refine"} As described previously, our proposed approach incorporates segmentation masks to improve relation prediction. However, we posit that the tasks of segmentation and relation prediction are inextricably connected, wherein an improvement in one leads to an improvement in the other.
64
+
65
+ To this end, for each object $\mathbf{b}^g_j \in \mathbf{B}^g$, in addition to predicting the object labels $\mathbf{O}^g$, we learn a segmentation *refinement* head $f_{\mathbf{M}'}$ to refine the inferred segmentation masks $\mathbf{m}^g_j$. However, as the scene graph dataset $\mathcal{D}^g$ does not contain any instance-level segmentation annotations, training $f_{\mathbf{M}'}$ in a traditionally supervised manner is challenging.
66
+
67
+ To alleviate this issue, we again leverage the auxiliary dataset $\mathcal{D}^m$, which contains instance-level segmentation annotations. For an image $\mathbf{x}^m \in \mathcal{D}^m$, bounding boxes $\mathbf{B}^m$ are computed using the object detector. Note that this does not require any additional training as the object detector is jointly trained using both $\mathcal{D}^g$ and $\mathcal{D}^m$ as described in Section [\[sec:sg-transfer\]](#sec:sg-transfer){reference-type="ref" reference="sec:sg-transfer"}. For a bounding box $\mathbf{b}^m_j \in \mathbf{B}^m$, the corresponding per class masks are computed as, $$\begin{align}
68
+ \mathbf{m}^m_j = f_{\mathbf{M}}\left(\mathbf{z}^m_j \right)
69
+ \end{align}$$ where $\mathbf{z}^m_j$ is the feature representation for $\mathbf{b}^m_j$, and $f_{\mathbf{M}}$ is the segmentation head defined in Section [\[sec:sg-transfer\]](#sec:sg-transfer){reference-type="ref" reference="sec:sg-transfer"}. The refined mask $\hat{\mathbf{m}}^m_j$ is then computed as, $$\begin{align}
70
+ \hat{\mathbf{m}}^m_j = \mathbf{m}^m_j + f_{\mathbf{M}'}\left(\mathbf{z}^{o,m}_j \right)
71
+ \end{align}$$ where $\mathbf{z}^{o,m}_j$ is the representation computed by the context aggregation layer within the object network $\text{Pr}\left(\mathbf{O}^{m} |\mathbf{B}^{m},\mathbf{M}^{m},\mathbf{x}^{m} \right)$. Note that this network is identical to the one defined in Equation [\[eq:sg-seg-factor\]](#eq:sg-seg-factor){reference-type="ref" reference="eq:sg-seg-factor"}. The segmentation *refinement* head $f_{\mathbf{M}'}$ is a zero-initialized network that learns a residual update over the mask $\mathbf{m}^m_j$. As ground-truth segmentation annotations are available for all objects $\mathbf{B}^m$, $f_{\mathbf{M}'}$ is trained using a pixel-level cross entropy loss.
72
+
73
+ $f_{\mathbf{M}'}$ is trained alongside the scene graph generation model, and the refined masks are used during inference to improve relation prediction performance. Specifically, for a particular image $\mathbf{x}^g \in \mathcal{D}^g$, we follow the model described in Equation [\[eq:sg-seg-factor\]](#eq:sg-seg-factor){reference-type="ref" reference="eq:sg-seg-factor"} to generate predictions. However, instead of directly using the inferred masks obtained using the zero-shot formulation in Section [\[sec:sg-transfer\]](#sec:sg-transfer){reference-type="ref" reference="sec:sg-transfer"}, we additionally refine it using $f_{\mathbf{M}'}$. For a particular mask $\mathbf{m}^g_j$ corresponding to a bounding box $\mathbf{b}^g_j$, we compute $\hat{\mathbf{m}}^g_j$ as, $$\begin{align}
74
+ \hat{\mathbf{m}}^g_j = \mathbf{m}^g_j + f_{\mathbf{M}'}\left(\mathbf{z}^{o,g}_j \right)
75
+ \end{align}$$ where $\mathbf{z}^{o,g}_j$ is the representation computed by the context aggregation layer. The refined mask is used in the object and relation networks as described in Sections [\[sec:sg-object\]](#sec:sg-object){reference-type="ref" reference="sec:sg-object"} and [\[sec:sg-relation\]](#sec:sg-relation){reference-type="ref" reference="sec:sg-relation"}.
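
As an illustration only, a minimal sketch of the residual refinement head is given below; the MLP shape and the 14x14 mask resolution are assumptions, while the zero initialization of the final layer follows the description above.

```python
import torch
import torch.nn as nn

class MaskRefinementHead(nn.Module):
    """f_M': predicts a residual update over per-class mask logits from the context feature z^o."""

    def __init__(self, feat_dim, num_classes, mask_size=14):
        super().__init__()
        self.num_classes, self.mask_size = num_classes, mask_size
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, num_classes * mask_size * mask_size),
        )
        nn.init.zeros_(self.net[-1].weight)   # zero-initialized: refinement starts as the identity
        nn.init.zeros_(self.net[-1].bias)

    def forward(self, mask_logits, z_o):
        # mask_logits: (num_classes, S, S) initial masks; z_o: (feat_dim,) context feature
        delta = self.net(z_o).view(self.num_classes, self.mask_size, self.mask_size)
        return mask_logits + delta            # refined mask = m + f_M'(z^o)
```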
76
+
77
+ Our proposed approach is trained in two stages. The first stage involves pre-training the object detector to enable bounding box proposal generation for a given image. Given datasets $\mathcal{D}^g$ and $\mathcal{D}^m$, the object detector is jointly trained to minimize the following objective, $$\begin{align}
78
+ \label{eq:pretrain}
79
+ \mathcal{L}^{obj} = \mathcal{L}^{rcnn} + \mathcal{L}^{seg}
80
+ \end{align}$$ where $\mathcal{L}^{rcnn}$ is the Faster R-CNN [@ren2015faster] objective, and $\mathcal{L}^{seg}$ is the pixel-level binary cross entropy loss [@he2017iccv] applied over segmentation masks. Note that images in $\mathcal{D}^g$ do not contribute to $\mathcal{L}^{seg}$ due to the lack of segmentation annotations.
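
A small sketch of how this joint objective could be assembled for a mixed batch is shown below (illustrative only); the per-image reduction of the segmentation loss and the tensor shapes are assumptions.

```python
import torch

def joint_pretrain_loss(loss_rcnn, per_image_seg_loss, has_masks):
    """L_obj = L_rcnn + L_seg, where images without mask annotations (those from D^g) contribute 0 to L_seg."""
    # per_image_seg_loss: (B,) pixel-level BCE already averaged per image
    # has_masks: (B,) boolean flags, True only for images drawn from D^m
    weights = has_masks.float()
    loss_seg = (per_image_seg_loss * weights).sum() / weights.sum().clamp(min=1.0)
    return loss_rcnn + loss_seg

# e.g. joint_pretrain_loss(torch.tensor(1.2),
#                          torch.tensor([0.8, 0.5, 0.3]),
#                          torch.tensor([True, False, True]))
```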
81
+
82
+ The second stage of training involves training the scene graph generation network to accurately identify relations between pairs of objects. Given datasets $\mathcal{D}^g$ and $\mathcal{D}^m$, the scene graph generation network is jointly trained to minimize the following objective, $$\begin{align}
83
+ \mathcal{L} = \mathcal{L}^{sg} + \mathcal{L}^{seg}
84
+ \end{align}$$ where $\mathcal{L}^{sg}$ depends on the architecture of the underlying scene graph method that our approach augments. For example, in the case of MOTIF [@zellers2018neural], $\mathcal{L}^{sg}$ consists of two cross-entropy losses, one to refine the object categorization obtained from the pretrained detector, and the other to aid accurate relation prediction. $\mathcal{L}^{seg}$ is identical to the segmentation loss described in Equation [\[eq:pretrain\]](#eq:pretrain){reference-type="ref" reference="eq:pretrain"}, and is used to learn the refinement network $f_{\mathbf{M}'}$ (Section [\[sec:sg-seg-refine\]](#sec:sg-seg-refine){reference-type="ref" reference="sec:sg-seg-refine"}). As images in $\mathcal{D}^m$ do not contain scene graph annotations, they only contribute to $\mathcal{L}^{seg}$. Similarly, images in $\mathcal{D}^g$ only contribute to $\mathcal{L}^{sg}$.
85
+
86
+ ::: table*
87
+ :::
88
+
89
+ ::: table*
90
+ :::
2105.05912/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2105.05912/paper_text/intro_method.md ADDED
@@ -0,0 +1,97 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Transformers [@vaswani2017attention] and transformer-based Pre-trained Language Models (PLMs) [@devlin-etal-2019-bert] are ubiquitous in applications of NLP. Since transformers are not recurrent, they are highly parallelizable and their performance scales well with an increase in model parameters and data. Increasing model parameters depends on the availability of computational resources, and PLMs are typically trained on unlabeled data, which is cheaper to obtain.
4
+
5
+ Recently, the trillion parameter mark has been breached for PLMs [@fedus2021switch] amid serious environmental concerns [@strubell2019energy]. However, without a change in our current training paradigm, training larger models may be unavoidable [@li2020train]. In order to deploy these models in practical applications such as virtual personal assistants, recommendation systems, and e-commerce platforms, model compression is necessary.
6
+
7
+ Knowledge Distillation (KD) [@bucilua2006model; @hinton2015distilling] is a simple, yet powerful knowledge transfer algorithm which is used for compression [@jiao2019tinybert; @sanh2019distilbert], ensembling [@hinton2015distilling] and multi-task learning [@clark2019bam]. In NLP, KD for compression has received renewed interest in the last few years. It is one of the most widely researched algorithms for the compression of transformer-based PLMs [@rogers2020primer].
8
+
9
+ One key feature which makes KD attractive is that it only requires access to the teacher's output or logits and not the weights themselves. Therefore, if a trillion-parameter model resides on the cloud, API-level access to the teacher's output is sufficient for KD. Consequently, the algorithm is architecture-agnostic, i.e., it can work for any deep learning model and the student can be a different model from the teacher.
10
+
11
+ Recent works on KD for transfer learning with PLMs extend the algorithm in two main directions. The first is towards "model" distillation [@sun2019patient; @wang2020minilm; @jiao2019tinybert], i.e. distilling the intermediate weights such as the attention weights or the intermediate layer output of transformers. The second direction is towards curriculum-based or progressive KD [@sun2020mobilebert; @mirzadeh2019improved] where the student learns one layer at a time or from an intermediary teacher, known as a teacher assistant. While these works have shown accuracy gains over standard KD, they have come at the cost of architectural assumptions, not least of which is a common architecture between student and teacher, and greater access to teacher parameters and intermediate outputs. Another issue is that the decision to distill one teacher layer and to skip another is arbitrary. Still, the teacher typically demonstrates better generalization.
12
+
13
+ We are interested in KD for model compression and study the use of adversarial training [@goodfellow2014explaining] to improve student accuracy using just the logits of the teacher as in standard KD. Specifically, our work makes the following contributions:
14
+
15
+ - We present a text-based adversarial algorithm, MATE-KD, which increases the accuracy of the student model using KD.
16
+
17
+ - Our algorithm only requires access to the teacher's logits and thus keeps the teacher and student architecture independent.
18
+
19
+ - We evaluate our algorithm on the GLUE [@wang-etal-2018-glue] benchmark and demonstrate improvement over competitive baselines.
20
+
21
+ - On the GLUE test set we achieve a score of 80.9, which is higher than $\text{BERT}_{\text{LARGE}}$.
22
+
23
+ - We also demonstrate improvement on out-of-domain (OOD) evaluation.
24
+
25
+ # Method
26
+
27
+ We propose a co-training algorithm that deploys an adversarial text generator while training a student network using KD. Figure [1](#fig:mate-kd){reference-type="ref" reference="fig:mate-kd"} gives an illustration of our architecture.
28
+
29
+ <figure id="fig:mate-kd" data-latex-placement="h">
30
+ <img src="figure.png" />
31
+ <figcaption>Illustration of the maximization and minimization steps of MATE-KD </figcaption>
32
+ </figure>
33
+
34
+ The text generator is simply a pre-trained masked language model which is trained to perturb training samples adversarially. We can frame our technique in a *minimax* regime such that in the maximization step of each iteration, we feed the generator a training sample with a few of its tokens replaced by masks. We fix the rest of the sentence and replace the masked tokens with the generator output to construct a pseudo training sample $X'$. This pseudo sample is fed to both the teacher and the student models and the generator is trained to maximize the divergence between the teacher and the student. We present an example of the masked generation process in Figure [2](#fig:mask){reference-type="ref" reference="fig:mask"}. The student is trained during the minimization step.
35
+
36
+ The generator is trained to generate pseudo samples by maximizing the following loss function: $$\begin{equation}
37
+ \begin{split}
38
+ & \max_\phi \mathcal{L}_G (\phi) = \\
39
+ & D_{KL}\Big( T\big( G_\phi(X^m)\big), S_{\theta}\big(G_\phi(X^m)\big) \Big), \\
40
+ \end{split}
41
+ \end{equation}$$ where $D_{KL}$ is the KL divergence, $G_\phi$(.) is the text generator network with parameters $\phi$, $T(\cdot)$ and $S_{\theta}(\cdot)$ are the teacher and student networks respectively, and $X^m$ is a randomly masked version of the input $X = [x_1, x_2, ..., x_n]$ with $n$ tokens.
42
+
43
+ $$\begin{align}
44
+ \begin{split}
45
+ & \forall x_i \in X = [x_1,..., x_i, ..., x_n]\sim \mathcal{D},\\
46
+ & x_i^m = \underset{p\sim \text{unif}(0,1)}{\text{Mask}(x_i \in X, p_i)} \\
47
+ & =
48
+ \begin{cases}
49
+ x_i,&p_i \ge \rho\\
50
+ <\text{mask}>, &\text{o.w.}
51
+ \end{cases}
52
+ \end{split}
53
+ \end{align}$$ where $\text{unif}(0,1)$ represents the uniform distribution, and the $\text{Mask(}\cdot\text{)}$ function masks the tokens of inputs sampled from the data distribution $\mathcal{D}$ with the probability of $\rho$. The term $\rho$ can be treated as a hyper-parameter in our technique. In summary, for each training sample, we randomly mask some tokens according to the samples derived from the uniform distribution and the threshold value of $\rho$.
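
For concreteness, a minimal sketch of this masking step is shown below (not taken from the paper's code); the token-level granularity and the literal `<mask>` string are assumptions.

```python
import random

MASK_TOKEN = "<mask>"

def random_mask(tokens, rho, rng=random):
    """Independently replace each token with <mask> with probability rho (keep it when p_i >= rho)."""
    return [MASK_TOKEN if rng.random() < rho else t for t in tokens]

# random_mask("the movie was surprisingly good".split(), rho=0.3)
# -> ['the', '<mask>', 'was', 'surprisingly', 'good']   (one possible outcome)
```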
54
+
55
+ Then in the forward pass, the masked sample, $X^m$, is fed to the generator to obtain the output pseudo text based on the generator predictions for the masked tokens. The generator needs to output a one-hot representation, but using an *argmax* inside the generator would lead to non-differentiability. Instead we apply the Gumbel-Softmax [@jang2016categorical], which is an approximation to sampling from the *argmax*. Using the straight-through estimator [@bengio2013estimating] we can still apply *argmax* in the forward pass and obtain text $X'$ from the network outputs:
56
+
57
+ <figure id="fig:mask" data-latex-placement="t">
58
+ <img src="fig2.png" />
59
+ <figcaption>This figure illustrates how a training sample will be randomly masked and then fed to the text generator <span class="math inline"><em>G</em><sub><em>ϕ</em></sub></span> to get the pseudo training sample.</figcaption>
60
+ </figure>
61
+
62
+ $$\begin{equation}
63
+ X' = \underset{\text{FORWARD}}{G_\phi(X^m)} = \text{argmax}\big(\sigma_{\text{Gumbel}}(z_\phi(X^m)\big)
64
+ \end{equation}$$ where $$\begin{align}
65
+ & \sigma_{\text{Gumbel}}(z_i) =
66
+ & \frac{\exp{\Big(\big(\log (z_i) +g_i \big)/\tau\Big) }}{\Sigma_{j=1}^K \exp{\Big(\big(\log (z_j) +g_j \big)/\tau \Big) }}
67
+ \end{align}$$ $g_i\sim \text{Gumbel}(0,1)$ and $z_\phi(.)$ returns the logits produced by the generator for a given input.
68
+
69
+ In the backward pass the generator simply applies the gradients from the Gumbel-Softmax without the *argmax* :
70
+
71
+ $$\begin{equation}
72
+ \underset{\text{BACKWARD}}{G_\phi(X^m)} = \sigma_{\text{Gumbel}}(z_\phi(X^m) )
73
+ \end{equation}$$
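
A self-contained sketch of the straight-through Gumbel-Softmax is given below; it is written in the standard logits form (PyTorch also provides an equivalent `torch.nn.functional.gumbel_softmax(logits, tau, hard=True)`), so the exact placement of the temperature and the noise floor should be read as implementation assumptions.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_st(logits, tau=1.0):
    """Forward pass: one-hot argmax sample; backward pass: gradients of the soft Gumbel-Softmax."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)  # Gumbel(0, 1) noise
    y_soft = F.softmax((logits + gumbel) / tau, dim=-1)
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    # straight-through estimator: value of y_hard, gradient of y_soft
    return y_hard + y_soft - y_soft.detach()
```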
74
+
75
+ In the minimization step, the student network is trained to close the gap between the teacher and student predictions and to match the hard labels from the training data by minimizing the following loss: $$\begin{equation}
76
+ \begin{split}
77
+ & \min_\theta \mathcal{L}_\text{MATE-KD}(\theta) = \\
78
+ & \frac{1}{3} \mathcal{L}_{CE}(\theta) + \frac{1}{3} \mathcal{L}_{KD}(\theta) + \frac{1}{3} \mathcal{L}_{ADV}(\theta)
79
+ \end{split}
80
+ \label{eq:loss}
81
+ \end{equation}$$ where $$\begin{equation}
82
+ \mathcal{L}_{ADV}(\theta) = D_{KL}\Big( T(X') , S_{\theta}(X') \Big)
83
+ \end{equation}$$
84
+
85
+ In Equation [\[eq:loss\]](#eq:loss){reference-type="ref" reference="eq:loss"}, the terms $\mathcal{L}_{KD}$ and $\mathcal{L}_{CE}$ are the same as in Equation [\[eq:KD\]](#eq:KD){reference-type="ref" reference="eq:KD"}; $\mathcal{L}_{KD}(\theta)$ and $\mathcal{L}_{ADV}(\theta)$ are used to match the student with the teacher, and $\mathcal{L}_{CE}(\theta)$ is used for the student to follow the ground-truth labels $y$.
86
+
87
+ Bear in mind that our $\mathcal{L}_\text{MATE-KD}(\theta)$ loss is different from the regular KD loss in two aspects: first, it has the additional adversarial loss $\mathcal{L}_{ADV}$, which minimizes the gap between the predictions of the student and the teacher with respect to the masked adversarial text samples, $X'$, generated in the maximization step; second, we do not have the weight term $\lambda$ from KD in our technique any more (i.e. we consider equal weights for the three loss terms in $\mathcal{L}_\text{MATE-KD}$).
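
The minimization step can be summarized by the short sketch below (illustrative; the temperature handling and the exact form of $\mathcal{L}_{KD}$ follow Equation [\[eq:KD\]] in the full paper and are assumed here).

```python
import torch.nn.functional as F

def mate_kd_loss(student_logits, teacher_logits,
                 student_logits_adv, teacher_logits_adv, labels, tau=1.0):
    """L = (L_CE + L_KD + L_ADV) / 3, with equal weights and no lambda."""
    def kl(teacher, student):
        # D_KL(teacher || student), matching D_KL(T(.), S(.)) above
        return F.kl_div(F.log_softmax(student / tau, dim=-1),
                        F.softmax(teacher / tau, dim=-1),
                        reduction="batchmean")

    loss_ce = F.cross_entropy(student_logits, labels)
    loss_kd = kl(teacher_logits, student_logits)
    loss_adv = kl(teacher_logits_adv, student_logits_adv)   # computed on the pseudo samples X'
    return (loss_ce + loss_kd + loss_adv) / 3.0
```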
88
+
89
+ The rationale behind generating partially masked adversarial texts instead of generating adversarial texts from scratch (that is equivalent to masking the input of the text generator entirely) is three-fold:
90
+
91
+ 1. Partial masking is able to generate more realistic sentences compared to generating them from scratch when trained only to increase teacher and student divergence. We present a few generated sentences in section [4.6](#sec:gen_sentences){reference-type="ref" reference="sec:gen_sentences"}.
92
+
93
+ 2. Generating text from scratch increases the chance of generating OOD data. Feeding OOD data to the KD algorithm leads to matching the teacher and student functions across input domains that the teacher is not trained on.
94
+
95
+ 3. By masking and changing only a few tokens of the original text, we constrain the amount of perturbation as is required for adversarial training.
96
+
97
+ In our MATE-KD technique, we can tweak the $\rho$ to control our divergence from the data distribution and find the sweet spot which gives rise to maximum improvement for KD. We also present an ablation on the effect of this parameter on downstream performance in section [4.5](#sec:sensitivity){reference-type="ref" reference="sec:sensitivity"}.
2106.02960/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2106.02960/main_diagram/main_diagram.pdf ADDED
Binary file (92 kB). View file
 
2106.02960/paper_text/intro_method.md ADDED
@@ -0,0 +1,82 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Disambiguating word meaning in context is at the heart of any natural language understanding task or application, whether it is performed explicitly or implicitly. Traditionally, word sense disambiguation (WSD) has been defined as the task of explicitly labeling word usages in context with sense labels from a pre-defined sense inventory. The majority of approaches to WSD rely on (semi-)supervised learning [@yuan-semi_wsd; @raganato-framework; @raganato-wsd; @hadiwinoto-improved_wsd; @huang-glossbert; @scarlini-wsd; @bevilacqua-breaking] and make use of training corpora manually annotated for word senses. Typically, these methods require a fairly large number of annotated training examples per word. This problem is exacerbated by the dramatic imbalances in sense frequencies, which further increase the need for annotation to capture a diversity of senses and to obtain sufficient training data for rare senses.
4
+
5
+ This motivated recent research on few-shot WSD, where the objective of the model is to learn new, previously unseen word senses from only a small number of examples. @holla-wsd presented a meta-learning approach to few-shot WSD, as well as a benchmark for this task. Meta-learning makes use of an episodic training regime, where a model is trained on a collection of diverse few-shot tasks and is explicitly optimized to perform well when learning from a small number of examples per task [@snell-protonet; @finn-maml; @triantafillou_metadataset]. @holla-wsd have shown that meta-learning can be successfully applied to learn new word senses from as little as one example per sense. Yet, the overall model performance in settings where data is highly limited (e.g. one- or two-shot learning) still lags behind that of fully supervised models.
6
+
7
+ In the meantime, machine learning research demonstrated the advantages of a memory component for meta-learning in limited data settings [@santoro2016meta; @munkhdalai_metanet; @munkhdalai2018rapid; @VarSemMemory]. The memory stores general knowledge acquired in learning related tasks, which facilitates the acquisition of new concepts and recognition of previously unseen classes with limited labeled data [@VarSemMemory]. Inspired by these advances, we introduce the first model of semantic memory for WSD in a meta-learning setting. In meta-learning, prototypes are embeddings around which other data points of the same class are clustered [@snell-protonet]. Our semantic memory stores prototypical representations of word senses seen during training, generalizing over the contexts in which they are used. This rich contextual information aids in learning new senses of previously unseen words that appear in similar contexts, from very few examples.
8
+
9
+ The design of our prototypical representation of word sense takes inspiration from prototype theory [@rosch:prototype], an established account of category representation in psychology. It stipulates that semantic categories are formed around prototypical members, new members are added based on resemblance to the prototypes and category membership is a matter of degree. In line with this account, our models learn prototypical representations of word senses from their linguistic context. To do this, we employ a neural architecture for learning probabilistic class prototypes: variational prototype networks, augmented with a variational semantic memory (VSM) component [@VarSemMemory].
10
+
11
+ Unlike deterministic prototypes in prototypical networks [@snell-protonet], we model class prototypes as distributions and perform variational inference of these prototypes in a hierarchical Bayesian framework. Unlike deterministic memory access in memory-based meta-learning [@santoro_mann; @munkhdalai_metanet], we access memory by Monte Carlo sampling from a variational distribution. Specifically, we first perform variational inference to obtain a latent memory variable and then perform another step of variational inference to obtain the prototype distribution. Furthermore, we enhance the memory update of vanilla VSM with a novel adaptive update rule involving a hypernetwork [@ha2016hypernetworks] that controls the weight of the updates. We call our approach $\beta$-VSM to denote the adaptive weight $\beta$ for memory updates.
12
+
13
+ We experimentally demonstrate the effectiveness of this approach for few-shot WSD, advancing the state of the art in this task. Furthermore, we observe the highest performance gains on word senses with the least training examples, emphasizing the benefits of semantic memory for truly few-shot learning scenarios. Our analysis of the meaning prototypes acquired in the memory suggests that they are able to capture related senses of distinct words, demonstrating the generalization capabilities of our memory component. We make our code publicly available to facilitate further research.[^1]
14
+
15
+ # Method
16
+
17
+ We experiment with the same model architectures as @holla-wsd. The model $f_{\theta}$, with parameters $\theta$, takes words $\mathbf{x}_i$ as input and produces a per-word representation vector $f_{\theta}(\mathbf{x}_i)$ for $i = 1, ..., L$ where $L$ is the length of the sentence. Sense predictions are only made for ambiguous words using the corresponding word representation.
18
+
19
+ A single-layer bi-directional GRU [@cho-gru] network that takes GloVe embeddings [@pennington-glove] as input, followed by a single linear layer. GloVe embeddings capture all senses of a word. We thus evaluate a model's ability to disambiguate from sense-agnostic input.
20
+
21
+ A multi-layer perceptron (MLP) network that receives contextualized ELMo embeddings [@peters-elmo] as input. Their contextualized nature makes ELMo embeddings better suited to capture meaning variation than the static ones. Since ELMo is not fine-tuned, this model has the lowest number of learnable parameters.
22
+
23
+ A pretrained BERT~BASE~ [@devlin-bert] model followed by a linear layer, fully fine-tuned on the task. BERT underlies state-of-the-art approaches to WSD.
24
+
25
+ Our few-shot learning approach builds upon prototypical networks [@snell-protonet], which is widely used for few-shot image classification and has been shown to be successful in WSD [@holla-wsd]. It computes a prototype $\mathbf{z}_k = \frac{1}{K}\sum_{i=1}^{K} f_{\theta}(\mathbf{x}_i)$ of each word sense $k$ (where $K$ is the number of support examples of that sense and the $\mathbf{x}_i$ are those examples) through an embedding function $f_{\theta}$, which is realized as one of the aforementioned architectures. It computes a distribution over classes for a query sample $\mathbf{x}$ given a distance function $d(\cdot, \cdot)$ as the softmax over its distances to the prototypes in the embedding space: $$\begin{equation}
26
+ p(\mathbf{y}_{i} = k|\mathbf{x}) = \frac{\exp(-d(f_{\theta}(\mathbf{x}),\mathbf{z}_k))}{\sum_{k'} \exp(-d(f_{\theta}(\mathbf{x}),\mathbf{z}_{k'}))}
27
+ \label{eq:prototype}
28
+ \end{equation}$$
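
As a concrete illustration (not from the paper), the sketch below implements this classifier with the squared Euclidean distance; the paper leaves $d(\cdot,\cdot)$ generic, and the sketch assumes every sense appears at least once in the support set.

```python
import torch

def prototype_log_probs(query_emb, support_emb, support_labels, num_senses):
    """Softmax over negative distances to class prototypes, as in the equation above."""
    prototypes = torch.stack([support_emb[support_labels == k].mean(dim=0)
                              for k in range(num_senses)])      # (num_senses, D)
    dists = torch.cdist(query_emb, prototypes) ** 2              # (num_queries, num_senses)
    return (-dists).log_softmax(dim=-1)                          # log p(y = k | x)
```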
29
+
30
+ However, the resulting prototypes may not be sufficiently representative of word senses as semantic categories when using a single deterministic vector, computed as the average of only a few examples. Such representations lack expressiveness and may not encompass the intra-class variance that is needed to distinguish between different fine-grained word senses. Moreover, large uncertainty arises in the single prototype due to the small number of samples.
31
+
32
+ Variational prototype network [@VarSemMemory] (VPN) is a powerful model for learning latent representations from small amounts of data, where the prototype $\mathbf{z}$ of each class is treated as a distribution. Given a task with a support set $S$ and query set $Q$, the objective of VPN takes the following form: $$\begin{equation}
33
+ \begin{aligned}
34
+ \mathcal{L}_{\mathrm{VPN}} &= \frac{1}{|Q|} \sum^{|Q|}_{i=1} \Big[\frac{1}{L_\mathbf{z}} \sum_{l_\mathbf{z}=1}^{L_\mathbf{z}} -\log p(\mathbf{y}_i|\mathbf{x}_i,\mathbf{z}^{(l_\mathbf{z})})
35
+ \\& + \lambda D_{\mathrm{KL}}[q(\mathbf{z}|S)||p(\mathbf{z}|\mathbf{x}_i)]\Big]
36
+ \label{L_VPN}
37
+ \end{aligned}
38
+ \end{equation}$$ where $q(\mathbf{z}|S)$ is the variational posterior over $\mathbf{z}$, $p(\mathbf{z}|\mathbf{x}_i)$ is the prior, and $L_\mathbf{z}$ is the number of Monte Carlo samples for $\mathbf{z}$. The prior and posterior are assumed to be Gaussian. The re-parameterization trick [@kingma2013auto] is adopted to enable back-propagation with gradient descent, i.e., $\mathbf{z}^{(l_\mathbf{z})} = f(S, \epsilon^{(l_\mathbf{z})})$, $\epsilon^{(l_\mathbf{z})} \sim \mathcal{N} (0, I)$, $f(\cdot, \cdot) = \mu_z + \epsilon^{(l_\mathbf{z})} * \sigma_z$, where the mean $\mu_z$ and diagonal covariance $\sigma_z$ are generated from the posterior inference network with $S$ as input. The amortization technique is employed for the implementation of VPN. The posterior network takes the mean word representations in the support set $S$ as input and returns the parameters of $q(\mathbf{z}|S)$. Similarly, the prior network produces the parameters of $p(\mathbf{z}|\mathbf{x}_i)$ by taking the query word representation $\mathbf{x_i} \in \mathcal{Q}$ as input. The conditional predictive log-likelihood is implemented as a cross-entropy loss.
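
Two small helpers make the probabilistic ingredients of $\mathcal{L}_{\mathrm{VPN}}$ concrete: reparameterized sampling of the prototype and the closed-form KL divergence between diagonal Gaussians. They are an illustrative sketch, not the paper's implementation; the log-variance parameterization is an assumption.

```python
import torch

def reparameterize(mu, log_var, n_samples):
    """z^(l) = mu + sigma * eps with eps ~ N(0, I): differentiable prototype samples."""
    std = (0.5 * log_var).exp()
    eps = torch.randn(n_samples, *mu.shape)
    return mu + std * eps                                   # (n_samples, D)

def diag_gaussian_kl(mu_q, log_var_q, mu_p, log_var_p):
    """KL[q || p] between diagonal Gaussians, as used for the D_KL terms above."""
    var_q, var_p = log_var_q.exp(), log_var_p.exp()
    return 0.5 * (log_var_p - log_var_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0).sum()
```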
39
+
40
+ <figure id="fig:framework" data-latex-placement="t">
41
+ <img src="framework.png" />
42
+ <figcaption>Computational graph of variational semantic memory for few-shot WSD. <span class="math inline"><em>M</em></span> is the semantic memory module, <span class="math inline"><em>S</em></span> the support set, <span class="math inline"><strong>x</strong></span> and <span class="math inline"><strong>y</strong></span> are the query sample and label, and <span class="math inline"><strong>z</strong></span> is the word sense prototype.</figcaption>
43
+ </figure>
44
+
45
+ In order to leverage the shared common knowledge between different tasks to improve disambiguation in future tasks, we incorporate variational semantic memory (VSM) as in @VarSemMemory. It consists of two main processes: *memory recall*, which retrieves relevant information that fits with specific tasks based on the support set of the current task; *memory update*, which effectively collects new information from the task and gradually consolidates the semantic knowledge in the memory. We adopt a similar memory mechanism and introduce an improved update rule for memory consolidation.
46
+
47
+ The memory recall of VSM aims to choose the related content from the memory, and is accomplished by variational inference. It introduces latent memory $\mathbf{m}$ as an intermediate stochastic variable, and infers $\mathbf{m}$ from the addressed memory $M$. The approximate variational posterior $q(\mathbf{m}|M, S)$ over the latent memory $\mathbf{m}$ is obtained empirically by
48
+
49
+ $$\begin{equation}
50
+ q(\mathbf{m}|M,S) = \sum^{|M|}_{a=1}\gamma_a p(\mathbf{m}|M_a),
51
+ \label{qms}
52
+ \end{equation}$$ where $$\begin{equation}
53
+ \gamma_a = \frac{\exp\big(g(M_a,S)\big)}{\sum_i \exp\big(g(M_i,S)\big)}
54
+ \label{lambda}
55
+ \end{equation}$$ $g(\cdot)$ is the dot product, $|M|$ is the number of memory slots, $M_a$ is the memory content at slot $a$ and stores the prototype of samples in each class, and we take the mean representation of samples in $S$.
56
+
57
+ The variational posterior over the prototype then becomes: $$\begin{equation}
58
+ \tilde{q}(\mathbf{z}|M,S) \approx \frac{1}{L_{\mathbf{m}}}\sum^{L_m}_{l_{\mathbf{m}}=1} q(\mathbf{z}|\mathbf{m}^{(l_{\mathbf{m}})},S),
59
+ \label{}
60
+ \end{equation}$$ where $\mathbf{m}^{(l_{\mathbf{m}})}$ is a Monte Carlo sample drawn from the distribution $q(\mathbf{m}|M,S)$, and $l_{\mathbf{m}}$ is the number of samples. By incorporating the latent memory $\mathbf{m}$ from Eq. ([\[qms\]](#qms){reference-type="ref" reference="qms"}), we achieve the objective for variational semantic memory as follows: $$\begin{equation}
61
+ \begin{aligned}
62
+ \mathcal{L}_{\rm{VSM}} &= \sum^{|Q|}_{i=1} \Big[ -\mathbb{E}_{q(\mathbf{z}|S, \mathbf{m})} \big[\log p(\mathbf{y}_i|\mathbf{x}_i,\mathbf{z})\big]
63
+ \\& + \lambda_{\mathbf{z}} D_{\mathrm{KL}}\big[q(\mathbf{z}|S, \mathbf{m})||p(\mathbf{z}|\mathbf{x}_i)\big]
64
+ \\& + \lambda_{\mathbf{m}} D_{\mathrm{KL}}\big[\sum^{|M|}_{i}\gamma_i p(\mathbf{m}|M_i)||p(\mathbf{m}|S)\big]\Big]
65
+ \label{obj}
66
+ \end{aligned}
67
+ \end{equation}$$ where $p(\mathbf{m}|S)$ is the introduced prior over $\mathbf{m}$, $\lambda_{\mathbf{z}}$ and $\lambda_{\mathbf{m}}$ are the hyperparameters. The overall computational graph of VSM is shown in Figure [1](#fig:framework){reference-type="ref" reference="fig:framework"}. Similarly, the posterior and prior over $\mathbf{m}$ are also assumed to be Gaussian and obtained by using amortized inference networks; more details are provided in Appendix [8.1](#sec:implementation){reference-type="ref" reference="sec:implementation"}.
68
+
69
+ The memory update is to be able to effectively absorb new useful information to enrich memory content. VSM employs an update rule as follows: $$\begin{equation}
70
+ M_c \leftarrow \beta M_c + (1-\beta) \bar{M}_c,
71
+ \end{equation}$$ where $M_c$ is the memory content corresponding to class $c$, $\bar{M}_c$ is obtained using graph attention [@velivckovic2017graph], and $\beta \in (0,1)$ is a hyperparameter.
72
+
73
+ Although VSM was shown to be promising for few-shot image classification, it can be seen from the experiments by @VarSemMemory that different values of $\beta$ have considerable influence on the performance. $\beta$ determines the extent to which memory is updated at each iteration. In the original VSM, $\beta$ is treated as a hyperparameter obtained by cross-validation, which is time-consuming and inflexible in dealing with different datasets. To address this problem, we propose an adaptive memory update rule by learning $\beta$ from data using a lightweight hypernetwork [@ha2016hypernetworks]. To be more specific, we obtain $\beta$ by a function $f_{\beta}(\cdot)$ implemented as an MLP with a sigmoid activation function in the output layer. The hypernetwork takes $\bar{M}_c$ as input and returns the value of $\beta$: $$\begin{equation}
74
+ \label{eqn:hyp_net}
75
+ \beta = f_{\beta}(\bar{M}_c)
76
+ \end{equation}$$ Moreover, to prevent unbounded growth of the memory values, we propose to scale down the memory whenever $\left \|M_c \right\|_2 > 1$. This is achieved as follows:
77
+
78
+ $$\begin{equation}
79
+ M_c = \frac{M_c}{\max (1, \left \|M_c \right\|_2 )}
80
+ \end{equation}$$
81
+
82
+ When we update the memory, we feed the newly obtained memory $\bar{M}_c$ into the hypernetwork $f_{\beta}(\cdot)$, which outputs the adaptive $\beta$ for the update. We provide a more detailed implementation of $\beta$-VSM in Appendix [8.1](#sec:implementation){reference-type="ref" reference="sec:implementation"}.
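
A compact sketch of this adaptive update is given below (illustrative only; the hidden width of the hypernetwork is an assumption).

```python
import torch
import torch.nn as nn

class AdaptiveMemoryUpdate(nn.Module):
    """beta-VSM update: beta = f_beta(M_bar_c); M_c <- beta * M_c + (1 - beta) * M_bar_c, then rescale."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.f_beta = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, M_c, M_bar_c):
        beta = self.f_beta(M_bar_c)                        # adaptive weight in (0, 1)
        M_c = beta * M_c + (1.0 - beta) * M_bar_c
        return M_c / torch.clamp(M_c.norm(p=2), min=1.0)   # scale down whenever ||M_c||_2 > 1
```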
2107.04086/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-06-03T20:19:50.577Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36" etag="3s4caff_D74jtkVjthe7" version="14.6.13" type="device"><diagram id="uMuzKYRAcHuI6GL1g5hQ" name="Page-1">7X1rc+NGsuWv8Ucj6v34aLtnfGfnzmzv9cbcmf0ywRbRal6rRQXFtrv967cgsUBUAiJBiKgsMkVHOCSIDUqnDrLyZOXjO/nT568/bxYPn/62XtZ33wm2/PqdfPedEFLJ8P/mwrfnC0KJ5wu3m9Xy+RLfX/hl9Ue9u8h2V7+slvVj8sbten23XT2kF2/W9/f1zTa5tths1r+nb/u4vks/9WFxu/tEtr/wy83iru697b9Xy+2n56tOd979H/Xq9lP8ZM52P/m8iG/eXXj8tFiuf+9ckn/6Tv60Wa+3z199/vpTfddgF3F5/nd/fuGn7S+2qe+3Y/7B53+J7b837+//+o6/2/768OX//PPf//v73er8trj7svuDd7/s9ltEIPzeD82X27C+9R/r5nY/PtSb1ed6W2+619/vL/74+6fVtv7lYXHT/MvfwzvCtU/bz3fhOx6+/Lj6WsfFbr5frjZh8Vbr+/D94/pLg1x4z93dT+u79ebp15Af3U19cxOuP24361/rzk8+OK3CesgfbzeL5Sqg0flZvTAfjAk/66O1A/C3erOtv3Yu7dD7uV6HP2bzLbwl/lTuwNlRmZvd9793iLFb7E8dTsRrix0Vb9s771crfLFbsBMWTw8snrkLn/rjh/DFbfPFz3//e3ttEy/+tL7/bX33pcH7Mf40/ALtP+pRIAC0TRcwXYT79X0NVmx3aXG3um0W9SbA/kSMBu5VeLx+2P3g82q5bD5mkDGb9Zf7Zd1AwM60hBosoe8voRpYQjHXEprjz199v/yhMWQNineLx8fVzYileBGrenlbH0Sqg4QeQCJe29R3i+3qt9RKDsGz+4T361X4TdqFEOBZEkyktwhWYHNT7/5V16rBG9kjN9ouNrf1tnejp9Vq/+zpCygsO76ER6zh4vHheet6MovnYbrwPGW66jOdi4EF5rNxXVh+HKpwn7C1v2QQDoF22nMwHkkpAJK2j6QZANLMh6OYE0ew79aaK9aY34/rZGf9+PSaE3elAO62b6v7sKvZUB/hK10B6lKnqEfvBQt1RQJ1A22Mx0V9yLW8PtQ1RN3hoj7GG7x81CHXHTLqlgTqFqDukVF3JFAXDnAd2a57EqhLgLrH9WHcCIl4BagryHWNi/qsarMY1LUsi+s0tKmEuymuNnU0tKlKURcMGXUa2hSopCCTcFGnoU2NBVzH9RwdDW0qIOrIPgwNbSoB6hwZdRraVAHUBTLqNLSphlznqKh7GtpUp2dJ7SkxFuo0tKkBqHNk1GloU53GYYRARp2GNi1MJXka2hTmCESZgoU6DW0KdlPukVGnoU3hbsqQUaehTQ3YTTky6kS0aVmRXk9Dm0qfoo7rOUpGQ5vCmCOuSpKMiDZNuY58Wi1jQOLKUZfpaTW6haGhTWNiYWthJC7qNLRprzJF4aJOQ5saYGE46rmpZDS0Kcz9ws2HkYyINrUAdVRtKhkRbQojAsg+DA1tCiICyPnrktPQprBWAxt1GtoU5Ahwi2vXo4G7ctQtiH4xXJXESWpT7pFRp6FNNYzD4GpTTkObCsh15N00pzblHxa8Fhiox2OEiLpGrZCRPKc2xUOdwzgMbkSA59SmeKgLaNeRUc+pTfFQh6caEhd1kVOb4qGuAOoKGfWc2vTNwkTUc2rTNwsTUc+pTd8sTEQ9pzbFQx1qU42Mek5tioc6PDfVuBEBQUObaqBNFW4WkqChTQ2MCCCjTkObWoC6QUadhjZ1AHWLi7qkoU09QB05u1SS0KYKRno9Muo0tCm0MAY3vi5paFNoYSwy6iS0ac/C4HZHbps1v6GeFXUS2lTFfPUyOgu2TYKuG/W+XcfNEZA0tClAHbnbWluIee2og4p2gWthFA1tysvy1xUJbdoeyhfir8dxQNeNelu4XwrXaWhTBlDH5joJbdrnOq4Po0ho0z7XkVEnoU17XEfWpoqGNgVcFwwZdRLatG0q12ZSI6NOQ5sygDqyNtVEtClAXSKjTkSbAtQVMupEtCm0MLin1ZqINoUWBhl1Etq0Z9eR8xw1DW0KetwJgdtHQJPQpr0+AhIZdRra1AHUFTLqJLSptHDWALLnSEObOoA6skoyNLQpnCETU96wUKehTWHXe4WMOglt2uu/jqySDA1tCmfIIKskQ0ObSpD7FQPcWKjT0KY85TpHrq02NLQpzEJCVkmGhjYFFoZrZNRpaFMwVRad6zS0KZgqi811S0SbAtQNMupEtGlZfXrjwIkrR52DWQPIOb2WhjaVoHdpPKTHQp2GNlUAdYOMOhFtCuy6w63BszS0KVRJMeiHhToNbaog6rjRL0tDm8btqxBtamloUwdQR9amjoY29aIorjsS2lQxgDo212loU8h15M6CjoQ27XEdubOgo6FNI7naGjyOizoNbepT1AVDRp2GNoWoI2fcORLaVDGAOnJdkqOhTWHGXWxjiYU6DW0KIgJC4aLuSWhT4UEmNXJ2qSehTdtB0YXMGoii7cpR751q4GbceRoz2nl6gieiVsVCncaM9vhPypi0qcd0Aj8V6zPApHqh2T45eUwZ7QLVXpwBqllDVnMRTguApO0jaQaANPPhmDMIhfaga6UB7qibms7akBsPdZgMhNuQW2dtyI2HuoE2BvWoXmdtyI2HuoaoI7sSJBy4HtdxU7B01obceKhbgDqy25y1ITce6nESWct1XLuetSE3og8DUMdtIaqzNuRG9Nch11GP6nXWhtyIPgxIfMPmOg1tGqM+Leq42jRrQ25EC5OiLhgy6jS0KVBJgiOjTkObGnCQxpA9RxraVEDUkX0YGtpUAtQ5LupZG3Ij7qawwR8y6jS0qYZcR03y1FkbciOiDpoU4x7V66wNuRF9GDjeAhl1GtpUp3EYIZBRp6FNC1NJWRtyl5Mj4FBL4XTWhtzF7KbcI6NOQ5vC3ZQho05Dm8JBCxwX9awNud8ivRF1GtoUNAzB9hyzNuQuJ+aIrJKyNuRGzIcBzXGQT6uzNuRGjPSCUjhsC0NDm9q0PEhw1PZbOmtDbkTUYTk/aimcztqQG9FfBxaG456bZm3IXU7uF3I+TNaG3MXkCGDHYbI25C6G69j+etaG3MVEBLDz17M25C6nVgMbdRraFOQIIDef11kbcuOh7kD0i+GqpKwNuYvRptwjo05Dm8KBlwJXm2ZtyI0Y6YVcR95NSTQ907GNYtu4FbdChkZD7vCMpRYGOR+GRkNuLaBdR0adRNOz3qmGREY9pzbFQ10B1HHHjGoaDbmLszA5tembhYmok2jIXZyFI
dGQu6dNNTLqJBpy985NcRu3ahoNuduit7ZLI24WEo2G3G06ZxmjXDSNhtztQWUZo1w0jYbcrQRvz5KQUaehTT1AHTm71JPQpgZGej0y6jS0KbQwuEO7taehTaGFsciok9CmPQuD3B0566CFN9Qj6iS0qWGwyhcZdRratGfXUXMEDKOhTQHqyN3WWoN37aiDinaBamFMbAp05ajzovx1w0hoUy2K8tcNo6FNGajyxeY6DW3KAerYXCehTftcR/ZhSGjTPteRUSehTXtcx9WmhtHQphxWyOCizkloU+XBmFGOjDoNbcoA6sjaNBYZXznqcKSuREadiDYFqCtk1Elo055dx+2/bjgNbQrtukRGnYY2hXYdN8/RcBra1MJ+jqh9BAynoU1hHwGJjDoNbeoB6goXdUFCm7bFzIWoJEFDmzqAOrJKiiBfOepwhkysycNCnYY2hV3vFTLqJLRpr/86skoSNLQpnCGDrJIEDW0qQe5X7O+IhToNbcpTrnPc2mojaGhTmIWErZJoaFNgYbjGRV3S0KZgqiw21yUNbQqmyqJznYg2BagbZNSJaNOi+vQaSUOb8jT6hZ3TK2loUwl6l0atioU6DW2qAOoGGXUi2hTYdYdbgydpaFOokuIgOizUaWhTBVHHjX4pGtrUgtpqZG2qaGhTB1BH1qbxkOXKUY/Nh0rhOglt2pbVlsJ1GtoUch23s6BRJLRpj+u4nQWNoqFNY+vttgaP46JOQ5v6FHXBkFGnoU0h6sgZd4qENm2bVBRSl6RpaFOYcReP0LBQp6FNLcxzREadhDZVsalcIdmlmoQ21QygjjtrwGga2rR3qoGbcafHaNP75Q+bzfr3PRAdpOuvq+0/w9ds9/W/mq8ry8Tu+3dfOz98963zzft6swp/Rb2J1+7DX/R0q6oRb7sLz/djjMcL+xs+ffet+x285YsL9rj+srmpDwCj4367XWxu6+2Yd9bL23qcuRMpBQY2mejkdDkQrw2RYPdh79er8Ie++EkiVivEWzzDsPtXeyr1bgRrGLkCN3pGqXejJ062CLyGpmPE/EGaRnLxwCWZkktJMYVcLfOrpj3onv3NJ6iT2V8kVTXsPmv71uo8VNUCUDWesZ9MVXi8EFu3ZqPqGC0+jqopTYNveoSlLSGl5Yk5viJCAts5kABzJkKCTxJyqu108FfObTtzhinwxjrzNElHsD4zcjpWJmeYAnGYNiA37jBtY8aEKU53Z/Vh67l3XJ0SqdH2/noc11610EAe3NmMb2V7zKqM9fuXnmiOweFgK3tzmWMzJrowxT8Q4/2DnQvc8lsKfyX+gQSscTNSNOVRfDhOJaSBvUs1uNHshBwTeJmqrZqGVuNY+X1DS53Q0smJUYTL4Coo7+Z+Rq5WUndsJ/hcXqmuZTUTiQx9AZebyK8OErQG0ngDBBSfxsRBt4BfVTwro1ugWc8t0KpiwfdtX0Dtj3cLQG80k1eluTHDRk5VCS/S5YTT3xHBSi4GVre9eHYn3807IWQuqWShebR9JM0AkGY+HHOec6FJVKuOxzAzSlSXdeYHHuoytafI0+Jc1pkfeKgbaGNQ60hc1pkfeKhriDpqEMxlnflRDtdx6wNd1pkfeKj3w3K4qJM43LACSHaHa9ezzvxA9GFgoATXh8k68wPRX4dcR60jcVlnfiD6MLCOBJnrNLQpPK7wuNo068wPRAuToi4YMuo0tCnsq8qRUaehTUF3LOSZ2i7rzA9Efx2ijuzD0NCmoNek4Mio09CmoNck8gQtl3XmB+JuCrmOWoHsBA1tCqdP4CaZugjylaMOa2E5Muo0tKlO4zBCIKNOQ5sWppKyzvwoJ0fAofZpcllnfhSzm3KPjDoNbQp3U4aMOg1tasBuypFRJ6JNwW6KHOnNOvMDEXU48RbXc8w686OcmCOySso68wMxHwZU7yCfVmed+YEY6QU9ybAtDA1takGJdTykx0Kdhja1oGA1JpBjoU5DmxpgYeJhDhbqNLQpzP1CzofJOvOjmBwB7DhM1pkfxXAd21/POvOjmIgAdv561pkf5dRqYKNOQ5vCeZS4kxFd1pkfeKg7EP1iuCop68yPYrQp98io09CmGsZhcLVp1pkfiJFeyHXk3ZTEzA8bO3S0nUdwK2RozPywHMZhcCMCNGZ+WAHtOi7qNGZ+9E41JDLqObUpHuoKoK6QUc+pTd8sTESdxMyP4ixMTm36ZmEi6iTmUfa0qUZGncQ8yt65Ke5UIadpaFMNtKnCzUIaNQPj8lE3MCKAjDoNbWoB6rhzhl3WQQ94qDuAukVGnYY29QB15OzSeIx73ag7GOn1yKjT0KbQwhjc+PqosQiXjzq0MBYZdRLatGdhkLsjj5qc8Ib6uVEnoU0dnCGJ3FnQ0NCmPbuOmyNgaGhTgDp2tzVLQ5t6UNEucC2MpaFNeVn+emxMfuWoi7L8dUtDmzJQ5YvNdRraFE5GxOY6CW3a5zqyD0NCm/a5jow6CW3a4zqyNrU0tCmHFTLIqJPQpgZOuOS4qDsa2pQB1JG1qSOiTQHqEhl1ItoUoK6QUSehTXt2Hbn/uqOhTaFdl8io09Cm0K4j5zk6GtrUwn6OuH0EHA1tCvsISGTUaWhTD1BXyKiT0KY2SvBCVJKnoU0dQB1ZJXka2hTOkIk1eVio09CmsOu9QkadhDbt9V9HVkmehjaFM2SQVZKnoU1jm/kWddwTPE9Dm/KU6xy5ttrT0KYwCwlZJXka2hRYGK6RUaehTcFUWWSue0ZDm4Kpsshc94yINgWoG2TUiWjTovr0ekZDm/I0+oWc0+sZDW0qQe/SqFWxUKehTRVA3SCjTkSbArvuUGvwPKOhTaFKiu0TsFCnoU0VRB01+uUZDW1qQW01sjblNLSpA6gja1NOQ5vGI7NSuE5CmzoGUMfmOg1tCrmO21nQcxLatMd13M6CntPQpjFhvK3B47io09CmPkVdMGTUaWhTiDpuxp3nJLSpYwB13Lokz2loU5hxJwUq6oKGNrUwzxEZdRLa1MRUiDKyS30E+bpRtwygjjtrwAsa2rR3qoGacefFGG16v/xhs1n/vgeig3T9dbX9Z/ia7b7+V/N1ZZnYff/ua+eH7751vnlfb1bhr6g38dp9+IueblUZZeKF5/sxxuOF/Q2fvvvW/Q7e8sUFe1x/2dzUB4Bpu51uF5vbejvmnfXyth5n7kRKgYFNJo5s63IgXhsiwe7D3q9X4Q998ZNEfMbiLZ5h2P2rPZV6N4I1jFyBGz2j1LvREydbBF5D0zFi/iBNI7l44JJMyaWkmEKulvlV0x50z/7mE9TJ7C+SqhZ2n7V9a3UeqloBqBrP2E+mKjxeiF2BslF1TARkHFVTmgbf9AhLW0JKyxNzfEWEBLZzIAHmTIQEnyTkVNvp4K+c23bmDA7hjXXmaZKOYH1mZHWscgaHEIdpA3LjDtP2Ykxw6HR3Vh+2nnvH1SmRGm3vr8dx7VULDeTBnc34VrbHrMpYv3/pieYYHA62sjeXOZZjYjpT/AMx3j/YucAtv2V4bq7DP5CANW5GiqY8ig/HqYR0sHepBjeanZBn
01Y7QxkpedTyDdpacdjYTqedjtw4Srv2nWNpp2HN8IDzcR7aacETMwjrfHjlHNu/JhpJDR4jYfNykrNRM3X2pLy5Wzw+rm4gL5e/rP5o3mwOEefoIo9cxE19t9iufquTm49fWQEVMBPTFk+4Izeaf/GGFIe5C3j9+CF8cdt88XNwcj816H/+UC+Xq/vbx/iW8KHtu3prHvzSbbrKg65t1z/eXVrcrW7vG6qE9W82sR8bL3d1s7j7YfeDz6vl8u4lZ3yz/nK/bFzvw0ZovOcs4CM21FB2gG1cv0ysV/nOYdnGSJareubUuZ45hf7MDQ6rAc/cT4v75Wq52NbhfR8aQi82q5rYc6d6z91AgUPe525w4k3PXNb1svkzHuu7+ma7Wt/TWjbTW7aBjKvMyyaomUtzLnNp8M3lCP/y8dPioflyGzhd/7Fu7vjjQ0cFt9c70vhYJO8pgLdb8ub75Wqze5zlu4Bes4gwuPfR3dQ3N71HN/zkg9NKNw/Y7WaxXNVJ6K9emA/mIKnGP3wcPHxyQN64Adq5A/LmtQ/fGBfz739vr23aPXB9/9v67ksDObGdjwORKodSygZWUcy4imfwOE8LbRdiSaH3L2MOwMlizx650fyW9LSQ+xVsg1CpS5gNMVWp9240++INjhEBZvRPDepvQr1dpKGeC3k9zzFjSApxXpaL2n0cdF7Mjas/fBx2XmzwXupzrR9IgpKsL/gyOy+DA03gU/f14W5xv3hWemzvydB46CQYjyrlSJVu51u00zIlX9rn/ry6i9+Vvs8pMN9NKnCL0dExCW7Ex+VkBSwX3zpve2je8HjCL7z7nP3CP9/x3JtozkrDj4ulXQyrQWX0U57XOYymA0D29zzb592MD1/OssI8EINEcBmTYtAgzpkmhgOx7Gev5oU4Z05YHohlaRDnrA7EgRjbUGQdNJFpu0tLoYYkXl6IhwTCZUMMcgfxIc5ZbpaJxWlmoxxoP5sX4px9T3BsMTrEr8/d60ZeY0ZeUj3y3bjc5+9OyPx7cSWOJvHtpcDRLL69MCtETwPuxMfj5OQ8cB8F7jN70HjUbIwTSDeYzzwpefQgVV9FOn+ppAMzhCSsvRjLujhnq40ImNysG6NPTzd1++KOU4zdaYnOr2KeHc88UxTzpKy8Vc5KKxXXQkGlqCsmNBdGGO2UnZipLH1yFwUKVqVwyS+RnbSnHXBfDWndeNL6skjLK8AgBby30czs36mpDN+/wO82PxdfXRt33FccXSnXZWKHmMgG1L1xMQsX+agxPK+1i/wgF8/BqVKIoqpOYaSXoIuV1AGXlxd7NInA4VrmOvZAmtdXTg7qjqllaydswq8hWwwGjjFgZfESeHwanPqey8czPP15ZiePj5qzNK8xe2FjnXFbjacAI1gZAx2FsFKpg7rEpD+G5eGjcx6gtczbhiawckyAOxcrUwtbCi9lUbzsO2nBsHX39akBQ9W1n1rB1Cprkx+z3OZTjfEFL+uYANqUyBCkYwKuzuw5XcQONT4I0VKwWEtwrl3I5PbZ3YhdqJBUZfw6K9h2Ab/Oig9mvb7VWR1cxeLqrPioDNVrrLMyUgzFuV5bdWWwq674qITYq6q6MuLwUk6swYKlyNlrsPhg4u1bt5SD5f/4RVh8TDLvHgd+3GXpV0g9vQYcFd38N+yOfHj670ygw8qpgVYZXAygbubbyfyYNicB9UDaj4ub7ZdFc8//XD8Se15snFZ7oH7K5/VBRqUNX9WOZWFXODMxAA5vJFnuYOKohOSrWjwDIFeicmba8nFx/F7zr+Bp4eDrW0HB+NlWcOhec69g8AeOr+Bt2Ekexu8fgt2s7+/rm+3iQ7wDO7ivxKy/b/Hb/ozHeOqXa1vJmWTejo3p+WqMmT/98OfvXu7A3V+UY8t8QmlFJQbWobIieQ0sy/Bb5lgllGGcRa2SqRQ3nVca4dWVTF+FryfKmM8LWk/OK959MV/4gqJMEC1qQbmrmOs+guKyVxRlOuklrehl2VwxKmfyShe02DWZJ8lm7vq7deDFatt8p4cDTkdW7Pjpe+vxjTh9Lyw/jFXaDdmF9gy9Uskrvf/o3m++Amm6XFUpazMnZQs2RkqVV9eXg8zjU0kKI3Ngc+qmpmV8L2yBJ8dHdI/MlQWv3GSepzIap/LqDAweXTXdWu5CGNxwS2qpJONcO+cVIBqrmtON5jpXzrbjH0+msKuM4zJgqjWzcPy6rbRS0jkpjPCM5T7uF+zMJdeXzubRNV2t6S6FzWGT9/xFexzIrpPXRO/i2MeIynVeXma3zvMUc+eoRczhbpxgrMuq+Hmj99Nax2ZHFKV624yoSlYIHhwlK+TKjqW1M2JmNFdzZM/nMFWjd+L2obgQUxX8weQ11VT59nAuHh6Lw3ee3zq9HZm+YJ2KtT+X2earqMhMS/tC7E/QlIoJxryxTmjH0ihjMD+Cc6aV98Jrp6dOWpKsksp6z6TgVnEPUjqbOM2TpmbMMsUVnKo8vy2aR9he/s7aHvdcXsTmCLOTgI32U5vKH2G2rVhTMSud405Z6XMH0+UIhTt7thZwPOJ0hA4N4pFosq/NuJHlTDOoA8EYG3I9fhRNBclZXA/7wmIdWJTKeMe5FMxqZXksJeouSsWd7oTOY0yzu0gvvWeORcuZSVDmovngsjNprdROeaZNml2vK5c49K74Fc3ZJf0SV7RJqUxe/dyQwpZU5MwOKXNJG2c2LufT2ppLX9OcnfYvck0vzvLGFaS4pK1nfiztrrQ1I5bktVvRERqVnxB9KeugSlZGCKO4ZkEjSm9SjapYxZLjiYl1Q1JWHU/f+LQtUKMEkkyy3MdUYkwg+IoSvsYTOxLq8ogtVNWS+onho6TKhFONDqsDxWH2l0+OeXOXyonLTf7KYbVHDzNoDXwh5G5ox3RwB7ni0up21sK+REMEJSdc4L6RRk89sxOikoHavokscq4VmCPnKuGdE5Zr57lnufuNCnG5yWA5TPfouHlpBcvcAm8BnEjbtABpYtz82KeE/aPrVscAcT52E0wNm8V2xwfhjd0lsfstli5sZbsxHdCAi7uqq+tt+SGd+edqFHWEPc9WXJijediMNDViSQhrYtFBcDQ7Sk2nLe2CkvPJh+QOD8i3IwVZMe6csV4oHWQFS00Vq6wcokG5tirud2/O1aHg/OUJY8kr6XgjjaUP+6oDRE2FsdJyatiHVd2gKZzwYYJsxgz7yDMH6stl6gmB98JE7olMjV02Tt5ZedXdvsH06bC7JyfBuTOw4zDNc0dwJriMp03myEBteYLTKC6J2sFdaHIVDVM+fKRqW9iePom1UsETMcpzHWwuaMnWZLabIGx9U8WqtFGZW0UKeebY+znEzemaCpnZhbkXR/Z9WamhxJpTie0Only9lL6Tj9iTdP5y8fip7QDdayz837vle7ryabF8+pes9xA0N3m/2G7rzf3TmwUTgxwtbKsH9o9PNXgwQJQ7aDc4oQg0nf6/m8XqfnV/G962XGwX1NpNg5XuzzvKO/FCqCEpAJbsL/cPXxqa3T431ye8YO0sz2Pt9OV8CzZi0My7+mb
12MzwEexDg8his6qJNXY3HkQiHR+3cmeIKdW/3tu/3vzP+7//+kXW//HHP93q/cP3QymqYNn+tnj8NWDQGxGUPHf7y7QWVLrUzRGxjinDgq7f/ePdX+7+8XEt/r39fz+rr3/9dvvr92OSjucuXFJgbtkQKHIAFCh2zthOM3YTL3dK4eHVHF+fxKvwx1rPjLfGKhVPnLt8rJyTVjAfZKgUWg90/H7hLXMszJig3zUsjKq4Myx8DH9en9Ru+EqpZkZS0P2OezNQVVbWquXsQVDuqnFfhaXiknNltWLtOWOxy5azjzuqFXQVl15LFZG97GXL2a693GW7NBuZs1ga2eU4YCMlr3ZPmTTMaG0HtGpRy5YzxQtl2S51YXJWPWM+T7Z6ctzbZwoUN4lKee6YFNYZ54YmKZa0ajZnFlK5e9elrdpp0/5KqZk8cXFHNKiKD9+INj4xxFDI8c33oupEARpjku7LRlVaKO1Z+Bu9VFNnKTcTmls+Ng8ACHDqqsv63MU2bXuWC6uRnIHI45vSFkZkwaqEYHJU5GRCD/y96W4+Lc3LkBUfdJbyEXmeKnacFuKvJrMaT2ZZFJkbmkkumRXaSKcV7/UN5LzpY8+VNNxaPjXT01Xh5lb68NAE30NK0EtFVlJbYa3xQinr4RSU+dk8UwbdZbJ5/KydaMVLYbOsusLfeJ9u/s3chY7JtBP7MBz7GFE9W/7niJ7JbpoztHedKSsf1zSX1ffyjcxPZM5RnX44ebMn3n96es1K11I4aKouN9L+CJJVey+3SY+fOu5JVNGVfvZ4Rfoxokr86cytV7k9cwX5xH1+ik+Rlbnjh5oVRvJjkQlpqo4F1FM7DB/xgeVToXLTw1haZpiATaLm5/mYg4Z5Le207PysLB8/KipGmgtheVN8yoLI0s5wJo22KckVr4zTPBh7pZqHgQMJdkIrJ50ELk55lObn+DzF9dM4flqtVKEsj6dghbD8iDcrfSW74bmJwbmm75kMpjrciwsfPHDQFa3JebOSCxXsuW8PL7Kx3FE542qi+Ul2IQj2+2p/nNCseeEnyjH5ugjrVKxtGh+dcmXtwMdsU9igkzPZWXZgxapOolkwXbONaBlc6MG6jJcpfnO3eHxc3fRY/ufVXfyuz7CyytYUS5PPZUxGP3VdlQQ3smzU0gUsF986b3to3vB4wi+8+5w9E57veF5eDMUsQeHHf9WfF6v7Zb15q/14ofYDKMro7WLVfqjTYnfX8LDDSripB2ZaHLnRmR52DR/23efM+7CfFlG7Bl6Y4Ip2Xyno4acTY6c9nvRvNRdT2k+alSujuuWjapnTazi0qA6oFhkbt6HLlOEFKb7U5vQFUTotqwFEZyClp+jlKb6k5tzLY1nVjygWujjFF85MMGY+zTOGPugFrU7xBTJnX53LMm3FV8JMcgWSWiXgCjiR1MT050qXtDzF18NM2Hn45a5H1n67udbDJbVIIl0dL9OilqJXZ0ys/8JW58hmc1HLI0YsT3kFR6ccxAwcuRwUfN0Dl4NS/WjM5SXrOiB8D8VkTgi/SFd1OMPTXVZxnZYepbcfHZ+3Mi09AqkP1iSlR3nPXebqZTxz3dEsfB5Ibrg0Phue1iCBxMwX3PqTGa1NdSCXR6ukCGm2fLVhQl9uxcYspB44FT9ozQskdcO2bi0S4LRPS5GmWungpCRpmPDRUWkpUmYzfbmTBmdh9UCR6EGbXiKrVXWgGYl1aRXHxMkvxz5FpkUceTk9TwVHjnIkXEs9trbujdP5OT3PVMFTMvVez8ASiWUP1Rs5nhYcTeWVTOuNwIfI6kD5/ty8OnMNRsZao1fwcWAI5aUR92CAQTqblj1MJe5h19X5pIJotkrNYeaWVFmR7PJHuP4K3g5UvB2M9RbI2yB3ulVBMGIr0qKgibw1LM1IPuXhmDsfecwhSZkFQTmIOzaBvjQX1LO0zmdidp0yJi30ST5FMZHW+eRl7hUeIAW4k3oeYChYUs5TSjHP8OqIguxKJqsyNlijyt0OD1sVp9IKnal+3OH90PO0Qmcuq/Lu4b/++r/YB/WPlTZ/edh88r/cycH5Gy/TdkRm9nQpAehxlJTDRWDMoSZ+SzDqefrpYq+M6LVp3uemzustXusHiYlqtekssSfZ9yz4tkdsZPjmfb1Zhb+9KeA5iX0v2iut0yeaD4wbOY+90qKp2k8/TPGwxbH9a2L0TcNxa7ELYyZD9Prz6h2b9izoxH3lWE7xRh4lkeYm7nQqpZ7uPZVnI6xcu1hHi2J373yZuwYsO5uNu7LPXX0m7irwR8wXxRvk7quPpgePOMTYM44TzWeZPBTpEvKB6Qpn4qFrsn8AD4OwA5QZX5AFBnCZc3V/CN9u1utt9+1N1evf1su6ecf/Bw==</diagram></mxfile>
2107.04086/paper_text/intro_method.md ADDED
@@ -0,0 +1,19 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Graph Neural Networks (GNNs) [\[22,](#page-10-0) [37,](#page-11-0) [50\]](#page-11-1) have achieved great practical successes in many real-world applications, such as chemistry [\[31\]](#page-10-1), molecular biology [\[17\]](#page-10-2), social networks [\[3\]](#page-9-0) and epidemic modelling [\[34\]](#page-10-3). For most of these applications, explaining predictions made by a GNN model is crucial for establishing trust with end-users, identifying the cause of a prediction, and even discovering potential deficiencies of a GNN model before massive deployment. Ideally, an explanation should be able to answer questions like "*Would the prediction of the GNN model change if a certain part of an input molecule is removed?*" in the context of predicting whether an artificial molecule is active for a certain type of protein [\[19,](#page-10-4) [41\]](#page-11-2), or *"Would a recommended item still be recommended if a customer had not purchased some other items in the past?"* for a GNN built for recommendation systems [\[9,](#page-9-1) [44\]](#page-11-3).
4
+
5
+ Counterfactual explanations [\[28\]](#page-10-5) in the form of "*If X had not occurred, Y would not have occurred*" [\[26\]](#page-10-6) are the principled way to answer such questions and thus are highly desirable for GNNs. In the context of GNNs, a counterfactual explanation identifies a small subset of edges of the input graph instance such that removing those edges significantly changes the prediction made by the GNN. Counterfactual explanations are usually concise and easy to understand [\[28,](#page-10-5) [36\]](#page-11-4) because they align well with the human intuition to describe a causal situation [\[26\]](#page-10-6). To make explanations more trustworthy, the counterfactual explanation should be robust to noise, that is, some slight changes on
6
+
7
+ <sup>∗</sup>Equal contribution.
8
+
9
+ an input graph do not change the explanation significantly. This idea aligns well with the notion of robustness discussed for DNN explanations in the computer vision domain [\[11\]](#page-9-2). According to Ghorbani et al. [\[11\]](#page-9-2), many interpretations of neural networks are fragile, as it is easy to generate adversarial perturbations that produce perceptually indistinguishable inputs that are assigned the same predicted label, yet have very different interpretations. Here, the concepts of "fragility" and "robustness" describe the same property from opposite perspectives. An interpretation is said to be fragile if systematic perturbations can lead to dramatically different interpretations without changing the label. Otherwise, the interpretation is said to be robust.
10
+
11
+ How to produce robust counterfactual explanations on predictions made by general graph neural networks is a novel problem that has not been systematically studied before. As to be discussed in Section [2,](#page-1-0) most GNN explanation methods [\[45,](#page-11-5) [25,](#page-10-7) [46,](#page-11-6) [37,](#page-11-0) [32\]](#page-10-8) are neither counterfactual nor robust. These methods mostly focus on identifying a subgraph of an input graph that achieves a high correlation with the prediction result. Such explanations are usually not counterfactual because, due to the high non-convexity of GNNs, removing a subgraph that achieves a high correlation does not necessarily change the prediction result. Moreover, many existing methods [\[45,](#page-11-5) [25,](#page-10-7) [37,](#page-11-0) [32\]](#page-10-8) are not robust to noise and may change significantly upon slight modifications on input graphs, because the explanation of every single input graph prediction is independently optimized to maximize the correlation with the prediction, thus an explanation can easily overfit the noise in the data.
12
+
13
+ In this paper[2](#page-1-1), we develop RCExplainer, a novel method to produce robust counterfactual explanations on GNNs. The key idea is to first model the common decision logic of a GNN by a set of decision regions, where each decision region governs the predictions on a large number of graphs, and then extract robust counterfactual explanations with a deep neural network that explores the decision logic carried by the linear decision boundaries of the decision regions. We make the following contributions.
14
+
15
+ First, we model the decision logic of a GNN by a set of decision regions, where each decision region is induced by a set of linear decision boundaries of the GNN. We propose an unsupervised method to find decision regions for each class such that each decision region governs the prediction of multiple graph samples predicted to be the same class. The linear decision boundaries of the decision region capture the common decision logic on all the graph instances inside the decision region, thus do not easily overfit the noise of an individual graph instance. By exploring the common decision logic encoded in the linear boundaries, we are able to produce counterfactual explanations that are inherently robust to noise.
16
+
17
+ Second, based on the linear boundaries of the decision region, we propose a novel loss function to train a neural network that produces a robust counterfactual explanation as a small subset of edges of an input graph. The loss function is designed to directly optimize the explainability and counterfactual property of the subset of edges, such that: 1) the subgraph induced by the edges lies within the decision region, thus has a prediction consistent with the input graph; and 2) deleting the subset of edges from the input graph produces a remainder subgraph that lies outside the decision region, thus the prediction on the remainder subgraph changes significantly.
18
+
19
+ Last, we conduct a comprehensive experimental study to compare our method with state-of-the-art methods on fidelity, robustness, accuracy and efficiency. All the results solidly demonstrate the superior performance of our approach.
2108.01806/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-11-13T17:19:55.287Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36" etag="h-SXdohQDXCtyMuT3cnF" version="15.6.8" type="dropbox"><diagram id="Cue0WMS9nyb8cUcdGwDY" name="Page-1">7V1tc9o4EP41zNx9SAfLkjAfCSXpzaWdzmV6TT/dOFgBXw3ijCHQX38ytvGLZKLWNmuHtNMGr+UXtPs8u1qtlJ45XuxufXs1/8gd5vVQ39n1zPc9hIyB0Rc/Qsk+kgwJjgQz33XiRqng3v3BYmF83WzjOmydaxhw7gXuKi+c8uWSTYOczPZ9/pxv9sS9/FNX9ix+Yj8V3E9tj0nNvrpOMI+kFhqk8g/Mnc2TJxt0GJ1Z2Enj+Mbrue3w54zInPTMsc95EH1a7MbMCzsv6ZfoupuSs8cX89ky0Lng6+SfGR9b27/cP57o/M8+ft49XsXKWAf75AszR3z/+JD7wZzP+NL2Jqn02uebpcPCu/bFUdrmjvOVEBpC+C8Lgn2sTHsTcCGaBwsvPst2bvAQXv6OxEffMmfe7+I7Hw72ycEy8PeZi8LDb9lz6WWHo+S66PuFX6q022LRmm/8adIJxsqw5v6n6Xr/MBp4w83fVx+vEvOz/RkLTvQpOipXoILxBRPvI67zmWcH7jb/HnZsnrNju1SD4kOsxJ9QKCIgGk20Y+R0k6qqsnZe7nWzYrfHl37mrng06scMZvZj+Mb8ZSZwTm4RmU18Vaq8ke/b+0yzVdhgXf4cjNTPudFsTygp2E70BqklHftEy7hOIWBre5u4P2/Zkvl2wP1rj0+/i3O4N0E9i/aGyJIMMW9mz3M3YPcr+4C5Z+E98ib15HremHvcP1xrPj09oelUyNeBz7+zzBmHPlJCw4u57/7gy8BObrFlfuAKKh957mwpZEFozdelJhY2Z7uMSDam+CwqGIWRKOM5dRHHNvOsexgWrOdXcK9UDZJU00PUC8KO5Ae7SfVA/9vw5MTV+gDpkWhg4NUuPSk+zcKf959H7yexcqP7ideLbhk1qFfLJPwbv1tGHv1RaZ8e/tSkVqOgViKr1bAUajVr0KqS1EwYNq/kn5vyAC/6Z1PTUxhVPUUloJrVgUpVQH1oBJ81oIrkQWXJmFJBymqKKHFDRDnmy6kdsKX4d3lMeWTAczClOjQZNqfXbS/070LSv/50/Hh79+UC9TyA9ogG7Ig14w9T71jviHUtHFgwClMXQrDkS/bLXlLdgZbsJsuDfbBhrAGq5lzg09dTc2ND35cDH4VKT+geSqd42HXoGi3UKT6XTk++Zu3h7F1rw9mCS6TD8wW0alCjN6rUhhXua3q/cyVxT75lBlWj7ewz517F2FIA6cZeuF7Ysx+Yt2VhfkwRdFpTpk62PVoEk349SMK0JLWaQRJVIKmYga3PQbVoNkQTSXAOCivSLSeMGQxKcr7lEqCkyLKcF0otSly2H0pYF0qgmUssp84uAEpYkdo6L5ToG5T0oUR0oYRBoUQuEkrQXkmekm79bNo5ql3Kq1g08g/kTDhSvyQMN7ag2qUqfWlWuxSzFjVVu5Q9p7TaxazWntZcHaNGAqijNjLk8lJ+MzfFMPXs9dqdJuIb10vt3CnMQwhJfL4JtqndnPGwYDaEvDMwssgg/j9/wwh1knHXZR9I8vlS9ZR1rJ4yZGOCLp9auI5zsNmSOEChee0KKhPLoYGygqqxiUXYyYluxQaKIFvZjkLGBmTwRseVFNw4HWMDko7lIZhEx4KEEz425VmW18zHuA/Nx8k02xsha+CVahLyAJSQQYff3Sbkqp5Uk5ApfpeyMUbDsxIyfZmQBQsnhEzlybpXTciKWvTzEjICjac6RsgDTUKuvGqpGiODDnq6zchVXakeIxPQEHnwMiPTdMGX0cIlX01SMgGPkREMfuHp1dBd61M5T15JPxR0DNNtfq28SkuPYCkowRpydZKchEBpVhgp5hheM8VScIo15FFJPeuKvqzW9mLldWOxWDWlFpfVKiqmz7ystjO8XCehKsYk5SskoVymCTvK7LbLbGZMUvCYCEMmiQx5TBLt1lONL3XrphIiLSfMMoqtgUiP20wcV90SiUjPWk9lyKt/LkYZZkEZ0LVtisH3W3qutLNat1fEydfM4utuUmto6BBmOVgFIgs9mhR8BEDA897UfO3AUq04z4Yx/XwYk8Y6aSRj1IxPxYoIdXE8AsWnvCLi0vBJwfFpdmZauUaAmJ3YjJAgUOrs9GDOrLr/htZgDrYiNrHiSxw/FAdzJgYezJmdmQw+715z6s5CmvwLujyGwIau3ebfqp5Ti39hS2ATK37j37DiFZh/cWcmJdrAv7oJHNglCJ3ZrK6F/NvM/H+rKl5NRX7vYvmXQPMv6JaDXeNf3QQd7IoD2F1ous2/zawwb1V9q6nI314q/xLo+JfCJgsvcp4F66YxaNVsZDXXLI9T69zc+9UX4BXmbAbgVZUENNZq755WauvXLtwDTTZiuTKrnXtaVcJScU8rAj1sIaBpo5+BUp2Q0N4dG3T8geX6uAuABIWGBJWnFevZlPz2t4ceGt/93uDe5NUcfUETAyo7+pr2JheH6a+ujMZj6S8ANSf/Aw==</diagram></mxfile>
2108.01806/main_diagram/main_diagram.pdf ADDED
Binary file (18.6 kB). View file
 
2108.01806/paper_text/intro_method.md ADDED
@@ -0,0 +1,152 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ We show qualitative results of our method using different layouts on the same background image, demonstrating the diversity of generated contents. Several results of this experiment are presented in Figure [1](#fig:diff_layouts){reference-type="ref" reference="fig:diff_layouts"}. As can be seen, our model can provide plausible renderings given different layouts.
4
+
5
+ <figure id="fig:diff_layouts" data-latex-placement="h">
6
+ <div class="center">
7
+
8
+ </div>
9
+ <figcaption>Diversity evaluation. Generation results under same background image <span class="math inline"><em>X</em></span> with different object layouts.</figcaption>
10
+ </figure>
11
+
12
+ Here we show results of our method using the same background and layout. The diversity is now controlled by the initial latent code of the generator. The results are presented in Figure [2](#fig:diff_appearance){reference-type="ref" reference="fig:diff_appearance"}. As can be seen, our model can provide plausible diversity in the object appearance.
13
+
14
+ <figure id="fig:diff_appearance" data-latex-placement="h">
15
+ <img src="figures/rebuttal/0753_diver_collage.png" />
16
+ <figcaption>Diversity evaluation. Generation results from same background image <span class="math inline"><em>X</em></span> with different model weights.</figcaption>
17
+ </figure>
18
+
19
+ We analyze the effect of the background on the generation of the furniture. The figure below shows a simple example where we modify the background image by enlarging the left white backdrop. In Figure [3](#fig:background_impact){reference-type="ref" reference="fig:background_impact"}, we see that objects like paintings can conform to this structural change in the background, while other objects like beds only change in appearance. The quantification of the impact of background images is left for future work.
20
+
21
+ <figure id="fig:background_impact" data-latex-placement="h">
22
+ <img src="figures/rebuttal/change_bg.png" />
23
+ <figcaption>Impact of background to furniture generation.</figcaption>
24
+ </figure>
25
+
26
+ Our model is designed for inferring the decoration at once. While not designed for iterative object insertion, our method can add one object at a time to a limited extent, thanks to the diversity of the scenes and images in the dataset, as shown in the example in Figure [4](#fig:iterative){reference-type="ref" reference="fig:iterative"}. In future work, we could consider using object removal with image inpainting to augment the training data.
27
+
28
+ <figure id="fig:iterative" data-latex-placement="h">
29
+ <img src="figures/rebuttal/4069_seq.png" />
30
+ <figcaption>Generation results by adding the objects one at a time.</figcaption>
31
+ </figure>
32
+
33
+ We provide an additional comparison of our method with text-guided image synthesis, which also uses coarse layout descriptions similar to ours rather than fine-grained semantic maps. For text-guided methods, we chose GLIDE [@nichol2021glide] and generated objects by masking target regions and providing a text prompt for each object. Specifically, we used the released GLIDE (filtered) model for image inpainting in a masked region conditioned on a text prompt. We generate objects one by one iteratively by masking each object box with the target object text to realize spatially controlled semantic generation (Fig. [5](#fig:glide){reference-type="ref" reference="fig:glide"}, GLIDE-iter column). We also inpaint all areas of the same boxes of a given empty scene at once with one text prompt (Fig. [5](#fig:glide){reference-type="ref" reference="fig:glide"}, GLIDE column). Compared to our method, GLIDE fails to preserve the background (e.g., windows) properly, and the generated objects are unaware of the context, making the results semantically inconsistent, e.g., the fireplace in the last image. Additionally, GLIDE takes 4 to 8 seconds to inpaint a 256$\times$256 image, which is much slower than our method.
34
+
35
+ <figure id="fig:glide" data-latex-placement="h">
36
+ <table>
37
+ <tbody>
38
+ <tr>
39
+ <td style="text-align: center;"><img src="figures/rebuttal/glide/bed3378/bed3378_bg.png" style="width:20.0%" alt="image" /></td>
40
+ <td style="text-align: center;"><img src="figures/baseline_boxes/bed3378_nsd.png" style="width:20.0%" alt="image" /></td>
41
+ <td style="text-align: center;"><img src="figures/rebuttal/glide/bed3378/img00003378_7.png" style="width:20.0%" alt="image" /></td>
42
+ <td style="text-align: center;"><img src="figures/rebuttal/glide/bed3378/3378_all.png" style="width:20.0%" alt="image" /></td>
43
+ </tr>
44
+ <tr>
45
+ <td style="text-align: center;"><img src="figures/rebuttal/glide/liv2825/liv2825_boxes_bg.png" style="width:20.0%" alt="image" /></td>
46
+ <td style="text-align: center;"><img src="figures/boxes_vs_points/liv2825_boxes_fake.png" style="width:20.0%" alt="image" /></td>
47
+ <td style="text-align: center;"><img src="figures/rebuttal/glide/liv2825/img00002825_7.png" style="width:20.0%" alt="image" /></td>
48
+ <td style="text-align: center;"><img src="figures/rebuttal/glide/liv2825/2825_all.png" style="width:20.0%" alt="image" /></td>
49
+ </tr>
50
+ <tr>
51
+ <td style="text-align: center;">Input</td>
52
+ <td style="text-align: center;">Ours</td>
53
+ <td style="text-align: center;">GLIDE-iter</td>
54
+ <td style="text-align: center;">GLIDE</td>
55
+ </tr>
56
+ </tbody>
57
+ </table>
58
+ <figcaption>Comparison to text-guided image synthesis method GLIDE <span class="citation" data-cites="nichol2021glide"></span>.</figcaption>
59
+ </figure>
60
+
61
+ # Method
62
+
63
+ Table [1](#table:generator_arch){reference-type="ref" reference="table:generator_arch"} describes the input and output dimensions used in the sequence of generator blocks in our generator. For each generator block with $v_i$ input and $v_o$ output channels, the object layout $L$ first modulates the feature map using a SPADE residual block similar to [@park2019SPADE], which consists of two consecutive SPADE layers with ReLU activations, as well as a skip connection across the block. Unlike [@park2019SPADE], we do not add a convolutional layer after each SPADE layer in the residual blocks. The number of channels remains $v_i$ before and after the SPADE block, and the number of hidden channels in SPADE layers is set to $v_i/2$. Following the SPADE block, we upsample the feature map by a factor of 2, pass through a convolutional layer with $2v_o$ output channels, a batch norm layer, and finally through a gated linear unit (GLU), following the convolutional block implementation in [@liu2021faster]. All aforementioned convolutional layers have a kernel size of 3 and padding size of 1.
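+
+ The listing below is a minimal PyTorch-style sketch of one such generator block, written only to illustrate the description above; it is not the released implementation, the SPADE layer is heavily simplified, and names such as `SPADE`, `GeneratorBlock` and `layout_channels` are our own.
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class SPADE(nn.Module):
+     """Simplified SPADE layer: normalize, then apply a per-pixel scale/shift
+     predicted from the object layout map."""
+     def __init__(self, channels, layout_channels, hidden):
+         super().__init__()
+         self.norm = nn.BatchNorm2d(channels, affine=False)
+         self.shared = nn.Sequential(nn.Conv2d(layout_channels, hidden, 3, padding=1), nn.ReLU())
+         self.gamma = nn.Conv2d(hidden, channels, 3, padding=1)
+         self.beta = nn.Conv2d(hidden, channels, 3, padding=1)
+
+     def forward(self, x, layout):
+         layout = F.interpolate(layout, size=x.shape[2:], mode="nearest")
+         h = self.shared(layout)
+         return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)
+
+ class GeneratorBlock(nn.Module):
+     """One generator block: SPADE residual modulation by the layout L,
+     then upsample -> 3x3 conv -> batch norm -> GLU (v_in -> v_out channels)."""
+     def __init__(self, v_in, v_out, layout_channels):
+         super().__init__()
+         self.spade1 = SPADE(v_in, layout_channels, hidden=v_in // 2)
+         self.spade2 = SPADE(v_in, layout_channels, hidden=v_in // 2)
+         self.conv = nn.Conv2d(v_in, 2 * v_out, 3, padding=1)
+         self.bn = nn.BatchNorm2d(2 * v_out)
+
+     def forward(self, x, layout):
+         # SPADE residual block: two SPADE+ReLU layers and a skip connection.
+         h = F.relu(self.spade1(x, layout))
+         h = F.relu(self.spade2(h, layout))
+         x = x + h
+         x = F.interpolate(x, scale_factor=2)   # upsample by a factor of 2
+         x = self.bn(self.conv(x))              # 2*v_out channels
+         return F.glu(x, dim=1)                 # GLU halves the channels to v_out
+ ```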
64
+
65
+ The last two generator blocks use the SLE module in [@liu2021faster] to modulate the feature maps with earlier, smaller-resolution feature maps. We pass the output of the source generator block through an adaptive pooling layer to reduce its spatial size to $4\times 4$, then use a convolutional layer with a kernel size of 4 to collapse the spatial dimensions, reducing the feature map to a 1D vector. This is passed through a LeakyReLU (0.1) activation, a $1\times1$ convolutional layer and a sigmoid function to obtain a 1D vector of size $v_o$, where $v_o$ is the number of output channels of the destination generator block. This vector is multiplied channel-wise with the feature map inside the destination generator block, right after the upsample operation.
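+
+ A compact sketch of this skip-layer excitation path is given below; the pooling type and the channel width of the intermediate layer are assumptions, since only the size of the final gating vector ($v_o$) is fixed by the description above.
+
+ ```python
+ import torch.nn as nn
+
+ class SLE(nn.Module):
+     """Skip-layer excitation sketch: squeeze an early, low-resolution feature map
+     into a channel-wise gate and apply it to a later, higher-resolution one."""
+     def __init__(self, src_channels, dst_channels):
+         super().__init__()
+         self.gate = nn.Sequential(
+             nn.AdaptiveAvgPool2d(4),                   # reduce spatial size to 4x4
+             nn.Conv2d(src_channels, dst_channels, 4),  # kernel 4 collapses 4x4 -> 1x1
+             nn.LeakyReLU(0.1),
+             nn.Conv2d(dst_channels, dst_channels, 1),
+             nn.Sigmoid(),
+         )
+
+     def forward(self, dst_features, src_features):
+         # Gate of shape (B, dst_channels, 1, 1), broadcast over H and W of dst_features.
+         return dst_features * self.gate(src_features)
+ ```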
66
+
67
+ The main discriminator $D_{adv}$ consists of five discriminator blocks, followed by an output convolution module. Each discriminator block consists of two sets of convolutional layers. The first set has a kernel size of 4 and stride of 2, and is responsible for downsampling feature maps by a factor of 2. The second one has a kernel size of 3 and padding size of 1, and transforms the feature maps from $v_i$ to $v_o$ channels, where $v_i$ and $v_o$ are the numbers of channels listed in Table [2](#table:discriminator_arch){reference-type="ref" reference="table:discriminator_arch"}. The output convolution module downsamples the feature map to $4\times4$, and is followed by a final $4\times 4$ convolution layer reducing the feature map to one single logit. The object layout discriminator $D_{obj}$ takes the $32\times 32$ feature map output from $D_{adv}$ and repeatedly downsamples the feature map by a factor of 2, using convolutional layers with kernel size of 4 and stride of 2. Like $D_{adv}$, the final feature maps are reduced to a single logit via a final convolutional layer. Each convolutional layer in $D_{adv}$ and $D_{obj}$ - except the final layer - is followed by batch norm and LeakyReLU (0.1) activations.
68
+
69
+ :::: center
70
+ ::: {#table:generator_arch}
71
+ Block $\#$ Resolution SLE source block Features
72
+ ------------ --------------------- ------------------ ---------------------
73
+ 2 $4\rightarrow8$ -- $12\rightarrow512$
74
+ 3 $8\rightarrow16$ -- $512\rightarrow512$
75
+ 4 $16\rightarrow32$ -- $512\rightarrow256$
76
+ 5 $32\rightarrow64$ -- $256\rightarrow128$
77
+ 6 $64\rightarrow128$ 2 $128\rightarrow64$
78
+ 7 $128\rightarrow256$ 3 $64\rightarrow32$
79
+
80
+ : List of generator blocks and their properties.
81
+ :::
82
+ ::::
83
+
84
+ :::: center
85
+ ::: {#table:discriminator_arch}
86
+ Block $\#$ Resolution Features
87
+ ------------ --------------------- --------------------- --
88
+ 7 $256\rightarrow128$ $3\rightarrow32$
89
+ 6 $128\rightarrow64$ $32\rightarrow64$
90
+ 5 $64\rightarrow32$ $64\rightarrow128$
91
+ 4 $32\rightarrow16$ $128\rightarrow256$
92
+ 3 $16\rightarrow8$ $256\rightarrow512$
93
+ Output $8\rightarrow1$ $512\rightarrow1$
94
+ 4 $32\rightarrow16$ $64\rightarrow128$
95
+ 3 $16\rightarrow8$ $128\rightarrow256$
96
+ 2 $8\rightarrow4$ $256\rightarrow256$
97
+ 1 $4\rightarrow2$ $256\rightarrow256$
98
+ Output $2\rightarrow1$ $256\rightarrow1$
99
+
100
+ : List of discriminator blocks in $D_{adv}$ and convolution layers in $D_{obj}$.
101
+ :::
102
+ ::::
103
+
104
+ As presented in the main paper, the semantic labels for images in the Structured3D dataset are retrieved from the NYU-Depth V2 dataset [@Silberman:ECCV12]. Five classes: `window`, `door`, `wall`, `ceiling`, and `floor` are considered as "background" and appear in both empty and decorated scenes. The remaining classes represent "foreground" and are used in decorated scenes only. In addition, since the distribution of the foreground classes is highly unbalanced, and some classes do not really exist in the Structured3D dataset, only a subset of these foreground classes was used in our experiments. We show the list of the foreground classes used in our work in Table [3](#table:palette){reference-type="ref" reference="table:palette"}.
105
+
106
+ :::: center
107
+ ::: {#table:palette}
108
+ Name Color Name Color
109
+ ----------- ------------------------------------------------------------- -------------- ----------------------------------------------------------------
110
+ `cabinet` ![image](figures/palette/class_cabinet.png){width="0.15in"} `picture` ![image](figures/palette/class_picture.png){width="0.15in"}
111
+ `bed` ![image](figures/palette/class_bed.png){width="0.15in"} `curtain` ![image](figures/palette/class_curtain.png){width="0.15in"}
112
+ `chair` ![image](figures/palette/class_chair.png){width="0.15in"} `television` ![image](figures/palette/class_television.png){width="0.15in"}
113
+ `sofa` ![image](figures/palette/class_sofa.png){width="0.15in"} `nightstand` ![image](figures/palette/class_nightstand.png){width="0.15in"}
114
+ `table` ![image](figures/palette/class_table.png){width="0.15in"} `lamp` ![image](figures/palette/class_lamp.png){width="0.15in"}
115
+ `desk` ![image](figures/palette/class_desk.png){width="0.15in"} `pillow` ![image](figures/palette/class_8.png){width="0.15in"}
116
+
117
+ : Foreground classes used in our work.
118
+ :::
119
+ ::::
120
+
121
+ We carried out experiments on two subsets of the Structured3D dataset - bedrooms and living rooms, as those sets contain enough samples for training and testing. Note that each scene in the Structured3D dataset is associated with a room type label, which allows us to identify bedroom and living room scenes. To provide enough cues about the scene type, we filtered out images that contain fewer than 4 objects. For each source image, we resized the image from the original size $1280\times720$ to $456\times256$, then cropped two images with size $256\times256$ from each source image. Images were cropped such that at least 60% of the foreground object pixels were still present in the cropped regions. We report the total number of training and test samples for each set in Table [4](#table:datasets){reference-type="ref" reference="table:datasets"}.
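+
+ The retention criterion can be checked with a few lines of NumPy, as sketched below; we read the 60% threshold as applying to all foreground pixels of an image jointly (a per-object check would be a small variation), and the function name and arguments are ours.
+
+ ```python
+ import numpy as np
+
+ def keep_crop(semantic_map, x0, foreground_ids, crop=256, min_ratio=0.6):
+     """Return True if at least `min_ratio` of the foreground pixels of the
+     456x256 semantic map fall inside the 256x256 crop starting at column x0."""
+     fg = np.isin(semantic_map, foreground_ids)
+     total = fg.sum()
+     return total > 0 and fg[:, x0:x0 + crop].sum() / total >= min_ratio
+
+ # e.g. keep a left and a right crop only if each satisfies the criterion:
+ # keep_crop(sem, 0, fg_ids) and keep_crop(sem, 456 - 256, fg_ids)
+ ```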
122
+
123
+ :::: center
124
+ ::: {#table:datasets}
125
+ Data split No. of training images No. of test images
+ ------------- ------------------------- --------------------
+ Bedroom 28,038 4,931
+ Living room 19,636 3,976
129
+
130
+ : Statistics of data used for training and testing.
131
+ :::
132
+ ::::
133
+
134
+ <figure id="fig:augment" data-latex-placement="t">
135
+ <div class="minipage">
136
+ <p><img src="figures/augment/full.png" alt="image" /> <img src="figures/augment/label_5.png" alt="image" /></p>
137
+ <p>(a)</p>
138
+ </div>
139
+ <div class="minipage">
140
+ <p><img src="figures/augment/full_aug.png" alt="image" /> <img src="figures/augment/label_5_aug.png" alt="image" /></p>
141
+ <p>(b)</p>
142
+ </div>
143
+ <figcaption>(a) Sample image with corresponding object layout map where each dot shows the location and semantic label (via the color) for an object. (b) Same sample after translation and horizontal flipping.</figcaption>
144
+ </figure>
145
+
146
+ A direct consequence of training on smaller subsets of the Structured3D dataset is that the number of usable training samples the model observes is greatly reduced. To deal with this issue, we implemented the DiffAugment technique [@zhao2020diffaugment] in our training process. DiffAugment improves generation quality by randomly perturbing both the generated and real images with differentiable augmentations when training both $G$ and $D$, and is reported to significantly boost the generation quality of state-of-the-art unconditional StyleGAN2 [@karras2020stylegan2; @karras2020ada] architecture when training data is limited to a few thousand samples. Thus, we adopt this technique when training on our architecture, in order to compensate for the reduction of training samples.
147
+
148
+ While the authors of DiffAugment proposed multiple augmentation methods, we only applied translation augmentation to the images. This is because other methods (e.g., random square cutouts) may affect the integrity of decorated scene images. We set the translation augmentation probability to 30%, and also horizontally flipped the images 50% of the time. For each augmented image, its corresponding object layout was also perturbed in the same manner. Figure [8](#fig:augment){reference-type="ref" reference="fig:augment"} shows an example of our augmentation scheme.
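+
+ A rough sketch of such a paired perturbation is shown below. Note that DiffAugment applies its translations differentiably to both real and generated batches inside the loss and pads with zeros, whereas this standalone function only illustrates keeping the image and its layout map consistent; `max_shift` and the use of `torch.roll` are our simplifications.
+
+ ```python
+ import random
+ import torch
+
+ def augment_pair(image, layout, p_translate=0.3, p_flip=0.5, max_shift=32):
+     """Apply the same random horizontal flip / translation to an image and
+     its object-layout map, both given as tensors of shape (C, H, W)."""
+     if random.random() < p_flip:
+         image, layout = torch.flip(image, dims=[2]), torch.flip(layout, dims=[2])
+     if random.random() < p_translate:
+         dy = random.randint(-max_shift, max_shift)
+         dx = random.randint(-max_shift, max_shift)
+         image = torch.roll(image, shifts=(dy, dx), dims=(1, 2))
+         layout = torch.roll(layout, shifts=(dy, dx), dims=(1, 2))
+     return image, layout
+ ```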
149
+
150
+ While our proposed model can produce plausible object locations, we notice that the arrangement of the objects currently lacks flexibility. For example, supplying an object layout $L$ with only one or two objects is less likely to result in realistic decorated scenes. This is probably due to the fact that the training dataset only contains fully decorated rooms, and therefore the generator is not trained to produce partially decorated rooms. Likewise, our model also tends to perform only fairly on object arrangements that rarely occur in the training dataset.
151
+
152
+ Additionally, we found that multiple object instances in an image are occasionally labelled by Structured3D with the same object ID, e.g., paintings and curtains. This explains why a single `picture` object label can result in two (or more) generated paintings. Reflections and highlights caused by foreground objects (e.g., lights) are also present in empty scene images, which could hinder the ability of our approach when generalizing to real-life empty scene images that are not lit up.
2108.02388/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2108.02388/paper_text/intro_method.md ADDED
@@ -0,0 +1,190 @@
1
+ # Introduction
2
+
3
+ As noted in the introduction, the "`acmart`" document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a "camera-ready" journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate *template style* and *template parameters*.
4
+
5
+ This document will explain the major features of the document class. For further information, the *LaTeX User's Guide* is available from <https://www.acm.org/publications/proceedings-template>.
6
+
7
+ The primary parameter given to the "`acmart`" document class is the *template style* which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the `documentclass` command:
8
+
9
+ \documentclass[STYLE]{acmart}
10
+
11
+ Journals use one of three template styles. All but three ACM journals use the `acmsmall` template style:
12
+
13
+ - `acmsmall`: The default journal template style.
14
+
15
+ - `acmlarge`: Used by JOCCH and TAP.
16
+
17
+ - `acmtog`: Used by TOG.
18
+
19
+ The majority of conference proceedings documentation will use the `acmconf` template style.
20
+
21
+ - `acmconf`: The default proceedings template style.
22
+
23
+ - `sigchi`: Used for SIGCHI conference articles.
24
+
25
+ - `sigchi-a`: Used for SIGCHI "Extended Abstract" articles.
26
+
27
+ - `sigplan`: Used for SIGPLAN conference articles.
28
+
29
+ In addition to specifying the *template style* to be used in formatting your work, there are a number of *template parameters* which modify some part of the applied template style. A complete list of these parameters can be found in the *LaTeX User's Guide.*
30
+
31
+ Frequently-used parameters, or combinations of parameters, include:
32
+
33
+ - `anonymous,review`: Suitable for a "double-blind" conference submission. Anonymizes the work and includes line numbers. Use with the `\acmSubmissionID` command to print the submission's unique ID on each page of the work.
34
+
35
+ - `authorversion`: Produces a version of the work suitable for posting by the author.
36
+
37
+ - `screen`: Produces colored hyperlinks.
38
+
39
+ This document uses the following string as the first command in the source file:
40
+
41
+ \documentclass[sigconf,authordraft]{acmart}
42
+
43
+ Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the `\vspace` command to manually adjust the vertical spacing between elements of your work --- is not allowed.
44
+
45
+ **Your document will be returned to you for revision if modifications are discovered.**
46
+
47
+ The "`acmart`" document class requires the use of the "Libertine" typeface family. Your TeX installation should include this set of packages. Please do not substitute other typefaces. The "`lmodern`" and "`ltimes`" packages should not be used, as they will override the built-in typeface families.
48
+
49
+ The title of your work should use capital letters appropriately - <https://capitalizemytitle.com/> has useful rules for capitalization. Use the `title` command to define the title of your work. If your work has a subtitle, define it with the `subtitle` command. Do not insert line breaks in your title.
50
+
51
+ If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The `title` command has a "short title" parameter:
52
+
53
+ \title[short title]{full title}
54
+
55
+ Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible.
56
+
57
+ Grouping authors' names or e-mail addresses, or providing an "e-mail alias," as shown below, is not acceptable:
58
+
59
+ \author{Brooke Aster, David Mehldau}
60
+ \email{dave,judy,steve@university.edu}
61
+ \email{firstname.lastname@phillips.org}
62
+
63
+ The `authornote` and `authornotemark` commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work.
64
+
65
+ If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last `\author{}` definition:
66
+
67
+ \renewcommand{\shortauthors}{McCartney, et al.}
68
+
69
+ Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers.
70
+
71
+ The article template's documentation, available at <https://www.acm.org/publications/proceedings-template>, has a complete explanation of these commands and tips for their effective use.
72
+
73
+ Note that authors' addresses are mandatory for journal articles.
74
+
75
+ Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement.
76
+
77
+ Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains LaTeX commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document:
78
+
79
+ - the "ACM Reference Format" text on the first page.
80
+
81
+ - the "rights management" text on the first page.
82
+
83
+ - the conference information in the page header(s).
84
+
85
+ Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works.
86
+
87
+ The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts).
88
+
89
+ Two elements of the "acmart" document class provide powerful taxonomic tools for you to help readers find your work in an online search.
90
+
91
+ The ACM Computing Classification System --- <https://www.acm.org/publications/class-2012> --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via <https://dl.acm.org/ccs/ccs.cfm>, and generate the commands to be included in the LaTeX source.
92
+
93
+ User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented.
94
+
95
+ CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts).
96
+
97
+ Your work should use standard LaTeX sectioning commands: `section`, `subsection`, `subsubsection`, and `paragraph`. They should be numbered; do not remove the numbering from the commands.
98
+
99
+ Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is **not allowed.**
100
+
101
+ The "`acmart`" document class includes the "`booktabs`" package --- <https://ctan.org/pkg/booktabs> --- for preparing high-quality tables.
102
+
103
+ Table captions are placed *above* the table.
104
+
105
+ Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper "floating" placement of tables, use the environment **table** to enclose the table's contents and the table caption. The contents of the table itself must go in the **tabular** environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on **tabular** material are found in the *LaTeX User's Guide*.
106
+
107
+ Immediately following this sentence is the point at which Table [1](#tab:freq){reference-type="ref" reference="tab:freq"} is included in the input file; compare the placement of the table here with the table in the printed output of this document.
108
+
109
+ ::: {#tab:freq}
110
+ Non-English or Math Frequency Comments
111
+ --------------------- ------------- -------------------
112
+ Ø 1 in 1,000 For Swedish names
113
+ $\pi$ 1 in 5 Common in math
114
+ \$ 4 in 5 Used in business
115
+ $\Psi^2_1$ 1 in 40,000 Unexplained usage
116
+
117
+ : Frequency of Special Characters
118
+ :::
119
+
120
+ To set a wider table, which takes up the whole width of the page's live area, use the environment **table\*** to enclose the table's contents and the table caption. As with a single-column table, this wide table will "float" to a location deemed more desirable. Immediately following this sentence is the point at which Table [\[tab:commands\]](#tab:commands){reference-type="ref" reference="tab:commands"} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document.
121
+
122
+ ::: table*
123
+ Command A Number Comments
124
+ ---------------- ---------- ------------------
125
+ `\author` 100 Author
126
+ `\table` 300 For tables
127
+ `\table*` 400 For wider tables
128
+ :::
129
+
130
+ Always use midrule to separate table header rows from data rows, and use it only for this purpose. This enables assistive technologies to recognise table headers and support their users in navigating tables more easily.
131
+
132
+ You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections.
133
+
134
+ A formula that appears in the running text is called an inline or in-text formula. It is produced by the **math** environment, which can be invoked with the usual `\begin …\end` construction or with the short form `$ …$`. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in LaTeX [@Lamport:LaTeX]; this section will simply show a few examples of in-text equations in context. Notice how this equation: $ \lim_{n\rightarrow \infty}x=0$, set here in in-line math style, looks slightly different when set in display style. (See next section).
135
+
136
+ A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the **equation** environment. An unnumbered display equation is produced by the **displaymath** environment.
137
+
138
+ Again, in either environment, you can use any of the symbols and structures available in LaTeX; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: $$\begin{equation}
139
+ \lim_{n\rightarrow \infty}x=0
140
+ \end{equation}$$ Notice how it is formatted somewhat differently in the **displaymath** environment. Now, we'll enter an unnumbered equation: $$\sum_{i=0}^{\infty} x + 1$$ and follow it with another numbered equation: $$\begin{equation}
141
+ \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
142
+ \end{equation}$$ just to demonstrate LaTeX's able handling of numbering.
143
+
144
+ The "`figure`" environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below.
145
+
146
+ <figure data-latex-placement="h">
147
+ <img src="sample-franklin" />
148
+ <figcaption>1907 Franklin Model D roadster. Photograph by Harris &amp; Ewing, Inc. [Public domain], via Wikimedia Commons. (<a href="https://goo.gl/VLCRBB" class="uri">https://goo.gl/VLCRBB</a>).</figcaption>
149
+ </figure>
150
+
151
+ Your figures should contain a caption which describes the figure to the reader.
152
+
153
+ Figure captions are placed *below* the figure.
154
+
155
+ Every figure should also have a figure description unless it is purely decorative. These descriptions convey what's in the image to someone who cannot see it. They are also used by search engine crawlers for indexing images, and when images cannot be loaded.
156
+
157
+ A figure description must be unformatted plain text less than 2000 characters long (including spaces). **Figure descriptions should not repeat the figure caption -- their purpose is to capture important information that is not already provided in the caption or the main text of the paper.** For figures that convey important and complex new information, a short text description may not be adequate. More complex alternative descriptions can be placed in an appendix and referenced in a short figure description. For example, provide a data table capturing the information in a bar chart, or a structured list representing a graph. For additional information regarding how best to write figure descriptions and why doing this is so important, please see <https://www.acm.org/publications/taps/describing-figures/>.
158
+
159
+ A "teaser figure" is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the `\maketitle` command:
160
+
161
+ \begin{teaserfigure}
162
+ \includegraphics[width=\textwidth]{sampleteaser}
163
+ \caption{figure caption}
164
+ \Description{figure description}
165
+ \end{teaserfigure}
166
+
167
+ The use of BibTeX for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names ("Donald E. Knuth") not initials ("D. E. Knuth") --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc.
168
+
169
+ The bibliography is included in your source document with these two commands, placed just before the `\end{document}` command:
170
+
171
+ \bibliographystyle{ACM-Reference-Format}
172
+ \bibliography{bibfile}
173
+
174
+ where "`bibfile`" is the name, without the "`.bib`" suffix, of the BibTeX file.
175
+
176
+ Citations and references are numbered by default. A small number of ACM publications have citations and references formatted in the "author year" style; for these exceptions, please include this command in the **preamble** (before the command "`\begin{document}`") of your LaTeX source:
177
+
178
+ \citestyle{acmauthoryear}
179
+
180
+ Some examples. A paginated journal article [@Abril07], an enumerated journal article [@Cohen07], a reference to an entire issue [@JCohen96], a monograph (whole book) [@Kosiur01], a monograph/whole book in a series (see 2a in spec. document) [@Harel79], a divisible-book such as an anthology or compilation [@Editor00] followed by the same example, however we only output the series if the volume number is given [@Editor00a] (so Editor00a's series should NOT be present since it has no vol. no.), a chapter in a divisible book [@Spector90], a chapter in a divisible book in a series [@Douglass98], a multi-volume work as book [@Knuth97], a couple of articles in a proceedings (of a conference, symposium, workshop for example) (paginated proceedings article) [@Andler79; @Hagerup1993], a proceedings article with all possible elements [@Smith10], an example of an enumerated proceedings article [@VanGundy07], an informally published work [@Harel78], a couple of preprints [@Bornmann2019; @AnzarootPBM14], a doctoral dissertation [@Clarkson85], a master's thesis: [@anisi03], an online document / world wide web resource [@Thornburg01; @Ablamowicz07; @Poker06], a video game (Case 1) [@Obama08] and (Case 2) [@Novak03] and [@Lee05] and (Case 3) a patent [@JoeScientist001], work accepted for publication [@rous08], 'YYYYb'-test for prolific author [@SaeediMEJ10] and [@SaeediJETC10]. Other cites might contain 'duplicate' DOI and URLs (some SIAM articles) [@Kirschmer:2010:AEI:1958016.1958018]. Boris / Barbara Beeton: multi-volume works as books [@MR781536] and [@MR781537]. A couple of citations with DOIs: [@2004:ITE:1009386.1010128; @Kirschmer:2010:AEI:1958016.1958018]. Online citations: [@TUGInstmem; @Thornburg01; @CTANacmart]. Artifacts: [@R] and [@UMassCitations].
181
+
182
+ # Method
183
+
184
+ Lorem ipsum dolor sit amet, consectetur adipiscing elit. Morbi malesuada, quam in pulvinar varius, metus nunc fermentum urna, id sollicitudin purus odio sit amet enim. Aliquam ullamcorper eu ipsum vel mollis. Curabitur quis dictum nisl. Phasellus vel semper risus, et lacinia dolor. Integer ultricies commodo sem nec semper.
185
+
186
+ Etiam commodo feugiat nisl pulvinar pellentesque. Etiam auctor sodales ligula, non varius nibh pulvinar semper. Suspendisse nec lectus non ipsum convallis congue hendrerit vitae sapien. Donec at laoreet eros. Vivamus non purus placerat, scelerisque diam eu, cursus ante. Etiam aliquam tortor auctor efficitur mattis.
187
+
188
+ Nam id fermentum dui. Suspendisse sagittis tortor a nulla mollis, in pulvinar ex pretium. Sed interdum orci quis metus euismod, et sagittis enim maximus. Vestibulum gravida massa ut felis suscipit congue. Quisque mattis elit a risus ultrices commodo venenatis eget dui. Etiam sagittis eleifend elementum.
189
+
190
+ Nam interdum magna at lectus dignissim, ac dignissim lorem rhoncus. Maecenas eu arcu ac neque placerat aliquam. Nunc pulvinar massa et mattis lacinia.
2108.10949/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2020-12-22T17:56:26.235Z" agent="5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36" etag="q99PGOB3OvviVezhOQjh" version="13.10.4" type="google"><diagram id="C5RBs43oDa-KdzZeNtuy" name="Page-1">7Vpdc9o4FP01zLQPYfyFgcdAIG1DNtmQbtp96ShY2Fpki8oiQH/9XtkyxsgBQvh4CDNMYl1fyfI9R7pHFyp2O5xdczQObpmHacUyvFnFvqpYVtNx4a80zFODa9qpwefES01mbuiTP1gZDWWdEA/HBUfBGBVkXDQOWBThgSjYEOdsWnQbMlp86hj5WDP0B4jq1ifiiSC1Nqx6bv+CiR9kTzbdZnonRJmzepM4QB6bFkx4JrosEmqK95iHKMKRgDu3iI8wr9Q6gRDyTS8rVhc+Q+ld9RnzKUZjElcHLATzIAaX7hCFhMowLw3UUgPB4+xOxW5zxkR6Fc7amEqoMhievs6faG/kXn/7O/6NvrduHv/65yKdaPctXRYR43IGuw4d9n9ZNxd3d7Xxv6Ov/b7rIudBdTFeEJ0oeK5IjMJn4k+QICzK8LSMT/BsIuYy7GM0AOfuZ4WCmGfQxlMSUohTxW7JuPbVHQPag4BQr4fmbCLfIBZoMMparYBx8gf8EYVbJhjgNheKuZYrRyOUthllHAwRSx6Qd+rLwdRjOI6h230WKXPFdItmBcceikU2QUYpGsfkOZmy7Bgi7pOoxYQASqROWwKhAHvBXODZEusVMNeYhVhwGcvsrquAmGekV+1pvkIWCzhYXh2GMiJFeX8xdk4FuFBseAMzLI0Z3ivMgGFWiQCvLRIcORvhFeBKsESU+BE0KR7KbjJuBDaMS2UWbCwHA9qRyO8lPldObnlQ4ZAmBn2HNNkUAuJ5OJJQM4EEel5QccyI2iFqLfhAANtGtVapwcTb0DbzNnykOxdtFsG7IJKgjoE0UyyJU2C5o5Nj7ZLbTA5FBsvdjguZ396pYGpUSHnAJrFMHkTQdIPwqxW5D7u/J3JLbN1BmolR3tb3i49Ck0Mxo2admBm2xoxFlpCbQ3xGfM+I1xtbIm69H/Ho0b9+mrq/OtFNd/TUagwvjFY24yVIsQfyTjUhIgHzWYRoJ7dCnCeRhz0V5dynxyR+Sa79DwsxVxkfTQSTCV6EmR7AMyJ+LF3/lEMBEmnrKsvpSWNeSNVyciuycztIYjbhA7yZ+qBUfLxuPLscT44ppNGX4uT2vj5r2vrsLK/Ps4A7lICzmiUCzikTcPahBFxdw/6s13bRa7WtubBGr5VBf7Cs7L6G/EapBs8zHvEMZnRWbdvm8Dfzo0y1HZUfDY0fHA8xvOYAn4XbgUAvE25HBd3USz0D2A0lrLJshsIxPcO+d9hNwzkx7s2TCPYIpv4jV+my+TNT6bKRa/akVRTt28qzvWn0Wulh4R2iPel6yTmaLzkouuYj30tDriHt5koRsNlcRn+jv2Uba/0XvNvVv77Bv7HWHy7SiORcXoT2HduaXqd8WElm2gI4n3J2POU4Kwjbpi51zbLdLct++89qegXq+0Pvw6axXc43ixX0ngNOKeqH0zKOhvpwknRL9cwHRf9ggJecWI4MuHsSFbND2XGhfC6MqmE6BfFTNevNDQIoad1jTiBumB9GFTW2VEUZT04si5wVGWI3zLUyRPO3jyFD9Gpbe/V0ddYhh6q21gy92uq4ZdXW7Di2/x2qqRFAoa8V2x4DJO2wZuTPSAwx4RHsUjC+/E4d9iCYnXwVjEc0+blFAAnEMpJv3G/u+peZ5+rXq+e0tyHt1bem2es6p5RVh0t7eqVu8f1qKK8SUgACUc6zT05ThrBtVS5hqob7+VzA3TsxSvTQcYlh6cW81RIu/PvIJ6GDYV9SyD0y9s5JtHBe0bPr9rKslUJ3YTihrM0K3Jt1bX3fuvYVPKGZ/zo0VZr5L3rtzv8=</diagram></mxfile>
2108.10949/main_diagram/main_diagram.pdf ADDED
Binary file (12.7 kB). View file
 
2108.10949/paper_text/intro_method.md ADDED
@@ -0,0 +1,126 @@
1
+ # Introduction
2
+
3
+ The task of entity linking (EL) refers to finding named entity mentions in unstructured documents and matching them with the corresponding entries in a structured knowledge graph [\(Milne and Witten,](#page-9-0) [2008;](#page-9-0) [Oliveira et al.,](#page-9-1) [2021\)](#page-9-1). This matching is usually done using the surface form of an entity, which is a text label assigned to an entity in the knowledge graph [\(van Hulst et al.,](#page-9-2) [2020\)](#page-9-2). Some mentions may have several possible matches: for example, "Michael Jordan" may refer either to a well-known scientist or the basketball player, since they share the same surface form. Such mentions are ambiguous and require an additional step of entity disambiguation (ED), which is conditioned on the context in which the mentions appear in the text, to be linked correctly. Following [van Erp and Groth](#page-8-0) [\(2020\)](#page-8-0) we refer to a set of entities that share the same surface form as an *entity space*.
6
+
7
+ <span id="page-0-0"></span>![](_page_0_Figure_9.jpeg)
8
+
9
+ ![](_page_0_Figure_10.jpeg)
10
+
11
+ (b) Even with more relevant context, overshadowing persists.
12
+
13
+ Figure 1: An example of entity overshadowing. The correct entity is ranked lower by the EL systems (indicated in blue) than the more common one.
14
+
15
+ To decide which of the possible matches is the correct one, an ED algorithm typically relies on: (1) contextual similarity, which is derived from the document in which the mention appears, indicating the *relatedness* of the candidate entity to the document content, and (2) entity importance, which is the prior probability of encountering the candidate entity irrespective of the document content, indicating its *commonness* [\(Milne and Witten,](#page-9-0) [2008;](#page-9-0) [Ferragina and Scaiella,](#page-8-1) [2012;](#page-8-1) [van Hulst et al.,](#page-9-2) [2020\)](#page-9-2).
16
+
17
+ The standard datasets currently used for training and evaluating ED models, such as AIDA-CoNLL [\(Hoffart et al.,](#page-9-3) [2011\)](#page-9-3) and WikiDisamb30 [\(Ferragina and Scaiella,](#page-8-1) [2012\)](#page-8-1), are collected by randomly sampling from common data sources, such as news articles and tweets. Therefore, they are expected to mirror the probability distribution with which the entities occur, thereby favouring more frequent entities (head entities) [\(Ilievski et al.,](#page-9-4) [2018\)](#page-9-4). From these considerations, we conjecture that the performance of existing EL algorithms on the ED task is overestimated. We set out to explore this effect in more detail by introducing a new dataset for ED evaluation, in which the entity distribution differs from the one typically used for training ED algorithms.
20
+
21
+ We perform a systematic study focusing on a particular phenomenon we refer to as *entity overshadowing*. Specifically, we define an entity e<sup>1</sup> as overshadowing an entity e<sup>2</sup> if two conditions are met: (1) e<sup>1</sup> and e<sup>2</sup> belong to the same entity space S, i.e., share the same surface form and, therefore, can be confused with each other outside of the local context; (2) e<sup>1</sup> is more common than e<sup>2</sup> in some corresponding background corpus (e.g. the Web), i.e., it has a higher prior probability P(e1) > P(e2).
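+
+ Stated as code, the definition amounts to a simple predicate over a shared surface form and a commonness (prior) estimate; the identifiers and counts below are made up purely for illustration.
+
+ ```python
+ def overshadows(e1, e2, surface_form, commonness):
+     """e1 overshadows e2 iff they share a surface form (same entity space)
+     and e1 has the higher prior probability / commonness."""
+     return surface_form[e1] == surface_form[e2] and commonness[e1] > commonness[e2]
+
+ surface_form = {"MJ_player": "Michael Jordan", "MJ_scientist": "Michael Jordan"}
+ commonness = {"MJ_player": 4200, "MJ_scientist": 310}   # illustrative counts only
+ assert overshadows("MJ_player", "MJ_scientist", surface_form, commonness)
+ ```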
22
+
23
+ For example, e<sup>1</sup> = "Michael Jordan" (basketball player) overshadows e<sup>2</sup> = "Michael Jordan" (scientist) because P(e1) > P(e2) in a typical dataset sampled from the Web. We use an unambiguous text sample that contains this mention to evaluate three popular state-of-the-art EL systems, GENRE [\(De Cao et al.,](#page-8-2) [2020\)](#page-8-2), REL [\(van Hulst](#page-9-2) [et al.,](#page-9-2) [2020\)](#page-9-2), and WAT [\(Piccinno and Ferragina,](#page-9-5) [2014\)](#page-9-5), and empirically verify that the overshadowing effect that we hypothesized, indeed, takes place (see Fig. [1a\)](#page-0-0). Even when more information is added to the local context, including the directly related entities that were correctly recognised by the system ("machine learning"), the ED components still fail to recognise the overshadowed entity (see Fig. [1b\)](#page-0-0).
24
+
25
+ The concept of overshadowed entities introduced in this paper is related to long-tail entities [\(Ilievski](#page-9-4) [et al.,](#page-9-4) [2018\)](#page-9-4). However, these two concepts are distinct: a long-tail entity may be unambiguous and therefore not overshadowed, while an overshadowed entity may still be too popular to be considered a long-tail one.
26
+
27
+ To systematically evaluate the phenomenon of entity overshadowing that we have identified, we introduce a new dataset, called ShadowLink.[1](#page-1-0) ShadowLink contains groups of entities that belong to the same entity space. Following [van Erp and Groth](#page-8-0) [\(2020\)](#page-8-0), we use Wikipedia disambiguation pages to collect entity spaces. Disambiguation pages group entities that often share the same surface form and may be confused with each other. We then follow the links in the Wikipedia disambiguation pages to the individual (entity) Wikipedia pages to extract text snippets in which each of the ambiguous entities occur.
30
+
31
+ Note that we do not extract the text from these Wikipedia pages directly, since pre-trained language models such as BERT (typically used in state-of-the-art ED systems) also use Wikipedia as a training corpus, and can learn certain biases as well. Instead, we parse external web pages that are often linked at the end of a Wikipedia page as references. This data collection approach helps us to minimise the possible overlap between the test and training corpus.
32
+
33
+ Thereby, every entity in ShadowLink is annotated with a link to at least one web page in which the entity is mentioned. We then proceed to extract all text snippets in which the corresponding entity mention appears on the page. An extracted text snippet typically consists of the sentence in which the mention occurs.
34
+
35
+ Next, we use ShadowLink to answer the following research questions:
36
+
37
+ RQ1: How well can existing ED systems recognise overshadowed entities?
38
+
39
+ RQ2: How does performance on overshadowed entities compare to long-tail entities?
40
+
41
+ RQ3: Are ED predictions biased and how can we measure this bias?
42
+
43
+ Our contribution is twofold: (1) a new dataset for evaluating entity disambiguation performance of EL systems specifically focused on overshadowed entities, and (2) an evaluation of current state-of-the-art algorithms on this dataset, which empirically demonstrates that we correctly identified the type of samples that remain challenging and provide an important direction for future work.
44
+
45
+ This section describes the ShadowLink dataset: its construction process, structure, and statistics.
46
+
47
+ The process of dataset construction consists of 3 steps: (1) collecting entities, (2) retrieving context examples for each entity, and (3) filtering the data based on the validity requirements detailed below.
48
+
49
+ Collecting entities. Similar to [van Erp and Groth](#page-8-0) [\(2020\)](#page-8-0), we use Wikipedia disambiguation pages to represent entity spaces.
50
+
51
+ <span id="page-1-0"></span><sup>1</sup> ShadowLink dataset can be downloaded at [https://huggingface.co/datasets/vera-pro/ShadowLink](https://huggingface.co/datasets/vera-pro/ShadowLink)
52
+
53
+ <span id="page-2-1"></span>![](_page_2_Figure_0.jpeg)
54
+
55
+ Figure 2: Structure of the ShadowLink dataset
56
+
57
+ We retrieve a set of all Wikipedia disambiguation pages and filter it on the following criteria:
58
+
59
+ - (1) For each disambiguation page (DP), we only include candidate entity pages with names containing the title of the DP as a substring. This step is required to exclude synonyms and redirects.
60
+ - (2) If at least two candidate pages for the same DP match the criterion described above, then the DP and all its matching candidates are included as a new entity space.
61
+
62
+ During the first stage of the data collection, 170K out of 316.5K Wikipedia disambiguation pages matched the filtering criteria described above.
63
+
64
+ Filtering pages by year. To make sure that all pre-trained EL systems we evaluate in our experiments can potentially recognise all of the entities in the dataset, we also exclude pages that are more recent than the Wikipedia dumps used by these systems during training. The oldest dump used by a system in our experiments was the 2016 Wikipedia dump over which TagMe was trained, i.e., we excluded all the pages that were created after 2016.
65
+
66
+ Collecting context examples. To retrieve context examples for each entity, we follow the external links extracted from the references section of the corresponding Wikipedia page and parse them to extract the text snippets which contain the entity mention. Then, every target entity mention is replaced with its corresponding entity space name, yielding an ambiguous entity mention. For example, if we have entities "John Smith" and "Paul Smith" that both belong to the entity space "Smith", then the mentions of both names will be replaced with "Smith". Looking for an entity name and replacing it with the corresponding entity space name (instead of looking for the entity space name in the first place) allowed us to make sure that the text snippets refer to the correct entity. Using this method, however, significantly reduced the number of retrieved snippets, as many of the entity mentions in natural texts do not include the full titles of the entities.
69
+
70
+ To extract the text snippets, we used a simple greedy algorithm that starts with the mention boundaries and tries to include more text, expanding the boundaries to the left and to the right, until it either covers one sentence on each side, or reaches the end (or beginning) of the document text. Our decision was to use relatively short spans similar to other popular ED benchmarks: WikiDisamb30 [\(Ferragina and Scaiella,](#page-8-1) [2012\)](#page-8-1) and KORE50 [\(Hof](#page-9-3)[fart et al.,](#page-9-3) [2011\)](#page-9-3). Our manual evaluation confirmed that these spans provide sufficient context for entity disambiguation. We also release the full-text of all web pages as part of our dataset, making the context of different lengths available for future experiments.
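+
+ One simple reading of this expansion rule, stopping at the nearest sentence boundary (or the document boundary) on each side of the mention, is sketched below; the exact boundary handling in the original pipeline may differ.
+
+ ```python
+ def extract_snippet(text, start, end, terminators=".!?"):
+     """Greedily expand the mention span [start, end) to the left and right
+     until a sentence terminator or a document boundary is reached."""
+     left = start
+     while left > 0 and text[left - 1] not in terminators:
+         left -= 1
+     right = end
+     while right < len(text) and text[right] not in terminators:
+         right += 1
+     if right < len(text):
+         right += 1                      # keep the closing punctuation
+     return text[left:right].strip()
+
+ doc = "It was founded in 1998. Smith joined the lab a year later. He left in 2005."
+ start = doc.index("Smith")
+ print(extract_snippet(doc, start, start + len("Smith")))   # -> "Smith joined the lab a year later."
+ ```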
71
+
72
+ Commonness score. We estimate the commonness (popularity) of an entity as the number of links pointing to the entity page from other Wikipedia pages, that is, the in-degree of the entity page in the web graph of Wikipedia hyperlinks. Intuitively, this is proportional to the probability of encountering this entity when sampling a page at random. To obtain this metric for all the entities in the dataset, we use the Backlinks MediaWiki API[2](#page-2-0) .
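+
+ As a sketch of how such a count can be obtained, the snippet below pages through the Backlinks endpoint of the MediaWiki API; whether redirects or particular namespaces are filtered is not specified in the text, so no filtering is applied here.
+
+ ```python
+ import requests
+
+ API = "https://en.wikipedia.org/w/api.php"
+
+ def count_backlinks(title):
+     """Count pages linking to `title`, following API continuation tokens."""
+     params = {"action": "query", "list": "backlinks", "bltitle": title,
+               "bllimit": "max", "format": "json"}
+     total = 0
+     while True:
+         data = requests.get(API, params=params, timeout=30).json()
+         total += len(data["query"]["backlinks"])
+         if "continue" not in data:
+             return total
+         params.update(data["continue"])
+ ```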
73
+
74
+ Quality assurance. We conduct manual evaluation to assess the quality of the dataset and provide the upper bound performance for the ED task. The details of the setup and the results are discussed in Section [3.](#page-3-0)
75
+
76
+ The ShadowLink dataset consists of 4 subsets: *Top*, *Shadow*, *Neutral* and *Tail*. The Top, Shadow and Neutral subsets are linked to each other through the shared entity spaces. On the other hand, the Tail subset, which contains (typically unambiguous) long-tail entities, is not connected to the other three through the same entity spaces. Nevertheless, it is collected in a similar way as the other three subsets.
77
+
78
+ Top and Shadow subsets. The structure of the Top and Shadow subsets is shown in Figure [2.](#page-2-1) Every entity e belongs to an entity space Sm, derived from the Wikipedia disambiguation pages, where m is an ambiguous mention that may refer to any of the entities in Sm.
79
+
80
+ <span id="page-2-0"></span><sup>2</sup>[https://www.mediawiki.org/wiki/API:Backlinks](https://www.mediawiki.org/wiki/API:Backlinks)
81
+
82
+ Every S<sup>m</sup> contains at least two entities: one etop and one or more eshadow entities. Every entity e ∈ S<sup>m</sup> is annotated with a link to the corresponding Wikipedia page and provided with context examples. A context example is a text snippet extracted from one of the external pages which contains the mention m, with a length of 25 words on average.
83
+
84
+ Neutral subset. To quantify the strength of the prior of each ED system, we synthetically generate data points for which the context around an entity mention is not useful for disambiguating that mention. To do that we use 7 hand-crafted templates. An example of such a template is the following: "It was the scarcity that fueled our creativity. This reminded me of m today." For each entity space, we generated 7 random contexts.
85
+
86
+ Tail subset. To evaluate the performance of ED systems on long-tail but typically not overshadowed entities, we collect an additional set of entities by randomly sampling Wikipedia pages that have a low commonness score (<= 56 backlinks)[3](#page-3-1) .
87
+
88
+ Context examples for these pages were collected in the same manner as described above. The resulting dataset matches the size and structure of other ShadowLink subsets, containing 904 entities.
89
+
90
+ The sampling process used to collect this subset follows the existing definition of long-tail entities [\(Ilievski et al.,](#page-9-4) [2018\)](#page-9-4), and is controlled for popularity but not for ambiguity. The Tail subset serves as a control group for the experiments conducted in our study, showing that the concept of entity overshadowing differs from the previously studied long-tail entity phenomena.
91
+
92
+ ShadowLink statistics. The dataset statistics across all the subsets are summarised in Table [1.](#page-4-0) Note that the *Top*, *Shadow* and *Neutral* subsets are grouped around the same entity spaces, while the *Tail* subset is constructed by sampling the same number of non-ambiguous entities. Every entity space contains at least 2 entities, with the mean number of entities per space being 2.63, median 2, and maximum 10. Figure [3](#page-3-2) shows the distribution of commonness in the three subsets: Top, Shadow and Tail.
93
+
94
+ For the experiments we used a smaller subset of ShadowLink, with only one randomly selected shadow entity per entity space and one text snippet per entity.
95
+
96
+ <span id="page-3-2"></span>![](_page_3_Figure_8.jpeg)
97
+
98
+ Figure 3: Distribution of the commonness score on the three subsets of ShadowLink.
99
+
100
+ Thus, every subset contained 904 entities, with the total size of 9K text snippets. The rest of the data is left out as a training set and can be used in future experiments.
101
+
102
+ We perform manual evaluation of a random sample from ShadowLink to assess its quality, with the goal of ensuring that the extracted text snippets provide context sufficient for disambiguation. Human performance also sets the skyline for automated approaches on this dataset. In the following subsections, we describe the evaluation setup and the results of the manual evaluation.
103
+
104
+ We conduct a manual evaluation to assess the quality of the dataset and evaluate how well human annotators can disambiguate overshadowed entities. A sample of 91 randomly selected dataset entries was presented to two annotators, who examined the entries independently. For each entry, the annotators were presented with a text snippet containing an ambiguous entity mention m, and two entities, *Top* and *Shadow*, from the same entity space Sm, where one of the two entities was the correct answer. The annotators were instructed to either indicate the correct entity or mark the text snippet as ambiguous, which indicates that the provided context is not sufficient for the disambiguation decision to be made. Note, however, that the commonness scores were not displayed to the annotators.
105
+
106
+ <span id="page-3-1"></span><sup>3</sup>This threshold is equal to the median number of backlinks in the *Shadow* subset.
107
+
108
+ <span id="page-4-0"></span>
109
+
110
+ | Subset | # Entity Spaces | # Entities | # Text Snippets | Avg. # Words | Avg. # Sentences |
111
+ |---------|-----------------|------------|-----------------|--------------|------------------|
112
+ | Top | 904 | 904 | 2K | 29.25 | 1.11 |
113
+ | Shadow | 904 | 1.5K | 6K | 28.97 | 1.11 |
114
+ | Neutral | 904 | - | 6K | 14.83 | 1.87 |
115
+ | Tail | - | 904 | 2K | 28.94 | 1.10 |
116
+
117
+ <span id="page-4-1"></span>Table 1: Dataset statistics across all the subsets of ShadowLink. The average number of words and sentences were calculated per text snippet extracted from the corresponding web page.
118
+
119
+ | | Shadow | Top |
120
+ |-------------|-----------|-----------|
121
+ | | P = R = F | P = R = F |
122
+ | Annotator 1 | 0.973 | 0.973 |
123
+ | Annotator 2 | 0.950 | 0.919 |
124
+ | Average | 0.963 | 0.946 |
125
+
126
+ Table 2: Results of the manual annotations.
2108.13393/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2108.13393/paper_text/intro_method.md ADDED
@@ -0,0 +1,138 @@
1
+ # Introduction
2
+
3
+ Semantic segmentation is a fundamental task, where each pixel of an image is labeled with one of a predefined set of classes. In the field of computer vision, it has made great progress in many applications, such as autonomous driving, scene understanding, and medical diagnosis [@minaee2020image] [@2019A]. Recently, deep convolutional neural networks (CNNs) have achieved remarkable success in a variety of semantic segmentation tasks [@chen2017deeplab; @minaee2020image]. However, they require large amounts of pixel-level annotations for training. The acquisition process of pixel-level annotations is extremely time-consuming and labor-intensive.
4
+
5
+ <figure id="fig:head" data-latex-placement="t">
6
+ <div class="center">
7
+ <embed src="figures/head.pdf" style="width:90.0%" />
8
+ </div>
9
+ <figcaption>Weakly supervised segmentation with the click-level annotations. (a) A generic model trained only with click-level annotations overfits to the labels and cannot recognize the whole object. (b) Previous works (e.g. regularized loss) apply low-dimensional continuity information to the training, also failing to correctly segment the object. (c) With seminar learning, our teacher-student module enables the network to generalize to the whole object, as indicated by the arrows. Meanwhile, by integrating diverse information from the two networks, the student-student module can smooth the boundary area in the marked boxes.</figcaption>
10
+ </figure>
11
+
12
+ In order to alleviate the burden of annotations, weakly supervised semantic segmentation has become increasingly popular, as it only requires coarse annotations, such as box-level [@dai2015boxsup], image-level [@2015Weakly], scribble-level [@2016ScribbleSup], or click-level [@2016What] supervision. Among these, click-level supervision only annotates one pixel for each object in an image. Further, it not only provides valuable location information, but is also one of the cheapest forms of weak supervision [@2016What]. It has high research potential in terms of the trade-off between information and time costs.
13
+
14
+ As commonly known, it is challenging to achieve satisfactory performance with limited supervision information during model training. For instance, with click-level patterns, if the model only learns from one labeled pixel, it cannot infer the entire range of an object, especially the edges, which will eventually weaken the segmentation performance. An effective way to compensate for weak supervision information is to introduce more prior information. For example, 'What's the point' [@2016What] incorporates an objectness prior into network training, which helps distinguish between foreground and background. 'ScribbleSup' [@2016ScribbleSup] uses an additional graphical model to propagate information from click-level annotations. 'Regularized Loss' [@tang2018regularized] designs a regularization term based on a dense conditional random field (CRF) for classifying nearby pixels with similar colors into the same category. These models only focus on low-dimensional continuity between labeled pixels and other pixels, which is limited to local annotation information in click-level supervision. Therefore, these models cannot properly segment the entire object and still underperform.
15
+
16
+ Considering the nature of click-level supervised semantic segmentation, we make two observations: 1) A large number of unlabeled pixels are not well used, but could provide broader information, which can expand the learning range of networks from a single annotated pixel to an entire object. 2) If a network is trained under different conditions, such as using different random seeds, the predictions will vary greatly. This uncertainty means that different networks capture distinctive and diverse information, which could be aggregated so that the networks complement each other.
17
+
18
+ Inspired by these observations, we propose *seminar learning*, a novel learning paradigm for click-level weakly supervised semantic segmentation that introduces more effective supervisory information. The essence of our seminar learning is to complement the deficiency of networks by leveraging the knowledge provided by the predictions of other networks. As shown in Fig. [1](#fig:head){reference-type="ref" reference="fig:head"}, the seminar learning framework consists of two components: a teacher-student module and a student-student module. Notably, the teacher-student module is exploited to expand the learning range of networks. We use an exponential moving average (EMA) based teacher network to produce generalized predictions and prevent the student network from overfitting to click-level labels, which has a similar workflow to the semi-supervised mean-teacher method [@2017Mean]. However, compared to mean-teacher, our module is able to operate on unlabeled pixels in each image instead of unlabeled images. The student-student module is applied to refine segmentation boundaries by aggregating the diverse information of the student networks. To improve the efficiency of information transfer, we propose heterogeneous pseudo-labels as bridges between student networks, which are based on the prediction of a fully trained student and guide the other. In summary, we make several major contributions as follows:
19
+
20
+ - We propose a novel learning paradigm, called seminar learning, that leverages additional supervisory information provided by a group of networks.
21
+
22
+ - We treat the click-level supervised semantic segmentation task as a semi-supervised pixel classification task per image, and propose a novel pixel consistency loss, which enables a student to learn from a teacher using unlabeled pixels.
23
+
24
+ - We propose the novel concept of heterogeneous pseudo-labels, which serve as a more effective medium for sharing supervisory information among diverse networks via the student-student module.
25
+
26
+ - We conduct extensive experiments to verify the effectiveness of the proposed seminar learning, which outperforms the previous state-of-the-art [@tang2018regularized] by a large margin (from 55.63% to 72.51% mIOU).
27
+
28
+ <figure id="fig:pipeline" data-latex-placement="ht">
29
+ <div class="center">
30
+ <embed src="figures/pipeline.pdf" style="width:90.0%" />
31
+ </div>
32
+ <figcaption>The pipeline of the proposed seminar learning method for click-level supervised semantic segmentation. It consists of a primary model and an ancillary model, which are trained progressively. </figcaption>
33
+ </figure>
34
+
35
+ # Method
36
+
37
+ In this section, we provide a detailed description of the proposed seminar learning for click-level supervised semantic segmentation. Our framework mainly consists of the teacher-student and student-student modules, which are used to transfer information among networks. The combination of the two modules resembles a real-world seminar, which inspired the name seminar learning. We first describe the overall process and then explain how it works.
38
+
39
+ An overview of our proposed approach is shown in Fig. [2](#fig:pipeline){reference-type="ref" reference="fig:pipeline"}. We train the ancillary model first, and then the primary model. For each model, we apply a teacher-student module.
40
+
41
+ Meanwhile, heterogeneous pseudo-labels generated by the ancillary student are used as an extra input to the primary model; this constitutes the student-student module. In this way, the primary model can integrate information from the ancillary model.
42
+
43
+ A unified CNN framework is used for training. We denote an input image by $X$, of size $W \times H$, with corresponding annotation $\hat{Y}$; $x$ and $\hat{y}$ denote pixels of $X$ and $\hat{Y}$; $N = W \times H$ is the total number of pixels in each image; and $n$ is the number of labeled pixels per image in our click-level supervised task. The network outputs a softmax score map $Y$ of size $W \times H \times C$, where $C$ is the number of label classes. At test time, each pixel is assigned the class with the maximum score, yielding a final prediction of size $W \times H$.
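+ As a concrete illustration of this test-time step, here is a minimal sketch (not from the paper; a PyTorch channel-first layout of $C \times H \times W$ is assumed for the score map, and the names are hypothetical):
+
+ ```python
+ import torch
+
+ def predict(score_map: torch.Tensor) -> torch.Tensor:
+     """Turn a softmax score map of shape (C, H, W) into a hard per-pixel
+     prediction of shape (H, W) by taking the arg-max class at each pixel."""
+     return score_map.argmax(dim=0)
+
+ # toy example: C = 21 classes, a 4 x 4 image
+ scores = torch.softmax(torch.randn(21, 4, 4), dim=0)
+ prediction = predict(scores)   # (4, 4) tensor of class indices
+ ```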
44
+
45
+ The training procedure can be described as follows:
46
+
47
+ **Training the ancillary model.** The ancillary model is constructed by the teacher-student module. In this module, we only need to train the student network. The teacher network is obtained by the exponential moving average (EMA) of the student network. At the training iteration $t$, the EMA process is defined as $$\begin{equation}
48
+ \label{eq:moveaverage}
49
+ \theta^{'}_t = \left\{
50
+ \begin{array}{lcl}
51
+ (1-\frac{1}{t}) \times \theta^{'}_{t-1} + \frac{1}{t} \times \theta_{t}, & & 1-\frac{1}{t} < \alpha \\
52
+ \alpha \theta^{'}_{t-1} + (1 - \alpha)\theta_{t}, & & \text{otherwise},
53
+ \end{array}
54
+ \right .
55
+ \end{equation}$$ where $\alpha$ is a smoothing coefficient hyperparameter, and $\theta^{'}$ and $\theta$ are the weights of the teacher and student, respectively. To update the weights of the teacher model quickly during the initial training iterations, we use the plain running average instead of the EMA when $1-\frac{1}{t} < \alpha$.
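+ A minimal sketch of this teacher update (a hypothetical helper, assuming PyTorch modules with identical architectures; only learnable parameters are shown, buffers would be handled analogously):
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def update_teacher(teacher, student, t: int, alpha: float = 0.99):
+     """EMA update of the teacher weights. For small t (1 - 1/t < alpha) the
+     effective coefficient is 1 - 1/t, i.e. a plain running average; afterwards
+     the exponential moving average with smoothing coefficient alpha is used."""
+     a = min(1.0 - 1.0 / t, alpha)
+     for p_teacher, p_student in zip(teacher.parameters(), student.parameters()):
+         p_teacher.mul_(a).add_(p_student, alpha=1.0 - a)
+ ```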
56
+
57
+ The ancillary student and ancillary teacher networks are randomly initialized with the same random seed. In each training iteration, we feed the training images to both the student and the teacher network, and three losses are evaluated. First, we train the student network on the click-level labels by minimizing the partial cross-entropy loss $L_{pCE}$ [@tang2018regularized], which is defined as $$\begin{equation}
58
+ \label{eq:ce}
59
+ L_{pCE} = - \frac{1}{n}\sum_{i \in n}\hat{y}^c_i\log(y^c_{i}),
60
+ \end{equation}$$ where $i \in n$ indicates that only labeled pixels participate in the loss, and $\hat{y}^c_{i} \in \{0, 1\}$ indicates whether pixel $i$ belongs to class $c$.
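+ A possible implementation of $L_{pCE}$ (a sketch, assuming the common but here hypothetical convention that unlabeled pixels carry an ignore index of 255 in the click map):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def partial_cross_entropy(logits: torch.Tensor, clicks: torch.Tensor,
+                           ignore_index: int = 255) -> torch.Tensor:
+     """L_pCE: cross-entropy averaged over the n annotated (clicked) pixels only.
+     logits: (B, C, H, W) raw network outputs;
+     clicks: (B, H, W) with the class index at labeled pixels, 255 elsewhere."""
+     return F.cross_entropy(logits, clicks, ignore_index=ignore_index)
+ ```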
61
+
62
+ To obtain the assistance of the ancillary teacher network, we apply a pixel consistency loss $L_{pCons}$, which is defined as $$\begin{equation}
63
+ \label{eq:consistencyloss_pixel}
64
+ L_{pCons} = \frac{1}{N}\sum_{i \in N}||f(x_i, \theta^{'}) - f(x_i, \theta)||^2,
65
+ \end{equation}$$ where $f(\cdot)$ is the softmax prediction of a network and no gradient is propagated through the teacher network.
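+ A corresponding sketch of $L_{pCons}$ (a hypothetical helper; the teacher prediction is detached so that no gradient flows into the teacher):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def pixel_consistency_loss(student_logits: torch.Tensor,
+                            teacher_logits: torch.Tensor) -> torch.Tensor:
+     """L_pCons: squared difference between the softmax predictions of the
+     student and the teacher, summed over classes and averaged over pixels."""
+     p_student = F.softmax(student_logits, dim=1)           # (B, C, H, W)
+     p_teacher = F.softmax(teacher_logits, dim=1).detach()  # no teacher gradient
+     return (p_student - p_teacher).pow(2).sum(dim=1).mean()
+ ```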
66
+
67
+ A regularized loss $L_{CRF}$ [@tang2018regularized] is also applied to smooth the segmentation; it is defined as $$\begin{equation}
68
+ L_{CRF} = \sum_{C}Y^{C'}W_{pq}(1-Y^C),
69
+ \end{equation}$$ where $W_{pq}$ is a dense Gaussian kernel that acts as a relaxation of the dense CRF [@krahenbuhl2013parameter], $Y^{C}$ is the softmax output for each class, and $Y^{C'}$ is the transpose of $Y^{C}$.
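+ Practical implementations of this term rely on fast Gaussian filtering [@krahenbuhl2013parameter]; the following is only a naive $O(N^2)$ sketch for a small single image, using a purely spatial Gaussian kernel (the kernel in [@tang2018regularized] also includes color terms, which are omitted here):
+
+ ```python
+ import torch
+
+ def spatial_gaussian_kernel(h: int, w: int, sigma: float = 3.0) -> torch.Tensor:
+     """Pairwise affinities W_pq = exp(-||p - q||^2 / (2 sigma^2)) between all
+     pixel positions of an h x w image (naive: O(N^2) memory, N = h * w)."""
+     ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
+     coords = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()  # (N, 2)
+     return torch.exp(-torch.cdist(coords, coords).pow(2) / (2 * sigma ** 2))
+
+ def crf_regularized_loss(softmax_scores: torch.Tensor,
+                          kernel: torch.Tensor) -> torch.Tensor:
+     """L_CRF = sum_c Y_c^T W (1 - Y_c) for one image.
+     softmax_scores: (C, H, W) softmax output; kernel: (N, N) affinities."""
+     c, h, w = softmax_scores.shape
+     y = softmax_scores.reshape(c, h * w)
+     return sum(y[k] @ kernel @ (1.0 - y[k]) for k in range(c))
+ ```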
70
+
71
+ After backpropagation of all the losses, the ancillary teacher network is updated by EMA. This process continues iteratively until the end of training. Overall, the ancillary student network is trained with the loss $L^*$, which is defined as $$\begin{equation}
72
+ \label{equ:L^*}
73
+ L^* = L_{pCE} + \lambda_{pCons}L_{pCons} + \lambda_{CRF}L_{CRF},
74
+ \end{equation}$$ where the $\lambda$ coefficients control the contribution of each loss term.
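+ Combining the pieces, one ancillary training iteration could look roughly as follows (a sketch that reuses the helper functions sketched above; the $\lambda$ values are placeholders, not the ones used in the paper):
+
+ ```python
+ import torch
+
+ def ancillary_train_step(student, teacher, optimizer, images, clicks, kernel,
+                          t, lambda_pcons=1.0, lambda_crf=1e-4):
+     """One iteration: minimize L* = L_pCE + lambda_pCons * L_pCons
+     + lambda_CRF * L_CRF, then update the teacher by EMA."""
+     student_logits = student(images)                 # (B, C, H, W)
+     with torch.no_grad():
+         teacher_logits = teacher(images)
+
+     probs = torch.softmax(student_logits, dim=1)
+     l_crf = sum(crf_regularized_loss(p, kernel) for p in probs) / probs.shape[0]
+
+     loss = (partial_cross_entropy(student_logits, clicks)
+             + lambda_pcons * pixel_consistency_loss(student_logits, teacher_logits)
+             + lambda_crf * l_crf)
+
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     update_teacher(teacher, student, t)
+     return float(loss)
+ ```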
75
+
76
+ **Training the primary model.** In the primary model, we also use the teacher-student module for training. In addition, we apply the student-student module, which connects the ancillary student network and the primary student network through heterogeneous pseudo-labels.
77
+
78
+ After the ancillary model is fully trained, the primary student and teacher networks are initialized in the same manner as their ancillary counterparts. During each training iteration, we train the primary student network in the same way as the ancillary student network, with its teacher updated through EMA. In addition, we feed the training images to the ancillary model and obtain its prediction maps. By taking the class with the maximal score at each pixel, we generate heterogeneous pseudo-labels that introduce the contribution of the ancillary student network into the training of the primary student network. To incorporate the information carried by the heterogeneous pseudo-labels, a new loss $L_{pseudo}$ is proposed. Since heterogeneous pseudo-labels are available for every pixel, the loss takes the form of a cross-entropy. $L_{pseudo}$ is defined as $$\begin{equation}
79
+ \label{eq:pseudo}
80
+ L_{pseudo}(\theta) = - \frac{1}{N}\sum_{i \in N}\tilde{y}^c_i\log(y^c_{i}),
81
+ \end{equation}$$ where $\tilde{y}$ denotes the heterogeneous pseudo-labels generated by the ancillary model $\theta_{anc}$.
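+ A sketch of this student-student supervision (hypothetical helpers; the ancillary student is kept frozen while generating the pseudo-labels):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ @torch.no_grad()
+ def heterogeneous_pseudo_labels(ancillary_student, images) -> torch.Tensor:
+     """Per-pixel arg-max classes predicted by the fully trained ancillary
+     student, used as heterogeneous pseudo-labels for the primary student."""
+     return ancillary_student(images).argmax(dim=1)     # (B, H, W)
+
+ def pseudo_label_loss(primary_logits: torch.Tensor,
+                       pseudo_labels: torch.Tensor) -> torch.Tensor:
+     """L_pseudo: cross-entropy against the pseudo-labels over all N pixels."""
+     return F.cross_entropy(primary_logits, pseudo_labels)
+ ```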
82
+
83
+ Therefore, the primary student network is trained with the overall loss $L$ in each iteration, which is defined as $$\begin{equation}
84
+ L = L^* + \lambda_{pseudo} L_{pseudo}.
85
+ \end{equation}$$
86
+
87
+ <figure id="fig:seminardescription" data-latex-placement="ht">
88
+ <div class="center">
89
+ <embed src="figures/description.pdf" style="width:85.0%" />
90
+ </div>
91
+ <figcaption>Visualization of the mechanism of seminar learning. The first three results are obtained at the tenth epoch of primary-model training. </figcaption>
92
+ </figure>
93
+
94
+ **Teacher-student.** A large number of unlabeled pixels are not well utilized in click-level supervised semantic segmentation, which is also the case in semi-supervised learning (SSL). We therefore regard click-level supervision as an SSL task in which some image pixels are labeled while the others are unlabeled. Mean-teacher [@2017Mean] is an effective SSL method that uses a teacher-student module to leverage unlabeled images. Inspired by this, we adapt the teacher-student module to our setting by operating on unlabeled pixels instead of unlabeled images.
95
+
96
+ In the teacher-student module, the teacher network is obtained as the EMA of the student network. The EMA network has been shown to be more effective than using the final network directly [@polyak1992acceleration]. EMA can be viewed as a temporal ensemble, which endows the teacher with strong generalization ability. Thus, the teacher network avoids overfitting to the click-level labels and guides the student network to learn the full object. In addition, its ability to reduce the bias of the targets yields a smoother classification boundary [@2017Mean]. Since the object boundary can be viewed as a classification boundary [@french2020semi], the EMA network can also predict a smoother and more accurate mask.
97
+
98
+ To impose a consistency constraint between the teacher and student networks, we propose a pixel consistency loss $L_{pCons}$ in the form of a mean squared error (MSE). Our pixel consistency loss only measures unlabeled pixels and is defined as: $$\begin{equation}
99
+ \label{eq:consistencyloss_pixel_former}
100
+ \begin{aligned}
101
+ L_{pCons} = \frac{1}{N-n}(\sum_{i \in N}||f(x_i, \theta^{'}) - f(x_i, \theta)||^2 - \\ \sum_{i \in n}||f(x_i, \theta^{'}) - f(x_i, \theta)||^2).
102
+ \end{aligned}
103
+ \end{equation}$$ Because $n \ll N$, we ultimately use the approximate form of $L_{pCons}$ defined in Eq. [\[eq:consistencyloss_pixel\]](#eq:consistencyloss_pixel){reference-type="ref" reference="eq:consistencyloss_pixel"}.
104
+
105
+ **Student-student.** The teacher-student module operates within an individual model, whereas the student-student module connects two models. In the student-student module, we propose heterogeneous pseudo-labels as the bridge between the two models. The heterogeneous pseudo-labels are generated by the ancillary student network once it is fully trained, and are then transferred to the primary student network.
106
+
107
+ Early attempts with pseudo-labels [@lee2013pseudo] used a network's own predictions to train the network itself. However, such an operation produces confirmation bias [@arazo2020pseudo]: the model memorizes false pseudo-labels and finds it difficult to forget them during training. Therefore, we use a fully trained ancillary model to generate heterogeneous pseudo-labels. Such a discriminative model produces reliable predictions that correctly guide the training of the primary student network.
108
+
109
+ Furthermore, the ancillary model should be trained under different conditions from the primary model, such as a different random seed. As mentioned before, models trained under different conditions generate masks with great diversity. Learning from the predictions of the ancillary student network can compensate for the deficiencies of the primary student network and thereby smooth the segmentation boundary.
110
+
111
+ We visualize the prediction of each network in the training of click-level supervised semantic segmentation, as shown in Fig. [3](#fig:seminardescription){reference-type="ref" reference="fig:seminardescription"}, to illustrate why seminar learning works by leveraging teacher-student and student-student modules.
112
+
113
+ Comparing the segmentations of the ancillary student and the primary student, we can see that the two networks produce diverse predictions of the target person. Although the ancillary student fails to predict the right arm of the person in the green box, it is more robust to the noise in the red box and correctly predicts the legs of the person in the yellow box. The limbs of the person and the background noise are uncertain regions, since they are far from the click-level labels. By integrating the two networks in the student-student module, the primary student obtains better segmentation performance in the leg and noisy regions.
114
+
115
+ As for the teacher-student module, we find that the prediction of the primary teacher covers a wider region of the person in the green and yellow boxes than that of the primary student, which confirms that the primary teacher generalizes better. Since the primary teacher is updated from the primary student and in turn guides it through the consistency loss, the learning range of the primary student gradually grows during training, finally allowing it to recognize the whole person.
116
+
117
+ After the training is done, we obtain the final segmentation prediction as the output. We can see that almost every part of the person is accurately predicted, and the final result is close to the ground truth. This shows that our seminar learning can effectively integrate information from all the networks in our pipeline and overcome the limitations of click-level labels to provide smoother segmentation.
118
+
119
+ :::: table*
120
+ ::: center
121
+ +-----------------------------------------+----------------+----------------+---------------------------------------------------------------+--------------+
122
+ | -------- | ------------ | ------------ | ----------- | ---------- |
123
+ | Method | Foreground | Background | Specifics | mIOU (%) |
124
+ | -------- | Annotation | Annotation | ----------- | ---------- |
125
+ | | ------------ | ------------ | | |
126
+ +:========================================+:==============:+:==============:+:==============================================================+:============:+
127
+ | What's the Point [@2016What] | manual | \- | VGG16, size=\[1$\times$`<!-- -->`{=html}1\]px | 43.40 |
128
+ +-----------------------------------------+----------------+----------------+---------------------------------------------------------------+--------------+
129
+ | ScribbleSub [@2016ScribbleSup] | synthetic | synthetic | Deeplab-v2-VGG16, size=\[3$\times$`<!-- -->`{=html}3\]px | 51.60 |
130
+ +-----------------------------------------+----------------+----------------+---------------------------------------------------------------+--------------+
131
+ | Regularized Loss [@tang2018regularized] | synthetic | synthetic | Deeplab-v2-ResNet101, size=\[3$\times$`<!-- -->`{=html}3\]px | 57.00 |
132
+ +-----------------------------------------+----------------+----------------+---------------------------------------------------------------+--------------+
133
+ | Regularized Loss [@tang2018regularized] | manual | synthetic | Deeplab-v3+-ResNet101, size=\[1$\times$`<!-- -->`{=html}1\]px | 55.63 |
134
+ +-----------------------------------------+----------------+----------------+---------------------------------------------------------------+--------------+
135
+ | **Ours** | manual | synthetic | Deeplab-v3+-ResNet101, size=\[1$\times$`<!-- -->`{=html}1\]px | **72.51** |
136
+ +-----------------------------------------+----------------+----------------+---------------------------------------------------------------+--------------+
137
+ :::
138
+ ::::
2109.04853/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-06-13T19:47:56.779Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36" etag="bZJFW_TNMv9_2MtqaC7R" version="14.6.13" type="device"><diagram id="niUh-jy6z6EmGra3J54k" name="Page-1">7V1bc5s4FP41mdl9SIY79mOatNnZbbfZyW6bPlJbttlg8GISO/31C0ayQVIam0jogp4SS4Dx+fjOjaOjM/dqub3Jo9XiUzYFyZljTbdn7vWZ49he4JR/qpHnemRkefXAPI+n8KDDwF38A8BBC44+xlOwbh1YZFlSxKv24CRLUzApWmNRnmeb9mGzLGl/6yqaA2LgbhIl5OjXeFos4K9wwsP4byCeL9A328G4nllG6GD4S9aLaJptGkPu+zP3Ks+yov5vub0CSSU8JJf6vA8vzO5vLAdpccwJIPpq3yd///W58P+5m/kPX0e36bkNf8dTlDzCXwzvtnhGIsizx3QKqqtYZ+67zSIuwN0qmlSzmxL0cmxRLJPyk13+O8vSAqLoBOVn8i7hjT+BvADbxhC86xuQLUGRP5eHoFkkQfgIBejz5gBICIcWDSw8dFwEn4H5/tIHMZX/QEmdIrURRWpBUn7vu/UqSlviC/57rBDeCeZ8vZPMZXmAE6y2O/Gg+fK/efXXRlcq76y+WD1OwFLeZ0kCoBQkITdExmwQIeCwlIRjLBgOdD/M4VCSHWEgGg6bExxKsmPkiobDOR2OSZZkeQ1FPv/+S3lr5Xdb6M+vO0lZO8xm0TJOnutDFyB5AkU8iRrzFBsEJ+ovrWbSLF9GSWNuA+VSTXqWVc8koChAfl7e8yRO5+SZJT7FeZTE87Sem5QYgrwxF5d+RQqvaqF72c0UeZSuZ+W10FVTUM9usnza/sb9id+jycN856ycY9JyvFEtKMcbw398JLNpvF4lEZRXnCYx+qZZkkUF9vX48+8G3oV9tE6qftnpDKge9BLB5LIW5PUynk4TwIYaLmY3PJIZYwozXG7M8Awz9GGGow8zQls0M3zDDI2Y4enDjJFoZgSGGfoww9WGGCPfF0wMBXJ7jnS5PWfoub2ukPALl3nl9tSEQ3RuD+mrYSaTcDiE5/ZcXrk9JdkhPLfnMsntHRzSg0/qWC+4peWFouVqN+m6XiXyvZ+Kz7Qvgj8BrTnCfXVe91PRIVR3tDVJ80jRAXSnFM2+4pdaDdfUanin6HSag7r/4ZiPSlCCDbFuqnfQL3GLpfMLg4XrfaSwIyC8isWHkLbdJqQvOjx0XVJhTecAiSHLi0U2z9IoeX8Yfdd2iw/HfMyyFRTuvyUNnqHwoscia4sebOPivjr9woefvjVmrrfwyrsPz000uqCzzh7zCThCKRVRPgfFEQdWEvop2jlIoiJ+Aq37YI8dk3SwUaYaK9PbHEzjSRFnqbYq1ZFOpdJy0ZIlFjzPbwnNtinvtkKflJoz4ia2DolKrTILnTHh563TUmSDiWUJPBzKW65+8WCUfNOEHw5F0/eKB+Knebeiw7sV3pUqDCjg+riJ8I9L6HBzdrwO+TVDAVkpoHRJCs4Nh5Ls7DUQ8JgkOw03JOGGykUpODdcSzQ3aLkryYJkX74g2etQ56ZVkNwZE35BAKO8hSZ4CA+SvUEnLUg8hAfJtPoUyTQ9nloIj1Qqrs9Laj6jugVlFT0OyUi0XvE7ePP66BUCDtFqxXdPh8MEV7IGVxrlHWglb72GVr5ZJaYRMxy+zOBAgJEjuLLfN4vBNCKAykteMGaMRWfdfFquQrJYDI9gJYjFGGUUlI3FukLCz/kf+iokHBHx0fGgVyERcIiOjlEMIrOix1MK1JV0nIQ2+3h378xWT9twPANfvvxx/3nhn6MLy18LXwo9f26cVH381pw7nLb7hM5bF1FeXFZ9/8qBSRKt1/EEDX+IE3RLL4L7ail9gJoQvlZKbyOjKkktfWDyreLYSEfEpDH0ieKUS2L02euJaoxQo1RjjLobo/BIYySbLVKgMwXu8Yo3FyYklAyRsGthTAfbjCN2laWT0i6dZnUa+KyLPHsAV7WBvIaWbVYqJWyIWPD2YjqR9kC0SczjmUAJx321FCUDGfSZgbQ9Ju9tzaJVjRetat0BwHPbalr4clWbWoprGGkYebjKAJaR+/LxkslrbMNLjXmptaXct06Vh5FMeqkaRmrMyAFYylEYXkhnK7uWaDDINVwqmGaAN8VrSxEf09yUisD9tiPNByTg94B0TQ8yeEBO5bz+D4iHm3YJHpCuBSwsspXmAcFtDN7US/wD4qvVKbGBB/cXbfuk7utlH+hISV612YOv7ey69JHfXoVB1zXu5r0Rr/dGYY+vjajVEGR1FqVWT6JN9GiV/rwabVMF5pzOIbUyFdSCq1PTGGTZle5ZjmpFzZhvNVaPm8iMKbs99Fql1eFltj5V7/juAsKVHqPmOHqs0B/TWgP2WU5v+uLIhEaHOGeADoGmFp9z/TWKZhIwK3g7APiS2tC+IFNDNB8A7ffGPjbhtcWQkmbHdo/UdKjAnT0eHWIffQ2PBHhoX/FpYtHulkmjzj+2G4yPNEbcAtLQI7WRnG8pZFg2RNdXxy4bQjuW9P8u46f3PUy7R/ghliM64vINFd9KxdGRVETgy0JFXhu1KBESEC6ocCraHYo7jAs6EBdU5dchuAdqiV62Tt3+GpOj6LXL4UhkX3+62IbefqQzJtxsRpetqvWx4QQeffb1p+Mx6HetJB49thij42H68+jTn0eFze+CscDN7+gUMI2GNaKAo3AqmuBGn5vf0bnBZK2k4YYk3FB48zuCG31ufkfnhgINvkYSBslDb/HVGRN+QcCg2z4TeAgPkrvsSK5P0oLEQ3SQTN0eWzJNj6cWet1wgS41Rqk3ZRU9Dkmf7f3piPBKvimhVwg4hKsVk3vTKLjSKO/Q5+Z3dGaYlJxGzHD4MoMDAfrc/I5OAJN304gACm9+hzOjz83v6MxQMOsmQSxmkm7iGqTQEWGUc9MGEeHRsc8r66YmHKKjY1/BpJv4ze9sQkZmtUIL3FdXK/jwMWuuVqAfCG2qJKsV0H0zV19He68yJfeEM9Fnsl7WRHBSRHDKJTCEb3zn03Lbkllv3OURrjNCUkbGep9ovf0jrbcnmfXm1fXqaN0lU/AhnIm+2fVOdPfSETJhIna9oz8UoSrqmXXvaHZKEp56m8XlVxywxnoQBjaGYa334VkHGE+/UG0XiAvtnof973nDIzL0PGvXMiR+mlzkjgBGk9M0ufA+1AGvTK+SdVDCm1oGvFrvKaExcd9XPBwdEokDbLOhaR8NfZqMEqUL4puMBqYAtxnRCW9qGXSowNXX8kiAh/ab5ZoOT90tk0YV1lI0GQ1IbSRnOkvatw0BFOGrbxtc5om0t+nZQe/qQPghwjsbhsrsSigvFSn9f
n+6/lwWKvLq96tESEC4oMKpGHTI4BsXdCAuqMJNRgkPtM8mo/Zm+fDn5nb+e5Lk3727qx+XTxllN0i1fBBC9hSEXoSD2GoypOg9Chos9B4VjQ5JX6nM0JvQIPbAE42G2WzFGCFJ8yBvIhq+8yfF22Nkg8qPeVbJbj93U/7yxadsCqoj/gc=</diagram></mxfile>
2109.04853/main_diagram/main_diagram.pdf ADDED
Binary file (86.9 kB). View file
 
2109.04853/paper_text/intro_method.md ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Within a structured label space, some labels are inherently closer to one another than to others - *e.g.*, in ICD-9 *425.0 Endomyocardial fibrosis* is closer to *425.3 Endocardial fibroelastosis* than to *305.1 Tobacco use disorder*. Flat prediction and standard precision/recall/$F_1$ scores (from here on referred to as *standard metrics*) over individual prediction-level (leaf) codes treat all mispredictions equally -- *e.g.*, having *425.0* be mispredicted as *425.3* is penalised the same way as mispredicting it as *305.1*. This phenomenon has been addressed in information extraction (IE) by @Maynard_Peters_Li through the use of distance metrics. The IE setting assumes both the gold standard and predictions to be associated with specified spans within the input text. This means an individual prediction can be associated with a true label, allowing direct comparison between them.
4
+
5
+ The LMTC setting uses weak labels -- predictions and true labels appear on the document level, without exact association to spans within the text. Due to the absence of information regarding associated spans, direct links between individual predictions and true labels do not exist. Label comparison is performed on full vectors representing multiple labels, hence the IE approach is not directly usable. @Kosmopoulos_Partalas_Gaussier_Paliouras_Androutsopoulos_2015 address hierarchical label spaces in document classification with *set-based* measures: the gold-standard and prediction vectors are extended to include ancestor nodes within the hierarchical label space and augmented according to its structure and the true-path rule.
6
+
7
+ Let $X = \{x_i \mid i = 1, ..., M\}$ and let $Y = \{y_i \mid i = 1, ..., N\}$ represent the sets of predicted and true codes for a certain document, respectively. Assume we have access to an augmentation function $An_j(x)$ which returns the ancestors of $x$ up to the $j^{th}$ order.
8
+
9
+ Let $X_{aug} = X \cup \{An_j(x_i) \mid i = 1, ..., M\}$ and let $Y_{aug} = Y \cup \{An_j(y_i) \mid i = 1, ..., N\}$ represent the augmented sets of predicted and gold codes for $X$ and $Y$ respectively. Standard metrics can then be applied to the $X_{aug}$ and $Y_{aug}$ sets. A correct assignment of predicted lower-level (leaf) codes results in a correct assignment of their ancestors. In the case of incorrect prediction-level assignments, the closer a mismatched predicted code is to the gold-standard leaf within the ontology, the more matches will occur across the levels of the hierarchy.
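+ As an illustration, a minimal sketch of this set-based augmentation and the resulting standard metrics (the ancestor map below is a toy stand-in for a real ICD-9 hierarchy, not the actual augmentation function):
+
+ ```python
+ def augment(codes, ancestors, j=1):
+     """Extend a set of leaf codes with their ancestors up to the j-th order.
+     `ancestors` maps a code to its ordered chain of ancestors (toy example)."""
+     augmented = set(codes)
+     for code in codes:
+         augmented.update(ancestors.get(code, [])[:j])
+     return augmented
+
+ def set_precision_recall_f1(predicted, gold):
+     """Standard metrics computed on (possibly augmented) label sets."""
+     tp = len(predicted & gold)
+     precision = tp / len(predicted) if predicted else 0.0
+     recall = tp / len(gold) if gold else 0.0
+     f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
+     return precision, recall, f1
+
+ ancestors = {"425.0": ["425"], "425.3": ["425"], "305.1": ["305"]}
+ # the near-miss 425.3 vs 425.0 now receives partial credit via the shared ancestor
+ print(set_precision_recall_f1(augment({"425.3"}, ancestors),
+                               augment({"425.0"}, ancestors)))   # (0.5, 0.5, 0.5)
+ ```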
10
+
11
+ On the leaf level, each code appears at most once per document. Duplicates can occur when $An_j(x)$ produces the same ancestor for multiple codes. As $X_{aug}$ is a set, duplicates are removed. Hence, the set-based approach captures whether an ancestor is present, but not how many of its descendants were predicted. This results in a loss of information regarding over-/under-prediction of classes on the ancestral level. Over- and under-prediction is a valuable phenomenon to track, particularly if the label space includes inexplicit rules -- for instance, for some nodes only a single descendant can be predicted at a time, as individual siblings are mutually exclusive (*e.g.*, a patient can be assigned at most one of the codes *401.0*, *401.1*, and *401.9*, which represent malignant, benign, and unspecified hypertension respectively). Furthermore, retaining this numeric data on ancestral levels enables analyses on higher levels, *e.g.*, of the performance of a family of codes in a semi-automated code-assignment application. For this reason we propose a metric that retains the descendant counts for these ancestor codes.
12
+
13
+ To correctly define the augmentation function we need to ensure our representation of the hierarchy fits the setting. Previous work involving the ICD-9 hierarchy [@Rios_Kavuluru_2018; @Falis_2019; @Chalkidis_Fergadiotis_Kotitsas_Malakasiotis_Aletras_Androutsopoulos_2020] represents it through the relation of direct ancestry, considering parents and grandparents of the leaf nodes. As ICD-9 has leaves at different depths, this representation leads to structural issues, such as one code acting as a parent for some leaves and as a grandparent for others. For instance, code *364* has a parent relation to the leaf *364.3* and a grandparent relation to the leaf *364.11* (Figure [1](#fig:sub2){reference-type="ref" reference="fig:sub2"}). This poses an issue for aggregation and evaluation. We address it by representing the hierarchy through the levels of the label space, with each level containing all the nodes at a given depth of the tree structure.
14
+
15
+ To implement level-based augmentation, we first define the three layers of the ICD-9 hierarchy on which leaves appear. An ICD-9 code (*e.g.*, 364.11) consists of a "category" (the part of the code before the decimal point, *e.g.*, 364) and an "etiology" (the part after the decimal point, *e.g.*, 11). The etiology can be represented with up to two digits. We define the basic levels of the hierarchy (encapsulating all the labels in MIMIC-III) as follows: codes with a two-digit etiology ($e_2$); codes with a single-digit etiology ($e_1$); and codes described only by their category (no etiology, $e_0$). Augmentation can be performed up to a higher user-defined level within the hierarchy by adding further layers representing chapters within the ontology.
16
+
17
+ The originally flat predicted and true labels are divided into their respective layers of the hierarchy. If a code appears on a level lower than the maximum level set by the user, the truth value of the code is propagated to its direct ancestor through augmentation. The propagation can be interpreted either as a truth value (*binary*) or as the number of descendant leaves present (*count-preserving*). The binary interpretation results in the ancestor holding the truth value of the logical OR operation over its children. This mimics the set-based approaches described by @Kosmopoulos_Partalas_Gaussier_Paliouras_Androutsopoulos_2015. The count-preserving interpretation sets the value of the ancestor to the sum of the values of its descendants. By retaining the numeric information, the count-preserving interpretation allows us to track over- or under-prediction within a family of codes.
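+ A small sketch of this level-based propagation for the three basic levels (simplified parsing that ignores E/V codes and the chapter layers; function names are hypothetical):
+
+ ```python
+ from collections import Counter
+
+ def nodes_by_level(code):
+     """Map an ICD-9 code to the node it induces on each basic level:
+     level 0 = category (e_0), level 1 = one-digit etiology (e_1), level 2 =
+     two-digit etiology (e_2), e.g. '364.11' -> {0: '364', 1: '364.1', 2: '364.11'}."""
+     category, _, etiology = code.partition(".")
+     nodes = {0: category}
+     if len(etiology) >= 1:
+         nodes[1] = f"{category}.{etiology[0]}"
+     if len(etiology) >= 2:
+         nodes[2] = f"{category}.{etiology[:2]}"
+     return nodes
+
+ def propagate(leaf_codes, binary=False):
+     """Propagate document-level leaf codes to every level, either preserving
+     descendant counts (default) or reducing them to a binary truth value."""
+     levels = {0: Counter(), 1: Counter(), 2: Counter()}
+     for code in leaf_codes:
+         for level, node in nodes_by_level(code).items():
+             levels[level][node] += 1
+     if binary:
+         levels = {lvl: Counter(dict.fromkeys(c, 1)) for lvl, c in levels.items()}
+     return levels
+
+ # 401.0 and 401.1 are mutually exclusive siblings: the count-preserving view
+ # exposes that category 401 was predicted twice
+ print(propagate(["401.0", "401.1"])[0])   # Counter({'401': 2})
+ ```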
2110.13578/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-04-27T10:41:00.188Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.72 Safari/537.36 Edg/90.0.818.41" etag="b9Gfg1rkjce_RhGV7t-a" version="14.6.1" type="google"><diagram id="zOMYd3nO2qpQgvZTptGV" name="Page-1">7Vxdc9o6EP01TNqHdmzJNuQxIfRjppnpTKa9N3npKFgBt7blyiJAfn1lLGFbGEhbsDc1T1irleScPdKudiE9PIwW7zlJptfMp2EPWf6ih696CNkWHsiPTLLMJV7fywUTHvhKqRDcBE9Uj1TSWeDTtKIoGAtFkFSFYxbHdCwqMsI5m1fVHlhYXTUhE7WiVQhuxiSkG2r/Bb6Y5tKBW9L+QIPJVK9sW6onIlpZTZFOic/mpbXwqIeHnDGRP0WLIQ0z8DQu+UTvtvSuX4zTWDxnALn98MVzP97dX12Nvn7+GvAv89kbB6mXE0v9F1NfAqCajIspm7CYhKNCesnZLPZpNq0lW4XOJ8YSKbSl8DsVYqmsSWaCSdFURKHqpYtA/J8Nf+uq1q2aLHu+WpQbS92IBV+WBmXNWz1f1iiGrVp63CZMmkaET6jYhY1SzAApjVTovqcsonIhqcBpSETwWGUMUcSbrPXWQz+zQL4LstQmQZoyaougvlWdImUzPqZqVGHhC87JsqSWZArpb6zjGoQx9PH5Tn35kL+BbpUwKUQrEv4GITV8jyScKUA3GFrl33waCHqTkHHWO5enUJVrW+3/SLmgi52G1b0mEAPVnhcngj4+pqXDABl2LFOhhOIfgGQBBGlggHTeMkg2RJD6VZCw1TZInT3+80N11yZ7ppsYHNpL/N3RAI/0bpXzjrvJebtR0uMT6bfGPC+S9A480q8vMDtY3yjp3RqMvFCuevnAVlFfAZb3c8Z0x5t0RegLqWA7yaLolE+T/FOi76T5h55SvmE+q9YxrCFBFFXIU8HZDzpkIeNSErM423QPQRgaIhIGk1g2x9I0VMovM5ME8r52oTqiwPdXO7bOxlUWHMDMRmTo1ESGTpNW9o5qZdJRK5s3p9bN3D95sD2eaa8H0+czEBc2gOfCbCNwwzWs95pk/flRD7fxK+nEhuR1R88429pv7UbPOLsuE3Q4c9913NxofyaiWXNjeCegGd5h1HIQb9fddA63J56+PXR0O2BzO7Ru6bpIvu3tYKR4MW4bpPMNTJqMg8tR8DomhhIH2+qO8NICYf3eoIhvpu2dlonvdDeFCaZs6zVUtt1YB2TZFsHbtS64si3AENeFVrZFAJP5LrSyLXI7e/7vr9t6z3QUGIEKfBDAiN8BV7tFp9T3dmyem/uGxnyAyW8PWv0WHTf7nZ51NPPjQCvh6qTKsWq4XTW0eY9q39CnL9/t9VB7XZm+LgBxZfq9IbkyF1odF9fdgw9ayD3LSntnXa3tmS6t9VIuPm7Z6q7rBnft/fmJZg1e923DtiN6aMVcfNwv6z313FHPHSY8iGhn67p9aHVdDLC85UGr6+ofjZ7qujXYqDzAi4uM6/IXbRMfXF0X4K8fsVkhstsGaXs0ea892vW1FFshS9OSy7v/9/0d9qrGstu+6znbf3eyNkdKokRChKx4FuVxTXZj7ZTZzCt6TZhiW+5B7CabxT8dyOv/xb9uwKNf</diagram></mxfile>
2110.13578/main_diagram/main_diagram.pdf ADDED
Binary file (12.1 kB). View file