Eric03 committed on
Commit 4338e65 · verified · 1 Parent(s): d0d36ed

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes.

Files changed (50)
  1. 2007.12729/main_diagram/main_diagram.drawio +1 -0
  2. 2007.12729/main_diagram/main_diagram.pdf +0 -0
  3. 2007.12729/paper_text/intro_method.md +48 -0
  4. 2011.13005/main_diagram/main_diagram.drawio +1 -0
  5. 2011.13005/main_diagram/main_diagram.pdf +0 -0
  6. 2011.13005/paper_text/intro_method.md +144 -0
  7. 2105.03714/main_diagram/main_diagram.drawio +1 -0
  8. 2105.03714/main_diagram/main_diagram.pdf +0 -0
  9. 2105.03714/paper_text/intro_method.md +154 -0
  10. 2105.12245/main_diagram/main_diagram.drawio +1 -0
  11. 2105.12245/main_diagram/main_diagram.pdf +0 -0
  12. 2105.12245/paper_text/intro_method.md +113 -0
  13. 2106.00162/main_diagram/main_diagram.drawio +1 -0
  14. 2106.00162/paper_text/intro_method.md +119 -0
  15. 2106.02658/main_diagram/main_diagram.drawio +1 -0
  16. 2106.02658/main_diagram/main_diagram.pdf +0 -0
  17. 2106.02658/paper_text/intro_method.md +25 -0
  18. 2112.07374/main_diagram/main_diagram.drawio +0 -0
  19. 2112.07374/paper_text/intro_method.md +94 -0
  20. 2203.04251/main_diagram/main_diagram.drawio +0 -0
  21. 2203.04251/paper_text/intro_method.md +80 -0
  22. 2203.12719/main_diagram/main_diagram.drawio +1 -0
  23. 2203.12719/main_diagram/main_diagram.pdf +0 -0
  24. 2203.12719/paper_text/intro_method.md +114 -0
  25. 2205.12374/main_diagram/main_diagram.drawio +1 -0
  26. 2205.12374/main_diagram/main_diagram.pdf +0 -0
  27. 2205.12374/paper_text/intro_method.md +91 -0
  28. 2206.01078/main_diagram/main_diagram.drawio +1 -0
  29. 2206.01078/main_diagram/main_diagram.pdf +0 -0
  30. 2206.01078/paper_text/intro_method.md +87 -0
  31. 2208.11640/main_diagram/main_diagram.drawio +1 -0
  32. 2208.11640/main_diagram/main_diagram.pdf +0 -0
  33. 2208.11640/paper_text/intro_method.md +161 -0
  34. 2209.10091/main_diagram/main_diagram.drawio +1 -0
  35. 2209.10091/main_diagram/main_diagram.pdf +0 -0
  36. 2209.10091/paper_text/intro_method.md +177 -0
  37. 2210.15777/main_diagram/main_diagram.drawio +1 -0
  38. 2210.15777/main_diagram/main_diagram.pdf +0 -0
  39. 2210.15777/paper_text/intro_method.md +114 -0
  40. 2211.13775/main_diagram/main_diagram.drawio +1 -0
  41. 2211.13775/main_diagram/main_diagram.pdf +0 -0
  42. 2211.13775/paper_text/intro_method.md +122 -0
  43. 2212.00767/main_diagram/main_diagram.drawio +0 -0
  44. 2212.00767/paper_text/intro_method.md +81 -0
  45. 2212.12192/main_diagram/main_diagram.drawio +1 -0
  46. 2212.12192/main_diagram/main_diagram.pdf +0 -0
  47. 2212.12192/paper_text/intro_method.md +83 -0
  48. 2301.07300/main_diagram/main_diagram.drawio +120 -0
  49. 2301.07300/main_diagram/main_diagram.pdf +0 -0
  50. 2301.07300/paper_text/intro_method.md +194 -0
2007.12729/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile modified="2019-09-18T17:28:08.052Z" host="www.draw.io" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36" etag="mwiNqXduFpoi1IAIet1G" pages="1" version="11.2.9" type="device"><diagram id="oJW6QlTAoXozuOn7SZ3D" name="Page-1">5ZzbcqM4EIafxpeTQicOl4mTyU7NTtXWZGv3WjGKTQ1GLpAPmadfEcA2ah9YD1jKJBcp0wgBX7fQ343sERnPN485X8y+yVikI+zFmxG5H2EcUV//Lw2vlYF5qDJM8ySuTHuGp+SnqI1ebV0msShaDZWUqUoWbeNEZpmYqJaN57lct5u9yLR91gWfCmB4mvAUWv9NYjWrrciPdjv+EMl0Vp86xEG1Y86bxvWdFDMey/WeiTyMyDiXUlWf5puxSEt2DZfquM9H9m4vLBeZ6nKAWsffvtB/bnn0ffnj6+vn+XT1/Km+2BVPl/UN1xerXhsCuVxmsSg78Ubkbj1LlHha8Em5d61drm0zNU/1FtIf4UXV17kSuRKbPVN9kY9CzoXKX3WTei+rDljvaNeW2R5mGtRUee3f6bafHQL9oabwP4g0gXcKicji2zK29NYk5UWRTDpSEHEr2iCD/bv24G03tlykXCWrdoweQlGf4S+Z6CvZIm5Csh6TmKIb1u6jkMt8IurD9oPK6Cli53pSPJ8KBXp68832xn/BXei8u64dwciDMUyuG8TYPSqY4Zt2tCDUaawjPBQm0uHx997HOmLmYDdodh3q2wOPdTTwSCeRgzHtg5juNn8NF9O0w/x1dUwBwERsY3Jw3tDi0cREbWNycSKJACZmGxNxDxPxACbfNibqICYEMAW2MTEHMWGAKbSNyXcQEwGYItuYHMzxCTUx4QM503UxhQ5iAikTtp0yUQdVOAEqvGMVaThMzEEVToAKx7ZVOHNQhROgwrFtFc46yMv3XqegHjGgX1inoNGZjo7UKTQ9/rrXbFE2KE5c8JHz7Bxd9dhrEYS5qKBBPoZt52PMQQVNQT6GbedjzEEFTUE+hm3nY8xBBU1BPoZt52PMQQVNQT6GbedjzEEFTUE+RmznYz7uGdM053GiEY1lKvO348nL219fCFELIGpKp6cBsqHwOViRpMxAxOwiclBLUd9A5NtF5KKOCgxEgV1ELmqo0EAU2kXkon6KDESRXUQOaqftUr1tidYuIgd1E0MGImQVUeBg1ZFhAxG2i8jBiiMjBiJiF1HfyrsPRIa6xnbVdeCgumaGusZ21XXgoLpmhrrGdtV14KC6Zoa6xnbVdeCgumaGusZ21XXgoLpmhrrGdtV14KC69g11Teyq68BBde0b6prYVdehg+raN9Q1sauuQwfVtW+oa2JXXTeThVOIDHVN7Krr0EF17RvqmthV11GHKHrvK0KYH4KFOBeuCfHZ2a56WhVy/EyDrguJYCrxMH8WcZxkUxAYOuRVOxRyUSQ/+fNbg3I81XepW7O7EbvXFr5Usqi+nVsewNNkmunPqXgpuyrHUTLh6W1tVrIcgYUekPr8f5cb959oT6v/zZV+cByGB2JysPeUEcxQxjJbyXSpEpn9XuwR88FCS88yfpj9PKbymZcHzvmmvGMp09/LDdSjbSegCDgBeVf1AkywvotixhdiSPITjVDk12XPgggsyYb0/WvC37raKb1klNqI3VIb8qCmRBvs0bCBdzJEC5XLH6JZOZLJTLe8e0nS1DCdjcx5EsflaQ56oO2jXCr+NoVUm31MncYXeOGjmx6K2xPC7Rd9AkvEyP9oTkHMeJjDdVcHvTLc0wQ+yxEON2jDEGpm2w/gFxwx8HMFxLZrYJGucg36MG6hzWKS7RfJqWWnoL5fdJ1aqAics0eYsr5eJMLvCcHJGzeOuMr0jWB+qyP/w0S97/mXRT0ezCF9v5i7NOqjnhCHzcuFbcwHN3AqvqZkRR3eWr33Gl/ggXSKGp10rfEF0dmueqrxHT/ToDU+hOD0/2eSCZ53eA6+qyQ7MAscB1KVQ0F5wSSvN3c/wVd5avc7huThPw==</diagram></mxfile>
2007.12729/main_diagram/main_diagram.pdf ADDED
Binary file (11 kB).
 
2007.12729/paper_text/intro_method.md ADDED
@@ -0,0 +1,48 @@
+ # Introduction
+
+ Malware programs are still making newspaper headlines. They are used by criminal organizations, governments, and industries to steal money, spy, or carry out other unwanted activities. As millions of new malicious samples are discovered every day, spotting them before they harm a computer or a network remains one of the most important challenges in cybersecurity. During the last two decades, hackers kept finding new attack vectors, giving malware multiple forms. Some use the macros in Microsoft Office documents while others exploit browser vulnerabilities with JavaScript files. This diversity raises the need for new automated solutions.
+
+ Portable Document Format (PDF) is one of the most popular types of documents. Despite a general lack of public awareness, it also became an important attack vector (AV) for computer systems. Dozens of vulnerabilities are discovered every year in Adobe Reader, the most popular software for reading PDF files [\[1\]](#page-13-0), allowing hackers to take control of the victim's computer. PDF malware can be segmented into three main categories: (i) exploits, (ii) phishing, and (iii) misuse of PDF capabilities. Exploits operate by taking advantage of a bug in the API of a PDF reader application, which allows the attacker to execute code on the victim's computer. This is usually done via JavaScript code embedded in the file. In phishing attacks, the PDF itself does not have any malicious behavior but attempts to convince the user to click on a malicious link. Such campaigns have been discovered recently [\[2\]](#page-13-1) and are, by nature, much harder to identify. The last category exploits some regular functionality of PDF files such as running a command or launching a file. All those attacks can lead to devastating consequences, such as downloading a malicious executable or stealing credentials from a website.
+
+ Despite recent work in machine learning for malware detection, antivirus companies still largely focus on handwritten signatures to detect malicious PDFs. This not only requires significant human resources but is also rarely efficient at detecting unknown variants or zero-day attacks [\[3\]](#page-13-2). Another popular solution is dynamic analysis, which runs the files in a controlled sandboxed environment [\[4\]](#page-13-3). Such approaches significantly increase the chance of detecting new malware, but take much longer and require access to a sandbox virtual machine. They also still require a human to define the detection rules according to the file behavior.
+
+ Antivirus vendors use a few different approaches to detect malware in PDF:
+
+ - **Signature-based detection**: This is the most basic and common method used to identify malicious files [\[6\]](#page-13-4). A security analyst manually inspects a malicious file and extracts one or several patterns from the byte code, the "signatures", which are stored in a database. When analyzing a new file, the engine tries to match its code segments against the database. If a match occurs, the file is blocked.
+ - **Static analysis**: Another rudimentary technique commonly used by antivirus software. It consists of applying heuristic rules to the content of a file to find potentially malicious actions. The simplest approach is to search for keywords like /JavaScript, /OpenAction, or /GoTo, which are related to actions that can be harmful to the computer (a minimal sketch of such a scan follows this list). In the absence of those tags, an analyst can confidently say that the file is benign [\[21\]](#page-14-0) (although some attacks manage to inject JavaScript code without requiring a JavaScript tag).
+ - **Dynamic analysis**: A more expensive but potentially stronger method for detecting malicious behavior. It consists of running the file in a controlled environment (sandbox) and retrieving the API calls and network activity produced by the possible malware. A program can then apply heuristics on top of the activity logs, such as flagging a connection to a malicious website or the launch of a subprocess [\[21\]](#page-14-0).
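+
+ A minimal sketch of such a keyword scan (our own illustration, not the paper's method or any vendor's implementation; the tag list and the triage rule are assumptions):
+
+ ```python
+ # Illustrative keyword-based static analysis for PDF files.
+ # The tag list is an assumption; real engines use far richer heuristics.
+ SUSPICIOUS_TAGS = [b"/JavaScript", b"/OpenAction", b"/GoTo", b"/Launch"]
+
+ def scan_pdf(path: str) -> list:
+     """Return the suspicious tags found in the raw bytes of a PDF file."""
+     with open(path, "rb") as f:
+         data = f.read()
+     return [tag for tag in SUSPICIOUS_TAGS if tag in data]
+
+ # No hits suggests (but does not prove) a benign file; a hit merely
+ # warrants closer inspection, not an automatic verdict.
+ ```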
+
+ In this work, we use an ensemble of Convolutional Neural Networks (CNNs) to detect any type of malicious PDF file. Without any preprocessing of the files, our classifier detects 94% of the malicious samples in our test set while keeping the False Positive Rate (FPR) at 0.5%. Our classifier outperforms most of the antivirus (AV) vendors available on the VirusTotal website. We also show that our CNN can successfully group more than 75% of the malware into different families. Finally, we present some examples in which we were able to detect an attack before the AVs (zero-day).
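+
+ As a rough illustration of this kind of byte-level classifier (our own sketch; the paper's exact architecture, filter sizes, and ensemble strategy are not reproduced here):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class ByteCNN(nn.Module):
+     """Hypothetical byte-level CNN for PDF classification (illustrative only)."""
+     def __init__(self, emb_dim: int = 16, n_filters: int = 128):
+         super().__init__()
+         self.embed = nn.Embedding(256, emb_dim)          # one embedding per byte value
+         self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=8, stride=4)
+         self.head = nn.Linear(n_filters, 1)              # malicious-vs-benign logit
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, n_bytes) int64
+         h = self.embed(x).transpose(1, 2)                # (batch, emb_dim, n_bytes)
+         h = torch.relu(self.conv(h)).max(dim=2).values   # global max-pool over positions
+         return self.head(h).squeeze(1)
+
+ # An "ensemble" prediction could simply average the sigmoid outputs of
+ # several independently trained ByteCNN instances.
+ ```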
+
+ To the best of our knowledge, this is the first paper using neural networks to classify PDF malware. It is also the first one that investigates the ability to automatically classify malicious PDFs into different families. Finally, as an attempt to build a baseline for detecting PDF malware, we open-sourced the list of the files used for the research. They are all downloadable from VirusTotal.
+
+ Paper organization: We first present the related research in machine learning for detecting malicious PDFs and the usage of Deep Learning applied to malware detection in executable files (Section [2\)](#page-2-0). We describe how we built our data set in Section [3,](#page-4-0) and describe our model in Section [4.](#page-4-1) We show our results on the data sets in Section [5.](#page-5-0) We investigate the capability of our network to differentiate between malware types in Section [6.](#page-9-0) Our conclusion is in Section [7.](#page-11-0)
+
+ # Method
+
+ Figure 8: Model A
+
+ ![](_page_15_Figure_2.jpeg)
+
+ ![](_page_16_Figure_0.jpeg)
+
+ ![](_page_16_Figure_1.jpeg)
+
+ Table 2: Most common family name per cluster
+
+ <span id="page-16-0"></span>
+
+ | Cluster | Microsoft | Ikarus | McAfee |
+ |---------|-----------------------------|-----------------------------|--------------------------|
+ | 0 | Exploit:Win32/Pdfdrop.D | possible-Threat.PDF.Acmd | Suspicious-PDF.gen.a |
+ | 1 | Exploit:JS/ShellCode.gen | Exploit.PDF-JS | RDN/suspicious-pdf.gen |
+ | 2 | Trojan:Win32/Tiggre!plock | Trojan.SuspectCRC | RDN/Generic.dx |
+ | 3 | Trojan:Win32/Meterpreter.O | possible-Threat.PDF.Acmd | Artemis |
+ | 4 | Exploit:Win32/CVE-2012-4914 | Exploit.Win32.CVE-2012-4914 | Artemis |
+ | 5 | PDF/Domepidief.A | PDF.Domepidief | Artemis |
+ | 6 | Exploit:Win32/Pdfjsc | PDF.Exploit.PDF-JS | Exploit-PDF.bk.gen |
+ | 7 | Exploit:SWF/CVE-2010-1297.D | Trojan.Script | Exploit-PDF.bk.gen |
+ | 8 | Trojan:PDF/Sonbokli.A!cl | Trojan.PDF.Phishing | Artemis |
+ | 9 | Trojan:HTML/Brocoiner.N!lib | Trojan.JS.Agent | RDN/Generic Downloader.x |
+
+ Figure 10: Model C
+
+ ![](_page_17_Figure_1.jpeg)
2011.13005/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2021-03-16T22:06:40.155Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" etag="NNstpxl0vD5QBcPTUHSW" version="14.4.8"><diagram id="LPTF2LL8Ot6HEKbtqK-g" name="Page-1">7V1td+I4sv41fIyO3l8+5nXuPTvTZ8723r27n/aQ4CRME8gFejo9v/7KYIEtyViAZYtAus9JMFg2rlJV6amnSgNy+/bxy3z4/vrbbJRNBhiOPgbkboAxIorpX/mRn+sjCsv1gZf5eFR8aHvg6/ivrDgIi6Pfx6NsUfngcjabLMfv1YNPs+k0e1pWjg3n89mP6seeZ5PqVd+HL8UV4fbA16fhJHM+9r/j0fJ1fVRisT3+X9n45dVcGXG1fudtaD5cDLF4HY5mP0rXIvcDcjufzZbrv94+brNJ/vDMc1nf0EPNu5sbm2fTZcgJo7/+dQ9/on/+8frz+fsb+fLjb4vfr4pR/hxOvhdfuLjZ5U/zBOaz79NRlg8CB+Tmx+t4mX19Hz7l7/7QMtfHXpdvE/0K6T9fJsPFovjoYjmffds8NaKPPM+my0LEiOvXw8n4ZapfTLLnZf72eDK5nU1mc31oOptmmzHMwQEm/E4KTvU7xY1n82X2UftE0OY5awXNZm/Zcv5Tf8ScgKAAHMHNTzFCoaqCFSrxYyt4pBhgxcdeS3InlBY6V+jby+ZqW5HoPwqp7CEh3KuE8telp79+TEGSGw0Xr6t7Qi0JixAARUVAeq5hV0JIQFc+zNiT1uVDLvIp5IOrwkHIM338wmGxhENPzbzdXd/Lh9t2JEIkB1I2mjRsTFc39oydmkgeHu4U5+2IhDMGcKNEkGCAdykTfnoyuWECtiMTyQQQrHmaYAlgp1IRpyaVO3Yv79qKzQhMUCTSIxI+WRaPT//9slx9/fKxkrz4/32fmTeuFquHfa0/gPD7x/ZNM8rffr+dTf8cYH2ncKrvFjt//vcXcyV9zHcDj3P7iPPBYxRqPlsOl+NZriUKWjqi1eFx+CRHxFGUQnviODgppGtOpSfmwLFCDtWgIgHqQH3q0JdMqwHm8+rHlTV6HKIMR5Q1hRBY6zMpWZiseSRRmwB3l4XORi/Z1+LlbL58nb3MpsPJ/fboTVU828/8Opu9F0L5I1sufxbGefh9OauKLJuOrnOkQ798nMyevv3jdTxdH34YT8yHbPOuH/v857/yiwJmXv67uIfVi7uPyquf5tXHeLk+jUpVvM7Pu9Ligbg4sD03f1E+9fdsPtYPP5sXx9bPK39Iu/VCP9PZ9/lTtksaBh4azl+y5a4PCr+mzbOJVv0/q3fiU5rVqfqRD3+WPvA+G0+Xi9LIv+cHtgqsFzmOsZLc0sH1oFuN3NzdEUrqg3hi+azcY6E7x1HhadKO6lk+ZU9PMY0XTtBRIR+ydPFURwubUQmkUJufqtwRxAgQR/Sduq0AyCp9t1VxWlsfVuO22vQztCs/c5yUfdhXC3YfBdj9/Jd+a/iWT9np4yL/dfY+gDg+AEHkRrDd+gAfGtdRcFCKEc5ZLwjDCcYGAYhg+i4CMIjKbgIB2OQnVq9iLlJEqPPAbSxS3FWIZPYyWkckrDrM+uaKM7eatPeKR5Laa8Vd8rjI6fVo5CrwZDJ+X2TNJmK4eF9TC57HH7k6OzaC5f/08Zf5cDTOKmHo4+pfzHCTSEBK4aayXQwGrpMxEq8YE1avTMcZExc1/Xu2uMkn9H9uPJHClefY1JGefmLLqpg8ELUFYucPevw0nFwXh9/Go9GkTgGq1qtsZVqCuIlUgFkZbSg86SDsSdDFM/1NAGYsjButf3GaJrhdzZjY+tYF3I1wNSWCsPHKvcUJ2IeAnjCG4BFyIqgCIsQWvgoTfiwYAbvAYsWsXxWT+Rj5uPJoCNdHw0w+xwzXEU9vEvqAvESD9U36YLBNHRToTX3e4MBUxaC9sB0Xk685bCfQrz5Hxu0IKQRKQZ6NKmIsdKwNKZQk/7HmfEsBPZa0elWTpH+ou2sM4c4z4sT/2AU4N5bpurBMyPCQuzNNLJMjGtM0KWorRe+mKYCAl4xpOjYjuodJa9E0baLCRtvEatTnWNME9QqUY0v1uKVSLZkgCq0LGUJprQmCXO08I5IJCqA5pqL4NoAWqpytaxLnGKBqmKsE3PDtzUDrKeGokjuc4oCSOmTEN3KNkramFD5UNdbS+tvVdGqjKC95jU5ay2rLSXIquBQ71l2xnCc3kLdRD+4BYrr1nT7+p6st5awKLtIqbzaE8slzKJzB1ITnY4omav8dHu+miuRQzxCyolJRXIg0p2ytfLcmPgw9/W34kT+32Wwynr6kMUkfHu6ZUjEnqUptkhIXvcxn3xqSDkpLfKtks98cAZ1+koIzaVdH+JMUhACiuhReE6/xxKDnVIBmzq0InIuwaRoLZzZ1hZ/LlZbQBohUBW8AGB1EUzBAhMFZOwYiSChGSmQU148QUwBXlVdKBuKQGxT02MUidxoVWSA+9qYbYWwd2Sq3CosFAuOlP9IIO4rqylj2TCIJuFU4InjfkUcquOjGCFFZAT3zcg4qD7JCe6WBAIOkZK7yy+J4VSSt2xzFMIBs20DCtj4cMKsUMRS8UoICQWgtixzpCysH22rbyOmbsOdOFzUrJIyW2mDjWM6hq8br52LztFqmZ/NCOKV+4+E1WdWYqTliKtufww1G6fExz9Mzx46FMTgDevrVkvpsrl6wVeEE8JLdcGpTGJBBlqS1mR5Q5H56wX2oeoVE1ZGCZYiIG7/aa7iW3AiCvmAZVxzJfncY1/u42O3tcOmq5aegDzdZBC0Et1qtU/owCYNUy3kQrSfUCgbOJheimBOz6fnHPFJEtEv3T12w9aTwujocLhTHizeBxe4JjENFHwvAoy5S+8uXL440Th8rlxztjNp65/bTECg1IAQ/pVA6J0XqlfTOmJez2pV0aGSNoDzmMq1FWofcRf13OuTRRY3NaBj6ubdn4T7PsrJSCZNlNl3QouWCbLYdMcmYJhSBRjNhkcraLwoQpADMINf9KUCkknWvAtzOZwttbuH1cqmfWC6IlNWBj4reyNFqTbkOJqs8OUSJG14S6VkgxtOISLTKi0nw6YDAVhSQgE8IY0q2w6st4wxCQH+zE5TzwFKsYu0cjXC0BQZqS7xFUFgTzy+Z/pKLXFG+vy+0cPXjT0Jk11zCGxlRZNIuI0pAZMp5pJ1mxwf7ppoO6FqyyYhjVSbwrBLxgx7bKppleEflRfuveKmlrlhQS+EirEPZiaOZp1P4rih0qssI7Ln2nbmQ5udNESHIOKAWmIm56xQ6TQsxH5bZRdC3Wtz4gr5UO5d0HfNpfQlQlm4jCBaGG55t0LellCQksyao7+JjW5O+QB4L37eT7as5Jcq3lfJZ+Bz8PGvLjqQCdlqz/y4
TLAwAPF/jjhhLUWwnxP2LyPBjcSph9TrY6r3DbC56ZG4n8wJtFh8b6pmo5yEcZU+zUTb3GF2vTvw6fMwm1hqq4IA8aQnkQEg9OURbgfFfw8fVeLnSFBiDHpzdDNhdrbg9k7DYdrQYbFDaRXSrCDuUv3bKrgohZJXGZbaCPFYxgGLecc0Qs+fnRRZHIyL1lrxkaoIyNf0nb7kLm31d28eAeb+bIJbHP3BI4HAvU7AfT4x7pNqCoOwORRgFltySaHLyFVefn2tel0K37pq5Xm1Zq+2ufbNZ3Af55mx6Xr6Z756uV6uSbSvr0IprviIACd+48V0zdwGyf6wzPs1muEtza1Ne+A1+eIhjhhkJBEDimeFIZLdPi4A8yNv729v4CEgtJZbRwBAr2qqan3DnwboWnJtsfPOmhNxUj69z9RA19BaOmzznBTLV3Dd43X6u6+y5sjdtJ6qDAjveZR/ESn88Uw2GkrZi0XHcfMtrO99D5LZGs7wdFQe+rG4069Ullc+z+4SnXjANFXGTR17lsDTp5vpWrnb3bsP12dtPSBi4TI2nLU1UvpPPEIYIGd1co3vckpAJBsRagkgUGNPESgRyFzZ0dprAJgkUr597iCzuru/lw21LsrC3muh/wgkXv3M66zOzsewn7qyfgCRchM6ZEx1Ious5QbAtCdq7JFworTwntv0IGqCzE58jhMnkJBOpWLQZPjHlH7bQk60LCZnHG3ClBW1RiiSnLWFgWzIskhCRbaglbYgMqeRE1sQQu8T/+0Y6easYmyokWc8LABHSOi4RTHPQHnoojt1AvoYA4oazZovWtttRIFFzpdqWE8jeY8w6Iw4iKVzY6RPXc+goBDiCgT3vEizCyjLbD9ZM2YYdq+WpnXMO0bxk3wQWvmEcsrMN0xBCHg/eu9ikixxdmF8O5UCGllpHoxxIF1dKNtQKa8hKB06hzeZzLQZrMhZXDAKO6vqAYxjYSKytQEm6YNep8sZ260SbfDLZzCdjwrIE61dHKg8EksLyD/JdJD65TF7IZXoaW1tbYig9y2ohNln4bux9chSzS9xdopjZGhPISIwXxvVVk5kqdadr7fBRd5SnA1fHvB2ZCkLnJxBuKYNHtOzpeaNvWSzhGrmE8WBDBDiu2+wG8Ug9bQmyEosCofJ4nhs1fY5rzoiDIUoXQzwHbgS2q8wSwBo+3ea7u9awh9q8Nk2TCjRNiLbSJMy3w4gAhGFFERKcCmhR2HjgNsD72iYqbUtTNAar3YDHJlxbJ0QyTTsIdOdEUVHWw+8/g61cWNShca1FE1cUXfO4qJPmk72LYgejrpgkGJrmoZ9nUjA7vEpAEl6QsZNUIKpv7XbOq04dUqeYC1RNBL8L/2dvQXMMhM3N67sAQPUFUvr2Sz5PCyAEt3sWJOAoeusQh/PA/8Lw3UUXZckxfFVT3enFXewrZM6c5sy9s0XVCfWVO7TH/CAYvOkPLkYQwPqNCaNxTqETp+5ikO5xftOecG2hNaovJmS60U7XBfKeaIeYpEJ/7itS97xmvVjBHkkqRtdRDTNt59MJghGMtRdFJX1lB7pXnmP126NZMu94r9cWBI8IAdyKaPvf0BVBF7RMNtZpq9tPi7HPGthoPfahUACbawcxBcIKeUM3l6UCAint8axKjLZSWoTXXqv2/njjOXESW8jwpY/b0djlBZbnw2ntd0wR8+iepSvBiqetnjWYlJYSt6V37n2bS0VWoS6rxMsth7S3fMDOn6nhRlYw9Th8kqO41WnadgpLDzx+tuMYqwmBPjHoKJmdR7Cyw6rgiDoWfoTgJ2yh2MiA3LRQNITGf5feiU5uRDC8U2IrFKJ93ROW0DFLqgPqzubBdL7qL/kknPa+1/H3vWFJ+qQTwrkjNmlHULZhEFzeMeVAkVKrXzumFkBCShjnnHFl+Extk58xsTVPX3l3D4WAc2LZqk/WjzGVKAkLrWtiq4nCkS4F0jFH3cZMfYHT04q3OmcvJZ1pn4CTQj5w+kScVDmWhhtUcgVRAti8C7h+FbG1eBEHh0TMsRwkoh6kSYI4uCRBeMfV4vo186hLSnw9Grl6/CnaA2EuABH1cQ/ijk3ptFEQMjVUp2hSQg1A+3QMwr3Tx44IQpHhXEvcIDNOBaAOA/YOgnfdX2Rj4cOTbfU8q5QEokGOIlT1coqykylQkdyO5963F4usSD7M+WLndusaUwgoWloq2WpHgGIcIsIYUYjTw1SQI1nhtjnYgL4K32IDOIpqMkWB0ZHyShAx+8bqvsbeI8TS9HMG3Mtwe3fdBBAu3FRXWxM5yiftVu4YWqvQtuaJvXEJRnsW6VonxJoFLtL/eZuQKqSAYRJsKJaegrKOlxYnDKlvM3hEf42tUVltJY132pVjdkFr3S4oygCxmHZEa4sIY2G747m7rRIb72rJ0mifD6Ty3Druwny44LtbSm6adXRXvzzM5HNM1BNT242YgpQeQc8d/RaMKDrYsKhzUQiZnCiwiz87XXl6EEX0VhcGJEtJFDsaLHQ4KzrfssgukCOhTX4jigI3i6KHXlWxDZTZ3TQpUbhw4Tm0DaMEpicKF3A7h1lBpS2K3kvdEHYRoV76InUtC2bDcSnIoi8G4lrASdYdds3tYFSmpxcXsMSAJRsQeNXLvUIVgYDKyFyR/iBcIYDdBjbHO0h1oGCkBhJgx8p6NFNy2zJYI+3cEEFd5Do8CEynjZGrutmsmUdNLqyQNbno6QCRCErkYNRYUGBQ4r0T1whSQJU7YCTeBIJ2w0ssCBCdaLm6aHkULQ+x9uulVfvzAXGtvrK2owsmApAwkP6QjQSd8ntcbFQfV5OJCyB+3qwcQlIABzrEPW8NqD3zxZqciM/U/5psxMHuExPmKqceMJb71CrtMTq8Gw9qOAglu/NrNvyWf4G/Z7/+jzMhOm6pYnenb8H45MGJQ9n0tFgRCphd6rpZ6JJIHQK4DwD55UvaDQAeHm6YqM2jtKAGHFpTXIrAgnAaTwOayv8vGhBRAzA0u+P1qAGRWs16NeB2PltoxwSvl0v9yHJJpKwPfCQFj5krYhwDYq3DlXALHon0RKURVSISLn4xCt6WjM4+Qwm4hRNCwFvYaKjnFiFE+RXkWBDbIpC5e+qGL1AEbBgq8va8iHgr8Gv25x1ll/15S/O41vTka3BCqiuTYklwqO5VR4m/A28xeS7Oqh9nlUIES10gM1ln1Var0K3LotxkddcgHkS7S21iY/SRNsJTedN1UrvhtyIqfxJ2dVzLaL0iCEgLN1O0ixIa6lJMPy9Yn5fQcLtXPUOOnekWqze8lJM0M4W5QBVT0V1NXiyjwCBA1vpZcgJEuaXQYSGvU/oq7ZLatkprrNB6c6HIBiWkmD9xbT4TjlOkySMFd0gg+eQpOVkDVx7Cd9rhrPOrROM+ccdDyyLVFXlChTQ1OKvuGFJCIFitFhzcu1squWPcWG2819/Ge6nIipVGD4Fm8AzqpXhlJaLXigfbYgh0aF01xYpFNcURaEWEA444J9VQpNAekQcqitmEgv2xOS6AIKxmOuRXYZDz9e7hLNLkQAg79BrRTR
zjy1XQ/D+7fRsuXx+fB+LmYyDu/jMesHv9dw7hiLviM0dhMe1U5D3AawKv9fHReK6XZOuzs+FiixUdR3yCAtqSkRBtqCVl2MaUaFTJB5uzIyypfGkGr/D+SFN4nN/ghwdLeD+yFoXnTCuFJFAUmflsgp2yHKG3WgJsz8GEkWgy9QH1CbmrQ7lvWzenKlkioCSJCLgFrK2Lmdy8kFjHwcf6uf3ZZhA7Foh1wTQzT6YMmtXuCti/Mdls5xbJmOgf1xNI7QlK1iQxY8J80P7FQex0EHk3SLCWSyHa1ITqYtlpeohYu7c123SDjhxu07eVOUqvQeoWy4LJeinXmXNJAakOo2IRlZEgddeqr0VqPCeSuzFCu7ibOneTeuzK0gDFGy2ThVGHx67SVFysDBqA0CSE+gldzYbqSZo5KqzualpzARMbr0ojQSnYpolD2AXMyFz8+mK9TiywcpHisxahIkDyOpugMAKUJeyLfMBmOr6oVTdQw1I8umSPK2fBiyxP0JrVprTuWpHttg9DbaVhTc4vCqMwBtOOqurompFDWEhl66F1LR8r4q6eBAFSa1MkZDn3ZBPhuhi7n7IE0Mae5J4lnk05FWy2v5V3OJpaZ7XirrwRrytB1uYGbINTAS3iU2yCPnPB1pR0q1V/xRLQkeMsAU+DM92FtHgw0YqzI+OQI2Xi4fiWwvdVg8YzCt91iK5D7VpXSxTQFnCzGEstc2F645aF6Ra/WlI8/VYHetGloxmb6aXjcH9qadM0qh8ZeeC+Wzy4EWcc4CLpbF+hEAcUb+VBE4tquYtcnaIvi7D+RdSRJZbgYKIZIQ6gH61Djb51yupC3dW3cHawrr/xwweLtGjnTU0XDl20r3bvvCzarUiC851RNtaRRA+Ldv1yPsult1Ut/exff5uN8mrk+/8H</diagram></mxfile>
2011.13005/main_diagram/main_diagram.pdf ADDED
Binary file (59.1 kB).
 
2011.13005/paper_text/intro_method.md ADDED
@@ -0,0 +1,144 @@
+ # Introduction
+
+ Recent work has made substantial progress in fully automatic, 3D feature-based point cloud registration. At first glance, benchmarks like *3DMatch* [\[56\]](#page-9-0) appear to be saturated, with multiple state-of-the-art (SoTA) methods [\[18,](#page-8-0) [9,](#page-8-1) [3\]](#page-8-2) reaching nearly 95% feature matching recall and successfully registering >80% of all scan pairs. One may get the impression that the registration problem is solved—but this is actually not the case. We argue that the high success rates are a consequence of lenient evaluation protocols. We have been making our task too easy: existing literature and benchmarks [\[6,](#page-8-3) [56,](#page-9-0) [23\]](#page-8-4) consider only pairs of point clouds with ≥30% overlap to measure performance. Yet, the low-overlap regime is very relevant for practical applications. On the one hand, it may be difficult to ensure high overlap, for instance when moving along narrow corridors, or when closing loops in the presence of occlusions (densely built-up areas, forest, etc.). On the other hand, data acquisition is
+
+ <span id="page-0-0"></span>![](_page_0_Figure_10.jpeg)
+
+ Figure 1: PREDATOR is designed to focus attention on the overlap region, and to prefer salient points in that region, so as to enable robust registration in spite of low overlap.
+
+ often costly, so practitioners aim for a low number of scans with only the necessary overlap [\[52,](#page-9-1) [53\]](#page-9-2).
+
+ Driven by the evaluation protocol, the high-overlap scenario became the focus of research, whereas the more challenging low-overlap examples were largely neglected (*cf.* Fig. [1\)](#page-0-0). Consequently, the registration performance of even the best known methods deteriorates rapidly when the overlap between the two point clouds falls below 30%, see Fig. [2.](#page-1-0) Human operators, in contrast, can still register such low-overlap point clouds without much effort.
+
+ This discrepancy is the starting point of the present work. To study its reasons, we have constructed a low-overlap dataset *3DLoMatch* from scans of the popular *3DMatch* benchmark, and have analysed the individual modules/steps of the registration pipeline (Fig. [2\)](#page-1-0). It turns out that the effective receptive field of fully convolutional feature point descriptors [\[9,](#page-8-1) [3\]](#page-8-2) is local enough and the descriptors are hardly corrupted by the non-overlapping parts of the scans. Rather than coming up with yet another way to learn better descriptors, the key to registering low-overlap point clouds is *learning where to sample feature points*. A large performance boost can be achieved if the feature points are predominantly sampled from the overlapping portions of the scans (Fig. [2,](#page-1-0) right).
+
+ We follow this path and introduce PREDATOR, a neural architecture for pairwise 3D point cloud registration that learns to detect the overlap region between two unregistered scans, and to focus on that region when sampling feature
+
+ <sup>∗</sup>First two authors contributed equally to this work.
+
+ <span id="page-1-1"></span><span id="page-1-0"></span>![](_page_1_Figure_0.jpeg)
+
+ Figure 2: Registration with SoTA methods deteriorates rapidly for pairs with <30% overlap (*left*). By increasing the fraction of points sampled in the overlap region, many failures can be avoided, as shown here for FCGF [\[9\]](#page-8-1) (*right*).
+
+ points. The main contributions of our work are:
+
+ - an analysis of why existing registration pipelines break down in the low-overlap regime;
+ - a novel *overlap attention* block that allows for early information exchange between the two point clouds and focuses the subsequent steps on the overlap region;
+ - a scheme to refine the feature point descriptors, by conditioning them also on the respective other point cloud;
+ - a novel loss function to train *matchability* scores, which help to sample better and more repeatable interest points.
+
+ Moreover, we make available the *3DLoMatch* dataset, containing the previously ignored scan pairs of *3DMatch* that have low (10–30%) overlap. In our experiments, PREDATOR greatly outperforms existing methods in the low-overlap regime, increasing registration recall by >15 percentage points. It also sets a new state of the art on the *3DMatch* benchmark, reaching a registration recall of >90%.
+
+ # Method
+
+ PREDATOR is a two-stream encoder-decoder network. Our default implementation uses residual blocks with KPConv-style point convolutions [40], but the architecture is agnostic w.r.t. the backbone and can also be implemented with other formulations of 3D convolutions, such as sparse voxel convolutions [8] (cf. Appendix). As illustrated in Fig. 3, the architecture of PREDATOR can be decomposed into three main modules:
+
+ - 1. encoding of the two point clouds into smaller sets of superpoints and associated latent feature encodings, with shared weights (Sec. 3.2);
+ - 2. the overlap attention module (in the bottleneck) that extracts co-contextual information between the feature encodings of the two point clouds, and assigns each superpoint two overlap scores that quantify how likely the superpoint itself and its soft-correspondence are located in the overlap between the two inputs (Sec. 3.3);
+ - 3. decoding of the mutually conditioned bottleneck representations to point-wise descriptors as well as refined per-point overlap and matchability scores (Sec. 3.4).
+
+ Before diving into each component, we lay out the basic problem setting and notation in Sec. 3.1.
+
+ Consider two point clouds $\mathbf{P} = \{\mathbf{p}_i \in \mathbb{R}^3 \,|\, i = 1..N\}$ , and $\mathbf{Q} = \{\mathbf{q}_i \in \mathbb{R}^3 \,|\, i = 1..M\}$ . Our goal is to recover a rigid transformation $\mathbf{T}_{\mathbf{P}}^{\mathbf{Q}}$ with parameters $\mathbf{R} \in SO(3)$ and $\mathbf{t} \in \mathbb{R}^3$ that aligns $\mathbf{P}$ to $\mathbf{Q}$ . By a slight abuse of notation we use the same symbols for sets of points and for their corresponding matrices $\mathbf{P} \in \mathbb{R}^{N \times 3}$ and $\mathbf{Q} \in \mathbb{R}^{M \times 3}$ .
+
+ Obviously $\mathbf{T}_{\mathbf{P}}^{\mathbf{Q}}$ can only ever be determined from the data if $\mathbf{P}$ and $\mathbf{Q}$ have sufficient overlap, meaning that after applying the ground truth transformation $\overline{\mathbf{T}}_{\mathbf{P}}^{\mathbf{Q}}$ the overlap ratio satisfies
+
+ $$\frac{1}{N} \left| \left\{ \mathbf{p}_i \in \mathbf{P} \,:\, \left\| \overline{\mathbf{T}}_{\mathbf{P}}^{\mathbf{Q}}(\mathbf{p}_i) - \mathsf{NN}(\overline{\mathbf{T}}_{\mathbf{P}}^{\mathbf{Q}}(\mathbf{p}_i), \mathbf{Q}) \right\|_2 \le v \right\} \right| > \tau \,, \tag{1}$$
+
+ where NN denotes the nearest-neighbour operator w.r.t. its second argument, $\|\cdot\|_2$ is the Euclidean norm, $|\cdot|$ is the set cardinality, and $v$ is a tolerance that depends on the point density.<sup>2</sup> Contrary to previous work [56, 23], where the threshold to even attempt the alignment is typically $\tau > 0.3$ , we are interested in low-overlap point clouds with $\tau > 0.1$ . Fragments with different overlap ratios are shown in Fig. 4.
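+
+ A minimal sketch of this overlap-ratio computation (our own illustration using scipy's KD-tree; the tolerance value and the transform parametrization are assumptions):
+
+ ```python
+ import numpy as np
+ from scipy.spatial import cKDTree
+
+ def overlap_ratio(P, Q, R, t, v=0.05):
+     """Fraction of points in P (N,3) whose ground-truth-aligned position has a
+     neighbour in Q (M,3) within tolerance v, cf. Eq. (1). v=0.05 is a placeholder."""
+     P_aligned = P @ R.T + t                 # apply ground-truth transform T(p) = Rp + t
+     dists, _ = cKDTree(Q).query(P_aligned)  # nearest-neighbour distances in Q
+     return np.mean(dists <= v)
+ ```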
+
+ We follow [40] and first down-sample the raw point clouds with a voxel-grid filter of size $V$, such that $\mathbf{P}$ and $\mathbf{Q}$ have reasonably uniform point density. In the shared encoder,
+
+ <span id="page-2-3"></span><sup>2</sup> For efficiency, $v$ is in practice determined after voxel-grid down-sampling of the two point clouds.
+
+ <span id="page-3-4"></span><span id="page-3-2"></span>![](_page_3_Figure_0.jpeg)
+
+ Figure 4: Fragments with different overlap ratios. Overlap is computed relative to the source fragment (orange).
+
+ a series of ResNet-like blocks and strided convolutions aggregate the raw points into *superpoints* $\mathbf{P}' \in \mathbb{R}^{N' \times 3}$ and $\mathbf{Q}' \in \mathbb{R}^{M' \times 3}$ with associated features $\mathbf{X}^{\mathbf{P}'} \in \mathbb{R}^{N' \times b}$ and $\mathbf{X}^{\mathbf{Q}'} \in \mathbb{R}^{M' \times b}$ . Note that superpoints correspond to a fixed receptive field, so their number depends on the spatial extent of the input point cloud and may be different for the two inputs.
+
+ So far, the features $\mathbf{X}^{\mathbf{P}'}$ , $\mathbf{X}^{\mathbf{Q}'}$ in the bottleneck encode the geometry and context of the two point clouds. But $\mathbf{X}^{\mathbf{P}'}$ has no knowledge of point cloud $\mathbf{Q}$ and vice versa. In order to reason about their respective overlap regions, some cross-talk is necessary. We argue that it makes sense to add that cross-talk at the level of superpoints in the bottleneck, just like a human operator first gets a rough overview of the overall shape to determine likely overlap regions, and only then identifies precise feature points in those regions.
+
+ **Graph convolutional neural network**: Before connecting the two feature encodings, we first further aggregate and strengthen their contextual relations individually with a graph neural network (GNN) [48]. In the following, we describe the GNN for point cloud $\mathbf{P}'$ . The GNN for $\mathbf{Q}'$ is the same. First, the superpoints in $\mathbf{P}'$ are linked into a graph in Euclidean space with the k-NN method. Let $\mathbf{x}_i \in \mathbb{R}^b$ denote the feature encoding of superpoint $\mathbf{p}_i'$ , and $(i,j) \in \mathcal{E}$ the graph edge between superpoints $\mathbf{p}_i'$ and $\mathbf{p}_j'$ . The encoder features are then iteratively updated as
+
+ $${}^{(k+1)}\mathbf{x}_{i} = \max_{(i,j)\in\mathcal{E}} h_{\theta}\left(\operatorname{cat}[{}^{(k)}\mathbf{x}_{i}, {}^{(k)}\mathbf{x}_{j} - {}^{(k)}\mathbf{x}_{i}]\right), \tag{2}$$
+
+ where $h_{\theta}(\cdot)$ denotes a linear layer followed by instance normalization [43] and a LeakyReLU activation [29], $\max(\cdot)$ denotes element-/channel-wise max-pooling, and $\operatorname{cat}[\cdot,\cdot]$ means concatenation. This update is performed twice with separate (not shared) parameters $\theta$ , and the final GNN features $\mathbf{x}_i^{\text{GNN}} \in \mathbb{R}^{d_b}$ are obtained as
+
+ $$\mathbf{x}_i^{\text{GNN}} = h_{\theta}(\operatorname{cat}[{}^{(0)}\mathbf{x}_i, {}^{(1)}\mathbf{x}_i, {}^{(2)}\mathbf{x}_i]). \tag{3}$$
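+
+ A compact PyTorch sketch of this edge-convolution-style update (our own illustration; the k-NN indices are assumed given, and the instance normalization inside $h_\theta$ is omitted for brevity):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class EdgeConv(nn.Module):
+     """One GNN update as in Eq. (2): max over k-NN edges of h_theta(cat[x_i, x_j - x_i])."""
+     def __init__(self, b: int):
+         super().__init__()
+         # Paper's h_theta also includes instance normalization; omitted here.
+         self.h = nn.Sequential(nn.Linear(2 * b, b), nn.LeakyReLU(0.2))
+
+     def forward(self, x, nbr_idx):
+         # x: (N', b) superpoint features; nbr_idx: (N', k) indices of k-NN neighbours
+         x_j = x[nbr_idx]                                    # (N', k, b)
+         x_i = x.unsqueeze(1).expand_as(x_j)                 # (N', k, b)
+         msg = self.h(torch.cat([x_i, x_j - x_i], dim=-1))   # h_theta on each edge
+         return msg.max(dim=1).values                        # channel-wise max over edges
+ ```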
+
+ **Cross-attention block**: Knowledge about potential overlap regions can only be gained by mixing information about both point clouds. To this end we adopt a cross-attention block [36] based on the message passing formulation [16].
+
+ First, each superpoint in $\mathbf{P}'$ is connected to all superpoints in $\mathbf{Q}'$ to form a bipartite graph. Inspired by the Transformer architecture [45], vector-valued queries $\mathbf{s}_i \in \mathbb{R}^b$ are used to retrieve the values $\mathbf{v}_j \in \mathbb{R}^b$ of other superpoints based on their keys $\mathbf{k}_j \in \mathbb{R}^b$ , where
+
+ $$\mathbf{k}_{j} = \mathbf{W}_{k} \mathbf{x}_{j}^{\text{GNN}}, \quad \mathbf{v}_{j} = \mathbf{W}_{v} \mathbf{x}_{j}^{\text{GNN}}, \quad \mathbf{s}_{i} = \mathbf{W}_{s} \mathbf{x}_{i}^{\text{GNN}}, \tag{4}$$
+
+ and $\mathbf{W}_k$ , $\mathbf{W}_v$ , and $\mathbf{W}_s$ are learnable weight matrices. The messages are computed as weighted averages of the values,
+
+ $$\mathbf{m}_{i\leftarrow} = \sum_{j:(i,j)\in\mathcal{E}} a_{ij} \mathbf{v}_j , \tag{5}$$
+
+ with attention weights $a_{ij} = \operatorname{softmax}(\mathbf{s}_i^T \mathbf{k}_j / \sqrt{b})$ [36]. I.e., to update a superpoint $\mathbf{p}_i'$ one combines that point's query with the keys and values of all superpoints $\mathbf{q}_j'$ . In line with the literature, in practice we use a multi-head attention layer with four parallel attention heads [45]. The co-contextual features are computed as
+
+ $$\mathbf{x}_i^{\text{CA}} = \mathbf{x}_i^{\text{GNN}} + \operatorname{MLP}(\operatorname{cat}[\mathbf{s}_i, \mathbf{m}_{i\leftarrow}]), \tag{6}$$
+
+ with $\operatorname{MLP}(\cdot)$ denoting a three-layer fully connected network with instance normalization [43] and ReLU [30] activations after the first two layers. The same cross-attention block is also applied in the reverse direction, so that information flows both ways, $\mathbf{P}' \to \mathbf{Q}'$ and $\mathbf{Q}' \to \mathbf{P}'$ .
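+
+ The cross-attention step, sketched in PyTorch under our own assumptions (single head shown, whereas the paper uses four; the MLP is simplified to two plain layers):
+
+ ```python
+ import math
+ import torch
+ import torch.nn as nn
+
+ class CrossAttention(nn.Module):
+     """Single-head cross-attention from Q' to P', cf. Eqs. (4)-(6) (illustrative)."""
+     def __init__(self, b: int):
+         super().__init__()
+         self.Wk, self.Wv, self.Ws = (nn.Linear(b, b, bias=False) for _ in range(3))
+         self.mlp = nn.Sequential(nn.Linear(2 * b, b), nn.ReLU(), nn.Linear(b, b))
+
+     def forward(self, x_p, x_q):
+         # x_p: (N', b) features of P'; x_q: (M', b) features of Q'
+         s = self.Ws(x_p)                                    # queries from P'
+         k, v = self.Wk(x_q), self.Wv(x_q)                   # keys/values from Q'
+         a = torch.softmax(s @ k.T / math.sqrt(s.shape[-1]), dim=-1)  # (N', M')
+         m = a @ v                                           # messages, Eq. (5)
+         return x_p + self.mlp(torch.cat([s, m], dim=-1))    # co-contextual update, Eq. (6)
+ ```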
+
+ **Overlap scores of the bottleneck points**: The above update with co-contextual information is done for each superpoint in isolation, without considering the local context within each point cloud. We therefore explicitly update the local context after the cross-attention block using another GNN that has the same architecture and underlying graph (within-point cloud links) as above, but separate parameters $\theta$ . This yields the final latent feature encodings $\mathbf{F}^{\mathbf{P}'} \in \mathbb{R}^{N' \times b}$ and $\mathbf{F}^{\mathbf{Q}'} \in \mathbb{R}^{M' \times b}$ , which are now conditioned on the features of the respective other point cloud. Those features are linearly projected to overlap scores $\mathbf{o}^{\mathbf{P}'} \in \mathbb{R}^{N'}$ and $\mathbf{o}^{\mathbf{Q}'} \in \mathbb{R}^{M'}$ , which can be interpreted as probabilities that a certain superpoint lies in the overlap region. Additionally, one can compute soft correspondences between superpoints and from the correspondence weights predict the cross-overlap score of a superpoint $\mathbf{p}'_i$ , i.e., the probability that its correspondence in $\mathbf{Q}'$ lies in the overlap region:
+
+ <span id="page-3-3"></span>
+ $$\tilde{o}_i^{\mathbf{P}'} := \mathbf{w}_i^T \mathbf{o}^{\mathbf{Q}'}, \quad w_{ij} := \operatorname{softmax}_j \left( \frac{1}{t} \langle \mathbf{f}_i^{\mathbf{P}'}, \mathbf{f}_j^{\mathbf{Q}'} \rangle \right), \tag{7}$$
+
+ where $\langle \cdot, \cdot \rangle$ is the inner product, and $t$ is a temperature parameter that controls the soft assignment. In the limit $t \rightarrow 0$ , Eq. (7) converges to hard nearest-neighbour assignment.
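+
+ In code, this cross-overlap score is a softmax-weighted lookup (our own illustration; the temperature value is a placeholder):
+
+ ```python
+ import torch
+
+ def cross_overlap_scores(F_p, F_q, o_q, t=0.1):
+     """Eq. (7): soft-correspondence overlap scores for the superpoints of P'."""
+     w = torch.softmax(F_p @ F_q.T / t, dim=1)   # (N', M') soft correspondences
+     return w @ o_q                              # (N',) expected overlap of the match
+ ```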
+
+ Our decoder starts from the conditioned features $\mathbf{F}^{\mathbf{P}'}$ , concatenates them with the overlap scores $\mathbf{o}^{\mathbf{P}'}$ , $\tilde{\mathbf{o}}^{\mathbf{P}'}$ , and outputs per-point feature descriptors $\mathbf{F}^{\mathbf{P}} \in \mathbb{R}^{N \times 32}$ and refined
+
+ <span id="page-4-2"></span>per-point overlap and matchability scores $\mathbf{o}^{\mathbf{P}}, \mathbf{m}^{\mathbf{P}} \in \mathbb{R}^N$ . The matchability can be seen as a "conditional saliency" that quantifies how likely a point is to be matched correctly, given the points (resp. features) in the other point cloud $\mathbf{Q}$ .
+
+ The decoder architecture combines NN-upsampling with linear layers, and includes skip connections from the corresponding encoder layers. We deliberately keep the overlap score and the matchability separate to disentangle the reasons why a point is a good/bad candidate for matching: in principle a point can be unambiguously matchable but lie outside the overlap region, or it can lie in the overlap but have an ambiguous descriptor. Empirically, we find that the network learns to predict high matchability mostly for points in the overlap, probably reflecting the fact that the ground truth correspondences used for training naturally always lie in the overlap. For further details about the architecture, please refer to the Appendix and the source code.
+
+ PREDATOR is trained end-to-end, using three losses w.r.t. ground truth correspondences as supervision.
+
+ **Circle loss**: To supervise the point-wise feature descriptors we follow<sup>3</sup> [3] and use the circle loss [39], a variant of the more common triplet loss. Consider again a pair of overlapping point clouds $\mathbf{P}$ and $\mathbf{Q}$ , this time aligned with the ground truth transformation. We start by extracting the points $\mathbf{p}_i \in \mathbf{P}_p \subset \mathbf{P}$ that have at least one (possibly multiple) correspondence in $\mathbf{Q}$ , where the set of correspondences $\mathcal{E}_p(\mathbf{p}_i)$ is defined as the points in $\mathbf{Q}$ that lie within a radius $r_p$ around $\mathbf{p}_i$ . Similarly, all points of $\mathbf{Q}$ outside a (larger) radius $r_s$ form the set of negatives $\mathcal{E}_n(\mathbf{p}_i)$ . The circle loss is then computed from $n_p$ points sampled randomly from $\mathbf{P}_p$ :
+
+ $$\mathcal{L}_{c}^{\mathbf{P}} = \frac{1}{n_{p}} \sum_{i=1}^{n_{p}} \log \left[ 1 + \sum_{j \in \mathcal{E}_{p}} e^{\beta_{p}^{j} (d_{i}^{j} - \Delta_{p})} \cdot \sum_{k \in \mathcal{E}_{n}} e^{\beta_{n}^{k} (\Delta_{n} - d_{i}^{k})} \right], \tag{8}$$
+
+ where $d_i^j = \|\mathbf{f}_{\mathbf{p}_i} - \mathbf{f}_{\mathbf{q}_j}\|_2$ denotes distance in feature space, and $\Delta_n, \Delta_p$ are negative and positive margins, respectively. The weights $\beta_p^j = \gamma(d_i^j - \Delta_p)$ and $\beta_n^k = \gamma(\Delta_n - d_i^k)$ are determined individually for each positive and negative example, using the empirical margins $\Delta_p := 0.1$ and $\Delta_n := 1.4$ with hyper-parameter $\gamma$ . The reverse loss $\mathcal{L}_c^{\mathbf{Q}}$ is computed in the same way, for a total circle loss $\mathcal{L}_c = \frac{1}{2}(\mathcal{L}_c^{\mathbf{P}} + \mathcal{L}_c^{\mathbf{Q}})$ .
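+
+ A condensed sketch of this circle loss for a single anchor point (our own illustration; the positive/negative distance vectors are assumed precomputed, and the gamma value is a placeholder):
+
+ ```python
+ import torch
+
+ def circle_loss_anchor(d_pos, d_neg, delta_p=0.1, delta_n=1.4, gamma=10.0):
+     """Eq. (8), inner term for one anchor: d_pos/d_neg are feature-space
+     distances to its positives E_p and negatives E_n."""
+     beta_p = gamma * (d_pos - delta_p)           # per-positive weights
+     beta_n = gamma * (delta_n - d_neg)           # per-negative weights
+     pos = torch.exp(beta_p * (d_pos - delta_p)).sum()
+     neg = torch.exp(beta_n * (delta_n - d_neg)).sum()
+     return torch.log1p(pos * neg)                # log(1 + pos * neg)
+ ```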
+
+ **Overlap loss**: The estimation of the overlap probability is cast as binary classification and supervised using the overlap loss $\mathcal{L}_o = \frac{1}{2}(\mathcal{L}_o^{\mathbf{P}} + \mathcal{L}_o^{\mathbf{Q}})$ , where
+
+ $$\mathcal{L}_o^{\mathbf{P}} = -\frac{1}{|\mathbf{P}|} \sum_{i=1}^{|\mathbf{P}|} \left[ \bar{o}_{\mathbf{p}_i} \log(o_{\mathbf{p}_i}) + (1 - \bar{o}_{\mathbf{p}_i}) \log(1 - o_{\mathbf{p}_i}) \right]. \tag{9}$$
+
+ <span id="page-4-1"></span>![](_page_4_Figure_10.jpeg)
+
+ Figure 5: Example results of PREDATOR, which succeeds in attending to the overlap region to enable robust registration.
+
+ The ground truth label $\bar{o}_{\mathbf{p}_i}$ of point $\mathbf{p}_i$ is defined as
+
+ $$\bar{o}_{\mathbf{p}_{i}} = \begin{cases} 1, & \|\overline{\mathbf{T}}_{\mathbf{P}}^{\mathbf{Q}}(\mathbf{p}_{i}) - \mathsf{NN}(\overline{\mathbf{T}}_{\mathbf{P}}^{\mathbf{Q}}(\mathbf{p}_{i}), \mathbf{Q})\|_{2} < r_{o} \\ 0, & \text{otherwise,} \end{cases} \tag{10}$$
+
+ with overlap threshold $r_o$ . The reverse loss $\mathcal{L}_o^{\mathbf{Q}}$ is computed in the same way. The contributions from positive and negative examples are balanced with weights inversely proportional to their relative frequencies.
+
+ **Matchability loss**: Supervising the matchability scores is more difficult, as it is not clear in advance which are the right points to take into account during correspondence search. We follow a simple intuition: good keypoints are those that can be matched successfully at a given point during training, with the current feature descriptors. Hence, we cast the prediction as binary classification and generate the ground truth labels on the fly. Again, we sum the two symmetric losses, $\mathcal{L}_m = \frac{1}{2}(\mathcal{L}_m^{\mathbf{P}} + \mathcal{L}_m^{\mathbf{Q}})$ , with
+
+ $$\mathcal{L}_{m}^{\mathbf{P}} = -\frac{1}{|\mathbf{P}|} \sum_{i=1}^{|\mathbf{P}|} \left[ \overline{m}_{\mathbf{p}_{i}} \log(m_{\mathbf{p}_{i}}) + (1 - \overline{m}_{\mathbf{p}_{i}}) \log(1 - m_{\mathbf{p}_{i}}) \right], \tag{11}$$
+
+ where the ground truth labels $\overline{m}_{\mathbf{p}_i}$ are computed on the fly via nearest neighbour search $\mathsf{NN}_{\mathbf{F}}(\cdot,\cdot)$ in feature space:
+
+ $$\overline{m}_{\mathbf{p}_i} = \begin{cases} 1, & \|\overline{\mathbf{T}}_{\mathbf{P}}^{\mathbf{Q}}(\mathbf{p}_i) - \mathsf{NN}_{\mathbf{F}}(\mathbf{p}_i, \mathbf{Q})\|_2 < r_m \\ 0, & \text{otherwise.} \end{cases} \tag{12}$$
+
+ <span id="page-4-0"></span><sup>3</sup> Added to the repository after publication, not mentioned in the paper.
+
+ <span id="page-5-5"></span><span id="page-5-2"></span>![](_page_5_Figure_0.jpeg)
+
+ Figure 6: Distribution of the relative overlap ratio before and after filtering the points with the inferred overlap scores, *3DLoMatch* (left) and *3DMatch* (right).
+
+ **Implementation and training**: PREDATOR is implemented in PyTorch and can be trained on a single RTX 3090 GPU. At the start of the training we supervise PREDATOR only with the circle and overlap losses; the matchability loss is added only after a few epochs, when the point-wise features are already meaningful (i.e., >30% of interest points can be matched correctly). The three loss terms are weighted equally. For more details, please refer to the Appendix.
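+
+ A schematic of this training schedule (our own sketch; the warm-up length is a placeholder, only the equal weighting and the deferred matchability term come from the text):
+
+ ```python
+ def total_loss(L_c, L_o, L_m, epoch, warmup_epochs=2):
+     """Equal weighting of the three losses; matchability supervision is
+     deferred for a few warm-up epochs (warmup_epochs is assumed)."""
+     loss = L_c + L_o
+     if epoch >= warmup_epochs:
+         loss = loss + L_m
+     return loss
+ ```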
2105.03714/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2021-05-14T15:24:39.006Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1 Safari/605.1.15" etag="W48PtBGGChlGoiTqeZDC" version="14.6.13"><diagram id="xXFTva77O-UGSDJbxexa" name="Page-1">7V3fc5s4EP5r8tgbJCHAj03S9GauneldH9rryw21ZZuWWD6sNM799ScMhF+JIxuzKxP60BohgdCnXe1+Wm0v2NXt9n0Srpcf5UzEF9SZbS/Y9QWlAXP032nBQ1bAPZYVLJJolhWRsuBz9J/IC/N2i7toJja1ikrKWEXreuFUrlZiqmplYZLI+3q1uYzrb12Hi/yNTlnweRrGolXtSzRTy/yzqF+W/y6ixbJ4M/Em2Z3bsKicP3izDGfyvlLE3l2wq0RKlf263V6JOB27YlyydjfP3H3sWCJWyqTBH/Qf5837r+63n98+TNWX+x/+n3+9IX7+nF9hfJd/ct5d9VCMQSLvVjORPoZcsMv7ZaTE53U4Te/ea9B12VLdxvntjUrkz8exYrpkFm6Wj63Ti0+hUiJZ7UqoUza6krFMdq9k3wPuckffmUdxXCm/Cd6Ra/2Fl1J3IFLppHLTavlHiESJ7bPjQx5HXc9WIW+FSh50lbyBW0zVfKZSL7++L3Gnbl62rGDOiophPtcWj88u4dA/ckQOQodaiY43DcT3+RPouDf+zVUv6JDAPnRaUIiZ1h355Uqu9D+XJTrpQMhELeVCrsL4g5TrfNB/CKUecs0X3ilZR0xsI/U1bf4bz6/+rty53uZP3l08FBcr/X1fqxd/Vy/KRrurstXsbaoxy87rkpsoHZTrEsX0G/djqIdE3iVTsWfo8jVAhclCqD31qPv0pEhEHKroV70jJwe4LWu9AvwK4fUw4WVP6FYv1v29/K5/LNRuTLKCudRDUJ0J3r93srjxZrOD9q2uQOl6W94snkKKx+huZk+qP10XV97YnHJxrG0d8bJqDzfrzACaR9t0PjaVs+DEdZyn1PkldZzdnbRzlfL57s9p9Ld+RU1/M9bW30+pb7cv7e3uFe5Sjt+VpaeTdVKR9VKxHyTtWSs75H1iKO8EVd75HtMJQXeXAFqmvU3RZJhgemdgfJ09wrjmlw8KMTkC4PNXyBQT4ADNvn5Fa67bFeJdU/1V4UOlwlpGK7WpPPlTWlBx1ycNc89tkFMv1Of1+vpH1oNysj1+yvHzbwLlAFArHADH4Y6YP+UAOA65vrzq2QFwGwSOj+wAFAwzignhmKkfUlU/xB7lQ+hZWPwElqI7woSoAWyRhWgMMEP1Aopu9q/C3VGFl1S6LRwOYYgq/Oy9QOIZynhluxJDxvczdZg0PDBUXY34ltXdtMkYb0hqNjXyVg0cT2CAkzYlB7U+G3uA9RXaJg/QWH7Z5BSz51AX0CWHuYDN+iAuYDGG/RsQzAoDAncTiFHbfEA8jtHQA2yYDxb5gAWWlpOMBJFlPPcVxhhiXCdwMnoBEBijUjlFN/tfqPm4ULcWajZBXqgpHpM3gL0i6hrKOK6nT6m11hgwVCf39FvUHbCnT0emDkJ+2TNqHUh+Xag12rNijbZrQxV/jYZl8wYVEE9NubzO4RzdIAajy3wcCd8jyc/IPmDMNCfYEr6fLhuDpg8ReVYcZ7Tc8W7zZ2PYdCc8cU+90JEtgwAZd50uAjPsDXwagl5GZb2LbvZvigWWmGIl8fkMVQrobLkc2RRjsFzZoKJXmen5VFzLi8EdUHVGEW9RpOjeFgMOfBuYpQYWHdUNZGDS7ACIgaHqPbqRM9g9DwZGlk0sUd+YZFlzyxrfQhtjy47XCMZn0HF9sDG2rH+IkZPEgB0SJUhpYqzW4hw7QtjFY9IGsMfh0rMwwl3E6DJbjPDT5Al4MfAI2ggvZiCA/qaW6G+beFJ8/X0OgWc1/W2R9uaG2ttDNdFcew+IDgbgABVgDqbDx1N6FebElmP+7v5kb2PY0SEyz023tzmuzLfZszHsqBOemRZFwxOYKhvWZpYxyMiWGGxs2aDCjsz1MmpSTg7Lh9Uo71e0+AYnOQZ2cMqO5pGSl1J2NL19iJQdxRgCeANj0i+NcSMzJ3rODk4RLYkzj3zjpgcJcY1/DkvaDShvozHAPqo3wMFOipLxqCilXjMVFzajw8/hqKi93mBgKORe1z29biAD/x8NB0AMDNXJd1WbRhl0OgeOGNt2/okbjeU36JrO4Sgv0GvmCnnBC2zWh/ECAzALYkwIVdnHt8YLxKMaDX1Ae6NrPWqof3CpRg+Rajz3JcYYYtxdIQ828G5gfoAxyD4qm1N0E2CpRsoaY/VSjZ4XysNj8wawZeSZnmPEdfY92CA8K0OoO2++P+NfNdk7YGffG8k6CPn1cf8/VbDDqgQpn4hVjHyLwENfpGEZvUHFyXvGfB6uPw1HmY0H0tth1Oj5RLz9lNkYRn2IzPumOYRwfW+/zaGNYdSd8MQ9CuOPhBkEyLgLtU9BQR5UGLW5XkYlvn2w3G50zO3WdrfQkwP5mLndjtndsmj70jc9tYpseoGdWi0f/IpFvMmSortbPnD428BMNbAYqW4gA9NmB0AMDFXvMY7QmWN8MLqMjpm/2tvW+CbaGGF2tEoITM854nphwRhh1j/EuIRZ0U0ALY6UO8ZqLY6e/yvA49IGsMsRmB4nxLXCA8QIM1us8NNkDHgx+AjaCi9mIID+RjrtbxWL0nK6+tPf+jKRKQLldNGDtfwoZyKt8T8=</diagram></mxfile>
2105.03714/main_diagram/main_diagram.pdf ADDED
Binary file (31.8 kB).
 
2105.03714/paper_text/intro_method.md ADDED
@@ -0,0 +1,154 @@
+ # Introduction
+
+ Consider a recommendation service that groups news articles by finding clusters in a graph which connects these articles via cross-references. Unfortunately, as cross-references between articles with different political viewpoints are uncommon, this service risks forming ideological filter bubbles. To counter polarization, it must ensure that clusters on topics like finance and healthcare include a diverse range of opinions. This is an example of a constrained clustering problem. The popular spectral clustering algorithm [Ng et al., 2001, von Luxburg, 2007] has been adapted over the years to include constraints such as *must-link* and *cannot-link* constraints [Kamvar et al., 2003, Wang and Davidson, 2010], size-balanced clusters [Banerjee and Ghosh, 2006], and statistical fairness [Kleindessner et al., 2019]. These constraints can be broadly divided into two categories: (i) *population level* constraints that must be satisfied by the clusters as a whole (e.g. size-balanced clusters and statistical fairness); and (ii) *individual level* constraints that must be satisfied at the level of individual nodes (e.g. must/cannot-link constraints). To the best of our knowledge, the only known statistical consistency guarantees for constrained spectral clustering were established in Kleindessner et al. [2019] in the context of a *population level* fairness constraint, where the goal is to find clusters that are balanced with respect to a categorical sensitive node attribute. In this paper, we establish consistency guarantees for constrained spectral clustering under a new and more general *individual level* fairness constraint.
+
+ <sup>∗</sup>Work done while the author was at the Indian Institute of Science, Bangalore.
+
+ Informal problem description: We assume the availability of two graphs: a *similarity graph* G in which the clusters are to be found, and a *representation graph* R, defined on the same set of nodes as G, which encodes the "is representative of" relationship. Our goal is to find clusters in G such that every node has a sufficient number of its representatives from R in all clusters. For example, G may be a graph of consumers based on the similarity of their purchasing habits, and R may be a graph based on the similarity of their sensitive attributes such as gender, race, and sexual orientation. This, for instance, would then be a step towards reducing discrimination in online marketplaces [Fisman and Luca, 2016].
+
+ Contributions and results: *First*, in Section 3.1, we formalize our new individual level fairness constraint for clustering, called the *representation constraint*. It is different from most existing fairness notions, which either apply at the population level [Chierichetti et al., 2017, Rösner and Schmidt, 2018, Bercea et al., 2019, Bera et al., 2019] or are hard to integrate with spectral clustering [Chen et al., 2019, Mahabadi and Vakilian, 2020, Anderson et al., 2020]. Unlike these notions, our constraint can be used with multiple sensitive attributes of different types (categorical, numerical, etc.) and only requires observing an abstract representation graph based on these attributes rather than their actual values, thereby discouraging individual profiling. Appendix A discusses the utility of individual fairness notions.
+
+ *Second*, in Section 3.2, we develop the *representation-aware* variant of unnormalized spectral clustering to find clusters that approximately satisfy the proposed constraint. An analogous variant for normalized spectral clustering is presented in Appendix B.2.
+
+ *Third*, in Section 4.1, we introduce R-PP, a new representation-aware (or fair) planted partition model. This model generates random similarity graphs G conditioned on both the cluster membership of nodes and a given representation graph R. Intuitively, R-PP plants the properties of R in G. We show that this model generates "hard" problem instances and establish the weak consistency<sup>2</sup> of our algorithms under this model for a class of d-regular representation graphs (Theorems 4.1 and 4.2). To the best of our knowledge, these are the first consistency results for constrained spectral clustering under an individual-level constraint. In fact, we show that our results imply the only other similar consistency result (but for a population-level constraint) in Kleindessner et al. [2019] as a special case (Appendix A).
+
+ Finally, *fourth*, we present empirical results on both real and simulated data to corroborate our theoretical findings (Section 5). In particular, our experiments show that our algorithms perform well in practice, even when the d-regularity assumption on R is violated.
+
+ Related work: Spectral clustering has been modified to satisfy individual level *must-link* and *cannot-link* constraints by pre-processing the similarity graph [Kamvar et al., 2003], post-processing the eigenvectors of the graph Laplacian [Li et al., 2009], and modifying its optimization problem [Yu and Shi, 2001, 2004, Wang and Davidson, 2010, Wang et al., 2014, Cucuringu et al., 2016]. It has also been extended to accommodate various population level constraints [Banerjee and Ghosh, 2006, Xu et al., 2009]. We are unaware of theoretical performance guarantees for any of these algorithms.
+
+ Of particular interest to us are the fairness constraints for clustering. One popular population level constraint requires sensitive attributes to be proportionally represented in clusters [Chierichetti et al., 2017, Rösner and Schmidt, 2018, Bercea et al., 2019, Bera et al., 2019, Esmaeili et al., 2020, 2021]. For example, if 50% of the population is female, then the same proportion should be respected in all clusters. Several efficient algorithms for discovering such clusters have been proposed [Schmidt et al., 2018, Ahmadian et al., 2019, Harb and Shan, 2020], though they almost exclusively focus on variants of k-means, while we are interested in spectral clustering. Kleindessner et al. [2019] deserve a special mention as they develop a spectral clustering algorithm for this fairness notion; we recover all the results presented there as a special case of our analysis, as our proposed constraint interpolates between population level and individual level fairness based on the structure of R. While individual fairness notions for clustering have also been explored [Chen et al., 2019, Mahabadi and Vakilian, 2020, Anderson et al., 2020, Chakrabarty and Negahbani, 2021], none of them have previously been used with spectral clustering. See Caton and Haas [2020] for a broader discussion of fairness.
+
+ <sup>2</sup>An algorithm is called weakly consistent if it makes o(N) mistakes with probability 1 − o(1), where N is the number of nodes in the similarity graph G [Abbe, 2018].
+
+ A final line of relevant work concerns consistency results for variants of unconstrained spectral clustering. von Luxburg et al. [2008] established the weak consistency of spectral clustering assuming that the similarity graph $\mathcal{G}$ encodes cosine similarity between examples using feature vectors drawn from a particular probability distribution. Rohe et al. [2011] and Lei and Rinaldo [2015] assume that $\mathcal{G}$ is sampled from variants of the Stochastic Block Model (SBM) [Holland et al., 1983]. Zhang et al. [2014] allow clusters to overlap. Binkiewicz et al. [2017] consider auxiliary node attributes, though, unlike us, their aim is to find clusters that are well aligned with these attributes. A faster variant of spectral clustering was analyzed by Tremblay et al. [2016]. Spectral clustering has also been studied on other types of graphs such as hypergraphs [Ghoshdastidar and Dukkipati, 2017a,b], and strong consistency guarantees are also known [Gao et al., 2017, Lei and Zhu, 2017, Vu, 2018], albeit under stronger assumptions.
+
+ **Notation:** Define $[n] := \{1, 2, \dots, n\}$ for any integer $n$. Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ denote a similarity graph, where $\mathcal{V} = \{v_1, v_2, \dots, v_N\}$ is the set of $N$ nodes and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges. Clustering aims to partition the nodes in $\mathcal{G}$ into $K \geq 2$ non-overlapping clusters $\mathcal{C}_1, \dots, \mathcal{C}_K \subseteq \mathcal{V}$ . We assume the availability of another graph, called a representation graph $\mathcal{R} = (\mathcal{V}, \hat{\mathcal{E}})$ , which is defined on the same set of vertices as $\mathcal{G}$ but with different edges $\hat{\mathcal{E}}$ . The discovered clusters $\mathcal{C}_1, \dots, \mathcal{C}_K$ are required to satisfy a fairness constraint encoded by $\mathcal{R}$ , as described in Section 3.1. $\mathbf{A}, \mathbf{R} \in \{0, 1\}^{N \times N}$ denote the adjacency matrices of graphs $\mathcal{G}$ and $\mathcal{R}$ , respectively. We assume that $\mathcal{G}$ and $\mathcal{R}$ are undirected and that $\mathcal{G}$ has no self-loops.
+
+ We begin with a brief review of unnormalized spectral clustering, which will be useful in describing our algorithm in Section 3.2. The normalized variants of traditional spectral clustering and of our algorithm are deferred to Appendix B. Given a similarity graph $\mathcal{G}$ , unnormalized spectral clustering finds clusters by approximately minimizing the following metric known as ratio-cut [von Luxburg, 2007]:
+
+ $$\operatorname{RCut}(\mathcal{C}_1, \dots, \mathcal{C}_K) = \sum_{i=1}^K \frac{\operatorname{Cut}(\mathcal{C}_i, \mathcal{V} \setminus \mathcal{C}_i)}{|\mathcal{C}_i|}.$$
+
+ Here, $\mathcal{V} \setminus \mathcal{C}_i$ is the set difference between $\mathcal{V}$ and $\mathcal{C}_i$ . For any two subsets $\mathcal{X}, \mathcal{Y} \subseteq \mathcal{V}$ , $\operatorname{Cut}(\mathcal{X}, \mathcal{Y}) = \frac{1}{2}\sum_{v_i\in\mathcal{X},v_j\in\mathcal{Y}} A_{ij}$ counts the number of edges that have one endpoint in $\mathcal{X}$ and another in $\mathcal{Y}$ . Let $\mathbf{D} \in \mathbb{R}^{N \times N}$ be a diagonal degree matrix with $D_{ii} = \sum_{j=1}^{N} A_{ij}$ for all $i \in [N]$ . It is easy to verify that ratio-cut can be expressed in terms of the graph Laplacian $\mathbf{L} := \mathbf{D} - \mathbf{A}$ and a cluster membership matrix $\mathbf{H} \in \mathbb{R}^{N \times K}$ as $\operatorname{RCut}(\mathcal{C}_1, \dots, \mathcal{C}_K) = \operatorname{trace}\{\mathbf{H}^\mathsf{T}\mathbf{L}\mathbf{H}\}$ , where
+
+ $$H_{ij} = \begin{cases} \frac{1}{\sqrt{|\mathcal{C}_j|}} & \text{if } v_i \in \mathcal{C}_j \\ 0 & \text{otherwise.} \end{cases} \tag{1}$$
+
+ Thus, to find good clusters, one can minimize $\operatorname{trace}\{\mathbf{H}^\mathsf{T}\mathbf{L}\mathbf{H}\}$ over all $\mathbf{H}$ that have the form given in (1). However, the combinatorial nature of this constraint makes the problem NP-hard [Wagner and Wagner, 1993]. Unnormalized spectral clustering instead solves the following relaxed problem:
+
+ $$\min_{\mathbf{H} \in \mathbb{R}^{N \times K}} \operatorname{trace}\{\mathbf{H}^{\mathsf{T}} \mathbf{L} \mathbf{H}\} \quad \text{s.t.} \quad \mathbf{H}^{\mathsf{T}} \mathbf{H} = \mathbf{I}. \tag{2}$$
+
+ Note that $\mathbf{H}$ in (1) satisfies $\mathbf{H}^{\mathsf{T}}\mathbf{H} = \mathbf{I}$ . The above relaxation is often referred to as the spectral relaxation. By the Rayleigh-Ritz theorem [Lütkepohl, 1996, Section 5.2.2], the optimal matrix $\mathbf{H}^*$ has $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_K \in \mathbb{R}^N$ as its columns, where $\mathbf{u}_i$ is the eigenvector corresponding to the $i^{th}$ smallest eigenvalue of $\mathbf{L}$ for all $i \in [K]$ . The algorithm clusters the rows of $\mathbf{H}^*$ into $K$ clusters using k-means clustering [Lloyd, 1982] to return $\hat{\mathcal{C}}_1, \dots, \hat{\mathcal{C}}_K$ . Algorithm 1 summarizes this procedure. Unless stated otherwise, we will use spectral clustering (without any qualification) to refer to unnormalized spectral clustering.
42
+
43
+ In this section, we first describe our individual level fairness constraint in Section 3.1 and then develop Unnormalized Representation-Aware Spectral Clustering in Section 3.2 to find clusters that approximately satisfy this constraint. See Appendix B for the normalized variant of the algorithm.
+
+ # Method
+
+ **Algorithm 1** (Unnormalized spectral clustering):
+
+ - 1: **Input:** Adjacency matrix $\mathbf{A}$, number of clusters $K \ge 2$
+ - 2: Compute the Laplacian matrix $\mathbf{L} = \mathbf{D} - \mathbf{A}$.
+ - 3: Compute the first $K$ eigenvectors $\mathbf{u}_1, \dots, \mathbf{u}_K$ of $\mathbf{L}$. Let $\mathbf{H}^* \in \mathbb{R}^{N \times K}$ be a matrix that has $\mathbf{u}_1, \dots, \mathbf{u}_K$ as its columns.
+ - 4: Let $\mathbf{h}_i^*$ denote the $i^{th}$ row of $\mathbf{H}^*$. Cluster $\mathbf{h}_1^*, \dots, \mathbf{h}_N^*$ into $K$ clusters using k-means clustering.
+ - 5: **Output:** Clusters $\hat{\mathcal{C}}_1, \dots, \hat{\mathcal{C}}_K$, s.t. $\hat{\mathcal{C}}_i = \{v_j \in \mathcal{V} : \mathbf{h}_j^* \text{ was assigned to the } i^{th} \text{ cluster}\}$.
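+
+ To make the procedure concrete, the following is a minimal NumPy/scikit-learn sketch of Algorithm 1. This is our own illustrative code, not the authors' implementation, and the function name is hypothetical:
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import KMeans
+
+ def spectral_clustering(A, K):
+     """Unnormalized spectral clustering (Algorithm 1).
+
+     A: (N, N) symmetric adjacency matrix; K: number of clusters.
+     Returns an array of N cluster labels in {0, ..., K-1}.
+     """
+     D = np.diag(A.sum(axis=1))          # degree matrix D
+     L = D - A                           # unnormalized Laplacian L = D - A
+     _, eigvecs = np.linalg.eigh(L)      # eigh returns eigenvalues in ascending order
+     H = eigvecs[:, :K]                  # first K eigenvectors as columns of H*
+     return KMeans(n_clusters=K, n_init=10).fit_predict(H)  # k-means on the rows of H*
+ ```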
+
+ A representation graph $\mathcal{R}$ connects nodes that represent each other based on sensitive attributes (e.g., political opinions). Let $\mathcal{N}_{\mathcal{R}}(i) = \{v_j : R_{ij} = 1\}$ be the set of neighbors of node $v_i$ in $\mathcal{R}$. The size of $\mathcal{N}_{\mathcal{R}}(i) \cap \mathcal{C}_k$ specifies node $v_i$'s representation in cluster $\mathcal{C}_k$. To motivate our constraint, consider the following notion of balance $\rho_i$ of clusters, defined from the perspective of a particular node $v_i$:
+
+ $$\rho_i = \min_{k,\ell \in [K]} \frac{|\mathcal{C}_k \cap \mathcal{N}_{\mathcal{R}}(i)|}{|\mathcal{C}_\ell \cap \mathcal{N}_{\mathcal{R}}(i)|}. \tag{3}$$
+
+ It is easy to see that $0 \le \rho_i \le 1$, and that higher values of $\rho_i$ indicate that node $v_i$ has adequate representation in all clusters. Thus, one objective could be to find clusters $\mathcal{C}_1, \ldots, \mathcal{C}_K$ that solve the following optimization problem:
+
+ $$\min_{\mathcal{C}_1, \dots, \mathcal{C}_K} f(\mathcal{C}_1, \dots, \mathcal{C}_K) \quad \text{s.t.} \quad \rho_i \ge \alpha, \ \forall \ i \in [N], \tag{4}$$
+
+ where $f(\cdot)$ is inversely proportional to the quality of the clusters (such as $\operatorname{RCut}$) and $\alpha \in [0,1]$ is a user-specified threshold. However, it is not clear how this approach can be combined with spectral clustering to develop a consistent algorithm. We take a different approach, described below.
+
+ First, note that $\min_{i \in [N]} \rho_i \leq \min_{k,\ell \in [K]} \frac{|\mathcal{C}_k|}{|\mathcal{C}_\ell|}$. Therefore, the balance $\rho_i$ of the least balanced node $v_i$ is maximized when its representatives $\mathcal{N}_{\mathcal{R}}(i)$ are split across the clusters $\mathcal{C}_1, \ldots, \mathcal{C}_K$ in proportion to the cluster sizes. The representation constraint requires this condition to be satisfied for each node in the graph.
+
+ **Definition 3.1** (Representation constraint). Given a representation graph $\mathcal{R}$, clusters $\mathcal{C}_1, \ldots, \mathcal{C}_K$ in $\mathcal{G}$ satisfy the representation constraint if $|\mathcal{C}_k \cap \mathcal{N}_{\mathcal{R}}(i)| \propto |\mathcal{C}_k|$ for all $i \in [N]$ and $k \in [K]$, i.e.,
+
+ $$\frac{|\mathcal{C}_k \cap \mathcal{N}_{\mathcal{R}}(i)|}{|\mathcal{C}_k|} = \frac{|\mathcal{N}_{\mathcal{R}}(i)|}{N}, \ \forall k \in [K], \ \forall i \in [N]. \tag{5}$$
+
+ In other words, the representation constraint requires the representatives of any given node to have proportional membership in all clusters. For example, if $v_i$ is connected to 30% of all nodes in $\mathcal{R}$, then it must have 30% representation in every cluster discovered in $\mathcal{G}$. It is important to note that this constraint applies at the level of individual nodes, unlike population-level constraints [Chierichetti et al., 2017].
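+
+ For intuition, the balances in (3) and the deviation from (5) are easy to compute for a candidate clustering. The helper below is our own sketch (hypothetical names), assuming every cluster is non-empty:
+
+ ```python
+ import numpy as np
+
+ def representation_gap(R, labels, K):
+     """Maximum deviation from the representation constraint (5).
+
+     R: (N, N) binary representation-graph adjacency matrix; labels: integer
+     NumPy array with the cluster of each node. Returns
+     max over (i, k) of | |C_k ∩ N_R(i)| / |C_k| - |N_R(i)| / N |,
+     which is 0 iff (5) holds exactly.
+     """
+     N = R.shape[0]
+     gap = 0.0
+     for i in range(N):
+         nbrs = np.flatnonzero(R[i])              # N_R(i)
+         for k in range(K):
+             Ck = np.flatnonzero(labels == k)     # cluster C_k
+             lhs = np.intersect1d(nbrs, Ck).size / Ck.size
+             gap = max(gap, abs(lhs - nbrs.size / N))
+     return gap
+ ```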
+
+ While (4) can always be solved for a small enough value of $\alpha$ (with the convention that $0/0 = 1$), the constraint in Definition 3.1 may not always be feasible. For example, (5) can never be satisfied if a node has only two representatives (i.e., $|\mathcal{N}_{\mathcal{R}}(i)| = 2$) and there are $K > 2$ clusters. However, as exactly satisfying constraints in clustering problems is often NP-hard [Davidson and Ravi, 2005], most approaches look for approximate solutions. In the same spirit, our algorithms use spectral relaxation to approximately satisfy (5), ensuring their wide applicability even when exact satisfaction is impossible.
+
+ In practice, $\mathcal{R}$ can be obtained by computing similarities between nodes based on one or more sensitive attributes (say, by taking k-nearest neighbors). These attributes can have different types, as opposed to existing notions that expect categorical attributes (Appendix A). Moreover, once $\mathcal{R}$ has been calculated, the values of the sensitive attributes need not be exposed to the algorithm, which adds a degree of privacy. Appendix A presents a toy example to demonstrate the utility of individual-level fairness and shows that (5) recovers the population-level constraint from Chierichetti et al. [2017] and Kleindessner et al. [2019] for particular configurations of $\mathcal{R}$, thus recovering all results from Kleindessner et al. [2019] as a special case of our analysis.
+
+ Finally, while individual fairness notions have conventionally required similar individuals to be treated similarly [Dwork et al., 2012], our constraint requires similar individuals (neighbors in $\mathcal{R}$) to be spread across different clusters (Definition 3.1). This new type of individual fairness constraint may be of independent interest to the community. Next, we describe one of the proposed algorithms.
+
+ The lemma below identifies a sufficient condition that implies the representation constraint and can be added to the optimization problem (2) solved by spectral clustering. See Appendix E for the proof.
+
+ **Lemma 3.1.** Let $\mathbf{H} \in \mathbb{R}^{N \times K}$ have the form specified in (1). The condition
+
+ $$\mathbf{R}\left(\mathbf{I} - \frac{1}{N}\mathbf{1}\mathbf{1}^{\mathsf{T}}\right)\mathbf{H} = \mathbf{0} \tag{6}$$
+
+ implies that the corresponding clusters $\mathcal{C}_1, \ldots, \mathcal{C}_K$ satisfy the constraint in (5). Here, $\mathbf{I}$ is the $N \times N$ identity matrix and $\mathbf{1}$ is an $N$-dimensional all-ones vector.
+
+ With the unnormalized graph Laplacian $\mathbf{L}$ defined in Section 2, we add the condition from Lemma 3.1 to the optimization problem obtained after spectral relaxation in (2) and solve
+
+ $$\min_{\mathbf{H}} \quad \operatorname{trace}\{\mathbf{H}^{\mathsf{T}}\mathbf{L}\mathbf{H}\} \quad \text{s.t.} \quad \mathbf{H}^{\mathsf{T}}\mathbf{H} = \mathbf{I}; \quad \mathbf{R}\left(\mathbf{I} - \frac{1}{N}\mathbf{1}\mathbf{1}^{\mathsf{T}}\right)\mathbf{H} = \mathbf{0}. \tag{7}$$
+
+ Clearly, the columns of any feasible $\mathbf{H}$ must belong to the null space of $\mathbf{R}(\mathbf{I} - \mathbf{1}\mathbf{1}^{\mathsf{T}}/N)$. Thus, any feasible $\mathbf{H}$ can be expressed as $\mathbf{H} = \mathbf{Y}\mathbf{Z}$ for some matrix $\mathbf{Z} \in \mathbb{R}^{(N-r) \times K}$, where $\mathbf{Y} \in \mathbb{R}^{N \times (N-r)}$ is an orthonormal matrix containing the basis vectors of the null space of $\mathbf{R}(\mathbf{I} - \mathbf{1}\mathbf{1}^{\mathsf{T}}/N)$ as its columns, and $r$ is the rank of $\mathbf{R}(\mathbf{I} - \mathbf{1}\mathbf{1}^{\mathsf{T}}/N)$. Because $\mathbf{Y}^{\mathsf{T}}\mathbf{Y} = \mathbf{I}$, we have $\mathbf{H}^{\mathsf{T}}\mathbf{H} = \mathbf{Z}^{\mathsf{T}}\mathbf{Y}^{\mathsf{T}}\mathbf{Y}\mathbf{Z} = \mathbf{Z}^{\mathsf{T}}\mathbf{Z}$, so $\mathbf{H}^{\mathsf{T}}\mathbf{H} = \mathbf{I} \Leftrightarrow \mathbf{Z}^{\mathsf{T}}\mathbf{Z} = \mathbf{I}$. Setting $\mathbf{H} = \mathbf{Y}\mathbf{Z}$, the following problem is equivalent to (7):
+
+ $$\min_{\mathbf{Z}} \quad \operatorname{trace}\{\mathbf{Z}^{\mathsf{T}}\mathbf{Y}^{\mathsf{T}}\mathbf{L}\mathbf{Y}\mathbf{Z}\} \quad \text{s.t.} \quad \mathbf{Z}^{\mathsf{T}}\mathbf{Z} = \mathbf{I}. \tag{8}$$
+
+ As in standard spectral clustering, the solution to (8) is given by the $K$ leading eigenvectors of $\mathbf{Y}^{\mathsf{T}}\mathbf{L}\mathbf{Y}$. Of course, for $K$ eigenvectors to exist, $N-r$ must be at least $K$, as $\mathbf{Y}^{\mathsf{T}}\mathbf{L}\mathbf{Y}$ has dimensions $(N-r) \times (N-r)$. The clusters can then be recovered by applying k-means clustering to the rows of $\mathbf{H}=\mathbf{Y}\mathbf{Z}$, as in Algorithm 1. Algorithm 2 summarizes this procedure. We refer to this algorithm as unnormalized representation-aware spectral clustering (UREPSC). We make three important remarks before proceeding with the theoretical analysis.
+
+ Remark 1 (Spectral relaxation). As $\mathbf{R}(\mathbf{I}-\mathbf{1}\mathbf{1}^{\mathsf{T}}/N)\mathbf{H}=\mathbf{0}$ implies the satisfaction of the representation constraint only when $\mathbf{H}$ has the form given in (1), a feasible solution to (7) may not necessarily result in *representation-aware* clusters. In fact, even in the unconstrained case, there are no general guarantees that bound the difference between the optimal solution of (2) and that of the original NP-hard ratio-cut problem [Kleindessner et al., 2019]. Thus, the representation-aware nature of the clusters discovered by solving (8) cannot be guaranteed in general (as is also the case in [Kleindessner et al., 2019]). Nonetheless, we show in Section 4 that the discovered clusters indeed satisfy the constraint under certain additional assumptions.
+
+ Remark 2 (Computational complexity). Algorithm 2 has a time complexity of $O(N^3)$ and a space complexity of $O(N^2)$. Finding the null space of $\mathbf{R}(\mathbf{I} - \mathbf{1}\mathbf{1}^{\mathsf{T}}/N)$ to calculate $\mathbf{Y}$ and computing the eigenvectors of the appropriate matrices are the computationally dominant steps. This matches the worst-case complexity of Algorithm 1. For small $K$, several approximations can reduce this complexity, but most such techniques require $K = 2$ [Yu and Shi, 2004, Xu et al., 2009].
+
+ Remark 3 (Approximate UREPSC). Algorithm 2 requires $\operatorname{rank}\{\mathbf{R}\} \leq N-K$ to ensure the existence of $K$ orthonormal eigenvectors of $\mathbf{Y}^\intercal \mathbf{L} \mathbf{Y}$. When a graph $\mathcal{R}$ violates this assumption, we instead use the best rank-$R$ approximation of its adjacency matrix $\mathbf{R}$ (with $R \leq N-K$) and refer to this algorithm as UREPSC (APPROX.). This approximation of $\mathbf{R}$ need not have binary elements, but it works well in practice (Section 5). Appendix C provides more intuition behind this low-rank approximation, contrasts this strategy with clustering $\mathcal{R}$ to recover latent sensitive groups that can be reused with existing population-level notions, and highlights the challenges associated with finding theoretical guarantees for UREPSC (APPROX.), an interesting direction for future work.
+
+ **Algorithm 2** (UREPSC):
+
+ - 1: **Input:** Adjacency matrix $\mathbf{A}$, representation graph $\mathbf{R}$, number of clusters $K \geq 2$
+ - 2: Compute $\mathbf{Y}$ containing orthonormal basis vectors of $\operatorname{null}\{\mathbf{R}(\mathbf{I} - \frac{1}{N}\mathbf{1}\mathbf{1}^{\intercal})\}$
+ - 3: Compute the Laplacian $\mathbf{L} = \mathbf{D} - \mathbf{A}$
+ - 4: Compute the leading $K$ eigenvectors of $\mathbf{Y}^{\intercal}\mathbf{L}\mathbf{Y}$. Let $\mathbf{Z}$ contain these vectors as its columns.
+ - 5: Apply k-means clustering to the rows of $\mathbf{H} = \mathbf{YZ}$ to get clusters $\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2, \dots, \hat{\mathcal{C}}_K$
+ - 6: **Return:** Clusters $\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2, \dots, \hat{\mathcal{C}}_K$
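+
+ The following NumPy sketch mirrors Algorithm 2, obtaining the null-space basis $\mathbf{Y}$ from an SVD. Again, this is our own illustrative code, not the authors' implementation:
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import KMeans
+
+ def urepsc(A, R, K, tol=1e-10):
+     """Unnormalized representation-aware spectral clustering (Algorithm 2 sketch)."""
+     N = A.shape[0]
+     C = R @ (np.eye(N) - np.ones((N, N)) / N)   # R (I - 11^T / N)
+     _, svals, Vt = np.linalg.svd(C)             # singular values in descending order
+     r = int((svals > tol * svals[0]).sum())     # numerical rank of C
+     Y = Vt[r:].T                                # orthonormal basis of null(C), shape (N, N - r)
+     assert Y.shape[1] >= K, "requires rank{R(I - 11^T/N)} <= N - K"
+     L = np.diag(A.sum(axis=1)) - A              # unnormalized Laplacian
+     _, eigvecs = np.linalg.eigh(Y.T @ L @ Y)    # ascending eigenvalues
+     Z = eigvecs[:, :K]                          # leading K eigenvectors of Y^T L Y
+     H = Y @ Z                                   # feasible H = Y Z
+     return KMeans(n_clusters=K, n_init=10).fit_predict(H)
+ ```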
+
+ This section shows that Algorithms 2 and 4 (see Appendix B) recover the ground-truth clusters with high probability under certain assumptions on the representation graph. We begin by introducing the representation-aware planted partition model in Section 4.1.
+
+ The well-known planted partition random graph model independently connects two nodes in $\mathcal{V}$ with probability $p$ if they belong to the same cluster and $q$ otherwise, where the ground-truth cluster memberships are specified by a function $\pi:\mathcal{V}\to [K]$. Below, we define a variant of this model with respect to a representation graph $\mathcal{R}$ and refer to it as the Representation-Aware (or Fair) Planted Partition model, or $\mathcal{R}$-PP.
+
+ **Definition 4.1** ($\mathcal{R}$-PP). An $\mathcal{R}$-PP is defined by the tuple $(\pi, \mathcal{R}, p, q, r, s)$, where $\pi: \mathcal{V} \to [K]$ maps nodes in $\mathcal{V}$ to clusters, $\mathcal{R}$ is a representation graph, and $1 \ge p \ge q \ge r \ge s \ge 0$ are probabilities used for sampling edges. Under this model, for all $i > j$,
+
+ $$P(A_{ij} = 1) = \begin{cases} p & \text{if } \pi(v_i) = \pi(v_j) \text{ and } R_{ij} = 1, \\ q & \text{if } \pi(v_i) \neq \pi(v_j) \text{ and } R_{ij} = 1, \\ r & \text{if } \pi(v_i) = \pi(v_j) \text{ and } R_{ij} = 0, \\ s & \text{if } \pi(v_i) \neq \pi(v_j) \text{ and } R_{ij} = 0. \end{cases} \tag{9}$$
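+
+ For concreteness, a similarity graph can be sampled from this model with a few lines of code (our own sketch; `pi` is an array of ground-truth cluster labels):
+
+ ```python
+ import numpy as np
+
+ def sample_rpp(pi, R, p, q, r, s, seed=0):
+     """Sample an undirected similarity graph A (no self-loops) from (9)."""
+     rng = np.random.default_rng(seed)
+     N = len(pi)
+     A = np.zeros((N, N), dtype=int)
+     for i in range(N):
+         for j in range(i):
+             same = pi[i] == pi[j]
+             prob = (p if same else q) if R[i, j] == 1 else (r if same else s)
+             A[i, j] = A[j, i] = rng.binomial(1, prob)
+     return A
+ ```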
+
+ Similarity graphs $\mathcal{G}$ sampled from an $\mathcal{R}$-PP have two interesting properties: (i) everything else being equal, nodes have a higher tendency of connecting with other nodes in the same cluster ($p \geq q$ and $r \geq s$); and (ii) nodes connected in $\mathcal{R}$ have a higher probability of connecting in $\mathcal{G}$ ($p \geq r$ and $q \geq s$). Thus, an $\mathcal{R}$-PP plants both the clusters in $\pi$ and the properties of $\mathcal{R}$ into the sampled graph $\mathcal{G}$.
+
+ Remark 4 ($\mathcal{R}$-PP and "hard" problem instances). Clusters satisfying (5) must proportionally distribute the nodes connected in $\mathcal{R}$ amongst themselves. However, an $\mathcal{R}$-PP makes nodes connected in $\mathcal{R}$ more likely to connect in $\mathcal{G}$, even if they belong to different clusters ($q \geq r$). In this sense, graphs sampled from an $\mathcal{R}$-PP are "hard" instances for our algorithms.
+
+ When $\mathcal{R}$ itself has latent groups, there are two natural ways to cluster the nodes: (i) based on the clusters specified by $\pi$; or (ii) based on the clusters in $\mathcal{R}$. The clusters from option (ii) are unlikely to satisfy (5), as tightly connected nodes in $\mathcal{R}$ will be assigned to the same cluster. We show in the next section that, under certain assumptions, $\pi$ can be defined so that the clusters encoded by it satisfy (5) by construction. Recovering these ground-truth clusters (instead of other natural choices like option (ii)) then amounts to recovering *representation-aware* clusters.
+
+ As noted in Section 3.1, some representation graphs lead to constraints that cannot be satisfied. For our theoretical analysis, we restrict our focus to a case where the constraint in (5) is feasible. Towards this end, an additional assumption on $\mathcal{R}$ is required.
+
+ **Assumption 4.1.** $\mathcal{R}$ is a $d$-regular graph for $K \leq d \leq N$. Moreover, $R_{ii} = 1$ for all $i \in [N]$, and each node in $\mathcal{R}$ is connected to $d/K$ nodes from cluster $\mathcal{C}_j$ for all $j \in [K]$ (including the self-loop).
+
+ Assumption 4.1 ensures the existence of a $\pi$ for which the ground-truth clusters satisfy (5). Namely, assuming equal-sized clusters, set $\pi(v_i) = k$ if $(k-1)\frac{N}{K} < i \le k\frac{N}{K}$ for all $i \in [N]$ and $k \in [K]$.
+
+ Before presenting our main results, we need additional notation. Let $\Theta \in \{0,1\}^{N \times K}$ indicate the ground-truth cluster memberships encoded by $\pi$ (i.e., $\Theta_{ij} = 1 \Leftrightarrow v_i \in \mathcal{C}_j$) and $\hat{\Theta} \in \{0,1\}^{N \times K}$ indicate the clusters returned by the algorithm ($\hat{\Theta}_{ij} = 1 \Leftrightarrow v_i \in \hat{\mathcal{C}}_j$). With $\mathcal{J}$ as the set of all $K \times K$ permutation matrices, the fraction of misclustered nodes is defined as $M(\Theta, \hat{\Theta}) = \min_{\mathbf{J} \in \mathcal{J}} \frac{1}{N} \|\Theta - \hat{\Theta} \mathbf{J}\|_0$ [Lei and Rinaldo, 2015]. Theorems 4.1 and 4.2 use the eigenvalues of the Laplacian matrix in the expected case, defined as $\mathcal{L} = \mathcal{D} - \mathcal{A}$, where $\mathcal{A} = \mathrm{E}[\mathbf{A}]$ is the expected adjacency matrix of a graph sampled from an $\mathcal{R}$-PP and $\mathcal{D} \in \mathbb{R}^{N \times N}$ is its corresponding degree matrix. The next two results establish high-probability upper bounds on the fraction of misclustered nodes for UREPSC and NREPSC (see Appendix B) for similarity graphs $\mathcal{G}$ sampled from an $\mathcal{R}$-PP.
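+
+ For small $K$, $M(\Theta, \hat{\Theta})$ can be computed exactly by enumerating the $K!$ permutation matrices (our own sketch):
+
+ ```python
+ import numpy as np
+ from itertools import permutations
+
+ def misclustered_fraction(theta, theta_hat):
+     """M(Theta, Theta_hat) = min_J (1/N) ||Theta - Theta_hat J||_0 over K x K permutations J."""
+     N, K = theta.shape
+     best = np.inf
+     for perm in permutations(range(K)):      # feasible for small K only
+         J = np.eye(K)[:, list(perm)]         # permutation matrix
+         best = min(best, np.count_nonzero(theta - theta_hat @ J))
+     return best / N
+ ```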
+
+ **Theorem 4.1** (Error bound for UREPSC). Let $\operatorname{rank}\{\mathbf{R}\} \leq N-K$ and assume that all clusters have equal sizes. Let $\mu_1 \leq \mu_2 \leq \cdots \leq \mu_{N-r}$ denote the eigenvalues of $\mathbf{Y}^\intercal \mathcal{L} \mathbf{Y}$, where $\mathbf{Y}$ was defined in Section 3.2. Define $\gamma = \mu_{K+1} - \mu_K$. Under Assumption 4.1, there exists a universal constant $\operatorname{const}(C,\alpha)$ such that if $\gamma$ satisfies $\gamma^2 \geq \operatorname{const}(C,\alpha)(2+\epsilon)pNK \ln N$ and $p \geq C \ln N/N$ for some $C > 0$, then
+
+ $$M(\mathbf{\Theta}, \hat{\mathbf{\Theta}}) \le \operatorname{const}(C, \alpha) \frac{(2+\epsilon)}{\gamma^2} pN \ln N$$
+
+ for every $\epsilon > 0$ with probability at least $1 - 2N^{-\alpha}$, when a $(1 + \epsilon)$-approximate algorithm for k-means clustering is used in Step 5 of Algorithm 2.
+
+ **Theorem 4.2** (Error bound for NREPSC). Let $\operatorname{rank}\{\mathbf{R}\} \leq N - K$ and assume that all clusters have equal sizes. Let $\mu_1 \leq \mu_2 \leq \cdots \leq \mu_{N-r}$ denote the eigenvalues of $\mathcal{Q}^{-1}\mathbf{Y}^\intercal\mathcal{L}\mathbf{Y}\mathcal{Q}^{-1}$, where $\mathcal{Q} = \sqrt{\mathbf{Y}^\intercal\mathcal{D}\mathbf{Y}}$ and $\mathbf{Y}$ was defined in Section 3.2. Define $\gamma = \mu_{K+1} - \mu_K$ and $\lambda_1 = qd + s(N - d) + (p-q)\frac{d}{K} + (r-s)\frac{N-d}{K}$. Under Assumption 4.1, there are universal constants $\operatorname{const}_1(C,\alpha)$, $\operatorname{const}_2(C,\alpha)$, and $\operatorname{const}_3(C,\alpha)$ such that if
+
+ 1. $\left(\frac{\sqrt{pN\ln N}}{\lambda_1 - p}\right) \left(\frac{\sqrt{pN\ln N}}{\lambda_1 - p} + \frac{1}{6\sqrt{C}}\right) \le \frac{1}{16(\alpha + 1)}$,
+ 2. $\frac{\sqrt{pN \ln N}}{\lambda_1 - p} \leq \operatorname{const}_2(C, \alpha)$, and
+ 3. $16(2+\epsilon) \left[ \frac{8\operatorname{const}_3(C,\alpha)\sqrt{K}}{\gamma} + \operatorname{const}_1(C,\alpha) \right]^2 \frac{pN^2 \ln N}{(\lambda_1 - p)^2} < \frac{N}{K}$,
+
+ and $p \ge C \ln N/N$ for some $C > 0$, then
+
+ $$M(\mathbf{\Theta}, \hat{\mathbf{\Theta}}) \le 32(2+\epsilon) \left[ \frac{8\operatorname{const}_3(C, \alpha)\sqrt{K}}{\gamma} + \operatorname{const}_1(C, \alpha) \right]^2 \frac{pN \ln N}{(\lambda_1 - p)^2}$$
+
+ for every $\epsilon > 0$ with probability at least $1 - 2N^{-\alpha}$, when a $(1 + \epsilon)$-approximate algorithm for k-means clustering is used in Step 6 of Algorithm 4.
+
+ All proofs are deferred to Appendix D. Briefly, we show that the top $K$ eigenvectors of $\mathcal{L}$ (i) recover the ground-truth clusters in the expected case (Lemmas D.1 to D.3) and (ii) lie in the null space of $\mathbf{R}(\mathbf{I} - \mathbf{1}\mathbf{1}^{\mathsf{T}}/N)$ and hence are also the top $K$ eigenvectors of $\mathbf{Y}^{\mathsf{T}}\mathcal{L}\mathbf{Y}$ (Lemma D.4). Matrix perturbation arguments then establish a high-probability mistake bound in the general case, when the graph $\mathcal{G}$ is sampled from an $\mathcal{R}$-PP (Lemmas D.5–D.8). Next, we discuss our assumptions and use the error bounds above to establish the weak consistency of our algorithms.
2105.12245/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
+ <mxfile host="app.diagrams.net" modified="2021-02-05T16:44:03.823Z" agent="5.0 (Macintosh; Intel Mac OS X 11_0_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.146 Safari/537.36" etag="MxtWxNZra3p8Ef6YeEht" version="14.2.9" type="google"><diagram id="nRyR33ac3CmuLcCAgw2L" name="Page-1">7Vpbc5s8EP01PDqDxE082k7TzjS9TDL9ennpyCBsTTCiIMd2f/0ngcAG5MZJbOo6IZkxPhIrsWf3SAs2rPF89TbD6ewDC0lsQDNcGdalAcXhWOJDIusS8Vy/BKYZDUsIbIBb+pso0FTogoYkb3TkjMWcpk0wYElCAt7AcJaxZbNbxOLmqCmekg5wG+C4i36lIZ+VKILeBn9H6HRWjQyq+5vjqrMykc9wyJYlVNyc9cawxhljvDybr8Ykls6r/FJ64GpHaz2xjCR8nwt+W1+W6YfL649uNBi8/+/7zY+hOQCKnnscL9QdG9CNhcFRxIRdMW2+Vr5wfy1Y1TDIC6aGooPlpqtNoziblp+2/HfGnKy44Y3GLLk3PGHFFJgBx0A20TnJgeqpRhXzLweuzMDGHGBhzhrN+DwWABCnOKbTRJwHwg8kE8A9yTgVDA5Vw5yGobx8lBExazwpTJnie8powov4cEaGcyltLTgr76ww3XWx8rocgqy2IOXyt4TNCc/WootqHUBTXbOuIgQ5JbDcxJNtq3CfbcWSpTCsQnhaG9+wLE4U0Y8hHfVE+g25/lKQfs4MQ9d8mGCIeiXY1xDccrOQo1SeBouJ9NxyRjm5TXEgsaXQ8iYFE7ZIQhJeT2oAB3fTTKKfFjymSeXQEGd3n4QZyqU7zAvTaYKwQGXPnGfsjoxZzLJiRtYEObZj1i2V3EIZfTSOt3pGKCBBIA3TTIg+ZTIIEpbJ/gehFMEWpUpbGzmrYRS4x6K0mkDvQm1VQm2ddxp74OR0GuoW55NPYzdAZBLtk8YhJig6Yhq7jv9wGgOn3zx+dhrbaHcaz37enXeWDuqFtKonfM1ia/aZpXskKUnCoaxFpFtjnOc0aPq8myk7XUXCRr3SddSWH3SRXWEZiTGn980qR+cbNcJnSe4WD8Bq8oBM/8JpWsnZIguIunC7MGnb0pjyWzPiOJsS3rElvIrXW91UCLYprd3zdJadf1GKT2dHVStvlbbW395QuTuFOE9x8lwhFpumSxJzrNHj0vwOPT69GDqd5Ry0pV8TQz2v5t7L1P4WEQi1/Luv8gOzVTqbsGXp0bq/55TrkTbslzYPumjoyvDDaczVPyou27Guk4r20hRFsFiaOtIUuhPXcY+zQPm2RlxAr+JSjfay1KW2s1MT9lUXBz4gUwcSl/Y4/WgLOG4lacDR4ypJMWWa5nvoCs7T8kVNRFck1OS7I/+0W5HiUPexhZfHYXTAdr3mJsNxLroVJjB9jRJU4OGV4LX6eOazPbtJ6l+vPoCu/Dh7bXf8Jg+y0rdbVvZVd1djCwG/PtAztf7QDxDAi6wVPM/psPTEx0RIY8o3N4T7J0Z4xe/RHvUa3kgUACNw9m9YUbWw/umZr9/rmxn7JSaz1dqal/Jt1gd8WmLbGrOoZavPbP72MZneLKPrb+lPNEl4Mlpc/RgADeGHK+WHr6V8o5TvaIMmLXa/IwLNeNI+JzxaKa+NHt1S8Mro/ozCFqX6N7mH4lR83fwsrxSQzY8brTf/Aw==</diagram></mxfile>
2105.12245/main_diagram/main_diagram.pdf ADDED
Binary file (16.2 kB).
 
2105.12245/paper_text/intro_method.md ADDED
@@ -0,0 +1,113 @@
+ # Introduction
+
+ Residual networks, or ResNets, are multilayer neural network architectures in which a *skip connection* is introduced at every layer (He et al., 2016). This allows deep networks to be trained by circumventing vanishing and exploding gradients (Bengio et al., 1994). The increased depth in ResNets has led to commensurate performance gains in applications ranging from speech recognition (Heymann et al., 2016; Zagoruyko & Komodakis, 2016) to computer vision (He et al., 2016; Huang et al., 2016).
+
+ A residual network with $L$ layers may be represented as
+
+ $$h_{k+1}^{(L)} = h_k^{(L)} + \delta_k^{(L)} \sigma_d \left( A_k^{(L)} h_k^{(L)} + b_k^{(L)} \right), \tag{1}$$
+
+ where $h_k^{(L)}$ is the hidden state at layer $k = 0,\dots,L$, $h_0^{(L)} = x \in \mathbb{R}^d$ is the input, $h_L^{(L)} \in \mathbb{R}^d$ is the output, $\sigma\colon\mathbb{R}\to\mathbb{R}$ is a non-linear activation function, $\sigma_d(x)=(\sigma(x_1),\dots,\sigma(x_d))^{\top}$ is its component-wise extension to $x\in\mathbb{R}^d$, and $A_k^{(L)}$, $b_k^{(L)}$, and $\delta_k^{(L)}$ are trainable network weights for $k=0,\dots,L-1$.
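+
+ For concreteness, the forward pass (1) takes only a few lines; the sketch below is schematic (our own code, with hypothetical names, not the training setup used in the experiments):
+
+ ```python
+ import numpy as np
+
+ def resnet_forward(x, A, b, delta, sigma=np.tanh):
+     """Forward pass of the residual network (1).
+
+     x: (d,) input; A: (L, d, d) weights; b: (L, d) biases; delta: (L,) scalars.
+     Returns the output h_L^{(L)}.
+     """
+     h = x
+     for k in range(A.shape[0]):
+         h = h + delta[k] * sigma(A[k] @ h + b[k])   # skip connection + scaled residual update
+     return h
+ ```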
+
+ ResNets have been the focus of several theoretical studies due to a perceived link with a class of differential equations. The idea, put forth in (Haber & Ruthotto, 2018; Chen et al., 2018), is to view (1) as a discretization of a system of ordinary differential equations
+
+ $$\frac{\mathrm{d}H_t}{\mathrm{d}t} = \sigma_d \left( \overline{A}_t H_t + \overline{b}_t \right),\tag{2}$$
+
+ where $\overline{A}: [0,1] \to \mathbb{R}^{d \times d}$ and $\overline{b}: [0,1] \to \mathbb{R}^d$ are appropriate smooth functions and $H_0 = x$. This may be justified (Thorpe & van Gennip, 2018) by assuming that
+
+ $$\delta^{(L)} \sim 1/L, \quad A_k^{(L)} \to \overline{A}_t, \quad b_k^{(L)} \to \overline{b}_t \tag{3}$$
+
+ as $L$ increases and $k/L \to t$. Such models, named neural ordinary differential equations or neural ODEs (Chen et al., 2018; Dupont et al., 2019), have motivated the use of optimal control methods to train ResNets (E et al., 2019a).
+
+ However, the precise link between deep ResNets and the neural ODE model (2) is unclear: in practice, the weights $A^{(L)}$ and $b^{(L)}$ result from training, and the validity of the scaling assumptions (3) for trained weights is far from obvious and has not been verified. As a matter of fact, there is empirical evidence that using a scaling factor $\delta^{(L)} \sim 1/L$ can degrade network accuracy (Bachlechner et al., 2020). Also, there is no guarantee that weights obtained through training have a non-zero limit that depends smoothly on the layer, as (3) would require. In fact, we present numerical experiments which point to the contrary for many ResNet architectures used in practice. These observations motivate an in-depth examination of the actual scaling behavior of weights with network depth in ResNets, and of its impact on the asymptotic behavior of those networks.
+
+ <sup>1</sup>InstaDeep. <sup>2</sup>Mathematical Institute, University of Oxford. Correspondence to: Alain Rossier <rossier@maths.ox.ac.uk>.
+
+ Figure 1: Trained weights as a function of k for $k=0,\ldots,L$ and L=9100. Left: rescaled weights $L^{\beta}A_{k,(0,0)}^{(L)}$ for a tanh network with $\beta=0.2$ . Center: weights $A_{k,(0,0)}^{(L)}$ for a ReLU network. Right: cumulative sum $\sum_{j=0}^{k-1}A_{j,(0,0)}^{(L)}$ for a ReLU network.
+
+ We systematically investigate the scaling behavior of trained ResNet weights as the number of layers increases and examine the consequences of this behavior for the asymptotic properties of deep ResNets. Our code is publicly available at https://github.com/instadeepai/scaling-resnets.
+
+ Our main contributions are twofold. Using the methodology described in Section 2, we design detailed numerical experiments to study the scaling of trained network weights across a range of ResNet architectures and datasets, showing the existence of at least three different scaling regimes, none of which correspond to (3). In Section 4, we show that in two of these scaling regimes, the properties of deep ResNets may be described in terms of a class of ordinary or stochastic differential equations, albeit different from the neural ODEs studied in (Chen et al., 2018; Haber & Ruthotto, 2018; Lu et al., 2020). These novel findings on the relation between ResNets and differential equations complement previous work (Thorpe & van Gennip, 2018; E et al., 2019b; Frei et al., 2019; Ott et al., 2021). In particular, our findings question the validity of the neural ODE (2) as a description of deep ResNets with trained weights.
+
+ $\|y\|$ denotes the Euclidean norm of a vector $y$. For a matrix $x$, $x^{\top}$ denotes its transpose, $\operatorname{Diag}(x)$ its diagonal vector, $\operatorname{Tr}(x)$ its trace, and $\|x\|_F = \sqrt{\operatorname{Tr}(x^{\top}x)}$ its Frobenius norm. $\lfloor u \rfloor$ denotes the integer part of a positive number $u$. $\mathcal{N}(m,v)$ denotes the Gaussian distribution with mean $m$ and (co)variance $v$, $\otimes$ denotes the tensor product, and $\mathbb{R}^{d,\otimes n} = \mathbb{R}^{d} \times \cdots \times \mathbb{R}^{d}$ ($n$ times). $\operatorname{vec} \colon \mathbb{R}^{d_1 \times \cdots \times d_n} \to \mathbb{R}^{d_1 \cdots d_n}$ denotes the vectorisation operator, and $\mathbb{1}_S$ the indicator function of a set $S$. $C^0$ is the space of continuous functions and, for $\nu \geq 0$, $C^{\nu}$ is the space of $\nu$-Hölder continuous functions.
+
+ # Method
+
+ We identify the possible scaling regimes for the network weights, introduce the quantities needed to characterize the deep network limit, and describe the step-by-step procedure we use to analyze our numerical experiments.
+
+ As described in Section 1, the neural ODE limit assumes
+
+ $$\delta^{(L)} \sim \frac{1}{L} \quad \text{and} \quad A_{\lfloor Lt \rfloor}^{(L)} \stackrel{L \to \infty}{\longrightarrow} \overline{A}_t, \quad b_{\lfloor Lt \rfloor}^{(L)} \stackrel{L \to \infty}{\longrightarrow} \overline{b}_t \tag{4}$$
+
+ for $t \in [0,1]$, where $\overline{A} \colon [0,1] \to \mathbb{R}^{d \times d}$ and $\overline{b} \colon [0,1] \to \mathbb{R}^{d}$ are smooth functions (Thorpe & van Gennip, 2018). Our numerical experiments, detailed in Section 3, show that the weights generally shrink as $L$ increases (see for example Figures 2 and 4), so one cannot expect the above assumption to hold, and the weights need to be renormalized in order to converge to a non-zero limit. We consider here the following more general situation, which includes (4) but allows for shrinking weights:
+
+ **Hypothesis 1.** There exist $\overline{A} \in C^0([0,1], \mathbb{R}^{d \times d})$ and $\beta \in [0,1]$ such that
+
+ $$\forall s \in [0, 1], \qquad \overline{A}_s = \lim_{L \to \infty} L^{\beta} A_{\lfloor Ls \rfloor}^{(L)}. \tag{5}$$
+
+ Properly renormalized weights may indeed converge to a continuous function of the layer in some cases, as shown in Figure 1 (left), which displays an example of the layer dependence of trained weights for a ResNet (1) with fully connected layers and tanh activation function, without explicit regularization (see Section 3.1).
+
+ However, it is not always the case that network weights converge to a smooth function of the layer, even after rescaling. For example, network weights $A_k^{(L)}$ are usually initialized to random, independent and identically distributed (i.i.d.) values, whose scaling limit would then correspond to a white noise, which cannot be represented as a function of the layer. Such scaling behaviour also occurs for trained weights, as shown in Figure 1 (center). In this case, the cumulative sum $\sum_{j=0}^{k-1} A_j^{(L)}$ of the weights behaves like a random walk, which does have a well-defined scaling limit $W \in C^0\left([0,1],\mathbb{R}^{d \times d}\right)$. Figure 1 (right) shows that, for a ReLU ResNet with fully-connected layers, this cumulative sum of trained weights converges to an *irregular*, that is, non-smooth, function of the layer.
+
+ This observation motivates the consideration of an alternative hypothesis where the weights $A_k^{(L)}$ are represented as the *increments* of a continuous function $W^A$.
+
+ Combining such terms with the ones considered in Hypothesis 1, we consider the following, more general, setting:
+
+ **Hypothesis 2.** There exist $\beta \in [0,1)$, $\overline{A} \in C^0\left([0,1],\mathbb{R}^{d \times d}\right)$, and a non-zero $W^A \in C^0([0,1],\mathbb{R}^{d \times d})$ such that $W_0^A = 0$ and
+
+ $$A_k^{(L)} = L^{-\beta} \overline{A}_{k/L} + W_{(k+1)/L}^A - W_{k/L}^A. \tag{6}$$
+
+ The above decomposition is unique. Indeed, for $s \in [0, 1]$,
+
+ $$L^{\beta-1} \sum_{k=0}^{\lfloor Ls \rfloor - 1} A_k^{(L)} = L^{-1} \sum_{k=0}^{\lfloor Ls \rfloor - 1} \overline{A}_{k/L} + L^{\beta-1} W_{\lfloor Ls \rfloor/L}^A \ \to \ \int_0^s \overline{A}_r \,\mathrm{d}r, \quad \text{as } L \to \infty. \tag{7}$$
+
+ The integral of $\overline{A}$ is thus uniquely determined by the weights $A_k^{(L)}$ when $L$ is large, so $\overline{A}$ can be obtained by discretization and $W^A$ by fitting the residual error in (7). In addition, Hypotheses 1 and 2 are mutually exclusive, since Hypothesis 2 requires $W^A$ to be non-zero.
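+
+ A possible implementation of this identification step, for a given $\beta$: estimate $\int_0^s \overline{A}_r\,\mathrm{d}r$ from the scaled cumulative sums in (7), differentiate to recover $\overline{A}$, and accumulate the residuals of (6) to recover $W^A$. This is our own sketch with hypothetical names (in practice the cumulative sum would be smoothed before differentiating):
+
+ ```python
+ import numpy as np
+
+ def decompose_weights(A, beta):
+     """Split a weight tensor A of shape (L, d, d) as in Hypothesis 2, given beta.
+
+     Returns (A_bar, W) with A_bar[k] estimating Abar_{k/L} and W[k] estimating W^A_{k/L}.
+     """
+     L = A.shape[0]
+     integral = L ** (beta - 1) * np.cumsum(A, axis=0)   # ≈ ∫_0^{k/L} Abar_r dr, by (7)
+     A_bar = np.gradient(integral, 1.0 / L, axis=0)      # differentiate the integral in s = k/L
+     residual = A - L ** (-beta) * A_bar                 # increments of W^A, by (6)
+     W = np.concatenate([np.zeros((1, *A.shape[1:])), np.cumsum(residual, axis=0)])
+     return A_bar, W
+ ```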
+
+ **Remark 2.1** (IID initialization of weights). *In the special case of independent Gaussian weights*
+
+ $$A_{k,mn}^{(L)} \overset{i.i.d.}{\sim} \mathcal{N}\left(0, L^{-1}d^{-2}\right) \quad \textit{and} \quad b_{k,n}^{(L)} \overset{i.i.d.}{\sim} \mathcal{N}\left(0, L^{-1}d^{-1}\right),$$
+
+ where $A_{k,mn}^{(L)}$ is the $(m,n)$-th entry of $A_k^{(L)} \in \mathbb{R}^{d \times d}$ and $b_{k,n}^{(L)}$ is the $n$-th entry of $b_k^{(L)} \in \mathbb{R}^d$, we can represent the weights $\{A^{(L)}, b^{(L)}\}$ as the increments of a matrix Brownian motion:
+
+ $$A_k^{(L)} = d^{-1} \left( W_{(k+1)/L}^A - W_{k/L}^A \right),$$
+
+ which is a special case of Hypothesis 2.
+
+ A question related to the existence of a scaling limit is the degree of smoothness of the limits $\overline{A}$ or $W^A$, if they exist. To quantify the smoothness of the function mapping the layer number to the corresponding network weight, we define in Table 1 several quantities which may be viewed as discrete versions of various (semi-)norms used to measure the smoothness of functions.
+
+ Table 1: Quantities associated to a tensor $A^{(L)} \in \mathbb{R}^{L \times d \times d}$.
+
+ | Quantity | Definition |
+ |------------------------------------|------------------------------------------------------------------|
+ | Maximum norm | $\max_{k} \Vert A_{k}^{(L)} \Vert_{F}$ |
+ | $\beta$-scaled norm of increments | $L^{\beta} \max_{k} \Vert A_{k+1}^{(L)} - A_{k}^{(L)} \Vert_{F}$ |
+ | Cumulative sum norm | $\Vert \sum_{k} A_{k}^{(L)} \Vert_{F}$ |
+ | Root sum of squares | $\left(\sum_{k}\Vert A_{k}^{(L)}\Vert_{F}^{2}\right)^{1/2}$ |
+
+ The first two norms relate to Hypothesis 1: if the $A^{(L)}$ satisfy (5), then the maximum norm scales like $L^{-\beta}$, and the $\beta$-scaled norm of increments scales like $L^{-\nu}$ if the limit function $\overline{A}$ is $\nu$-Hölder continuous.
+
+ The last two norms relate to Hypothesis 2: if the $A^{(L)}$ satisfy (6), then the cumulative sum norm scales like $L^{-\beta}$. Furthermore, the root sum of squares gives us the regularity of $W^A$. Indeed, define the *quadratic variation tensor* of $W^A$ by
+
+ $$\left[W^A\right]_s = \lim_{L \to \infty} \sum_{k=0}^{\lfloor Ls \rfloor - 1} \left(W_{\frac{k+1}{L}}^A - W_{\frac{k}{L}}^A\right) \otimes \left(W_{\frac{k+1}{L}}^A - W_{\frac{k}{L}}^A\right)^\top.$$
+
+ Then, using (6) and Cauchy-Schwarz, we estimate
+
+ $$\left\Vert \left[W^A\right]_s \right\Vert \le 2 \lim_{L \to \infty} \left( \sum_{k=0}^{\lfloor Ls \rfloor - 1} \left\Vert A_k^{(L)} \right\Vert_F^2 + L^{1 - 2\beta} \Vert \overline{A} \Vert_{L^2}^2 \right), \tag{8}$$
+
+ where $\Vert\cdot\Vert$ is the Hilbert-Schmidt norm. As $\overline{A}$ is continuous on a compact domain, its $L^2$ norm is finite. Hence, if $\beta \geq 1/2$, the fact that the root sum of squares of $A^{(L)}$ remains bounded as $L \to \infty$ implies that the quadratic variation of $W^A$ is finite.
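+
+ The quantities of Table 1 are straightforward to evaluate on a trained weight tensor; the helper below is our own (hypothetical) diagnostic code:
+
+ ```python
+ import numpy as np
+
+ def scaling_diagnostics(A, beta):
+     """Compute the four quantities of Table 1 for A of shape (L, d, d)."""
+     fro = np.linalg.norm(A, axis=(1, 2))   # ||A_k||_F for every layer k
+     L = A.shape[0]
+     return {
+         "maximum norm": fro.max(),
+         "beta-scaled norm of increments":
+             L ** beta * np.linalg.norm(np.diff(A, axis=0), axis=(1, 2)).max(),
+         "cumulative sum norm": np.linalg.norm(A.sum(axis=0)),   # Frobenius by default
+         "root sum of squares": np.sqrt((fro ** 2).sum()),
+     }
+ ```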
2106.00162/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
+ <mxfile host="app.diagrams.net" modified="2021-01-21T04:42:07.237Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36" etag="xQ_2-ZMtDluc0lxa3mz3" version="14.1.3" type="google" pages="2"><diagram id="JPaOp2ntueGsPXWwOxxR" name="Page-1">7V1rc9s2Fv01nm13Rhi8Hx9jO26bJp1s3CbpfunQEm2zkUVXohN7f/2CFGnxAVKUBFKgQzW1LZKipHMPcB+49+KEnN09/rT07m/fhTN/foLh7PGEnJ9gTKHk+ld85Gl9BGGK1kdulsEsPbY5cBn8z08PwvToQzDzV4ULozCcR8F98eA0XCz8aVQ45i2X4bfiZdfhvPiu996NXzlwOfXm1aOfgll0uz4qGdwc/9kPbm6zd0YwPXPnZRenB1a33iz8ljtEXp+Qs2UYRuu/7h7P/HmMXobL+nUXNWefP9jSX0RtXvAuQBL/+vjuw7s3Tw9vXn/95eqdNyF0fZuv3vwh/cbpp42eMgiW4cNi5sd3gSfk9NttEPmX9940PvtNS10fu43u5voZ0n9eB/P5WTgPl8lryYz5ckb18VC/IohiFrD4LqtoGX7xswsX4ULf7bT6jdIv+dVfRv5j7lD6DX/ywzs/Wj7pS9KzE0VSuL9tpKVJlx68zYlKZJLyUorcPN9tg6L+IwXSDOrbX1afmP/PZ/b6b4nEx/ce9KNJxt2uQL2+vsbTaX+gElTFVOGuIPU+Lcjk/MNcqLt3fz56bz7fTT5OUOeI6kd8PFxE6SSE4TaE9Qth8rAEc/aSdK6kuAq7VAbYNe6Hw45On/7zuPr29uw8Ol/++laQnx8nEyq+A9wnNON3A/Cc9Ao8Qh1PzG4gr2ARecUA4XDzQBU5YGmSAxWA08NFYVaRBkHweZRCV5AI/+chzE5MVgmor/QFGqXHzUn91036O7nL6t5b7H+X3/3HaHLlrTQNYtaub3m13P+G54E3D2/06XMv0reNVtlNNXjrj1r8+Ie8V+7Wayyzw10oywqTZ/yKM54NgfTdoGFI2GD688SdGcKcVLnNDdzmDAjREbWxSa3uyEpuEuzP/sMyWEXBVF9x8bCYRkG4qCdSSdoa0qgo0vpJqCTsi+Shj3vz4Gahj021yHx98jQWVaDN+lfpibtgNovf0MimIt9KM2KeLQYj/HD1Q2RL9UMtaB8jLwQ7nBfSxItPvvdlHn/3t96VP4/nrM0kks5nSy9YBItsAqpeYItFqXmcJ1B6qGPu5OiqFCEJXW1zinNAsNo8ZIFgnFenHpM5LzuzbvB24+ZGA3nfHornCIB3ld0BNkMkACHFKZlygNshgyECsqPBx00GR0twmsA249OKVW3gEKrKN+vYsBZ+YA02jVi/BGyI3I7NIYaUj2bMFyZtrLggHjcaSs1SHBK6aju6+jbB/apOMeSQ9Vb361jldfAYS6MD4LJZDgGUVwSKFaY8LAHEYvOQlenPZHeQruyOzKAZGMiEA1RSJRpY5RKwLeZNB4FFqIoshfqQ3DzUkZFtYco4iKysACspgJLkHkfGtYUV5CCuCFcZy4FEPPc4NrItIosuIksryGrlxjnaPCA9MrJ8mMjyKmeRACJPWnFcZNkwDYNYfTVYXwIDKXDucVyQZYuJYXXr3cd/BnfJunke1nKUJArvc0eTgM/7cBXEoUB99iqMovBOXzCPT5x60y83iXNiWqJI3uxVJjVoEmH6ec5voyhODHgVI4EvprMFBYF27q4D7fYswVS/I76YJXGli/j4Sv9+WAT6Q668+UST/ILHCy8X/K/JX2fLcLWaxJbc/eKmLgiZRG/WZ7L0AGyOKuVDP7VLs7asfgl4cUBrE5USLIkghEtGqxMlQoCIKtlyh+3zrVMVvwgTyqWnDegbZFQVZEXoFxeJbdSh8KoWxHbTDBnmCdTZPNGpKzxYuamK3BQDWNHc47hyU50q0cHKDbGK4LYr5n4F18J7HxXzEBQzJYA3GISIxIsSG5cGVsNGiBmIx7oiXovgxki8IRAPYVI2CRGVgFHROMv1SrYW5uBItkGQjVTIhrUphGl1neZoZOs0CDZcW0jss5TRqzFE0A7W0PXcf3wVVwFo0PzFLP3zfDr3VqtgemJID8kEwNqOmfTT+LNCHUHjSiYz4JUdW/pzL9IzQuHmJhDTd3gfBkkSYrYUBYupHpjG66O5jEpcvOMqfFhO/fQmG8lU7yt2u2/kLW/8qHLfROLPoBxAAmKyTGxkZb71vWWc4PScBvnBv1/6Ky1uL1EdGF4v42kc/hYGq1iu+SylbXmMTuYk1VJ6l8z1oh1LMQb5yV5VPV8MDYOgs4QjbMpj3yP9NuFJiTGvLj9kCbg7Z8Sa7tc+AbfErj6yZHNZscWs2cNJVEmRxbRliqwN2tSk1JhCXTYSIc/9hZ4/Es54d7FoFler+/X58vMxQfKkiwTJItdY1aoxVhp0RjWqTOlbNqh2ESz8ye8PWnLFAoH12dPXH35/gRyyyRWVzQIZV1SpSMWg3YjBmemMOyyzvkaT2GC6clTKectS+Xa1gbncciN7Rq/RiGEmnzUdueWB29oINk4Zl1EczsAQrS969RCFkyRQop9U9I8Wo5YihttqPoqW8fPhqw4nl7l/HR02tZiMnpx+ih+JfrJh/5SL4VhlWpEmo5laiJGY+VZfBXK1P7fsMHSraozJM0l5EN8m1TO1VMfJZWbjq6byrYbT+UtTM28zSC71LDz3Y6m/mt+ES33wbq2W12+8/hkPoI3S3vYZtg6gbZ5AfxQnhG1huMAGxWmjzslcGgHrja7W/DQWwJWp9OzUnYXL+3A5sIiB9ZI3KWAhfFQsfFfVnF59R8CUgRqqK2og0+zXl02loV0+fY7RByx7+mcqjOTJ+WPh2dPzs1l2ZwEgk1IbolhRhASn6xtlLVbiCP3zKQE1tg0jfG3YNKLFnbL5JoIVbXaSxRJ2Nfr0HMEpKIW3oATaKairOek49onr1XLrOYs2Vn5702h7HGJN7usgUaq7xL2M7z2s+dDkcKYalNrSlICXuoFQBHhVYSJjQEwzFtePmQNrN1tkOXddvDkhsgIQyz5HPn/WNMcgrICNhcUaeFrojd3KN/nBqB0VEdoCkMNC1z5PQtdlQ7Yayhbqqq51SbMs2ydvSoA4J4hijpQgTBmKPbQpI5M0XEGkNh8JNwgECsC04a0UR9pofjaQ7Aun8zYzDgkHMaCQFooQUOhhRcvZVwhQRrU8iEBahIhWy+Tdkp0pn7NNxPjauwvmT2t1/KzoY1cvVl5TbRglSh0u/G+VsydpQxRY9q+TUMvBzj4yOvs5t/2H4Id/3Z2IsxjPux/1zx9PyMUPC31OnP2gmfCUHpoHUTT34wv+/cMPJ+zsmz7/L/30VP8dBy/+/eOJONVf/gydiPMfgtgk8F
dR0kQnWOhr9ZFQ/7oKYnsoXPxY+FBbIlodLtK5MJiy6Kc2R/RUByGRnMeLvW4PFkRatDL7bmPkE54tITzna5Xsx9b+EmOw+U4dR8kRbdEHoWv7lMUVGwUYKDPY7kRp8hsGBSpBZtE6bTEKdrJOn9G2Yp12iojRmza2orGRSDQN7+7nXkLyPpqcpZrBgiYQNQKt1QScAy4JF0JCqTg1VOTHdfwCVcVKGSDa1NIXKAQJQ1k3WvuNzuqXzg6LmXzwb/xHv36Z6zsNX0xwKZeHZmGKwmDHZlYIIMnhRIggfXP2JL7Sv4OrN288eTr79dXEwmhvyOdJlkX3ZUKHEq7MCtrHUWRWiexX5hXOT7G1xZ+yu6UU4KSaQrotSVBRoCzQwzz7GTvf7WQ8HNBKKjsLMS5FnJFEQNEcVoYSUFMgEHW3PtKm2cFOloSoc0H2syT6hsN22I9aDfv1DkeLuNJucAybHSa9cxAcdBhw1PRJsd3iMMPXTh+/3uHYf7A04ztQOFpMpfvXZjVXyyVVVmdnyrwcvkV8u5QB6/9z/yraHFNAFSZQ8PRnVQI1l3Qgj07bGjksjyx3QAFBGttvOCWtFhGelywtoQDNd1urht2cklaLYOVLlhZCCqB8pZgh9cslcXXa+sYFcSkGUByJy/65rZqy/Vfy8M9u/CyiES6j2/AmXHjz15uj5VDJ8zVvw7hlQCKav/0oekpjYt5DFBYF5z8G0efc30lm3jpRTz/bpObFTzaZeVlG30k+n2+T3lef0ZctPz3nBs4ughin7Hz6ORE1U2jvwv9n9qzXftqYbPlEweYRtHXhq/WK1oEMQo0MShEbKWOfMnywlDHM+SNl+qCMGCxlOm116LDdMEgXStCKcMbRfESbIRs9zoxmNlqdTjCotQnhHIP4aEI4bkI4RxkxUuY4lJGDpcyLDy42W53DCgULNQ5w1w1R6dYAl83hz5EyDlierlGmOd45+i7uGaKuMWgMfzpviLpGGdIrZeA+lEEvkTLZlk4tIuZtG3j0RZlOG3S777sMLDFCNodER6VwfDsiG1HOjPAxBuq6HeEcZcYY6HEok+1A1cKOYI5RRo6UcZ0yrpmeYyDWecoItyijxkCs85RxLCbSZlPHgTq4mz75rODBup1L3mazw4EKJDtLKICMbv6lineg8uo3qDjOueXk2u1zbjaknJlzq4mdI2X6oEyWGTNAyoyBzSNRpnWUyjnKjIFN11e7jkcZ79OCTM4/zIW6e/fno/fm893ko7kJXYlCnbfUlrzYsZBk2UUF+09WLT5iY6/X5m0FumtPnO7xXO6ouq2B1roLk5HFzRJu30KNGOAXvcIvt+2aueeOPnU7phS3srp8uLvzlvGrintLfXiYFxruVfa+2rMzrxu9+bbsqmmJXAgDVhrtCCgmBcdScqaEwfdDSoCsz06hBS0GeqqgCBOBtIdZ3gnYHhst7NVTt+eqtRudB6t58CWm7PRhmYhNq6PwPiZuIzkP3DO2kx6jlUkvzUOozpbdTYIYAigKREUcAy6hYBwiIigThigF37yq0FgSA8GhxFTQeDc/aKHbqJmppi4RrjH1g//Pg79a8zO2hBazDVVhuFwbEFpSq3Tf65G+e9A3TuIp5vXEU68geFMJa6Avk8+7cRQmWgEEs9D50vjlLEyuyBZl6250lrZUjm/ytIr8u8RAX92Hi5Vvh6DbvsIRCIr6nl8h05YAUQxJBSGhpLoLRu38qgCkHFKJoFKc8a64mhm8ObICAFoYdO0NNU2r4H+p6xSL8j5uV598D3Z6ws7je2mHfJX6vH2KTDIQ98oVgkiuCC0KDwsIqFaOECGqKGaG3WcMvfyIBQfC/O3aJHF2X31Q770VAxXUoBvqdsPcRFKKO/Q9pxHXRFKqvDCsYDa1rrYXskhfWtrAobwrDilvW91+5zxaupPWhhBV+j3b3xfCzMUDdj6oGaM7x1i0tgelLVCpQoDlGjvD6nRb138TcLZZZxMd2bJI7I9boxwO6E2aQjeppkSbodL2VD9QWW+JfThWzoDTpnXebuD00CC7N3AO2LOmGe2XAI7tPWu43T1rjguO7TmH1+3/MkRwbDfl54Occ8xb4hyy1UUj2sPSVjXg7D+smtEeFjif3j5c/nR2c/sefv5jjm8+XvmcTnaDpvVQaYUiN0OT3b+4IzDmQBnMaQ1Tss0nEuufhlAbogTwfBabPBzLmm/kbLvuRtm/zFb32N2OOZ1KY4hdGrFo4VC8XFmVexu5Pa6yGNL3Kath1XLjrEXMSxXWoFrc4y3tVsaGPS2TC5u4szW18NlM25pa+Dx6HMlGxT33XhkJU7QkB0iYsUjiKIRpW1XjHmGc3a+tU3thkE6ToenKOJKPZyu41SwDb+mvMtqarpkOrvFnbLbiuOngGmHGVivHIEwWjhseYdQLDyTu1tDe7chV1uFkHNyOmp/ZYHJmcDeHqkbz0zFrwjn+jJErt60J5wjTb3MPuA9hnOxLfhhhsqSc7YRxrJH98yf/Ps3Pga1jq7FbsOP2g1uNX7Eaw1eO2w+uEWYMX9knTAurwHr33/SlpQrGgqZSMItnZMEYBCTliJH1zyyNeEuZ5B7ljEYQCBxbDh+Fe9Yt0iFyb4zsHYV7auQeycrGR+71yj1pPXQzRO6lAeoX5/5nHcAF4GQo/j2BbbqLDFEa2dkt7dqHJayxj/JRfGvruaLmaTvPVT0tw9K6dXHalqXb2+v8UsO9sSHzUbhnPXdkiNwbg9BHWWLqyVx1m3tjPPuYy5vfCffMXb4OaGhUYyfv3G2Ny2KrNZQ1r8+byNjQSANnsVXrDRH37yrSjLKVdjQ9Y7F/+5lGaA+mTdaSM+syR4ES+QZ9qEcSmed13qJAexfoDm9N81QA/JjQ4L2hacbaTt+e3uFoEbDaDQ48aDhaRIwO2U7ERzPmi2e7I9/amQvicaOxsUVyuwGKOKBYKq3+4x73DBu6NHeJb6eVoB1Al1kKECAiBBZUcYKgLDdVBiL/MFgRCjBEIIlDaYIJlW0vVIC95poOpNBpklp3UiAMMEoYQQJzhjArJZ+VxWCId7olhk47e3UnBoQowJQqJpFUkgtU3CgGEQlk/uG8HDrt6dWdHCQFgiAND+YcQYVUQQySgELVuaEUyC0pdNqtq8PRgLWTLTHXtoPSU7+emIqjgQNeeLiuHESnBXQdyoESkCBDENJWKCs6aRwBVHi4PhxEpz22OhQDJ4DG040SWFFOZHFNEpXHg3BdDi38RSflwIDkEgmGENSuhRRFOQgMcP5haErilhha+KmrW+8+/jO48+K4ah748jYuURy5fj761rvy5+/DVZBsZEXOr8IoCu/0BfP4xKk3/XKTOH2mfSGTN3uVyRWahJx+nvPbKLpfJfsIXeh/09mCgkC7ydeBdieXYKrfEV/MvMjTv+LjK/37YREkO2zNJ3qcXPB41rrgf03+OluGq9UEYQnuFzcmdzKOQyXbkDRnKWRbFu69+eo+vpTQpGHacIxZyCAp+lKEb52otfkpmNazgnIkhVRZOL3Yn9d8TQfM7DTJZBEm5DzZ7CG0f
bfJqsgr9Li4SBoRdSlmzICATDJCkWSK4KJZtN06VUASwYUkUGjTlhBUFXLNJR3IuFOHebgyRowApph2Aan+m5a289muY5yScQsvcNQwg9AwCCPAtVuMuMJQQl7atxpRsY2YjqmYFq7xSM1hUJMgQDmUimkdKDDPEoazSDJznIrSZIfvtnsnKu3uDafPktsc1DKcTuN1iPJGmxday95rZrXcb9ONPbtLm8XnthBEvMTA5KtzXrN1vL0QLoh1qTa+pWCQlzfiQ6xkhVfNM23A8XindsaRIIgSg3dISjYe6Y6VpmWlkZVWWdmHOamdP63sBKFExsk5Jb0tKeB5UlZ3bnGLlIfvmDyS8vhTJdO6FQsIKcVcMYaLCpthQPJLXdhtTqpO65MM1Jh5q9tEtjXb7J6d1YivMSOstfCEdh8RgZIiFG+IyorCE1sXjIkChEGBCCaIC0GwYSuoumusS4+y0UU9th9gh5YKa/MKIomUEpCWsknK+/067p1S1unCrXtzCoF6QqdcYSaw1gdSlKILeGvYy61ZhXe64Nt/cNOOlCnTil0rDT3+KCWoNEZb5Lg4FNukbfKBX9IQTXZw1E4tgkwwBmlR7WvXobTG5PoIxaPefxF6Hwmo7VHEtUHKIdUzSyns4lD4z/iFcQtVkRHxeu4/ppVY+aKs6dxbrYKpydHM5MTaCqVt6XIOTmbIJ8+OHVhtNWGxt1HMSWW4JIqaBhTVm2mzauvN7JVonb9/8vANe7g8/3P1v/Ovf/z1AZ5O9i8isVehRSkw9JNMAaFcApo/a0jq6qyAwAhZi4m6BrJGCRyMI46Tso4Nzv6lJo1YH17PVUxN4ADiLT1M+8WtRYLJLriRJpBcLeYyfpP9CwSbcHa9csn42fevG21CdpBQtKiq6L2Eq1FmbhZwGT9yz5USB8I28OIt47fvuUrCjgQGW7hltsp6rpCwI4PhVm2ZhdBzfYQdIQy1ZMssghbejnsiGG69llkIna7odiaEoRZrmWXQaQlAZzIYbKWWWQg9txawJIShlmmZZdDCDR0XKjpdqLDkMg21RstMy04bLfS/wm1HxsMs0DILuNO4xFAFPMzqLLOAxwqYl6FYhluaZV6D2mFBfOSly7wcTF2WmYemaFih1sBYOXB9naigSuXAb/7NOhPi+ywdsBVwHWiNlZlh1WDfhiUvgA7dW2MDLW4ys2FrGeg43/Q93wy0UMnMr04jqrX+mmk+KKQyt5TdzkVmQ61TMguv024p7glvuAUhZvF1GkxxT3yDLRYwS2+MlDjpke5MS4qBwvWlAoSCbNXUVaeUdpqr5EhUdvfpRhCAxSbuXko/wPGC4MbAwlVd71BYlnWaCTVUCSs9NHlOwEVzQDEAZc6EruoThwRMTAJe+2ire29RkHRNDwuN8GPV4/tlpkEPrmPUNs4fvNSKae7HBz8mb4jheTLNw6TGZZVzD9fvnrmHO34ibPpE2V2uluUjlbcbgjdaJXddMUn9IjCPk3QK5KVZwDufgcAxMFg8TAELNg6WKlh9Onv/8fbX8PLX3778/Bj9ZChMOvPm04e5FxlJxL27GK7F1ereJL5DksGvr30+nRadfwxNs85MqKsa939nwTAKC1J5LgzLWwIodvirYsmcx0OE8svy89lvX86Dz38Ttvjvm+D1/Optm2qxbWWA++75VgupI5WB2vAHsJhCoRUEoKSyZ9SuZYKMkIb6uORNcrVMsvgm9soHjYSoBoxHQmQzaxKN20gfd8ENLrTPiSq3cYIbpEWIqOvaUkU4KGk38ayzeilfMkPTIlmpBppmqB3fAtD84fffJbIZ3Be1CaD5q7bwvneCTjbh5GrdqPGr0BZu607QqIOhOSIY+5f5N4I7TDBaGC29F482i83N6lHzZ+65VuVQ4AZeP2r++j3XqliSwWArSM3fv+daFUtCGG4JqRmAnrfBtCSFodaQmr9+z3tg2hoJgy0iNQPQc18HW1IYahWp+fv33NrBlhAGW0ZqBKDbJc3upDDUOlKzEFr4pGMSS6dJLLZ8p6EWkpp52SI8MKR8B0tCHmYlqVnCnYYoBivhYZaSmiXcIgAy6pYh6Jbh1pKaidkiKDQScxDEHEwxqZmIprjYuHFVX9Vg1oK1Ay0/NXPSFCccOdlXCay1iXGYNbBmSo77+72AaXKgVbNmRnYaQba5zU9zsteLL5s1fv02+3CN9n//9v/utBzo9n5mVg5m6zBLwhtuNbdZft9DyHp3KQ90ez+ziAezqa8l4Q22Yt8svjEg/UL0/oC29zMz0RSBtlHtexYulzFxsnreckWmnbJe41snA6C+cngIPnKVrzt7uSpLa8kqnaAhNxMZUu3tVPHO33++/O2nbyFfstnHm1fylig+1gfm3qFUuqcgiuNkdWWdgheqB/esD1S0WDsqK28Cqx/Bfn3g5dNfZ19fnwbvl7/8V/LFh3/Ca/PWk7uF1syzwW04bx0vs1IX3nEdOIAlqclqhCrOwBMG85RYKPMwCq86sAEALaba9lPo0teCTiuLYizv1+0h9H3Z6Qk7j+/1EIWrNK5YDDNicy+p3adUCYiCWFuGkitCRdEI5YgBCLHUWhtTRAxVIcQwfdgQiQrOvckX+PTlnl5eXj/9sjz97c82xfkv0MT8FmirQ/85uV76/uSrvm+4nGQn40jthTebbTM1k8FqUPgWKDQhgDGsXQztJOr5nEHkP29jkg3ouEi5OqZxnNpQ5U/usHUKkeqwruXQ97f3c0lscXxBkpz2FPvpaLbbbe1pZTMFWgQavl8KYMUBLctLFIK9aE8aTLRC2fXeXXPBFLXY0USTJhPtldbck0SnBHpS3ntts0N/rKImCCeKzDIzI/1UBquQ81OcRC5s6A5cmnKQUoAbFhOUgfKKAkUO1xPvAiTxr4/vPrx78/Tw5vXXX67eeeZJYkdP3kiMPxaJQZEseBvjB5vOXb8vvWCR0CcLOAzX+69wqrhCrhQh1jjFJC3ql6JNazBiTeX50oIVi06f/vO4+vb27Dw6X/76VpCfHycT5ELbmEm5+xZieuDlV/QM5hqBIBsXhcieYgBZiK8Y0ZIttHUdWM3o2+kkcwxM2mxQtwsmsq455IAgsU2TQbHEqMLarMjsNnKavv/OPTyOAkmL5PQaSBohHjIkLeaS3nvFNAuvdYumCYJAUtRctnsU0HvuF3AonlkZJAVxqg/ROnT9X8GCwAIgmEfbFC0EkuXbBtMq8HXX2BdCz+0CLAmBCCC1JHjcyif+r5Q2gCngQqEGxrslhJ7bBVgSQly0xSSWNUMBxY1+cK5BtiF5wyUpiJ7bBViSgoIACapn6gL4EgGC8Cao5Dr2naYmdjcCCAaQ6dmIS5T8VxwAHGBuSBRxC3k8TOQZBkqhjQbABehjPx4PRwGITtP+uhOCIEARyJhM5/9y9h6J0yZpOQblrBR67qdnSwocSCk2c1BW/peVTkBARX6bCmeE8Ontw+VPZze37+HnP+b45uOVz2mr3vOzGz8Lo4bL6Da8CRfe/PXmaDkK+3zN2zBe7k7k87cfRU9pHkGcYdAULM4vHxcXuGgp
ISGL8uZXFEga1d1kOG2Smv48yWU/1WQ4+Y9BtH4Z5enT+GUTqFVP+nzzyvhJ/oXv/WWgBRMHsnPpUqV1NmN4db2g1cDU7ML16lRqjNSvL1X5e3Ar/qLJQ8qNz1uvzJZuhJXQLl5uZbaUpGlvOc44Ao6/MrudHkfLpsMKlNdO4X5S14LddquOBV2/7Np6dY2bVtc2u6a8mt+EyyC6vbO1bFaXSJMUesSPPhfQzHqyFVuzMEKpvxHJ1rvyVaFZxUiezmWmtFB/+ukyjOW0oZD+xrfvwllcdPP6/w==</diagram><diagram name="AAAI version of Page-1" id="GecHlVGQ2Maz043FBUnL">7V1bc9s4sv41rs1slVG4Xx4TO96dqszU1GT27JnzMkVLtM0dWfRK9MbeX38AiqB4AUXKEnhxyKgciaQgqftDo7vRlwty9fjyt03w9PBTvAxXFxguXy7I9QXGGBKi/zNnXndnEBZwd+Z+Ey2zc/sTX6P/htlJe9tztAy3pRuTOF4l0VP55CJer8NFUjoXbDbxt/Jtd/Gq/KlPwX1YO/F1EazqZ/8ZLZOH3VnJ4P7838Po/sF+MoLZlcfA3pyd2D4Ey/hb4RT5fEGuNnGc7J49vlyFK0M9S5eb+2+//18U/vTLnXi6vVp/eUh+/tvlbrCbY96S/4RNuE7OOzTeDf2fYPWc0Sv7rcmrJeAmfl4vQzMIvCCfvj1ESfj1KViYq980ZvS5h+RxpV8h/fQuWq2u4lW8Sd9L7u7u8GKhz8f6HVFiMMTMKNtkE/8Z2hvX8VqP9in7KuEmCV8qvGv54SjnhsZxGD+GyeZVvy8bhVgoftvzn1umPhR4Lyzrgwxz9/lYe7rqJxlpjyAz8U5mfZjz8TrJpiCGQ5OdZNTMJAeFuMYGJhxsyCXM2dlA/bJhGYTybmi0Xypahztx0dkf3JlnOrNQLunQdM6xW6CzlL3SmbfTOVzq1TB7GW+Sh/g+Xgerz/uzn8qc2N/zJY6fMvr/K0yS10yuBM9JXOZOmeyaPzA98it22aUOAWVeF95pjpubnGvmy7+BZ1rDCDb34aH7BHczdxOugiT6T/lTXXzL3vpLHOnvswcFwmWhRxQtj7GNnzeLMHtbhf3593g7IsR3uNBcUokAq6w1jiWfu9Ya6Wtuyu+RE1KV4Y8gBoTXWIFdYhJTATj1xA3lmRtyEQ6u5ypSBz1WvS5IVtMu0ZmvkgynJYLzfz/H9sLlNkXwR32DJsNLSiR7XT+7z/5PB9o+BeuTBvotfEkub4Ot5rWR1rtRbzcnjXkdJHrAZGuH09Tbfc/ydz/xUwqj78hpT/sw2WrL+pLfcsatyMk+zS7iRRHkT7xQAqAsSxiHFpb7Hoqg5wwI4Qv26HTYc83zFszvzv34+LQKH8N02L+Hz5tom0QLDTx4HSbhIoliMzt+ChcPwTraPrYAsg1JmoPJIW0vE2lFDGWnglV0v9YvF/qbhvr8J4OHaBGsPmYXHqPlMlVAXfisKKVlqVrGnzetgju0CnumqFcgCaiqA44pX2hzOW2OlI3ShbZ/hsGfK0OCL8FtuDLicS+1MtG5CaJ1tL7X9yy1xKvf0Ai3KeDKF5R4WSmSDNA6jLDLI+VNO0UdPFL3mlhPjTTJ/LXBrb0d+qAV4t2JhRQH6oDJdhrBOviOCgRros6bwEXcFBuWHh18PL7oQcdIjw6+mFPUshAtWShcapniggT8NPHFxkjRDr4MPUz0tG0S9gVqBtun3ebSXfRiOOCBWFaAIYCwKhxlHQILQKjkKD9qso04CE28rQMd3BQjJDLhwK5g+b4kAxKS8RC2g8dhhIRFqE5ZTWxBCWT2oMNSFrt8DOOnrKwRVmIAKZMkPwamq8uIHT9dEa4jlgMKFeL5MTRiXQbbBChLa5TlEGiyKpofAxO2gxUzRsLyOmQRBZLTAmYHpmwHc2eUlD2ofAmtm0FOpD3UwETuYENtH4In8zR6TOOcimStOj4Ss1+bn019OL/E2yj1CJLr2zhJ4kd9w8pc+BQs/rxP7RHXnlL6YR8t16CLhdn3uX5IEhPI9dFQAt8slmsKIm3s3UXa0tmAhf5EfLNzFd2Y81v9//M60l9yG6zMbuUNN+bcDf/j8o+rTbzdXiIswdP63mXpmF3ldHe4vK+M3Y6i4tby6TsvLUq/BLw8oQkDjBGkmiWltaBLBhXzBbUuoQJvns/rOEVbdtlBeAd76jys8fvmJtWKPPKtrju0K2WuzQXkTUR4NYInyzdV45vCQEmGRX4MzDevdvVk+YZYjXHta3K/jOtgt89r8hTWZEoAP6ALIkIBhHK/QjskfZ8rNOng1piBNwXgIUyq2iCiFCgqcjHnEHK9Yq2Dq2fG2iSwRmpYwxRwRER907RfiHn1eU1XARJv2bnoVQMiHZxqVjjcrcKXjyZJS9MsXC+zp9eLVbDdRosLR4CHpT87NFHeGPd9cAOTOWhoz50Y9a3NkfIM5ASISvRPQ9x3bSzC28faRbd7iyEnLt/fkfFF2BVfdBOtw8vfnjWjy3GXu6ufPv/62xxQ1C21iilg13Abm4YUKCq7yuVw6jPCiLicm2eK4P349VcbwHue6N0oWMUmqK01jLeCuj4CbSvZMv4DH02UraroFBY65ShbAHmfiHL5MM8R93gdruNoh6bg0TBtfbt92l2vvp7jIc+AMG5jtq3PlQBYt4icSSL+wNXB0Xq61kPfkdajUJmLmGOAK+ZEV61HydpY/ao8LnftOUTLrPKcK7mpGhicZpbB/VG3mjBxGLz+BMgRjuPZbEqXAVid9JUhukqP6npSG8iz9KAu1+2R0kMcSv+5tSe+Jsb5pW9P3+XWTvRPyO//LYgMTjBMHszb8nQhw7znVWiyhpJY//lxvYz+Ey2fg1VJCXaIrGCt1ST48TmJL3cuGDt2QRX6Gib9ybBVeJecJsGOy0n3UUkA00qm0WV9M0JCx0ysZpefr1JGW0ZbDaBdU9x2iwQ0jL/MeGguZktRbQpYxJPdbS6QHz3X3vI9rlsS6Trn+TnVhJ0BUppYaXZqYVJdp/aFRoQGhJ5iZiVJc7U+ru7jjT5ZzPRzsahjpl+bDTvwXCGkPFHqXm7mCuGivlZ96nJznwMRh2fWsbOHdlwviupqmke6N43tsnBQX/WAsIZdI8deSx/4y4vrNOIPu0rtKG/4cznrz4S/ElpyF9ki3jzFm+BIpg9spPTlOOPAZv3mcb3EWQADEmAzREpyylfOMG326Xdbu50gseIFnXFxbvDQbQ3PtRDSwmh9rz/VkQA/a5zda4RVyhTZajXFSGiX+w1BbwrnEbHQZzefNS03r/9rmAOYffl7xqv0xfVL6dWrffUSJYW36Ve/Zx9rnu/fZF7k71kv7bcRADIpMWF6WUFIcLr7cFtE0+zs55cE1JLl5LpYO4v6wH0ZDlrrZyHcnoTr0XVwKSpBD7Ra0qar7+ASQVa1vihiQMn9VposD+3bm9C8zdF5SacNG12fX4ywNPZE887GCcP//OUf+nJaRHZ7piGnpWe4CpVkcp76lOcE
A7ETEqk8qcwNhgCXUKLs4Qo+q2weC5foF0ASsX94Wwk67MQMUppCU1kCoRrJLCiQFBGWPeob8JoTgDMrzvUQLjJLBIj+q+xD+iJzhwh1T2Tecfjg2jEuUvmuHncX8jQSoKoX1iMDhLqFp22z7PjeHKCmgOAFpUQ4jBcTsFZUWxxBAIhBoAQuPDzxhrl87++VNwggsZ8SlQggxIGW7vvZUM+AGRfjmj3Mh23Fu+AxWr3mFudOjzFWk1lhF1rti4wnDK7Db7WrF01e39Ska9Q9uqpdyGm7FtxyH6IPf3m8EFeGno8/6L8/XJCbD2t9TVx90IB5zU6toiRZheaGv374cMGuvunrf9EvP+nnxlz/6w8X4pP+8VfoQlx/iIzeEm6T1EsXrfW9+kys/7uNjMMmXv9Q+lItTmKP8U4jmEk2OxOaKixqP1vwyCeLy83cl3V8cYrNOZiFyBQuicdaPb7OFiKr+HxrI3k2CNkckH0s8+uhSW/jPWsZxzfnjys616MpxCohF474UOKqGJafPL+IPEO8sXP9/sd6lZfDfFs46CQDuc60ZcIa/IQ2SRPISt2EjkDyFsfFOlRK8DTn2milABclYiFDP8cWpFFviENdUQx4S/hiw7ltWEMzh8FJMpyLhXVwsQxCkg6uFF8kaTAJhiaJDWwdoJLqjhsjJEmH/GRPJNlxY4Qk6WABjra+LG/QJu3uM4LGc76vfcYdvqtBiD7S+nJN9LTx0KayBSQEEb77V1IasAAIFqnt2PvX/CjWQLIZoeWqaO57zs+EkZaia2ECEUBqTnAhcPqvmvRNtTJXqFFZR/y4mNAhPGOETEAYASaxbJgKyFRpw3BPQFfR4DFxwWsVN29cUBAgQVGlAI/ZvUOYHUjgGRftR1qOvG0GEAwg09KIS5T+K08ADjB3BLGOi/IjrVHeRnmGgVJovwLgEumZMuWpprMAjLSeeRsThMllh4zJTP5Xy+4SE9RC95Ev4+aC6GArjpELHEgp9jLIxsbYEoAQUAEPF0IfExOOs0579MpjDBjZB4zaMmNW4miRwxBlyj5qZEZE31GIOJWOPU+9mENG9w/ui8odDF5PVBbtm1ejopT37uLZxr2PjfodmxshjTkohUjbuuRFViC9ztJCqJjDS2CKjPaxTy88dxgfkhWazKVwCFW1pYBQBFP7UPXAsU6cIoAUGUV8cap5u/CoTqXNtYSChRllt/N+l8Yk9dG8cwB8WL+SVmg5lfmD1Zdx011GFHMeHFKTQ0CKkTm+gjrFGcLsnfvF+xTe3U7wzfM67cq57QqA+nawz23eMoBu0sMFoLyp+9sB1Ka9oLKDACOad3stgsjV0ljRvGTV+ZHSS2mg1tSki2Ji0j5PqSk1aZ9mhNRFMa2IartzsoE9uFYoqKLld47qyj+9UGiR0b0lKCubCg2hPprDwWvhtidzw7aGuTMEA4nmGkUn5wX9Gt4/rwK9RMHPL0+bUGM1zf7+Et1uAsOPOQenXXzRXRJNIV2vCC/KKWCwsD46FFmTxdOyOvaWhSNc/p5zLI/7yhW72gK7bLR5YTywk1SVee51EbsLxPhbF+VwgQst+w1KVDI6Tb69ooVqYQ7dlDvIh7wl5MvhnEiyPcShb2IM5+uRHXw9PRNjuNbcssHCG5AYw0VAy/a2230TY7ie27I93b9vYhwXp3tWYrS3y+6bGF43pw93vUiVo6srpU5SgWRDDO++mc/etWgetfUbU0CV1sYEz/7W6d9wy/m54XXDerzcsCH8CghSPEbNK6/72qPnlVCAFuIuuSOZYDy8Uh0MjnfMK4QUQKUyHaNmltd24iNglgk8zot9NNT7GA83HMRf3oc2wSveJA/xfbwOVp/3Z6v+kfyeL7Fp+JUy5l9hkrxm7rXgOYnLbLP1sezzguv6QH2sNzu8rbc9d50vbyJDJns9+56IugF0vuRXq5u2l9vK9LbWeluqwUzt7E8/DT3kIHoyas1w8Q6XruXZBoYLneEyCrioacDFa27BeDWFKZpLNqqnMV5lnsdDagkNjqme5rGYdcwxoqez0jAseuSsNIwCLp2VhmHhoma4jAEu1ic3crgg+N59h4eVzEn5eZHNBZ0n90QUz2x6DTe7D7s3Z8CMTdccHDCHPZqzrTJy5XNw/MwuzlEAprP6OThgWK+AgW8BDPoOAINsGm4rYmRDvbi+AOO11svoDZZpBTsgeNj3Oa8H41MgGtDZ1/Se3Z2jAMwRCsTAgJkdnqMADEKkqwLRUIO3J8DYGvgzYCYDmGE1TjR7XacGGDUsYGav68QAo4Z1giCv9ZSHtGn3RR1ZyWgddVA4Qh2yK6fJj7z6KXXWYJsou/r1Ic7ytiW1pd1mRA1ps33J23q05gyYIQDDpgKY2Y05CsDk4UrjR8zsxxwJYrruaw2OmHrG9VWwWjzrDw/bW+yZ8lGr8LX9xv9Jh6/dVsHqZBraHq54xBjK6zzZgmNS5b31ioUpsABE1pVYyYDtcHt+luN68OWPS/17ozvzG+q8u06bJ8Jfslp61cvfIv1rMPw5vN/hFcMcFTBj+/tks0AKVLp+U4mALBoxjrK7zlokEAHqbY7b33vQsGwpj9mnyH+reLYe03YPTed9SdxenctjgU1KawjjWrbIimjoWmWTqvpwsD6c5za6yKLvJDgOA60Oy3mDE7gfwDBY4zAj+M2AEQQCWo4EYVwDRu1L9+GesTNcObaWgqOSVRJJbfBDyX3lWOux8if4vRfAN4djMa5U73QWnj1hWcYNxeAsL2yL+CLpZc+kb2uAbauqHlnFlTuruD4/PgYbc5P5fc+baJtogamlyfMqbC50PqV22Gfqf92GHGQa1BGOJFSQMuOSLs1pjAHFAnKpuNbxBXZod4KaRoOIQK4FL+cE11GnF3WUD0EREb4aZiLsilo8EnCooUj12Qa6jrar6E8D3cXzJuWfXmfjJwPgg8A1FsgZvv9ZG0DU27XugiXLEhL5lX62abuN3aQMOBLMXGYIhQARyferu6961trunQAyfw3//Rxud3g0WuN6uYcmjDc7ZUKzcBskaYX2Ga7ngKvSEpQSh/LEZN5buKQ6U4AR8VVWG+EzFPr3jtSr+PFpFUSmS8D2dZuEj6kpsX2K19twxuVZcCkVEJw4Cr03wJJyAIntE+4BlnW/LQCgg/bWXSvTAIr+m9lJhmlZMw09Lvt0wa7NWM9JvM2c770yRzLTUxsLoVcrRagoa2mMAAX1PZl6VRclxLH22XPnZxXpkuzuP92xuVVCec+EOuy54h6Kfqc50vYLezfM3vPy+0XBRTOQG0acGorX4ECr9H6gsAKazv1uEC23RCBIAYgGc6uQDh7iQdwqpO4BZTCd/Pujq4/dW71vZK28IXpvkAYCZtS6rEu/3qlznM/uvNTBbuoMSY7hmihkrBgXOY5ro3BecoyujwIixzVSOC85RtdJQWtvA5KjIZp1SHJ06GXgjRwNCVRDkqNuBPW3sjSkBw1IDrvtMQg52vc9eyfH5GveZxxt1I5bOhRcVDW/MYXP0w4683vmDoUTKjSM6ORzhU7jloRTqthHO5g
Y75lbCMMplSuhky+53sauluoy4+bO4bSduTjdWUPscyW2PYqTdvUGZxNssBh7OufxjAUyeCqQmRN5xgKZrtn8g0OmgzPkPWsR0zKobMD33LplrDrDsNVc2OHyP7PWOQEVYmAEOcT/rEKMXIUYGDJzZ8mxQIZOBTLv3tV4WOuclmOYzTWLxjLBOyuibNgCEWyuWjQWyHTWPIeGzNx6cqQI6qyIDo2g2f05Fsh0VkSHhozqFTLwLZCZcEuXIyBjw7w62C54UMjwDvFy79l2mViYBJ8roo9khnfXI/iwJa65Y0LPkBm3HjE0ZGYf6Eggwzv3u6UNOXB9QWZuPzk9yAyses6O2OlBpiF5si/IzI7Y6UFmWJ8I75A9OVEDN69fOqmeIbxD/uZEGWLHeVc9Xni/TsVZ5jbfeWpwbfbWSiGWIlyJYLDsIBMYSMoRI9nfyvC+q7KIuWfkWNDnp57QuNE3u1tHgr6Td9amiL7ZczsS9J3sj50i+sZaZp5bncBu4dnXRQXaflZJZa6WqjufkiwGLGgl2gta9U6OAQtaiYaVwo6PS+C5pECJYiU9RyJ238Q7rvzVWUvYiAYTozlLvW/iHFcM67zEaS+G1Ts5jiuGdV5ytBfD6p0cHfI/T2n2EaIlC0WumBSuKC5IwE9zK4n2cloYcUCxVFo7EBRjZmvI90RfW/vVj+/OA+msymCK+guBBVWcICgrxZIFEMXDUeBaAYYIJMbjJphQCjvI3nCPBy54LerljwuEAUYJI6ZrCEOYVSOxKmxwRGKNiw14mmxAiALTVo1JJJXkwjZMtXwgEsjiMXo+eK3L5Y8PkgJBkCYP5hxBZRvy2aQqAkqZ/I5NgnFxwWsSnMfZgLUNLjEnnCot+rlNOrezgQNeOka/OHgtpOWRD5SAlDIEIcKxzdy2e5wIoNIx+unQwYgbJRs4AdSIGyWwopzI8t4lqs4HMXY+eN3+98gHBiSX2h5HCGrTQooyHwQGuHg4Cr2Miw0d7NTtQ/BknkaPgXHlFglf7c+SGNd2fvZLcBuufom3UdqCilzfxkkSP+obVubCp2Dx531q9BXMtbxrY/phHy1foYvJ2fe5fkiSp23aCuhGPxbLNQWRNqjvIm1ObsBCfyK+WaZ9om/M+a3+/3kdpb2xVpcahjfcSK0b/sflH1ebeLs1fZXA0/reZU4aT1TaXeRwPINtPNjkFPdiSwkNGqYVR4NCBknZliK8VVBr9VMwvc4KyvX0lHmD6HJJZPc9HpDptWTUOk7BebFvDlRlXZ2bdZbX4HFzkxZ38slmzICATDJCtUqhCC6rRe3aqen1LriQBAqt2hLbwKTI5IZbzs9j5dVtMV0eI0YAU0ybgFQ/p5V+PO1rzKh43MEpMq8wk1hhEEaAa7MYcYWhhLnNZXVQKtqAOa4lxqpVMzSnD02CAOVQKqbXQIE5JCVo4nrpm5FB0eUrO64BJ6q034aLnHP7kxpju/KT1V6ZN3qVfdLI6tgycxydtyut3AudARGvIDCFL+dpT0GPMETArKVa+ZaCQV7tpIdYRQuvq2dageOmlTrjSBBEXT26SUXHI/5Q6fIdnobK7wZtfaiJ2qgjQglCicRcssp6LCngRbDVexGOC2wuB+ksAqcmApleM7GAkFLMFWO4vBAzDEhxCwuPHJNevcWN9qdLjCyD7UPK9RO7FzdFEmXjCG0YIgIlRcj0v2Jl9onWrWCiAGFQIIIJ4kIQ7Gg/3XSPB/518DLPGv5INPzDwFRYq04QSaSUgLQSKaIQYMVgwdFbnl5z7cYoVwjUYp1yhZnAelWQouI7wK1OrZFJlu/WPX2Yz5TpBV4vHXoOUkpQZZ52iGEZke8yD9j7fqYpYhIwbbYiyARjkJaXf21EVHaRxj1LMZydz+9l/UcCas0Uca2acki1dKm4Vsbu4tM2TXcs3q3Clywfq5iatVgF2220cFmdllXsEF/ekIvl5kmByMwRSW7PnZiGdclsXLO1HqsNeXcZYrVcK8dIuGWkN2dt6Zeb2PgR9rdr0f/wU7w06/fn/wc=</diagram></mxfile>
2106.00162/paper_text/intro_method.md ADDED
@@ -0,0 +1,119 @@
+ # Introduction
+
+ Evaluation metrics heavily influence a field's research direction. The ultimate goal of open-domain dialog systems is to provide an enjoyable experience to users. Previous research mainly focuses on optimizing automatic dialog evaluation metrics such as BLEU, which models the distance between system responses and a limited number of available references. However, it has been shown that these metrics correlate poorly with human judgments [\(Liu et al.,](#page-10-0) [2016\)](#page-10-0).
+
+ Open-domain dialog system evaluation has long been one of the most difficult challenges in the dialog community, for several reasons: (1) The goal of dialog evaluation should be to evaluate users' conversational experience. Existing automatic evaluation metrics such as BLEU are mostly constrained to a static corpus, and do not capture the user experience in a realistic interactive setting. (2) Currently, self-reported user ratings are widely used to evaluate open-domain dialogs. However, self-reported ratings suffer from bias and variance among different users [\(Liang et al.,](#page-10-1) [2020e\)](#page-10-1). Although we could tell which dialog system is better by running statistical tests on a large number of noisy ratings, it is challenging to reliably locate dialogs with bad performance. Only by identifying these bad dialogs effectively can we correct errors in these samples and improve dialog system quality.
8
+
9
+ User engagement has been recognized as one of the essential metrics for open-domain dialog evaluation [\(Ram et al.,](#page-10-2) [2018\)](#page-10-2). Previous research also confirms that incorporating user engagement as real-time feedback benefits dialog policy learning [\(Yu et al.,](#page-11-0) [2016\)](#page-11-0). One of the most costly bottlenecks of learning to detect user disengagement is to annotate many turn-level user engagement labels [\(Ghazarian et al.,](#page-9-0) [2020\)](#page-9-0). In addition, the data annotation process becomes more expensive and challenging for privacy-sensitive dialog corpora, due to the privacy concerns in crowdsourcing [\(Xia](#page-11-1) [and McKernan,](#page-11-1) [2020\)](#page-11-1).
10
+
11
+ To improve annotation efficiency, we reframe the training data annotation process as a denoising problem. Specifically, instead of manually labeling each training datum, we automatically label the training samples with a set of labeling heuristics. The heuristic functions primarily consist of regular expressions (Regexes) and incorporate open-sourced natural language understanding (NLU) services. Since the automatically generated labels might contain noise, we then denoise the labeled data using the Shapley algorithm [\(Jia](#page-9-1) [et al.,](#page-9-1) [2019a,](#page-9-1)[b\)](#page-9-2). We use the Shapley algorithm to quantify the contribution of each training datum, so that we can identify the noisy data points with negative contributions and then correct their labels. Our experiments show that HERALD achieves 86% accuracy in user disengagement detection on two dialog corpora.
16
+
17
+ Our proposed framework HERALD is conceptually simple and suitable for a wide range of application scenarios: First, since our model can detect user engagement in real-time (i.e., after each user utterance), it can be plugged into existing dialog systems as a real-time user experience monitoring module. In this way, dialog systems can detect and react to users' disengagement in both open-domain dialogs [\(Yu et al.,](#page-11-0) [2016\)](#page-11-0) and task-oriented dialogs [\(Yu et al.,](#page-11-2) [2017\)](#page-11-2). During training, our model can also be used as real-time feedback to benefit dialog policy learning [\(Yi et al.,](#page-11-3) [2019\)](#page-11-3). Second, HERALD can quantify user engagement and be used as an automatic dialog evaluation metric. It can reliably locate dialogs with poor user experience to improve dialog system quality [\(Ghazarian et al.,](#page-9-0) [2020;](#page-9-0) [Choi et al.,](#page-9-3) [2019\)](#page-9-3). Third, user engagement is an essential objective of dialog systems, but few dialog datasets with user engagement ratings are available. Our heuristic functions, combined with the proposed workflow, can be readily deployed to annotate new dialog datasets.
18
+
19
+ # Method
20
+
21
+ We define engagement as the degree to which users are willing to continue conversing with the dialog system (Yu et al., 2016, 2017). We focus on identifying the dialog turns with "disengaged" user responses, since they usually indicate a poor conversation experience. We formulate user engagement prediction as a binary classification problem: Our goal is to learn a parameterized user engagement predictor $M_{\theta}$ that, given a dialog turn (along with its dialog context) $x \in X$ , predicts the turn-level user engagement label $y \in \mathcal{Y} = \{0, 1\}$ , where label y = 1 means "disengaged" and y = 0 means "engaged". We start from an unlabeled train set $D_{\text{train}} = \{x_i\}_{1}^{N_{\text{train}}}$ without any label $y_i$ . The test set $D_{\text{test}} = \{(x_i, y_i)\}_{1}^{N_{\text{test}}}$ contains the ground-truth label $y_i$ . The development set $D_{\text{dev}}$ has a similar structure to the test set $D_{\text{test}}$ , but can be much smaller than the training set (i.e., $N_{\text{dev}} \ll N_{\text{train}}$ ), making it economical to obtain. Following the general architecture of neural classifiers, we formulate our model as $M_{\theta} = M(\phi, f) = f(\phi(x))$ : Here $\phi$ is a BERT-based (Devlin et al., 2019) text encoder that maps each dialog turn x to a feature space $\phi(x) \in \mathbb{R}^d$ , and f is the final linear layer with softmax activation.
22
+
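+ As a minimal sketch of this formulation (the checkpoint name and pooling choice are our illustrative assumptions, not necessarily the exact configuration used in the paper):
+
+ ```python
+ import torch.nn as nn
+ from transformers import AutoModel
+
+ class EngagementClassifier(nn.Module):
+     """M_theta = f(phi(x)): a BERT encoder phi followed by a linear layer f."""
+     def __init__(self, d=768, num_labels=2):
+         super().__init__()
+         self.phi = AutoModel.from_pretrained("bert-base-uncased")  # text encoder phi
+         self.f = nn.Linear(d, num_labels)                          # final linear layer f
+
+     def forward(self, input_ids, attention_mask):
+         h = self.phi(input_ids, attention_mask=attention_mask).pooler_output  # phi(x) in R^d
+         return self.f(h).softmax(dim=-1)  # [P(engaged), P(disengaged)]
+ ```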
23
+ To ensure our framework generalizes to various corpora, we investigate multiple open-domain dialog datasets, ranging from ASR-based (Gunrock (Liang et al., 2020a)) to text-based (ConvAI2 (Dinan et al., 2019), Blender (Roller et al., 2020), and Meena (Adiwardana et al., 2020)) dialog systems.
24
+
25
+ Gunrock Movie Dataset The Gunrock Movie dataset consists of dialog data collected from Gunrock, an ASR-based open-domain social chatbot originally designed for the Amazon Alexa Prize (Liang et al., 2020a). The Gunrock dataset comes from a user study where in-lab users were recruited to carry on conversations. We have consent to use the data, and we removed any sensitive information in the conversations. Two dialog experts (co-authors of this paper) annotated 134 randomly sampled dialogs and split them evenly into a test set and a development set. In total, the experts labeled 519 turn-level disengaging user responses and 2,312 engaging user responses. They reached a high inter-annotator agreement (Cohen, 1968) with kappa $\kappa = 0.78$ . The training set contains 276 unlabeled dialogs, with 5,644 dialog turns. In addition, we ensure that the data annotation is independent of the labeling heuristics collection, so there is no data leakage problem. A full example dialog can be found in Appendix A.4.
26
+
27
+ **ConvAI2 Dataset** ConvAI2 dataset contains text-based dialog collected from the second Conver-
28
+
29
+ <span id="page-3-0"></span>
30
+
31
+ | Heuristics Group | Disengaged Intents | Coverage: Gunrock (%) | Coverage: ConvAI2 (%) | Example Disengaged User Responses |
+ |---|---|---|---|---|
+ | (1) Complain system responses | Complain system repetition; complain system ignoring them; complain system misunderstanding; not understanding system; curse system; express frustration | 1.93 | 1.95 | "You already asked me that. I already told you. Remember?"; "You're not listening. You didn't answer my question."; "I never said I don't eat my favorite seafood."; "What are you talking about?"; "You're dumb."; "Sigh." |
+ | (2) Dislike current topic | Express negative opinion; show low interest | 1.90 | 3.45 | "I don't like music. It's boring."; "I don't care." |
+ | (3) Request to end topic or conversation | Request topic change; request termination | 5.20 | 2.92 | "Let's talk about something else."; "Stop. Bye." |
+ | (4) End with non-positive responses | End with negative answer; end with unsure answer; end with back-channeling; end with hesitation | 20.13 | 4.86 | "No. I have not."; "I don't know. I don't remember. Well, maybe."; "Yeah. Okay."; "Hmm... That's a hard one, let me think." |
38
+
39
+ Table 1: Our labeling heuristics designed to capture user disengagement in dialogs. A dialog turn is considered disengaged if any of the heuristic rules apply to the user responses.
40
+
41
+ sational Intelligence (ConvAI) Challenge (Dinan et al., 2019). We select dialogs from the eight main participating chatbots (Bot 1, 2, 3, 4, 6, 9, 11) and exclude dialogs that are one-sided or shorter than three turns. The dialog experts annotated 207 dialogs in total. The dialogs are evenly distributed over all eight bots to ensure system diversity, and are randomly sampled within each bot. The annotated data consist of 209 disengaging turns and 1,684 non-disengaging turns. The annotators reached a high inter-annotator agreement (Cohen, 1968) with kappa $\kappa = 0.76$ . We split the annotated dialogs evenly into a test set and a development set. The training set contains 2,226 dialogs, with 18,306 dialog turns.
42
+
43
+ Google Meena Dataset Meena (Adiwardana et al., 2020) is the largest end-to-end neural chatbot so far, trained on 867M public-domain social media conversations. We study the 93 example Human-Meena conversations released by Google.
44
+
45
+ **Facebook Blender Dataset** The Blender bot (Roller et al., 2020) is an open-domain chatbot with several conversational skills: providing engaging talking points, listening to its partners, and displaying knowledge, empathy, and personality appropriately while maintaining a consistent persona. We study the 108 example Human-Blender conversations released by Facebook.
46
+
47
+ Our goal is to train a user engagement detector with minimum data annotation efforts. Traditional supervised learning paradigms require annotating many training samples. In addition, it requires additional data annotation to extend the model to a new
48
+
49
+ dialog corpus. To reduce annotation work, we propose HERALD, a two-stage pipeline that annotates large-scale training data efficiently and accurately (Figure 1). Instead of hand-labeling training data points, we use heuristic functions to label each training datum automatically. The heuristic functions are built upon a set of user disengagement heuristic rules. Since the training data are automatically labeled, their labels are noisy. We then clean the noisy training data with the Shapley algorithm (Ghorbani and Zou, 2019) to improve labeling accuracy: the algorithm denoises the training data by identifying data points with wrong labels and flipping their labels. Finally, with the cleaned training data, we fine-tune a BERT-based model to obtain the final user disengagement detection model.
50
+
51
+ Since labeling large-scale training data is time-consuming, we propose heuristic labeling functions to label training data automatically. The heuristic functions focus on detecting disengagement from user responses, as it directly indicates poor user experience. To build the heuristic functions, we first summarize the heuristic rules shared among users. We investigate the disengaged dialog turns from the four datasets mentioned above and identify four groups of user disengagement patterns: "complain system responses", "dislike current topics", "terminate or change topics", and "end with non-positive responses" (Table 1). We then discuss the implementation of the heuristic functions.
52
+
53
+ Group 1: Complain system responses. Complaints are an evident sign of user disengagement. We identify six related disengaged intents. The first three intents ("complain system repetition", "complain system ignoring them" and "complain system misunderstanding") usually appear when the bot makes errors like repeating the same content, ignoring, forgetting, and misunderstanding the user's response. In these cases, users express their disengagement by indicating the bot's error (e.g. "You already told me that", "You're not listening"). Another intent "not understanding system" happens when users cannot understand the system's response (e.g. "I don't know what you're talking about."). In the last two intents, users reveal negative emotions by cursing the system (e.g. "you're dumb") or express frustration (e.g. "sigh") about the conversation.
54
+
55
+ Group 2: Dislike current topics. When discussing a given topic, users might show their disengagement by expressing negative opinions or low interest. For example, given the bot's response, "I write romantic novels under a pen name.", a user who is not interested in reading might say "reading is boring", "I don't like to read", or "I'm not interested in this". We also make sure to handle the corner cases where the user utterance contains negative opinions but should be labeled as engaged. For instance, to respond to the bot's question, "do you want to not work?", a user might say, "Yes. my job is boring. I have to work with mail". Though the user mentions a negative feeling ("boring"), the user agrees with the bot and shares further information.
56
+
57
+ Group 3: Terminate or change topics Group 3 considers the cases where users express disengagement with the current topic in a more straightforward fashion. For example, if users are not interested in the current topic, instead of just expressing their dislike of it, they may request to switch topics with "Let's talk about something else". In some cases, users who are no longer interested in continuing the conversation might show strong disengagement by requesting to end it.
58
+
59
+ Group 4: End with non-positive responses A more subtle but common clue of disengagement is when users end the response with non-positive content. For example, non-positive responses like "I don't know", "No", "Yeah", "uh", "Probably",
60
+
61
+ imply that users do not have much to say about the current topic. To keep the precision of our heuristics high, we carefully consider the counterexamples. One case is that the user follows up with more responses such as questions (e.g., Bot: "Have you seen any movies lately?", User: "No. Have you?") or opinions (e.g., Bot: "What's your favorite animation movie?", User: "I don't know, but it might actually be frozen two. My sister loves it.") in the same dialog turn. These turns should not be labeled as disengaged since the user is still interested in sharing more content or asking follow-up questions. Therefore, we take a conservative approach: we label the dialog turn as disengaged only if no more responses follow the non-positive response.
62
+
63
+ Next, we discuss how to use heuristic functions to auto-label disengaged user utterances. First, we split user responses into segments, since user responses may consist of multiple units with different semantic meanings. As the segmentation tool, we use the NLTK Sentence Tokenizer for text-based systems and a segmentation model [\(Chen et al.,](#page-9-17) [2018\)](#page-9-17) for ASR (Automatic Speech Recognition)-based systems. We then apply the heuristic functions to each segment to detect disengaged intents. For heuristic groups 1 to 3, if any segment contains a disengaged intent, the user response is auto-labeled as disengaged. For heuristic group 4 ("End with non-positive responses"), we assign the disengaged label only if a disengaged intent is detected in the last segment.
64
+
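+ The labeling logic can be sketched as follows; the regex patterns below are simplified illustrations, not the paper's actual multi-group expressions:
+
+ ```python
+ import re
+
+ # Simplified stand-in patterns for a few disengaged intents (groups 1-3).
+ ANY_SEGMENT_PATTERNS = [
+     re.compile(r"\byou (already )?(asked|told) me\b", re.I),        # complain repetition
+     re.compile(r"\b(talk about something else|stop|bye)\b", re.I),  # end/change topic
+     re.compile(r"\bi don'?t (like|care)\b", re.I),                  # dislike topic
+ ]
+ # Group 4 ("end with non-positive responses") fires only on the last segment.
+ LAST_SEGMENT_PATTERNS = [
+     re.compile(r"^(no|i don'?t know|yeah|okay|hmm+)[.!]?$", re.I),
+ ]
+
+ def auto_label(user_response, segment):
+     """Return 1 (disengaged) or 0 (engaged); `segment` is a sentence/ASR
+     segmenter such as nltk.sent_tokenize."""
+     segments = [s.strip() for s in segment(user_response)]
+     if any(p.search(s) for s in segments for p in ANY_SEGMENT_PATTERNS):
+         return 1
+     if segments and any(p.search(segments[-1]) for p in LAST_SEGMENT_PATTERNS):
+         return 1
+     return 0
+ ```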
65
+ We detect disengaged intents with Regexes. The benefit of using Regexes is that they have minimal dependencies and are easy to modify. We design Regexes for each intent. Following common Regex complexity metrics [\(Luo et al.,](#page-10-17) [2018\)](#page-10-17), our Regexes for each intent contain 43.9 Regex groups and 87.7 *or* clauses on average.
66
+
67
+ Our framework also supports incorporating additional resources to improve the intent detection accuracy for automatic training data labeling. For example, we can enhance the recall of Regex intent detection by incorporating existing deep learning-based NLU (Natural Language Understanding) models. Specifically, we re-purpose an open-sourced dialog act classification model [\(Yu](#page-11-5) [and Yu,](#page-11-5) [2021\)](#page-11-5) to enhance disengagement intent detection: we select 6 out of the 23 supported dialog act labels that are associated with disengaged intents, and map each selected dialog act label to the heuristic groups. The dialog act "complaint" is mapped to the heuristic group "complain system repetition"; "closing" is mapped to the disengaged intent "request termination"; "hold" to "hesitation"; "other\_answers" to "unsure answer"; "back-channeling" to "back-channeling"; and "neg\_answer" to "negative answer". If a user utterance is detected with a disengaged intent by either the Regexes or the deep learning model, the utterance is auto-labeled as disengaged.
68
+
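+ The stated dialog-act mapping amounts to a small lookup table, and combining the two detectors is a simple disjunction (the function arguments below are caller-supplied placeholders):
+
+ ```python
+ # Dialog-act labels (Yu and Yu, 2021) mapped to disengaged intents, as listed above.
+ DIALOG_ACT_TO_INTENT = {
+     "complaint": "complain system repetition",
+     "closing": "request termination",
+     "hold": "hesitation",
+     "other_answers": "unsure answer",
+     "back-channeling": "back-channeling",
+     "neg_answer": "negative answer",
+ }
+
+ def is_disengaged(utterance, regex_detect, dialog_act_classify):
+     """Disengaged if either the Regexes or the NLU dialog-act model fires."""
+     return regex_detect(utterance) or dialog_act_classify(utterance) in DIALOG_ACT_TO_INTENT
+ ```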
69
+ Overview Next, we denoise the labeled data using the Shapley algorithm (Ghorbani and Zou, 2019). The Shapley value has been studied in cooperative game theory (Dubey, 1975) and economics (Gul, 1989) as a fair distribution method. The Shapley algorithm computes a Shapley value for each training datum, which quantifies the contribution of that datum to the prediction and performance of a deep network. Low-Shapley-value data capture outliers and corruptions. Therefore, we can identify and denoise the incorrectly labeled data by computing their Shapley values, and fine-tune the model on the cleaned training set.
70
+
71
+ Shapley Algorithm The Shapley value originates from cooperative game theory (Dubey, 1975). Consider a cooperative game with n players $D = \{1, ..., n\}$ and a utility function $v : 2^{[n]} \to \mathbb{R}$ which assigns a reward to each of the $2^n$ subsets of players: v(S) is the reward if the players in subset $S \subseteq D$ cooperate. The Shapley value defines a unique scheme to distribute the total gains generated by the coalition of all players, v(D), with a set of appealing mathematical properties. In our setting, we can consider $D_{train} = \{(x_i, y_i)\}_{1}^{N_{train}}$ as $N_{train}$ players. We define the utility function v(S) as the performance on the development set $D_{\text{dev}}$ . The Shapley value for player i is defined as the average marginal contribution of $\{(x_i, y_i)\}$ to all possible subsets formed by the other players (Jia et al., 2019a,b):
72
+
73
+ $$s_{i} = \frac{1}{N} \sum_{S \subseteq D_{train} \setminus \{x_{i}\}} \frac{1}{\binom{N-1}{|S|}} [\nu(S \cup \{x_{i}\}) - \nu(S)]$$
74
+
75
+ As suggested by its definition, computing the Shapley value requires an exponentially large number of computations: enumerating $O(2^{N_{\text{train}}})$ possible subsets and training the model $M_{\theta}$ on each subset, which is intractable. Inspired
76
+
77
+ by (Jia et al., 2019a,b), HERALD tackles this issue by reducing the deep model $M_{\theta}$ to a K-nearest neighbors (KNN) model and then applying the closed-form solution of the Shapley value for KNN: We reduce our BERT-based classification model $M_{\theta} = M(\phi, f) = f(\phi(x))$ to a KNN by first fine-tuning $M_{\theta}$ on the auto-labeled training samples. We then use the feature extractor $\phi$ to map each training datum to the feature space $\{\phi(x_i)\}_1^{N_{\text{train}}}$ . We construct a KNN classifier in the feature space to compute the closed-form Shapley value.
78
+
79
+ Next, we discuss the closed-form solution of Shapley value. We first consider a special case where the development set $D_{\text{dev}}$ only contains one datum $D_{\text{dev}} = \{(x_{\text{dev}}, y_{\text{dev}})\}$ . Given any nonempty subset $S \subseteq D_{\text{train}}$ , we use the KNN classifier to classify $x_{\text{dev}}$ . To do this, we sort the data points in the training set $\{x_i\}_1^{N_{\text{train}}}$ based on their euclidean distance in the feature space $\phi(x)$ to the datum in the development set $x_{\text{dev}}$ , yielding $(x_{\alpha_1}, x_{\alpha_2}, ..., x_{\alpha_{|S|}})$ with $x_{\alpha_1}, ..., x_{\alpha_K}$ as the top-K most similar data points to $x_{\text{dev}}$ . The KNN classifier outputs the probability of $x_{\text{dev}}$ taking the label $y_{\text{dev}}$ as $P[x_{\text{dev}} \to y_{\text{dev}}] = \frac{1}{K} \sum_{k=1}^{K} \mathbb{1}[y_{\alpha_k} = y_{\text{dev}}]$ , where $\alpha_k$ is the index of the kth nearest neighbor. We define the utility function as the likelihood of the correct label:
80
+
81
+ $$\nu(S) = \frac{1}{K} \sum_{k=1}^{\min\{K, |S|\}} \mathbb{1}[y_{\alpha_k(S)} = y_{\text{dev}}]$$
82
+ (1)
83
+
84
+ Jia et al. (2019a,b) prove that the Shapley value of each training point $s_{\alpha_i}$ can be calculated recursively in $O(N \log N)$ time as follows:
85
+
86
+ $$s_{\alpha_N} = \frac{\mathbb{1}[y_{\alpha_N} = y_{\text{dev}}]}{N}$$
87
+
88
+ $$s_{\alpha_i} = s_{\alpha_{i+1}} + \frac{\min\{K, i\}}{i \times K} (\mathbb{1}[y_{\alpha_i} = y_{\text{dev}}] - \mathbb{1}[y_{\alpha_{i+1}} = y_{\text{dev}}])$$
89
+
90
+ The above result for a single point in $D_{\text{dev}}$ could be readily extended to the multiple-point case, in which the utility function is defined by
91
+
92
+ $$\nu(S) = \frac{1}{N_{\text{dev}}} \sum_{j=1}^{N_{\text{dev}}} \frac{1}{K} \sum_{k=1}^{\min\{K, |S|\}} \mathbb{1}[y_{\alpha_k^{(j)}(S)} = y_{\text{dev}, j}]$$
93
+
94
+ where $\alpha_k^{(j)}(S)$ is the index of the *k*th nearest neighbor in *S* to $x_{\text{dev},j}$ . Jia et al. (2019a,b) also prove that the Shapley value in this case is the average of the Shapley value for every single dev point.
95
+
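+ A minimal NumPy sketch of this closed-form computation (our illustration of the recursion above; see Jia et al. (2019a,b) for the reference implementation):
+
+ ```python
+ import numpy as np
+
+ def knn_shapley(K, X_train, y_train, X_dev, y_dev):
+     """Closed-form KNN Shapley values, averaged over the development set."""
+     N = len(y_train)
+     values = np.zeros(N)
+     for x_d, y_d in zip(X_dev, y_dev):
+         idx = np.argsort(np.linalg.norm(X_train - x_d, axis=1))  # nearest first
+         match = (y_train[idx] == y_d).astype(float)              # 1[y_{alpha_i} = y_dev]
+         s = np.zeros(N)
+         s[N - 1] = match[N - 1] / N                              # farthest point
+         for i in range(N - 2, -1, -1):                           # recurrence, far -> near
+             s[i] = s[i + 1] + (match[i] - match[i + 1]) / K * min(K, i + 1) / (i + 1)
+         values[idx] += s / len(y_dev)                            # back to original order
+     return values
+ ```
+
+ Each development point costs one sort plus a linear scan, so the whole pass runs in $O(N_{\text{dev}} \cdot N_{\text{train}} \log N_{\text{train}})$ time.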
96
+ **Denoising Procedure** Our denoising procedure works as follows: (1) We first fine-tune our BERT-based classification model $M_{\theta} = M(\phi, f) = f(\phi(x))$
97
+
98
+ <span id="page-6-0"></span>
99
+
100
+ | No. | Method | Gunrock Movie Accuracy | Gunrock Movie $F_2$ Score | ConvAI2 Accuracy | ConvAI2 $F_2$ Score |
+ |------|-------------------------|--------|-------|--------|-------|
+ | (1) | Heuristics | 78.32 | 65.09 | 76.58 | 58.16 |
+ | (2) | Heuristics (regex only) | 62.81 | 35.46 | 72.04 | 49.90 |
+ | (3) | Heuristics (NLU only) | 72.68 | 56.32 | 63.62 | 32.86 |
+ | (4) | Heuristics w/o Group 1 | 78.21 | 64.88 | 71.20 | 48.44 |
+ | (5) | Heuristics w/o Group 2 | 77.96 | 64.49 | 75.45 | 56.22 |
+ | (6) | Heuristics w/o Group 3 | 71.52 | 55.36 | 71.96 | 49.80 |
+ | (7) | Heuristics w/o Group 4 | 58.34 | 23.97 | 68.32 | 42.68 |
+ | (8) | BERT(dev) | 73.98 | 60.74 | 74.97 | 55.40 |
+ | (9) | BERT(Auto) | 80.55 | 71.77 | 78.76 | 63.13 |
+ | (10) | BERT(Auto+dev) | 80.73 | 72.16 | 80.46 | 64.54 |
+ | (11) | HERALD | 86.17* | 80.01* | 86.22* | 70.49* |
114
+
115
+ Table 2: Evaluation results comparison among variants of HERALD. \* indicates that the model is statistically significantly better than baseline models. All numbers in the table are in percentage.
116
+
117
+ on the auto-labeled training samples. This step injects the knowledge in the labeling heuristics into the model $M_{\theta}$ . (2) We then map each auto-labeled training datum to the feature space $\{\phi(x_i)\}_{1}^{N_{\text{train}}}$ , since we want to apply the closed-form KNN formula of the Shapley value in the feature space. (3) Next, for our binary classification problem, we duplicate each training datum with both labels {0, 1}. This generates a large training set $D_{\text{large}}$ with $2 \times N_{\text{train}}$ data points, and we note that the original training set $D_{\text{train}}$ is a subset of $D_{\text{large}}$ , since $D_{\text{large}}$ enumerates both possible labels for each training datum. (4) We then calculate the Shapley value for the $2 \times N_{\text{train}}$ data points in $D_{\text{large}}$ using the closed-form KNN formula. (5) We remove the data with negative Shapley values from $D_{\text{large}}$ and obtain a cleaned training set $D_{\text{clean}}$ . This duplicate-and-remove procedure "flips" the labels of the noisy data points with low Shapley values. (6) Finally, we fine-tune the classification model $M_{\theta}$ on $D_{\text{clean}}$ to get the final user disengagement detection model.
118
+
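+ Steps (3)-(5) then reduce to a short duplicate-and-remove routine on top of the KNN Shapley values (a sketch reusing the `knn_shapley` function above; `K` is a hyperparameter):
+
+ ```python
+ import numpy as np
+
+ def duplicate_and_remove(feats, noisy_labels, feats_dev, y_dev, K=10):
+     """Duplicate every training datum with both binary labels, score the
+     enlarged set with KNN Shapley, and keep only the copies with
+     non-negative value; this effectively flips wrong labels."""
+     feats_large = np.concatenate([feats, feats])
+     labels_large = np.concatenate([noisy_labels, 1 - noisy_labels])
+     values = knn_shapley(K, feats_large, labels_large, feats_dev, y_dev)
+     keep = values >= 0
+     return feats_large[keep], labels_large[keep]
+ ```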
119
+ To sum up, the Shapley value quantifies the contribution of each training datum. Low Shapley value data capture outliers and corruptions that are not consistent with the distribution of other data points. We identify and correct these outliers and corruptions to provide a clean training set.
2106.02658/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
1
+ <mxfile host="app.diagrams.net" modified="2021-01-27T01:40:34.354Z" agent="5.0 (Macintosh; Intel Mac OS X 11_0_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36" version="14.2.7" etag="Rust8zdqhbu_tVX90A_g" type="device"><diagram id="1CF9uF74o8k6k49ea022">7Vxtj9o4EP41SHcnHYpf8vZx39qrrpUqbaW2Hw3xhlxDzBlvl71ff05iA45NN+0mgSBYaYkdx9jzPGPPjAcm6Ga5ecvJavGBJTSfQC/ZTNDtBEIY+VC+lTXPdQ2AkVfXpDxLVN2u4j77j6pK3ewxS+jaaCgYy0W2MivnrCjoXBh1hHP2ZDZ7YLn5qSuSUqvifk5yu/ZzlohFXRvBcFf/F83Shf5kEMT1nSXRjVUX6wVJ2FNdVU0O3U3QDWdM1FfLzQ3NS+lpudQSeHPg7nZgnBaizQMKie8kf1RzU+MSz3qycoir8nIt6GqCrleUZ0sqKFdVH3fl66dFJuj9iszL9k8SfFm3EMtcloC8fMg2VKNZlnMyo/k1mX9LOXsskhuWs7LXghW0bM0KoRt7srwWnH2jutEEogjOUBColm/IMstLPr0lnCxZkVSfl+eNTutONGhY1igBUC7o5qAQwRYaSWrK5Hz5s2yiHgiUkD3FZ432044b2FNNFnu80OB4RPEx3Xa9g0xeKNTcCKITRFCC81C92sCYBLPAPwUYIY4NGEPPghFFoCcY8dhhnEU+9r0TgDHwXkIR9qaMvoXih3fvLSArIdNECf8lrA6Jcw+QjiQHfFN0wIOW7CKH6FAw1TN/jfACS1A0kduuKjIuFixlBcnvdrXXO1GW1Nu1ec/YSgnwHyrEs5IUeRTMFC/dZOLL3vXXsqupr0q3G9VzVXjWhULO7Mt+Ye+psrh7rCrp55qQHYK2lkM5edNWYI98rqqUpSEIT6kwuNcCbk5zIrLvZu+vQS60aP+Jk6yYwCCXI7mecQPW4N/H0ryppv/nupLHlWwA8GpTTV7fl1dp+X5LBLGpkefS2qMv6w9Zr2oTsFr0hlWoyNSn2LfUyXeok9/BQhRddKm1LsW2LoH2UHeuTPFFmdook9ytBtMm7Xde1OmAOhm608NGpB79yDLZ45YRyEcmJXAD61rL1VP7XnCzo8DqaAqaJk09R6uzijvbObWjE3CpeLF+YHwpzfexmovbaMIw5iJoEbsYQEs71BztgQyvOQB6Uw+HAYj1/1aK9Cvct+MVY+Q+DBsrBgTTKByU/viyJ7U28fSeZOxTYWvQO7fxgB0ouBJCDiNjXRl6H4jg2WY0CoWOr1DBOewnBusdUQIQHZH1dpzg5u+vo6EobkdRHPZH0cgS4G22nkvIpfMGvU+c0vVoxOnjpvnoFGcQTP3+BGo72yPXeW1DGnYlPJ7O6/HsSfi+3OeWtLJHr4qCCVJveyPhbXOnAt7QOxW0/cexsxY6WIuOyFrbtxzzTnWAoj3uVI5j6fPZqQ6Is9edynFArBNrqvBwVqSWPOWMhSk089RWJ1rYB7Mkz9JCFudSQtWBcim/bE7yK3VjmSVJtYq4UDJXltZZHlAnAeydKl9Xfy8uLq+LY+ljTg1vgC1sgevIGHUQWnYcGe8ryrvigcp5SOlewP3F3TrWWnkUeG2nUqvtXVnnNn0u2LbCNgwb63JgHwn1h2z4Q5tLSemkAm7wVyJuP4/pjy29yLb00DH9k3NyqQMcGQoR2blPIezPRIlHpxHhCWgEcnjsx9QIZHvsd8sZTRKXiTkWTQAwLMEeUBvQTzrpXanHEIx1eOv4iOlnejxuxnpXacppOq4gU+hwOR38dVs3HRG4RWb5WAmMHUvuEY8Dke3cn48R4koL6XPd9UdnhZyCEdID+92pJxEw3XGI4RSAePcyO+wu6wr95PnumNYzx0Evxkdcz+yD3veUPJSjpjmdj3ondi1nvW7DtoOqM0TWK1IYQtQ5IPOaXmV+CE9nv8nByY/x9NvvlWC8tokkkzL0jCf+TZKQNC1ztKqyGoMcfj0M3XjcoaxXaOgrk5ubYUqJ/DSKLKphB9M6+Rqh6yT8ODS7sKy3k44IGxwDHtquUQNwDNuRhTF9Pa6Z8znw1+OwK6Bw0dDz0lDkhUfVUDuScuHYuXEMx6h5JDowy1yRpQvLzoxlsLGSgWE5ZofU7ueM23kUp2ptRH401WGyvQxZRxAY+z3mHWE7P2U/f/Mjp0lWOfR/jFmyVUaX7WyFvUr2vOJRRuxpsGhiDAOJpbd9NSx0EE8hihBGIIQBxFGj/+6+1obtiNe4lpvYh9O46R16zghEh0ohi7vfuqrFvvvJMHT3Pw==</diagram></mxfile>
2106.02658/main_diagram/main_diagram.pdf ADDED
Binary file (30.1 kB). View file
 
2106.02658/paper_text/intro_method.md ADDED
@@ -0,0 +1,25 @@
1
+ # Introduction
2
+
3
+ Ideally, research in Natural Language Processing (NLP) should balance and integrate findings from powerful machine learning approaches with insights and theories from linguistics. With the enormous success of data-driven approaches over the last decades, this balance has arguably shifted excessively, with linguistic theories playing a less and less critical role. Even more importantly, few attempts have been made to improve such theories in light of recent empirical results.
4
+
5
+ In the context of discourse, two main theories have emerged in the past: The Rhetorical Structure Theory (RST) [@carlson2002rst] and PDTB [@prasadpenn]. In this paper, we focus on RST. More specifically, on whether the underlying theory can be refined in a data-driven manner.
6
+
7
+ In general, the RST discourse theory postulates a complete discourse tree for a given document. To obtain this formal representation of a document as a projective constituency tree, a given document is first separated into so-called Elementary Discourse Units (or short: EDUs), representing clause-like sentence fragments of the input document. Afterwards, the discourse tree is built by hierarchically aggregating the EDUs into larger constituents annotated with an importance indicator (in RST called nuclearity) and a relation holding between siblings in the aggregation. The nuclearity attribute in RST thereby assigns each subtree either a nucleus attribute, indicating central importance of the subtree in the context of the document, or a satellite attribute, categorizing the subtree as of peripheral importance. The relation attribute further characterizes the connection between subtrees (e.g. Elaboration, Cause, Contradiction).
8
+
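+ To make the contrast with our weighted variant concrete, an RST constituent and its W-RST counterpart could be represented as follows (a sketch with illustrative field names, not tied to any particular implementation):
+
+ ```python
+ from dataclasses import dataclass
+ from typing import Union
+
+ @dataclass
+ class RSTNode:                          # standard RST constituent
+     left: Union["RSTNode", str]         # subtree or EDU text
+     right: Union["RSTNode", str]
+     nuclearity: str                     # "N-S", "S-N", or "N-N"
+     relation: str                       # e.g. "Elaboration", "Cause"
+
+ @dataclass
+ class WRSTNode:                         # weighted (W-RST) constituent
+     left: Union["WRSTNode", str]
+     right: Union["WRSTNode", str]
+     weight: float                       # real-valued importance of `left` in [0, 1]
+     relation: str = ""                  # relations can be kept or dropped
+ ```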
9
+ One central requirement for the RST discourse theory, as for all linguistic theories, is that a trained human should be able to specify and interpret the discourse representations. While this is a clear advantage when trying to generate explainable outcomes, it also introduces problematic, human-centered simplifications; the most crude of which is arguably the nuclearity attribute, indicating the importance among siblings.
10
+
11
+ Intuitively, such a coarse (binary) importance assessment cannot represent nuanced differences in subtree importance, which can potentially be critical for downstream tasks. For instance, the importance of two nuclei siblings is rather intuitive to interpret. However, having siblings annotated as "nucleus-satellite" or "satellite-nucleus" leaves open the question of how much more important the nucleus subtree is compared to the satellite, as shown in Figure [1](#fig:example_2){reference-type="ref" reference="fig:example_2"}. In general, it is unclear (and unlikely) that the actual importance distributions between siblings with the same nuclearity attribution are consistent.
12
+
13
+ <figure id="fig:example_2" data-latex-placement="t">
14
+ <img src="Paper/example_2.png" />
15
+ <figcaption>Document <span class="math inline"><em>w</em><em>s</em><em>j</em>_0639</span> from the RST-DT corpus with inconsistent importance differences between N-S attributions. (The top-level satellite is clearly more central to the overall context than the lower-level satellite. However, both are similarly assigned the satellite attribution by at least one annotator). Top relation: Annotator 1: N-S, Annotator 2: N-N.</figcaption>
16
+ </figure>
17
+
18
+ Based on this observation, we investigate the potential of replacing the crude nuclearity assessment postulated by RST with automatically generated, real-valued importance scores in a new, **W**eighted-**RST** framework. In contrast with previous work that has assumed RST and developed computational models of discourse by simply applying machine learning to RST-annotated treebanks [@ji2014representation; @feng2014linear; @joty2015codra; @li2016discourse; @wang2017two; @yu2018transition], we rely on very recent empirical studies showing that weighted "silver-standard" discourse trees can be inferred from auxiliary tasks such as sentiment analysis and summarization (e.g. @huber2020mega).
19
+
20
+ In our evaluation, we assess both computational benefits and linguistic insights. In particular, we find that automatically generated, weighted discourse trees can benefit key NLP downstream tasks. We further show that real-valued importance scores (at least partially) align with human annotations and can interestingly also capture uncertainty in human annotators, implying some alignment of the importance distributions with linguistic ambiguity.
21
+
22
+ <figure id="fig:overall" data-latex-placement="t">
23
+ <img src="Paper/approach_overview_new.png" style="width:100.0%" />
24
+ <figcaption>Three phases of our approach to generate weighted RST-style discourse trees. Left and center steps are described in section <a href="#wrst_gen" data-reference-type="ref" data-reference="wrst_gen">3</a>, right component is described in section <a href="#eval" data-reference-type="ref" data-reference="eval">4</a>. <span class="math inline">†</span> = As in <span class="citation" data-cites="huber2020mega"></span>, <span class="math inline">‡</span> = As in <span class="citation" data-cites="marcu1999discourse"></span>, <span class="math inline">*</span> = Sentiment prediction component is a linear combination, mapping the aggregated embedding to the sentiment output. The linear combination has been previously learned on the training portion of the dataset.</figcaption>
25
+ </figure>
2112.07374/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2112.07374/paper_text/intro_method.md ADDED
@@ -0,0 +1,94 @@
1
+ # Introduction
2
+
3
+ Pose transfer, applying the desired pose of a source mesh to a target mesh, is a promising and challenging task in 3D computer vision that can be widely applied in various industrial fields. However, existing methods [\(Wang et al.](#page-8-0) [2020;](#page-8-0) [Cosmo et al.](#page-7-2) [2020;](#page-7-2) [Zhou, Bhatnagar, and Pons-Moll](#page-8-2) [2020;](#page-8-2) [Chen et al.](#page-7-3) [2021b\)](#page-7-3) only perform well within given datasets of synthesized/known pose and shape spaces, and fail to generalize robustly to other unknown spaces, which severely limits further real-world deployment.
4
+
5
+ To achieve robust performance on unknown latent spaces and other domains, as shown in Fig. [1,](#page-0-0) we propose a novel Transformer network targeting generalized 3D mesh pose transfer. Specifically, we design a novel geometry-contrastive Transformer with geometrically structured encoders that enhances the identity mesh representation under the guidance of the pose mesh via their *global geometric contrasts*. Locally, we introduce a novel central geodesic contrastive loss that improves the geometric representation by considering the *regional contrast of all the geodesic directions* of each vertex as back-propagation gradients. Furthermore, we present a latent isometric regularization module to stabilize the unreliable performance on cross-dataset pose transfer problems.
6
+
7
+ Moreover, we present a new 3D mesh dataset, i.e., SMG-3D, for quantitatively evaluating the 3D pose transfer with unknown spaces. The SMG-3D is based on daily spontaneously performed body gestures with more plausible and
8
+
9
+ Figure 2: An overview of our GC-Transformer. The left part is the whole architecture of the GC-Transformer. The right part illustrates the architecture details of one GC-Transformer decoder. The GC-Transformer borrows the idea from the work of (Dosovitskiy et al. 2021) but is extensively extended to 3D data processing tasks for both the encoders and decoders.
10
+
11
+ challenging body movements, different from the well-performed poses in (Mahmood et al. 2019; Bogo et al. 2017). We build the dataset in a semi-synthetic way to provide the ground-truth (GT) meshes necessary for training and validation. Our SMG-3D dataset can be jointly combined with other existing body mesh datasets for cross-dataset qualitative analysis.
12
+
13
+ A natural question to ask is: why not simply use purely synthesized meshes to train and evaluate the model? The short answer is that models trained on purely synthesized meshes fail in the cross-dataset task. Indeed, mesh synthesizing models like the SMPL series (Bogo et al. 2016; Zuffi et al. 2017; Pavlakos et al. 2019) can synthesize unlimited poses that cover the whole latent space, and a large-scale dataset such as AMASS (Mahmood et al. 2019) could eliminate the inconsistencies with an unknown dataset space. However, in practice, even for the small FAUST dataset with only 10 pose categories, it takes more than 26 hours to train a model (Cosmo et al. 2020) to fully learn the latent space. Thus, due to the staggering variability of poses and movements, it is not feasible to train the model with synthesized samples covering the whole pose space. It is desirable for a model to generalize directly to unknown latent spaces in a more efficient way. To this end, we propose the SMG-3D dataset to tackle the cross-dataset learning issue. It provides a challenging latent distribution over natural and plausible body poses with occlusions and self-contacts, instead of well-performed body moves as in AMASS (Mahmood et al. 2019), which could advance the research one step further toward real-world scenarios.
14
+
15
+ To summarize, our contributions are as follows:
16
+
17
+ - A novel geometry-contrastive Transformer with a positional-embedding-free architecture that achieves state-of-the-art performance on the challenging 3D pose transfer task.
18
+ - A simple and efficient central geodesic contrastive loss that further improves geometric learning by preserving the directional gradients of the 3D vertices.
19
+ - A challenging 3D human body mesh dataset (i.e., SMG-3D) providing an unknown space of natural, plausible body poses with challenging occlusions and self-contacts for cross-dataset qualitative evaluation.
20
+ - A new latent isometric regularization module for adapting to challenging unknown spaces on cross-dataset tasks.
21
+
22
+ # Method
23
+
24
+ We define a 3D parametric mesh as $M(\alpha,\beta)$ , where $\alpha,\beta$ denote the parameters of identity (i.e., shape) and pose. Let $M^1(\alpha_{pose},\beta_{pose})$ be the mesh with the desired pose for style transfer and $M^2(\alpha_{id},\beta_{id})$ be the mesh with its identity to preserve. Then the polygon mesh $M'(\alpha_{id},\beta_{pose})$ is the target to generate. The goal of pose transfer is to learn a deformation function f which takes a pair $M^1$ and $M^2$ and produces a new mesh M', so that the geodesic preservation of the resulting mesh M' is identical to the source one $M^2$ and the pose style is identical to $M^1$ .
25
+
26
+ $$f(M^{1}(\alpha_{pose}, \beta_{pose}), M^{2}(\alpha_{id}, \beta_{id})) = M'(\alpha_{id}, \beta_{pose}).$$
27
+ (1)
28
+
29
+ Below, we will first introduce how to use the Transformer architecture-based model, called Geometry-Contrastive Transformer (GC-Transformer) for learning the deformation
30
+
31
+ function f, then the Central Geodesic Contrastive (CGC) loss for detailed geometric learning, and at last, the Latent Isometric Regularization (LIR) module for robust pose transfer on cross-dataset tasks.
32
+
33
+ An overview of the GC-Transformer is depicted in Fig. 2. Our GC-Transformer consists of two key components, one is a structured 3D mesh feature encoder and the other one is a Transformer decoder.
34
+
35
+ Structured 3D Encoder. As mentioned, existing 3D Transformers need computationally demanding embeddings to encode vertex positions, and thus in practice can only process 'toy' meshes. Inspired by NeuralBody (Peng et al. 2021), which uses structured latent codes to preserve the vertex topology, we modify the conventional PointNet (Qi et al. 2017a) into a structured 3D encoder that captures the vertex topology by implementing depth-wise 1D convolution instead of the redundant positional embeddings commonly used in conventional Transformers. Meanwhile, we replace the batch normalization layers with Instance Normalization (Ulyanov, Vedaldi, and Lempitsky 2016) layers to preserve the instance style, a technique widely used in style transfer tasks (Huang and Belongie 2017; Park et al. 2019). The resulting latent embedding vector Z with dimension $N_{latent}$ from the encoder is dimensionally reduced with a 1D convolution and fed into the following GC-Transformer decoder. In this way, large meshes with fine-grained details can be handled freely by our GC-Transformer while preserving the vertex structures.
36
+
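+ A rough sketch of such an encoder block (channel sizes and layer count are illustrative, not the exact released architecture):
+
+ ```python
+ import torch.nn as nn
+
+ class StructuredEncoder(nn.Module):
+     """PointNet-style encoder with per-vertex 1D convolutions (preserving
+     the vertex ordering/topology) and InstanceNorm in place of BatchNorm."""
+     def __init__(self, dims=(3, 64, 128, 1024)):
+         super().__init__()
+         layers = []
+         for d_in, d_out in zip(dims[:-1], dims[1:]):
+             layers += [nn.Conv1d(d_in, d_out, kernel_size=1),
+                        nn.InstanceNorm1d(d_out),
+                        nn.ReLU(inplace=True)]
+         self.net = nn.Sequential(*layers)
+
+     def forward(self, verts):        # verts: (B, 3, N) mesh vertices, fixed order
+         return self.net(verts)       # latent embedding Z of shape (B, dims[-1], N)
+ ```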
37
+ **GC-Transformer Decoder.** We encourage readers to refer to (Dosovitskiy et al. 2021) for a standard Transformer structure, which achieves state-of-the-art results on many tasks (Li et al. 2021; Yang et al. 2021). We propose the GC-Transformer decoder, which inherits the classical structure with customized designs for 3D meshes. The structure of the GC-Transformer decoder is shown in Fig. 2.
38
+
39
+ The core difference between the GC-Transformer and a standard Transformer is the design of the multi-head self-attention. To learn the correlations between the given meshes
40
+
41
+ <span id="page-3-2"></span><span id="page-3-1"></span>![](_page_3_Figure_0.jpeg)
42
+
43
+ Figure 4: A comparison of different losses for both the neighbor vertex sampling strategy and the local inconsistency. Our CGC loss considers the inconsistencies of all the geodesic directions at each vertex, so that direction gradients can be preserved in the back-propagation. Results show that CGC loss can make the local details more tight and realistic.
44
+
45
+ for geometric deformation, the model should be able to perceive the geometric information from the two meshes. Thus, we make the inputs to the GC-Transformer the latent embedding vectors of *two meshes* instead of a single input as in the classical Transformer. Besides, as this is a style transfer task, we utilize the Instance Norm introduced by (Huang and Belongie 2017) as our normalization layers. At last, to preserve the structural information of 3D data, the MLP layers are replaced with 1D convolutional layers.
46
+
47
+ We denote the latent embedding vectors of the pose mesh and identity mesh from the encoders as $Z_{pose}$ and $Z_{id}$ respectively. We feed the two embedding vectors into different 1D convolution layers to generate the representations $\mathbf{qkv}$ for the standard multihead self-attention (Vaswani et al. 2017). The query $\mathbf{q}$ is from $Z_{pose}$ , and the value $\mathbf{v}$ and key $\mathbf{k}$ are from $Z_{id}$ . Then, the attention weights $A_{i,j}$ based on the geometric pairwise similarity between two elements of $\mathbf{q}$ and $\mathbf{k}$ is given with the following formula:
48
+
49
+ $$\mathbf{A}_{i,j} = \frac{\exp(\mathbf{q}_i \mathbf{k}_j)}{\sum_{j'=1}^{n} \exp(\mathbf{q}_i \mathbf{k}_{j'})}.$$
50
+ (2)
51
+
52
+ After this, a matrix multiplication between v and the transpose of ${\bf A}$ is conducted to perceive the geometric inconsistency between meshes. Finally, we weight the result with a scale parameter $\gamma$ and conduct an element-wise sum operation with the original latent embedding $Z_{pose}$ to obtain the refined latent embedding $Z_{pose}'$
53
+
54
+ $$Z'_{pose} = \gamma \sum_{j=1}^{n} \mathbf{A}_{i,j} \mathbf{v}_j + Z_{pose}, \tag{3}$$
55
+
56
+ where $\gamma$ is initialized as 0 and updated gradually during the training with gradients. The obtained $Z'_{pose}$ is followed by typical Transformer operators as introduced above Fig. 2 with a convolutional layer and Tanh activation, generating the final output M'. Please refer to the supplementary materials for more implementing details.
57
+
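+ In PyTorch-like form, the decoder's cross-mesh attention of Eqs. (2)-(3) reduces to the following (a sketch; the module layout is our own):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class GCAttention(nn.Module):
+     """Cross-mesh attention: q from the pose embedding, k/v from the
+     identity embedding, with a gamma-gated residual (gamma initialized to 0)."""
+     def __init__(self, dim):
+         super().__init__()
+         self.q = nn.Conv1d(dim, dim, 1)            # 1D convs instead of MLPs
+         self.k = nn.Conv1d(dim, dim, 1)
+         self.v = nn.Conv1d(dim, dim, 1)
+         self.gamma = nn.Parameter(torch.zeros(1))
+
+     def forward(self, z_pose, z_id):               # (B, dim, N) latent embeddings
+         q, k, v = self.q(z_pose), self.k(z_id), self.v(z_id)
+         attn = torch.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # Eq. (2)
+         out = torch.bmm(v, attn.transpose(1, 2))   # v times A^T, as in the text
+         return self.gamma * out + z_pose           # Eq. (3): gated residual
+ ```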
58
+ In this crossing manner, the geometry-aware feature code can be consistently rectified by the original identity mesh and its latent embedding representation. Note that, different from previous attention-based modules (Wang et al. 2018b; Tang et al. 2020b; Huang and Belongie 2017; Tang et al. 2020a), our GC-Transformer not only computes the pair-wise correlations and contrasts in a crossing-mesh way but also fully preserves the local geometric
59
+
60
+ details with the residual layer. Most importantly, our GC-Transformer is designed for 3D mesh processing which has never been attempted in these works. Note that input mesh vertices are all shuffled randomly to ensure the network is vertex-order invariant.
61
+
62
+ Most of the existing 3D mesh representation learning losses, such as the triangle regularization loss, edge loss, Chamfer loss and Laplacian loss (Wang et al. 2018a, 2020; Groueix et al. 2018; Sorkine 2005; Zhou et al. 2020), discard the directional gradient information of the 3D vertices. They only compare scalar (or weak vector) differences of the mesh vertices, such as one-ring geodesic lengths, to construct the objective function, while the convexity of the mesh surface, which contains rich directional gradient information, is not utilized. To this end, inspired by the strong performance of central difference convolution (Yu et al. 2020, 2021a,b), which considers the directional difference of the depth space, we propose to compare the vector differences of the vertex topology with a simple yet efficient central geodesic contrastive loss, defined as below:
63
+
64
+ <span id="page-3-0"></span>
65
+ $$\mathcal{L}_{contra} = \frac{1}{V} \sum_{\mathbf{p}} \sum_{\mathbf{u} \in \Gamma(\mathbf{p})} \sqrt{u_{M'}^2 + u_M^2 - 2u_{M'}u_M \cdot cos(\theta)},$$
66
+ (4)
67
+
68
+ where $\Gamma(\mathbf{p})$ denotes the neighbor edges of vertex $\mathbf{p}$ and V is the total vertex number of the mesh. $u_M$ denotes an edge of mesh M, and $\theta$ denotes the included angle between the edges $u_M$ and $u_{M'}$ . In practice, $\mathcal{L}_{contra}$ can be easily calculated by taking the vector difference of $u_M$ and $u_{M'}$ within the coordinate frame of each vertex p and dividing by the total vertex number as a global normalization.
69
+
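+ A direct sketch of Eq. (4) in this vector-difference form (the edge-list convention is ours; each edge should be listed once per incident vertex, i.e., in both directions):
+
+ ```python
+ import torch
+
+ def cgc_loss(verts_pred, verts_gt, edges):
+     """Central geodesic contrastive loss: the norm of the vector difference
+     between predicted and GT edges, summed over the edges incident to each
+     vertex and normalized by the vertex count V.
+     verts_*: (V, 3) vertex coordinates; edges: (E, 2) vertex-index pairs."""
+     e_pred = verts_pred[edges[:, 1]] - verts_pred[edges[:, 0]]  # u_{M'}
+     e_gt = verts_gt[edges[:, 1]] - verts_gt[edges[:, 0]]        # u_M
+     # sqrt(|u_{M'}|^2 + |u_M|^2 - 2 u_{M'}.u_M cos(theta)) equals |u_{M'} - u_M|,
+     # so Eq. (4) is computed directly as a vector norm.
+     return (e_pred - e_gt).norm(dim=-1).sum() / len(verts_pred)
+ ```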
70
+ Our CGC loss has three improvements compared to existing losses: 1) the full inconsistencies of the vertex vectors are calculated to preserve the direction gradient; 2) each direction of the vertex is considered separately instead of via a simple sum-up; 3) the sampling method for the neighbor vertices of **p** in Eq. (4) is different: the CGC loss samples all the vertices connected to **p**, resulting in a flexible number *N* of neighbor vertices, while in (Wang et al. 2018a; Groueix et al. 2018) the neighbors are within the mesh triangle of vertex **p** and fixed to 3. Please refer to Fig. 4 for a better understanding. A point-wise *L*2 reconstruction loss on mesh vertices can only capture the absolute distance in the coordinate space. Contrastively, our
71
+
72
+ <span id="page-4-2"></span>CGC loss captures the inconsistencies of all the geodesic directions at each vertex, so that direction gradients can be preserved in the back-propagation. Note that our CGC loss is similar to the Laplacian loss but preserves full vector differences without Laplacian normalization, and thus is not limited to smooth surfaces. As shown in Fig. 4, our CGC loss offers additional strong supervision, especially in tightening the output mesh surface.
73
+
74
+ **Overall Objective Function.** With our proposed CGC loss, we define the full objective function as below:
75
+
76
+ <span id="page-4-0"></span>
77
+ $$\mathcal{L}_{full} = \lambda_{rec} \mathcal{L}_{rec} + \lambda_{edge} \mathcal{L}_{edge} + \lambda_{contra} \mathcal{L}_{contra}, \quad (5)$$
78
+
79
+ where $\mathcal{L}_{rec}$ , $\mathcal{L}_{edge}$ and $\mathcal{L}_{contra}$ are the three losses used as our full optimization objective, including reconstruction loss, edge loss and our newly proposed CGC loss. $\lambda$ is the corresponding weight of each loss. In Eq. (5), reconstruction loss $\mathcal{L}_{rec}$ is the point-wise L2 distance and the edge loss (Groueix et al. 2018) is an edge-wise regularization between the GT meshes and predicted meshes.
80
+
81
+ Although existing pose transfer methods can deal with fully synthesized/known pose spaces, they fail to perform robustly on pose spaces that differ from the training one. To bring the 3D analysis of human behaviors closer to real-world implementations, we propose a new SMG-3D dataset as well as an LIR module to address the cross-dataset issue.
82
+
83
+ A New SMG-3D Dataset. The main contribution of the SMG-3D dataset is providing an alternative benchmark for cross-dataset tasks by providing standard GTs under a challenging latent pose distribution (unlike perfectly synthesized/performed known distributions). As shown in Fig. 3, SMG-3D is derived from an existing 2D body pose dataset called the SMG dataset (Chen et al. 2019), which consists of spontaneously performed body movements with challenging occlusions and self-contacts. Specifically, we first adopt the 3D mesh estimation model STRAPS (Sengupta, Budvytis, and Cipolla 2020) to generate 3D mesh estimations from the original 2D images of SMG. Then, we select 200 poses and 40 identities as templates to form the potential pose space and optimize them with Vposer (Pavlakos et al. 2019). At last, the generated 3D meshes are decomposed into numerical registrations as latent parameters, which are paired to synthesize the resulting 8,000 body meshes via the SMPL model (Bogo et al. 2016), each with 6,890 vertices. Compared to synthesized/well-performed meshes, our in-the-wild 3D body meshes are more practical and challenging, with large diversity and tricky occlusions, for providing the unknown latent space. Please see the supplementary materials for more about our dataset.
84
+
85
+ **Latent Isometric Regularization Module.** When the poses and shapes come from unknown latent spaces, existing methods suffer from degeneracy in varying degrees (see Fig. 6). We address this issue by introducing the LIR module, shown in the right part of Fig. 3, which can aggregate the data distributions of the target set and the source set. The LIR can be *stacked onto existing standard models* to enhance cross-dataset performance. Specifically, the difference between the two datasets is obtained by comparing the latent pose codes $z_M$
86
+
87
+ <span id="page-4-1"></span>Table 2: Intra-dataset performances on SMG-3D and SMPL-NPT datasets. "NPT MP" stands for NPT model with max pooling layers. Note that the "unseen" setting is still within the same dataset with similar data distributions.
88
+
89
+ | PMD↓ (×10<sup>-4</sup>) | Seen: NPT-MP (Wang et al. 2020) | Seen: NPT (Wang et al. 2020) | Seen: GC-Transformer | Unseen: NPT-MP (Wang et al. 2020) | Unseen: NPT (Wang et al. 2020) | Unseen: GC-Transformer |
+ |---|---|---|---|---|---|---|
+ | SMG-3D | 70.3 | 62.1 | 30.7 | 120.3 | 94.6 | 52.8 |
+ | SMPL-NPT | 2.1 | 1.1 | 0.6 | 12.7 | 9.3 | 4.0 |
93
+
94
+ and $z_{M'}$, belonging to the pose mesh M from the source dataset and the shape mesh M' from the target set, respectively. The target shape mesh is fed into the GC-Transformer along with another randomly sampled mesh from the target set to obtain a newly generated mesh M'. This is executed iteratively until the difference between the latent pose codes $z_{M'}$ and $z_{M}$ converges to less than $\theta$, resulting in a normalized target set. In this way, the latent pose distribution of the target set is regulated while its isometric information is still preserved. Essentially, our LIR module serves as a domain-adaptive normalization that warms up the unknown target set to better fit the model trained on the source pose space.
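+ A rough sketch of this warm-up loop (function names, the pose-code extractor, and the iteration cap are our illustrative assumptions, not the exact released procedure):
+
+ ```python
+ import random
+
+ def lir_normalize(gc_transformer, encode_pose, target_set, source_mesh,
+                   theta=1e-2, max_iters=50):
+     """Iteratively re-generate each target mesh against randomly sampled
+     target meshes until its latent pose code z_{M'} is within theta of the
+     source pose code z_M, yielding a normalized target set."""
+     z_src = encode_pose(source_mesh)
+     normalized = []
+     for mesh in target_set:
+         for _ in range(max_iters):
+             if (encode_pose(mesh) - z_src).norm() < theta:
+                 break
+             mesh = gc_transformer(id_mesh=mesh, pose_mesh=random.choice(target_set))
+         normalized.append(mesh)
+     return normalized
+ ```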
2203.04251/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2203.04251/paper_text/intro_method.md ADDED
@@ -0,0 +1,80 @@
1
+ # Introduction
2
+
3
+ We have seen great progress in video action classification [@i3d; @action_recog1; @action_recog2; @action_recog4; @action_recog3; @vyas2020multi; @demir2021tinyvirat; @tirupattur2021modeling], where the availability of large-scale datasets is one of the enabling factors [@jhmdb; @ucf101; @k400]. Video action detection, on the other hand, is much more challenging, as spatio-temporal localization must be performed on the video. In addition, obtaining large-scale datasets for this problem is even more challenging, as annotating each frame is a hugely time- and cost-intensive task.
4
+
5
+ <figure id="fig:varypercentage" data-latex-placement="t">
6
+ <p><img src="images/fig11.png" style="width:23.5%" alt="image" /> <img src="images/fig12.png" style="width:23.5%" alt="image" /></p>
7
+ <figcaption>A comparison of the proposed semi-supervised method with the supervised baseline showing absolute gain in f-mAP and v-mAP for varying numbers of labeled samples on the UCF-101-24 dataset. The proposed method outperforms the supervised baseline and, using merely 20% of labeled samples, matches the performance of the fully supervised method trained on 100% labels. Sup is supervised and Sup100 is supervised with 100% labels. </figcaption>
8
+ </figure>
9
+
10
+ In this work, we focus on semi-supervised learning for video action detection, which makes use of a small set of annotated samples along with several unlabeled samples. For the annotated set, we have video-level class labels as well as frame-level localizations. To the best of our knowledge, this is the *first work* which focuses on semi-supervised learning for video action detection.
11
+
12
+ Semi-supervised learning has been successfully studied for image classification [@mixmatch; @uda; @fixmatch] with some recent works in object detection [@co_ssd; @semiobj1; @semiobj2; @semiobj3; @semiobj4]. Pseudo-labeling [@pseudocls; @rizve2020defense] and consistency regularization [@fixmatch; @uda; @semiobj1] are the two main approaches used for semi-supervised learning. Whereas pseudo-labeling relies on several training iterations, consistency regularization relies on single-step training. Since training a video action detection model is already *computationally expensive* due to the high-dimensional input, we propose a *consistency-based* approach for an efficient solution.
13
+
14
+ Video action detection requires a sample level class prediction as well as a spatio-temporal localization on each frame. Therefore, we investigate two different consistency constraints to utilize unlabeled samples; *classification consistency* and *spatio-temporal localization consistency*. Consistency regularization for classification has been found very effective [@mixmatch; @fixmatch], however, it relies on a rich set of augmentations. Extending these augmentations to the video domain for spatio-temporal consistency is not always feasible.
15
+
16
+ We propose a simple formulation for spatio-temporal consistency where it is computed for each pixel in the video. Extending the traditional consistency objective to the spatio-temporal domain could capture pixel-level variations, but it fails to capture any *temporal constraints*, as the consistency is computed independently for each pixel. To address this issue, we explore the *temporal continuity* of actions in videos. We argue that motion has some temporal continuity, and we attempt to utilize this to regularize the spatio-temporal consistency. We investigate two different ways to capture motion continuity: *temporal coherence* and *gradient smoothness*. Temporal coherence aims at refining the uncertain boundary regions that distinguish foreground and background, and gradient smoothness enforces temporally consistent localization.
17
+
18
+ The proposed method is trained end-to-end utilizing both labeled and unlabeled samples without the need for any iterations which makes it efficient. We demonstrate its effectiveness with an extensive set of experiments on two different datasets, UCF101-24 and JHMDB-21. We show that with *limited labels* it can achieve competitive performance when compared with *fully-supervised* methods outperforming all the *weakly-supervised* approaches. In addition, we also demonstrate the *generalization capability* of the proposed method on Youtube-VOS for video object segmentation. We make the following contributions in this work,
19
+
20
+ - We propose a simple *end-to-end approach* for semi-supervised video action detection. To the best of our knowledge, this is the *first* work focusing on this problem.
21
+
22
+ - We investigate two different consistency regularization approaches for video action detection; *classification consistency* and *spatio-temporal consistency*.
23
+
24
+ - We propose two novel regularization constraints for spatio-temporal consistency, *temporal coherency* and *gradient smoothness*, which focus on the *temporal continuity* of actions in videos.
25
+
26
+ # Method
27
+
28
+ Given a video $v=(v_{1},v_{2}..., v_{n})$ with $n$ frames, we want to perform spatio-temporal localization, which provides a class label $p$ for the whole video and a localization map $l$ on each frame $v_{i}$. The localization map $l$ can be a pixel-wise prediction [@jhmdb] or a bounding box [@ucf101]. In semi-supervised learning, the dataset consists of a labeled (${D}_{L}$) and an unlabeled (${D}_{UL}$) set. Let us denote the whole training set with $X$, the labeled subset as $X_{L}: \{ v_{l}^{0}, v_{l}^{1}, ..., v_{l}^{N_{l}}\}$, and the unlabeled subset as $X_{U}: \{ v_{u}^{0}, v_{u}^{1}, ..., v_{u}^{N_{u}}\}$. We want to utilize both these sets to train an action detection model $M$.
29
+
30
+ Each training sample $v$ is augmented to get a second view $v^{'} = A(v)$. The action detection model $M$ is used to predict a class label and a spatio-temporal localization, $cls$, $loc$ = $M(v)$, for each sample $v$. A traditional supervised loss is computed for classification $(\mathcal{L}_{cls}^{l})$ and localization $(\mathcal{L}_{loc}^{l})$ for a labeled sample. We utilize consistency regularization for both labeled and unlabeled samples. We calculate the difference between a sample $(v_{u})$ and its augmented view $(v^{'}_{u})$ for consistency. We investigate two different consistency losses for action detection: classification $(\mathcal{L}_{cls}^{const})$ and spatio-temporal $(\mathcal{L}_{loc}^{const})$. An overview of the proposed approach is shown in figure [2](#fig:architexture){reference-type="ref+label" reference="fig:architexture"}. Next, we go through in detail the action detection model $M$ and these two consistency regularization loss terms.
31
+
32
+ We propose a simple action detection model $(M)$ based on VideoCapsuleNet [@duarte2018videocapsulenet]. VideoCapsuleNet is a 3D convolution based encoder-decoder architecture. It utilizes spatio-temporal features for detecting and localizing actions in a video. Although it is a simple architecture, the use of 3D capsule routing increases the computation overhead significantly. We propose to use 2D routing [@2drouting] instead of 3D routing after pooling the temporal dimension of the features, and found it to be more efficient without much performance drop. We utilize this adapted model in our experiments. This model $M$ provides a classification prediction $p$ and a spatio-temporal localization $l$ for an input video.
33
+
34
+ We want the classification prediction for a sample and its augmented view to be similar. We look at the latent features of the original view ${feat(X)}$ and the augmented view $feat(X^{'})$ from the network. The intuition is that the variation between their distributions should be minimal. To enforce this, we employ the Jensen-Shannon divergence (JSD) to compute the difference between them. Using JSD, the classification consistency loss $(\mathcal{L}_{cls}^{const})$ is defined as: $$\begin{equation}
35
+ \label{eqn:clsconst}
36
+ \mathcal{L}_{cls}^{const} = \mathcal{L}_{JSD} = JSD(feat(X), feat(X^{'})).
37
+ \end{equation}$$
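+ As a concrete illustration, the following PyTorch-style sketch computes this loss; the function name, tensor shapes, and the softmax normalization of the latent features are our assumptions rather than details fixed in the text.
+
+ ```python
+ import torch.nn.functional as F
+
+ def jsd_consistency_loss(feat_x, feat_x_aug):
+     """Sketch of the classification consistency loss L_cls^const (JSD)."""
+     p = F.softmax(feat_x, dim=-1)      # distribution for the original view
+     q = F.softmax(feat_x_aug, dim=-1)  # distribution for the augmented view
+     m = 0.5 * (p + q)                  # mixture distribution
+     # JSD(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M);
+     # F.kl_div(log M, P) computes KL(P || M).
+     return 0.5 * (F.kl_div(m.log(), p, reduction="batchmean")
+                   + F.kl_div(m.log(), q, reduction="batchmean"))
+ ```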
38
+
39
+ In this consistency constraint, the network learns to detect spatio-temporal localization for multiple views of a video. Using a sample $(v)$, the action detection network $(M)$ outputs a localization map $(l(v))$, which is a pixel-wise prediction, where each pixel has a probability of being action or non-action. If we augment the original sample $(v)$, the model should be able to consistently predict the action region $(l(v^{'}))$. Using spatio-temporal consistency, we propose to bring these predictions close to each other. First, considering spatial consistency alone, we need to evaluate a pixel-wise difference between the two predicted localization maps of the augmented view $(loc(X^{'}))$ and the original view $(loc(X))$.
40
+
41
+ To compare the predictions, we need to invert the data augmentation for the augmented view $(loc(X^{'}))$ so that the mapping between pixel locations is the same while calculating the difference. To minimize this difference in predictions, we use an L2 loss. The spatio-temporal consistency loss $(\mathcal{L}_{loc}^{const})$ is defined as, $$\begin{equation}
42
+ \mathcal{L}_{loc}^{const} = \mathcal{L}_{L2} = L2(loc(X), (loc(X^{'})^{-1})),
43
+ \label{eqn:locconst}
44
+ \end{equation}$$ where $loc(X^{'})^{-1}$ indicates the reversal of the augmentations.
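+ A minimal sketch of this loss, assuming for illustration that the augmentation is an invertible spatial transform such as a horizontal flip (our choice, not one mandated by the text):
+
+ ```python
+ import torch
+
+ def spatiotemporal_consistency_loss(loc_x, loc_x_aug):
+     """Sketch of L_loc^const: pixel-wise L2 between localization maps.
+
+     loc_x, loc_x_aug: (B, T, H, W) per-pixel action probabilities, where
+     the augmented view is assumed to be a horizontal flip of the original.
+     """
+     loc_x_aug = torch.flip(loc_x_aug, dims=[-1])  # invert the augmentation
+     return ((loc_x - loc_x_aug) ** 2).mean()      # pixel-wise L2 difference
+ ```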
45
+
46
+ The spatio-temporal consistency defined above ([\[eqn:locconst\]](#eqn:locconst){reference-type="ref+label" reference="eqn:locconst"}) only captures the spatial variance between different predicted localization maps and doesn't enforce any temporal constraints. Thus, it effectively works similarly to any consistency-based object detection for images. However, we have a third dimension in videos, the temporal dimension, and moving along this dimension, we can enforce *continuity* and *smoothness* constraints. It means that the predictions should not only be continuous, but the transition across frames should be smooth as well.
47
+
48
+ Therefore, we explore *temporal continuity* of actions in a video to effectively utilize spatio-temporal consistency. We focus on two different aspects of temporal continuity, *temporal coherency* and *gradient smoothness*. Temporal coherency captures the relative change in the boundary region of actions across time and helps in refining the detection boundaries. On the other hand, gradient smoothness helps in the detection of abrupt changes in predictions across time.
49
+
50
+ Temporal coherence is described as the relative displacement of the foreground pixels (action region) in the temporal dimension over a finite number of frames $(f_{n})$. We compute the variance of the pixels in the current frame by measuring the relative shift in their position in future and past frames. This pixel-wise variance is computed for all the pixels in a video and is termed the variance map $\mathcal{M}_{var}$. The variance map $\mathcal{M}_{var}$ of a video attends to *short-term fine-grained changes*, concentrating on the continuity of predictions. Analyzing the variance of a particular frame, it will have two distinct regions ([2](#fig:architexture){reference-type="ref+label" reference="fig:architexture"}), *unambiguous* and *ambiguous*. If the model is confident that a pixel is action or non-action, we call it *unambiguous*; otherwise we describe it as *ambiguous*. Since the model is already confident on unambiguous regions, we look into the latter. Some of these ambiguous regions will depict the boundaries connecting the foreground and background. Using the variance map, we aim to give more *attention* to these regions. This will help the model exploit the ambiguity in the spatio-temporal dimensions.
51
+
52
+ We utilize the variance map as attention to regularize the spatio-temporal consistency loss. This regularized loss $\mathcal{L}_{var}^{const}$ is defined as $$\begin{equation}
53
+ \footnotesize
54
+ \mathcal{L}_{var}^{const} = w \cdot (\mathcal{M}_{var} \odot \mathcal{L}_{L2}) + (1 - w) \cdot (\mathcal{L}_{L2}),
55
+ \label{eqn:varconst}
56
+ \end{equation}$$ where the mask $\mathcal{M}_{var}$ is calculated as: $$\begin{equation}
57
+ \footnotesize
58
+ \mathcal{M}_{var} = \frac{\displaystyle\sum_{i=1}^{n}(loc_i - \mu_{n})^2} {n}.
59
+ \end{equation}$$ Here, $loc_{i}$ represents the localization on frame $i$ for which the variance is computed, and $n$ represents the total number of frames. $\mu_{n}$ represents the average over the $n$ frames. $w$ indicates the weight factor balancing the temporal coherency term and the non-attentive L2 loss. However, at the beginning of training, the model will only have primitive knowledge of the spatial localization of actions. Therefore, in the initial phase of training, we start with $w=0$, where every pixel in the video has equal importance. As the training progresses, the model can recognize the coarse localization of actions but is still unsure of boundary regions. Therefore, we exponentially ramp up the weight $(w)$ of the temporal coherence attention mask $(\mathcal{M}_{var})$ used for the L2 loss throughout the training, subsequently reducing the effect of the non-attentive L2 loss. Finally, to exploit longer temporal information, we make use of the augmented view. We reverse its spatial augmentation, flip it temporally, attach it to the original view (excluding the first and last frames), and calculate the variance for this longer clip. Since this new clip can be used to make a repetitive cycle, it is termed *cyclic variance*.
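+ The following sketch shows one way to realize the variance map and the ramped attention weighting; the tensor layout and the external ramp-up schedule for $w$ are our assumptions:
+
+ ```python
+ import torch
+
+ def coherency_weighted_loss(loc_x, loc_x_aug_inv, w):
+     """Sketch of L_var^const with the variance map M_var as attention.
+
+     loc_x, loc_x_aug_inv: (B, T, H, W) localization maps (augmentation
+     already reversed); w is ramped up exponentially from 0 during training.
+     """
+     m_var = loc_x.var(dim=1, unbiased=False, keepdim=True)  # M_var over the T frames
+     l2 = (loc_x - loc_x_aug_inv) ** 2                       # per-pixel L2 map
+     return w * (m_var * l2).mean() + (1.0 - w) * l2.mean()
+ ```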
60
+
61
+ Taking a deeper look into the temporal aspects of localization, the transition of the actor localization should be smooth. To maintain this smoothness constraint, we analyze the change in the output localization probability score maps using second-order gradients. The gradient reflects the rate and direction of change. The first-order gradient of a spatio-temporal region along the temporal dimension provides a temporal gradient flow map. Since the offset is small in the temporal dimension, the first-order gradient map should be smooth. Taking the second-order gradient signifies the change in the first-order gradient. As the offset is small, the second-order gradient should be close to zero. The spikes in the second-order gradient map indicate breaks in the continuity of the temporal gradient flow map. We utilize this map $\mathcal{M}_{grad}$ as an *attention* to enforce the *long-term smoothness* of spatio-temporal localization. We calculate the gradient smoothness consistency loss as $$\begin{equation}
62
+ \footnotesize
63
+ \mathcal{L}_{grad}^{const} =(\mathcal{M}_{grad} \odot \mathcal{L}_{L2}),
64
+ \label{eqn:gradconst}
65
+ \end{equation}$$ where mask $\mathcal{M}_{grad}$ is calculated as $$\begin{equation}
66
+ \footnotesize
67
+ \mathcal{M}_{grad} = \frac{\partial^2 (loc)}{\partial t^2}, \quad \text{where} \quad \frac{\partial (loc)}{\partial t} = \frac{loc_{t+1} - loc_{t-1}}{2}.
68
+ \label{eqn:gradmask}
69
+ \end{equation}$$ Here, the first-order partial derivative $\frac{\partial (loc)}{\partial t}$ is approximated using a central difference derivative mask.
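+ A sketch of the second-order gradient mask via central differences along time; dropping the boundary frames instead of padding is our simplification:
+
+ ```python
+ import torch
+
+ def gradient_smoothness_mask(loc):
+     """Sketch of M_grad for loc of shape (B, T, H, W)."""
+     grad1 = (loc[:, 2:] - loc[:, :-2]) / 2.0      # central difference: d(loc)/dt
+     grad2 = (grad1[:, 2:] - grad1[:, :-2]) / 2.0  # second-order temporal gradient
+     return grad2.abs()  # spikes indicate breaks in temporal continuity
+ ```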
70
+
71
+ To formalize the final training objective, we have supervised losses and consistency losses. We calculate the supervised loss for classification $(\mathcal{L}_{cls}^{l})$ and localization $(\mathcal{L}_{loc}^{l})$. For consistency, we have classification $(\mathcal{L}_{cls}^{const})$, spatio-temporal $(\mathcal{L}_{loc}^{const})$, temporal coherency $(\mathcal{L}_{var}^{const})$ and gradient smoothness loss $(\mathcal{L}_{grad}^{const})$. The overall supervised loss is computed as $$\begin{equation}
72
+ \mathcal{L}_{labeled} = \mathcal{L}_{cls}^{l} + \mathcal{L}_{loc}^{l},
73
+ \label{eqn:labeledloss}
74
+ \end{equation}$$ and the combined consistency loss is computed as $$\begin{equation}
75
+ \mathcal{L}_{const} = \lambda_{1} \mathcal{L}_{cls}^{const} + \lambda_{2} (\mathcal{L}_{var}^{const}/\mathcal{L}_{grad}^{const}),
76
+ \label{eqn:totalconstloss}
77
+ \end{equation}$$ where $\lambda_{1}$ and $\lambda_{2}$ are weight parameters for the classification and spatio-temporal consistency respectively, and the spatio-temporal term is either the temporal coherency loss $(\mathcal{L}_{var}^{const})$ or the gradient smoothness loss $(\mathcal{L}_{grad}^{const})$. Finally, the overall training objective is a combination of these two, $$\begin{equation}
78
+ \label{final_loss}
79
+ \mathcal{L}_{total} = \mathcal{L}_{labeled} + \mathrm{\lambda} \mathcal{L}_{const}.
80
+ \end{equation}$$ Here $(\lambda)$ is a weight parameter used for the consistency loss.
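+ Putting the pieces together, the overall objective can be sketched as below; the weights are hyperparameters, and only one of the two regularized spatio-temporal terms is used at a time:
+
+ ```python
+ def training_objective(l_cls_sup, l_loc_sup, l_cls_const, l_st_const,
+                        lam1, lam2, lam):
+     """Sketch of the total loss: supervised terms plus weighted consistency.
+
+     l_st_const is either the temporal coherency loss L_var^const or the
+     gradient smoothness loss L_grad^const.
+     """
+     l_labeled = l_cls_sup + l_loc_sup                  # supervised loss
+     l_const = lam1 * l_cls_const + lam2 * l_st_const   # combined consistency
+     return l_labeled + lam * l_const                   # final objective
+ ```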
2203.12719/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-03-07T10:13:38.216Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36" etag="4cR433i_4nn-KxW8-LIm" version="16.6.6" type="device"><diagram id="Du5jaICsAVYxnHwnBHig" name="Page-1">7V1bc+I4Gv011M4+xGVbli+PgU66eybZSU26arb7ZcpgAd44mDGiQ+bXr2RsfBMgOb5hnE5Vg+zIoHO+uy4jMHndfQ7s9fLRd5A3UmVnNwKfRqqq6ZpO/qMt7/sWVVeMfcsicJ19m5I0PLv/oKhRjlq3roM2mRux73vYXWcbZ/5qhWY402YHgf+WvW3ue9mnru1F9EQ5aXie2R4q3Pan6+DlvtWEqbu/IHexjJ+syNGVVzu+Oepis7Qd/y31LHA3ApPA9/H+1etugjw6evG47Du6P3L18MECtMI8f6DdfJus1prqfEfqr7/+vPn8/evdjaHtu/lpe9voG0efFr/HQxD425WDaC/yCIzfli5Gz2t7Rq++EdRJ2xK/euSdQl569hR5Y3v2sgj/bOJ7fhB2A+7v76BlkVs2OPBfUOqKLOs6AIcr8TjrpGXuel5858pfkWeOi987GoqfKMBol2qKxuEz8l8RDt7JLdHVG8vSJAPu/yxi5g3Qoq/+luCsmkCKblumUY7JYkfsWhwekQBAXkQYCOAB9E7gAZFtto2HaRnt46HEpOADRDkPyEENyCfRMWX6j97/gvBsGd1Ph/85evDSni23AUVg7q9w6k9vdVMemyxgdX2s3t8XgAVsYBeB7bgo6TpqduzN8vBl6ZsnG2MUrMIWVVYqogTRpGqWEIamFAihWXKRDsCoTTzPk4Hgu6YvsT2lTWnoWczYYDvAkdmjCBM7hm13hYLob2a+59nrjRt2tr9j6XrOg/3ub3H8mPhdBsVQxD9Zus7EuxKIQCx3MUSWohcgUkwoqYAhtIZaE0ocNi2D0h9UHsdLP3D/oaPvRQOdh2bz5r569orYe9vJNY390MGhTdhfx8KN5jh6OfUx9l+jN0E0BjITXifw19/sYIHiWxiSufbdFQ5HDY7JLwFhIhO9CMk3m5D3SvKe/NLbAyLEK0ICwizaLbI3+A1tMCczTorCebpwcAEaNVEB8lOBfGPs2t4fxIu0Vwse2Y18TjsBj4GVGB98Mp5zL7QPS9dx0OojEGllINJVMXiizpJhE+7N9qj5sDGRImIDNwXMD5+zPA04vKoLoUHGTaM/zRBkl4UzjbBSKV84emuAL0Zv+NIEK5RYpfddjZiDY9Elx+I9SwQeP8Osyc+wrllhmNyInRPpE/CUUBC53hpQEHHWcuBBpe5EeVpw9NYELZSBFnX4ExevLjiSi4ND0bhDoSiQ36OoK3OhCOQa+6czDpIx5C4UgXTmVRPhynIUSh9ym6F9QMHdT7Q3E0qzXLmazIXOEZjQGuS6ONTH6kKHORcRR0bpaQ3MehHU8+UilRW6axI0rOTHLA6gYijH8fhY6UjAI5ttp+iI9KSEa7ovDj9MDw2Hcu/vW+y5VJCiUmrw8jvpxsWhgybJMNuohq1la/pABxZwCtKljgpFw9kMwfmc3rl3KJV8WVLml8gD7bh9L71Y44WapLGkqK4iryZgcAcOVMmB+Cog3xGmVICRURtdoIiA7R0oUoOaUNQOkECgqjiQoAY9oWdnjHWCEwKVw4ETzduOLnAECJQRB45UyZE2FYNAZWgAvUZj0SYHBMpAAwcaNA4tckLRh1CibcUgyYx8U3eCTZ1DbdSfu4PZcbkBBqtI1mbuLk61XrMkOWhqTiuSJP4FAR2SFYGq6cCBKjnAjsm7QIkhndsuJbqfzh18sJYtRxfSMvqQzu2U6egEJ4Z0bqdtRyc4MqRzW+JIm6ALLPgYQK/RWLTIAWNI6XfTOLTICUuAE9exQwTIGOwbwJie2/wOEZZAJWZYeFF2Pq5VqjLb+EoLSyBvXPVEaiKCe9V1oIWS0EJJ00JJ0YK+rnSWvVXK6erlTGmrP+tuKt0rQoAiV7YQw7rqBTrivLiaRReWQFp58DIa8DK6tGFEf7YkKoMZ5Mas72vArRb3Gqrb+zzuhlTPkyvbUsISyDpfGm1qI8fVbCxhCaSnB8+jOc+jGztLyAJJyh76HhY3an0PUBT5qjcmEmDClaU0DntvXzQxKt1cQpwsV5PnUBSBYLaVqhfHPvvcm+gnO/OfrI46aG5vvdC/SO/gX2RbidMSFEVSc6clwHjXgrRnEW9HmCmqybUtZVEEQtmr9zoboiSndktEWKxWx2IY0GojWLtB7zj8V0nQ2zX0jTLoA0MM/NPGjKe3RoxZH/byrYxfKTPWBLt2WS6k6aFXSjaO3hrZALQPofiFk80ouun91GyqQLQ/+GedIjW/d/ae5ddZZ02ty1mLyTWotqpYIBAqntM+J1AvoctyvTWiy1qcpnWRocCR0/wyxxFmMhVrFLgEJhTQsXFXi5HQbG4Rsgo6feW5y9FbI9xtccLYRXK3Ps7x+n6Xry+HuWhX4/vpjEM4m03UqX2Y3dYpFqilZsz1NI7tw/F8l8quq8vQtVhzqLUSfuEEvJ6snehcvvOs625xXZYBYKxH5Syuz90dcojjuhlVWWyPhjuutOsWp3tVY6UdiC4zHrz52vnJrepKzT9s2IEHLU4MDJel39+Hw9ytFEUF6INSa6L7adniPEhLJFMdQzbRlZPshDsPK+UcR2+NcK7duoEGgWEqA+dCD14tco6xYfAHOMfRWyOcE8j3X5i3lqZ0F122SmJTfoetM8V30JXDIfuk4TRuHvS+nARaTPATNCe3nzTNvCSSNV1+FyCroAtYnrscvTXC3XZX3NPYVpYvibv1cY7XBSzPOY7eGuGcQBXhwlzADifs2vH+2i+/gxaXMNDEyidDNu96qeFMbjL0PnsncqLdkEmph2TXlr0TOUGv24qt1op8Wzy8mowezxHxdNjcme09UFSf/I2LXZ8g/ymmyTi+4dZzF/RCSKo0R2Mmv+5I9LheSpRDsyWhtURRzsaOHNzJlco97zn6sEt7ttwGSIxeR/cTZfiGR4LfIhFL7DCtQlUCubPpYLyGM70TOMMDq+9UR43D30cr5zYIQgSnnj97yYIf7PeCj+E6Agzp4971DoTJgKGdGmLkLBCv/3Jq6ALk2dj9iTKdnxDsJ6r3MvipOfAMkENl42+DGYr+MAGGpy+o5frCYRRT6Ks6xcDheadwjwThA7AXZVCLw8Q8ucLGFFsugBsxFQ67x5sf4IaV7UvLS3/d3IAczsugEwgyJtRzKkEvC3uxK1lvGHV1QJ0TdVjc1cbUJFUrjX2+N6hwYU+wsN9Tt0WprVrYwTEdYWAHnYaZNwVafGyDuClQTP1MX7UrBY74YYCdIqPn9Xfeo+OGvdAVhE2bAo6i/IB6KKFKTt
rN2HsTl3Y5/gBJX6YUnxPdGPQc9fIBejq7Pq+bzdLhIDSNM13VjjpHpXlAPUQ9B5Vq6JKmJie7mWWdwAIJSM8Nk2BIBfFqfdmqiQXECMjFruX02YGwWVbEW7eeYgVN5a6PwlZIkqpytoBB2mQ2oAfhAIaUt44wPi8xXb5mwW7lTGh1yVPIkURLZOJYNSebSG8tU37khEX4wUz5GVaJrS1iwWuqH4f3Ufmfonz/sv0cTFZPc/TbYvLy2w1zE089nKVBB5W8XuDwK6fbUtDrf2/9+MLN/tzRW3IDMNe75GLSC1EZ2sNfI2M8ghMyJpi8evz6ODI+0d/wavwk8m1YH2Aa5FsKNx7lpnyemwwSZDkUNdInRhN/gFUPJaKrmirRtYvxT05BgOK6W5NBn3zGrjrtoA+xM589LQS8wKosdta03KEm1dlLtPM8PJ7h5Z2xfVg8/Q3/8+7dsFLnVHZ/xCLcDREsUb/UoCzFpvQQqDDql3UJGXO4Wc5JOM5wQj4Swq7nIKJIf6R06OUCoGsSgIm+y/lDh3ihLSxYGWJB0ygfN40E0VcbL6dzAucrhRPefTljFi8VacOi2+YfBRrqzVk2JtAsu1Yp0HQeUniIfOgMHWC3I2cI3u0vbGaBu8b7h5EI4uA2jeB48vBMp+/yOE8XyxJdl4pbPkBJLapkCCSDMTFOrYsgrPQxxWAeu7dLhO1/9UAnm6Z5SlKttlUyK5kbARGhcOkIWLIqyfJxoygXVzg2isC5GJJDV8LjurIgUL/8+HdP1R2QFQmqR4E2441A2gKalQqqFOgI5V8Yrm1fMVctWQJ6SroLKfLmpJudImLFH4Kog5OuUOTSfH21SazeW19GIWG6pOWS6qZRPIOyWXQ/HtFo2kl0N9vpBtuzl2zWz968ENCoozuhv2nX1o150GMqQBrmZqkANF1qmwwVRD0nFHwC8S3GlAH9Rdi0crZbi8vdrWHLClgEBf20GmcK+lOAHHeGj8n6HNl4G6BNr8VdNS0JKqnyZk7wody64LOCqPrJES1jvmJmWJZkpX+0LDUsubngmk0M1qSZOpy/B3+z6TPORkYD5OfBQNCyAmAefSaoALgs/zdkz5Yo6C/UYbpMS0Gdc/j14vLgZpFmVe/qQPoZbx06jteKNFBA22Zd/XjornJp734785aZX61mGpJeLEHoDGxJUKfVhe7HozVV5ZJk5M1vSMhGhpCumu4tzoqs5WRY11qO2phbu9Xhf43g+DGqJvYUXABUCRiJus65YKDBijMb6qZc7f5DrRqyBMwE6vzUq7ajKuYGZAPUpaCWra5Avfnvnzt8765//etx9waX6+8T02VO3soNdnPzI3OTIRmD39r8SCNGJVkGCjOpETM3w51/0Yle6LmqRSfkbeBTAUxup3vDPPoOonf8Hw==</diagram></mxfile>
2203.12719/main_diagram/main_diagram.pdf ADDED
Binary file (47.4 kB). View file
 
2203.12719/paper_text/intro_method.md ADDED
@@ -0,0 +1,114 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Self-supervised learning (SSL) has attracted significant attention over the last years. Recently, several studies have shifted towards adapting SSL to transformer architectures. Originating in natural language processing, where self-supervised transformers [15, 63] have revolutionized the field, these architectures were introduced to computer vision with the *vision transformer* (ViT) [17] as an alternative to convolutional neural networks [26, 35, 59]. ViT formulates an image as a sequence of tokens obtained directly from raw patches and then follows a pure transformer architecture. Despite the absence of image-specific inductive bias, ViT shows strong image representation learning capacity.
4
+
5
+ Considering that transformers are data-hungry, many studies advocate pre-training them on unsupervised pretext tasks, determined only by raw data. A prominent paradigm is to mask a portion of the input tokens—words in text or patches in images—and train the transformer to predict these missing tokens [2, 15, 24, 72, 78]. This paradigm, called *masked language modeling* (MLM) in the language domain [15], is remarkably successful and extends to the vision domain as *masked image modeling* (MIM) [2, 72, 78].
10
+
11
+ MIM-based self-supervised methods have already shown impressive results on images. However, an important aspect that has not been well explored so far is how to choose which image tokens to mask. Typically, the selection is random, as has been the norm for text data. In this work, we argue that random token masking for image data is not as effective.
12
+
13
+ In text, random word masking is likely to hide high-level concepts that describe entire semantic entities such as objects (nouns) and actions (verbs). By contrast, an image has many more tokens than a sentence, and these tokens are highly redundant, so random masking is less likely to hide "interesting" parts; or when it does, the remaining parts still easily reveal the identity of the visual concepts. As shown in Figure 1(b-d), unless masking is very aggressive, random masking is thus less likely to form challenging token reconstruction examples that would allow the transformer to develop strong comprehension skills.
14
+
15
+ The question we ask is this: *Can we develop a masking strategy that addresses this limitation and makes informed decisions on which tokens to mask?*
16
+
17
+ To this end, we propose to exploit the intrinsic properties of ViT, and in particular its self-attention mechanism. Given an input sequence of image patches, we forward it through the transformer encoder, thereby obtaining an attention map at its output. We then mask the most attended tokens. As shown in Figure 1(f-g), the motivation is that highly-attended tokens form more coherent image regions that correspond to more discriminative cues compared with random tokens, thus leading to a more challenging MIM task.
18
+
19
+ This strategy, which we call *attention-guided masking* (AttMask), is an excellent fit to popular distillation-based self-supervised objectives, because it is the teacher encoder that sees the entire image and extracts the attention map, and the student encoder that sees the masked image and solves the reconstruction task. AttMask thus incurs zero additional cost.
20
+
21
+ We make the following contributions:
22
+
23
+ - 1. We introduce a novel masking strategy for self-supervised learning, called AttMask, that exploits the intrinsic properties of ViT by leveraging its self-attention maps to guide token masking (subsection 3.2).
24
+ - 2. We show how to efficiently incorporate the above masking strategy into teacher-student frameworks that use a MIM reconstruction objective and demonstrate significant performance improvements over random masking.
25
+ - 3. Through extensive experimental evaluation, we confirm that AttMask offers several benefits: it accelerates the learning process; it improves performance in a data-limited regime (subsection 4.2) and on a variety of downstream tasks (subsection 4.3); and it increases robustness against background changes, revealing that it reduces background dependency.
26
+
27
+ ![](_page_2_Figure_2.jpeg)
28
+
29
+ Fig. 1. Different from random masking strategies (b-d), our *attention-guided masking* (AttMask) uses the attention map arising in the encoder (e) to mask the most highly attended patches (f, the default) or the least attended patches (g). (b) is used by SimMIM [72], (c) by MAE [24], (d) by BEiT [2] and (g) by MST [38].
30
+
31
+ # Method
32
+
33
+ A simplified overview of the method is shown in Figure 2. We first discuss in subsection 3.1 preliminaries and background on vision transformers and self-supervision with distillation-based masked image modeling. In subsection 3.2, we then detail our attention-guided token masking strategy, called AttMask, and how we incorporate it into masked image modeling.
34
+
35
+ **Vision Transformer [17].** We are given an input image $X \in \mathbb{R}^{h \times w \times c}$ , where $h \times w$ is the spatial resolution and c is the number of channels. The first step is to tokenize it, *i.e.*, convert it to a sequence of token embeddings. The image is divided into $n = hw/p^2$ non-overlapping patches $P_i \in \mathbb{R}^{p \times p \times c}$ for $i = 1, \ldots, n$ , where $p \times p$ is the patch resolution. Each patch is flattened into a vector in $\mathbb{R}^{p^2c}$ and projected to an embedding vector $\mathbf{z}_i \in \mathbb{R}^d$ using a linear layer, where d is the embedding dimension. A learnable embedding $\mathbf{z}^{\text{[CLS]}} \in \mathbb{R}^d$ of a "classification" token [CLS] is then prepended to form the tokenized image
36
+
37
+ $$Z = (\mathbf{z}^{\text{[CLS]}}; \mathbf{z}_1; \dots; \mathbf{z}_n) \in \mathbb{R}^{(n+1) \times d}, \tag{1}$$
38
+
39
+ ![](_page_4_Figure_2.jpeg)
40
+
41
+ Fig. 2. Simplified overview of AttMask as incorporated in the masked image modelling (MIM) objective of iBOT [78]. A tokenized image Z (1) is given as input to a teacher encoder $f_{\theta'}$ , generating target features $f_{\theta'}(Z)$ and an attention map $\overline{\mathbf{a}}^{\text{[CLS]}}$ (7). We then generate a mask $\mathbf{m}^H$ (9) on the most attended tokens and accordingly a masked version $\widetilde{Z}$ (10) of the image, which is given as input to a student encoder $f_{\theta}$ to generate the predicted features $f_{\theta}(\widetilde{Z})$ . Using $\mathbf{m}^H$ , loss $L_{\text{MIM}}$ (3) is a dense distillation loss between predicted and target features of the masked tokens. Additionally, a global loss $L_{\text{G}}$ (4) between [CLS] tokens is applied (not shown here).
42
+
43
+ where ";" denotes row-wise stacking. The role of this special token is to represent the image at the output. A sequence of position embeddings is added to Z to retain positional information. The resulting sequence is the input to the $transformer\ encoder$ . Each layer of the encoder consists of a multi-head self-attention (MSA) block followed by a multi-layer perceptron (MLP) block. Through all of its layers, the encoder uses a sequence of fixed length n+1 of token embeddings of fixed dimension d, represented by a $(n+1)\times d$ matrix. The embedding of the [CLS] token at the output layer serves as the image representation.
44
+
45
+ An MSA block consists of a number H of heads, each computing a *scaled dot-product self-attention* [63], *i.e.*, the relevance of each image patch to others, encoded as an $(n+1) \times (n+1)$ attention matrix. As discussed in subsection 3.2, we average attention matrices over all the heads of the last encoder layer and we use the row corresponding to the [CLS] token to generate token masks.
46
+
47
+ **Distillation-based Masked Image Modeling.** *Self-distillation*, using a moving average of the student as teacher [60], is studied for self-supervision in BYOL [23] and extended to vision transformers in DINO [8], which applies the distillation loss globally on the [CLS] token. iBOT [78] turns this task into *masked image modeling* (MIM) by applying the loss densely on masked tokens.
48
+
49
+ Given an input image X tokenized as $Z = (\mathbf{z}^{\text{[CLS]}}; \mathbf{z}_1; \dots; \mathbf{z}_n)$ , a mask vector $\mathbf{m} = (m_1, \dots, m_n) \in \{0, 1\}^n$ is generated, giving rise to a masked tokenized image $\widetilde{Z} = (\mathbf{z}^{\text{[CLS]}}; \tilde{\mathbf{z}}_1; \dots; \tilde{\mathbf{z}}_n)$ , with
53
+
54
+ $$\tilde{\mathbf{z}}_i = (1 - m_i) \cdot \mathbf{z}_i + m_i \cdot \mathbf{z}^{[\text{MASK}]} \tag{2}$$
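+ As a sketch, the masking in (2) amounts to a broadcasted interpolation between the patch embeddings and a learnable [MASK] embedding; the names and shapes below are our assumptions:
+
+ ```python
+ import torch
+
+ def mask_tokens(z, m, z_mask):
+     """Apply eq. (2): z is (n, d) patch embeddings, m is a (n,) binary
+     mask vector, z_mask is the learnable (d,) [MASK] embedding."""
+     m = m.unsqueeze(-1).float()      # (n, 1), broadcasts over the d dims
+     return (1 - m) * z + m * z_mask  # masked positions receive z_mask
+ ```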
55
+
56
+ for $i=1,\ldots,n$ , where $\mathbf{z}^{[\text{MASK}]}\in\mathbb{R}^d$ is a learnable embedding of a "mask" token [MASK]. Following the strategy of BEiT [2], the mask vector is generated with random *block-wise* token sampling, that is, defined in terms of random rectangles in the 2D layout of the n tokens as a $(h/p)\times(w/p)$ matrix.
57
+
58
+ Following DINO [8], the transformer encoder is followed by a head that includes an MLP and scaled softmax, such that output token embeddings can be interpreted as probabilities. We denote by $f_{\theta}$ the mapping that includes the addition of the position embeddings, the encoder and the head, while $\theta$ is the set of learnable parameters. Given a tokenized image Z, masked or not, we denote by $f_{\theta}(Z) \in \mathbb{R}^{(n+1)\times d}$ the output token sequence and by $f_{\theta}(Z)_i$ , $f_{\theta}(Z)^{\text{[CLS]}} \in \mathbb{R}^d$ the embedding of the i-th and [CLS] token respectively. The teacher parameters $\theta'$ are obtained from the student parameters $\theta$ by exponential moving average (EMA) according to $\theta' \leftarrow \alpha \theta' + (1-\alpha)\theta$ .
59
+
60
+ For each input image, two standard resolution augmented *global views* are generated, with tokenized images $Z^a, Z^b$ and mask vectors $\mathbf{m}^a, \mathbf{m}^b$ . For each view v in $V = \{a, b\}$ and for each masked token, the MIM objective is to minimize the reconstruction loss between the student $f_\theta$ output for the masked input $\widetilde{Z}^v$ and the teacher $f_{\theta'}$ output for the non-masked input $Z^v$ :
61
+
62
+ $$L_{\text{MIM}} = -\sum_{v \in V} \sum_{i=1}^{n} m_i^v f_{\theta'}(Z^v)_i \log(f_{\theta}(\widetilde{Z}^v)_i). \tag{3}$$
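+ A minimal sketch of this dense distillation loss for one view, assuming the head outputs of teacher and student are already normalized to probabilities (iBOT additionally centers and sharpens the teacher outputs, which we omit here):
+
+ ```python
+ import torch
+
+ def mim_loss(student_probs, teacher_probs, mask):
+     """Sketch of L_MIM (3): student_probs and teacher_probs are (n, K)
+     token distributions, mask is the (n,) binary mask vector m."""
+     ce = -(teacher_probs * torch.log(student_probs + 1e-8)).sum(dim=-1)
+     return (mask.float() * ce).sum()  # only masked tokens contribute
+ ```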
63
+
64
+ Following DINO [8], a similar loss is applied globally on the [CLS] tokens between the student output for one masked view $\widetilde{Z}^v$ and the teacher output for the other non-masked view $Z^u$ :
65
+
66
+ $$L_{G} = -\sum_{(u,v)\in V^{2}} \mathbb{1}_{u\neq v} f_{\theta'}(Z^{u})^{[\text{CLS}]} \log(f_{\theta}(\widetilde{Z}^{v})^{[\text{CLS}]}). \tag{4}$$
67
+
68
+ Finally, as detailed in the Appendix section B, a *multi-crop* strategy is applied, giving rise to a loss $L_{\rm LC}$ (A11) between local crops and global views. The overall loss of iBOT [78] is a weighted sum of $L_{\rm MIM}$ (3) and $L_{\rm G}$ (4) + $L_{\rm LC}$ (A11). DINO itself uses the sum $L_{\rm G}$ (4) + $L_{\rm LC}$ (A11) without masking.
69
+
70
+ Prior MIM-based self-supervised methods use random or block-wise random token masking. In this section we describe our attention-guided token masking strategy, which hides tokens that correspond to the salient regions of an image and thus define a more challenging MIM objective.
71
+
72
+ **Attention Map Generation.** Given an input sequence $Y \in \mathbb{R}^{(n+1)\times d}$ , a multi-head self-attention (MSA) layer uses three linear layers to map Y to the query $Q_j$ , key $K_j$ and value $V_j$ sequences for $j=1,\ldots,H$ , where H is the number of heads, $Q_j,K_j,V_j \in$
73
+
74
+ $\mathbb{R}^{(n+1)\times d'}$ and d'=d/H. Then, it forms the $(n+1)\times (n+1)$ attention matrix, where softmax is row-wise:
75
+
76
+ $$A_j = \operatorname{softmax}\left(Q_j K_j^{\top} / \sqrt{d'}\right). \tag{5}$$
77
+
78
+ To generate token masks from any layer of the transformer encoder, we average the attention matrices over all heads:
79
+
80
+ $$\overline{A} = \frac{1}{H} \sum_{j=1}^{H} A_j. \tag{6}$$
81
+
82
+ Now, each row of an attention matrix is a vector in $\mathbb{R}^{n+1}$ , that corresponds to one token and, excluding the diagonal elements, determines an *attention vector* in $\mathbb{R}^n$ over all other tokens. We focus on the attention vector of the [CLS] token, which comprises all but the first elements of the first row of $\overline{A}$ :
83
+
84
+ $$\overline{\mathbf{a}}^{[\text{CLS}]} = (\overline{a}_{1,2}, \overline{a}_{1,3}, \dots, \overline{a}_{1,n+1}), \tag{7}$$
85
+
86
+ where $\overline{a}_{i,j}$ is the element i,j of $\overline{A}$. This vector can be reshaped to a $(h/p) \times (w/p)$ attention map, to be visualized as a 2D image, indicating the regions of the input image that the [CLS] token is attending to.
87
+
88
+ **Mask Generation: Highly-attended Tokens.** There is a permutation $\sigma_{\downarrow}:\{1,\ldots,n\} \to \{1,\ldots,n\}$ that brings the elements of $\overline{\mathbf{a}}^{\text{[CLS]}}$ in descending order, such that $\overline{a}_{\sigma_{\downarrow}(i)}^{\text{[CLS]}} \geq \overline{a}_{\sigma_{\downarrow}(j)}^{\text{[CLS]}}$ for i < j, where $\overline{a}_i^{\text{[CLS]}}$ is the i-th element of $\overline{\mathbf{a}}^{\text{[CLS]}}$ . Choosing a number $k = \lfloor rn \rfloor$ that is proportional to the total number n of tokens with *mask ratio* $r \in [0,1]$ , we define
89
+
90
+ $$M^{H} := \{ \sigma_{\downarrow}(1), \dots, \sigma_{\downarrow}(k) \}$$
91
+ (8)
92
+
93
+ as the set of indices of the top-k most attended tokens. We thus define the *high-attention mask vector* $\mathbf{m}^H$ with elements
94
+
95
+ $$m_i^H := \mathbb{1}_{M^H}(i) = \begin{cases} 1 & \text{if } i \in M^H \\ 0 & \text{otherwise} \end{cases}$$
96
+ (9)
97
+
98
+ for $i=1,\ldots,n$ . This masking strategy, which we call AttMask-High, essentially hides the patches that correspond to the most discriminative or salient regions of an image. By AttMask we shall refer to this strategy as default.
99
+
100
+ **Low-attended Tokens.** We also examine the opposite approach to AttMask-High, which masks the least attended tokens. In particular, we define the set of indices of the bottom-k least attended tokens $M^L = \{\sigma_{\uparrow}(1), \ldots, \sigma_{\uparrow}(k)\}$ and the low-attention mask vector $\mathbf{m}^L$ with $m_i^L \coloneqq \mathbb{1}_{M^L}(i)$ based on the permutation $\sigma_{\uparrow}$ that brings the elements of $\overline{\mathbf{a}}^{\text{[CLS]}}$ in ascending order, that is, $\overline{a}_{\sigma_{\uparrow}(i)}^{\text{[CLS]}} \le \overline{a}_{\sigma_{\uparrow}(j)}^{\text{[CLS]}}$ for i < j. This strategy, which we call AttMask-Low and which is similar to the masking strategy of MST [38], hides patches of the image background. Our experiments show that AttMask-Low does not work well with the considered MIM-based loss.
101
+
102
+ ![](_page_7_Figure_2.jpeg)
103
+
104
+ **Fig. 3.** Given image (a), the mean attention map (b) is averaged over heads (6),(7). The AttMask-High strategy (c) masks the most attended patches, while AttMask-Hint (d) reveals few of them to leave hints about the identity of the masked object.
105
+
106
+ **Highly-attended with Hints.** Finally, because AttMask-High may be overly aggressive in hiding the foreground object of an image, especially when the mask ratio r is high, we also examine an alternative strategy that we call AttMask-Hint: While still masking highly attended tokens, we allow a small number of the most highly attended ones to be revealed, so as to leave hints about the identity of the masked object. In particular, we remove from the initial set $M^H$ a small number $m = \lfloor sn \rfloor$ of tokens with *show ratio* s < r. These m tokens are randomly selected from the $\lfloor s_{\max} n \rfloor$ most attended tokens in $M^H$ , where $s_{\max} > s$ . An example comparing AttMask-Hint with AttMask-High is illustrated in Figure 3.
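+ AttMask-Hint can be sketched as a small modification of the above: mask the top-k tokens, then reveal $\lfloor sn \rfloor$ tokens sampled from the $\lfloor s_{\max} n \rfloor$ most attended ones (again, names are ours):
+
+ ```python
+ import torch
+
+ def attmask_hint(a_cls, r, s, s_max):
+     """a_cls: (n,) [CLS] attention vector; mask ratio r, show ratio s,
+     with s < s_max <= r so that the hints come from inside M^H."""
+     n = a_cls.shape[0]
+     k, m, pool = int(r * n), int(s * n), int(s_max * n)
+     order = a_cls.argsort(descending=True)        # the permutation sigma
+     mask = torch.zeros(n, dtype=torch.bool)
+     mask[order[:k]] = True                        # AttMask-High mask
+     hints = order[:pool][torch.randperm(pool)[:m]]
+     mask[hints] = False                           # reveal the hint tokens
+     return mask
+ ```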
107
+
108
+ **Incorporating AttMask into Self-supervised Methods.** Because the embedding of the [CLS] token at the output layer of the transformer encoder serves as the image representation, we generate token masks based on the attention vector precisely of the [CLS] token of the output layer. In particular, given a global view tokenized as $Z^v = (\mathbf{z}^{\text{[CLS]}}; \mathbf{z}_1; \dots; \mathbf{z}_n)$, we obtain the attention vector $\overline{\mathbf{a}}^{\text{[CLS]}}$ (7) and the corresponding high-attention mask vector $\mathbf{m}^H$ (9) at the output layer of the teacher. Then, similarly to (2), we give as input to the student the masked version $\widetilde{Z}^v = (\mathbf{z}^{\text{[CLS]}}; \widetilde{\mathbf{z}}_1; \dots; \widetilde{\mathbf{z}}_n)$ with
109
+
110
+ $$\tilde{\mathbf{z}}_i = (1 - m_i^H) \cdot \mathbf{z}_i + m_i^H \cdot \mathbf{z}^{[\text{MASK}]}. \tag{10}$$
111
+
112
+ We argue that masking highly attended regions using $\mathbf{m}^H$ helps in learning powerful representations. In section 4, we also experiment with low-attended regions using $\mathbf{m}^L$ , supporting further our argument.
113
+
114
+ AttMask can be incorporated into different methods to either replace the block-wise strategy of BEiT [2] or introduce masking. For iBOT [78], we use $\widetilde{Z}^v$ in $L_{\text{MIM}}$ (3) and $L_{\text{G}}$ (4). For DINO [8], we introduce masking by using $\widetilde{Z}^v$ for global views in $L_{\text{G}}$ (4), but not for local crops in the $L_{\text{LC}}$ (A11) loss (see Appendix section B).
2205.12374/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-10-05T13:29:21.613Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36" version="15.4.1" etag="1JMGFlkzkDACiiICMGvu" type="google"><diagram id="DlGgEzJTm52W6ng8grzV">7ZxRc5s4EMc/jWfuHppBCDA8NonT3Fyvk2k6c3ePCsjAFSNXyLF9n/4kI2zwqol7wWJo0gcXFiHgp11J/5UmE3y12HzgZJn9wRJaTFwn2Uzw9cR1kecG8j9l2daWaRTWhpTnSW1yDob7/F+q72ysqzyhlbbVJsFYIfJl1xizsqSx6NgI52zdLTZnRdIxLElKO6+hDPcxKSgo9meeiKy2hn6r9C3N06x5MnL0lQVpCusqqowkbN16Fp5N8BVnTNRHi80VLRS8Lpeb71zdvxinpTjlBre+4ZEUK/1tXzKqX01sm+/lbFUmVN3iTPDlOssFvV+SWF1dyxaWtkwsCnmG1CGJsxWnH5T92pOGJctLQfnsUb5TpevQj6Vc0M13Xx3tgUhPomxBBd/KIvqGyAsv/Pom7UZe0wDrQ6M0pqzVHljbiHaDdF/3gZQ80LDM4DAAR8p8QYpxsENoUHieAd5IwDnRkOB8AC6vxgJuUI8LALiEpeMgFw3qcVNDqCbjAIfQoOTC8caqO2isRjBWaRXTHaeR8DvJ8cIz8WtmyS2Ac84WI2GHp0P6HkKA3ZoV85Gw84ZlB8WEGIuYkJp0UHRQTkzcoJBPuJwz+QVSo9aEpPXbSonDy1taPFKRx+Rgkkep+n9WxlJy86YG+fC6kvrqEw2Cnm+QeV4UV6xgfHcvjhP6ED5IeyU4+0pbV3CAI5zoD9AS3vWac/10pM9vyCIvFHe22aa0bEqxFd+9RiaEVPauj9/LH8lU/agC1UXKWFpQssyri1h1cfJCXO2K3szrOuXhodYePAXKThz5wFGmTtSUavuKF/TgK1A92dHsR01PUeLTqanpo2CKSXA23I7FuISCy5rMP8I9D2MaxybcD6Gv+qpzJQas8oY6zUpm4Ih1Qmg4N7IO4pA+zM+WS7DKGkq70QgUmEywSg5KuxFnE6ySg9rOTjphoM50UNgu1IFWAnwY1jBnYZU11I02kxbDjF8wzWFCfq40hwvlpqU0xzC0YWLEqoNDhTrqxIhVdlCxjTkxYhWdDzDRJKVN/oBxkbGUlaSYHayXXZCHMh8ZW2p8/1AhtjorQVaCdeFKNnz7l7pffro+/bt97XqjK6/Pth3e6gU7tKsmj6E+SCscQXhKG1DBya3CaUFE/tit/0WAoeK6pjp9ZDdP1Esf6XhHzrpPubScFTWdWe/eCiUVoFhlZKkO4xUvtpecxF+VGzyHs8t+XuTL2/0VIT2ClfL0XdRbEt4PuhTdPdcWR9cQ9KiJ2BdxDIeI+lOiN+o/UvWtd6r/bmsHFx+7MkYXqjv2p/q3W2fdo+hqjljv3+s0/FCljbhPcAK3i9F3DL58tj4BQxU2zj4BIeCQAexbTX2CF/WAEQFq454IYNcwEfCGmwg07/NTBH2Eho1503LeCGM+CgYNeSibEllLJfL4CZc8QTv1QQYHR2RQs4e5vQroGfyrj1g1LEsNjgQh51gpmpjISYxhYbQXKqZNfrW/EKHScI5OyZHdRIu9MHnRBzI3DC7c55kh37ia3As0k2T53zsP3ssRlNOU06pSY5Pr3HGa5LE0nmUzAuGxHrpljwsTz/JfGD451Jj2IugakT/5GfYmBBj2Sk5g8qZGvL7Im+DK2MS/+qWUP7/uJkS58ihJY6lcZBeTFf22omVMx7L06OOuNDZtEzrfxAIqs98+3c8+f7EA78e3f7SjCR9H05UMoFxNLp1PdA1C8SyxgO3lLT2o+q5nH+230gmLb+1W8my3kmm/ic1mgst2R6PfobGasU5deFftgMku30HeclOPgEdj4efZ3ccfH/WGW6Ma1g8Me2Fs+gEUv7/PZnc22mlY6sNGH5TJr4G6YQuOTehQVb+NTCdu3rHZTFDivzXTift+bDYTzDnU4/7bAP/8ZiFDO51rs5AH0xxv7XTqNiOb8QQTCK9hSmDaoGSTOkwrvA7qcG/T+ajL08Pf9ahX5Q9/HQXP/gM=</diagram></mxfile>
2205.12374/main_diagram/main_diagram.pdf ADDED
Binary file (24.4 kB). View file
 
2205.12374/paper_text/intro_method.md ADDED
@@ -0,0 +1,91 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Revising and editing are a central part of the human creative workflow, with most original content (e.g. art, books, articles, source code) being developed not in a single iteration, but in many iterations, each more refined than the last. How can we model these *editing processes* from inception to completion? In this paper, we attempt to provide a first answer to this question, specifically focusing on the generation of sequential data such as natural language documents or source code.
4
+
5
+ <figure id="fig:edits" data-latex-placement="t">
6
+
7
+ <figcaption>An example of a natural editing process based on the description of “Dog” on Wikipedia. The legend below denotes the edit operations for each step of this process.</figcaption>
8
+ </figure>
9
+
10
+ Most current work on language generation tasks such as machine translation [@vaswaniAttentionAllYou2017], language modeling [@baevskiAdaptiveInputRepresentations2018], or summarization [@seeGetPointSummarization2017] generates the target sentence or document in a single pass (usually from left to right). There has been a reasonable amount of work that can generate edits to existing sequences for the purposes of post-editing, grammatical error correction [@omelianchuk2020gector], text style transfer [@mallinsonFelixFlexibleText2020; @malmiUnsupervisedTextStyle2020; @reid2021lewis], sentence fusion [@malmiEncodeTagRealize2019], or machine translation [@guLevenshteinTransformer2019]. However, these works all 1) model only a single editing step and 2) do not fully define a model of incrementally editing a document from a blank slate to the final text, and thus do not stand in for the one-pass generative models of sequences described above.
11
+
12
+ In this context, we propose the task of *modeling editing processes*, in which we look to explicitly model the likelihood of the entire process of revising a document to a polished form. In particular, and in contrast to previous works on modeling edits, we hypothesize that in order to edit more accurately, instead of simply learning to predict the next revision given the current revision, we should have context of multiple previous revisions when deciding when and how to edit the document next. Given the novelty of framing generation problems in this way, this paper simultaneously 1) proposes both baseline and novel models for the task, 2) creates evaluation datasets that can be used to compare models, and 3) discusses intrinsic and extrinsic evaluation methodology.
13
+
14
+ The proposed multi-step editing model predicts discrete edit operations [@levenshtein1966binary] to enable progressive refinement as shown in Figure [1](#fig:edits){reference-type="ref" reference="fig:edits"}, rather than framing sequence editing as a sequence to sequence task [@reid2021lewis; @faltings2021text]. In the figure, for each step of the editing process discrete operations (insert, replace, delete, keep) are predicted and then actions (such as generating a replaced span) are performed based on this. This has two benefits: 1) it allows the model to scale well with respect to input sequence length, and 2) allows us to make substantial changes with fewer actions [@grangier2018quickedit]. We use these edit operations to condition a semi-autoregressive model that is able to insert and replace multiple spans at once. Combined with an encoder that is able to quickly specify which spans of text need to be changed and *how*, this allows for considerable changes to be made to the text (including insertion, deletion, re-ordering, and replacement) in a relatively simple and cheap manner. Furthermore, this allows us to disentangle how likely the model is to operate (replace, delete, etc.) on a given span, and how likely the model thinks the generated text for a given span is. As we are modeling editing *processes*, and hypothesize that context from edits applied to the sequence are helpful, we propose a method for edit-aware sequence compression which can compress sequences into their edit operations and use *relative edit positional embeddings* to specify the position of edits relative to each other.
15
+
16
+ Given that the task of modeling natural editing processes in itself is novel, we collect new datasets to study this behavior; [WikiRevisions]{.smallcaps} and [CodeRevisions]{.smallcaps}. These datasets, in the code and natural language domains respectively, cover over 2.5M and 2.3M natural sequential revisions. We also discuss evaluation methodology, describing a metric of *edit perplexity* (ePPL), the perplexity of generating an edit given the current state of a document, as well as applications to downstream tasks.
17
+
18
+ We train and evaluate our proposed models on these datasets and find that the proposed methodology of modeling the entire editing process, referencing previous edits while generating the next one, significantly improves both intrinsic and extrinsic performance over baselines that model edits in isolation. In particular, our method reduces perplexity by up to 22.9% relative over a state-of-the-art editing baseline, and 11.3% relative over a version of our model that does not consider editing history. We also demonstrate the ability of the model to generate qualitatively natural edit sequences, and the utility of the learned representations on downstream tasks of commit message generation [@loyola2017] and edit intention classification [@yang2017identifying].
19
+
20
+ # Method
21
+
22
+ Let $X = \{\boldsymbol{x}_0,\boldsymbol{x}_1,\dots,\boldsymbol{x}_N\}$ be a series of $N$ versions of a document, where the $i$th revised document is denoted by $\boldsymbol{x}_i$. $\boldsymbol{x}_0$ represents an initial state (generally the null string), and $\boldsymbol{x}_N$ represents the current state of the edited document. The probability of this series of document versions occurring can be decomposed as $$\begin{equation}
23
+ p(X) = \prod_{i=1}^{N} p(\boldsymbol{x}_i | \boldsymbol{x}_0^{i-1}),
24
+ \end{equation}$$ where $\boldsymbol{x}_0^{i-1} := \boldsymbol{x}_{0}, \ldots, \boldsymbol{x}_{i-1}$ (similarly below). The right hand side is the likelihood of the transformation of the previous document version $\boldsymbol{x}_{i-1}$ to the current document version $\boldsymbol{x}_{i}$ given the previous revision history $\boldsymbol{x}_{<i}$. We refer to the likelihood of the whole revision process as the *edit likelihood*, and judge learned models based on their ability to achieve high edit likelihood on held-out data.
25
+
26
+ Note that standard generative models (specifically language models; LMs) calculate the probability of only the final version $p(\boldsymbol{x}_N)$, whereas the proposed formulation calculates the probability of the entire sequence of document edits. It nonetheless could theoretically be used to calculate the final version's likelihood by treating the editing process as latent and marginalizing over it[^2] $$\begin{equation}
27
+ p(\boldsymbol{x}_N) = \sum_{\tilde{X}\in\{\tilde{\boldsymbol{x}}_1^N|\tilde{\boldsymbol{x}}_N=\boldsymbol{x}_N\}} p(\tilde{X}).
28
+ \label{eq:marginal}
29
+ \end{equation}$$ Thus, our formulation, in contrast to previous single-step models of edits [@yin_learning_2019; @malmiEncodeTagRealize2019; @reid2021lewis], can also be used to define a generative model over single documents. It is also worth noting that the final document likelihood is lower-bounded by the edit likelihood; i.e. $p(\boldsymbol{x}_N) \ge p(X)$.
30
+
31
+ In this section, we now describe our approach to modeling these sequences of edits through (1) a decomposition of the modeling process into a sequential process of predicting edit operations and then the actual edits, and (2) a neural model of these operations and edits.
32
+
33
+ While the probability $p(\boldsymbol{x}_i | \boldsymbol{x}_0^{i-1})$ of the next document given all previous document versions could theoretically be modeled with a single neural sequence model, this is infeasible computationally (and likely infeasible from a learning perspective as well). To simplify this problem, we employ the $n$-th order Markov assumption, assuming that the probability of the next document is conditioned only on the previous $n$ documents, $p(\boldsymbol{x}_i | \boldsymbol{x}_{i-n}^{i-1})$. This probability could be modeled directly, and in fact in the case of $n=1$ this becomes analogous to the single-step editing problem tackled by previous work [@yin_learning_2019; @malmiEncodeTagRealize2019; @reid2021lewis; @faltings2021text]. To our knowledge, no previous work has modeled natural editing processes with $n>1$.
34
+
35
+ However, in the interest of both efficiency and efficacy, we take an alternative approach where we first predict a set of edit operations $\rve_i$, and then predict the next document version based on the previous documents and these edit operations: $$\begin{align}
36
+ p(\rvx_i | \rvx_{i-n}^{i-1}) & \approx p(\rvx_i, \rve_i | \rvx_{i-n}^{i-1}) \\
37
+ & = p(\rvx_i | \rve_i, \rvx_{i-n}^{i-1}) p(\rve_i | \rvx_{i-n}^{i-1}). \label{eq:edit_prob}
38
+ \end{align}$$ The first approximation becomes an equality when the edit operations can be deterministically derived from $\rvx_i$ and $\rvx_{i-1}$, i.e. $p(\rve_i | \rvx_i, \rvx_{i-1})=1$, as is the case described below.
39
+
40
+ **Edit Operations.** We base the edit operations in $\rve$ on those calculated by the Levenshtein algorithm [@levenshtein1966binary], including token-level insertions, deletions, and substitutions. These are expressed as four operations: insert, delete, keep, and replace, denoted by $\{$`INSERT`, `DELETE`, `KEEP`, `REPLACE`$\}$. For multi-word insertions and replacements, e.g. a replacement of a contiguous span of words, we apply the same `REPLACE` label to all tokens in this span. An example of each operation is shown in Figure [1](#fig:edits){reference-type="ref" reference="fig:edits"}.
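+ As an illustration, the following sketch derives such per-token labels using the `opcodes` helper of `python-Levenshtein` (the library used later for pre-computation); mapping tokens to single characters is our own convenience, and attaching insertions to the preceding source token follows the convention of Figure 1:
+
+ ```python
+ import Levenshtein  # pip install python-Levenshtein
+
+ def edit_operations(src_tokens, tgt_tokens):
+     """Per-token KEEP/DELETE/REPLACE/INSERT labels for src -> tgt."""
+     # Map each distinct token to one character so the string-based
+     # routine operates at the token level (our convenience, not the
+     # paper's actual pre-processing code).
+     vocab = {t: chr(i) for i, t in enumerate(dict.fromkeys(src_tokens + tgt_tokens))}
+     s = "".join(vocab[tok] for tok in src_tokens)
+     t = "".join(vocab[tok] for tok in tgt_tokens)
+     labels = []
+     for op, i1, i2, j1, j2 in Levenshtein.opcodes(s, t):
+         if op == "equal":
+             labels += ["KEEP"] * (i2 - i1)
+         elif op == "delete":
+             labels += ["DELETE"] * (i2 - i1)
+         elif op == "replace":
+             labels += ["REPLACE"] * (i2 - i1)
+         elif labels:  # "insert": attach to the preceding source token
+             labels[-1] = "INSERT"  # insertion at position 0 is ignored here
+     return labels
+ ```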
41
+
42
+ **Decomposed Edit Likelihood.** We can then re-define our previous formulation of edit likelihood: $$\begin{equation}
43
+ P(\rvx_1^N) = \prod_{i=1}^{N} p(\rvx_i | \rve_i, \rvx_{i-n}^{i-1}) p(\rve_i | \rvx_{i-n}^{i-1}),
44
+ \end{equation}$$ and analogously define edit log-likelihood $$\begin{equation}
45
+ \begin{split}
46
+ \gL_{\rvx\rve} &\coloneqq \log P(\rvx_1^N) \\
47
+ & = \sum_{i=1}^{N} \log p(\rvx_i | \rve_i, \rvx_{i-n}^{i-1}) + \log p(\rve_i | \rvx_{i-n}^{i-1}).
48
+ \label{eq:lxe}
49
+ \end{split}
50
+ \end{equation}$$ We can further decompose this into only the components corresponding to the edit operations $\gL_{\rve} \coloneqq \sum_{i=1}^{N} \log p(\rve_i | \rvx_{i-n}^{i-1})$, or the operation-conditioned edits $\gL_{\rvx|\rve} \coloneqq \sum_{i=1}^{N} \log p(\rvx_i | \rve_i, \rvx_{i-n}^{i-1})$, both of which we will utilize for devising evaluation metrics in Section [\[sec:metrics\]](#sec:metrics){reference-type="ref" reference="sec:metrics"} below.
51
+
52
+ <figure id="fig:editor" data-latex-placement="t">
53
+ <embed src="editor.pdf" style="width:70.0%" />
54
+ <figcaption><span class="smallcaps">EditPro</span> given the examples of modeling <span class="math inline"><em>p</em>(<strong>x</strong><sub>3</sub>|<strong>x</strong><sub>2</sub>)</span> from Figure 1. We feed the input tokens into an encoder with an autoregressive tag predictor, and then use the predicted edit operations to condition the generation of <code>REPLACE</code> and <code>INSERT</code> spans.</figcaption>
55
+ </figure>
56
+
57
+ In this section, we propose a model of multi-step editing processes, [EditPro]{.smallcaps}, which is based on a semi-autoregressive edit-conditioned encoder-decoder model with a Transformer [@vaswaniAttentionAllYou2017]. The model (depicted in Figure [2](#fig:editor){reference-type="ref" reference="fig:editor"}) contains three main components: (1) an edit encoder, (2) an operation classifier and (3) an insertion-replacement decoder.
58
+
59
+ **Edit Encoder.** The encoder $f_\text{enc}$ takes in a document version $\boldsymbol{x}_{i-1}$ and feeds it through multiple self-attention and feedforward layers [@vaswaniAttentionAllYou2017] to produce contextual representations for each token. In the case that we perform variable-order edit modeling, we use cross-attention to feed in representations of previous edit steps. For models where $n>1$, we feed in $n-1$ additional edit sequences -- we describe this process after describing our methods for edit sequence prediction.
60
+
61
+ **Edit Operation Prediction.** We use an autoregressive tagger, using a single Transformer layer with a causal attention mask, that models the probability of each edit in edit operation sequence $\rve=e_1^M$ from left to right, $p(e_j|e_1^{j-1})$. Notably, we also performed preliminary experiments with a tagger that predicts operations independently, but found it was heavily biased towards the `KEEP` operation as most words are kept in any single document revision, and thus did not produce coherent multi-word edit sequences when sampling sequences of edits.
62
+
63
+ **Generating Replacements and Insertions.** []{#sec:decoder label="sec:decoder"} When editing, given our four Levenshtein operations (`INSERT`, `REPLACE`, `KEEP`, `DELETE`), two of them --- `INSERT` and `REPLACE` --- entail generation of new content conditioned on the current revision of the document. Given our predicted edit operations $\boldsymbol{e}$, we propose a semi-autoregressive model with a causal Transformer decoder that can decode multiple spans in parallel for efficiency purposes. Each edit span contains the following properties: it has a start index (denoted by $s_\text{start}$), an end index (denoted by $s_\text{end}$), and an operation type (denoted by $s_\text{type}$). Note that these can simply be extracted by looking at contiguous spans of a certain type in an edit (e.g. `REPLACE` for *descended from* $\rightarrow$ *domesticated descendant of* in Figure [1](#fig:edits){reference-type="ref" reference="fig:edits"}). We use a mean pooling operation to aggregate the contextual vectors produced by $f_{\text{enc}}(\boldsymbol{x})$ into a span representation $\hat{x}_s$: $$\begin{align}
64
+ \hat{x}_s & = \frac{1}{s_\text{end}-s_\text{start}}\sum_{t=s_\text{start}}^{s_\text{end}}f_\text{enc}(\boldsymbol{x})_t
65
+ \end{align}$$ We then update the span representation $\hat{x}_s$ by taking the sum of the appropriate operation embedding for the span type and the current span representation and feed it to a multi-layer perceptron with an intermediate non-linearity: $\hat{x}_s \leftarrow \mathrm{MLP}(W_\text{op}(\boldsymbol{e})_s + \hat{x}_s)$, where $W_\text{op}$ denotes an embedding matrix for each operation. $\hat{x}_s$ is then used to initialize the `<s>` token for the decoder span to further condition the generative process.
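+ A sketch of this span-conditioning step (module names and shapes are our assumptions):
+
+ ```python
+ import torch
+
+ def span_representations(enc_out, spans, op_emb, mlp):
+     """enc_out: (L, d) encoder outputs; spans: list of (start, end, op_id);
+     op_emb: an nn.Embedding over the four operations; mlp: the fusion MLP.
+     Each returned vector initializes the <s> token of its decoder span."""
+     reps = []
+     for start, end, op_id in spans:
+         x_hat = enc_out[start:end].mean(dim=0)  # span mean pooling
+         op = op_emb(torch.tensor(op_id))        # operation embedding W_op(e)_s
+         reps.append(mlp(x_hat + op))            # x_hat <- MLP(W_op(e)_s + x_hat)
+     return torch.stack(reps)
+ ```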
66
+
67
+ **Encoding Edit History.**[]{#sec:editcomp label="sec:editcomp"} As we look to investigate variable-order edit modeling over long sequences of text, we need a way to represent edits that is useful for predicting the next editing steps. Previous work [@yin2019learning; @MarreseTaylor2021Variational; @yao2021learning] has focused largely on learning a single-vector representation for edits, which is compressed but limited in expressiveness. On the other hand, a perhaps more intuitive way, taken from common Transformer-based [@vaswaniAttentionAllYou2017] models, would be to use cross-attention over all $n$ previous documents, which is more expressive but prohibitively expensive when $n$ is scaled upwards.
68
+
69
+ Instead, we make a compromise between the above approaches, leveraging the predicted edits $\rve_{i-n}^{i-1}$ and their derived spans (as discussed above) to compress the sequence. Given each of these spans, we compute the edit-compressed sequence, composed of a sequence of vector representations with each vector representing a different span. For each span in each of the previous revisions in $\boldsymbol{x}_{i-n}^{i-1}$, we mean pool the encoder (pre-edit) and the decoder (post-edit) representations for that span. We then sum this representation with the embedding of its edit operation and feed it into an MLP. Once we have done this for each span, we add a learned *relative edit positional embedding*, where we learn an embedding matrix in which each index represents positions $i-1$ to $i-n$. We do this to specify the order of the previous edits. Finally, we compose these into a sequence and treat that as the "edit-compressed" sequence representation for that edit.
70
+
71
+ **Turning Pre-trained Encoder-Decoder Models into Editors.**[]{#sec:plm2edit label="sec:plm2edit"} Despite the fact that our model introduces both an edit prediction and a semi-autoregressive component, it is easy to finetune a pre-trained language model into an editor with our method as it uses vanilla Transformer layers as a backbone. We perform this by batching various spans and their conditioning variables together and training the model to adapt to decode these in parallel.
72
+
73
+ While some datasets of edits exist [@faruqui_wikiatomicedits:_2018; @MarreseTaylor2021Variational], to our knowledge they only consider a single editing step, i.e. dealing with a document $X = \{\boldsymbol{x}_0,\boldsymbol{x}_1\}, N=1$. As we propose learning to model multi-step edits, we develop new datasets in both the code and natural language domains. In addition, previous datasets have only concerned themselves with *atomic* edits [@faruqui_wikiatomicedits:_2018] which only occur at a small scale (usually sentence-level), and we instead look to model larger-scale edits as document level changes, which are more representative of the natural editing process.
74
+
75
+ <figure data-latex-placement="t">
76
+ <img src="wikipedia.data.png" style="width:50.0%" />
77
+ <figcaption>An overview of the <span class="smallcaps">WikiRevisions</span> data generation process for collecting clean multi-step revision data.</figcaption>
78
+ </figure>
79
+
80
+ ::: table*
81
+ | Dataset | Num. Edits | Avg. Len (Max/Min) | % Keep | % Insert | % Replace | % Delete |
+ |---------|------------|--------------------|--------|----------|-----------|----------|
+ | [WikiRevisions]{.smallcaps} | 2.5M | 333 (9992/1) | 82.4% | 0.1% | 8.7% | 8.8% |
+ | [CodeRevisions]{.smallcaps} | 2.3M | 774 (9725/1) | 76.9% | 0.1% | 11.3% | 11.7% |
85
+ :::
86
+
87
+ In order to model the creative process for natural language text, we gather data from Wikipedia, which has extensive logs of the editing process that gave rise to Wikipedia articles, and which has been used in a variety of previous works on single-step editing [@marrese-taylor_edit-centric_2019; @MarreseTaylor2021Variational; @yang_identifying_2017; @faruqui_wikiatomicedits:_2018]. We collect data for each revision using dumps from English Wikipedia. Given that the dumps are provided in the XML format, we extract the text with `beautifulsoup` and remove wikitext (custom Wikipedia markup) with `wikiextractor`. With this sanitized data, we gather the revisions of each document in chronological order, removing any metadata-based edits which were stripped as a result of the sanitization process. Now, with our sets of revisions, we tokenize all text with the sentencepiece model used by @radfordImprovingLanguageUnderstanding [@liuRoBERTaRobustlyOptimized2019] for congruence with pre-trained models (see Section [\[sec:plm2edit\]](#sec:plm2edit){reference-type="ref" reference="sec:plm2edit"}). We pre-compute Levenshtein operations using [`python-Levenshtein`](https://pypi.org/project/python-Levenshtein/) for use during training. In the case that an article exceeds 2000 tokens, we split the article into its subsections and treat each subsection as an article (for the purpose of modeling editing processes). Dataset statistics are shown in Table [\[tab:data\]](#tab:data){reference-type="ref" reference="tab:data"}. We note that there is a significant imbalance for the `INSERT` operation; this is because we define insertions to be applied to the token preceding the insertion (as shown in Figure [1](#fig:edits){reference-type="ref" reference="fig:edits"}), rather than applied to an entire span (as we do for the deletion, replacement, and keep operations).
88
+
89
+ **Edit Summaries.** When extracting each edit we keep the edit summary (akin to a commit message) supplied by the editor at time of editing. We then curate these comments and develop a dataset for usage on downstream tasks---for both edit summary generation [@loyola2017] and edit-summary-conditioned text editing [@faltings2021text].
90
+
91
+ Another place where the incremental creative process is on display is in the creation of program source code. When building [CodeRevisions]{.smallcaps}, we scrape a total of 700 Python GitHub repositories using the MIT License with at least 1000 commits and 500 stars. We extract line-level patches from each repository's commit history when forming our code-based corpus, progressively applying each patch and computing the token-level Levenshtein operations between each revision. Note that we also keep commit messages for each commit. For this dataset we operate on the file level. For each series of revisions, we precompute Levenshtein operations based on tokens derived from a `sentencepiece` [@kudo2018sentencepiece] model with a 10k vocabulary. We also curate a dataset of revisions with commit messages as described in the previous subsection.
2206.01078/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-05-19T17:46:25.376Z" agent="5.0 (X11)" version="18.0.3" etag="UncQthJ7im3TU0pIs34N" type="device"><diagram id="vQU-DiIHO68JjmeST50l">7V1Lc+o6Ev41qVlBWfJ7GRKYO1U5d85UFnPP0oAAzzGIMU5C5tePZPySJYMMknECZBEs2zJWfy11f92SHsyn9f7vcbBd/cBzFD1AY75/MJ8fIAS2a5J/tOQzK/GdrGQZh/OsrCx4Df+HskIjK30L52jHXJhgHCXhli2c4c0GzRKmLIhj/MFetsAR+9RtsERcwessiPjSf4fzZHUo9WyjLP8DhctV/mRgZGfWQX5xVrBbBXP8USkyxw/mU4xxcvi23j+hiLZe3i6H+yYNZ4sfFqNNInMDPNzwHkRv2bs9QCcit44WmNRAfmDymb218983nJ8Y7FKZPJILgLfdlyfJtyX9/8+8mt3bNC9LBiAvJT+ocqJSenhqXgyZHwBj/LaZI/rLDXL6YxUm6HUbzOjZD4I0UrZK1hE5AvRnhlH0hCMcp/eao/HYefSLWt9RnKB9Y6OBQhQExAivURJ/kkvyG1w/a7cMwKbpDr2smo8SEFYm5FUFC3lZkEFwWdReSol8yQQlFpp5sdCgJRLaS7hBQdxeGqC1NBCY28hVIw1oWDYjjQG0eFl4AllAQ4EwLF0aNN2h+D1IQrwh14zXUzSfh5ulFukso2C3y9Sq6JHSg98oma2yg5oMFwvkzGaKNMp2/aHvM2KEYqUyoTE0gB7FsgWyrLUr2swf6QhCjma01cIZ25S7JMa/i1EBFs2D5tyQcrJxKm9tC+Cbl8UoIjB5Z6sXNUP2hJ84TIHJtL3nFx+P79uM8qxvsw/Y4bd4hrI6qyNNu8dA9/hjkiBeooR7TCrRosmkhOzcpJBd2x9a+oV8/DHdCdm9SSE7ZieafPwx3QnZu0khe8Aa2noEy1fdnTD92xSm6x3tlr1zJXm0Xuh6usSYu+jH5EhMzC39ut4vKUEwnG7Xm2FWWJUnNSlD4nW/BFMU/cS7MDWGzecpThK8JhdE9MQomP1epuYuY5zST6WOxyhc0nsTTG3fIDuaERggcstoi+KQvCX9/kxeKNzu0M+yaITfkog4R08Fn2CUhfSOd1LPPzapv/u5nmL6AgRjQRSh6Mcb5SXoq2f4rPxKI/3UTOsNpnWq8VMtzqoGfiagCsCh0DdqxrKsOQ0Ar8JEFV+zQxwnK7zEmyAal6Uj1skvr3nBVHIpLv6DkuQzY4WCtwSzqEH7MPmr8v0XrYp0a4ej531Wc3rwmR9syKv9VT2o3EUPy9vSo/w+6R7noLhZUaYRB6Vj1UZCuNJdjrSYdJFAhcoS/3Uzw/1yXyeTse2rIoRs0+IUzeMVDbiaGCFwOSWUijCVXU2K/xJQeUaHRB4jUUbUx8gJb4ZUkRNEvOzwOYCmz8lWF9kHRATT9zeMHL8bV+boY7qzfoGIelLRBz/Gy3Ww749ijvyx8WirUkyzBgubU0sR72uqUEsFNFJuqZTWya/KmVOWSmmc/GJsE5WWii2wVNyedRWkHoNFgQuHwD+vQ4C2w1YGBZUpVHsRT3VQVW5QvXggfxTF5FoO5EXx9Ip9yJPrEf9W0+AOIG+3aRvbRQyWil5/vHkPY7xZo00i2/XnTvkK7YMldbMZnzgrLXxieFq0i3CP8ti9IAQ4duifKiGCGm1Fg0TVj8PJFJgCoQIVEUEgQ2VV/ODM4b/Y8aXizU7SxiJdfhAn/ACUFk/CqLVxWB0ZciWpjgyuJS035T4slKCd2oWo5wHyFrOjrI0C5AIvj+YXvoUt8Bt9UQfkKcAqvCpFwxo+ha3TyvABJwyfilYA72y4CygbeE24K6BsrAYzgTNAfgRkMJ6TW1Iqk/x/RdGC/HtMEvLiKb+T3REfNxTkBqJTFoMUiaqYAgCew6opcRx5LQUiI1eFtM2voqRqlM0SKJtzRWVrTvE530hv0r4nvJ7SoIJybRbpZoNt39rkV6WyE+/JmpiqVNZlVTbP2atorGhYdVSMqvZtKawjUFjvigor4mZ0KexL8IniP3G87oMCjZ58Z6wor5JTINChArlfRYGua5Z6vOKZ8IqK18xnqFe8CUqN0gmOP4J43gftWywWUJvFOXAFHIY2i9O/pv61idwr0SNTQ/S9gV82LBop5iJKhfdv+AxxlZv+6qlnU8SY3AfJSwdJYWxY1zBp3tkbKfUWsDc6VF5abM3szd2h/DoO5QDY/JCsTdVFGT513W+dBnLOiCuhcC2y2TpKJ/HNWljXgGwd8pm1tn9kAOemlCkcsBXNNBMpNTvZ7I9wl2DanNKaXgNibyZqCqaVNczVBKJEESWTNXVl9gin2HaYl6dTbp4tyPfqcIJtM4GkJZ2yy3nRPUynFOWkaxOtgplh6rLFxeMvACdywMhBZXaC7KicWRBMMpfZs4Ga/KK6R1wPc0unchmgXpWvb3D2eBRdyxPTk0CoxhPzBRi0pTGo3hMTpepo7Oi7HJ972NF3OIRbInrr+/bzPerA4VGCE7jG0AcVhvPMGY4CK4LWXPm4rq7OPl9e5I6tnmELerqwBbui5S0RN3jH1tWx5buaoOV7XSFLAYN4R5Z6ZFE+sYIsE6hClsPmwFu2ttlrlswcxcOs+tN2a7DbHubYpxMIGj0PCfZfZaI2BBz7b/GshSgeD1RYszIrU327BhbMxNHWwDLT+b5dAxsdNrAM76afMVHCYlgachsb41bHxg7T1raOi6VralpD2KJTDlxj6IJjHTsOXVgK1lHq/7zk3s1BNhxW6sA+l7UGLuMQAKZablkedQpvKyDMWCU83dcWoAEPrdyJAqCM8+BenrcgyKdt0dl3hbU8OzrvYephDWmsFYpUzGrSB68epappD5BA69yezYY8Au0rTqXIf4/ADJiH7zJWADyVSNZqzWpRRSPyk9cBTVw5JzctfQ22tDdvNiadbXzxSzVaSjLrdp/ruUzSjyKbCrqQ6acGBU3KeC4i1yWf9XWREnSYe/c9GTC/voBW3Y6RH7JcwLBa9RC/RgNJy0JcN4UCNhNvAGqyrHPfLUAB29WsEBVa1oy/KVS4DMNRE+SJwJuqfqO7AJ+tZf35mwJMDSMnImjn9yKdheZsLcvV3zImHDA0zbJTyVfruBwTJypWiAmegH3Bux2Hi3bEp9BHlTHn3UdnpIoihR5LOwx4Y160UG4+beoiU15nwt/fBNR271fKfbbGYKJqQU7LrKlLlyvlOiIGU6dse5+2r1e2XabtO6KUO52y7X2mrl7ZdhixcqCE8fPFQ++WVRvxukwecWToq2/XwB0mjzg3kP7EN3CHySOOVpKFjSLKbhQy9H03L8g9Kz8vaIpb0qMzZm7lVisTFYLSMuzICbPyBaJK4+/MZfn5qoBnDU2rC1/c0crP9BNrPcIQZ2S65GV9nrhrDSnTBcN6F0bqNi1Qbv6gbcaIo5XguYOqLaigP/TsElP5HitKQ
GXStOsSVBo3MtWy/+EdVNKgGsLqdDMWBj5BmFXhCc+dL0IRdmzKk9EZ2hQkFt7RpqwLc8AQVPctOjfoQfHlNIbfKNtdGSAdSxe6cr634/Qw5dOZyL0uA2WaDqwo3MKkoEsD9OKNo+sb4tR3T2q1N7RfmHJ2bmjn9dYdRYXgajdfN9uPQyavVZDoZ36B7gS4Vq074TI/5dfDqm9goTGJNA/G3QWpQZD1qurrrigUY7t8usvFKLkTTu+l7cHjIuqjrHlu9E9ymJABj49lJ2ifiCzG2tbIgt2SuQ2d6/s+r8P5PGqiX5vxVKZtX7oTMzuKDhyePc1tONVbFbrt2NM26gaFxlD/9agGf27VRvnRr1ZTEXbQoEkOL7V+rsmtzqw+H0mqIUIES0z7iqN0psMFPBeWDr3ne06rxyiEUztK8gZ7hXP3FK7XMwD1qhRKsR0HeANS9Gtz2lRJUZ8Iv8xeC9+xX+eVs0VPzi1YoQok5DDGNEmpvJxYqqsfeI7oFf8H</diagram></mxfile>
2206.01078/main_diagram/main_diagram.pdf ADDED
Binary file (28.8 kB).
 
2206.01078/paper_text/intro_method.md ADDED
@@ -0,0 +1,87 @@
1
+ # Introduction
2
+
3
+ In recent years, deep neural networks have become the computational backbone of reinforcement learning, achieving strong performance across a wide array of difficult tasks including games [@mnih2015human; @silver2016mastering] and robotics [@levine2018learning; @gao2020robotic]. In particular, Deep Q-Networks (DQN) [@mnih2015human] revolutionized the field of deep RL by achieving super-human performance on Atari 2600 games in the Atari Learning Environment [@bellemare2013arcade]. Since then, several advancements have been proposed to improve DQN [@hessel2018rainbow], and deep RL has been shown to excel in continuous control tasks as well [@haarnoja2018soft; @fujimoto2018addressing].
4
+
5
+ However, most Deep RL methods assume the agent is operating within a fully observable environment; that is, one in which the agent has access to the environment's full state information. But this assumption does not hold for many realistic domains due to components such as noisy sensors, occluded images, or additional unknown agents. These domains are *partially* observable, and pose a much bigger challenge for RL compared to the standard fully observable setting. Indeed, naïve methods often fail to learn in partially observable environments without additional architectural or training support [@pinto2017asymmetric; @igl2018deep; @ma2020discriminative].
6
+
7
+ To solve partially observable domains, RL agents may need to remember (some or possibly all) previous observations [@kaelbling1998planning]. As a result, RL methods typically add some sort of memory component, allowing them to store or refer back to recent observations in order to make more informed decisions. The current state-of-the-art approaches integrate recurrent neural networks, like LSTMs [@hochreiter1997long] or GRUs [@cho2014properties], in conjunction with fully observable Deep RL architectures to process an agent's history [@ni2021recurrent]. But recurrent neural networks (RNNs) can be fragile and difficult to train, often requiring complicated "warm-up" strategies to initialize their hidden states at the start of each training batch [@lample2017playing]. Conversely, the Transformer has been shown to model sequences much better than RNNs and is ubiquitous in natural language processing (NLP) [@devlin2018bert] and increasingly common in computer vision [@dosovitskiy2020image].
8
+
9
+ Therefore, we propose Deep Transformer Q-Network (DTQN), a novel architecture using self-attention to solve partially observable RL domains. DTQN leverages a transformer decoder architecture with learned positional encodings to represent an agent's history and accurately predict Q-values at each timestep. Rather than a standard approach that trains on a single next step for a given history, we propose a training regime called intermediate Q-value prediction, which allows us to train DTQN on the Q-values generated for each timestep in the agent's observation history and provide more robust learning. DTQN encodes an agent's history more effectively than recurrent methods, which we show empirically across several challenging partially observable environments. We evaluate and analyze several architectural components, including: gated skip connections [@parisotto2020stabilizing], positional encodings, identity map reordering [@parisotto2020stabilizing], and intermediate value prediction [@al2019character]. Our results provide strong evidence that our approach can successfully represent agents' histories in partially observable domains. We visualize attention weights showing DTQN learns an understanding of the domains as it works to solve tasks.
10
+
11
+ When an environment does not emit its full state to the agent, the problem can be modeled as a Partially Observable Markov Decision Process (POMDP) [@kaelbling1998planning]. A POMDP is formally described as the 6-tuple $(\mathcal{S}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \Omega, \mathcal{O})$. $\mathcal{S}$, $\mathcal{A}$, and $\Omega$ represent the environment's set of states, actions, and observations, respectively. $\mathcal{T}$ is the state transition function $\mathcal{T}(s, a, s') = P(s'|s, a)$, denoting the probability of transitioning from state $s$ to state $s'$ given action $a$. $\mathcal{R}$ describes the reward function $\mathcal{R}: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$; that is, the resultant scalar reward emitted by the environment for an agent that was in some state $s\in\mathcal{S}$ and took some action $a\in\mathcal{A}$. And $\mathcal{O}$ is the observation function $\mathcal{O}(s', a, o) = P(o|s', a)$, the probability of observing $o$ when action $a$ is taken resulting in state $s'$. At each time step, *t*, the agent is in the environment's state $s_t\in \mathcal{S}$, takes action $a_t\in \mathcal{A}$, manipulates the environment's state to some $s_{t+1}\in \mathcal{S}$ based on the transition probability $\mathcal{T}(s_t, a_t, s_{t+1})$ and receives a reward, $r_t = \mathcal{R}(s_t, a_t)$. The goal of the agent is to maximize $\mathbb{E}\big[\sum_t \gamma^t r_t\big]$, its expected discounted return for some discount factor $\gamma \in [0, 1)$ [@Sutton1998].
12
+
13
+ Because agents in POMDPs do not have access to the environment's full state information, they must rely on the observations $o_t \in \Omega$ which relate to the state via the observation function, $\mathcal{O}(s_{t+1}, a_t, o_t) = P(o_t|s_{t+1}, a_t)$. In general, agents acting in partially observable space cannot simply use observations as a proxy for state, since several states may be aliased into the same observation. Instead, they often consider some form of their full history of information, $h_t = \{(o_0, a_0), (o_1, a_1), ..., (o_{t-1}, a_{t-1})\}$. Because the history grows indefinitely as the agent proceeds in a trajectory, various ways of encoding the history exist. Previous work has truncated the history to make it a fixed length [@zhu2017improving] or used an agent's belief, which represents the estimate of the current state  [@kaelbling1998planning]. Since the deep learning revolution, others have used forms of recurrency, such as LSTMs and GRUs, to encode the history [@hausknecht2015deep; @yang2021recurrent].
14
+
15
+ Q-Learning [@Watkins92q-learning] aims to learn a function $Q: \mathcal{S}\times \mathcal{A} \to \mathbb{R}$ which represents the value of each state-action pair in an MDP. Given a state $s$, action $a$, reward $r$, next state $s'$, and learning rate $\alpha$, the $Q$-function is updated with the equation $$\begin{equation}
16
+ Q(s, a) := Q(s, a) + \alpha (r + \gamma \max_{a'\in \mathcal{A}}Q(s', a') - Q(s, a))
17
+ \end{equation}$$ In more challenging domains, however, the state-action space of the environment is often too large to be able to learn an exact $Q$-value for each state-action pair. Instead of learning a tabular Q-function, DQN [@mnih2015human] learns an approximate $Q$-function featuring strong generalization capabilities over similar states and actions. DQN is trained to minimize the Mean Squared Bellman Error $$\begin{equation}
18
+ \label{MSBE}
19
+ L(\theta) = \mathbb{E}_{(s, a, r, s')\sim\mathcal{D}}\big[\big(r + \gamma \max_{a'\in \mathcal{A}} Q(s',a';\theta') - Q(s, a; \theta)\big)^2\big]
20
+ \end{equation}$$ where transition tuples of states, actions, rewards, and future states $(s, a, r, s')$ are sampled uniformly from a replay buffer, $\mathcal{D}$, of past experiences while training. The target $r + \gamma \max_{a'\in \mathcal{A}} Q(s',a';\theta')$ invokes DQN's target network (parameterized by $\theta'$), which lags behind the main network (parameterized by $\theta$) to produce more stable updates.
21
+
22
+ However, in partially observable domains, DQN may not learn a good policy by simply replacing the network's input from states to observations (i.e., an agent can often perform better by remembering some history). To address this challenge, Deep Recurrent Q-Networks (DRQN) [@hausknecht2015deep] incorporated histories into the $Q$-function by way of a long short-term memory (LSTM) layer [@hochreiter1997long]. In DRQN's training procedure, the sampled states are replaced with histories $h_{t:t+k} = \{o_t, o_{t+1}, ..., o_{t+k}\}$ from timestep $t$ to step $t+k$, sampled randomly within each episode. The hidden state of the LSTM is zeroed at the start of each update.
23
+
24
+ The transformer architecture [@vaswani2017attention], originally introduced for natural language processing, stacks blocks of attention layers [@bahdanau2014neural] and is typically used to model sequential data. Intuitively, the transformer's attention module receives as input a sequence of tokens (e.g., a sequence of observations in an episode) and the model learns to place stronger weights or more *attention* on the most important tokens. For more details about the attention module in transformers, refer to Appendix [8.3](#appendix:attention){reference-type="ref" reference="appendix:attention"}.
25
+
26
+ While the original transformer architecture formed an encoder-decoder structure, recent works often use either the encoder [@devlin2018bert] or the decoder [@radford2018improving]. The key difference between the two is that the decoder applies causal masking to the attention layer; that is, the $i$th token cannot attend to tokens which come later in the sequence. In general, the transformer decoder has been shown to perform better on generative tasks like next token prediction, while the transformer encoder is able to learn richer representations and excels on tasks such as language understanding.
27
+
28
+ DTQN utilizes the transformer decoder structure. Given a tensor of shape $(B, C, D)$, where $B$ is the batch size, $C$ is the context length, and $D$ is the model's dimensionality size, the transformer decoder layer returns a tensor of the same shape, enabling us to stack layers on top of each other. The last transformer layer's output can then be projected to the desired shape, or sent as input to another network. To ensure the raw inputs are of the correct shape, we often prepend a feature extraction step, such as a lookup embedding for text or integers, a multilayer perceptron for vectors, or convolutional neural network for images.
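+
+ As a concrete shape sketch (PyTorch; the sizes are placeholders, and a causally-masked `nn.TransformerEncoderLayer` stands in for a decoder-style block):
+
+ ```
+ import torch
+ import torch.nn as nn
+
+ B, C, D, num_actions = 32, 50, 128, 4     # placeholder sizes
+ x = torch.randn(B, C, D)                  # embedded observation history
+
+ # Causal mask: position i cannot attend to positions j > i.
+ mask = torch.triu(torch.full((C, C), float("-inf")), diagonal=1)
+ block = nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True)
+
+ h = block(x, src_mask=mask)               # (B, C, D): shape preserved,
+ h = block(h, src_mask=mask)               # so layers can be stacked
+ q_values = nn.Linear(D, num_actions)(h)   # (B, C, num_actions)
+ ```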
29
+
30
+ ![Architectural diagram of DTQN. Each observation in the history is embedded independently, and Q-values are generated for each observation sub-history. Only the last set of Q-values are used to select the next action, but the other Q-values can be utilized for training.](Figures/dtqn_architecture.png){#fig:architecture width="95%"}
31
+
32
+ # Method
33
+
34
+ Transformers seem like a natural fit to represent histories in POMDPs, but there are several open questions regarding how to use them best in deep RL. In particular, it is unclear what form of transformer to use, how to integrate it into deep RL methods and how they should be trained. We chose to build DTQN using a transformer decoder structure incorporating learned position encodings, and train on the Q-values generated for each timestep in the agent's observation history. DTQN takes as input the agent's previous $k$ observations, $h_{t:t+k} = \{o_{t}, o_{t+1}, ..., o_{t+k-1}\}$, linearly projects each observation into the dimensionality of the model, and adds positional encodings to add information about the absolute temporal location of each observation. The embedded history is then passed through $N$ transformer layers, and finally projected to the action space of the environment (see Figure [1](#fig:architecture){reference-type="ref" reference="fig:architecture"} and Algorithm [\[alg:flowchart\]](#alg:flowchart){reference-type="ref" reference="alg:flowchart"}). DTQN outputs a set of Q-values relating to each observation in the input.
35
+
36
+ While we only use the Q-values from the most recent observation during execution, we train the network using all generated Q-values, even those relating to the observations at the beginning of the subhistory using the loss function in Algorithm [\[alg:flowchart\]](#alg:flowchart){reference-type="ref" reference="alg:flowchart"}. This training regime challenges the network to predict the Q-values in situations where it has little to no context, and produces a more robust agent. The remainder of this section expands on each contribution of the DTQN architecture.
37
+
38
+ Before the observation history is passed to DTQN's transformer layers, each observation in the agent's most recent $k$ observations, $h_{t:t+k}$, is linearly projected to the dimensionality of the transformer via a learned observation embedding (see Figure [1](#fig:architecture){reference-type="ref" reference="fig:architecture"}). After embedding, we add a learned positional encoding to each observation based on its position in the observation history. This result, which we call $E^0$ in Algorithm [\[alg:flowchart\]](#alg:flowchart){reference-type="ref" reference="alg:flowchart"}, is the input to the first transformer layer in DTQN.
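+
+ A minimal sketch of this embedding step (PyTorch; the dimensions are placeholders, and a linear projection stands in for the task-specific feature extractor):
+
+ ```
+ import torch
+ import torch.nn as nn
+
+ obs_dim, K, D = 10, 50, 128            # placeholder sizes
+ obs_embed = nn.Linear(obs_dim, D)      # learned observation embedding
+ pos_embed = nn.Embedding(K, D)         # learned positional encoding
+
+ history = torch.randn(1, K, obs_dim)   # agent's K most recent observations
+ E0 = obs_embed(history) + pos_embed(torch.arange(K))  # (1, K, D)
+ ```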
39
+
40
+ Position encodings are common practice in transformers, especially for NLP tasks, where they are well studied [@wang2020position]. However, the importance of position is less clear in the reinforcement learning setting. In some control tasks, the temporal position of an observation may not have any effect on its importance or meaning to solve the task. For instance, the importance of the priest observation in the classic HeavenHell domain [@Bonet98solvinglarge] is not dependent on when the observation occurs in the episode. On the other hand, domains with more dynamic state transitions may benefit greatly from the positional information. For this reason, we choose to learn our positional encodings as it gives the agent the most flexibility in terms of how it chooses to use them. We ablate this choice by comparing our learned positional encodings to sinusoidal positional encodings (used in the original transformer [@vaswani2017attention]) as well as not using any positional encodings in section [5.4](#sec:pos-results){reference-type="ref" reference="sec:pos-results"}.
41
+
42
+ Like the original GPT architecture [@radford2018improving], each transformer layer in DTQN features two submodules: masked multi-headed self-attention and a position-wise feedforward network. As described in Algorithm [\[alg:flowchart\]](#alg:flowchart){reference-type="ref" reference="alg:flowchart"}, first we project the output of the previous layer, $E^{L-1}$ to the queries, $Q$, keys, $K$, and values, $V$, through the weight matrices $W^Q$, $W^K$, and $W^V$, respectively. After each submodule, that submodule's input and output are combined (see the "Combine" step in Figure [1](#fig:architecture){reference-type="ref" reference="fig:architecture"}) followed by a LayerNorm [@ba2016layer]. Finally, after the last transformer layer, we project the final embedding ($E^N$ in Algorithm [\[alg:flowchart\]](#alg:flowchart){reference-type="ref" reference="alg:flowchart"}) to the action space of our environment to represent the Q-value for each action.
43
+
44
+ DTQN uses a residual skip connection [@he2016deep] to combine the two streams, matching the original transformer, in favor of other choices of combination layers such as the GRU gating combination layer [@parisotto2020stabilizing]. Another contested decision is the position of LayerNorm with respect to each submodule; the original transformer [@vaswani2017attention] and original GPT [@radford2018improving] apply LayerNorm after the combine step whereas other works have moved the LayerNorm to immediately before the submodule [@radford2019language; @parisotto2020stabilizing; @xu2020deep]. DTQN applies the LayerNorm after the combine step, a choice we found to be simple while also demonstrating strong empirical performance. We ablate our choices of network with the aforementioned variants in section [5.3](#sec:gtrxl-results){reference-type="ref" reference="sec:gtrxl-results"}.
45
+
46
+ <figure id="fig:main-results" data-latex-placement="t">
47
+ <figure>
48
+ <img src="Figures/Main_Results/POMDP-hallway-episodic-v0_Success_Rate.png" />
49
+ <figcaption>Hallway</figcaption>
50
+ </figure>
51
+ <figure>
52
+ <img src="Figures/Main_Results/POMDP-heavenhell_3-episodic-v0_Success_Rate.png" />
53
+ <figcaption>Heaven hell</figcaption>
54
+ </figure>
55
+ <figure>
56
+ <img src="Figures/Main_Results/gv_memory.5x5.yaml_Success_Rate.png" />
57
+ <figcaption>Gridverse memory 5x5</figcaption>
58
+ </figure>
59
+ <figure>
60
+ <img src="Figures/Main_Results/gv_memory.7x7.yaml_Success_Rate.png" />
61
+ <figcaption>Gridverse memory 7x7</figcaption>
62
+ </figure>
63
+ <figure>
64
+ <img src="Figures/Main_Results/gv_memory.9x9.yaml_Success_Rate.png" />
65
+ <figcaption>Gridverse memory 9x9</figcaption>
66
+ </figure>
67
+ <figure>
68
+ <img src="Figures/Main_Results/gv_memory_four_rooms.7x7.yaml_Success_Rate.png" />
69
+ <figcaption>Gridverse four rooms 7x7</figcaption>
70
+ </figure>
71
+ <figure>
72
+ <img src="Figures/Main_Results/DiscreteCarFlag-v0_Success_Rate.png" />
73
+ <figcaption>Car flag</figcaption>
74
+ </figure>
75
+ <figure>
76
+ <img src="Figures/Main_Results/legend.png" />
77
+ </figure>
78
+ <figure>
79
+ <img src="Figures/Main_Results/Memory-5-v0_Success_Rate.png" />
80
+ <figcaption>Memory cards</figcaption>
81
+ </figure>
82
+ <figcaption>Results showing the success rate of DTQN against baselines. DTQN is shown in blue, a simple attention network (ATTN) shown in brown, Deep Recurrent Q-Network (DRQN) <span class="citation" data-cites="hausknecht2015deep"></span> is shown in orange, and Deep Q-Network (DQN) <span class="citation" data-cites="mnih2015human"></span> is shown in purple. Lines show the mean and shaded regions represent standard error across 5 random seeds. DTQN excels both in terms of learning speed as well as final performance, clearly outperforming the baselines on nearly all domains. Refer to section <a href="#sec:baseline-results" data-reference-type="ref" data-reference="sec:baseline-results">5.2</a> for discussion of results.</figcaption>
83
+ </figure>
84
+
85
+ DTQN outputs a set of Q-values for each timestep in the agent's observation history. During evaluation, DTQN selects the action with the highest Q-value from the last timestep in its history. It would therefore be straightforward to train DTQN using just the last timestep's Q-values, since those have the most context to work with and are the most informed to select the optimal action. This regime, however, is very wasteful, as only a fraction of the generated Q-values actually get used for training. Instead, we train DTQN using all generated Q-values. This technique was originally used in the NLP setting, where each position was tasked with predicting the next character as an auxiliary loss [@al2019character]; we adapt it to the reinforcement learning setting, as shown in Algorithm [\[alg:flowchart\]](#alg:flowchart){reference-type="ref" reference="alg:flowchart"}. Note that the for loop depicted in Algorithm [\[alg:flowchart\]](#alg:flowchart){reference-type="ref" reference="alg:flowchart"} can be done in one forward pass of the network because of the causally-masked self-attention mechanism.
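+
+ The following sketch illustrates this loss (PyTorch; the tensor names and replay-sampling details are assumptions for illustration): a TD target is formed at every timestep of the sampled history, not only the last one.
+
+ ```
+ import torch.nn.functional as F
+
+ def intermediate_q_loss(q_all, q_next_all, actions, rewards, dones, gamma=0.99):
+     # q_all:      (B, K, |A|) main-network Q-values, one set per timestep
+     # q_next_all: (B, K, |A|) target-network Q-values for the shifted histories
+     # actions (long), rewards, dones: (B, K) tensors from the replay buffer
+     q_taken = q_all.gather(2, actions.unsqueeze(-1)).squeeze(-1)       # (B, K)
+     targets = rewards + gamma * (1 - dones) * q_next_all.max(dim=2).values
+     return F.mse_loss(q_taken, targets.detach())   # averaged over all K steps
+ ```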
86
+
87
+ We ablate training based on all Q-values with training only on the last timestep's Q-values in section [5.5](#sec:character-results){reference-type="ref" reference="sec:character-results"}, and show the performance gains in Figure [2](#table:ablations){reference-type="ref" reference="table:ablations"}.
2208.11640/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-09-27T13:55:38.737Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.50" version="20.3.0" etag="RR-su1Dw7NuqZa6GuEt5" type="device"><diagram id="yaM6voqPnmSDtdiOZS9A">7V1bU+M6Ev41rmIekvLdySMQMudsMXuo4WztnqcpxVYSLY7lIzsQ5tevbr5KDoE4EGYdKGLr7u6vpe621BjO9Wb3lYB0/Q1HMDZsM9oZzsywbdv0LfrFUp5FSuC5ImFFUCSSrCrhHv2EMtGUqVsUwaxRMMc4zlHaTAxxksAwb6QBQvBTs9gSx81eU7CSPZpVwn0IYqgU+zeK8rVMtfxplfEbRKu17HpiByJjA8rCIiFbgwg/1fpybgznmmCci6vN7hrGjHgFXcSA5h255cAITPJDKtiiwiOIt/LZ5Ljy5+JhCd4mEWTlLcO5elqjHN6nIGS5T5S9NG2db2KZvcRJLvllufQekFDeevROdgZJDnedA7ZKMlD8QLyBOXmmRWSFaeCIKhI6nuTQU40PgUxb11hg+TIRSN6vyqYr8tALSSE9tRyFODCiwJC3CU7o11VFL5PeYZKv8QonIL7FOJVU+i/M82dJF7DNcZOGcIfy/7DqY9d05P1fLG9smkX+bCfb5zfPxU1CH0hU9Yrbv+p5VTV+V9Src431mOWA5JdMUKrH4mlzxGgjW4iKEmEMsgyFIlEWsXgVgh9KCSke5Q4SRCkPiWxI0JQRsikZeEtCmSRnBzqCFZT8nB4MHAJjkKPHZuvHoMDVyIwf00FcpQQ24OH/vWWCfLXAu1GGfqJkZTiXtMQCkwiSEU3mz29iKhLLmNGS5UpE8KeLorKSle7oXzOVdTaUGCgZ5QxUl5qMBc5zvGnlxSiBo0IsWA5K1pQduch9osMaLQgED3KY7HIEGCHqoxwJsa9KsHqiiHiwVqfyaQmI0DZr5bG5ANGJlfaCVolsFGSQDVSUoFNFXs+N4VIOl6GWkRVK+niy0YLq9Golvzl3WHktew5taLlNwhzhhJaZEYoocoHTPPtCb42g7GVBtH3ohmWWDY1//Mi2KSQ/fozpmkXlZhvmmIxBmsbPF0XLFJfiEZqPRZM57pqpAxR/aSgWDWUpSLQN8XV6lImFmjXFANHRkmE70yagRLOiwHg8HhA4IFA/feVrlI03MMuoxpuxJNaUaXhXhjer9Xb4pBjMXgm2vrRVu1RYrnGMCW/NmdtTZ+poVZlKs+VVeVNyGFYPem5heRRqrqOouY6vU3Ot4HgFx+tUcA5GjduBmqvtasWe8h/gEWQhQSlr7poaiJ1cb/GXSUCTiU2mSW11SbXQVpIUm5mUmULYLmXyBkUR60SLmaZO3+S1avT0wf5Jg/2Oq1o5Ex37neO57+81cip75qZK7dPm8YyaxWOcwtrhd20r5EQmULglj+VspE4iB9g+U9X2sQ6HU+/GT/Cro+NTAcH8OCBMPhIIVg0GFSj0QDh3hvbJPFn1DiO+TsvFxDWbyoQdtHxhAlOyVgUBSgLwXCuWsgJZdz+262n7qRAlWqzwVT7jQZCbKnrJ99//+bWuYwql4eL29httzPwOU4DIl/50xEC3+Nc0Daox+vQzn3eqJbRyXb3kn340hkmL9qbGMarTGL3J8XNBMQ8dozH6Lxgsi3bCtki4IzikFghlKFMlNymKITM3bqgAs+9vwjrh3ZQ65lZjVtR7ABsGg2SRsa9/JVvR+iMgCCxiOK41jxJpSsrnMGwGvdIup/MKs4vcoNuI7tPf3kLj9fV8Pp12obEH1Flec2pxzKkCO5qowm7Sgzu+0INOAbubHYVAzDwY9zCG0u13GAOzNUjZZfhMYUGNfedlNi4Ez28XZQIIH1YcCX9sc4EusdBIXnuaha3F+/n8xr++1sxYzcXK7slgaeNAM/2YGhxM+8CB+tLqZV2ksX5/iGJSaagN/bRSV/doqHOwQTFL+A3Gj5AZsi1A+HtWm5O93jlA6Sks1IYaO/k4NbYYT2MOcWq/5hwxV+PNPf1zK9QdPvVnxdxfd2U0140XvF+vXQE7fb3VZDEKBbtZe2S1uLA9T6xHfA7mF6b7xdB6DLu7r2ihc9/0+owt3/OpfMldwxqczHtZ8w5e/oEzr+dM58TwBh37wFeXp5iRul63XhD4tyhK2DsGdpHAXX7Ye9fDSWC2fpvvvXSzUstmeKHoIDyfS3hOvaoqZm0NfY23YAPGPjvGhlfhn1ScFa8Ad/rwmooAX1hsRdL6is5WmAcgfRCQFPQU7sQ3QEWvtfUyTL1zc98Y9YPhi8ZS+ixYS3W3BedID4Nt+1wdd/wCNTUSpk89A9N7jnZ8Xnlv0ztCj700P+ghv9qc9nZdV2vZva89d17r8ADmzwtmYzDcfkGMHeLzHvh2fnyTb2A4J/ZtNhXFBjE8d3a+eao3XTnDF3O+5TnHTvU183/fnpBhARiQ1zPyjPZGo9N7CbofQ6NrN7c9vauD4PBx1nZjvd4xMEjjp5NGlKAcgbhG726EvdIrMmDk/DByiK+s0+nH3Vx978hkm7Bax47YiUzKuVt+N+MFyoNkmk18S4/9GMpeLvEx2hs7I7gEWyEqHZuzjtrk55puY5ef6yub/BzPVzf5Oa7bw04t9dz9V5hAAnLO6zuCN2mu8O7482IhJQ6D7tEnxurbdDUnyLRb+npgmjNtniUrWqjvzCx28TaY1sfOzO6jhMOrrF/8eHCHz9cogycco2O7ttT1mConL/a/aaNijx7KmZ8Fbqgv8Y3MjrX/PQM96JTS4ej9IFv7j8S8ezSIAaYDTF8D09OHjHgrIk8aRmLqXrnT6lBgRxiJKhjWUfpeUBytKvQ9Lxh7qp7uB6rKZ1leUfQorc9XtD6dRcYP68VQKgV3IMsM17QUVvyfRHyYuK0TVEGgnqQrAgz2HfPB2n+sv//Idp/2HP/bDj1N1ENPVWjJDzj0NFEEVByZ7mXrxcUfKUwufzfEu7Zdj+evfUNxkHiBPffmRvdhN/3px/5PwgbTpm/Et1UzW3f+upeDsNPPIL6O1RTgsUlxeCZCrI0Jc4Bkn0KK9cEVJoE79s3WIX8/YDFBp+WntXh3xHV4Q+iF4jnrMwZeGdztRlUu8ztIHpjq3W+ohVZohdmle1hohcoN2ndYrraUWxM1MJdWzN0exNxWz7sXXk9TukK5LnX2LlDvBJyxW5EI3tXPadsKZxQ2FGEB0IZHo6jzo01WbqWWqbdgAeM7nCHOXWdWmKpXMcu4Kp1kutWPd3aZpSIKNuMDKG6WXCWXRej9Os9TYW/O6W8YJfYYhThZ8hgG45D2aM8jkAP6xdIzZmzhHb8cEbjaxoCMHnE8omSfsxi588VutNiuRpY9GafMKlcg0APXHac1I6pM9x2V534POrOtOzn+g
Txf8s/Jec61QPAEM0oxemsFU0ayJ8qwcH1aXltWK46RY6nBRopwNn2HuNkT9Vk6SsKSFZVC7Jj8U086InjinC9qtnmLKULQT/CaoCTNyfiAhbiGrBuf/agL8v7oMu9gM08ndgMQE9XPcbop/8UomacHhNjLaP5JQJItMdl8OCLKmDMfh4iWfmZP3hESqt9LXREeYB6uJdGxiO1zXf6bClMlNf2Zs+6uVswjDKs8jdpWFJ91cSAC2bpk+YtrjX6JUpTCPZ7QOse1CxEPKQfJzSNkkeWkdioXzc1uxf59yBg8Ze54BQHpCSJNhNhTVYN3NYuI28ciErwwZ0ixBkmE6LIL64bV6YVaTk1vsLv2CXWHwtKf40UR+UDjeTmZyKuetD/XBDLO3VEqAh4mCC/pn8u77wq3zs8yc/fBomeLrb16a2JcWzpJ7IVvaixJnSTqNw/MYAxzGW6wEbm6cxvBu8tsK3D5+cls87/4nNJYp7fV/1MSrq/qv1I5N/8D</diagram></mxfile>
2208.11640/main_diagram/main_diagram.pdf ADDED
Binary file (67.9 kB).
 
2208.11640/paper_text/intro_method.md ADDED
@@ -0,0 +1,161 @@
1
+ # Introduction
2
+
3
+ The number of people writing code across different languages has grown steadily (Bureau of Labor Statistics 2021), and these programmers range from novices to experts. Regardless of their experience level, programmers can make mistakes when writing code. Program errors can range from those that are easy to spot and fix, to subtle logical bugs that require substantial application knowledge. Even simple mistakes, such as syntax errors that require a relatively small edit and may be apparent to a programming expert, can be frustrating for novice programmers. Moreover, they can slow down the workflow of more experienced programmers (Wexelblat 1976; Murphy et al. 2008; Altadmri and Brown 2015; Drosos, Guo, and Parnin 2017).
4
+
5
+ One way to help programmers who encounter these small mistakes is by using automated program repair (APR). These methods take a faulty program and a specification of correctness as input, and return as output a fixed version of the program that conforms to the specification. Recent work (Bavishi et al. 2022) has introduced the term *last-mile repairs* to broadly describe the class of repairs where the original program is a small edit distance away from the correct program. In this definition, program correctness can be checked without substantial additional context—a parser and a type checker suffice. A quick search on most programming help forums reveals a large number of questions about such errors. For example, as of August 2022, there are over 15K posts on StackOverflow tagged with Python and SyntaxError.
10
+
11
+ Existing work has explored performing these kinds of repairs automatically. Symbolic systems, such as Grmtools (Diekmann and Tratt 2020), typically build on error-recovery mechanisms in parsers to enumerate local edits that can resolve errors raised during parsing. Symbolic systems typically restrict the search space to avoid state explosion, and they cannot easily encode properties such as the likelihood that a particular repair candidate is correct.
12
+
13
+ More recently, neural approaches have been successfully applied to repairing syntax and diagnostics errors. For example, Dr. Repair (Yasunaga and Liang 2020), BIFI (Yasunaga and Liang 2021), and TFix (Berabi et al. 2021) use transformer architectures to produce repairs for C compilation errors, Python syntax errors, and JavaScript linter diagnostics, respectively. Some systems, such as LaMirage (Bavishi et al. 2022), have also combined symbolic and neural components to successfully repair broken programs in low-code languages such as Excel and Power Fx.
14
+
15
+ Unfortunately, all these systems share a key drawback: they require substantial engineering (symbolic) or additional data and training (neural) to adapt to new languages. In this paper, we propose a single repair engine that leverages a large language model trained on code (LLMC) to perform multilingual repair. We select Codex by OpenAI as the LLMC.
16
+
17
+ Our system, RING, shows that repair is nearly generation and exploits Codex's few-shot learning capabilities (Bareiß et al. 2022; Drori et al. 2022) to perform multilingual program repair. To do this effectively, we break down program repair into the same three phases as symbolic automated program repair systems: fault localization, code transformation, and candidate ranking (Goues, Pradel, and Roychoudhury 2019; Liu et al. 2021; Bavishi et al. 2022). We show how each stage can be addressed with minimal effort by emulating what a developer would do and using this intuition to design prompts for an LLMC.
22
+
23
+ We evaluate RING on six languages: Excel, Power Fx, Python, JavaScript, C and PowerShell. Our results show that RING repairs significantly more programs than a language-specific repair engine for three languages and shows competitive results for another two languages. We evaluate the effectiveness of our design choices for each of the three stages of repair. Additionally, we identify possible directions for improvement based on our results, such as language-specific ranking and iterative querying with Codex.
24
+
25
+ Jointly, these results provide the first evidence that an LLMC can enable multilingual repair with the same or better performance than methods designed for a single language. In contrast to other AI-assisted code editing features, such as code completion, this advance opens up the possibility of a *flipped interaction model* where the user writes code and the AI assistant performs the fixing.
26
+
27
+ In summary, we make the following contributions:
28
+
29
+ - We present an LLMC-based approach to multilingual repair that enables a flipped interaction model for AI-assisted programming in which the user writes code and the assistant suggests fixes for last-mile mistakes.
30
+ - We implement our approach in the RING system, which employs compiler (or diagnostic) messages, smart few-shot selection, and ranking of repair candidates to perform repair across varying languages.
31
+ - We perform an extensive evaluation across six different languages, showing that multilingual repair with LLMCs is viable and can compete with or outperform language-specific repair engines.
32
+ - We introduce PowerShell commands as a new application for last-mile repair and collect a benchmark set of 200 PowerShell commands from StackOverflow, which we also release for future research.<sup>1</sup>
33
+
34
+ # Method
35
+
36
+ Figure 1 shows the architecture of RING. We divide the task of fixing bugs into three stages: fault localization, program transformation and candidate ranking. Each stage is based on the intuition for how developers might approach such a stage manually. In the following subsections, we show how to address each stage using an LLMC.
37
+
38
+ We illustrate our approach using a running example – shown in Figure 2 – drawn from the BIFI (Yasunaga and Liang 2021) dataset. The user has incorrectly used tuple notation in the function signature (highlighted in pink). This syntax for unpacking tuples in function signatures was supported in Python 2. In Python 3, it raises a syntax error<sup>2</sup> with very little detail on the underlying issue.
39
+
40
+ <sup>1</sup> https://github.com/microsoft/prose-benchmarks/
41
+
42
+ <sup>2</sup> https://peps.python.org/pep-3113/
43
+
44
+ ![](_page_2_Figure_0.jpeg)
45
+
46
+ Figure 1: RING, powered by a Large Language Model trained on Code (LLMC), performs multi-lingual program repair. RING obtains fault localization information from error messages and leverages the LLMC's few-shot capabilities for code transformation through example selection, forming the prompt. Finally, a simple, yet effective, technique is used for ranking repair candidates.
47
+
48
+ ```
49
+ 1 def boundary_difference_power(graph,
50
+ (orig_image, sigma, spacing) ):
51
+ 2 orig_image = scipy.asarray(orig_image)
52
+ 3 def boundary_term_division(i):
53
+ 4 i = 1. /(i + 1)
54
+ 5 i = scipy.power(i, sigma)
55
+ 6 i[i <= 0] = sys.float_info.min
56
+ 7 return i
57
+ 8 __skeleton_difference(graph,
58
+ 9 orig_image,
59
+ 10 boundary_term_division)
60
+ ```
61
+
62
+ Figure 2: A real Python 3 syntax error from the BIFI dataset. The highlighted code uses tuple parameter unpacking syntax, which was valid in Python 2 but removed from Python 3. All listings are simplified for presentation clarity and brevity.
63
+
64
+ This example highlights that errors can also be introduced as languages evolve. RING fixes this mistake without additional user intervention.
65
+
66
+ As a first step towards debugging, a programmer typically locates the cause of the bug. For most modern languages, locating syntactic mistakes and some semantic errors, such as type errors, is aided by tools like the compiler, static analyzers, or linters. Following this intuition, we include a preprocessed error message produced by the compiler or other static analyzers. We normalize this message to enforce consistency across languages. Figure 3 shows this prompt variant for our running example, where the highlighting corresponds to our prepared syntax error message. For languages where the error messaging may not be precise, particularly with regards to the error location reported, we found that a simple abstraction that removes the reported error location but preserves the error text worked well – we discuss how to create such an abstracted message in our discussion section.
67
+
68
+ ```
69
+ 1 ### Buggy Python
70
+ 2 def boundary_difference_power(graph,
71
+ 3 (orig_image, sigma, spacing)):
72
+ 4 ...
73
+ 5 Error: (1) invalid syntax. Error in
74
+ 6 line: 2 span starts 4 and ends 32.
75
+ ```
76
+
77
+ Figure 3: To aid fault localization, we include a detailed compiler error message with line/column span information. We prepare uniform messages across languages by extracting details from the corresponding language compiler/analyzer.
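+
+ The sketch below shows one way such a prompt could be assembled; the diagnostic field names are hypothetical, not RING's internal API, and the output mirrors the format of Figure 3:
+
+ ```
+ def make_localization_prompt(language, buggy_code, diag):
+     # diag: a normalized diagnostic, e.g.
+     # {"count": 1, "text": "invalid syntax", "line": 2, "start": 4, "end": 32}
+     return (f"### Buggy {language}\n{buggy_code}\n"
+             f"Error: ({diag['count']}) {diag['text']}. Error in "
+             f"line: {diag['line']} span starts {diag['start']} "
+             f"and ends {diag['end']}.")
+ ```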
78
+
79
+ Once a developer has identified the location of a mistake, they must now apply an appropriate transformation—a sequence of edits—to the original source code at this location. Most developers accumulate experience in the type of transformations needed to resolve particular errors over time. Additionally, when novices encounter an unfamiliar mistake, they often search for examples of similar buggy/correct pairs that can inform their own transformation.
80
+
81
+ It has been shown that LLMs are capable of few-shot learning—the ability to learn from a few examples of the intended task—by adding related examples of the task to the prompt (Brown et al. 2020; Poesia et al. 2022). Given examples of transformations that repair programs, we exploit this capability in RING to address the code transformation stage. The main challenge is selecting relevant examples that are related to the mistake made by the developer.
82
+
83
+ Following the intuition that programs with similar mistakes have similar fixes, we select examples from a collection of buggy-fixed pairs based on error message similarity. We call this collection of buggy-fixed pairs the example bank.
84
+
85
+ To capture differences in language tooling, we implement two methods for selecting programs from our example bank. The key difference between these two methods is how they compute a similarity metric over error diagnostics.
86
+
87
+ The first variant, *error vector selection*, assumes that fine-grained error reporting is available. For example, the Excel parser returns a detailed report with many different diagnostic counters. We count the occurrence of each error category reported by the tool and construct a vector out of these frequencies – we refer to this as an error vector. We then select programs from the example bank by minimizing the L2 distance between error vectors.
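+
+ A sketch of this selection under assumed inputs, where each example-bank entry pairs a buggy/fixed example with its error-category frequency vector:
+
+ ```
+ import numpy as np
+
+ def select_by_error_vector(query_vec, bank_vecs, bank_pairs, k=2):
+     # bank_vecs: (N, num_error_categories) frequency matrix for the bank
+     dists = np.linalg.norm(bank_vecs - query_vec, axis=1)  # L2 distances
+     return [bank_pairs[i] for i in np.argsort(dists)[:k]]  # k nearest pairs
+ ```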
90
+
91
+ The second variant, *message embedding selection*, assumes that high-level errors are accompanied by detailed descriptions in natural language. For example, the Python parser often returns the same error (like SyntaxError) for different mistakes and instead exposes additional information through the associated natural language error message. We use this description by embedding the compiler messages with a pretrained CodeBERT (Feng et al. 2020) model and comparing embeddings based on cosine similarity.
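+
+ A sketch of this variant using the public microsoft/codebert-base checkpoint; mean-pooling over token embeddings is our assumption here, as the pooling is not pinned down above:
+
+ ```
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+
+ tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
+ enc = AutoModel.from_pretrained("microsoft/codebert-base")
+
+ def embed(message):
+     inputs = tok(message, return_tensors="pt", truncation=True)
+     with torch.no_grad():
+         hidden = enc(**inputs).last_hidden_state  # (1, seq_len, 768)
+     return hidden.mean(dim=1).squeeze(0)          # mean-pooled embedding
+
+ a = embed("SyntaxError: invalid syntax (line 2)")
+ b = embed("SyntaxError: invalid syntax (line 3)")
+ similarity = torch.nn.functional.cosine_similarity(a, b, dim=0)
+ ```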
92
+
93
+ Figure 4 shows a simplified few-shot prompt with an example, chosen using message embedding, which exhibits the same error (and required fix) as our buggy program. With this prompt, RING's top candidate is the right repair.
94
+
95
+ ```
96
+ 1 ### Buggy Python
97
+ 2 def initial_solution(self, start,
98
+ 3 (max_shares, desired_weight) ):
99
+ 4 ...
100
+ 5 Error: (1) invalid syntax. Error in
101
+ line: 3, span starts 35 and ends 36.
102
+ 6 ### Fixed Python
103
+ 7 def initial_solution(self, start,
104
+ 8 max_shares, desired_weight ):
105
+ 9 ...
106
+ ```
107
+
108
+ Figure 4: Our *smart selection of few-shots* retrieves relevant buggy-fix examples from an example bank. Shots are retrieved based on a similarity metric over error diagnostics. The shot selected (pink background) displays the same invalid signature-level tuple parameter unpacking (dark red background, bold) as our target program. The fixed portion of the shot (green background, bold) removes the parentheses.
109
+
110
+ LLMs achieve variation in their output by iteratively sampling each token from promising candidates. The extent to which less likely tokens can be selected is controlled by a parameter called *temperature*. We can thus generate multiple candidates by controlling the temperature during generation.
111
+
112
+ The final step in RING is to rank the candidates obtained by querying Codex using the prompt described in the prior two stages. We use a relatively simple (but effective) ranking strategy to order the candidate programs: we average the log-probabilities of the tokens selected during the decoding process and sort the candidates in descending order of their averages.
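+
+ A sketch of this ranking step, assuming the decoding API returns per-token log-probabilities alongside each candidate (as OpenAI's logprobs option does):
+
+ ```
+ def rank_candidates(candidates):
+     # candidates: list of (program_text, [token_log_prob, ...]) pairs
+     def avg_logprob(candidate):
+         _, logprobs = candidate
+         return sum(logprobs) / max(len(logprobs), 1)
+     return sorted(candidates, key=avg_logprob, reverse=True)
+ ```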
113
+
114
+ During development, we found that generating various candidates with higher temperatures – encouraging diverse candidates – and ranking them yields better performance than using lower temperatures such as zero.
115
+
116
+ We evaluate RING on six different languages, ranging from low-code formula languages to popular scripting languages. We describe the dataset, language-specific baseline(s) and evaluation metric for each language.
117
+
118
+ Excel We use a recently released dataset of 200 Excel repair tasks collected from Excel help forums (Bavishi et al. 2022). Each task consists of an Excel formula with syntax errors, some semantic errors (such as wrong function call arity), and a ground truth repair. We also collect a set of 73 tasks where the Excel formula contains at least one type error and annotate each such formula with a ground truth repair. The final collection consists of 273 Excel repair tasks.
119
+
120
+ A successful repair exactly matches the ground truth after normalizing tokens like spaces, capitalizing all the identifiers and cell references. We compare RING to the neurosymbolic repair engine LaMirage (Bavishi et al. 2022).
121
+
122
+ Power Fx Like Excel, we use the recently released 200 Power Fx repair tasks accompanying LaMirage. These tasks consist of syntactic and basic semantic errors, and are collected from help forums and anonymized product telemetry.
123
+
124
+ We use the same evaluation criteria as in Excel and compare to the neurosymbolic repair engine LaMirage.
125
+
126
+ Python We evaluate RING on a random sample of 200 syntactically invalid Python code snippets from the dataset used by the SOTA syntax repair tool for Python: BIFI (Yasunaga and Liang 2021). These code snippets were collected from GitHub repositories.
127
+
128
+ These snippets do not have a ground truth repair; hence, we employ the same evaluation metric described in the BIFI paper. A repair is successful if the produced program is (1) parsed successfully by the Python 3 parser and (2) has a Levenshtein (Levenshtein et al. 1966) token edit distance less than 5 from the buggy program. The Python tokens are generated by the Pygments<sup>3</sup> lexer.
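+
+ A sketch of this success check (our reconstruction of the criterion, with a hand-rolled edit distance over Pygments token values):
+
+ ```
+ import ast
+ from pygments.lexers import PythonLexer
+
+ def edit_distance(a, b):
+     # Standard one-row Levenshtein dynamic program over token sequences.
+     dp = list(range(len(b) + 1))
+     for i, x in enumerate(a, 1):
+         prev, dp[0] = dp[0], i
+         for j, y in enumerate(b, 1):
+             prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
+     return dp[-1]
+
+ def is_successful_repair(buggy, repaired, max_edits=5):
+     try:
+         ast.parse(repaired)              # (1) parses under Python 3
+     except SyntaxError:
+         return False
+     toks = lambda code: [v for _, v in PythonLexer().get_tokens(code)]
+     return edit_distance(toks(buggy), toks(repaired)) < max_edits  # (2)
+ ```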
129
+
130
+ We compare to BIFI, a transformer-based repair system that iteratively trains a *code breaker* that learns to generate realistic errors and a *code fixer* that repairs such errors.
131
+
132
+ JavaScript We evaluate RING on a random sample of 200 JavaScript (JS) code snippets drawn from the dataset released with TFix (Berabi et al. 2021). Each snippet has at least one error or warning reported by the popular linter ESLint (Tómasdóttir, Aniche, and Van Deursen 2018). In addition to syntax errors, ESLint also reports stylistic issues.
133
+
134
+ The dataset released by TFix contains a ground truth repair code snippet for each buggy snippet. Both buggy and ground truth code snippets were mined by the TFix authors from GitHub commits. The originally released dataset contains only the part of each code snippet relevant to the error and repair. However, these parts are an arbitrary window around the original fault location. We found that providing these arbitrary windows to Codex resulted in false edits, as the windows had syntax errors that were just an artifact of the windowing.
135
+
136
+ <sup>3</sup> https://pygments.org/
137
+
138
+ To mitigate this, we extracted the whole function (or whole file, if not in a function) that encompassed the originally buggy and the repaired code snippets. We refer to these as *extended code snippets*.
139
+
140
+ We compare our performance to TFix, a fine-tuned T5 (Raffel et al. 2020) model for JS repair. A repair is successful if it matches the ground truth corresponding to the buggy program. We run TFix on both the original window snippets and on our extended code snippets.
141
+
142
+ C We evaluate RING on a random sample of 200 C code snippets drawn from the dataset released with DeepFix (Gupta et al. 2017). These programs correspond to real user programs written by students in an introductory programming class and raise at least one compilation error.
143
+
144
+ We compare to Dr. Repair, a neural repair system that uses graph attention to combine information from the buggy code snippet and the associated compiler message (Yasunaga and Liang 2020). We use their success criterion: a repair must not raise any error messages when compiled using gcc -w -std=c99 -pedantic. Following BIFI, a repair must be less than 5 token edits away from the original buggy program.
145
+
146
+ PowerShell We introduce the novel task of repairing syntax errors in PowerShell commands. To create benchmarks, we searched StackOverflow (StackOverflow) for the word "error" in threads tagged with powershell. This resulted in 14,954 threads. We extracted code blocks with at least one space from the question and the accepted answer. We kept pairs where the question code was invalid and the answer code was valid. We judged validity using the PowerShell command Get-Command -syntax.
147
+
148
+ Finally, we manually annotated these candidate tasks using the associated StackOverflow post, confirming each pair was reflective of the original issue and did not have extra changes. We kept a final set of 208 task pairs.
149
+
150
+ There is no existing language-specific engine to compare with, as we introduce this task. A repair is successful if it exactly matches the associated answer code block.
151
+
152
+ Common Baseline We also use zero-shot Codex as a baseline for all languages. We use the following prompt:
153
+
154
+ ```
155
+ Fix bugs in the below <language> code:
156
+
157
+ <buggy program>
158
+
159
+ ```
160
+
161
+ where <language> is replaced with the appropriate language name for the benchmark task.
2209.10091/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-01-13T11:03:50.559Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" version="15.9.6" etag="KfZh2oyOhy3XZLLUm-P-" type="google"><diagram id="qh5EYBBiCuRikixgoSDz">1Vffr5owFP5reHRBClx9nHq3PbhlmUu2Pd30QoVuhUPqUWF//QocRC64kKnLZmLo+XraQ7/zq1hsmeRvNc/i9xAKZTl2mFtsZTnO1HHm5lEiBSG+O6uRSMuQsBbYyJ+CQJvQvQzFrqOIAApl1gUDSFMRYAfjWsOxq7YF1bWa8Ygs2i2wCbgSPbUvMsS4RmfemfY7IaO4sezaNJPwRpm22MU8hOOZLfZosaUGwHqU5EuhSvYaXuqN3lyYPb2YFimOWeDUCw5c7elslm9/MH96PSyaM2vYp6Eol00ttjjGEsUm40E5ezRuNliMiaJprgPy2txIO9Tw48SU/erBK1WUjFIjKrE1b7o4CI3SMPya4GdAhKRcbIzINFpXaivWIgvSqAzSMcwmIr9IxfREsAlNAYlAXRgVWnCKLorKiUvysXXxnKD43LuNczlFVXTauiXeDIj7YT+wHt8iNCFHYgqpeSxaF9hGAo0xRJBytQbIiIfvArEg6vkeoeuW2ki5czcIYa8DglxKJ64jQQf0RzOpheIoD93dr6HF7YdnibmWVytW4x5zSplCIEZE6S6rq8NW5iWti61UagkKdMv5DQKLTb1OYDleL67YQFyxG4SVN5Te+ZMcSvDb0GY5bFX9bpSU7oukZP2kvBd5/hB5xX9M3l8MvIff9I8/LF4ibW4C7rhS5l1TyhoKnQvxdw05szuQk0v8WjVX22vkb3Wzteckr3LavxKKM+Gj0NKcQmjCRpDbNMt/pFHMh3LVdAmZbrG4/jpDlxJdJ0nvsoKll5p7CXFI0ufKgRO/RT5RprFLN6MtpEiOntq3qQRsxN3GHSgFnne9axrTXd9gLJA/rS3HVyWjz9qMonJ0v+p6v44+ce7WlYzYfg1Uc2cfVezxFw==</diagram></mxfile>
2209.10091/main_diagram/main_diagram.pdf ADDED
Binary file (8.86 kB).
 
2209.10091/paper_text/intro_method.md ADDED
@@ -0,0 +1,177 @@
1
+ # Introduction
2
+
3
+ Deep neural networks have propelled research progress in many domains [@goodfellow2016deep]. However, selecting an appropriate complexity for the architecture of a deep neural network remains an important question: a network that is too shallow can hurt the predictive performance, and a network that is too deep can lead to overfitting or to unnecessary complexity.
4
+
5
+ In this paper, we introduce the *unbounded depth neural network* (UDN), a deep probabilistic model whose size adapts to the data, and without an upper limit. With this model, a practitioner need not be concerned with explicitly selecting the complexity for her neural network.
6
+
7
+ A UDN involves an infinitely deep neural network and a latent truncation level $\ell$, which is drawn from a prior. The infinite neural network can be of arbitrary architecture and containinterleave different types of layers. For each datapoint, it generates an infinite sequence of hidden states $(h_k)_{k \geq 1}$ from the input $x$, and then uses the hidden state at the truncation $h_{\ell}$ to produce the response $y$. Given a dataset, the posterior UDN provides a conditional distribution over the neural network's weights and over the truncation depth. A UDN thus has access to the flexibility of an infinite neural network and, through its posterior, can select a distribution of truncations that best describes the data. The model is designed to ensure that $\ell$ has no upper limit.
8
+
9
+ We approximate the posterior UDN with a novel method for variational inference. The method uses a variational family that employs an infinite number of parameters to cover the whole posterior space. However, this family has a special property: Though it covers the infinite space of all possible truncations, each member provides support over a finite subset. With this family, and thanks to its special property, the variational objective can be efficiently calculated and optimized. The result is a gradient-based algorithm that approximates the posterior UDN, dynamically exploring its infinite space of truncations and weights.
10
+
11
+ We study the UDN on real and synthetic data. We find that (i) on synthetic data, the UDN achieves higher accuracy than finite neural networks of similar architecture (ii) on real data, the UDN outperforms finite neural networks and other models of infinite neural networks (iii) for both types of data, the inference adapts the UDN posterior to the data complexity, by exploring distinct sets of truncations.
12
+
13
+ In summary, the contributions of this paper are as follows:
14
+
15
+ - We introduce the unbounded depth neural network: an infinitely deep neural network which can produce data from any of its hidden layers. In its posterior, it adapts its truncation to fit the observations.
16
+
17
+ - We propose a variational inference method with a novel variational family. It maintains a finite but evolving set of variational parameters to explore the unbounded posterior space of the UDN parameters.
18
+
19
+ - We empirically study the UDN on real and synthetic data. It successfully adapts its complexity to the data at hand. In predictive performance, it outperforms other finite and infinite models.
20
+
21
+ This work contributes to research in architecture search, infinite neural networks, and unbounded variational families. Section [5](#sec:relatedwork){reference-type="ref" reference="sec:relatedwork"} provides a detailed discussion of related work.
22
+
23
+ We begin by introducing the family of unbounded-depth neural networks. Let $\mathcal{D} = \left\{(x_i,y_i)\right\}_{i=1}^n$ be a dataset of $n$ labeled observations.
24
+
25
+ **Classical neural networks.**  A classical neural network of depth $L$ chains $L$ successive functions $(f_\ell)_{1\leq \ell \leq L}$, eventually followed by an output function $o_L$. $$\begin{array}{cccccccccc}
26
+ & & f_1(.) & & f_2(.) & & f_3(.) & & \hdots & f_L(.)\\
27
+ & \nearrow & \downarrow & \nearrow & \downarrow & \nearrow & \downarrow & \nearrow & & \downarrow\\
28
+ x & & h_1 & & h_2 & & h_3 & & & h_L \\
29
+ & & & & & & & & & \downarrow\\
30
+ & & & & & & & & & o_L(.)\\
31
+ \end{array}$$ Each $h_\ell$ is called a *hidden state* and each $f_\ell$ is called a *layer*. Each layer is usually composed of a linear function, an activation function, and sometimes other differentiable transformations like *batch-normalization* [@ioffe2015batch]. In deep architectures, $f_\ell$ can refer to a *block* of layers, such as a succession of 3 convolutions in a Resnet [@he2016resnet] or a dense layer followed by attention in transformers [@vaswani2017attention].
32
+
33
+ We fix a *layer generator* $f$ which returns the layer $f_\ell$ for each integer $\ell$. The layer generator can return layers of different shapes or types for different $\ell$ as long as two consecutive layers can be chained by composition. Similarly, we fix an *output generator* $o_L$ which transforms the last hidden state $h_L$ into a parameter suitable for generating a response variable. We write $\theta_\ell$ for the parameters of $f_\ell$ and incorporate those of $o_L$ into $\theta_L$. With this notation, a finite neural network of depth $L$, generated by $(f,o)$, is written as $$\Omega_L = o_L \circ f_L \circ f_{L-1} \circ ... \circ f_2 \circ f_1$$ and has parameters $\left( \theta_1, ..., \theta_L \right)$.
34
+
35
+ Finally, we fix a distribution $p$ parametrized by the output of the neural network $\Omega_L(x_i; \theta_{1:L})$, and use it to model the responses conditional on the input: $$\begin{align*}
36
+ \forall i, \quad y_i | x_i, \theta_{1:L} &\sim p(y_i; \Omega_L(x_i; \theta_{1:L})).
38
+ \end{align*}$$ Given a dataset $\{x_i, y_i\}_{i=1}^{n}$ and Gaussian priors on $\theta_{1:L}$, MAP estimation of the neural network parameters corresponds to classical methods for fitting a neural network with weight decay [@neal1996priors]. In this paradigm, the form of $p$ is related to the loss function [@murphy2012machine].
39
+
40
+ This classical model requires that the layers be set in advance. Its flexibility and ability to capture the data depend crucially on the selection of $L$. Too large and the model is too flexible, and can overfit. Too small and the model is not flexible enough. It is appealing to consider a model that can adapt its depth to the data at hand.
41
+
42
+ **Unbounded depth neural networks.**  We extend the finite construction above to formulate an unbounded depth neural network (UDN). We consider an infinite sequence of hidden states $(h_k)_{k \geq 1}$ generated by $(f_\ell)_{\ell \geq 1}$ and parametrized by an infinite set of weights $\theta \triangleq \left( \theta_\ell \right)_{\ell \geq 1}$.
43
+
44
+ A challenge to conceptualizing an infinite-depth neural network is where to hang the observation. What we do is posit a possible output layer $o_\ell$ after each layer of hidden units. $$\begin{array}{cccccccccc}
45
+ & & f_1(.) & & f_2(.) & & f_3(.) & & \hdots \\
46
+ & \nearrow & \downarrow & \nearrow & \downarrow & \nearrow & \downarrow & \nearrow & \\
47
+ x & & h_1 & & h_2 & & h_3 & & \hdots \\
48
+ & & \downarrow & & \downarrow & & \downarrow & & \\
49
+ & & o_1(.) & & o_2(.) & & o_3(.) & & \hdots \\
50
+ \end{array}$$ We then add an additional parameter, a truncation level $\ell$, to determine which $o_\ell$ will generate the response.
51
+
52
+ The complete UDN models the truncation level $\ell$ as an unobserved random variable with a prior $\mu$. Along with a prior $\rho$ on the weights, the UDN defines a generative model with an infinite-depth neural network: $$\begin{align*}
53
+ \theta &\sim \rho(\theta) && \quad \triangleright \text{network weights} \\
54
+ \ell & \sim \mu(\ell) && \quad \triangleright \text{truncation} \\
55
+ y_i | x_i, \theta, \ell &\sim p(y_i; \Omega_\ell(x_i ; \theta)) && \quad \triangleright \text{response}
56
+ \end{align*}$$ This generative process is represented in figure [1](#fig:graphical_model){reference-type="ref" reference="fig:graphical_model"}. If the truncation prior $\mu$ puts a point mass at $L$ then the model is equivalent to the classical finite model of depth $L$. But with a general prior over all integers, the posterior UDN has access to all depths of neural networks.
57
+
58
+ ![Graphical model for the unbounded depth neural network.](fig/model-infinite-v2.pdf){#fig:graphical_model}
59
+
60
+ The independence of the priors is important. The model does not put a prior on $\ell$ and then samples the weights conditional on it. Rather, it first samples a complete infinite neural network, and then samples the finite truncation to produce its data. What this generative structure implies is that different truncations will share the same weights on their shared layers. As we will see in section [4](#sec:implementation){reference-type="ref" reference="sec:implementation"}, this property leads to efficient calculations for approximate posterior inference.
61
+
62
+ Given a dataset $\mathcal{D} = \left\{(x_i,y_i)\right\}_{i=1}^n$, the goal of Bayesian inference is to compute the posterior UDN $p(\theta, \ell \mid \mathcal{D})$. The exact posterior is intractable, and so we appeal to variational inference [@jordan1999introduction; @wainwright2008graphical; @blei2017variational]. In traditional variational inference, we posit a family of approximate distributions over the latent variables and then try to find the member of that family which is closest to the exact posterior.
63
+
64
+ The unbounded neural network, however, presents a significant challenge to this approach---the depth $\ell$ is unbounded and the number of latent variables $\theta$ is infinite. To overcome this challenge, we will develop an unbounded variational family $q(\theta, \ell)$ that is still amenable to variational optimization. With the algorithm we develop, the "search" for a good distribution of truncations is a natural consequence of gradient-based optimization of the variational objective.
65
+
66
+ We define a joint variational family that factorizes as $q(\theta, \ell) = q(\ell) q(\theta | \ell)$, and note that the factor for the neural network weights depends on the truncation. We introduce the parameters $\lambda, \nu$ and detail the structure of the families $\left\{q(\ell; \lambda)\right\}$ and $\left\{q(\theta | \ell; \nu)\right\}$.
67
+
68
+ **The unbounded variational family with connected and bounded members $q(\ell; \lambda)$.**  For a truly unbounded procedure, we require that the variational family over $\ell$ should be able to explore the full space of truncations $\mathbb{N}^{*}$. Simultaneously, since the procedure must run in practice, each distribution $q(\ell; \lambda)$ should be tractable, that is $\mathbb{E}_{q(\ell)}[g(\ell)]$ can be computed efficiently for any $g$.
69
+
70
+ A sufficient condition for tractable expectations is that $q(\ell; \lambda)$ has finite support; the expectation becomes the finite sum $\sum_{i \in \text{support}(q)} q(i; \lambda) g(i)$. However, to be able to explore the unbounded posterior space $\mathbb{N}^{*}$, the variational family $\left\{q(\ell; \lambda)\right\}$ itself cannot have finite support. It should contain distributions covering all possible truncations $\ell$. Moreover, it should be able to navigate continuously between these distributions.
71
+
72
+ We articulate these conditions in the following definition:
73
+
74
+ ::: {#def:varfam .definition}
75
+ **Definition 1**. A variational family $\mathcal{Q} = \left\{q(\lambda)\right\}$ over $\mathbb{N}^{*}$ is *unbounded with connected and bounded members* if
76
+
77
+ 1. []{#cond:bounded} $\forall q \in \mathcal{Q}, ~~ \text{support}(q)$ is bounded;
+
+ 2. []{#cond:mode} $\forall L \in \mathbb{N}^{*}, ~~ \exists q \in \mathcal{Q}, ~~ L \in \text{argmax}(q)$;
+
+ 3. []{#cond:continuous} The parameter $\lambda$ is a continuous variable.
88
+ :::
89
+
90
+ Echoing the discussion above, there are several consequences to this definition:
91
+
92
+ - By ([\[cond:bounded\]](#cond:bounded){reference-type="ref" reference="cond:bounded"}), each $q$ has finite support. We write $m(q) := \max \left\{\ell \mid q(\ell)>0\right\}$ for its maximal value.
93
+
94
+ - Thanks to ([\[cond:mode\]](#cond:mode){reference-type="ref" reference="cond:mode"}), the approximate posterior can place its main mass around any $\ell$. That is, $\mathcal{Q}$ covers the space of all possible truncations: $\bigcup_{q \in \mathcal{Q}} \text{support}(q) = \mathbb{N}^{*}$.
95
+
96
+ - Condition ([\[cond:continuous\]](#cond:continuous){reference-type="ref" reference="cond:continuous"}) ensures that $\mathcal{Q}$ not only contains members with mass on any $\ell$, but it can continuously navigate between them. This condition is important for optimization.
97
+
98
+ **The nested family $q(\theta | \ell ; \nu)$.**  In the UDN model, conditional on $\ell$, the response $y$ depends only on the first $\ell$ layers and not the subsequent ones. Thus the exact posterior $p(\theta | \ell)$ only contains information from the data for $\theta_i$ up to $i \leq \ell$; the posterior of the $\theta_i$ with $i > \ell$ must match the prior.
99
+
100
+ We mirror this structure in the variational approximation, $$\begin{align}
101
+ q(\theta | \ell; \nu) = q(\theta_{1:\ell}; \nu_{1:\ell})\prod_{k=\ell+1}^\infty p(\theta_k). \label{eqn:nested}
102
+ \end{align}$$ @kurihara2007accelerated also introduce a family with structure as in ([\[eqn:nested\]](#eqn:nested){reference-type="ref" reference="eqn:nested"}), which they call a *nested variational family*.
103
+
104
+ **The evidence lower bound.**  The full variational distribution combines the *unbounded variational family* of Definition [1](#def:varfam){reference-type="ref" reference="def:varfam"} with the *nested family* of ([\[eqn:nested\]](#eqn:nested){reference-type="ref" reference="eqn:nested"}), $q(\ell, \theta) = q(\ell; \lambda)q(\theta|\ell, \nu)$. With this distribution, we can now derive the optimization objective.
105
+
106
+ Variational inference seeks to minimize the KL divergence between the variational posterior $q$ and the exact posterior $p$. This is equivalent to maximizing the variational objective [@bishop2006pattern], which is commonly known as the Evidence Lower BOund (ELBO). Because of the factored structure of the variational family, we organize the terms of the ELBO with iterated expectations, $$\begin{align}
107
+ \mathcal{L}(q)
108
+ &= \mathbb{E}_{ q(\ell, \theta)}[\log p(Y,\ell, \theta | X) - \log q(\ell, \theta)] \\
109
+ &= \mathbb{E}_{q(\ell)} \left[ \log
110
+ \textstyle\frac{p(\ell)}{q(\ell)} + \mathbb{E}_{q(\theta \mid
111
+ \ell)} \left[ \log \frac{p(\theta)}{q(\theta | \ell; \nu)}
112
+ \right. \right. \nonumber \\
113
+ &\hspace*{1.3cm} \left. \left. + {\textstyle \sum_{i=1}^n }\log p(y_i \mid \ell, \theta, x_i) \textstyle\vphantom{\frac{p(\ell)}{q(\ell)}} \right] \right]. \label{eqn:finite-elbo}
114
+ \end{align}$$ Further, using the special structure of this variational family, the ELBO can be simplified:
115
+
116
+ - The factor $q(\theta|\ell)$ satisfies the nested structure condition ([\[eqn:nested\]](#eqn:nested){reference-type="ref" reference="eqn:nested"}) so $\frac{p(\theta)}{q(\theta | \ell; \nu)} = \frac{p(\theta_{1:\ell})}{q(\theta_{1:\ell};\nu_{1:\ell})}$. This quantity only involves a finite number $\ell$ of parameters and variables even if the prior and posterior were initially over all the variables.
117
+
118
+ - The factor $q(\ell)$ satisfies ([\[cond:bounded\]](#cond:bounded){reference-type="ref" reference="cond:bounded"}). The outer expectation of $\mathcal{L}(q)$ can be explicitly computed.
119
+
120
+ With these two observations, we rewrite the ELBO: $$\begin{align}
121
+ \mathcal{L}(q)
122
+ &=\sum_{\ell=1}^{m(q)} q(\ell; \lambda)\hspace{-0.1cm} \left[ \log \textstyle\frac{p(\ell)}{q(\ell;\lambda)} + \mathbb{E}_{q(\theta \mid \ell;\nu)} \hspace{-0.1cm} \left[\textstyle \sum\limits_{k=1}^\ell {\textstyle\log \frac{p(\theta_k)}{q(\theta_k ; \nu_k)}} \vphantom{\sum_{i=1}^\ell \log \frac{p(\theta_i)}{q(\theta_i | \ell; \nu_i)}} \right. \right. \nonumber \\
123
+ &\hspace*{2.2cm} \left. \left. { + \textstyle \sum\limits_{i=1}^n }\log p(y_i; \Omega_\ell(x_i ; \theta)) \right] \right]. \label{eqn:final-elbo}
124
+ \end{align}$$ Notice this equation expresses the ELBO using only a finite set of parameters $\lambda, \nu_{1:m(q(\lambda))}$, which we call *active* parameters. Thus we can compute the gradient of the ELBO with respect to $(\lambda, \nu_{1:\infty})$, since only the coordinates corresponding to the active parameters $\lambda, \nu_{1:m(q)}$ can be nonzero. This fact allows us to optimize the variational distribution.
125
+
126
+ **Dynamic variational inference.**  In variational inference we optimize the ELBO of equation ([\[eqn:final-elbo\]](#eqn:final-elbo){reference-type="ref" reference="eqn:final-elbo"}). We use gradient methods to iteratively update the variational parameters $(\lambda, \nu_{1:\infty})$. Equation ([\[eqn:final-elbo\]](#eqn:final-elbo){reference-type="ref" reference="eqn:final-elbo"}) just showed how to take one efficient gradient step, by only updating the active parameters, those with nonzero gradients. From there, a succession of gradient updates becomes possible and still involves only a finite set of parameters. Indeed, even if the special property ([\[cond:mode\]](#cond:mode){reference-type="ref" reference="cond:mode"}) of the variational family guarantees that $q(\ell; \lambda)$ can place mass on any $\ell$, and by doing so, can activate any parameter $\nu_\ell$ during the optimization, successive updates of finitely many parameters will still only affect finitely many parameters.
127
+
128
+ For instance, the inference can start with $m(q(\lambda)) = 5$ active layers, and increase to $m(q(\lambda)) = 6$ after an update to $\lambda$ that favors a deeper truncation. The next gradient update of $(\lambda, \nu)$ will then affect $\nu_{6}$, which was not activated earlier. At any iteration, the ELBO involves a finite subset of the parameters, but this set of active parameters naturally grows or shrinks as needed to approach the exact posterior.
129
+
130
+ Because the subset of active variational parameters evolves during the optimization, we refer to the method of combining the *unbounded variational family* of Definition [1](#def:varfam){reference-type="ref" reference="def:varfam"} with the *nested family* of ([\[eqn:nested\]](#eqn:nested){reference-type="ref" reference="eqn:nested"}), as *dynamic variational inference*. We detail in section [4](#sec:implementation){reference-type="ref" reference="sec:implementation"} how to run efficient computations when we do not know in advance which variational parameters will be activated during the optimization.
131
+
132
+ We end the inference section by proposing priors for $(\ell, \theta)$ and an explicit variational family satisfying the unbounded family conditions ([\[cond:bounded\]](#cond:bounded){reference-type="ref" reference="cond:bounded"}, [\[cond:mode\]](#cond:mode){reference-type="ref" reference="cond:mode"}, [\[cond:continuous\]](#cond:continuous){reference-type="ref" reference="cond:continuous"}) and the nested structure ([\[eqn:nested\]](#eqn:nested){reference-type="ref" reference="eqn:nested"}).
133
+
134
+ **Choice of prior.**  We use a standard Gaussian prior over all the weights $\theta$. We set a Poisson($\alpha$) prior for $\ell$; more precisely, $\ell - 1 \sim \text{Poisson}(\alpha)$ because $\ell > 0$. The value of the mean $\alpha$ is given with the experimental details in the appendix.
135
+
136
+ **Choice of family $q(\ell; \lambda)$.**  To obtain the unbounded family from definition [1](#def:varfam){reference-type="ref" reference="def:varfam"}, we adapt the Poisson family $\mathcal{P} = \left\{\text{Poisson}(\lambda) \mid \lambda > 0\right\}$ by truncating each individual distribution $q(\ast; \lambda) = \text{Poisson}(\lambda)$ to its $\delta$-quantile. $$q^{{\delta}}(\ell ; \lambda) \propto q(\ell; \lambda)\mathds{1}[\ell \leq \delta\text{-quantile}(q(*; \lambda))]$$ This forms the Truncated Poisson family $\mathcal{TP}(\delta) = \left\{q^\delta \mid q \in \mathcal{P}\right\}$ and the following holds:
137
+
138
+ ::: {#th:tp .theorem}
139
+ **Theorem 2**. *For any $\delta \in [0.5, 1[$, $\mathcal{TP}(\delta)$ is unbounded with connected bounded members. For $\delta=0.95$, we have*
140
+
141
+ - *$$\begin{flalign}
142
+ \lambda - \ln 2 \leq m(q^{{0.95}}(\lambda)) \leq 1.3 \lambda + 5 \label{tp:upper}&&
143
+ \end{flalign}$$*
144
+
145
+ - *$$\begin{flalign}
146
+ \forall n\in \mathbb{N}^{*}, ~~ n \in \textnormal{argmax}(q^{0.95}(n+0.5)) \label{tp:argmax}&&
147
+ \end{flalign}$$*
148
+
149
+ - *$$\begin{flalign}
150
+ \lambda > 0 \text{ ~is a continuous parameter.}\label{tp:continuous}&&
151
+ \end{flalign}$$*
152
+ :::
153
+
154
+ Inequalities ([\[tp:upper\]](#tp:upper){reference-type="ref" reference="tp:upper"}) are proven in the appendix using Poisson tail bounds from @short2013improved and bounds on the Poisson median from @choi1994medians. They show that $q^{{0.95}}(\lambda)$ satisfies the bounded support condition ([\[cond:bounded\]](#cond:bounded){reference-type="ref" reference="cond:bounded"}) with a support growing linearly in $\lambda$. The result ([\[tp:argmax\]](#tp:argmax){reference-type="ref" reference="tp:argmax"}) offers explicit distributions in $\mathcal{TP}(0.95)$ that satisfy the unbounded family condition ([\[cond:mode\]](#cond:mode){reference-type="ref" reference="cond:mode"}). Finally, ([\[tp:continuous\]](#tp:continuous){reference-type="ref" reference="tp:continuous"}) ensures the continuity condition ([\[cond:continuous\]](#cond:continuous){reference-type="ref" reference="cond:continuous"}). During inference, $\delta$ is set to $0.95$ and $\lambda$ is a variational parameter.
155
+
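+ The truncated Poisson family is also simple to realize numerically. Below is a minimal sketch, assuming `scipy` is available; the function name and the shift to depths $\ell = k+1$ are our illustrative choices, not taken from a released implementation.
+
+ ```python
+ import numpy as np
+ from scipy.stats import poisson
+
+ def truncated_poisson(lam, delta=0.95):
+     """Support and probabilities of q^delta(.; lam): a Poisson(lam)
+     distribution truncated at its delta-quantile and renormalized."""
+     m = int(poisson.ppf(delta, lam))   # truncation point: the delta-quantile
+     ks = np.arange(m + 1)              # support {0, ..., m} on the Poisson scale
+     probs = poisson.pmf(ks, lam)
+     probs /= probs.sum()               # renormalize the truncated mass
+     return ks + 1, probs               # shift to depths ell >= 1, as in the prior
+ ```
+
+ With this finite support, any expectation $\mathbb{E}_{q(\ell)}[g(\ell)]$ reduces to a finite weighted sum over the returned depths.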
156
+ **Choice of family $q(\theta \mid \ell; \nu)$.**  The variational posterior on $\theta$ controls the neural network weights. In the nested structure ([\[eqn:nested\]](#eqn:nested){reference-type="ref" reference="eqn:nested"}), we model $q(\theta_{1:\ell}; \nu_{1:\ell})$ with a mean-field Gaussian.[^1] With the prior defined earlier, $q(\theta | \ell; \nu)$ becomes $$q(\theta | \ell; \nu) = \mathcal{N}(\nu_{1:\ell}, I_{\ell})[\theta_{1:\ell}]\prod_{k=\ell+1}^\infty \mathcal{N}(0, 1)[\theta_k].$$ We approximate $\mathbb{E}_{q(\theta | \ell)}[g(\theta_{1:\ell})]$ to first order with $g(\mathbb{E}_{q(\theta | \ell)}[\theta_{1:\ell}]) = g(\nu_{1:\ell})$ for any $g$.
157
+
158
+ **Predictions.**  The UDN can predict the labels of future data, such as held-out data for testing. It uses the learned variational posterior $q(\ell, \theta; \lambda, \nu)$ to approximate the predictive distribution of the label $y'$ of new data $x'$ as $$\begin{align}
+ p(y' \mid x', \mathcal{D}) &\approx \mathbb{E}_{q(\ell, \theta; \lambda, \nu)}[p(y'; \Omega_\ell(x'; \theta_{1:\ell}))] \nonumber\\
+ &\approx \sum_{\ell = 1}^{m(q)} q(\ell; \lambda) \cdot p(y'; \Omega_\ell(x'; \theta_{1:\ell})). \label{eqn:predictive}
+ \end{align}$$ The predictive distribution forms an ensemble over the different truncations discovered during variational inference. This is related to @antoran2020depth.
162
+
163
+ # Method
164
+
165
+ We review the computational aspects of both the model and the associated dynamic variational inference.
166
+
167
+ **Linear complexity in $m(q)$.**  Evaluating the ELBO ([\[eqn:final-elbo\]](#eqn:final-elbo){reference-type="ref" reference="eqn:final-elbo"}) or evaluating the predictive distribution ([\[eqn:predictive\]](#eqn:predictive){reference-type="ref" reference="eqn:predictive"}) requires computing the output of $m(q)$ different neural networks, $\Omega_1$ to $\Omega_{m(q)}$. However, most of the computations can be shared [@antoran2020depth]. We calculate the hidden layers sequentially up to $h_{m(q)}$, as they are needed to compute $\Omega_{m(q)}(x)$. We then apply the output layer $o_\ell$ to each hidden layer $h_\ell$ and obtain the collection $\left\{\Omega_\ell(x)\right\}_{\ell=1}^{m(q)}$. Hence, computing $\Omega_{m(q)}(x)$ alone or the whole collection $\left\{\Omega_\ell(x)\right\}_{\ell=1}^{m(q)}$ has the same complexity in $m(q)$.
168
+
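+ As an illustration, the shared computation is a few lines of code; this is a minimal sketch with illustrative names (`hidden_layers` holds $f_1, \dots, f_{m(q)}$ and `output_layers` holds $o_1, \dots, o_{m(q)}$), not the paper's released implementation.
+
+ ```python
+ def forward_all_truncations(x, hidden_layers, output_layers):
+     """Compute the collection {Omega_ell(x)} for ell = 1..m(q) in one pass."""
+     outputs = []
+     h = x
+     for f_ell, o_ell in zip(hidden_layers, output_layers):
+         h = f_ell(h)              # h_ell = f_ell(h_{ell-1}), shared across truncations
+         outputs.append(o_ell(h))  # Omega_ell(x) = o_ell(h_ell)
+     return outputs
+ ```
+
+ The same collection serves prediction: weighting `outputs[ell-1]` by $q(\ell; \lambda)$ yields the ensemble of ([\[eqn:predictive\]](#eqn:predictive){reference-type="ref" reference="eqn:predictive"}).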
169
+ **Lazy initialization of the variational parameters.**  To compute gradients of the ELBO ([\[eqn:final-elbo\]](#eqn:final-elbo){reference-type="ref" reference="eqn:final-elbo"}), we leverage modern Python libraries for automatic differentiation. As discussed in section [3](#sec:inference){reference-type="ref" reference="sec:inference"}, the gradient of the ELBO only involves a finite set of active parameters; yet, this set can potentially reach any size during the optimization. Hence, the ELBO can depend on any of the variational parameters $(\lambda, \nu_{1:\infty})$, not all of which can be instantiated.
170
+
171
+ In libraries like *Tensorflow 1.0* [@tensorflow2015-whitepaper], the computational graph is defined and compiled in advance. This prevents the dynamic creation of variational parameters. In contrast, a library like PyTorch [@pytorch2019] uses a dynamic graph. With this capability, new parameters $\nu_\ell$ and layers $f_\ell$ can be created only when needed, and the computational graph is updated accordingly. Before each ELBO evaluation, we compute the support of the current $q(\ell; \lambda)$ and adjust the variational parameters. The full dynamic variational inference procedure is presented in Algorithm [\[alg:example\]](#alg:example){reference-type="ref" reference="alg:example"}.
172
+
173
+ :::: algorithm
174
+ ::: algorithmic
175
+ **Input:** data $X, Y$; architecture generators $f, o$. Initialize $\lambda$ and $hidden\_layers, output\_layers = [\,], [\,]$.
+ **Repeat** until convergence:
+ 1. Compute $m(q(\lambda))$.
+ 2. While fewer than $m(q(\lambda))$ layers exist, with $L$ the current count: add new layer $f(L+1)$ to $hidden\_layers$, add new layer $o(L+1)$ to $output\_layers$, and initialize $\nu_{L+1}$.
+ 3. Compute $\mathcal{L}(q)$ in a single forward pass.
+ 4. Compute gradients $\nabla_{\lambda, \nu_{1:m(q(\lambda))}} \mathcal{L}(q)$.
+ 5. Update $\lambda, \nu_{1:m(q(\lambda))}$.
176
+ :::
177
+ ::::
2210.15777/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-11-16T09:48:43.652Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.1 Safari/605.1.15" etag="ZbVNIxm_fvOtd5UGYQiB" version="15.7.4" type="google"><diagram id="d0xMO3s1I5wI6qW0vNrO" name="Page-1">7Vxbc9o4FP41zG4fYCzb8uURSNJ2J53upjvT9mnH2AK8MRYxppD8+pVtybYuBgOGkm78kNhHF6xzPp3v6EjQM8aL7fvEW84/4QBFPV0Ltj3jpqfrpg3J30zwXAh02y0EsyQMChGoBF/CF0SFGpWuwwCtuIopxlEaLnmhj+MY+Skn85IEb/hqUxzxn7r0ZvQTtUrwxfciJFX7GgbpvJA6ul3JP6BwNmefDCw6voXHKtOOV3MvwJuayLjtGeME47S4W2zHKMp0x/RStLtrKC1fLEFx2qZB/GiP4+HLj8mzN598fRg9fJiN+kbRyw8vWtMB05dNn5kGEryOA5R1ovWM0WYepujL0vOz0g0xOZHN00VEngC5neI4pUZ06OMYRzjJuzLMG9fVsl5WaYIfUa3EzS9SIo+KDvQHSlK0rYnoKN8jvEBp8kyq0FLDpk0o5CxATbKpDAgsp5DNa8aDdPAexcys7LpSK7mhmlVref33HGtGCofjcGIk4A4/JPO+Di+qZk2jap6GUcTkMY6RSvO3Q2tkWaQk8Fbz/PNBR2ZwNN4MtjOAkiEcKNtBN1jF7i2hnc8SwBJUTrQbQOQEpqR3aowOtGwaBq9lNsI62DWFkqF2LhWbkkZRQHwqfcRJOsczHHvRbSUdEee4zEqnEdoOM7dNZJUdMkVX7e4xXlLhvyhNn6n6vXWKedugbZh+o2bM7r9n9wRaxdPNtlZ088weYqKDb1XF7PF7vaxqlj+xdi1nJLOKnnfA+A4MLMBdRSkjHDjQDM2xTODaML+RcVb6z8NwtsLrxEc7jMlcZ+olM5TuqljUyyy9E7YJirw0/MGz6ykYvP9wH+KvHz+v9AV+XIR30/jzkL0NN8utKKWGycMFpjrraZ0x8KhSYk1UAzETZh30V7ndhqSCs9zWG1gz+j//sIkoCJngHnlJHMYzVkBGGYqViUzqgMiKETCxYqbdexMSg3EzwYvCWUzufQIARMY9yhxMSKKcIS1YhEGQT8QEkbF5k7y/DLBLHMZpbh446sGbfFqmxII4a9V3czInzjEbStFAxtpOLyF5ujKUoy/BRUsqD6gNNMe0OCdIO2qNNdr3n9lYq477gO/V5DvA0+mKTAkRquULHu9B2aS7GEl5yJn6ZyQpw9b3kpQFLshRwD6CozrjI3bfho9E06mIclPyhG7XL0cgGdsEVu2CPMtYA+i4ULOArTsGuYHwNOZoSRzmzyKOXW9dm3kPaJOEmdNsnoFg/wzsYhJpfDxt63I0bWvyJDLPFujtI9m95AmsXexZEuITk6zWJSGGNUasiffw6U7uJHZIeasp/aFiUdWeXlUw4V15B0jRgcMjxVS4WwVSjHMhRZVloBZdenGHSPE7Q0rxYv87pLDnn4WUn8rLFReXK74260SbWyiC9gvFnNCPZ1izJcNaXTNs3pQEH95zrQJdKDRG06Yt0peQrhTrA3BSfcvSBDQWb9xtpA7e8No5Xp3rigjltdf1m/jKLey+So8EATyp/kU8EkurnD8o/+0tKj8sUw/NgS0gyJCXcIYi3NLPtoQzjnBu583VH+PwSg9Xy6icnquv5UaI5XTu6vGZFWBosH71+Ez/XsTW90aPd7+wpftl6cTLM6xyD/r0PIKQhFdn980gV/ARyfo7hIKJ5z++wmS9gO1ym6iWxHcPydqzlfzJWfu+NiAm4VefLPt2Yt7e5VmXb36+rL28x//XUDL5pZOG0LyypKHVONmr6aaeZ6dosnWIe8jxBh68liLJdlEud34VzZrmlWlWtR33OjXrugN4XboFqgWLyJNxQOPMGz/yVqvQ5zXFh+qi3mrLYv2gdXGjwvfGTTVlQoUymexEnlMsKCxH2KQuYkbacFf2TNGXJdi8iCulvjrLpakOyZ1ll2D+tktwINXpPDJsS3GSj0VaF9knAM1xRMdg8d7AchjHGKYAFqgAi6naVDrfsU+2rXXhJG7JPEfnJ67qzMfexAJsyY+nEp8rLGpMATdtWU9aHZnmwBEWjGdmvXKCHXZUUdMsy/dbJDP+eiBdPKCNlwSnpCquwz0JibsuqM0Wjk8DQ/JVF90AZ+FvIx6a7Dz8de3clKDtgq0Uce8FIaA8vKxYAJ2fq66Vc87NJaVpyy0Zwf+35RLpeJ7YUXdEokTNMTs5b6jpCjXwyAhEQo3YUXeo+efb07D/+ePH0frT5OXl0x/v1zjZ/0WJjr4AMcYxccqrYs8hSybR8qQ8uUd6zbiHjDwmDKY9rdGqqPzayawDltI1YU/DtSWKcnWZovQO1lNK2DQHKaWRinPTKYr3GvMqrdZ9qGkKG1O2KZ+1dMr87EUijbfc67EEQCiIxI1aeQkJsmPTsHu6vXBGVm9Osu1am/p+9uotSGEYBGHmFrwod/sx25N+9S6f+yr15RYzhm3zHkYDMk0Y8pxgRxw7P9nRnNGfKEMLCqcsrEhmk98BNPJYYZxla/n7d9lDftZDFY7QouzMxiy3TF/oWc/CLNqbcP+uRU74AYXxFJMZvUA5FoVYpv05katEr/RtwYYfcZC+CNqAVwWqmyMdW9j7teT9yY4W4+Sx+vGNwmVWv2Bi3P4H</diagram></mxfile>
2210.15777/main_diagram/main_diagram.pdf ADDED
Binary file (48 kB). View file
 
2210.15777/paper_text/intro_method.md ADDED
@@ -0,0 +1,114 @@
1
+ # Introduction
2
+
3
+ Interacting through conversations is a natural information-seeking procedure for humans; it is therefore important for AI assistants like Apple Siri and Amazon Alexa to enable and improve such experiences. In recent years, Conversational Question Answering (CQA), where a user asks a series of related questions and ideally obtains answers that leverage the conversational context, has gained more attention.
4
+
5
+ Different from widely-studied question answering (QA) tasks that happen in single-turn [\(Ra](#page-7-0)[jpurkar et al.,](#page-7-0) [2016,](#page-7-0) [2018;](#page-7-1) [Tay et al.,](#page-8-0) [2018;](#page-8-0) [Tang](#page-8-1) [et al.,](#page-8-1) [2019\)](#page-8-1), the interpretation of conversational questions in CQA depends on questions and answers from previous turns.
6
+
7
+ Previous approaches to CQA usually train new models from scratch, which can achieve promising
8
+
9
+ ![](_page_0_Figure_11.jpeg)
10
+
11
+ Figure 1: A conversational question rewriting example.
12
+
13
+ results but are also expensive in terms of obtaining domain-specific training data. In industry settings, there are many single-turn QA models deployed. Training new CQA models with additional annotations to replace each existing single-turn QA model is expensive, and generally not feasible. Moreover, discarding existing single-turn models and datasets is impractical, and studying how to reuse these existing resources to tackle CQA merits attention.
14
+
15
+ Existing approaches to this task, called Conversational Question Rewriting (CQR), often train sequence-to-sequence models supervised by human rewrites to generate self-contained questions [\(Ren et al.,](#page-7-2) [2018;](#page-7-2) [Vakulenko et al.,](#page-8-2) [2021\)](#page-8-2). Such methods have several limitations. First, the CQR training objective is disconnected from CQA performance. The annotation process of existing rewriting datasets has no knowledge of the QA systems, and more human-like rewrites do not guarantee better CQA performance. Second, the rewriting model does not take into account the feedback from downstream QA systems. In industry settings, multiple single-turn QA systems trained with different datasets serve in the backend. It is impractical to replace them with new CQA models, and we argue that their output can still be used as signals to help train rewriting models.
16
+
17
+ To overcome these limitations, we propose an effective CQR approach building upon the recent success of Reinforcement Learning (RL) techniques for text generation (Rennie et al., 2017). RL enables flexible ways to incorporate training objectives in the form of reward functions. We systematically analyze different rewards and their effectiveness in terms of final QA performance, as well as the quality of the question rewrites (i.e. the question still has to be understandable and interpretable by humans). To optimize QA performance, we propose various QA rewards to measure the likelihood of a question yielding a better answer. In comparison with the QA rewards, we also propose to use the same RL approach with question rewriting (QR) rewards reflecting the similarity between a model-generated question and the human ground-truth.
18
+
19
+ We summarize our contributions as follows:
20
+
21
+ - To the best of our knowledge, we are the first to study how to incorporate QA signals to improve CQR using RL.
22
+ - We systematically propose and compare using different training signals as rewards.
23
+ - We conduct experiments on two CQA tasks to show our approach is effective.
24
+ - A user study shows that our method can generate more accurate and detailed rewrites when compared to human annotations.
25
+
26
+ # Method
27
+
28
+ In CQA, each conversation contains a sequence of (question, answer) pairs $D = \{q_1, a_1, ..., q_n, a_n\}$, where $a_i$ is the answer for question $q_i$. A conversational question $q_i$ can be ambiguous, and its interpretation depends on the conversational context $c_i = \{q_1, a_1, ..., q_{i-1}, a_{i-1}\}$. The goal of CQR for QA is to learn a model $\mathcal{R}_{\theta}$, parameterized by $\theta$, that can translate $q_i$ associated with $c_i$ into $q_i'$, so that the semantic meaning of $q_i'$ is equivalent to that of $q_i$.
29
+
30
+ $$q_i' = \mathcal{R}_{\theta}(q_i, c_i) \tag{1}$$
31
+
32
+ A pretrained single-turn QA model is expected to answer $q_i'$ better than $q_i$. Note that the QA model can be trained on a single-turn dataset different from $D$ and is fixed when training the rewriter. The motivation is to explore whether already-deployed single-turn QA models can be exploited to train a rewriter, and then reused without further training, simply by accepting the rewritten questions.
33
+
34
+ We show our CQR approach with a modularized design in Figure 2. There are two major components: a CQR model $\mathcal{R}_{\theta}$ as introduced in Section
35
+
36
+ <span id="page-2-0"></span>![](_page_2_Figure_0.jpeg)
37
+
38
+ Figure 2: Overview of our CQR approach. $h_i$ is human rewriting of $q_i$ and $a_i$ is the ground-truth answer of $q_i$ .
39
+
40
+ 3 and a reward function $\mathcal{F}$ that evaluates rewrite $q_i'$ generated by $\mathcal{R}_{\theta}$ by producing a reward score. Then the CQR training can be formulated as a reinforcement training problem, where the objective is to maximize an expected reward or equivalently minimize the following loss function:
41
+
42
+ $$\mathcal{L}_{rl}(\theta) = -\mathbb{E}_{q_i' \sim \mathcal{R}_{\theta}(q_i, c_i), \ q_i \sim \mathcal{T}}(\mathcal{F}(q_i')) \ , \quad (2)$$
43
+
44
+ where $q_i$ comes from data distribution $\mathcal{T}$ . During training, we push $\mathcal{R}_{\theta}$ to generate $q_i'$ that achieves a higher reward by minimizing Equation 2. Hereinafter, we omit $\theta$ from $\mathcal{R}_{\theta}$ for simplicity.
45
+
46
+ We define two types of rewards: QR rewards evaluate how similar a question rewrite is to the ground truth one produced by human annotators; QA rewards evaluate how well a QA model can answer a question rewrite. We summarize the characteristics of different rewards in Table 1. By maximizing one of the QR or QA rewards, we can explicitly optimize the model to achieve the QR or QA target. Next, we describe the two types of rewards.
47
+
48
+ <span id="page-2-2"></span>
49
+
50
+ | Reward | ROUGE | F1 | Confidence | BM25 |
51
+ |--------------------------------|-------|------------|------------|-----------|
52
+ | Reward Type | QR | QA | QA | QA |
53
+ | CQA Type | - | Extractive | Extractive | Retrieval |
54
+ | <b>Need Annotated Rewrites</b> | Y | N | N | N |
55
+ | Need Annotated Answers | N | Y | N | N |
56
+
57
+ Table 1: Characteristics of different rewards.
58
+
59
+ The rationale of maximizing QR rewards is similar to the aims of prior work: a good question rewrite should be similar to a human rewrite. We use the ROUGE-L score (Lin, 2004) between the question rewrite $q_i'$ and the ground-truth $h_i$ as the QR reward:
60
+
61
+ $$\mathcal{F}(q_i', h_i) = ROUGE_L(q_i', h_i) \tag{3}$$
62
+
63
+ <span id="page-2-3"></span>This reward has been widely used by RL methods for language generation tasks. Note that Eq. 3 does not depend on the QA model and prior work can be considered as maximizing QR rewards.
64
+
65
+ We define QA rewards that reflect how well the question rewrites can help a QA model obtain better answers. Since QA rewards are task/model-dependent, we introduce QA rewards for the following two sub-types.
66
+
67
+ **Extractive CQA** is a machine reading comprehension (MRC) task and an extractive QA model $\mathcal{M}$ extracts the most likely span answer given a question q and an evidence document p:
68
+
69
+ $$a_s = \arg\max_{a_s} P_{\mathcal{M}}(a_s|q, p) \tag{4}$$
70
+
71
+ <span id="page-2-1"></span>We assume that $\mathcal{M}$ is trained on regular singleturn QA data, and expects the input question qto be self-contained. Therefore, CQA questions should be rewritten by $\mathcal{R}$ before being sent to $\mathcal{M}$ . Next, we introduce supervised and unsupervised QA rewards.
72
+
73
+ **Supervised QA rewards.** A straightforward way to measure the quality of a question rewrite $q'_i$ in terms of QA is to calculate the similarity between the predicted answer by $\mathcal{M}$ with $q'_i$ as input and the ground-truth answer $a_i$ . We denote $a'_s$ as the extracted answer span by $\mathcal{M}$ using the rewritten question $q'_i$ as input. We measure the overlap between $a'_s$ and $a_i$ by F1 score:
74
+
75
+ <span id="page-2-4"></span>
76
+ $$\mathcal{F}(q_i', a_i) = F1(\arg\max_{a_s'} P_{\mathcal{M}}(a_s'|q_i', p), a_i) \quad (5)$$
77
+
78
+ Intuitively, the rewrite $q'_i$ is better if $a'_s$ is closer to the ground-truth answer. Compared with Equation 3, Equation 5 depends on the ground-truth answers instead of human rewrites.
79
+
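+ For illustration, the token-level F1 in Equation 5 can be computed with the usual SQuAD-style overlap; this is a sketch, and answer normalization (lowercasing, punctuation stripping) is omitted.
+
+ ```python
+ from collections import Counter
+
+ def f1_reward(predicted: str, gold: str) -> float:
+     """Token-overlap F1 between a predicted span and the ground-truth answer."""
+     pred_toks, gold_toks = predicted.split(), gold.split()
+     common = Counter(pred_toks) & Counter(gold_toks)  # multiset intersection
+     num_same = sum(common.values())
+     if num_same == 0:
+         return 0.0
+     precision = num_same / len(pred_toks)
+     recall = num_same / len(gold_toks)
+     return 2 * precision * recall / (precision + recall)
+ ```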
80
+ **Unsupervised QA rewards.** For a predicted span $a'_s$ , $\mathcal{M}$ assigns a probability $r_c = P_{\mathcal{M}}(a'_s|q'_i,p)$ that reflects the model's confidence about the answer. We assume that a higher confidence score of an answer indicates that the QA model has a better question understanding. Therefore, we directly use the probability of the most likely answer as the confidence reward for a question rewrite:
81
+
82
+ <span id="page-2-5"></span>
83
+ $$\mathcal{F}(q_i') = \max P_{\mathcal{M}}(a_s'|q_i', p) \tag{6}$$
84
+
85
+ F1 rewards can be considered as judgment scores on predicted answers by humans, since the ground-truth answers are used, while confidence rewards represent the model's self-judgments.
86
+
87
+ We also evaluate our method's generalization on a different **retrieval CQA** task, where the goal is to return a list of documents in descending order of relevance scores produced by a retrieval CQA model:
88
+
89
+ $$rel = \mathcal{M}(q, p)$$
90
+ (7)
91
+
92
+ where p is a document. A retrieval CQA model usually consists of two stages. In the first stage, a lightweight ranking algorithm such as BM25 is used to retrieve top-k candidate documents. In the second stage, a more complex model such as BERT (Devlin et al., 2019) is used to rerank candidate documents. Here, we use the BM25 score between a question and a document, which is a type of QA reward that does not use annotated answers:
93
+
94
+ $$\mathcal{F}(q_i') = BM25(q_i', p) \tag{8}$$
95
+
96
+ <span id="page-3-0"></span>We expect the rewrite $q_i'$ can retrieve documents with higher BM25 scores in the first stage than $q_i$ so that the performance in the re-ranking stage can also be improved.
97
+
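+ As a sketch of Equation 8, the reward can be computed with any standard BM25 implementation; here we assume the third-party `rank_bm25` package, and scoring against a set of candidates and taking the maximum is our illustrative choice.
+
+ ```python
+ from rank_bm25 import BM25Okapi
+
+ def bm25_reward(rewrite: str, candidate_docs: list) -> float:
+     """BM25 reward: score the rewritten question against candidate documents."""
+     bm25 = BM25Okapi([doc.split() for doc in candidate_docs])  # whitespace tokens
+     scores = bm25.get_scores(rewrite.split())  # one BM25 score per document
+     return float(max(scores))
+ ```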
98
+ There are two steps in our training framework. The first step, the pre-training step, has the same supervised target as prior work. The objective is to minimize the cross-entropy loss between the model's prediction $q'$ and human ground-truth rewrites $h$:
99
+
100
+ <span id="page-3-1"></span>
101
+ $$\mathcal{L}_{sup} = -y_h \log y_{q'} , \qquad (9)$$
102
+
103
+ where $y_h$ is the one-hot vector of h and $y_{q'}$ is the distribution over tokens in q' predicted by the model. Supervised pre-training ensures the model has the basic ability to rewrite the original question given the conversational context.
104
+
105
+ The second step continues training $\mathcal{R}$ with RL to maximize different rewards. In this work, we use Self-Critical Sequence Training (SCST) (Rennie et al., 2017). Given a question q, we generate two question rewrites $q^s$ and q'. $q^s$ is generated by sampling the word distribution from $\mathcal{R}$ at each step, and q' is generated by $\mathcal{R}$ using greedy decoding. Then we minimize the following loss function:
106
+
107
+ $$\mathcal{L}_{rl} = (r' - r^s) \sum_{t=1}^{N} \log P_{\mathcal{R}}(w_t^s | w_{1:t-1}^s, q, c)$$
108
+ (10)
109
+
110
+ Here, $P_{\mathcal{R}}(\cdot)$, which is defined by $\mathcal{R}$, is the probability of generating the $t$-th word conditioned on the previously generated tokens of $q^s$, the original question $q$, and the conversational history $c$. Intuitively, minimizing $\mathcal{L}_{rl}$ increases the likelihood of $q^s$ if it obtains a higher reward than $q'$ (i.e. $r^s > r'$), and thus maximizes the expected total reward. Given a reward function, we can obtain $r' = \mathcal{F}(q')$ ($\mathcal{F}$ can be one of Equations 3, 5, 6, or 8) and $r^s = \mathcal{F}(q^s)$.
111
+
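+ In code, the self-critical update is compact. A minimal sketch of Equation 10, assuming the per-token log-probabilities of the sampled rewrite and both scalar rewards are already computed:
+
+ ```python
+ import torch
+
+ def scst_loss(sampled_logprobs: torch.Tensor, r_sampled: float, r_greedy: float):
+     """L_rl = (r' - r^s) * sum_t log P(w_t^s | w_{1:t-1}^s, q, c)."""
+     # When r^s > r', the coefficient is negative, so minimizing this loss
+     # raises the likelihood of the higher-reward sampled sequence q^s.
+     return (r_greedy - r_sampled) * sampled_logprobs.sum()
+ ```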
112
+ We only choose one of the reward functions to obtain the reward for a question. We leave the combination of different rewards as future work. Additional training procedure details are described in Appendix A.
113
+
114
+ Similar to Vakulenko et al. (2021), we experiment with CANARD (Elgohary et al., 2019) for extractive CQA and CAsT-19 (Dalton et al., 2020) for retrieval CQA. As CAsT-19 is small compared to CANARD, prior work (Vakulenko et al., 2021) uses the same model trained on CANARD to evaluate the rewriting performance on the test set of CAsT-19. Similarly, we start with the model trained on CANARD and continue RL training with the BM25 reward on the training set, without using any human annotations provided by CAsT-19.
2211.13775/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-11-08T15:31:09.402Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" etag="ZOaj9tL9WsGCU0DOVrYt" version="20.5.3" type="device"><diagram id="Iz5aUu7_yH7bntWBcaTn" name="Page-1">7X1Zk6K+9/er6arnubCLLSyXgLijgjs3XQgICAgCivrq/0HBpdVue7qdmd98SRVtZyEkJ5+cJeSEF5z3NtVQDSzR1w33BUP0zQtefsEwDCEJBP6mSdtDEkDoLMUMbf2Qhp4SevbOyBLzYitbN6KLgrHvu7EdXCZq/mJhaPFFmhqGfnJZbOa7l08NVNO4SuhpqnudOrL12Mp7lrcvzagZtmlljwYgy/DUvHCWEFmq7idnSbjwgvOh78eH/7wNb7gp+XK6HO6r3Mk9Niw0FvEjN/SVZedtHu1I3TEi9I1n6LhWIg+1rFV3lXU4a2y8zSlghv4qyIoZYWxsbtFdnebFket2ocfeQqAYvmfE4RYWySoqoSTC0Ie7MpSUcDonY3JGdJTKEq1zguNZH9RspM3jE07EgP9k9LhNmzWxINAR25qtqxHYYcAKwmmJoIkvUee8yx8S+1Eq3iUZijDYBcFQDH/FCILJA81cUQ/H6GviYQzyiuJPox/4nH7GQmfTaQpjmqtGka294JwVe/BJZRT+C8m70I30MQiMRXHoO8eJSB5TeN/1w32FOLIPMGfmL+KzdJ7P0q8H6+PRN/QLDnE9KGdEBrcAmqWFhqvG9vqSr9wievaErm/DBh6HGJCvJKAIgKMogeAICS4riPxVqBnZPedcIK8mgw6JXgDnk1pjNTSN+KrWPRyOFPgOQh7gPwVCPkNILm5J6v3YMhQFMALHEIAxBP5riDkyjseqfTpkqM8hA0VtkP4bQ63E2PlphVxghDZ8uBGep3dPiVxi2bHRC1QtvTOBJS5hNrM3Rq6fHGAXw8HyFzDK7IF3yMKpV7Av7boX0EoDTDdDVbeNE+wW/sK4wqhuzNSVGz+A7a8i9Ur03BUxAEDBgBwDesk2qFeaRgCBIiSDIgjOoFfiBkVTQlxBnaFfMQQnSUDiCI5RDLgP8m/KHuavg0npX8QJCsBfi4y39pbeCv2mSCVl0LNXm8EKL+GP48L29sYBl1LDhiZBS50abteP7GxAp34c+x4s4KYZnKo55n4grscnr4F1bTO9M/ZT2OzrZ6PgYLcgeUp6pxqrLzh7iGKVeWCYL1jaX2YO/xAsK/UcpSGbLMdKAuuwvMSy5TRHLpvy2AoUk2X7A4Rtwl+WTYsJFMvW00gNXvWerwewkjTOpfFBsNVZVkjjHXhJ25WvsayW3iumid19PaxGZvUVoQhFKEIRilCEIhShCH91qLSRKbYJ1CSNjCQ2Yfl9MryEJi9BJfgUb/V6jnSIszsJhVk9xzyPl5e1TI8uwz98Y69Hs7WOn/7Uf2u/ilCEIhShCEUoQhGKUISvh61Yrm87cyGBF6iXBdCZS5vOnMXYcj0R53VS7LPwckx4ke2yQIrlCbyN67M1iZ0mXMRWHXavW3dfsIrFb1xfr8lJx6bX04W4mozk9cQbrCYYE7dwy9J4etOas2u2nMDiRo2zJljs6jxnKyM9mM6RBUWtYE5jywVKGVkMd8O2KKCJhA19dWCRmjfsGw4YTPAgnOxcp7GjXzCuLPm0UQ6I6ZiLlbFs1WtWPK2CXWdhMnWv4kyxhtvxYNxDXb0qrOtzgtY82eu4DUG2OWI62qy0XbBVqhNY3kL0Gku2tgyu49pK34mrKd5YtHaQNn1nDem1hnSJ0/tb4zaYLmTL4NGVthXt7nyT1kHK1eFugjcCrSYHU4ww98/yKvYUHyLdXgPVR67T8RRLmQ9tpSqhrVHDEjEpFvsVZ9JDLaUsblojaQfT8HZfIERvspnYddOoohGkKqlUxfN2Jvq4EbU8ZqtsGWeyZdxpdRgrI4Bo25TasL92SqVureEo80BQxu255rmJXnXXU5vbt3gyaqz1scTUF0NUHUmk4lW89giWLw/iiSfsFB7ZiCPZao0qjrgTYtjCnVLWULFvbpV5xa6fqAhHgT1vHaQMsJQqYys9MJ+mL5iqlQRScyXydEqxtDwp4Q13MpZduVpBlD6xR2WrLyTwAsMzVLbmEtLuJ7AONOk47a0yguUHsqVXB2kPeeYW/X9f353JDrHVmoxoZX/dwjlU85IDejB53sKGkTJCXYiY3b7/vX3/4T0BbNMQ9oHZ5ggaYMPNCcVp28/r1XF9C3BxC9aap63hzASdHp2INg0xiKb3xxrurvRqhWiNwK6+rZ8oEKl90IdPitRRRLfwPI3I/+8rY+Ewoxq85NBpuzvegc5TvP4xjbOcgSNXu3M4RmczsmsizTrPmn/8Yv0/3wZ48fafbwO8ijEpxqQYk2JMijEpxqQYk2JMijE5XHQXt5LJqB0qY8mElpTT7en4wdZMd2JFrJSwNU4aVDlJqnHmoMqzWo1nfZlPUr3ZH5Y3rCjI5rBqOZOKbI6rG2JRlyO9ISPjKl9f1e2B3+xFUXPjm62duG7O64umHS3z2gVYm9mENQ34JBo0YWNqvL9s2t+7YO00ywqSyTcSjmETwWbZusxKoszOYJ7AcjCbFSqsJHD76xQ41hQFToLNYJ1muh1tX741kGANHJuIPMeKFUiNdCMZN5MEnk1YQZAGvXq/wnFWW55Imq3J0tZUqhbPt/lIE4aDuJY0ym1b8+sN3pqMq5yjNnqTRZN3lu3yAGvvTACtPEEeKLWe4DYHqDwcuvpkNAzmSnXoKZgVqIs2oo8V3Ki5pEn0KpZv1O3JsueA3tANjImnLG2f7LnB0vCX6nIbUT00Dg1iNQ35hO5XNtGsvtXCHsL0h2g8m2B6aBGrrgOM6YLsLJb0rLVjumOMSTfjAQiAdUYpSB2RO1Jq3/EPKSXvKWXCggnEzJ5SAq8n+/Iyxw4sUZr0pKFQm4yFfn3e5HhO5h21ukkkZy6iraQyamwRyR81yn2rMRMO1IK0a/Z8FFq1RKcflGVhWJURq9Fz24PhUBmPKq46wWRX8XRfHQU7vTbEdNwCht8sW/VR1SLshh00+25jNHKBrXjL5jxojryAtINl2NzFrREWUzZYRa3yRoQwpueNbdzqo+3xCGXmCrZqzUFn7KWEmQfk+gatVJZNyqzE8kKKszLNMmMfBrYJA8vbtm2yRbyIF/EiXsT/h+MHScc46Ub657odkK+Xbkk49UoDQDIEQAmEQWiSuPY8wK/dDnD69Qf8H2+6ez7g/fh939qPCHrfuYe5IB0KmFcKgJOzzxXlju7Y56Qj6ScR7gG329xFY+Yam8w7kLvpKBjFahh/wX8w9QnM3HeIa18a/Ajqdz5/D3j/gk99A1HkBpWPib/qHfhVf7/jWP+8R99NuqA3Rpvce82kY5H67
B98rWDqcpU6y3O87wWr1BsLQ9IjB8LFKesCJXliWk/p4HcFGRRCIMFmX/R0E2lmv/vnTvOESurMlafCzk3fl4Rph0bmye9gCmd0fIk2NfMA0iBY9v5k712DPFvX09u50IBNzuZ+isQgHYD9kADuBaTsVV3FfnRyNbt0Dss8xs4dzLKku/zmVyGc1UJceoLR1/wXu8VFkPtI/qYLIP0ZsqL8uIkcCegFkrSjY94pMfcePi93BdC+7RkRzGgbCfwr+566+BBtn2MWJ+5gloUYMBZaOg3+OaC+58U/5a/9BS9Y+pU5CyR+gXAMeUXPxCb2oNh8GuCpr53m8fvOq8Ap5PWSOeDXOgbI9Y4L9Qx/HrUecCS/Q61PyP9tet04/IS4xTufSZ37zHN6ujnnRrqvRSU7ZRQL1S2lZ/aUGIqkwIymS9RsNisZJIGVNFozSgZDMySlAUzD6UfkdpL1l134oae6HwruKEgZ7efcFEWDnHWnGTPVs93tIYsN7ewhOftPU0Nz+v+QvQcukv/8/0Oh6dHrt3RWPg7VRZQj5vicY1f2SghylpO1OM046+Yhb63CJi3i69yUZ5d0Q/PDzMP8UCCT8UjOr0sZJ2f3rY0M185KfJmKUz9MxUw6FPsqdDsKXDUlnL1Iay1NXV+Dph/nw2fP3FTlZi0oJYxF6jd/wDKL0wdRluOYxWj6E33M9syb7fGgYmovSq4xS+s5aHVZ2t7Bmr1W9M6nT56K0ZdAPM26PBFPJcc5VEPtIt+K4/Q4qsz6dS3y1fR90zVWkRHCqR9DGLxqvgfzQB/vhrIvG/6bMTB2a4iRQMAq7Y5HMe1Jw/O05kRZ7oKQndd4KxxNcVT1EJ8bC+VlUHKrehwE/W5zSNoeIY7Hmy2QVt1OoCF+LA99cy507GpjN5X75VhRGlxn2V4PXCok3hTR2tltbhVVQ+qtDdmAvlrHerDtNOb8qlfVa60qsl6g605rvZI2TGPtNMhu19KqEYHNwpqQ3B8e2K0DZB5Nnn6mhnx2xsINvaCSzeHyLd3r+6rDUVh8rjp8yrSx5zHtW4defDKp91zkjEUc9b37A55pz4+qx9fP/Fh7/tC8ywyz0FiuoNq7vavf3sHgP4i3PBe91LCOeua5AcbcwCP1PAMM+VCH+Eks/gqIeunBG6Ga9qMLybwKpwc5+imk/m3+leUy5AWcSPBKgWsDhyB/K6BurRW9G4K/97iw3D7408eF5au+BPNKk9A2Q1GcQDAEvFvOfXS9kL48GiqtlqLPjebLap9+PhhWYOTHjpQ7KjDHNwXIKwFOmMGoX8PM0dY/qxenzjDze88hBOj9RejHrN0phICqY7ODtYuTBCjRDKKWgEaAqa4h0OYlC2v3v2ftEgz1ztolwR+2dtO59aG1C9v8JWsXv2vt6uu1QFFdoTHU4kas0lt6ElQ4CSSkGFR6AT4GsW4FpCzRaNdxeqimaOYg6eLlxoLuaPPyeLsptTadCsp7LRHv+aUFu0vWtjXGkHndt9/aA4PnOu0RJY97jqDRWqWNg7q62HSXpjVFGo4jVbu7oF9GvImtC5JqDrsVTypH/R7uOjSKT9arbW2H2FGjzSTxVkDo++PyR8zcWzLmWpZ8afX2vgWBkdilzne9og3wG8IGY56l8QH0YxPic8ZsINiMQabGgTETGEmUSIYhSghqqPiUVgFeLEMWjPnfZMz3lyFHpQCpVHTepURVDEhQd5Jtm17ynflizY+XqldzulumbzXX/QZOh7pi6j1MJDt4P6nIlOyheEA7zYHWH4flmG/5sxCYnFru+oa1IZrlrTFrsh1Rl0pNbb3ZIhGtDv3W1B6ixtyA9bUnb8RCRP1ym9mxzdJmXa/1gV2P5/32TMBRb0hbegfXBUpkEotF+4kpOf9hxkwB/G/jy8h3Xw9NVdrAUka858soDfky0AyqpBlTyJnTk+wLvlzw5f8aX55XIg1dmM44kal21fQFwx30liLWnKijZnPsEhI643yns6Vrk8nQcndI02jWSZHjK46vb6mYZKg23+QquCPotESPqSoRgcAdtnsLMOS6RIkfCYjaRkMrVnp63Km21+TQmSa8XB9KXerNHm1UpFTxeNaBhKi1Sqgso4ZDzzU0MGctfQPExo42lEXFUHueh1nIf5Ev/zV8mGAeWBH9I1s+MPxyjymefyfgjFhU+ikBhsxDfrL/xe5H5mlfJ8k/ofLhQqFuGr0sGhmmB4knnJJu7zi9uY54ve3p/arhAy8AsncGKdnvLjReL1bqamQd2/DoOmSOqz+9Drm9ecPDW1nTU/axE8CIy3ca6Pt67ywzwjFWt2fFsg1un7T2qlHvnnoC76H+n10GZx44575A99+xyn6HD355UR1lXgnqtIv/cjEHfb/t4Ltg/7Dx99p4p6vv2vjkqfHrn/X6RAj/hNR8BVD5BzRC46lRdMmw8k+dnb+BpW/gEgXvIPSDUvMBr5bcOeOgtOn+6tD7a57xiT53/GJeOvXvfGPjnDsciuytkM+ZB/F1zkR8wJnefcilsg8vD37I5VfeFDJf3syMvCIIjtEUAQhAIfAvgV1/Jg6j6FdYKv0SC4pjFNTbfi+8bm1hvtyz/3PuHT9Ws2yYKxfa4buLDSTTMM9v+VGUYusLG0v+kW374PFZ9pjl9u3JkIsaGjJaBGVomgaQ3wJwbZ7cZq3P273H/PqG9Cd/bhLkPmgfGHP4TWo90Xy7tQL5LSl+b4vU17fvo++E+Dvfh79Ain/h62iFFH+mFKfvgO5/WYof2cX/mhSH8xwO6Uq7L8ULGf48Gf7lqVBCXt99OPrvk+rHnW7FQswHMvdPL8R8damFeHeAwg+vrdxfOvwN6yMgd2QtEPsXIzb3rKXfIe/hrzrjv2d5MG/g3XbQyM12PBniD6yO/xlTK3eeOPfaoW6JrKe5kUPT5a72dk/puukKpmnvXcE+0rO+4W7z96pOv+bd81XV6QTnL3iL5a8/P1ONnoizW0vJ73DW33OhAgzPBQNJ3OA6xO8Fw/2F3+OQ9/ayrQDDc8DwB8f+1sLnDwocVofdj9TwAW/hAlE/z14o7IYVfuuos+chLH8X/svbVRFURwmNJDM3AoSelhh8xpQwlUaoKY5oOjIrtqsW21X/xe2q4O521XWdsRwOZWvj9Rs7bo/69LapoG+N7lrZTDr1ib9alZo7+a3eitsju92MKiPUl7FN+GbwnDsTxyiK7BZOVyZNfD2TRA9ZVbWRTtXQt1ptvlDLK7Qdoo6DNavduj6BNreujRvVpkrPZHZqTAa7IVWbi2rMyGXak1RyyRkdiUJ8t2uGi2DE4t2Ji4Nqm0crVM1fsoTwX9yueqXm/Yb9qzAa+imdT3Y7JICVnsWQlvg/</diagram></mxfile>
2211.13775/main_diagram/main_diagram.pdf ADDED
Binary file (39.3 kB). View file
 
2211.13775/paper_text/intro_method.md ADDED
1
+ # Introduction
2
+
3
+ A triangular mesh is the primary representation of 3D shapes, with applications in many safety-critical realms. In the medical field, incorrect perception of the geometric subtleties of an organ can lead to life-threatening errors. In robotics and automotive, a precise understanding of the geometry of obstacles is essential to prevent accidents. The security of facial modeling is also dependent on the accuracy of the processed geometry of the mesh.
4
+
5
+ Autoencoders (AEs) are one of the most prominent deep-learning tools to process the mesh's geometry. They are designed to capture geometric features which enable
6
+
7
+ ![](_page_0_Picture_16.jpeg)
8
+
9
+ Figure 1. A result of our geometric mesh attack. A mesh of a *sphere* (top left) is perturbed into an adversarial example (top right). While the original mesh is accurately reconstructed by an *autoencoder* (AE) (bottom left), our attack fools the AE and changes the output geometry to a *cube*! (bottom right).
10
+
11
+ dimensionality reduction for both storage and communication purposes [6, 4]. Mesh AEs are also used for segmentation, self-supervised learning, and denoising tasks [16, 19, 7].
12
+
13
+ Despite their tremendous achievements, neural networks are often found vulnerable to adversarial attacks. These attacks craft inputs that impair the victim network's behavior. Adversarial attacks were extensively studied in recent years, focusing especially on the *semantic* level, where the input to a classifier is carefully modified in an imperceivable manner to mislead the network to an incorrect prediction. Semantic adversarial attacks are abundant in the case of 2D images [8, 20, 3], and recently, semantic attacks on 3D representations have also drawn much attention, both on point clouds [29, 10, 28] and meshes [30, 14, 23, 1].
14
+
15
+ Nonetheless, the vulnerabilities of networks that process geometric attributes, such as AEs, have not been thoroughly investigated. AEs may be imperative to many practical mesh deployments and their credibility and robustness depend on the study of *geometric* adversarial attacks.
16
+
17
+ We propose a framework of a geometric adversarial attack on 3D meshes. Our attack, named SAGA, is exemplified in Figure 1. The input mesh of the sphere is perturbed and fed into an AE that reconstructs a *geometrically different* output, *i.e.*, a cube! Ideally, the deformation of the input should be unapparent and yet effectively modify the output geometry.
+
+ <sup>1</sup><https://github.com/StolikTomer/SAGA>
+
+ <sup>\*</sup>Equal contribution
24
+
25
+ In our attack, we aim to reconstruct the geometry of a *specific target* mesh by perturbing a clean *source* mesh into a malicious input. We present a white-box setting, where we have access to the AE and we optimize the attack according to its output. A black-box framework is also explored by transferring the adversarial examples to other unseen AEs.
26
+
27
+ Mesh perturbations include shifts of vertices that affect their adjacent edges and faces and possibly result in noticeable topological disorders, such as self-intersections. Therefore, concealed perturbations must address the inherent topological constraints of the mesh. To cope with the fragility of the mesh surface, we apply the perturbations in the spectral domain defined by the eigenvectors of the Laplace-Beltrami operator (LBO) [5]. Particularly, we facilitate an accelerated attack by operating in a *shared* spectral coordinate system for all shapes in the dataset. The source's distortions are retained by using low-frequency perturbations and additional mesh-related regularizations.
28
+
29
+ The attack is tested on datasets of human faces [24] and animals [32]. We evaluate SAGA using geometric and semantic metrics. Geometrically, we measure the similarity between shapes by comparing the mean curvature of matching vertices. Semantically, we use a classifier to predict the labels of the adversarial reconstructions, and a detector network to demonstrate the difficulty of identifying the adversarial shapes. We also conduct a thorough analysis of the attack and a comprehensive ablation study.
30
+
31
+ To summarize, we are the first to propose a *geometric* adversarial attack on 3D meshes. Our method is based on low-frequency spectral perturbations and regularizations of mesh attributes. Using these, SAGA crafts adversarial examples that change an AE's output into a different geometric shape.
32
+
33
+ # Method
34
+
35
+ We attack an autoencoder (AE) trained on a collection of shapes from several semantic classes. In each attack, we use a single source-target pair, where the source and target shapes are selected from different classes. Our goal is to find a perturbed version of the source, with minimal distortion, that misleads the AE to reconstruct the target. Ideally, the source's perturbations should be invisible while still altering the AE's output to the geometry of the target shape.
36
+
37
+ Given an attack setup of a source shape and a target class, we choose, as a pre-processing step, the nearest neighbor shape from the target class in the sense of a Euclidean norm of the difference between matching vertices. Since the AE is sensitive to the geometry of its input, selecting a target that is geometrically similar to the source benefits the attack and reduces the potential magnitude of the perturbation.
38
+
39
+ In the upcoming subsections, we present a preliminary spectral analysis followed by a description of the spectral domain in which the attack is performed. Then, we define the problem statement and elaborate on the perturbation parameters, the loss function, and the evaluation metrics.
40
+
41
+ **Manifolds.** A geometric shape can be described as a 2D Riemannian manifold $\mathcal{X}$ embedded in the 3D Euclidean space $\mathbb{R}^3$ [17]. Let $\Delta_{\mathcal{X}}$ be the Laplace-Beltrami operator (LBO) of the manifold $\mathcal{X}$ , which is a generalization of the Laplacian operator to the curved surface. The LBO admits an eigendecomposition of the shape into a set of discrete eigenvalues $\{\lambda_i\}$ , known as the spectrum of the shape, and a set of eigenfunctions $\{\phi_i\}$ , as follows:
42
+
43
+ $$\Delta_{\mathcal{X}}\phi_i = \lambda_i \phi_i. \tag{1}$$
44
+
45
+ The eigenfunctions $\{\phi_i\}: \mathcal{X} \to \mathbb{R}$ form an orthogonal spectral basis of scalar functions. Thus, the Euclidean embedding values of the manifold in the x,y,z axes can be represented as three linear combinations of the spectral basis using a set of corresponding spectral coefficients $\{\alpha_{i,x}\}, \{\alpha_{i,y}\}, \{\alpha_{i,z}\}.$
46
+
47
+ **Mesh graphs.** A continuous manifold of a 3D shape can be discretized into a triangular mesh graph $M=(V,F)$. $V\in\mathbb{R}^{n\times 3}$ is the vertices matrix, in which each of the $n$ vertices is assigned a 3D Euclidean coordinate. $F\in\mathbb{R}^{m\times 3}$ is the triangular faces matrix consisting of $m$ triplets of vertices. We calculate the discrete LBO using the prevailing classic cotangent scheme [21]. In this case, the LBO is an $n\times n$ matrix and the eigenvectors are approximated samples of the continuous eigenfunctions on the vertices of the mesh graph [13]. Let us arrange the eigenvectors as the columns of $\Phi\in\mathbb{R}^{n\times n}$ and the $n$ spectral coefficients of each Euclidean axis as the columns of $A\in\mathbb{R}^{n\times 3}$. Then, the spectral representation of the mesh vertices is given by:
48
+
49
+ $$V = \Phi A. \tag{2}$$
+
+ ![](_page_2_Picture_9.jpeg)
+
+ Figure 2. The proposed attack framework. Attack parameters perturb the spectral coefficients of the source shape to craft an adversarial example. The malicious input (Adversary) misleads the AE to reconstruct the geometry of the target mesh. The perturbation is optimized using a loss function that compares the AE's output with the target shape, and regularizes the adversarial shape to preserve the source's geometric properties.
54
+
55
+ The spectral decomposition of a mesh is computationally demanding, which restrains the efficiency of our attack. Thus, we propose a novel approach in which the attack is performed in a shared spectral domain. The idea is to represent all the attacked shapes in a shared coordinate system defined by a single set of spectral eigenvectors. This shared basis accelerates the attack by omitting the heavy calculations of a per-shape spectral decomposition.
56
+
57
+ **Shared spectral basis.** The spectral decomposition varies between different shapes since the surface of each shape is a unique manifold and its spectral eigenfunctions are defined over its specific geometric domain. However, the geometric resemblance of the shapes in the dataset can be utilized to construct a shared basis of eigenvectors. The idea of a shared set of eigenvectors assures that, practically, the Euclidean coordinates of the vertices of any shape can be spanned by the shared basis with a negligible error.
58
+
59
+ The shared basis was built as a linear combination of the bases of multiple shapes, which were sampled from different classes. The coefficients of the linear combination were optimized using gradient descent. The loss function was the sum, across all sampled shapes, of the mean-vertex Euclidean distance between the original coordinates and their representation in the shared spectral domain. More details can be found in the supplementary.
60
+
61
+ **Basis transformation.** We denote the shared basis by $\Phi_{shared} \in \mathbb{R}^{n \times n}$ , where its columns are the set of n shared eigenvectors. In the new coordinate system, the vertex matrix V of a mesh M can be replaced by the spectral coefficients matrix $A' \in \mathbb{R}^{n \times 3}$ according to:
62
+
63
+ $$V = \Phi_{shared} A'. \tag{3}$$
64
+
65
+ Given $\Phi_{shared}$ and V, the spectral coefficients are found using least squares. In the following sections, we refer to A'
66
+
67
+ simply as A for ease of notation and assume it was calculated using $\Phi_{shared}$ .
68
+
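+ Concretely, the change of basis is a single least-squares solve per mesh; a minimal NumPy sketch (the function name is ours):
+
+ ```python
+ import numpy as np
+
+ def spectral_coefficients(V, Phi_shared):
+     """Solve Phi_shared @ A = V for A in the least-squares sense (Eq. 3)."""
+     A, *_ = np.linalg.lstsq(Phi_shared, V, rcond=None)
+     return A  # (n, 3): one column of coefficients per Euclidean axis
+ ```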
69
+ We pose the attack as an optimization problem in a white-box framework, where the AE is fixed. We denote the source mesh taken from class $\mathcal S$ by $M_{\mathcal S}=(V_{\mathcal S},F_{\mathcal S})$ , and the target mesh taken from class $\mathcal T$ by $M_{\mathcal T}=(V_{\mathcal T},F_{\mathcal T})$ . The spectral representations of $V_{\mathcal S}$ and $V_{\mathcal T}$ are given by the spectral coefficients matrices $A_{\mathcal S}$ and $A_{\mathcal T}$ , as defined in Equation 3. Let us denote by k the number of frequencies we aim to perturb. We add perturbation parameters from $B\in\mathbb R^{k\times 3}$ to obtain the adversarial input $A_{adv}$ , according to:
70
+
71
+ $$A_{adv}(i) = \begin{cases} A_{\mathcal{S}}(i) + B(i), & \text{if } i < k \\ A_{\mathcal{S}}(i), & \text{otherwise,} \end{cases}$$
72
+ (4)
73
+
74
+ where $A_{\mathcal{S}}(i) = [\alpha_{i,x}, \alpha_{i,y}, \alpha_{i,z}] \in \mathbb{R}^3$ and $B(i) = [\beta_{i,x}, \beta_{i,y}, \beta_{i,z}] \in \mathbb{R}^3$ are the spectral coefficients of frequency i and their perturbation parameters, respectively. Note that the optimized parameters of the attack are the elements of B. The resulting adversarial mesh is $M_{adv} = (V_{adv}, F_{\mathcal{S}})$ , where $V_{adv} = \Phi_{shared} A_{adv}$ . Also, we propose an attack with a multiplicative perturbation, defined as:
75
+
76
+ $$A_{adv}(i) = \begin{cases} A_{\mathcal{S}}(i)(1 + B(i)), & \text{if } i < k \\ A_{\mathcal{S}}(i), & \text{otherwise.} \end{cases}$$
77
+ (5)
78
+
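+ The perturbation therefore touches only the first k rows of the coefficient matrix. A PyTorch-style sketch of Equations 4 and 5 (the function name is illustrative; only B is optimized):
+
+ ```python
+ import torch
+
+ def adversarial_vertices(A_src, B, Phi_shared, multiplicative=False):
+     """Perturb the k lowest-frequency coefficients and map back to vertices.
+
+     A_src: (n, 3) source coefficients; B: (k, 3) attack parameters;
+     Phi_shared: (n, n) shared spectral basis.
+     """
+     k = B.shape[0]
+     low = A_src[:k] * (1.0 + B) if multiplicative else A_src[:k] + B
+     A_adv = torch.cat([low, A_src[k:]], dim=0)  # higher frequencies untouched
+     return Phi_shared @ A_adv                   # V_adv; the faces F_S are reused
+ ```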
79
+ The advantages of operating in the spectral domain are realized by confining the attack to a limited range of low frequencies. By attacking only the low frequencies, we inherently enforce smooth surface perturbations and reduce sharp local changes of the curvature. Consequently, significantly fewer parameters are used compared to a Euclidean space attack where all vertices are shifted. It also offers the flexibility to control the number of optimized parameters.
80
+
81
+ **Problem statement.** The problem statement is depicted in Figure 2. The parameters of the perturbation B are optimized according to the following objective:
82
+
83
+ $$\underset{B}{\operatorname{argmin}} \quad \mathcal{L}_{recon}(\widehat{M}_{adv}, M_{\mathcal{T}}) + \mathcal{L}_{reg}(M_{adv}, M_{\mathcal{S}}) \quad \text{s.t.} \quad \widehat{M}_{adv} = f_{AE}(M_{adv}), \tag{6}$$
87
+
88
+ where $f_{AE}$ is the AE model and $\widehat{M}_{adv}$ is the reconstruction of $M_{adv}$ by $f_{AE}$ . $\mathcal{L}_{recon}$ and $\mathcal{L}_{reg}$ are the loss terms for the target reconstruction and the perturbation regularization, correspondingly. Both terms are further discussed next.
89
+
90
+ **Reconstruction and regularization losses.** The reconstruction of a target shape is achieved by explicitly minimizing the Euclidean distance between the vertices of the AE's
91
+
92
+ output and the vertices of the clean target mesh. Specifically, $\mathcal{L}_{recon}$ is defined as:
93
+
94
+ $$\mathcal{L}_{recon} = \frac{1}{n} \sum_{i=1}^{n} \left\| \widehat{V}_{adv}(i) - V_{\mathcal{T}}(i) \right\|_{2}^{2}, \tag{7}$$
95
+
96
+ where $\widehat{V}_{adv}(i), V_{\mathcal{T}}(i) \in \mathbb{R}^3$ are the 3D coordinates of vertex i in meshes $\widehat{M}_{adv}, M_{\mathcal{T}}$ , respectively. The sign $\|\cdot\|_2$ refers to the $l_2$ -norm.
97
+
98
+ To alleviate the distortion of the source shape, we combine the inherent smoothness provided by the spectral perturbations with the $\mathcal{L}_{reg}$ loss. This loss consists of additional mesh-oriented regularizations that are meant to prevent abnormal geometric distortions.
99
+
100
+ We consider four kinds of regularization measures in $\mathcal{L}_{reg}$ , each with a different weight assigned to it. Inspired by Sorkine [25], the first term, denoted by $\mathcal{L}_{lap}$ , compares the shapes in a non-weighted-Laplacian representation. In this representation, a vertex V(i) is represented by the difference between V(i) and the average of its neighbors. This loss promotes smooth perturbations since it considers the relative location of a vertex compared to its neighbors. Let I be an identity matrix of size $n \times n$ , J be the mesh adjacency matrix, and $D = diag(d_1, ..., d_n)$ be the degree matrix. Then, the non-weighted Laplacian operator, $L_{non}$ , is defined as $L_{non} = I - D^{-1}J$ , and the vertices matrix is transformed into $\tilde{V} = L_{non}V$ . The loss $\mathcal{L}_{lap}$ is defined as:
101
+
102
+ $$\mathcal{L}_{lap} = \frac{1}{n} \sum_{i=1}^{n} \left\| \tilde{V}_{adv}(i) - \tilde{V}_{S}(i) \right\|_{2}^{2}.$$
103
+ (8)
104
+
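+ For reference, a minimal NumPy sketch of $\mathcal{L}_{lap}$ built directly from the definitions above (identity minus degree-normalized adjacency):
+
+ ```python
+ import numpy as np
+
+ def laplacian_loss(V_adv, V_src, J):
+     """L_lap of Eq. (8): compare meshes in the non-weighted-Laplacian representation.
+
+     V_adv, V_src: (n, 3) vertex coordinates; J: (n, n) binary adjacency matrix.
+     """
+     n = J.shape[0]
+     d = J.sum(axis=1)                     # vertex degrees d_1, ..., d_n
+     L_non = np.eye(n) - J / d[:, None]    # L_non = I - D^{-1} J
+     diff = L_non @ V_adv - L_non @ V_src  # difference of delta-coordinates
+     return (diff ** 2).sum(axis=1).mean()
+ ```
+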
105
+ The second regularization term, $\mathcal{L}_{area}$, penalizes the Euclidean distance between matching vertices, normalized by the total surface area of all the triangles containing the vertex in the clean source shape. The loss $\mathcal{L}_{area}$ restrains changes in heavily sampled regions of high curvature, a vital requirement for geometric detail preservation. It is defined as:
106
+
107
+ $$\mathcal{L}_{area} = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{area(i)} \|V_{adv}(i) - V_{\mathcal{S}}(i)\|_{2}^{2}, \quad (9)$$
108
+
109
+ where $area(i)$ is a weight defined by the sum of the surface areas of all the faces containing vertex $i$ in $M_{\mathcal{S}}$.
110
+
111
+ Let us denote by $N(M) \in \mathbb{R}^{m \times 3}$ the normal vectors of all the faces of mesh $M$ and by $E(M) \in \mathbb{R}^d$ the lengths of all the edges of mesh $M$, where $d$ is the number of edges. The third and fourth regularization terms in $\mathcal{L}_{reg}$ are denoted by $\mathcal{L}_{norm}$ and $\mathcal{L}_{edge}$, and are defined as follows:
112
+
113
+ $$\mathcal{L}_{norm} = \frac{1}{m} \sum_{i=1}^{m} \| N(M_{adv})(i) - N(M_{\mathcal{S}})(i) \|_{2}^{2}, \quad (10)$$
114
+
115
+ $$\mathcal{L}_{edge} = \frac{1}{d} \sum_{i=1}^{d} |E(M_{adv})(i) - E(M_{S})(i)|^{2}.$$
116
+ (11)
117
+
118
+ The loss $\mathcal{L}_{norm}$ prevents the formation of sharp curves in the adversarial mesh by limiting the deviation of the surface's normal vectors. It is particularly beneficial when the geometric differences between the source and target shapes are coarse. The loss $\mathcal{L}_{edge}$, on the other hand, alleviates local stretches and volumetric changes by keeping the edge lengths from changing. Referring to the problem statement in Equation 6, we define $\mathcal{L}_{reg}$ as:
119
+
120
+ $$\mathcal{L}_{reg} = \lambda_l \mathcal{L}_{lap} + \lambda_e \mathcal{L}_{edge} + \lambda_a \mathcal{L}_{area} + \lambda_n \mathcal{L}_{norm}, \quad (12)$$
121
+
122
+ where $\lambda_l$, $\lambda_e$, $\lambda_a$, and $\lambda_n$ are the loss terms' weights.
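+
+ Putting the four terms together, here is a hedged NumPy sketch of $\mathcal{L}_{reg}$; it reuses `laplacian_loss` from the sketch after Eq. (8), the mesh is assumed given as vertices `V`, faces `F`, edges `E`, and adjacency `J`, and unit face normals are an assumption (the paper does not specify the normalization here).
+
+ ```python
+ import numpy as np
+
+ def face_normals(V, F):
+     """Per-face unit normals; F holds triangles as (m, 3) vertex indices."""
+     n = np.cross(V[F[:, 1]] - V[F[:, 0]], V[F[:, 2]] - V[F[:, 0]])
+     return n / (np.linalg.norm(n, axis=1, keepdims=True) + 1e-12)
+
+ def edge_lengths(V, E):
+     """Lengths of the edges E, given as (d, 2) vertex indices."""
+     return np.linalg.norm(V[E[:, 0]] - V[E[:, 1]], axis=1)
+
+ def vertex_areas(V, F):
+     """area(i): summed surface area of all faces containing vertex i."""
+     n = np.cross(V[F[:, 1]] - V[F[:, 0]], V[F[:, 2]] - V[F[:, 0]])
+     face_area = 0.5 * np.linalg.norm(n, axis=1)
+     areas = np.zeros(len(V))
+     np.add.at(areas, F.ravel(), np.repeat(face_area, 3))
+     return areas
+
+ def regularization_loss(V_adv, V_src, F, E, J, lam=(1.0, 1.0, 1.0, 1.0)):
+     """L_reg of Eq. (12); laplacian_loss is the sketch given after Eq. (8)."""
+     lam_l, lam_e, lam_a, lam_n = lam
+     L_lap = laplacian_loss(V_adv, V_src, J)                                        # Eq. (8)
+     L_area = (((V_adv - V_src) ** 2).sum(axis=1) / vertex_areas(V_src, F)).mean()  # Eq. (9)
+     L_norm = ((face_normals(V_adv, F) - face_normals(V_src, F)) ** 2).sum(axis=1).mean()  # Eq. (10)
+     L_edge = ((edge_lengths(V_adv, E) - edge_lengths(V_src, E)) ** 2).mean()       # Eq. (11)
+     return lam_l * L_lap + lam_e * L_edge + lam_a * L_area + lam_n * L_norm
+ ```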
2212.00767/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2212.00767/paper_text/intro_method.md ADDED
@@ -0,0 +1,81 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Navigating safely in a dynamic scenario populated by humans who are moving in the same environment is necessary for embodied agents such as home assistant robots. To do so, as depicted in Figure [1,](#page-0-0) the agent should be able to dynamically and interactively navigate the environment by avoiding static objects and moving persons.
4
+
5
+ Recently, the development of photorealistic 3D simulators [\[35,](#page-9-0) [36,](#page-9-1) [21\]](#page-8-0) has provided the tools to train embodied agents and experiment in large-scale indoor environments [\[8,](#page-8-1) [31,](#page-9-2) [15\]](#page-8-2). Thanks to these frameworks, several tasks and challenges have been introduced [\[1,](#page-8-3) [48,](#page-9-3) [14\]](#page-8-4), fostering the development of accompanying techniques to solve these tasks. In particular, in the PointGoal Navigation task (where an agent is required to reach a specific location in an environment), an agent without any sensor/actuation noise trained for billions of steps can obtain almost perfect performance [\[42\]](#page-9-4). Other approaches [\[45,](#page-9-5) [27\]](#page-9-6) obtained impressive results even in the presence of noise. Another relevant task is Object Goal Navigation, where an agent is required to
6
+
7
+ <span id="page-0-0"></span>![](_page_0_Picture_10.jpeg)
8
+
9
+ Figure 1: Illustration of an agent-person "encounter". From top-left to bottom-right: *i*) episode starts; *ii*) the embodied agent/robot sees a person; *iii*) it moves back to avoid a collision; *iv*) it reaches the goal by avoiding the person.
10
+
11
+ find and navigate to a specific object given the object category. This task requires both semantic and navigation capabilities, and has led to the development of modular approaches based on semantic maps [\[9,](#page-8-5) [6,](#page-8-6) [32\]](#page-9-7), as well as end-to-end trained models using reinforcement learning (RL) [\[46,](#page-9-8) [33\]](#page-9-9).
12
+
13
+ However, despite encouraging progress on these challenging tasks, all the previously mentioned tasks frame navigation in a fundamentally static environment. The dynamic element introduced by sentient, moving human beings in the scene forces us to rethink how the current models are designed. A good navigation policy must not just be effective (i.e., able to achieve its goal) and efficient (i.e., able to achieve the objective through a close-to-optimal path) but also safe (reaching the destination without harming others). This social element is included in the Social Navigation Task (SocialNav) [\[44,](#page-9-10) [29\]](#page-9-11), where an agent must tackle PointGoal Navigation in simulated indoor environments. To tackle this task, Yokoyama et al. [\[47\]](#page-9-12) introduced a simple but quite effective model that placed first in the iGibson 2021 SocialNav challenge. However, the approach does not explicitly encode any social behavior in its navigation
14
+
15
+ <sup>\*</sup>Both authors contributed equally to this work.
16
+
17
+ policy. We believe that a clear encoding of human-agent interactions, as well as social behaviors, is required for safe navigation and interaction with humans. By modeling the movement of humans, the agent could prevent collisions or dangerous behaviors and adapt its path to the dynamic environment in which it is navigating. We encode these "signals" by introducing two *Proximity-Aware Tasks*, referred to as *risk* and *proximity compass*. These auxiliary tasks model the present and future danger for the agent's actions.
18
+
19
+ Additionally, we define a fine-grained evaluation protocol for the SocialNav task to better analyse the performance of agents during human-agent interactions. Our evaluation protocol is inspired by a similar attempt [\[30\]](#page-9-13) in robotics, which consisted of collecting statistics about specific types of interaction between humans and a robot (through questionnaires). We propose an automated evaluation by identifying and characterizing *encounters* between human and agent. To do so, we extract short sub-sequences where an interaction with a human becomes a predominant factor influencing navigation, and we establish a set of rules for classifying each encounter based on the type of human-agent spatial relationship through time. Finally, we also introduce a dataset of episodes on top of HM3D [\[31\]](#page-9-2) for Embodied Social Navigation to assess our agents in different environments.
20
+
21
+ In summary, the contributions of this work are threefold: (1) A novel architecture for embodied social navigation which is based on Proximity-Aware tasks; we show the effectiveness of the model on two public datasets. (2) A new encounter-based evaluation protocol for analysing social navigation models. (3) A set of episodes for evaluating embodied social navigation based on the HM3D dataset (called HM3D-S).
22
+
23
+ # Method
24
+
25
+ **Overview.** Figure 2 shows an outline of our framework. It comprises two main modules: (i) *Proximity feature extraction*, and (ii) *Policy architecture*. The *Proximity feature extraction* module refines proximity information obtained from the simulator to extract features that describe some aspect of social interactions (ground truth proximity features). The *Policy architecture* extracts from the RGB-D and the GPS+Compass sensors an embedding that serves as input for our Proximity-Aware tasks. These tasks refine this embedding and create *n* task embeddings (one per task) which are then fused together through state attention. An action is sampled from the state attention output.
26
+
27
+ Our policy network comprises the following modules: *i)* two encoders (the *Visual backbone* and the *Position Encoder*) that create an embedding from the RGB-D and the GPS+Compass sensors; *ii)* a *Recurrent State Encoder* that accumulates such embedding through a series of recurrent units; *iii)* a *State Attention* module that fuses the outputs of such units through an attention mechanism to produce the action the robot has to perform.
28
+
29
+ We encode each RGB-D frame $x_t$ using a CNN (Visual Backbone) $f(\cdot)$ to a visual embedding $\phi_t^v = f(x_t)$ . The position and rotation of the agent $\alpha_t$ are encoded using a linear layer $g(\cdot)$ to obtain the embedding $\phi_t^p = g(\alpha_t)$ . The outputs of these two encoders are then concatenated into the final embedding $\phi_t^f = \phi_t^v \oplus \phi_t^p$ . To accumulate embeddings over time, we follow Ye et al. [45]'s design for PointNav and implement our state encoder as a stack of parallel recurrent units. Each unit at each timestep is fed $\phi_t^f$ , and outputs its internal state, called belief.
30
+
31
+ The key idea of having multiple beliefs is that each recurrent unit can focus on a specific navigation aspect. The final decision about what action the robot should take is sampled by weighting each belief according to the situation. For this reason, all beliefs $\mathcal B$ are subsequently fused through the *State Attention* module to compute the mean $\vec{\mu_t}$ and standard deviation $\vec{\sigma_t}$ of the normal distribution from which we sample the action $a_t$ . Formally, given $\{RU^{(i)}\}_{\forall i\in\mathcal B}$ a set of recurrent units, the encoded beliefs $h_t$ are defined as:
32
+
33
+ $$h_t := \{h_t^{(i)}\}_{\forall i \in \mathcal{B}} \leftarrow \{RU^{(i)}(h_{t-1}^{(i)}; \phi_t^f)\}_{\forall i \in \mathcal{B}}$$
34
+ (1)
35
+
36
+ The fusion mechanism of the state attention module SA is:
37
+
38
+ $$\vec{\mu_t}, \vec{\sigma_t} \leftarrow SA(h_t, \phi_t^f) = FC_a(\text{Attention}(h_t, FC_k(\phi_t^f), h_t))$$
39
+
40
+ where $\operatorname{Attention}(Q,K,V) \mapsto \operatorname{Softmax}(\frac{QK^T}{\sqrt{d_k}})V$ and $FC_a$ and $FC_k$ are two linear layers.
41
+
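+ The following PyTorch sketch condenses the multi-belief state encoder of Eq. (1) and the state-attention fusion; dimensions, layer names, and the exact query/key assignment are one plausible reading of the SA module above, not the authors' exact implementation.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class MultiBeliefPolicy(nn.Module):
+     """Parallel recurrent units ("beliefs") fused by scaled dot-product attention."""
+
+     def __init__(self, feat_dim=512, hid_dim=512, n_beliefs=2, n_actions=2):
+         super().__init__()
+         self.units = nn.ModuleList([nn.GRUCell(feat_dim, hid_dim) for _ in range(n_beliefs)])
+         self.fc_k = nn.Linear(feat_dim, hid_dim)       # FC_k on the fused embedding
+         self.fc_a = nn.Linear(hid_dim, 2 * n_actions)  # FC_a -> action mean and std
+
+     def forward(self, phi, hs):
+         hs = [ru(phi, h) for ru, h in zip(self.units, hs)]   # Eq. (1)
+         H = torch.stack(hs, dim=1)                           # (B, n_beliefs, hid)
+         q = self.fc_k(phi).unsqueeze(1)                      # (B, 1, hid)
+         attn = torch.softmax(q @ H.transpose(1, 2) / H.shape[-1] ** 0.5, dim=-1)
+         fused = (attn @ H).squeeze(1)                        # weighted sum of beliefs
+         mu, log_std = self.fc_a(fused).chunk(2, dim=-1)
+         return mu, log_std.exp(), hs
+
+ policy = MultiBeliefPolicy()
+ phi = torch.randn(8, 512)                     # phi_t^f for a batch of 8
+ hs = [torch.zeros(8, 512) for _ in range(2)]  # initial beliefs
+ mu, std, hs = policy(phi, hs)
+ ```
+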
42
+ <span id="page-2-0"></span><sup>&</sup>lt;sup>1</sup>SE(2) is the 2-dimensional special euclidean group.
43
+
44
+ <span id="page-3-0"></span>![](_page_3_Figure_0.jpeg)
45
+
46
+ Figure 2: Pipeline and model overview. *Proximity information* is extracted from Habitat Simulator (left rectangle) and is processed through a *Proximity Feature extraction* procedure (top-right). The policy (bottom-right) uses RGB-D and GPS+Compass data as input and, during training, is conditioned by the extracted proximity features.
47
+
48
+ With multiple beliefs, we can inject different signals in our embeddings, e.g., social dynamics occurring in an episode. To do so, during training, we condition each belief with a unique auxiliary loss jointly optimized with the action and value ones during the optimization step of the policy network. This is done by processing each belief with a specific type of *Proximity feature*, through a *Regressor network* (see Fig. [3\)](#page-4-1), that computes our *Proximity-Aware tasks* predictions. Each auxiliary task is responsible for predicting the proximity features in the time range $[t, t+k]$, conditioned on the corresponding belief $h_t^{(i)}$ and the sequence of performed actions $\{a_j\}_{j\in[t,t+k]}$, where $k$ is the number of future frames to predict. Formally, for a given sequence of proximity features $\{s_j\}_{j\in[t,t+k]}$, the task aims to optimize the following auxiliary loss:
49
+
50
+ <span id="page-3-1"></span>
51
+ $$\mathcal{L}_{f} = \frac{\sum_{j \in [t, t+k]} \text{MSE}(s_{j}, \hat{s_{j}})}{k}$$
52
+ (2)
53
+
54
+ where $\{\hat{s}_j\}_{j\in[t,t+k]} = \mathcal{M}(h_t^{(i)}, \{a_j\}_{j\in[t,t+k]})$ and $\mathcal{M}$ is the regressor network. The proximity features are only fed to the model at training time and regressor networks are detached during evaluation.
55
+
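+ A compact PyTorch sketch of a regressor network $\mathcal{M}$ and the auxiliary loss of Eq. 2; the belief seeds the GRU hidden state and future actions drive the rollout (all names and sizes are illustrative).
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class ProximityRegressor(nn.Module):
+     """Regressor M: rolls a GRU over future actions, seeded by belief h_t^(i)."""
+
+     def __init__(self, hid_dim=512, act_dim=2, feat_dim=1):
+         super().__init__()
+         self.gru = nn.GRU(act_dim, hid_dim, batch_first=True)
+         self.head = nn.Linear(hid_dim, feat_dim)
+
+     def forward(self, h_t, actions):       # h_t: (B, hid); actions: (B, k, act_dim)
+         out, _ = self.gru(actions, h_t.unsqueeze(0))   # belief as initial hidden state
+         return self.head(out)                          # (B, k, feat_dim): s_hat
+
+ def aux_loss(s_true, s_pred):
+     """Eq. (2): MSE averaged over the k predicted steps."""
+     return ((s_true - s_pred) ** 2).mean()
+ ```
+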
56
+ We design two types of proximity tasks corresponding to two social features: (i) *Risk Estimation*, and (ii) *Proximity Compass*. Our design has the benefit of being easily extensible with other, possibly more complex social tasks and to be also compatible with general purpose self-supervised tasks like the ones used in Ye et al. [\[45\]](#page-9-5) (e.g., CPC|A [\[18\]](#page-8-24) or ID [\[28,](#page-9-21) [46\]](#page-9-8)).
57
+
58
+ To exploit different proximity features, we extract from the simulator the relative position of every person w.r.t. the agent. We refer to this data as *Proximity Information*:
59
+
60
+ $$SI_t \stackrel{\text{def}}{=} \{\delta_t^i := (pos(p_t^i) - pos(\alpha_t)) \in \mathbb{R}^2\}_{\forall i \in \mathcal{P}}$$
61
+
62
+ where the function $pos(\cdot)$ extracts the position from an element of $\alpha$ or $p^i$.
63
+
64
+ Risk Estimation. *Risk Estimation* is a Proximity-Aware Task designed to deal with short-range social interactions, informing the agent about imminent collision dangers. Given $SI_t$, we define the *Risk value* as a scalar representing how close the agent and the nearest person are, up to a maximum distance $D_r$. This value ranges from 0 (the nearest neighbor is further than $D_r$ meters away) to 1 (the agent and person are colliding). Formally:
65
+
66
+ $$risk_{t} = clamp \left(1 - \frac{\min\{||\delta_{t}^{i}||_{2} | \delta_{t}^{i} \in SI_{t}\}}{D_{r}}, 0, 1\right)$$
67
+ (3)
68
+
69
+ where $\mathrm{clamp}(\cdot, 0, 1)$ limits the value to the $[0, 1]$ range.
70
+
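+ Eq. 3 translates almost directly into code; a NumPy sketch, with `deltas` the stacked relative positions from $SI_t$:
+
+ ```python
+ import numpy as np
+
+ def risk_value(deltas, D_r=1.0):
+     """Eq. (3): deltas is the (P, 2) array of person positions relative to the agent."""
+     if len(deltas) == 0:
+         return 0.0                        # no people around -> no risk
+     nearest = np.linalg.norm(deltas, axis=1).min()
+     return float(np.clip(1.0 - nearest / D_r, 0.0, 1.0))
+ ```
+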
71
+ Proximity Compass. This *Proximity-Aware Task* models the long-distance component of social dynamics. The feature captures not only social interaction over a larger area with radius $D_c > D_r$ but also a weak indication of the direction a person may come from. Much like humans can make
72
+
73
+ <span id="page-4-1"></span>![](_page_4_Figure_0.jpeg)
74
+
75
+ Figure 3: Regressor network. Actions $a_t...a_{t+k}$ are used as input, and a linear layer processes the GRU's hidden states to obtain the predicted proximity features $\hat{s}_t...\hat{s}_{t+k}$ ; $\{s_i\}_{i\in[t,t+k]}$ is the ground truth used by $\mathcal{L}_f$ (from Eq. 2).
76
+
77
+ guesses about people's whereabouts based on previous observations, partial knowledge of the environment, and a person's trajectory, we expect our agent to acquire similar knowledge at training time.
78
+
79
+ Such information is represented through a *Proximity Compass*. In the compass, north represents the direction the agent is looking, and the compass is partitioned into a finite number of non-overlapping sectors. Given each person $i \in \mathcal{P}$, $\theta_{a \to i}$ represents the angle of the segment connecting the agent to that person w.r.t. the north of the compass. These angles are associated with a specific sector. We compute the risk value for each sector over the people that fall within it. The entire compass is represented as a vector by unrolling the sequence of sectors from the north going clockwise. Formally, if we have $k$ equivalent sectors, the vector $\operatorname{comp}_t \in \mathbb{R}^k$ is defined as:
80
+
81
+ $$\begin{split} \operatorname{comp}_t[j] &= \operatorname{clamp}\Big(1 - \frac{\min\{||\delta_t^i||_2 \mid \delta_t^i \in \Theta_j\}}{D_c}, 0, 1\Big) \\ \text{with } \Theta_j &= \Big\{\delta_t^i \in SI_t \mid \theta_{a \to i} \in \Big[\frac{2\pi}{k} \cdot j, \frac{2\pi}{k} \cdot (j+1)\Big)\Big\}, \quad \forall j \in [0, k-1] \end{split}$$
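+
+ A NumPy sketch of the compass vector; the mapping from a relative position to an angle w.r.t. the agent's heading (`arctan2` in an agent-centric frame) is an assumption about the coordinate convention.
+
+ ```python
+ import numpy as np
+
+ def proximity_compass(deltas, k=4, D_c=3.0):
+     """Per-sector risk values, unrolled clockwise starting from north."""
+     comp = np.zeros(k)
+     for delta in deltas:                   # relative position of one person
+         theta = np.arctan2(delta[0], delta[1]) % (2 * np.pi)  # 0 = north (assumed)
+         j = int(theta // (2 * np.pi / k))  # sector this person falls into
+         risk = np.clip(1.0 - np.linalg.norm(delta) / D_c, 0.0, 1.0)
+         comp[j] = max(comp[j], risk)       # nearest person dominates the sector
+     return comp
+ ```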
2212.12192/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-07-07T01:26:02.276Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36" etag="TIW_GGhzbNnGpDinUOte" version="20.0.4"><diagram id="EL-nRj4Y75TxRPdEqEu0" name="Page-1">7X3XltvGFuXX3DUzD/ZCDo/IgUgMIAm8ASByJCKBr5+qDrIkt2xdu2XZ1+plq8kCUOGcfXKh+j+4UD+UPugys73F1X8w5Pb4Dy7+B8NQnGTAL9iyvrTQLPXckvb57aXtl4ZjvsUvjchL65Tf4uGTG8e2rca8+7QxapsmjsZP2oK+b5dPb0va6tNRuyCNf9VwjILq162X/DZmz60MifzSrsZ5mr2OjCIvV+rg9eaXhiELbu3yURMu/QcX+rYdnz/VDyGuIPVe6fL8nPyFqx8m1sfN+DUPaKKFafOJIe+93NWNjzrt8NNLL3NQTS8L/g9GVaA/PmlBt2DW4/pCCuo+ta8XfhqeGMWBG1CmA9zmf7kOqRxEnz5zymvARAyx4gX8e2jroPn4ASqFv6UmAvjpXycAVvI8h+erL0T8MB2sb6fmFsPFoeDykuVjfOyeB14AGEFbNtbVy+UqCOOKD6IyfXpMaKu2B5eatonhkvKqem36D4bfgphJItA+jH1bxh9doSImDpMPs5njfowfX+QH+oHLQD7ito7HfgW3vDyAv2LoRTTwV+Qsv+AMo1/aso8wRry0BS/QTj90/Qv3wYcXAPwXYMD+PBiI9wLD68hDB6++MTIg/PhTUOVp8zz0MAb9+PnYt2AMfhqyOB6Hn5K2r6cq+CkM+p+eHv7t1aD401KiZ+aDEQB/4M/rbcsLR7gG9BtUrzO6xVHbB2PeNk+XnuEF70+COq8Aq7n/w/V5UP2fD8M9z+KL3fw0lHn3U96Uv/T3a2qdsvhJB9aAWhBiP//8M7gU1FAKmnDoPiIoQMYzTf+2dI4DAsfJP0/nF0K/L6WF0wGKaAABDCQ9qIEGgor39T6tGeO+eeovgALHT0PexAO83QyiDH7+hHI5pDrKYsR/MOED4/4hjPobC8QP9H8T9D/rmbzJR9B3PtTgi8abz35V1S5QFv4pdP4bg/f3lcDv+UZw1E8doE9dmTf8npemFx6IUQwVGWiAPk4O/GHu5UKd325wmDc9rl98slfKfuQ8faD4OzhPKEF+6jxRv3aeCIb8tfOEfyvnCf+i83TL5z+M+a8Un/CDZ/vTLxBH+jT8vxhJPluWjz/8v98c5a939r7OaL6N/N+Uk7B/48YnfvyzBIeWZfl9BIfCf19wUPaNqOObCQ7xhuB8xhUQOnfwYzT11cr3AOzx+AVCfsS5TwPEpMo79fUzoK/8osHFtxAKHx5f9L7IvpPOwplPAz6UfiPgeyveo74V6cl/C+lp7LNYG/060qPMtyI99U1i7eiD0vil8dXu/jqUAf1BdfeP8yl+Bz+faU8cl2WWfScRpj7FEfEGjqi/UnnS3x9GXDMsX5+1+4EimNT7m6EIxX/Fp/iWxseXr20/Zm0KfTPpl9bPKPrLPUbbdi+sLeJxXF9y+sE0tp8yHlCrX68vzz998eCXn8nXr+Lj44vi+t8x72VclPjdSASu9bdZCUjTTv2La/s2DV8KHcCHT+Pxt2iNvY2NPq6A1Zs/ncj7c5r9vpz+hbnex9feh9PUrzkty1BwvxOnme/JaebPW4Z3q+yoQDPHzZPajMa2/3Iw92dKPO+gmFHiU8X8odTy3UoyKPU/LK/4XyOv7NfKK/kn5fXlUafNnwTsNeJm2Z/Zj34+jb8x5DPoPE/zpY/P0PNhUn8cUOzfSC2I8bcr+H6Xki5Jfcbc768/kDf4/blCaW4c3CkBfesqGIY8+lIw/87e1x9VM1+vG35f5t/m50f8It9g12vbn9QMHxLUn+fiXrt4Vm2/0gW/3xH51yoV9K1tJL9htuIqbJePLdZTA7jwGtI9sfx7ofKRjx+BEnzzXlEIPv8CSfhl/Rif3xTJv2vlvhbxX3BK/xrEU587WOwfRDzFfsFT+6sQT3xfv+x/IWJGsa+ELPsn/bI/Z0Gxb+ExvaTOgL6D+w//awcI+QoH6A/w/Hv4TAT+uc9E/spn+rCp8q+p5NLfV7a/acz1RnlREP6E5XtPfUB+pT7A/2wK7W2bQn9mnL7SNr2bSXmrGvfeOwheVNabCf4kwcH/n8R1w+t+7Nc29K/Y2vmHdjMgX6t0L1kASZDDecLNS7j83262+efuJHjPKgaFfKq4UfzXwe5fu5Pgg6b5X1Tcf1FyG/3a7PZH7zB8D6/s75Te/uDGIV3bVnmT/u9ktD7PiH/YEPH9XlJ4K6P1vTgvPGUmEqCcnzasYIgRrP9LCc1fsR/77ux/Kxz7jK7vsmfpvbXzn+PDZzsGfnqLD2+ZWvSb8eGv2O/6ItDfyM38XCtQL1oBeZfdtMe4eip2/nucy/fc341/pnfeeDfuLbVDfzO4/z13qf6OCnpPjjDk77v7b26h/LwW8X48+QsD5n+kClLiJn55e+WHEvojkP+srPbWhu2/Vgt9edfwlxD/V6Vf/vTLJNh7vzmcfyU9nt5p++LbgvnXy8oXJOsTEfrBrD/HrF9ecW77PM2bYHx65/AH9/4R3IMvO//g1T+DVygLd3X84NY/g1tPrwwjPFju+oNp/xSmmUEzgZHGqX/OGf9g2z+CbcKzDwIF7v2Y9L/ypvI7vuJP0j9/Gn/99MZ7UvhbKYdvl/X88vt27yhPn2D633O4yGca4M09ClH0+UuIf2tNAQ8L+seq9R8w/PvD8Lnh6TiqHzj7gbNvjLMPB579wNoPrH1jrP32EYY/cPYDZ++Ds+SpRP8DZz9w9m1xFvxA2Q+UfXOU/Yg4f+DsL8FZ0AS3PPgH1PP+EUT9IT5/QnyGKRxyAEaY3/1NSH0Hwv2dTxn+tH7xBRn8Win+Ub/4vH4Bfv5+9Ysvvzbz32vWLz1xy4euCtZndZk3Vf4CQKQO+jR/2XlJAkkBC3x+m+qJrrcbrIJ+dBF542BC+L7FJ4Ol4+OnsH2qev9OQfF9X2X8Ewblv7UEv2+Annv53cPZf/4Dcv22GYbS/sZMP6v+whepYc33l98fv6r5Qbk9kTdsARKe7gqgqnv+OATN8NMQ93ny0XNvHF2J/Pf7ev/12on9G2qnLx9O9ffVTshXgv6dtNjf1z09xtHUA90Tv1Mt6g21814xwQ9svHt2qwmqdRh/sP7fx/omj8p3LAv+4P0/iPfvVqT7wfV/ENeD5oes//u4nv+w7v8qrn8V4as2bX+kFb68rn9bWgGn/n5pBfwrjgz5lkcr/taRe+9A8g+HTn+3M1nwt15L/ozA3/Q439881PAPnpn6vIJ3OWT6S+cV/kWnTL8K1ocT5z/t4WuP3MU/+xta2OenVX/jI3fxt7KDXziCI6+f/
qz7ByVpwD//DTrJX/6uGDALY1v/54t/F/wWJ8H0pNA/V7MjPBWOD4bu+Y/NJ/kDgpZ/GpB7bUVeW2BXwRgA6/H8FZM7+KaXkJ95+7AgOyVtOfBjHd1MclPw6QK/cp3AefC3sU+sCPzmj5dK3KO8vkfM1FX12a+rwd9z3E6XC0/hjsfj0q6KnHmKqG3HoRMaHXfbXCtkPU2YkNsGcbeYV26LhFMtPyKCT/cEZ7lLKhRupODBspqac1+NIR/iM+5Uink8TYC4fFI/gnV+JB3lLmhgcn13col6DxYh72PNJDUjL1tP4bNTqmwV02mCZAziynsdV7S7VSdE734TMX7kaGGvgQ6XzDu5+d4keZZj0ssRdMHrPRA3njM0tKwEqWRVr3RGfi+BRhmNEpU4p+fVPO1x07ktmq1nF7eTAzdVCuDCyJqxa4cNKXhNPmBc2x07gsyvPXVMPeWQ+uZJ4JZsBZ4JP5tWJKdr3kjpQ9whyqEwuSz1als7eHkVidxCt4Rsj4itBUdEivPJTWN1n2mG1JZHgnvksFB6ODbKYeQxawuVQ+Z1oA2HaMfkrY5ST76VhJoB30S+Prfi7Pz84fk/etMUfvJDMJo6pxLX2LfUAZOr3ATfQ2Zf/HOm+yVqccITOjSOU9Rgz0lP2Hi65akd/NjHLOVU+EmC6JGOKZd+dN9FSLnlo+9C+fzxbIlHF37mTx6uV/6F3AKFRcReZwjYz+3cneUMuUioadfWHB7JzFfY3D+SRHh9GVp8EHaTjZGCVjdFSmMFHcLGpGIRyb3LYfZql4Lfw8sZ8Y5MrqkpBe5ZbsrAalVmuQIvhmBsTXQnSyAWrSB2msCljqqXftEdD5L3oc+oPtTOUW9v6mGxc2a+4TfcaKLNqNnVX5mHfSpJY+NWY9NW4wqez9EtvpCId01H8Hzx2vdH/Uv+1SqiugLzqeYw51df8Sjvos+3657Vcu3D/a//v87Fv2adXyB5oB6QSGxnAyPBPOQyxPTKeKIVW8VKVRubNJkCkztp+3E/zPPz3T64ZlUoZ1JwQauwsboQnjDEm5u7OsJzD3bFd7FyHi7SoQNUnCIMzPSjlTytRjkTwcWjzlerikow+lNve9CTi1mr//RUNt/EZX4Z9xxh53WPsSPs8Sa+3ZurVIR/efgHpRqfsHFhJ6fYP341M/ftmYH7nuh5UM4bwFcXqYeX9Qk5uAPMBnLUrnTpAJ4Cd3+MJHjXcE/QqwvgKZBH98Cf1ZSnfURdeVyWGA4HJMl3lT/AZXJ7md8fZOkq1T2T63l7keRxP4g3lzbQ7PzY07fMprWso0/r5snhePAYNYaaOAGPM48p8oBd4VmmR3ocT+Y52oCnD9UROQGjym8mbW8DbW1TcjGPaSymUsPpE3eIuLs5paFS4HcNp3h9EcJUunLanTslXJ+mE+fH3JHiatDO92Wz4PeHsQhX+LxIi/1rP/ey5UJCbyWawSheXgQjlfBlR4hGqoCbp1RPa4ILSJ2xqOHIcBUBxoJTXCjiZdSg23EKpFvA22Bh1pSC2eUEbzHCxK0EDx4xWsEDoynTMRHjXpo5HUwu4pzBLoS9NBd9DxWWIMAlM6YsMHNi7Hbgi4LjOBHudLq+Kb6yceKTEuNf/5OJpGmahcFxlQeXeDsvCrxTtuLDbYpDgGBpPo7O6Q4a17NfA5XkbBRFsRlBILjYMs+KscmSyLLHU73jUHSc50RkWZa5ktMMZsfYyHN/gTpRHyZAcWIxu8bT6rPKQ3HVMIyo3O2ojTuz4RFHTMc5H1CTuC7XK46iUQXWiLm9oSrKUIMm0hekaGbQWGv3Sj8618tljcOQJskL9Kv5FxWOk+aybcj6y8j0WJ8Vc7Wd00Rt20brD608lfQHNV8M1zwuqL6fp7ol1ccUKApGeQIRXfuKEFr37r4Yje2EG8uyj0FsO2NYPZ0QhKCNGF3k3e1KOH3fozE1jijw/ZaS3QPnSp78NBhM/XptqmqYT8fjFvlntEF16YWW18OKA8box2kcdQOb8dx1Iosdt558PADg5Z2DNg2OYih9EklW3Lsa5DaO0zQdNfc7JHFclHNJVELCCy26CPHtpusYLV6vV91aRycMAmwOQFfzPDdUeE4SB/TI1Md1fWSWgdJhktxK1IbGT7xarsmeHeF02kidHUmaRjW4/uR0OrH32SdR9JyZ8TnHwtDQ9Wgusgwa7Du/QKhXy2NROwQp6DIkkNK/6ah+ABcMdQe0i4ovixWJYAKWVVIL4yX70tVL/tBVh6AOr4oRRtEwlyln2kf1DjMCchJFEFAVFP6dYRC0Hl7nJIQIRf3goJrgE28z581YVLDi4MbuCvVY1MXCq7PFSdYybnd9cP2LjJKE50agn/V0iPU4CKsGH4YoOqo7m6b7pT8ldhjnh329V+Own8fzrppnR1XiotCx+A5oQnXsbqfhLvSMmVtknWjU7dJWPNkPvAautuJ7o4ncWZKTxRtP3RpRr/IhEOxNpdFqnO8eDSXu1CNdur9j++sDu8QJf1IYojpKptqsvgGjED5jHrOikqQfOacNR89OkaYLQ5XjBFwQWT2XAcdt+skLSOk8X1RA0GS6TNtCTTEQ2BVKcHP1pUb9IN/H1qj9E3XOA/HUpBU/F3WNsRf7AqZ98TPcyB7AXatD6oyujz5EZ4NR1Ie8Nx5Bf/JRajzaaDDnPWYWtbv6PWId5mLXkm2h+iRJ6vO2LAvLhp7Y3RUSje4YUA46GDgMk/OpIrwM6orz1Vg1Yj8n9wTvLE7f+x7QtxdklACeeL4E4jRBd2w99PneVwH33XjUIUVSm4XDQ91VuEc/Fh7FgyCs4X603TKp5bsFIi7ZUdWMZF/0VFUqWVr1u8eRc45NQ8lh4++vAXOvGR+vlMgcLgxUxw/tLs56okNLJR82OBrDuB5Bko8tLjWl22PLrJa2iYWmq4S7LGPjZK4qb9yRCRLF8WWfeWcSQdxzQDMIjmMPOrh3QFDlvq2klg3ZvFWr++HMRfPh6Ju1Fcg00QJjk5ZNw4ulfqlOeJmWhF496nymTbgBTK4x2llYdX0cbknAq6WVGdx6uhedP/WzkPbA98LttMkPB98/OC/6bG4a4zaRLz4sbYXouaHjwdQEXZoGxwd65ZBZzRkFkhwWDDNMITKN2k7BOBVvSuDohJdr0HV+cE+JmxDGFFA1JI+ngeoRBHmYRqfeglH3Cah71BnceUPsCNu1wn2v7vUMcd1r6OoqxhWFhd5jYr33/U7PdqV+Zi5gUix7i5wGp6nAtYInhOb5McKQA9FIBiHIVZamp5wsm8Gke6Cyx+kydpk9kSzQYLUwcycDvVJg5dXxXGam1UNhZMrZsIaFO5G0q0JHobwU+kHrlLN6VDxIwZmrjUZitJ4GI1cWpe7rSkwEYLuo/gYNEyCMjyIQsvTp9Bjsw/7oy2xbA70bjxmwbYjLLwGB3lRzmB8Pgw/n+wFbWKCCeOxZyIBVBf/2adDfENLO7jqTKxLKK0ULcB/ESjAUQiGN4mE+ccDHMrDTMV5ursWoiIJd5xtrAPY1w5N57y5xsL/dboeDepOAHhKFuoXI
vBMhkDeIj0Uxaay6VyBck80zlNsdaqzUfZ91Hv4gmCT08K6yjhztyEcQjZKz191jcg3DB1tTsZOtbc9l9ekQrr7IB3nc5wtyJnfKGQbAlnL1LMzms2t0HS/NNJvKyUGjwLas9n4CSlnWgUk8X/d0y6aiY0C8Yadur/rnZSfMIr90umXh2TaqMel167aVeBLHwa24PIWX8t0Hy5QR4hmqydGOBfQkytj6eJD0TqV90HoR9anD1Ntw7o+WTewiM7kQHEENzWMjgaXNm0zSYn4n34CUnK/6bpqmuhecutk9kgF5HEeLlAflXK2zmYsIULHA5wDDR4Nwsemrk5uBX2HOuLuwh8JZG3KVp0trtdA/UxWpAiQ8QDHO85WpieZ6LxMF2DGCeESWVWjDCAF9NB4mhmw2kBSKdkbTeUQtzexblkC00kMXj4kP9C4s7VNIkO3M9AawFCEdnMdArVCky0SI2OJ0bibevvgWPsqqiuHyBaUP9AL0y47LaPgz2uJu3oNAlh/Ui5OSkyJlg32umuZ6dn06CYL7XGKK7aDBBnwm9sZ6BrvGhqfrD+YQAScn9INhHKcJrdD5Gh6ThfJXblTQY6IIzn23lZNnRm5WYqug3o1iNICj1k8ltjw5bokkqre0olOGZWVhFGjSsljUjS6Jo1+EBFFqzLzqaYb3OSUcET5kB/IEFKVsHyOGYc9YKUDn0xJEVd6dSSV054kz6HkwJ7QV+L1zSKNO6FwZA1qeBGY2v0SVsiOYmU6SpJgO/jltRr0Fln4f5EOVFp5yiqlDsVPMdnlAdyk/aXyoZk1baASp8+JZkp69Q4cXT77mTXmriCtQmqXVKEw69Mxd4WI3Lo1s9DLbiRVBD3fVCTxxT2dtv1WPLQxU/LTfBx2StUyPMwaXmiPWtR7v9alnOQHL7b0TzC34BE3juDQHCVaWd0qR9tyyKBfX6BNfgtf3yJItc7unaHEfMB4haFQQPjLGFq4t4z5Mo86Bb8EfHoY8wiSGgN2gdPtcS1rQ1UnDtAVYKD01DPvRajal5ZxOmrNqvFkqEk/A+0PIeiOpMS45n+Zk7uKTRcreIKJJST1U7MU/31UYWRU8AmeY7MOYcKZozwJPRkXMRDD7rFuQbHd7ULyw0ccOWZcev7IjQbpmpGgzULpMzQmtvg6Pfp/MjmAXZiV6nGIO0omUHpDNJk+bNZy0Y9Yn+2RDb5zYw+8Pxx4kk6aBowEadRHc0OqoL66FodLGkyt59gzTl/epsJNX5hBfJQSoaLkC/mFZ2ZNl3EFosnNvUFnv5I3uGGTOYg5dm4crnxGokkUQHUBDoNQ4AciwMrsdw5l3rcJErjcRfdFefPCqeDzaWzP0eautzDDyi1hGcgf5mWv848qYeelmoabUkmxAo+pSWbadHqaQXUmugQYjuWfGQik3exd5F3XiYLarwqOgLklIaKB2THU1D0T1BCvgZvfBur8k7X5ZuzS8AGeM890HXRxQKGcmnvM6yTJDFqqKKyY71fE8L/KF3hURRaEwCabTLtfqFk91lXcMCGxwvLF3/AE3mUGYy0cwdnq3iCJwysKDyVRaae0yMAEGGqrpovRQq/FHh6S0vXESxWICocPAEqbZPwoT3ysUkiaWy6et72m5ohhrXmMUtYtkbpAPMKzlOWglkPwEFKy+g34UIdoySXX3ZcjIqeCcWQw7x0nOLh2vUNOLqjiky3oJRCLlq1NVaN5+Dxyry8YxmeLX0e54ylJej2XRbIpqpxtVRF2s20g2Owy4iIXgjdr1wY19htUfQr5wbwWkXdVNWaKKxe0R2zdY6gzijP0SHZ0bO+wYmxwu+95hB6cllNqyptN5PT5YWzz404CJ5HW49na7HIvQr3k6kF2Hou8P3KX3njTrA+rulUdWzNLKl4cTCbivltij9yOOtTnsamIBFyeuHvKD0zlP2IXlRSjv5CQH14Ak5pkvOQ4Fo6Ena0nktTFkBMbtMt+ElGnPOtJ6vp/s4zjLQ77fAWs3j/58Hm/n7iY8qnSWnCjKJiLLcl67LpRKmT1GlUGhTWfg7fPOjrMZSdDL1dg5O8a8STC7tFom13Gh1hCID1VaVTrSJnLS5SQWWS2L+0SUtpNZnSTeNNLZ9S61zNU7b0CgN3POytFP89sesUQFaqPaJdy2tZU8XDAvFOXjbdhfGyEvshPT3pj94YClPtP37NiG2YgNZ77LdBFtJInsM5o592qHmVQIYzDvsHPSE1FCN2IvO0MuqjWHGD2Ic8bjVpizf9tOe84Dbr5KAxfJ4fbFCkmq7NKGr3YmuVymcqdSUKG0Pr9y3lawPH7DUhg2hukaxrdDkZI3TaEp3mQpzs+WC8dPy2HP5dZQ5JYtSoQ4hPjHeZIeuxIeaVvFAfjB5lJ0y5JZ5MMOFXEIfPvKL61PiIu1KVXlH2Hi6dbrZPpIKb8yel3sY9n0y4U6yAnXitm0Wwpzrw3zXhXERho2sgNi4qj3CyIvwOOPIkaONNmf+YMjUiavp+rjzqXbaUki092ca96GLGs/6oxLqNU0FMaXHYF/oKbEKKfclMqDyi2Yno4XoTVNVsROrf3A7trDmqUrJQemJNnAJ0j2Akzf8Fsf79sU+DYjK5OaMhBGAqIfbTyrRq4ikBIhoZi3s316lq5YveVZlcIo6XZZkTbikCXkbEmI0Aw6btgegR5QLkTcoMg6dtkeXHuF6R9UYvyC9aYmDEbsPFw1bPUu+8LQUsjzCchV7uXDLmwF43gnRSfW1URuD3uiXVUY912qBrlXRLGm1hFxTbkc2Io/pOwQpMNl3WqBvMpaixD1nPqa6Amb4PL4JslceW3FBaNIW3Uvhn5stus5RwnjkKGV7lP8wLwkhh4Uvds/pp3oFAJb5VrNb/QSI4V4jJ1mj2K9fyCbAzdlhZRV05khW+RsodCM7pOhJa8a8M/S/a0DpuysPZ7M6yVWeY4QaOi+e4Zcj48YBJu7wgqIY42Fu+ChChkMV8LJPxWsTvvGtKVav+eLl3wdIqLkBZvS4o6cwm1vJgWXKIV5aRfvPLld6N3CR07ypLh1hSxd9MjMJC8VohArYdEG+KbKAd0uVHOgVdfiPcA9L620btnrvrfXpTVAHsJmkAIXX9AMRP5n1MXRKFexKF/BV7OYzvExXcbHtlE47cfOesvZuNOp/ZqRCgDTCIZB861mDZ9h0hjNL9erpe87+bZEIV+D8KU+3mwiPcIAxQiBPKd4h0ghaV4cxdodHTZ17psRObaYZvAm4xA2tISRtm5Ga2AdZ41ud/V8r9fII6UjnWardG+4ADW5XX9d1xVoh5bYsqMz2U0KTdGVxKKS3bGsDhF5FUJqLlApUneR43ERJwPOLHTtCue5Kh+ERphbfjslvXGxeY/FDcO4xSX0Ofhmxl1Grve54G0npE0c0+Rso5X5MuV6vl9PZjmLVxDimBeCMcgQ6si9HorAm673sS4TfbMWQwQIdU5VEKfqkc2tET87p+u1GcdWJKV0i+7P2Ul5V6DThnXKcvJF797coacgWDfHxnIcTX1bcYO9F0YcZGwa8Rh6pPhEN/nDhS11yQw
c5cHUooS2F4XGhQVG1sddFBIqPVhB4sPMscKleuUQ3obw2uJXr7lv9GhyVsShMP9/gxnoannOMscT/F0TvHI/pvyc1ulp3sIWPjOlCSOenMV0z4aF+c/FpaN7tg87UvA0DRZ232efBI38TKGflLCJn+k39k5Q6M+vB79/+mfLXne2vPsGitedGf/TpW3lqQ6ZPZe2RR34Rs6P0vaP0vYfLG0rymelbek3S9tn+8rU8L7b4Udp+7uUtu1TxBjYJwVm75MC89u9fVawJj/h6i8F6fPmX3UxxFDYH/LbM6t+VXQ3P5+Z/4WZMQb+Bo1f+ftS9nbLgwKeAfc+4+IJJydi52LbPQdY5gdeP0iyG3DX8XJF26JGXIdhqfXE5LAIp8HK9DF3L213kXoPk8oSOr5QH3hy2fXmWfZ8q/KgW8fvmCV+JCCMpiVnmy8KUIdX84wFq+no1pw12WzfcKq/U82GO7PJYuG2zdRGo7RBTxFDoj0Q0g0G7xSJmTwzCfglF/ORoicjIC6Mvuz5Vk3tSN72WeZp3MNlRIOPEM27CJx5drh1uVqCIqepKJ8WM5fMxuGvZ5h4XJjpbNzvWKfDr1JB+WbtagfJp+YE65rECFtDLZ6dFrFxmo1JGkQdjM3Dxt6BQRm0q9VNvmvXHhbqmigwFdeZ4c5fvik77agUKDWK0PXd9HLLzG1pFcS4SS2ZdmrxxN3OLHZHdtr8/YZS7E6InetwF8uRJoNs5KmMTUrCLvx7UJfdMa+L4tHid87c7ugdCYsDQlcpdVUDSO4kuUv6lDpksstATOk/e/x4OGINwdgS4g17LaizNhKce0Hi/ZTfBiwEhux0o9kJelJom03WU3JJuD0vPKHivM2Ns5f0CHUhUeHawM0HMJpIYKgdssYmUNFg+AMWo/KyeKYpxFnfrQhdk9PtOF83mP3BaVieOq9k4R8YMtEOuyMyjzEaWVexyUizPt3HGAluNnUsYIJqipuOurZm0VI2O9h07MoX8VCumdL1weJIvuJXIFA6dYuxNB65RIu9y9v+eCSBJbBWTbjWUwWjiwkNhquP3Zps2xr8kWfTIz/DycDVqYxIkdF05eTHVRPvkQBj9NgJrB6lqIOiXXNicoDCWVclxL3x5B6A6UycOT/06Erua0ig+X7xah6a5VltYE1+dvsxPCK1FXMkdXMgvQq1eDwPyZv3+zFriWgHc4UNdxzwEKVva7ILl4dPaZW9OH1ua4MwJ25sWNKDcvgqhA5MPB+7mokNma3ptFQm1d3dqWnsYlzivQlMD72dXMQW4Vz4ZsLDQbpcAd1Hl5Q6BXDf3U1NMqspElrlal2BnjoUVFKsbb9jrx3gzCS3KoMG4/XOyhScL2lpSGx3QAWdJLNGnQCyRlwCS8n3dkMLGVdK+jMY/LutFhhALq+f3ExrjWojHwaGWoV3d8x+l98HELKnuzXCM8IDUeodABOBvsHxIVKySzR51OisUzwItnvmTjjenI2IYux6clELg/fywcOkyorOo4MnBll7lbNwna5PRHauPboouS5bXmHONo0kHCwUd86u6BQwWImrO65JN6bQBkNb4PoM+EaADOtLt7hzm+LQlfdz5pmdHl9rkqhkQTMdriuxkJWx9ShLm8/QpQvWD+hm1VtMZShG37IKc5pi0XZx1YUScVSq/CwN25CTAB0r0cFSfrIbXovScjKQu2Ot61cQLt/znQ1YVTs6BywXhA8OsDV0nePy0sjPl16uizRXITIjp65yNtnayFZjLbQwcrnKa4Q2gi71o7J3FX7r5NAaBlwfsSFR2ONUAnkqp8tdlU9upXtFTzP0bQDutOekUuwo4m1SFcdO96Z6MeiAmQH00Wu8+QN+C7ChF1baaR/hGY0bmgn4LcKtqic3Ev55ZZ6JNdlAV6K0XSGYVd7XpFZHbKVpKjL2tjQTif5MUQhD+xMetFRqeZfRGkA0t5jGGp3BUmlrZW9Td7BS3KN4xYReHV/ZJVQXAc1SLMHUJ50aTEbKOiR4cH7Nia1Zyq7iFsHyiKbmoJNgDpGppJMBq1zXRgGMRfOQ92G+F0gEh0yqSXNKwbdmlaNhVzV+PYcPAknO+jFqHuROWF0d2dPkw2xOoo3vUVOZL4ZxEWOYasNCIzrl7WRgwRFzVw7Lh1zgDurYtXsijqgEWEgt8DjEGmhnQ+NK8pqjq56HY2dt6l1vF8Kzgiw3RZc9n/2rXbpMc2pCS6lPdpq6hiUbbO7oeZtpvAlz/rEjXEXipoc37TEA43XIm3DAdipTCq51tZsaKjdPmO/jtV/bXbd5klRobqpxcKdJvR03NqXk2VRyXCI0XTi4uMd4ZlNkj7a+qt0oAQTvYNIQmcA6ZUQxR8lOSMe80kys+8rW1noe3aGaalmrnRd/Dwgppqt4gQqGanapeQks1+nr3nMVcU9LdjA3YvZUxwgZKrLD/lKuNyfkD6KcPgZVS485S21A7qndY2aFBjc45yQonWC47XBU9w/w6P1g7QtKKRvliBR3vbGdQT6CSNdFiM4QIpc+1nUQaGY7U/lDLDcmOupQbjrNki3Jio9hmV6btqJFajde+svOzaZT6Svj0GsA5lwtqZzZBjC7oEbao1s6QXZamdtLQvosoTgdQ3D7LUzZtRmtt5yGB5WcktKhQxbHvtqDMhTFHt9uRbrxdpglukbeAi7nKOWAVWKWc1uaqpf9UG76pjUbZQHluNT2LEGj7+ctNK9aJtCHNHWYYUzDlu3tk5Rkt4S5X24O0Dp7l0oUC5vg4VYXid+W0oYZ9nUiYJYy3jREYJOKHhYYFACfDaj5vtTTNH3KSbxTWoIg0Z+x331jhqI/5C4+zkm8/hXW989IoP+CjIQK99ZzyUtG4tZUO/ZHRuJHRuIPZiTU+NOMBM/8ZkbCPgKbuP+x2f47brY/SeTfdbO9+ddttp9jQz9DrMb902Z7JZVpv7rm+kGXNL4vEIcvuJfilqJx8t70GEl2D9Le3O00TxLMO1/w2QHuys8qV+LMe1YNRd7ku8sZ3CxpB4Isi93kRc+dtH3fwNjAxxO1ucHdlXg4sTPG19N9ou7U0N+uN6uu7L3C3Uu+IfrSmHYW5ga5hoYpj2RWTQ44txvS/Y4ZT9kKY3OnvnJm4uCUmy6Rsw4ghICe6ojREek/Ts5JLsWZihiNHE5zs5Fab4F/PRJOKtTuaDyHcqiX+bViQWTn3jv3fnmsuZdUh9Qiy86HgV1mnO9IyYH4bijuKLBaTy8STHpuKTjMUMgJRpdFHNdtquwUWRVewuDkerPr07E2JwF4k1nRiw9I0Qu8FqF3NGzRZSK7s8SRO6nKnWNbXoxNF4pDi+zI5kNcY1VnEJLrm4/QKA4izxznHgTGoC4X0LKts0DJ2pfknKtGs2UoydgCXwloaI3NUuuhqrQFslkYFvqDP1ZKcejljGGzBbtIyf2qXvvz2AQwkji1IFC5FUyVimVwrV5XcWSVHLi5EaTWcbruqsTyWAduvoSXATmJMbZ2Z9Ax3Ch8ckRNm5MFBA61pSQTLCRdFgEEFJW5rugwXQ3s6g2KvMKY+4r5ntzeh9wAHT
2I0yxhodWzCPe8+PpxC6brcQbrvxrAiMnK2qHZfrhupWgckUNZhiNOo9RpgPFiUR8r+jht1H4g8V13Wh+J34YbGgqDYtfbxu9vXeucRjznjtXSaEpa7B4AlgXJXroWCQ5aHNZ4rYvOUG+32Rwe+sOSHfYinVa/oslh8e/nCe8b3mCnkkxp6DBqGhHZxx0asPGFGC4GxF9Yb/58nPGnLffnYKjFx8vmoOV+3qXpORgv1UkuHll+i6YrGNqpIB5ccFU+jlU4vuSY9pvby+UWJyCSvZ56tMR2slDlATSGFCvVsdMgdx7QuuS16pge8sGmCSa2Kxfcd57pktaH1fe6p2gOYH3Y7ZnE6Q56uYb61iF0doxu5zCE2QHjUt0DJhaBgmQdi61TB2/WgxXM15UNRl15yoTNu/k646SzkQweTEMHJHbzdHGc0AC9wW3CTv4oLQmElul+1cxN5CHirzadzLBO6fq3JqmLc/9UoaZsdcZ6fT9dbz126rsmWW92w3MjV9bggys4gL9UipFZTt6mqgCMZ84d0qEvSbR4i/iTA0W547bilqszhGx/PbAndEi8Fr6CA1GkLZF9M6igyq5dufgWruYHe946X81OHUKRgEMXRUkiELZjhwVtNPRAoNDnWElvXHyYwYOvX7CdbSLTAhQqV6AUqwr3i/ykCRo0PBL4sDO5VKdXbrg4qlJilu+gfLp4PghcjR7O8qoMgMFoNFGW6dmqkJ7jk3uXQ4lxD6hXxQ+vvDSn6YnSM3oPTP9IGdWDIPyTjzs0O2GbFRRY4N5xMyo0bKzlqxy35s6nToTnLHPSIUwMd4WR6mTJCBkuB9ZILysVXwseeHgnNctEjqOOVeBYtxW4KXZ0aLnOt5tjt6crC2beYty69Ge4xSq9+3LriIej6CUrYSmL5zoFDG3PE0rr1XG7jVhxLDayRVdU5r3lMXJSgoggGF7SazFim2/enQfMfBlU4sOtbcWpfbIqogj0MAjlvU0JDrXFaVnmFbjLGC/vF+AW0FYXW33K4SjmNI0i8GJjL8VqTEjgHGCkedpNaupLHndgBYa1AF79Xdxh3mFniVkMkUqXXCybd086Wgpv5JD6wamyL7u9K3viwzoi0ItMVSFg293DMh4HWPbnG85koXkxSJplyKUeqaFgbidlxJj4aivbzuhQJmyZg7EwHta5VBQYwaNU9ZvGesJpiIM7ABWzmwt5DcSs1cTkhmmSmBEMZvo8BO9JiwJAmVDDriyts9Bvv3JOvB3ulAlRZZRHL270PdyFIPMLdTOLjA5O0TUcMSwIBYLeM4E/HxQs4B53S8f3Qifs3H3rNakcC9a+67G93d0xOhj3S+AWh21BwhhXMbhRDDUytrnOFGuXl2NIJOodvkkSyOWOPeW2NGedFUFvolOVhL1LvK1LKjkNt6V7oKuvVdsE1ZzHuCLlMyl1E7KUUhi4jbszuVFLxBt3MVB66NaQ3fzzo8QNm+/kVh81LnHvPjLsDptTUje72Ke8GTIPSYuGwFQunptgxIE5ZrGxhvXZETF/ExXGFH1xYoHaIfxdIcMliPBVIXl9eBKfX2+nDqfrR+/Jp5pwj6W7wwps268XOasT7JYmMGM10uIUhbf0BNOtl6NfrFo0e4db1R+f7FQ8x5gWvCSUZw+nsXMM3Z85ARranvwBC4C3ps/YpjV0pCUzNlOPUN/pR+QRu3nj9BzUTJvHeBG7QJC2h6cIaylKYCLOYpauj4iUkQN7oPeGLJ0vIUcGtwfBeJfzAeqZAw+8wOPSQR3A2+lBIs8MrAMp1iLK67XWaDjRVFcfue2/2Oe7X65Kb63btsrhImDRUdU4EudRJpv53ncuYc94FtwI7M6ydEgULQVNMB5KM7umrmZl7kS1fQj5LlORS2wg2OEapaaFJMtgE4w+GqXbosIQErCWIqTMg1sCqxK8O9wZW0birV1Yk/ZAoMGPZe3CzN7+GDdaWWEaqWO95EdAs/fuESiWe36mctrD7yxJdK4i8nejOUd3hkhrP1iFlamdmjYc5dqTzOV2osO9KKc2eI6ZmDix1ybjT+7oc3N5mboCGgYkeBA714AuZbwruO74YJ3j7s4nojW59yr38n2Q26uTInmZ+5V0b2EikNsbaL0+gLPzeI63pEo+lcdpXwvC++R4qM+OTqB/neChqZ9p+o0Ez39/ZgfcbNjCw2l+OXyhD7rMbG8xvOP/Aw==</diagram></mxfile>
2212.12192/main_diagram/main_diagram.pdf ADDED
Binary file (59.6 kB). View file
 
2212.12192/paper_text/intro_method.md ADDED
@@ -0,0 +1,83 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Question generation (QG) is a text generation task focusing on generating a question for the given text paragraph with or without the answer [@heilman-smith-2010-good; @labutov-etal-2015-deep]. It is a challenging task because QG models need to understand the context paragraph to correctly generate the corresponding question [@ma2020improving; @Jia-Ques-Gen-ACL-20; @fei2021iterative; @huang2021entity]. The output of QG models can be adapted to other applications such as summarization [@yin2021summary], poll QG for social media posts [@lu2021engage], QG for news stories [@lelkes2021quiz], or adaptive education [@srivastava2021question].
4
+
5
+ In the normal setting, a question can be generated based on the given context, which is usually a paragraph [@heilman-smith-2010-good; @labutov-etal-2015-deep]. However, the generated question lacks information about the answer, which creates challenges for QG models. We therefore follow recent studies focusing on the QG task that requires QG models to take the answer (a specific aspect) into account for generation [@Rajpurkar-SQuAD-EMNLP-16; @ma2020improving; @Jia-Ques-Gen-ACL-20; @fei2021iterative; @huang2021entity]. An example of QG is shown in Table [\[tab:exp\]](#tab:exp){reference-type="ref" reference="tab:exp"}. Given an answer mentioned in the context, QG models are required to semantically and grammatically generate the question that is relevant to the answer and the context.
6
+
7
+ Prior studies addressed QG by using rule-based and pattern-based approaches [@heilman-smith-2010-good; @mazidi-nielsen-2014-linguistic] or language resources such as ontology [@labutov-etal-2015-deep]. These approaches, however, are time-consuming and labor-expensive due to the involvement of humans in rule management. Later, recurrent encoder-decoder approaches (mainly using LSTM) were applied to this problem [@liu2019learning; @yu2020survey; @tuan2020capturing]; however, results were quite limited due to the complexity of relations between the context, answer, and question. Recent work has adopted pre-trained language models (PrLMs) such as UniLM [@dong2019unified; @bao2020unilmv2], ProphetNet [@qi2020prophetnet], or ERNIE-GEN [@xiao2020ernie], which achieve the best results for QG. However, the utilization of these PrLMs for QG by considering specific information (i.e. the answer and its relevant information in the context paragraph) is still an open question. Let's take Table [\[tab:exp\]](#tab:exp){reference-type="ref" reference="tab:exp"} as an example. The gold question asks about `‘‘IBM"` and the answer is `‘‘International Business Machines"`. As we can observe, the information of the gold question and the answer can be found in two relevant sentences highlighted in red. We argue that QG models should utilize information from relevant sentences regarding an answer to provide more indicators for the generation.
8
+
9
+ ::: table*
10
+ +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
11
+ | Context | The company originated in 1911 as the Computing-Tabulating-Recording Company (CTR) through the consolidation of The Tabulating Machine Company, the International Time Recording Company, the Computing Scale Company and the Bundy Manufacturing Company. [CTR was renamed \"[**International Business Machines**]{style="color: blue"}\" in 1924, a name which Thomas J. Watson first used for a CTR Canadian subsidiary. The initialism **IBM** followed.]{style="color: red"} Securities analysts nicknamed the company Big Blue for its size and common use of the color in products, packaging and its logo. |
12
+ +:==========:+:==========:+:===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+
13
+ | Answer | [**International Business Machines**]{style="color: blue"} |
14
+ +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
15
+ | Gold Question | What does **IBM** *stand for*? |
16
+ +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
17
+ | Relevant sentences | [S1]{style="color: red"}: CTR was renamed \"International Business Machines\" in 1924, a name which Thomas J. Watson first used for a CTR Canadian subsidiary.\ |
18
+ | | [S2]{style="color: red"}: The initialism **IBM** followed. |
19
+ +------------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
20
+ | Prediction | UniLM | What is the name of CTR? |
21
+ | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
22
+ | | ProphetNet | What is CTR? |
23
+ | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
24
+ | | ERNIE-GEN | What does the CTR *stand for*? |
25
+ | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
26
+ | | Our model | What is **IBM**? |
27
+ +------------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
28
+ :::
29
+
30
+ In this paper, based on this observation, we introduce a new method that explores the relevancy of sentences inside the context to provide distilled information for the generation. Our hypothesis is that the information of an answer can be found in some relevant sentences in the context paragraph. To capture that, we design a model that composes two modules: a selector for relevant sentence selection and a generator for question generation. The selector selects relevant sentences in the context without requiring any additional annotation. The label of relevant sentences is automatically assigned during training, based on the semantic similarity between sentences and the answer. In particular, the selector receives sentences of the context and predicts labels (relevant and non-relevant) for each sentence. Information from the selector is used to support the generator (which uses PrLMs) in the form of joint training. In summary, this paper makes two main contributions as follows. (**i**) We design a new model which utilizes the strength of PrLMs by considering the relevant sentence selection task for QG. The model is trained jointly to take advantage of local information (relevant sentences selected by the selector) and global information (the whole paragraph context encoded by the encoder for generation) from two tasks in a unified architecture. By doing that, we provide a general way to enhance PrLMs for QG by considering relevant auxiliary tasks, e.g. sentence selection. The model is simple but effective and can be easily adapted to other text generation tasks. (**ii**) We show the effectiveness of our model by comparing it to strong baselines, including methods using PrLMs and methods not using PrLMs. Experimental results show that the proposed model is the best on two benchmark datasets. We also validate the model from multiple perspectives such as human evaluation (Section [5.2](#sec:human-eval){reference-type="ref" reference="sec:human-eval"}), joint training vs. two-step training, and auxiliary tasks (Section [5.3](#sec:ablation){reference-type="ref" reference="sec:ablation"}).
31
+
32
+ # Method
33
+
34
+ Early work on question generation usually used rule-based approaches. @heilman-smith-2010-good used an overgenerate-and-rank framework to automatically generate questions. @mazidi-nielsen-2014-linguistic shared the idea of using the template-based method by applying semantic role labeling to identify patterns in the text. @labutov-etal-2015-deep focused on generating questions in a broad scope by utilizing an ontology-crowd-relevance workflow. Instead of using rules which are labor-expensive for definition, we utilize the power of PrLMs which can automatically learn hidden patterns for the generation task.
35
+
36
+ The QG task can also be formulated as a text generation task by using the encoder-decoder architecture. The architecture can vary across several methods such as an iterative graph network-based decoder [@fei2021iterative], the graph neural network [@huang2021entity], sentence-level semantic matching [@ma2020improving] with some techniques such as attention mechanisms, copy and pointing mechanisms [@yu2020survey]. We follow the direction of using generation models for QG but make two differences. First, we utilize PrLMs instead of training QG models from scratch because of the efficiency of PrLMs for text generation [@dong2019unified; @bao2020unilmv2; @qi2020prophetnet; @xiao2020ernie]. Second, we empower PrLMs by designing a new auxiliary sentence selection task for QG.
37
+
38
+ In recent years, PrLMs have been widely applied to various NLP downstream tasks, including text generation [@li2022survey]. For question generation, UniLM [@dong2019unified; @bao2020unilmv2], ProphetNet [@qi2020prophetnet], and ERNIE-GEN [@xiao2020ernie] achieve the best results on several benchmark datasets. We therefore extend these PrLMs by designing a new auxiliary task for QG. The new task is jointly trained with PrLMs in a unified model to generate high-quality questions.
39
+
40
+ <figure id="fig:model" data-latex-placement="!h">
41
+ <embed src="figure/model_architecture.pdf" style="width:100.0%" />
42
+ <figcaption>The proposed model which jointly learns sentence selection and text generation. Red sentences in the selection part are relevant sentences detected by the Selector.</figcaption>
43
+ </figure>
44
+
45
+ Joint training has been explored on multiple NLP tasks such as machine translation [@zhang2018joint], question answering and generation [@qg1; @sachan-xing-2018-self], sentiment analysis and summarization [@10.5555/3304222.3304361], or incomplete utterance restoration [@inoue2022enhance]. We share the idea of using a joint model for QG with @sachan-xing-2018-self; however, our model differs in two points. First, instead of generating answers, we design a new auxiliary task that selects relevant sentences to support the generation. Second, we increase the capability of PrLMs by jointly training PrLMs with the auxiliary task. We also share the idea of joint training PrLMs with @inoue2022enhance; however, we select relevant sentences instead of extracting omitted tokens from dialog context. In addition, we focus on the written language while @inoue2022enhance focus on dialogue. Our model is also relevant to the coarse-to-fine approach for text summarization [@tan2017neural; @li2018improving; @xu2020coarse]. However, instead of designing a method that includes the information filter and summarization sequentially, we design a joint model for sentence selection and generation. The design can take advantage of joint training, which benefits both selector and generator. Also, we extend the coarse-to-fine approach to QG, which is different from the summarization task.
46
+
47
+ Given a paragraph as the context $C = \{s_{1}, s_{2},..., s_{n}\}$, where each sentence is a sequence of tokens $s_{i} = \{w_0^i, w_1^i,...,w_m^i\}$, and a provided answer represented as a span of tokens $a=\{w^a_0, w^a_1,...,w^a_t\}$, the model needs to generate an appropriate question $Y$. We formulate the task as text generation, which includes two steps: *encoding* and *decoding*. For encoding, a strong encoder $g$ is used for producing the encoded vector $h = g(C, a| \theta)$. This encoded vector carries the meaning of the context, the answer, and their semantic relation. For decoding, the decoder takes the encoded vector from the previous step as the input, and sequentially generates question tokens based on the previously generated token $y_t = f(h, y_{t-1}| \Theta)$. The parameters $\Theta$ can be learnt by fine-tuning PrLMs, e.g. UniLM [@dong2019unified; @bao2020unilmv2], ProphetNet [@qi2020prophetnet], and ERNIE-GEN [@xiao2020ernie], or our joint model on question generation datasets.
48
+
49
+ Our model is described in Figure [1](#fig:model){reference-type="ref" reference="fig:model"} which utilizes the power of PrLMs [@dong2019unified; @bao2020unilmv2; @qi2020prophetnet; @xiao2020ernie] pre-trained on a huge amount of data. Given a pair of context and answer, the model firstly concatenates the context and the answer into a sequence of tokens. This sequence is then put through the encoder where the meaning of the whole sequence is captured and represented as an encoded vector. To support the encoder to focus more on important parts of the input, we define a sentence selector formulated as binary classification. The selector uses labels automatically created during training. For the decoder, we simply follow the teacher-forcing training strategy to train the generator. The selector and generator are jointly trained in a unified model.
50
+
51
+ We adapted the encoder of strong PrLMs [@dong2019unified; @bao2020unilmv2; @qi2020prophetnet; @xiao2020ernie] for encoding inputs. The encoder uses the Transformer architecture [@Vaswani-attention-NISP-17], which stacks multiple encoder blocks for converting input sequences to hidden vectors. After transferring information through every block of the encoder of PrLMs, we obtain a list of embeddings representing the input tokens as the encoder's final output.
52
+
53
+ We followed prior work to concatenate the context and the answer by using the separator token \[SEP\] [@dong2019unified; @bao2020unilmv2; @qi2020prophetnet; @xiao2020ernie]. We split the sequence into smaller text blocks. The entire input sequence $S = \{w_0^i, w_1^i,...,w_m^i, w^a_0, w^a_1, ...,w^a_t\}$, where $w^i$ is a context token in the sentence $s_i$ and $w^a$ is an answer token in the answer $a$, can be represented as $S = \{B_{1}, B_{2}, ..., B_{n}\}$, where $B_i$ is a text block.
54
+
55
+ Each block was fed into the encoder, which stacks $L$ smaller encoding blocks, to produce the final contextual representation vectors $h_{L} = Encoder(h_{L-1})$. The word embeddings were averaged as the input of the decoder part. These embeddings were also reconstructed into block embeddings by a memory mechanism. The reconstructed vectors were used for relevant sentence classification. This mechanism can be understood as a simple way to locate the position of words in a sentence and then reconstruct it after getting all the word embeddings.
56
+
57
+ It is possible to directly apply strong PrLMs [@dong2019unified; @bao2020unilmv2; @qi2020prophetnet; @xiao2020ernie] for generation. However, as observed in the example in Table [\[tab:exp\]](#tab:exp){reference-type="ref" reference="tab:exp"}, the information of questions and corresponding answers can be found in some relevant sentences in the context. Hence, directly using PrLMs may ignore the implicit relationship between the answer and its relevant sentences. Therefore, we design a selector that detects relevant sentences for the generation. The detection was formulated as a binary classification task on the sentence level. We designed a simple architecture with two feed-forward networks for classification. The architecture receives vectors from the encoder and outputs labels for each sentence.
58
+
59
+ The first step of training the selector is to create labels for each sentence. We assume that an answer has a strong implicit relation with some relevant sentences in the context. Suppose we have a PrLM $g$; given an input sequence, the PrLM maps the input into an encoded vector. Given an input sentence $s_i$ and an answer, the PrLM outputs two corresponding vectors $h_s^i = \{h_i^0, h_i^1, ...,h_i^m\}$ for the sentence $s_i$ (with $h_i^{j} \in R^d$ the encoded vector of the $j$-th token) and $h_a = \{h_a^0, h_a^1, ...,h_a^{m'}\}$ for the answer. The overall information of the whole sequence can be considered as the combination of all token vectors, therefore we can use the average vector $h = \frac{\sum h_i^j}{m} \in R^d$ to represent the sequence, for both the sentence $s_i$ and the answer. If the sentence $s_i$ contains information of the answer, its hidden vectors should be semantically similar. We used the Cosine similarity to measure the relevancy of the sentence $s_i$ and the answer as follows. $$\begin{equation}
60
+ sim(s_i, a) = cos(h_s^i, h_a)
61
+ \end{equation}$$
62
+
63
+ The top $k$ sentences with the highest similarity scores were selected as positive samples with the label of 1 and the rest are negative samples with the label of 0. The observation of selecting $k$ is shown in Figure [2](#fig:top-sents){reference-type="ref" reference="fig:top-sents"}. For creating an encoded vector, we used two PrLMs: BERT [@DCLT-NAACL-19] and BiEncoder [@reimers2019sentence]. The comparison between the two PrLMs is shown in Table [5](#tab:embeddings){reference-type="ref" reference="tab:embeddings"}.
64
+
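+ A minimal NumPy sketch of this labeling step, assuming the sentence and answer embeddings have already been produced by a PrLM and mean-pooled:
+
+ ```python
+ import numpy as np
+
+ def make_selection_labels(sent_vecs, ans_vec, k=2):
+     """Mark the top-k sentences most similar to the answer as relevant (1), else 0.
+
+     sent_vecs: (n, d) mean-pooled sentence embeddings; ans_vec: (d,) answer embedding.
+     """
+     sims = sent_vecs @ ans_vec / (
+         np.linalg.norm(sent_vecs, axis=1) * np.linalg.norm(ans_vec) + 1e-8)
+     labels = np.zeros(len(sent_vecs), dtype=int)
+     labels[np.argsort(-sims)[:k]] = 1      # highest cosine similarity -> positive
+     return labels
+ ```
+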
65
+ Once labels have been created, we designed a simple architecture for relevant sentence selection. We used a linear layer followed by a sigmoid layer to estimate the probability of the sentence $s_i$ being relevant as follows. $$\begin{equation}
66
+ p^i = \frac{1}{1 + \exp(-o(h_s^i))}
67
+ \end{equation}$$ where $h_s^i$ is the encoded vector of the sentence $s_i$. For training, the cross entropy loss was used. $$\begin{equation}
68
+ \mathcal{L}_{selection} = -y_i \log(p^i) - (1-y_i)\log(1-p^i)
69
+ \end{equation}$$
70
+
71
+ For backpropagation, the gradient accumulated from the sentence selection loss mostly affects the encoder. Therefore, this strategy can improve the model by forcing the encoder to focus more on particular sentences. We also explore other auxiliary tasks and the results are shown in Table [4](#tab:supervised-task){reference-type="ref" reference="tab:supervised-task"}.
72
+
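+ A PyTorch sketch of the selector head and its loss (hidden size and batch shapes are illustrative):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class SentenceSelector(nn.Module):
+     """Linear layer + sigmoid scoring each sentence embedding as relevant or not."""
+
+     def __init__(self, d_model=768):
+         super().__init__()
+         self.scorer = nn.Linear(d_model, 1)
+
+     def forward(self, sent_embs):          # (B, n_sents, d_model)
+         return torch.sigmoid(self.scorer(sent_embs)).squeeze(-1)   # p^i per sentence
+
+ selector = SentenceSelector()
+ probs = selector(torch.randn(2, 5, 768))      # 2 contexts, 5 sentences each
+ labels = torch.randint(0, 2, (2, 5)).float()  # auto-created labels
+ loss_selection = nn.functional.binary_cross_entropy(probs, labels)
+ ```
+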
73
+ It is possible to use any PrLMs for text generation; however, our model follows three strong PrLMs, including UniLM [@dong2019unified; @bao2020unilmv2], ProphetNet [@qi2020prophetnet], and ERNIE-GEN [@xiao2020ernie], for the generator because they are very competitive for QG. These PrLMs use the encoder-decoder architecture for training text generation models based on Transformer [@Vaswani-attention-NISP-17]. For the encoder, the overall meaning of the context $C$ and the answer $a$ is embedded in the pooled vector $h$. For the decoder, the output sequence is recursively decoded by multi-Transformer layers. The probability of token $t$ is calculated using the softmax function: $$\begin{equation}
74
+ p_t = softmax(f(h, y_{<t}, \psi))
75
+ \end{equation}$$ where $\psi$ is the weight matrix. We employed Teacher Forcing [@krueger2016zoneout; @merity2017regularizing; @serdyuk2017twin] for training. Instead of decoding the next token using previously predicted tokens, we used the correct previous token. This training technique is commonly used in sequence-to-sequence models. The objective is to minimize the negative log-likelihood of the conditional probability between the predicted outputs from the model and the gold sequence $R=\{r_1,r_2,...,r_k\}$. $$\begin{equation}
76
+ \mathcal{L}_{generation} = -\frac{1}{k} \sum_{t=1}^{k} \log {(p_t \mid R_{<t}, \psi)}
77
+ \end{equation}$$
78
+
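+ In implementation terms, the teacher-forced objective reduces to token-level cross-entropy on the gold sequence; a hedged PyTorch sketch:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ def generation_loss(logits, gold):
+     """Token-level cross-entropy under teacher forcing.
+
+     logits: (B, T, vocab) decoder outputs given the gold prefix R_{<t};
+     gold:   (B, T) token ids of the reference question R.
+     """
+     return nn.functional.cross_entropy(
+         logits.reshape(-1, logits.size(-1)), gold.reshape(-1))
+ ```
+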
79
+ We used the setting of multitask learning to jointly train the selector and generator. Different from PrLMs [@dong2019unified; @bao2020unilmv2; @qi2020prophetnet; @xiao2020ernie] directly applied to generation, the joint training process allows the model to take into account implicit indicators from the two tasks together for the generation. Also, instead of sequential training, we combined the two losses using a linear combination. The combination allows the model to reduce accumulated errors from the two-step training procedure. $$\begin{equation}
80
+ \mathcal{L} = \lambda \mathcal{L}_{selection} + (1- \lambda) \mathcal{L}_{generation} \\
81
+ \end{equation}$$
82
+
83
+ where $\lambda$ is a hyperparameter that balances the influence of each task on the model. This simple setting requires minimal implementation effort and facilitates confirming the contribution of relevant sentence selection to generation. We also investigated the behavior of our model with two-step training. Experimental results are shown in Table [3](#tab:2-step-joint){reference-type="ref" reference="tab:2-step-joint"}.
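+
+ Continuing the two sketches above (with `loss_generation` the output of `generation_loss`), the joint objective is a single line; the value of $\lambda$ here is illustrative.
+
+ ```python
+ lam = 0.3   # illustrative value; tuned as a hyperparameter in practice
+ loss = lam * loss_selection + (1 - lam) * loss_generation
+ loss.backward()   # one backward pass propagates both task signals to the shared encoder
+ ```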
2301.07300/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1,120 @@
1
+ <mxfile host="app.diagrams.net" modified="2024-01-15T05:16:22.591Z" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.1 Safari/605.1.15" etag="NuzyPO0exnoWEobd8b1_" version="22.1.18" type="device" pages="2">
2
+ <diagram id="otFNdFyq1yh4opo2cXMN" name="第 1 页">
3
+ <mxGraphModel dx="954" dy="647" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="827" pageHeight="1169" math="1" shadow="0">
4
+ <root>
5
+ <mxCell id="0" />
6
+ <mxCell id="1" parent="0" />
7
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-1" value="" style="endArrow=none;html=1;rounded=0;strokeWidth=2;" edge="1" source="lMe2S_kzvvb8Wqns4KNy-4" target="lMe2S_kzvvb8Wqns4KNy-10" parent="1">
8
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
9
+ <mxPoint x="300" y="330" as="sourcePoint" />
10
+ <mxPoint x="337" y="377" as="targetPoint" />
11
+ </mxGeometry>
12
+ </mxCell>
13
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-2" value="" style="endArrow=none;html=1;rounded=0;strokeWidth=2;" edge="1" source="lMe2S_kzvvb8Wqns4KNy-8" target="lMe2S_kzvvb8Wqns4KNy-4" parent="1">
14
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
15
+ <mxPoint x="267" y="406" as="sourcePoint" />
16
+ <mxPoint x="310" y="330" as="targetPoint" />
17
+ </mxGeometry>
18
+ </mxCell>
19
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-3" value="" style="endArrow=none;html=1;rounded=0;strokeWidth=2;" edge="1" source="lMe2S_kzvvb8Wqns4KNy-7" target="lMe2S_kzvvb8Wqns4KNy-4" parent="1">
20
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
21
+ <mxPoint x="250" y="370" as="sourcePoint" />
22
+ <mxPoint x="290" y="320" as="targetPoint" />
23
+ </mxGeometry>
24
+ </mxCell>
25
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-4" value="&lt;font style=&quot;font-size: 18px;&quot; face=&quot;Times New Roman&quot;&gt;&lt;b&gt;$$v_1|4$$&lt;/b&gt;&lt;/font&gt;" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#ffe6cc;strokeColor=#000000;strokeWidth=2;" vertex="1" parent="1">
26
+ <mxGeometry x="270" y="280" width="40" height="40" as="geometry" />
27
+ </mxCell>
28
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-5" value="&lt;font style=&quot;font-size: 18px;&quot; face=&quot;Times New Roman&quot;&gt;&lt;b&gt;$$v_2|3$$&lt;/b&gt;&lt;/font&gt;" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#ffe6cc;strokeColor=#000000;strokeWidth=2;" vertex="1" parent="1">
29
+ <mxGeometry x="340" y="280" width="40" height="40" as="geometry" />
30
+ </mxCell>
31
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-6" value="&lt;font style=&quot;font-size: 18px;&quot; face=&quot;Times New Roman&quot;&gt;&lt;b&gt;$$v_3|3$$&lt;/b&gt;&lt;/font&gt;" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#ffe6cc;strokeColor=#000000;strokeWidth=2;" vertex="1" parent="1">
32
+ <mxGeometry x="420" y="280" width="40" height="40" as="geometry" />
33
+ </mxCell>
34
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-7" value="&lt;font style=&quot;font-size: 18px;&quot; face=&quot;Times New Roman&quot;&gt;&lt;b&gt;$$v_4|2$$&lt;/b&gt;&lt;/font&gt;" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#dae8fc;strokeColor=#000000;strokeWidth=2;" vertex="1" parent="1">
35
+ <mxGeometry x="230" y="340" width="40" height="40" as="geometry" />
36
+ </mxCell>
37
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-8" value="&lt;font style=&quot;font-size: 18px;&quot; face=&quot;Times New Roman&quot;&gt;&lt;b&gt;$$v_5|1$$&lt;/b&gt;&lt;/font&gt;" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#dae8fc;strokeColor=#000000;strokeWidth=2;" vertex="1" parent="1">
38
+ <mxGeometry x="300" y="340" width="40" height="40" as="geometry" />
39
+ </mxCell>
40
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-9" value="&lt;font style=&quot;font-size: 18px;&quot; face=&quot;Times New Roman&quot;&gt;&lt;b&gt;$$v_6|1$$&lt;/b&gt;&lt;/font&gt;" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#dae8fc;strokeColor=#000000;strokeWidth=2;" vertex="1" parent="1">
41
+ <mxGeometry x="380" y="340" width="40" height="40" as="geometry" />
42
+ </mxCell>
43
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-10" value="&lt;font style=&quot;font-size: 18px;&quot; face=&quot;Times New Roman&quot;&gt;&lt;b&gt;$$v_7|3$$&lt;/b&gt;&lt;/font&gt;" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#dae8fc;strokeColor=#000000;strokeWidth=2;" vertex="1" parent="1">
44
+ <mxGeometry x="230" y="230" width="40" height="40" as="geometry" />
45
+ </mxCell>
46
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-11" value="&lt;font style=&quot;font-size: 18px;&quot; face=&quot;Times New Roman&quot;&gt;&lt;b&gt;$$v_{10}|1$$&lt;/b&gt;&lt;/font&gt;" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#dae8fc;strokeColor=#000000;strokeWidth=2;" vertex="1" parent="1">
47
+ <mxGeometry x="380" y="230" width="40" height="40" as="geometry" />
48
+ </mxCell>
49
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-12" value="&lt;font style=&quot;font-size: 18px;&quot; face=&quot;Times New Roman&quot;&gt;&lt;b&gt;$$v_9|3$$&lt;/b&gt;&lt;/font&gt;" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#d5e8d4;strokeColor=#000000;strokeWidth=2;" vertex="1" parent="1">
50
+ <mxGeometry x="310" y="230" width="40" height="40" as="geometry" />
51
+ </mxCell>
52
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-13" value="&lt;b style=&quot;border-color: var(--border-color); font-family: &amp;quot;Times New Roman&amp;quot;; font-size: 18px;&quot;&gt;$$v_8|3$$&lt;/b&gt;" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#f8cecc;strokeColor=#000000;strokeWidth=2;" vertex="1" parent="1">
53
+ <mxGeometry x="270" y="180" width="40" height="40" as="geometry" />
54
+ </mxCell>
55
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-14" value="" style="endArrow=none;html=1;rounded=0;strokeWidth=2;" edge="1" source="lMe2S_kzvvb8Wqns4KNy-9" target="lMe2S_kzvvb8Wqns4KNy-5" parent="1">
56
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
57
+ <mxPoint x="271" y="406" as="sourcePoint" />
58
+ <mxPoint x="245" y="357" as="targetPoint" />
59
+ </mxGeometry>
60
+ </mxCell>
61
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-15" value="" style="endArrow=none;html=1;rounded=0;strokeWidth=2;" edge="1" source="lMe2S_kzvvb8Wqns4KNy-9" target="lMe2S_kzvvb8Wqns4KNy-6" parent="1">
62
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
63
+ <mxPoint x="281" y="416" as="sourcePoint" />
64
+ <mxPoint x="255" y="367" as="targetPoint" />
65
+ </mxGeometry>
66
+ </mxCell>
67
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-16" value="" style="endArrow=none;html=1;rounded=0;strokeWidth=2;" edge="1" source="lMe2S_kzvvb8Wqns4KNy-4" target="lMe2S_kzvvb8Wqns4KNy-12" parent="1">
68
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
69
+ <mxPoint x="373" y="436" as="sourcePoint" />
70
+ <mxPoint x="347" y="387" as="targetPoint" />
71
+ </mxGeometry>
72
+ </mxCell>
73
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-17" value="" style="endArrow=none;html=1;rounded=0;strokeWidth=2;" edge="1" source="lMe2S_kzvvb8Wqns4KNy-4" target="lMe2S_kzvvb8Wqns4KNy-13" parent="1">
74
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
75
+ <mxPoint x="383" y="446" as="sourcePoint" />
76
+ <mxPoint x="357" y="397" as="targetPoint" />
77
+ </mxGeometry>
78
+ </mxCell>
79
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-18" value="" style="endArrow=none;html=1;rounded=0;strokeWidth=2;" edge="1" source="lMe2S_kzvvb8Wqns4KNy-5" target="lMe2S_kzvvb8Wqns4KNy-11" parent="1">
80
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
81
+ <mxPoint x="321" y="456" as="sourcePoint" />
82
+ <mxPoint x="295" y="407" as="targetPoint" />
83
+ </mxGeometry>
84
+ </mxCell>
85
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-19" value="" style="endArrow=none;html=1;rounded=0;strokeWidth=2;" edge="1" source="lMe2S_kzvvb8Wqns4KNy-11" target="lMe2S_kzvvb8Wqns4KNy-12" parent="1">
86
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
87
+ <mxPoint x="403" y="466" as="sourcePoint" />
88
+ <mxPoint x="377" y="417" as="targetPoint" />
89
+ </mxGeometry>
90
+ </mxCell>
91
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-20" value="" style="endArrow=none;html=1;rounded=0;strokeWidth=2;" edge="1" source="lMe2S_kzvvb8Wqns4KNy-12" target="lMe2S_kzvvb8Wqns4KNy-10" parent="1">
92
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
93
+ <mxPoint x="413" y="476" as="sourcePoint" />
94
+ <mxPoint x="387" y="427" as="targetPoint" />
95
+ </mxGeometry>
96
+ </mxCell>
97
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-21" value="" style="endArrow=none;html=1;rounded=0;strokeWidth=2;" edge="1" source="lMe2S_kzvvb8Wqns4KNy-13" target="lMe2S_kzvvb8Wqns4KNy-10" parent="1">
98
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
99
+ <mxPoint x="423" y="486" as="sourcePoint" />
100
+ <mxPoint x="397" y="437" as="targetPoint" />
101
+ </mxGeometry>
102
+ </mxCell>
103
+ <mxCell id="lMe2S_kzvvb8Wqns4KNy-22" value="" style="endArrow=none;html=1;rounded=0;strokeWidth=2;" edge="1" source="lMe2S_kzvvb8Wqns4KNy-12" target="lMe2S_kzvvb8Wqns4KNy-13" parent="1">
104
+ <mxGeometry width="50" height="50" relative="1" as="geometry">
105
+ <mxPoint x="433" y="496" as="sourcePoint" />
106
+ <mxPoint x="407" y="447" as="targetPoint" />
107
+ </mxGeometry>
108
+ </mxCell>
109
+ </root>
110
+ </mxGraphModel>
111
+ </diagram>
112
+ <diagram id="wt270x7mVf367z5n2ZCC" name="第 2 页">
113
+ <mxGraphModel dx="954" dy="647" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="827" pageHeight="1169" math="1" shadow="0">
114
+ <root>
115
+ <mxCell id="0" />
116
+ <mxCell id="1" parent="0" />
117
+ </root>
118
+ </mxGraphModel>
119
+ </diagram>
120
+ </mxfile>
2301.07300/main_diagram/main_diagram.pdf ADDED
Binary file (11.6 kB). View file
 
2301.07300/paper_text/intro_method.md ADDED
@@ -0,0 +1,194 @@
1
+ # Introduction
2
+
3
+ Given an undirected graph G = (V, E), a clique is a set of vertices that are pairwise adjacent, and a k-plex [\[Seidman and](#page-7-0) [Foster, 1978\]](#page-7-0) is a set of vertices S ⊆ V where each vertex v ∈ S is non-adjacent to at most k vertices (including v itself) in S. The Maximum Clique Problem (MCP) is to find the largest clique in G, while the Maximum k-plex Problem (MKP) is to find the largest k-plex in G.
4
+
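+ As a quick illustration of the k-plex condition (a sketch of ours, not the paper's code; `adj` maps each vertex to its neighbor set):
+
+ ```python
+ def is_kplex(adj, S, k):
+     """Check the k-plex condition: every v in S has at most k non-neighbors
+     in S, counting v itself (adj[v] is the neighbor set of v)."""
+     return all(len(S - adj[v]) <= k for v in S)
+
+ # A triangle is a clique, hence a 1-plex:
+ adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
+ assert is_kplex(adj, {1, 2, 3}, 1)
+ # Remove edge (2,3): {1,2,3} is no longer a 1-plex, but it is a 2-plex.
+ adj = {1: {2, 3}, 2: {1}, 3: {1}}
+ assert not is_kplex(adj, {1, 2, 3}, 1) and is_kplex(adj, {1, 2, 3}, 2)
+ ```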
5
+ MCP is a famous and fundamental NP-hard problem, and the clique model has been widely investigated in the past decades. However, in many real-world applications, such as social network mining [\[Seidman and Foster, 1978;](#page-7-0) [Pattillo](#page-7-1) *et al.*, 2013; Conte *et al.*[, 2018;](#page-7-2) Zhu *et al.*[, 2020;](#page-7-3) Wang *et al.*[, 2023a\]](#page-7-4) and biological network analysis [\[Grbic *et al.*, 2020\]](#page-7-5), dense subgraphs need not be strict cliques but may miss a few connections. Therefore, investigating relaxed clique structures like the k-plex is significant, and studies related to the k-plex have grown steadily in recent decades [\[Balasundaram](#page-7-6) *et al.*, 2011; [McClosky](#page-7-7) [and Hicks, 2012;](#page-7-7) [Berlowitz](#page-7-8) *et al.*, 2015; Conte *et al.*[, 2017;](#page-7-9) Wang *et al.*[, 2022\]](#page-7-10).
6
+
7
+ Many efficient exact methods for the NP-hard MKP have been proposed [Xiao *et al.*[, 2017;](#page-7-11) Gao *et al.*[, 2018;](#page-7-12) [Zhou](#page-7-13) *et al.*[, 2021;](#page-7-13) Jiang *et al.*[, 2021;](#page-7-14) [Chang](#page-7-15) *et al.*, 2022; [Wang](#page-7-16) *et al.*, [2023b;](#page-7-16) Jiang *et al.*[, 2023\]](#page-7-17), resulting in various effective techniques, such as reduction rules, upper bounds, in-processing methods, etc. Most of these studies follow the branch-and-bound (BnB) framework [\[Lawler and Wood, 1966;](#page-7-18) [Li and](#page-7-19) [Quan, 2010;](#page-7-19) [McCreesh](#page-7-20) *et al.*, 2017], and their performance heavily depends on the quality of the upper bounds.
8
+
9
+ A BnB MKP algorithm usually maintains the current growing partial k-plex S ⊆ V and the corresponding candidate vertex set C ⊆ V \S. Methods for calculating the upper bound on the number of vertices that C can provide for S in existing BnB MKP algorithms can be divided into two categories. The first calculates the upper bound by considering the connectivity between vertices in C only, such as the graph color bound (GCB) proposed in the Maplex algorithm [\[Zhou](#page-7-13) *et al.*[, 2021\]](#page-7-13). The second considers the connectivity between vertices in C and vertices in S, including the partition-based upper bounds (PUB) proposed in the KpLeX [\[Jiang](#page-7-14) *et al.*, [2021\]](#page-7-14) algorithm and also used in the kPlexS [\[Chang](#page-7-15) *et al.*, [2022\]](#page-7-15) and KPLEX [Wang *et al.*[, 2023b\]](#page-7-16) algorithms.
10
+
11
+ In this work, we observe that the upper bounds of the above algorithms are still not very tight. For a graph G, an independent set I is a subset of V in which any two vertices are non-adjacent. Graph coloring assigns a color to each vertex such that adjacent vertices receive different colors, and it is widely used for finding independent sets in graphs. GCB [\[Zhou](#page-7-13) *et al.*[, 2021\]](#page-7-13) claims that an independent set I ⊆ C can provide at most min{|I|, k} vertices for S, which actually ignores the connectivity between vertices in I and vertices in S. PUB [Jiang *et al.*[, 2021\]](#page-7-14), in contrast, simply regards C as a clique. Also, due to the different motivations of the two kinds of upper bounds, they show complementary performance on various instances,
12
+
13
+ <sup>∗</sup>These authors contributed equally.
14
+
15
+ <sup>†</sup>Corresponding author.
16
+
17
+ as indicated in our follow-up examples and experiments.
18
+
19
+ To this end, we propose a new upper bound based on graph coloring called the Relaxed Graph Color Bound (RelaxGCB). RelaxGCB first calculates an upper bound for each independent set $I \subseteq C$ that is strictly no worse than GCB, by considering the connectivity not only among the vertices in I themselves but also between vertices in I and vertices in S. Furthermore, RelaxGCB relaxes the restrictive structure of independent sets, allowing some extra vertices to be added to a maximal independent set (i.e., one not contained in any other independent set) $I \subseteq C$ without increasing the upper bound.
20
+
21
+ Based on our observation that the coloring-based and partition-based upper bounds are complementary, we propose another new upper bound called RelaxPUB. RelaxPUB combines our RelaxGCB with a refined PUB called DisePUB proposed in DiseMKP [Jiang et al., 2023]. Different from common methods for combining various upper bounds, which calculate them sequentially until the branch can be pruned or no upper bound can prune it, RelaxPUB combines RelaxGCB and DisePUB in a novel and compact way. When calculating the upper bound on the number of vertices that C can provide for S, both methods iteratively extract a subset $I \subseteq C$ from C, calculate the upper bound on the number of vertices that I can provide for S, and accumulate these upper bounds. In each iteration, RelaxPUB uses RelaxGCB and DisePUB to extract a subset from C respectively, selects the better one, and repeats this process until C is empty.
22
+
23
+ We evaluate our two proposed upper bounds by applying them to state-of-the-art BnB MKP algorithms, including Maplex, kPlexS, DiseMKP, and KPLEX. Among them, Maplex applies only the coloring-based upper bound, *i.e.*, GCB, and the others apply only PUB. We replace their original upper bounds with our RelaxGCB and RelaxPUB. Extensive experiments show that, on both dense and massive sparse graphs with various k values, RelaxGCB significantly improves on GCB, and RelaxPUB significantly improves the baseline algorithms, indicating the excellent and generic performance of our methods.
24
+
25
+ # Method
26
+
27
+ Given an undirected graph G=(V,E), where V is the vertex set and E the edge set, the density of G is defined as 2|E|/(|V|(|V|-1)). We denote by N(v) the set of vertices adjacent to v, which are also called the neighbors of v. Given a vertex set $S\subseteq V$ , we denote by G[S] the subgraph induced by S. Given an integer k, $S\subseteq V$ is a k-plex if each vertex $v\in S$ satisfies $|S\backslash N(v)|\leq k$ .
28
+
29
+ For a growing partial k-plex S, we define $\omega_k(G,S)$ as the size of the maximum k-plex that includes all vertices in S, and $\delta(S,v)=|S\backslash N(v)|$ as the number of non-neighbors of vertex v in S. Given an integer k, we further define $\delta_k^-(S,v)=k-\delta(S,v)$ to facilitate our algorithm description. If $v\in S$ , $\delta_k^-(S,v)$ indicates the maximum number of non-adjacent vertices of v that can still be added to S. Otherwise, it indicates the maximum number of non-adjacent vertices of v, including v itself, that can be added to S.
30
+
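+ These quantities are straightforward to compute; a sketch of ours, reusing the `adj` convention from the snippet above:
+
+ ```python
+ def delta(adj, S, v):
+     """delta(S, v) = |S \ N(v)|: the number of non-neighbors of v in S
+     (v itself is counted when v is in S)."""
+     return len(S - adj[v])
+
+ def delta_k_minus(adj, S, v, k):
+     """delta_k^-(S, v) = k - delta(S, v)."""
+     return k - delta(adj, S, v)
+ ```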
31
+ During the course of a general BnB MKP algorithm, a lower bound lb on the size of the maximum k-plex is maintained, which is usually initialized by some heuristic algorithms [Zhou $et\ al.$ , 2021; Jiang $et\ al.$ , 2021; Chang $et\ al.$ , 2022], and is updated once a larger k-plex is found.
32
+
33
+ A general BnB MKP algorithm usually contains a preprocessing stage and a BnB search stage. During the preprocessing, the algorithm uses some reduction rules [Gao et al., 2018; Zhou et al., 2021; Chang et al., 2022] to remove vertices that are impossible to belong to a k-plex of size larger than lb. In the BnB search stage, the algorithm traverses the search tree to find the optimal solution. During the search, the algorithm always maintains two vertex sets, the current growing partial k-plex S, and its corresponding candidate set C containing vertices that might be added to S. Once the algorithm selects a branching vertex v to be added to S from C, it calculates an upper bound ub on the size of the maximum k-plex that can be extended from $S \cup \{v\}$ , and the branch of adding v to S will be pruned if $ub \leq lb$ .
34
+
35
+ Given a growing partial k-plex S and the corresponding candidate vertex set C, the graph color bound (GCB) proposed in Maplex [Zhou $et\ al.$ , 2021] claims that an independent set $I\subseteq C$ can provide at most $\min\{|I|,k\}$ vertices for S. As introduced in Section 1, our proposed Relaxed Graph Color Bound (RelaxGCB) improves GCB in two aspects, i.e., calculating a tighter bound for each independent set $I\subseteq C$ and allowing extra vertices to be added to a maximal independent set without changing the upper bound.
36
+
37
+ In the following, we first introduce our two improvements and provide an example for illustration, then present our RelaxColoring algorithm for calculating the RelaxGCB bound.
38
+
39
+ Since vertices in the candidate set C might be non-adjacent to some vertices in the growing partial k-plex S, an independent set $I \subseteq C$ sometimes cannot provide k vertices for S even when |I| > k. We introduce a Tighter Independent Set Upper Bound (TISUB) on the number of vertices that an independent set $I \subseteq C$ can provide for S.
40
+
41
+ <span id="page-1-0"></span>**Lemma 1** (TISUB). Suppose $I = \{v_1, v_2, \dots, v_{|I|}\} \subseteq C$ is an independent set and $\delta_k^-(S, v_1) \ge \delta_k^-(S, v_2) \ge \dots \ge \delta_k^-(S, v_{|I|})$ , $\max\{i | \delta_k^-(S, v_i) \ge i\}$ is an upper bound of the number of vertices that I can provide for S.
42
+
43
+ *Proof.* Firstly, ignoring the constraint of at most k non-neighbors for vertices in S, $v_1, v_2, \cdots, v_{|I|}$ is one of the best orders for adding vertices in I to S to obtain the largest k-plex in $G[S \cup I]$ , because the more non-neighbors a vertex has in S (as indicated by $\delta(S,v)$ ), the easier it is for that vertex to violate the constraint. Secondly, suppose vertices $v_1, \cdots, v_i$ have been added to S; further adding $v_{i+1}$ to S leads to $\delta(S,v_{i+1})+i+1$ non-neighbors of $v_{i+1}$ in S (including $v_{i+1}$ itself). Therefore, only vertices $v_i \in I$ with $\delta(S,v_i)+i \leq k$ , i.e., $\delta_k^-(S,v_i) \geq i$ , can be added to S, and I can provide at most $\max\{i|\delta_k^-(S,v_i)\geq i\}$ vertices for S.
44
+
45
+ For convenience, in the rest of this paper, we regard the vertices in any independent set $I\subseteq C$ , i.e., $\{v_1,v_2,\cdots,v_{|I|}\}$ , as sorted in non-ascending order of their $\delta_k^-(S,v)$ values. We further define $\mathit{TISUB}(I,S) = \max\{i|\delta_k^-(S,v_i)\geq i\}$ as the upper bound calculated by TISUB on the number of vertices that I can provide for S. Note that the value of $\mathit{TISUB}(I,S)$ is obviously bounded by |I| since $i\leq |I|$ , which eliminates the need for the term |I| in TISUB. Moreover, since $\delta(S,v)\geq 0$ , $\delta_k^-(S,v)\leq k$ holds, so $\mathit{TISUB}(I,S)$ is also bounded by k. Therefore, TISUB is never worse than GCB (i.e., $\min\{|I|,k\}$ ).
46
+
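+ A sketch of TISUB as a function of the $\delta_k^-$ values of an independent set (our own illustration; the slack lists in the asserts are of the kind arising in the example below):
+
+ ```python
+ def tisub(slacks):
+     """TISUB for an independent set I, given slacks = [delta_k^-(S, v) for v in I]:
+     max{ i : slack_i >= i } over the non-ascending order (0 if no i qualifies)."""
+     ub = 0
+     for i, s in enumerate(sorted(slacks, reverse=True), start=1):
+         if s < i:
+             break          # slacks only decrease, so the condition cannot recover
+         ub = i
+     return ub
+
+ assert tisub([4, 3, 3]) == 3      # here TISUB coincides with GCB's min(|I|, k) for k = 4
+ assert tisub([3, 2, 1, 1]) == 2   # tighter than GCB's min(4, 4) = 4
+ ```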
47
+ Owing to the relaxed structure of a k-plex compared with a clique, an independent set I in the candidate set C can usually provide more than one vertex for the growing partial k-plex S, and the restriction to independent sets can likewise be relaxed so that they contain more vertices.
48
+
49
+ In the following, we define two kinds of vertices and then introduce two different rules for relaxing the restriction of independent sets and making maximal independent sets contain extra vertices without increasing their TISUB.
50
+
51
+ **Definition 1** (Conflict Vertex). Given a vertex set I, we denote vertices $v \in I$ that are adjacent to at least one vertex in I as conflict vertices.
52
+
53
+ **Definition 2** (Loose Vertex). Given a k-plex S and a vertex set $I \subseteq C$ , suppose UB is an upper bound of the number of vertices that I can provide for S, we denote each vertex $v \in I$ with $\delta_k^-(S,v) > UB$ as a loose vertex.
54
+
55
+ **Rule 1.** Suppose UB is an upper bound of the number of vertices that a vertex set $I \subseteq C$ can provide for S. It is allowed to add vertex v to I if the number of vertices that are loose or conflict in $I \cup \{v\}$ is no more than UB.
56
+
57
+ <span id="page-2-2"></span>**Lemma 2.** After adding any vertex v to $I \subseteq C$ according to Rule 1, UB is still an upper bound of the number of vertices that $I' = I \cup \{v\}$ can provide for S.
58
+
59
+ *Proof.* On one hand, if a vertex $v \in I'$ that is neither conflict nor loose is added to S, then at most $\delta_k^-(S,v)-1 < UB$ other vertices in I' can be added to S. On the other hand, by Rule 1, the number of conflict or loose vertices in I' is required to be no more than UB. Therefore, at most UB vertices in I' can be added to S.
60
+
61
+ **Rule 2.** Suppose UB is an upper bound of the number of vertices that a vertex set $I\subseteq C$ can provide for S. It is allowed to add vertex v to I if v is adjacent to at most $UB-\delta_k^-(S,v)$ vertices in I.
62
+
63
+ <span id="page-2-3"></span>**Lemma 3.** After adding any vertex v to $I \subseteq C$ according to Rule 2, UB is still an upper bound of the number of vertices that $I' = I \cup \{v\}$ can provide for S.
64
+
65
+ *Proof.* On one hand, if v is added to S, at most $\delta_k^-(S,v)-1$ other vertices that are non-adjacent to v in I' can be added to S. Since v is adjacent to at most $UB-\delta_k^-(S,v)$ vertices in I', after adding v to S, I' can still provide at most UB-1 further vertices for S. On the other hand, if v is not added to S, I' itself can provide at most UB vertices for S. $\square$
66
+
67
+ <span id="page-2-0"></span>![](_page_2_Picture_12.jpeg)
68
+
69
+ Figure 1: An example for comparing the upper bounds.
70
+
71
+ Given a maximal independent set $I\subseteq C$ , both Rule 1 and Rule 2 can add extra vertices to I without increasing its TISUB. Specifically, Rule 1 allows us to add a finite number of *conflict* vertices (at most TISUB(I,S) - 1) to I, and Rule 2 can be applied repeatedly to add any vertex satisfying the rule to I.
72
+
73
+ We provide an example in Figure 1 to show how the upper bounds, including GCB, TISUB, and RelaxGCB, are calculated and how the two rules are used. Figure 1 illustrates a subgraph of G induced by the candidate set $C = \{v_1, v_2, \cdots, v_8\}$ , i.e., G[C], of a 4-plex S. To simplify the figure, we hide the 4-plex S and only depict the candidate vertices. Vertex $v_i|t$ in Figure 1 identifies a vertex $v_i \in C$ with $\delta_k^-(S, v_i) = t$ .
74
+
75
+ Suppose we sequentially color vertices $v_1,v_2,\cdots,v_8$ under the constraint that adjacent vertices cannot be in the same color, C can be partitioned into 3 independent sets, $I_1=\{v_1,v_2,v_3\},\ I_2=\{v_4,v_5,v_6,v_8\}$ and $I_3=\{v_7\}$ , as indicated by the colors of the vertices. The GCB of $\omega_4(G,S)$ is $|S|+\sum_{i=1}^3\min\{|I_i|,4\}=|S|+3+4+1=|S|+8$ . The TISUB of $\omega_4(G,S)$ is $|S|+\sum_{i=1}^3 TISUB(I_i,S)=|S|+3+2+1=|S|+6$ .
76
+
77
+ Then, let us use Rule 1 to make independent set $I_1$ contain more vertices. For $I_1$ , since $TISUB(I_1,S)=3$ , there is only one loose vertex $v_1$ in $I_1$ . By applying Rule 1, we can add vertices $v_6$ and $v_7$ to $I_1$ without increasing the upper bound of $\omega_4(G[S \cup I_1], S)$ , since there are only 3 loose or conflict vertices, i.e., $v_1, v_6, v_7$ , in $I_1 \cup \{v_6, v_7\}$ . After the operation, C is partitioned into two sets, $I_5 = I_1 \cup \{v_6, v_7\}$ and $I_6 = \{v_4, v_5, v_8\}$ . The new upper bound of $\omega_4(G, S)$ is $|S| + TISUB(I_5, S) + TISUB(I_6, S) = |S| + 3 + 1 = |S| + 4$ .
78
+
79
+ Finally, let us use Rule 2 to further make set $I_5$ contain more vertices. According to Rule 2, all vertices in $I_6$ can be added to $I_5$ without increasing the upper bound of $\omega_4(G[S \cup I_5], S)$ . After the operation, the final RelaxGCB of $\omega_4(G, S)$ is $|S| + TISUB(I_5, S) = |S| + 3$ .
80
+
81
+ This subsection introduces our proposed RelaxColoring algorithm for calculating the proposed RelaxGCB, as summarized in Algorithm 1. The algorithm first initializes the upper bound UB with |S| (line 1), and then repeatedly uses the TryColor() function to extract a subset $I\subseteq C$ and calculate the upper bound on the number of vertices that I can provide for S, i.e., ub (line 3), until $C=\emptyset$ (line 2). After each execution of the TryColor() function, the candidate set C and the upper bound UB are both updated (line 4).
82
+
83
+ Function TryColor() is summarized in Algorithm 2.
84
+
85
+ ```
+ Algorithm 1: RelaxColoring(G, k, S, C)
+ Input: A graph G=(V,E), an integer k, the current partial k-plex S, the candidate set C
+ Output: RelaxGCB of \omega_k(G,S)
+ 1 initialize the upper bound UB \leftarrow |S|;
+ 2 while C \neq \emptyset do
+ 3     \{I, ub\} \leftarrow TryColor(G, k, S, C);
+ 4     C \leftarrow C \setminus I, UB \leftarrow UB + ub;
+ 5 return UB;
+ ```
97
+
98
+ ```
+ Algorithm 2: TryColor(G, k, S, C)
+ Input: A graph G = (V, E), an integer k, the current partial k-plex S, the candidate set C
+ Output: A vertex set I, an upper bound ub of the number of vertices that I can provide for S
+ 1  initialize I \leftarrow \emptyset;
+ 2  for each vertex v \in C do
+ 3      if N(v) \cap I = \emptyset then I \leftarrow I \cup \{v\};
+ 4  ub \leftarrow TISUB(I, S);
+ 5  initialize the set of loose or conflict vertices LC \leftarrow \{v \in I \mid \delta_k^-(S, v) > ub\};
+ 6  if |LC| < ub then
+ 7      for each vertex v \in C \setminus I do
+ 8          CV \leftarrow \{v\} \cup (N(v) \cap I \setminus LC);
+ 9          if |LC| + |CV| \le ub then
+ 10             I \leftarrow I \cup \{v\};
+ 11             LC \leftarrow LC \cup CV;
+ 12             if |LC| = ub then break;
+ 13 for each vertex v \in C \setminus I with \delta_k^-(S, v) < ub do
+ 14     if |N(v) \cap I| \le ub - \delta_k^-(S, v) then
+ 15         I \leftarrow I \cup \{v\};
+ 16 return \{I, ub\};
+ ```
126
+
127
+ TryColor() first finds a maximal independent set $I \subseteq C$ (lines 1-3) and calculates its TISUB (line 4). Then, the algorithm initializes the set of loose or conflict vertices LC (line 5) and tries to add as many vertices as possible to I according to Rule 1 (lines 6-12). When trying to add a vertex v, the algorithm uses CV to denote the extra conflict vertices caused by adding v to I (line 8). Since I is a maximal independent set in C, adding any vertex v to I introduces at least one conflict vertex, namely v itself (line 8). Thus, the application of Rule 1 can be terminated once $|LC| \geq ub$ (lines 6 and 12). Finally, the algorithm applies Rule 2 to further add vertices to I (lines 13-15). Since $|N(v) \cap I| > 0$ holds for each vertex $v \in C \setminus I$ , only vertices $v \in C \setminus I$ with $\delta_k^-(S,v) < ub$ can be added to I according to Rule 2 (line 13).
128
+
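+ A compact Python sketch of Algorithms 1 and 2 (our own rendering under simplified data structures: `C` is an ordered list of candidate vertices, `adj[v]` the neighbor set of v, and `slack[v]` the precomputed $\delta_k^-(S, v)$ ; it reuses the `tisub` helper sketched above):
+
+ ```python
+ def try_color(C, adj, slack):
+     """Algorithm 2: build a maximal independent set I in C, bound it with
+     TISUB, then enlarge I via Rule 1 and Rule 2. Returns (I, ub)."""
+     I, in_I = [], set()
+     for v in C:                                   # lines 1-3: maximal independent set
+         if not (adj[v] & in_I):
+             I.append(v); in_I.add(v)
+     ub = tisub([slack[v] for v in I])             # line 4
+     LC = {v for v in I if slack[v] > ub}          # line 5: loose vertices
+     if len(LC) < ub:                              # lines 6-12: Rule 1
+         for v in C:
+             if v in in_I:
+                 continue
+             CV = {v} | (adj[v] & (in_I - LC))     # new loose/conflict vertices
+             if len(LC) + len(CV) <= ub:
+                 I.append(v); in_I.add(v); LC |= CV
+                 if len(LC) == ub:
+                     break
+     for v in C:                                   # lines 13-15: Rule 2
+         if v not in in_I and slack[v] < ub and len(adj[v] & in_I) <= ub - slack[v]:
+             I.append(v); in_I.add(v)
+     return I, ub
+
+ def relax_coloring(C, adj, slack, S):
+     """Algorithm 1: accumulate the TryColor bounds until C is exhausted."""
+     UB, C = len(S), list(C)
+     while C:
+         I, ub = try_color(C, adj, slack)
+         C = [v for v in C if v not in set(I)]
+         UB += ub
+     return UB
+ ```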
129
+ The time complexities of the RelaxColoring algorithm and the TryColor function are $O(|C|^2 \times T)$ and $O(|C| \times T)$ , respectively, where O(T) is the time complexity of the intersection operation between N(v) and I (or $I \setminus LC$ ) used in lines 3, 8, and 14 of Algorithm 2. O(T) is bounded by O(|V|) and becomes much smaller than O(|V|) when the bitset encoding method [Segundo $et\ al.$ , 2011] is applied.
130
+
131
+ ```
+ Algorithm 3: SelectPartition(G, k, S, C)
+ Input: A graph G=(V,E), an integer k, the current partial k-plex S, the candidate set C
+ Output: A vertex set I, an upper bound ub of the number of vertices that I can provide for S
+ 1 initialize dise^* \leftarrow 0, ub^* \leftarrow 0, I^* \leftarrow \emptyset;
+ 2 for each vertex v \in S with \delta_k^-(S, v) > 0 do
+ 3     I \leftarrow C \setminus N(v);
+ 4     ub \leftarrow \min\{|I|, \delta_k^-(S, v)\};
+ 5     if |I|/ub > dise^* \lor (|I|/ub = dise^* \land |I| > |I^*|) then
+ 6         dise^* \leftarrow |I|/ub, ub^* \leftarrow ub, I^* \leftarrow I;
+ 7 return \{I^*, ub^*\};
+ ```
152
+
153
+ Motivated by the complementarity of the coloring-based and partition-based upper bounds, we propose to combine RelaxGCB with the newest PUB, DisePUB [Jiang *et al.*, 2023], yielding a better and more generic upper bound for MKP. In this section, we first introduce DisePUB, then provide two examples to illustrate the complementarity of the coloring-based and partition-based upper bounds, and finally present our new upper bound, RelaxPUB.
154
+
155
+ Given a growing partial k-plex S and the corresponding candidate set C, for each vertex $v \in S$ , DisePUB claims that a subset $I \subseteq C$ can provide at most $\min\{|I|, \delta_k^-(S, v)\}$ vertices for S if $N(v) \cap I = \emptyset$ . Given a vertex $v \in S$ , let $I = C \setminus N(v)$ and $ub = \min\{|I|, \delta_k^-(S, v)\}$ ; DisePUB defines a metric for I, i.e., dise(I) = |I|/ub, to evaluate the extraction of the vertex set I. The larger the value of dise(I), the more vertices can be extracted from C and the smaller the increment on the upper bound of $\omega_k(G, S)$ .
156
+
157
+ In each step, DisePUB traverses each vertex $v \in S$ with $\delta_k^-(S,v) > 0$ and selects the corresponding set $I = C \backslash N(v)$ with the largest value of dise(I). Ties are broken by preferring larger extractions. We use function SelectPartition() to describe the selection, which is shown in Algorithm 3. Then, DisePUB extracts $C \backslash N(v)$ from C and increases the upper bound of $\omega_k(G,S)$ by $\min\{|C \backslash N(v)|, \delta_k^-(S,v)\}$ .
158
+
159
+ DisePUB repeats the above process until vertices remaining in C are adjacent to all vertices in S. DisePUB denotes the set of remaining vertices in C as $\pi_0$ and finally increases the upper bound of $\omega_k(G,S)$ by $|\pi_0|$ .
160
+
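+ A sketch of the DisePUB extraction step (Algorithm 3) under the same conventions as the sketches above, with `C` as a set and `slack[v]` $= \delta_k^-(S, v)$ for the vertices of S:
+
+ ```python
+ def select_partition(C, adj, slack, S):
+     """Algorithm 3: pick v in S maximizing dise(I) = |I|/ub for
+     I = C \ N(v) and ub = min(|I|, delta_k^-(S, v))."""
+     dise_best, ub_best, I_best = 0.0, 0, set()
+     for v in S:
+         if slack[v] <= 0:
+             continue
+         I = C - adj[v]
+         if not I:
+             continue                       # nothing to extract for this v
+         ub = min(len(I), slack[v])
+         dise = len(I) / ub
+         if dise > dise_best or (dise == dise_best and len(I) > len(I_best)):
+             dise_best, ub_best, I_best = dise, ub, I
+     return I_best, ub_best
+ ```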
161
+ To better illustrate the complementarity of the coloring-based and partition-based upper bounds (i.e., GCB and PUB), we provide two examples in Figure 2, where the growing 2-plex S contains only one vertex $v_0$ and its corresponding candidate set $C = \{v_1, v_2, v_3, v_4, v_5\}$ .
162
+
163
+ In Figure 2(a), the GCB is tighter than the PUB. The vertices in C are all adjacent to $v_0$ , which means they all belong to $\pi_0$ . Thus, the PUB is $|S|+|\pi_0|=6$ . By coloring the vertices in C, however, C can be partitioned into 2 independent
164
+
165
+ <span id="page-4-1"></span><span id="page-4-0"></span>![](_page_4_Figure_0.jpeg)
166
+
167
+ <span id="page-4-2"></span>Figure 2: Two examples for demonstrating the complementarity.
168
+
169
+ ```
+ Algorithm 4: SelectUB(G, k, S, C)
+ Input: A graph G = (V, E), an integer k, the current partial k-plex S, the candidate set C
+ Output: RelaxPUB of \omega_k(G, S)
+ 1 initialize the upper bound UB \leftarrow |S|;
+ 2 while C \neq \emptyset do
+ 3     \{I_C, ub_C\} \leftarrow TryColor(G, k, S, C);
+ 4     \{I_P, ub_P\} \leftarrow SelectPartition(G, k, S, C);
+ 5     if |I_C|/ub_C > |I_P|/ub_P \lor (|I_C|/ub_C = |I_P|/ub_P \land |I_C| > |I_P|) then
+ 6         C \leftarrow C \setminus I_C, UB \leftarrow UB + ub_C;
+ 7     else C \leftarrow C \setminus I_P, UB \leftarrow UB + ub_P;
+ 8 return UB;
+ ```
187
+
188
+ sets $I_1=\{v_1,v_2,v_3,v_5\}$ and $I_2=\{v_4\}$ , and the GCB is $|S|+\sum_{i=1}^2\min\{|I_i|,2\}=4$ . In contrast, the PUB is tighter than the GCB in Figure 2(b), where C can be partitioned into 3 independent sets $I_1=\{v_1,v_5\},\,I_2=\{v_2,v_3\},$ and $I_3=\{v_4\}$ . Thus, the GCB is $|S|+\sum_{i=1}^3\min\{|I_i|,2\}=6$ . Since all vertices in C except $v_1$ are non-adjacent to $v_0$ , we have $\pi_0=\{v_1\}$ , and thus the PUB is $|S|+|\pi_0|+\delta_2^-(S,v_0)=3$ .
189
+
190
+ Both RelaxGCB and DisePUB extract a subset from C and accumulate the upper bound of $\omega_k(G,S)$ . The dise metric in DisePUB can also be used for the vertex set returned by TryColor(). RelaxPUB combines RelaxGCB and DisePUB by using them to select a promising extraction in each step.
191
+
192
+ We propose an algorithm called SelectUB for calculating the RelaxPUB of $\omega_k(G,S)$ , which is presented in Algorithm 4. In each step, the algorithm calls TryColor() and SelectPartition() and determines which returned vertex set is better according to the dise metric. Ties are broken by preferring the larger extraction. Once the better extraction is selected, the algorithm updates the candidate set C and accumulates the upper bound of $\omega_k(G,S)$ .
193
+
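+ A sketch of SelectUB (Algorithm 4) combining the two extraction sketches above (again under our simplified data structures, with a single `slack` table holding $\delta_k^-(S, v)$ for all vertices):
+
+ ```python
+ def select_ub(C, adj, slack, S):
+     """Algorithm 4: start from UB = |S| and repeatedly keep the extraction,
+     from TryColor or SelectPartition, with the better dise value."""
+     UB, C = len(S), set(C)
+     while C:
+         I_c, ub_c = try_color(sorted(C), adj, slack)
+         I_p, ub_p = select_partition(C, adj, slack, S)
+         d_c = len(I_c) / ub_c if ub_c else 0.0
+         d_p = len(I_p) / ub_p if ub_p else 0.0
+         if d_c > d_p or (d_c == d_p and len(I_c) > len(I_p)):
+             C -= set(I_c); UB += ub_c
+         else:
+             C -= set(I_p); UB += ub_p
+     return UB
+ ```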
194
+ The time complexities of the functions TryColor() and SelectPartition() are $O(|C| \times T)$ and $O(|C| \times |S|)$ [Jiang *et al.*, 2023], respectively, where O(T) is much smaller than O(|V|), as discussed in Section 3.4. The time complexity of the SelectUB algorithm is $O(|C|^2 \times (|S| + T))$ .