Eric03 committed (verified) · Commit 71d55b7 · Parent: 1772b17

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes; see the raw diff for the full change set.

Files changed (50)
  1. 2002.09505/main_diagram/main_diagram.drawio +1 -0
  2. 2002.09505/main_diagram/main_diagram.pdf +0 -0
  3. 2002.09505/paper_text/intro_method.md +186 -0
  4. 2006.10178/main_diagram/main_diagram.drawio +1 -0
  5. 2006.10178/main_diagram/main_diagram.pdf +0 -0
  6. 2006.10178/paper_text/intro_method.md +113 -0
  7. 2009.04861/main_diagram/main_diagram.drawio +1 -0
  8. 2009.04861/main_diagram/main_diagram.pdf +0 -0
  9. 2009.04861/paper_text/intro_method.md +153 -0
  10. 2104.08801/main_diagram/main_diagram.drawio +1 -0
  11. 2104.08801/main_diagram/main_diagram.pdf +0 -0
  12. 2104.08801/paper_text/intro_method.md +41 -0
  13. 2105.04459/main_diagram/main_diagram.drawio +1 -0
  14. 2105.04459/main_diagram/main_diagram.pdf +0 -0
  15. 2105.04459/paper_text/intro_method.md +192 -0
  16. 2105.05391/main_diagram/main_diagram.drawio +1 -0
  17. 2105.05391/main_diagram/main_diagram.pdf +0 -0
  18. 2105.05391/paper_text/intro_method.md +7 -0
  19. 2111.12918/main_diagram/main_diagram.drawio +0 -0
  20. 2111.12918/paper_text/intro_method.md +126 -0
  21. 2203.10452/main_diagram/main_diagram.drawio +1 -0
  22. 2203.10452/paper_text/intro_method.md +13 -0
  23. 2205.15209/main_diagram/main_diagram.drawio +1 -0
  24. 2205.15209/main_diagram/main_diagram.pdf +0 -0
  25. 2205.15209/paper_text/intro_method.md +353 -0
  26. 2205.15307/main_diagram/main_diagram.drawio +1 -0
  27. 2205.15307/main_diagram/main_diagram.pdf +0 -0
  28. 2205.15307/paper_text/intro_method.md +145 -0
  29. 2206.03377/main_diagram/main_diagram.drawio +1 -0
  30. 2206.03377/main_diagram/main_diagram.pdf +0 -0
  31. 2206.03377/paper_text/intro_method.md +123 -0
  32. 2206.13452/main_diagram/main_diagram.drawio +1 -0
  33. 2206.13452/main_diagram/main_diagram.pdf +0 -0
  34. 2206.13452/paper_text/intro_method.md +118 -0
  35. 2208.00147/main_diagram/main_diagram.drawio +0 -0
  36. 2208.00147/paper_text/intro_method.md +122 -0
  37. 2209.09338/main_diagram/main_diagram.drawio +1 -0
  38. 2209.09338/main_diagram/main_diagram.pdf +0 -0
  39. 2209.09338/paper_text/intro_method.md +13 -0
  40. 2301.11647/main_diagram/main_diagram.drawio +0 -0
  41. 2301.11647/paper_text/intro_method.md +418 -0
  42. 2302.05259/main_diagram/main_diagram.drawio +1 -0
  43. 2302.05259/main_diagram/main_diagram.pdf +0 -0
  44. 2302.05259/paper_text/intro_method.md +194 -0
  45. 2302.06091/main_diagram/main_diagram.drawio +1 -0
  46. 2302.06091/main_diagram/main_diagram.pdf +0 -0
  47. 2302.06091/paper_text/intro_method.md +81 -0
  48. 2302.08712/main_diagram/main_diagram.drawio +1 -0
  49. 2302.08712/main_diagram/main_diagram.pdf +0 -0
  50. 2302.08712/paper_text/intro_method.md +131 -0
2002.09505/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ (one-line draw.io file: base64-compressed `<mxfile>` XML payload, produced with draw.io 12.7.2 on 2020-02-19; omitted here)
2002.09505/main_diagram/main_diagram.pdf ADDED
Binary file (29.2 kB).
2002.09505/paper_text/intro_method.md ADDED
@@ -0,0 +1,186 @@
# Method

We are interested in solving problems specified through a Markov Decision Process (MDP), which consists of states $s \in S$, actions $a \in A$, rewards $r(s, s') \in R$, and a transition model $T(s, a, s')$ that gives the probability $P(s' \mid s, a)$ of transitioning to a particular next state from a current state and action [@sutton1998reinforcement][^1]. For brevity, we write $r$ for $r(s,s')$ in the remainder of the paper. Importantly, we assume that the reward function does not depend on actions, which allows us to formulate QSS values without any dependence on actions.

Reinforcement learning aims to find a policy $\pi(a|s)$ that gives the probability of taking action $a$ in state $s$. We are typically interested in policies that maximize the long-term discounted return $R=\sum_{k=t}^H \gamma^{k-t} r_k$, where $\gamma$ is a discount factor that specifies the importance of long-term rewards and $H$ is the terminal step.

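To make the return concrete, here is a minimal Python sketch that evaluates $R=\sum_{k=t}^H \gamma^{k-t} r_k$ for a made-up reward sequence (the rewards and $\gamma$ are illustrative, not from the paper):

```python
# Discounted return R = sum_{k=t}^{H} gamma^(k-t) * r_k for an
# illustrative episode; the reward sequence and gamma are made up.
gamma = 0.9
rewards = [-1.0, -1.0, -1.0, 0.0]  # r_t, ..., r_H
R = sum(gamma ** k * r for k, r in enumerate(rewards))
assert abs(R - (-1.0 - 0.9 - 0.81)) < 1e-9  # R is about -2.71
```
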
Optimal QSA values express the expected return for taking action $a$ in state $s$ and acting optimally thereafter: $$\begin{align*}
Q^*(s,a) = \mathbb{E}[r + \gamma \max_{a'} Q^*(s',a') \mid s,a].
\end{align*}$$ These values can be approximated with Q-learning [@watkins1992q]: $$\begin{align*}
Q(s,a) \leftarrow Q(s,a) + \alpha [r + \gamma \max_{a'} Q(s',a') - Q(s,a)],
\end{align*}$$ where $\alpha$ is the learning rate. Finally, a policy can be obtained from learned QSA values as: $$\begin{align*}
\pi(s) = \mathop{\mathrm{arg\,max}}_a Q(s,a).
\end{align*}$$

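The tabular Q-learning update above can be sketched in a few lines. The 5-state chain MDP, the $-1$ step reward, and all hyperparameters below are illustrative assumptions of ours, not taken from the paper:

```python
import random

# Tabular Q-learning on a small deterministic chain MDP (a sketch;
# the chain, rewards, and hyperparameters are illustrative choices).
N_STATES = 5        # states 0..4; state 4 is the goal (terminal)
ACTIONS = (-1, +1)  # move left / move right

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, -1.0, s2 == N_STATES - 1  # next state, reward, done

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda a_: Q[(s, a_)]))
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
assert max(ACTIONS, key=lambda a: Q[(0, a)]) == +1  # greedy policy heads right
```

After training, the greedy policy $\pi(s) = \arg\max_a Q(s,a)$ moves right toward the goal, and the value of the final transition converges toward the terminal reward of $-1$.
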
We propose an alternative paradigm for defining optimal values, $Q^*(s, s')$: the value of transitioning from state $s$ to state $s'$ and acting optimally thereafter. By analogy with the standard QSA formulation, we express this quantity as: $$\begin{equation}
Q^*(s,s') = r + \gamma \max_{s'' \in N(s')} Q^*(s', s''),
\end{equation}$$ where $N(s')$ denotes the set of states reachable from $s'$ in one step. Although this equation may be applied to any environment, for it to be a useful formulation, the *environment must be deterministic*. To see why, note that in QSA-learning the max is over actions, which the agent has perfect control over, and any uncertainty in the environment is integrated out by the expectation. In QSS-learning the max is over next states, which in stochastic environments are not perfectly predictable. In such environments the above equation still tracks a well-defined quantity, but it is a "best possible scenario value": the value of a current and subsequent state assuming that any stochasticity the agent experiences turns out as well as possible for it. Concretely, this means we assume the agent can transition reliably (with probability 1) to any state $s'$ that it is possible (with probability $> 0$) to reach from state $s$.

Of course, this will not hold for stochastic domains in general, in which case QSS-learning does not track an actionable value. While this limitation may seem severe, the QSS formulation affords a powerful tool for deterministic environments, which we develop in the remainder of this article. Henceforth we assume that the transition function is deterministic, and the empirical results that follow show our approach succeeding over a wide range of tasks.

We first consider the simple setting where we have access to an inverse dynamics model $I(s,s') \rightarrow a$ that returns an action $a$ taking the agent from state $s$ to $s'$, and to a neighbor function $N(s)$ that outputs the states reachable from $s$ in one step. We use this as an illustrative example and will later formulate the problem without these assumptions.

We define the Bellman update for QSS-learning as: $$\begin{align}
Q(s,s') \leftarrow Q(s,s') + \alpha [r + \gamma \max_{s'' \in N(s')} Q(s',s'') - Q(s,s')].
\label{eqn:qss_learning}
\end{align}$$ Note $Q(s,s')$ is undefined when $s$ and $s'$ are not neighbors. In order to obtain a policy, we define $\tau(s)$ as a function that selects the neighboring state of $s$ that maximizes QSS: $$\begin{equation}
\tau(s) = \mathop{\mathrm{arg\,max}}_{s' \in N(s)} Q(s,s').
\end{equation}$$ In words, $\tau(s)$ selects states that have large value, and acts similarly to a policy over states. To obtain the policy over actions, we use the inverse dynamics model: $$\begin{equation}
\pi(s) = I(s, \tau(s)).
\label{eqn:inverse_dynamics}
\end{equation}$$ This approach first finds the state $s'$ that maximizes $Q(s,s')$, then uses $I(s,s')$ to determine the action that will take the agent there. We can rewrite Equation [\[eqn:qss_learning\]](#eqn:qss_learning){reference-type="ref" reference="eqn:qss_learning"} as: $$\begin{equation}
Q(s,s') \leftarrow Q(s,s') + \alpha [r + \gamma Q(s', \tau(s')) - Q(s,s')].
\end{equation}$$

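The QSS update, $\tau(s)$, and the inverse-dynamics policy can be sketched together on a tiny deterministic gridworld. The 3x3 grid size, the $-1$ step reward, and the hyperparameters are our own illustrative choices; `N` and `I` play the roles of the assumed neighbor function and inverse dynamics model:

```python
import random

# Tabular QSS-learning sketch on a tiny deterministic 3x3 gridworld,
# assuming access to the neighbor function N(s) and inverse dynamics
# I(s, s'); the grid, rewards, and hyperparameters are illustrative.
SIZE = 3
GOAL = (SIZE - 1, SIZE - 1)
DIRS = {(-1, 0): 'up', (1, 0): 'down', (0, -1): 'left', (0, 1): 'right'}

def move(s, d):  # deterministic transition, clipped at the borders
    return (min(max(s[0] + d[0], 0), SIZE - 1),
            min(max(s[1] + d[1], 0), SIZE - 1))

def N(s):        # neighbors: states reachable in one step
    return sorted({move(s, d) for d in DIRS})

def I(s, s2):    # inverse dynamics: an action taking s to s2
    for d, a in DIRS.items():
        if move(s, d) == s2:
            return a
    raise ValueError("states are not neighbors")

def qss_learning(episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        s = (0, 0)
        while s != GOAL:
            # epsilon-greedy choice over next states
            s2 = (rng.choice(N(s)) if rng.random() < eps
                  else max(N(s), key=lambda n: Q.get((s, n), 0.0)))
            r = -1.0
            target = (r if s2 == GOAL
                      else r + gamma * max(Q.get((s2, n), 0.0) for n in N(s2)))
            Q[(s, s2)] = Q.get((s, s2), 0.0) + alpha * (target - Q.get((s, s2), 0.0))
            s = s2
    return Q

def tau(Q, s):   # greedy next state
    return max(N(s), key=lambda n: Q.get((s, n), 0.0))

Q = qss_learning()
print(I((0, 0), tau(Q, (0, 0))))  # 'down' or 'right': both head to the goal
```

Note that action selection never appears in the learned values: the agent picks a desired next state with $\tau$ and only consults $I$ to realize it.
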
<figure id="fig:heatmap" data-latex-placement="t">
<figure>
<img src="images/vanilla_Q.png" />
<figcaption><span class="math inline">$\max\limits_a Q(s,a)$</span></figcaption>
</figure>
<figure id="fig:model_q">
<img src="images/model_Q.png" />
<figcaption><span class="math inline">$\max\limits_{s'} Q(s,s')$</span></figcaption>
</figure>
<figure>
<img src="images/delta_Q.png" />
<figcaption><span class="math inline">$\frac{QSS - QSA}{|QSS|}$</span></figcaption>
</figure>
<figcaption>Learned values for tabular Q-learning in an 11x11 gridworld. The first two panels show heatmaps of Q-values for QSA and QSS. The final panel shows the fractional difference between the values learned by QSA and QSS.</figcaption>
</figure>

Let us now investigate the relation between the values learned by QSA and QSS.

::: theorem
**Theorem 1**. *QSA and QSS learn equivalent values in the deterministic setting.*
:::

::: proof
*Proof.* Consider an MDP with a deterministic state transition function and inverse dynamics function $I(s, s')$. QSS can be thought of as equivalent to using QSA to solve the sub-MDP containing only the set of actions returned by $I(s, s')$ for every state $s$: $$\begin{equation}
Q(s, s') = Q(s, I(s, s'))
\nonumber
\end{equation}$$ Because the MDP solved by QSS is a sub-MDP of that solved by QSA, there must always be at least one action $a$ for which $Q(s, a) \ge \max_{s'} Q(s, s')$.

The original MDP may contain additional actions not returned by $I(s, s')$, but under our assumptions their return must be less than or equal to that of the action $I(s, s')$. Since this also holds in every state following $s$, we have: $$\begin{equation}
Q(s, a) \le \max_{s'} Q(s, I(s, s'))\quad\text{for all }a
\nonumber
\end{equation}$$ Thus we obtain the following equivalence between QSA and QSS for deterministic environments: $$\begin{equation}
\max_{s'} Q(s, s') = \max_a Q(s, a)
\nonumber
\end{equation}$$ This equivalence will allow us to learn accurate action-values without dependence on the action space. ◻
:::

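The equivalence can be checked numerically. The sketch below computes both fixed points exactly by value iteration on a tiny made-up deterministic MDP (4 states, 2 actions, rewards depending only on the arrival state, matching the action-independence assumption) and confirms the maxima agree in every state:

```python
# Numeric check of Theorem 1 on a small, made-up deterministic MDP.
# T[s][a] is the (deterministic) next state; rewards depend only on
# the state arrived in, so r is action-independent as assumed.
GAMMA = 0.9
T = {0: {'a': 1, 'b': 2}, 1: {'a': 3, 'b': 0},
     2: {'a': 3, 'b': 0}, 3: {'a': 3, 'b': 3}}
R = {0: 0.0, 1: 1.0, 2: 2.0, 3: 0.0}  # reward for arriving in each state

def fixed_points(iters=300):
    Qsa = {(s, a): 0.0 for s in T for a in T[s]}
    Qss = {(s, s2): 0.0 for s in T for s2 in set(T[s].values())}
    for _ in range(iters):
        # QSA Bellman backup: Q(s,a) = r + gamma * max_a' Q(s',a')
        Qsa = {(s, a): R[T[s][a]] + GAMMA * max(Qsa[(T[s][a], a2)] for a2 in T[T[s][a]])
               for s in T for a in T[s]}
        # QSS backup: Q(s,s') = r + gamma * max_{s'' in N(s')} Q(s',s'')
        Qss = {(s, s2): R[s2] + GAMMA * max(Qss[(s2, s3)] for s3 in set(T[s2].values()))
               for s in T for s2 in set(T[s].values())}
    return Qsa, Qss

Qsa, Qss = fixed_points()
for s in T:
    best_a = max(Qsa[(s, a)] for a in T[s])
    best_s2 = max(Qss[(s, s2)] for s2 in set(T[s].values()))
    assert abs(best_a - best_s2) < 1e-9  # max_a Q(s,a) == max_{s'} Q(s,s')
```
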
<figure id="fig:stochastic_values" data-latex-placement="t">
<figure id="fig:stochastic_vanilla_q">
<img src="images/vanilla_Q_1_0.png" />
<figcaption><span class="math inline">$\max\limits_a Q(s,a)$</span></figcaption>
</figure>
<figure id="fig:stochastic_model_q">
<img src="images/model_Q_1_0.png" />
<figcaption><span class="math inline">$\max\limits_{s'} Q(s,s')$</span></figcaption>
</figure>
<figure id="fig:euclidean_distance">
<img src="images/euclidean.png" />
<figcaption>Value distance</figcaption>
</figure>
<figcaption>Learned values for tabular Q-learning in an 11x11 gridworld with stochastic transitions. The first two panels show heatmaps of Q-values for QSA and QSS in a gridworld with 100% slippage. The final panel shows the Euclidean distance between the values learned by QSA and QSS as the transitions become more stochastic (averaged over 10 seeds with 95% confidence intervals).</figcaption>
</figure>

88
+ <figure id="fig:redundant_actions" data-latex-placement="t">
89
+ <figure id="fig:vanilla_actions">
90
+ <img src="images/vanilla_actions.png" />
91
+ <figcaption>QSA</figcaption>
92
+ </figure>
93
+ <figure id="fig:model_actions">
94
+ <img src="images/model_actions.png" />
95
+ <figcaption>QSS</figcaption>
96
+ </figure>
97
+ <figure id="fig:id_actions">
98
+ <img src="images/id_actions.png" />
99
+ <figcaption>QSS + inverse dynamics</figcaption>
100
+ </figure>
101
+ <figure id="fig:transfer_actions">
102
+ <img src="images/Figure_1.png" />
103
+ <figcaption>Transfer of permuted actions</figcaption>
104
+ </figure>
105
+ <figcaption>Tabular experiments in an 11x11 gridworld. The first three experiments demonstrate the effect of redundant actions in QSA, QSS, and QSS with learned inverse dynamics. The final experiment represents how well QSS and QSA transfer to a gridworld with permuted actions. All experiments shown were averaged over 50 random seeds with 95% confidence intervals.</figcaption>
106
+ </figure>
107
+
108
+ In simple settings where the state space is discrete, $Q(s,s')$ can be represented by a table. We use this setting to highlight some of the properties of QSS. In each experiment, we evaluate within a simple 11x11 gridworld where an agent, initialized at $\langle 0, 0 \rangle$, navigates in each cardinal direction and receives a reward of $-1$ until it reaches the goal.
109
+
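The tabular update used throughout these experiments is the QSS analogue of the standard Q-learning rule, $Q(s,s') \leftarrow Q(s,s') + \alpha\,(r + \gamma \max_{s''} Q(s',s'') - Q(s,s'))$. A minimal sketch (ours; the dict-of-dicts layout and hyperparameters are illustrative):

```python
# One-step tabular QSS update; Q maps state -> {next_state: value}.
def qss_update(Q, s, s_next, r, done, alpha=0.1, gamma=0.99):
    boot = 0.0 if done or not Q.get(s_next) else max(Q[s_next].values())
    q = Q.setdefault(s, {}).setdefault(s_next, 0.0)
    Q[s][s_next] = q + alpha * (r + gamma * boot - q)

Q = {}
qss_update(Q, (0, 0), (0, 1), -1.0, False)
print(Q[(0, 0)][(0, 1)])   # first update from an empty table: 0 + 0.1 * (-1 + 0 - 0) = -0.1
```

Note the bootstrap maximises over observed *next states* rather than actions, which is what makes the update indifferent to redundant actions.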
110
+ We first examine the values learned by QSS (Figure [3](#fig:heatmap){reference-type="ref" reference="fig:heatmap"}). The output of QSS increases as the agent gets closer to the goal, which indicates that QSS learns meaningful values for this task. Additionally, the difference in value between $\max_a Q(s, a)$ and $\max_{s'} Q(s, s')$ approaches zero as the values of QSS and QSA converge. Hence, QSS learns similar values as QSA in this deterministic setting.
111
+
112
+ The next experiment measures the impact of stochastic transitions on learned QSS values. To investigate this property, we add a probability of slipping to each transition, where the agent takes a random action (i.e. slips into an unintended next state) some percentage of the time. First, we notice that the values learned by QSS when transitions have 100% slippage (completely random actions) are quite different from those learned by QSA (Figure [7](#fig:stochastic_values){reference-type="ref" reference="fig:stochastic_values"}). In fact, the values learned by QSS are similar to those in the previous experiment, where there was no stochasticity in the environment (Figure [2](#fig:model_q){reference-type="ref" reference="fig:model_q"}). As the transitions become more stochastic, the distance between the values learned by QSA and QSS vastly increases (Figure [6](#fig:euclidean_distance){reference-type="ref" reference="fig:euclidean_distance"}). This provides evidence that the formulation of QSS assumes the best possible transition will occur, thus causing the values to be overestimated in stochastic settings. We include further experiments in the appendix that measure how stochastic transitions affect the average episodic return.
113
+
114
+ One benefit of training QSS is that the transitions from one action can be used to learn values for another action. Consider the setting where two actions in a given state transition to the same next state. QSA would need to make updates for both actions in order to learn their values. But QSS only updates the transitions, thus ignoring any redundancy in the action space. We further investigate this property in a gridworld with redundant actions. Suppose an agent has four underlying actions, up, down, left, and right, but these actions are duplicated a number of times. As the number of redundant actions increases, the performance of QSA deteriorates, whereas QSS remains unaffected (Figure [12](#fig:redundant_actions){reference-type="ref" reference="fig:redundant_actions"}).
115
+
116
+ We also evaluate how QSS is impacted when the inverse dynamics model $I$ is learned rather than given (Figure [12](#fig:redundant_actions){reference-type="ref" reference="fig:redundant_actions"}). We instantiate $I(s,s')$ as a set that is updated whenever an action $a$ is observed to transition from $s$ to $s'$. We sample from this set any time $I$ is called, and return a random sample over all redundant actions if $I(s,s')=\emptyset$. Even in this setting, QSS is able to perform well because it only needs to learn about a single action that transitions from $s$ to $s'$.
117
+
118
+ The final experiment in the tabular setting considers the scenario of transferring to an environment where the meaning of actions has changed. We imagine this could be useful in environments where the physics are similar but the actions have been labeled differently. In this case, QSS values should directly transfer, but not the inverse dynamics, which would need to be retrained from scratch. We trained QSA and QSS in an environment where the actions were labeled as 0, 1, 2, and 3, then transferred the learned values to an environment where the labels were shuffled. We found that QSS was able to learn much more quickly in the transferred environment than QSA (Figure [12](#fig:redundant_actions){reference-type="ref" reference="fig:redundant_actions"}). Hence, we were able to retrain the inverse dynamics model more quickly than the values for QSA. Interestingly, QSA also learns quickly with the transferred values. This is likely because the Q-table is initialized to values that are closer to the true values than a uniformly initialized value. We include an additional experiment in the appendix where taking the incorrect action has a larger impact on the return.
119
+
120
+ :::: algorithm
121
+ ::: algorithmic
122
+ **Inputs:** Demonstrations or replay buffer $D$
+ Randomly initialize $Q_{\theta_1}, Q_{\theta_2}, \tau_\psi, I_\omega, f_\phi$
+ Initialize target networks $\theta'_1 \leftarrow \theta_1, \theta'_2 \leftarrow \theta_2, \psi' \leftarrow \psi$
+ If learning from demonstrations, sample from demonstration buffer $s, r, s' \sim D$; otherwise take action $a \sim I(s, \tau(s)) + \epsilon$, observe reward and next state, store experience in $D$, and sample from replay buffer $s, a, r, s' \sim D$
+ Compute $y = r + \gamma \min\limits_{i=1,2} Q_{\theta'_i} (s', C(s', \tau_{\psi'}(s')))$
+ // Update critic parameters: Minimize $\mathcal{L}_\theta=\sum_i \Vert y - Q_{\theta_i}(s,s') \Vert$
+ // Update model parameters: Compute $s'_f = C(s, \tau_\psi(s))$; Minimize $\mathcal{L}_\psi = -Q_{\theta_1}(s, s'_f) + \beta \Vert \tau_\psi(s) - s'_f \Vert$
+ // Update target networks: $\theta' \leftarrow \eta \theta + (1-\eta)\theta'$; $\psi' \leftarrow \eta \psi + (1-\eta)\psi'$
+ // Update forward dynamics parameters (from demonstrations, no actions): Minimize $\mathcal{L}_\phi = \Vert f_\phi(s,Q_{\theta'_1}(s,s')) - s' \Vert$
+ // Update forward dynamics parameters (with actions): Minimize $\mathcal{L}_\phi = \Vert f_\phi(s,a) - s' \Vert$
+ // Update inverse dynamics parameters: Minimize $\mathcal{L}_\omega = \Vert I_\omega(s,s') - a\Vert$
123
+ :::
124
+
125
+ []{#ref:alg_d3g label="ref:alg_d3g"}
126
+ ::::
127
+
128
+ :::: algorithm
129
+ ::: algorithmic
130
+ When learning from observations (no actions): $q = Q_\theta(s, s'_\tau)$; $s'_f = f_\phi(s, q)$. Otherwise: $a = I_\omega(s, s'_\tau)$; $s'_f = f_\phi(s, a)$
131
+ :::
132
+
133
+ []{#ref:alg_cycle label="ref:alg_cycle"}
134
+ ::::
135
+
136
+ In contrast to domains where the state space is discrete and both QSA and QSS can represent relevant functions with a table, in continuous settings or environments with large state spaces we must approximate values with function approximation. One such approach is Deep Q-learning, which uses a deep neural network to approximate QSA [@mnih-2013-arXiv-playing-atari-with; @mnih2015human]. The loss is formulated as: $\mathcal{L}_\theta=\Vert y - Q_\theta(s,a) \Vert$, where $y = r + \gamma \max_{a'} Q_{\theta'}(s',a')$.
137
+
138
+ Here, $\theta'$ is a target network that stabilizes training. Training is further improved by sampling experience from a replay buffer $s,a,r,s' \sim D$ to decorrelate the sequential data observed in an episode.
139
+
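As a concrete illustration of the target computation, here is a minimal sketch with plain numbers standing in for network outputs (values are hypothetical, not from the paper):

```python
# Deep Q-learning target y = r + gamma * max_a' Q_theta'(s', a'),
# with the bootstrap cut off at terminal states.
GAMMA = 0.99

def td_target(r, q_target_next, done):
    return r + (0.0 if done else GAMMA * max(q_target_next))

# q_target_next holds the target network's values Q_theta'(s', .) per action
y = td_target(r=1.0, q_target_next=[0.5, 2.0, -1.0], done=False)
loss = abs(y - 1.2)                 # |y - Q_theta(s, a)| for a current estimate of 1.2
print(round(y, 6), round(loss, 6))  # 2.98 1.78
```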
140
+ Deep Deterministic Policy Gradient (DDPG) [@lillicrap2015continuous] applies Deep Q-learning to problems with continuous actions. Instead of computing a max over actions for the target $y$, it uses the output of a policy that is trained to maximize a critic $Q$: $y = r + \gamma Q_{\theta'}(s, \pi_{\psi'}(s))$. Here, $\pi_\psi(s)$ is known as an actor and trained using the following loss: $$\begin{align*}
141
+ \mathcal{L}_\psi = -Q_\theta(s, \pi_\psi(s)).
142
+ \end{align*}$$ This approach uses a target network $\theta'$ that is moved slowly towards $\theta$ by updating the parameters as $\theta' \leftarrow \eta \theta + (1-\eta)\theta'$, where $\eta$ determines how smoothly the parameters are updated. A target policy network $\psi'$ is also used when training $Q$, and is updated similarly to $\theta'$.
143
+
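The soft (Polyak) target update can be sketched as follows; the parameter lists and the value of $\eta$ here are illustrative:

```python
# Polyak (soft) target update theta' <- eta * theta + (1 - eta) * theta',
# shown on plain lists of parameters.
def soft_update(params, target_params, eta=0.005):
    return [eta * p + (1.0 - eta) * tp for p, tp in zip(params, target_params)]

theta = [1.0, -2.0]        # current network parameters
theta_target = [0.0, 0.0]
theta_target = soft_update(theta, theta_target)
print(theta_target)        # [0.005, -0.01] -- targets drift slowly toward theta
```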
144
+ Twin Delayed DDPG (TD3) is a more stable variant of DDPG [@fujimoto2018addressing]. One improvement is to delay the updates of the target networks and actor to be slower than the critic updates by a delay parameter $d$. Additionally, TD3 utilizes Double Q-learning [@hasselt2010double] to reduce overestimation bias in the critic updates. Instead of training a single critic, this approach trains two and uses the one that minimizes the output of $y$: $$\begin{align*}
145
+ y = r + \gamma \min_{i=1,2} Q_{\theta'_i} (s', \pi_{\psi'}(s')).
146
+ \end{align*}$$ The loss for the critics becomes: $$\begin{align*}
147
+ \mathcal{L}_\theta = \sum_i \Vert y - Q_{\theta_i}(s,a) \Vert.
148
+ \end{align*}$$ Finally, Gaussian noise $\epsilon \sim \mathcal{N}(0,0.1)$ is added to the policy when sampling actions. We use each of these techniques in our own approach.
149
+
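The clipped double-Q target is the key change to the target computation; a scalar sketch (all values hypothetical):

```python
# TD3-style clipped double-Q target: bootstrap from the smaller of the
# two target critics, evaluated at the target policy's action.
GAMMA = 0.99

def td3_target(r, q1_next, q2_next, done):
    return r + (0.0 if done else GAMMA * min(q1_next, q2_next))

y = td3_target(r=0.0, q1_next=10.0, q2_next=8.0, done=False)
print(y)  # 7.92 -- the pessimistic critic is used, curbing overestimation
```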
150
+ A clear difficulty with training QSS in continuous settings is that it is not possible to iterate over an infinite state space to find a maximizing neighboring state. Instead, we propose training a model to directly output the state that maximizes QSS. We introduce an approach for training QSS analogous to TD3, Deep Deterministic Dynamics Gradients (D3G). Like the deterministic policy gradient formulation $Q(s,\pi_\psi(s))$, D3G learns a model $\tau_\psi(s) \rightarrow s'$ that makes predictions that maximize $Q(s,\tau_\psi(s))$. To train the critic, we specify the loss as: $$\begin{align}
151
+ \mathcal{L}_\theta=\sum_i \Vert y - Q_{\theta_i}(s,s') \Vert.
152
+ \end{align}$$ Here, the target $y$ is specified as: $$\begin{align}
153
+ y = r + \gamma \min_{i=1,2} Q_{\theta_i'}(s', \tau_{\psi'}(s')).
154
+ \end{align}$$
155
+
156
+ Similar to TD3, we utilize two critics to stabilize training and a target network for $Q$.
157
+
158
+ We train $\tau$ to maximize the expected return, $J$, starting from any state $s$: $$\begin{align}
159
+ \nabla_\psi J &= \mathbb{E}\big[\nabla_\psi Q(s, s')\big|_{s' = \tau_\psi(s)}\big] \\
160
+ &= \mathbb{E}\big[\nabla_{s'} Q(s, s')\big|_{s' = \tau_\psi(s)} \nabla_\psi \tau_\psi(s)\big] && \text{[using chain rule]} \nonumber
161
+ \end{align}$$ This can be accomplished by minimizing the following loss: $$\begin{align*}
162
+ \mathcal{L}_\psi = -Q_\theta(s, \tau_\psi(s)).
163
+ \end{align*}$$ We discuss in the next section how this formulation alone may be problematic. We additionally use a target network for $\tau$, which is updated as $\psi' \leftarrow \eta \psi + (1-\eta)\psi'$ for stability. As in the tabular case, $\tau_{\psi}(s)$ acts as a policy over states that aims to maximize $Q$, except now it is being trained to do so using gradient descent. To obtain the necessary action, we apply an inverse dynamics model $I$ as before: $$\begin{equation}
164
+ \pi(s) = I_\omega(s,\tau_\psi(s)).
165
+ \end{equation}$$ Now, $I$ is trained using a neural network with data $\langle s,a,s' \rangle \sim D$. The loss is: $$\begin{equation}
166
+ \mathcal{L}_\omega = \Vert I_\omega(s,s') - a\Vert.
167
+ \end{equation}$$
168
+
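To make the gradient flow concrete, the sketch below runs the model update on a 1-D toy: a proposal $s'$ is nudged up the gradient of a toy critic $Q(s, s')$ (finite differences stand in for backprop), and the action is then read off with a trivial inverse dynamics model. Everything here (the toy critic, the goal at 5.0, the learning rate) is our illustration, not the paper's setup:

```python
# 1-D toy of the D3G model update: ascend the gradient of Q(s, s') in s',
# then recover the action via inverse dynamics.
def Q(s, s_next):
    return -(s_next - 5.0) ** 2      # toy critic: best next state is 5.0

def grad_ascent_step(s, s_prop, lr=0.1, h=1e-4):
    dq = (Q(s, s_prop + h) - Q(s, s_prop - h)) / (2 * h)  # finite-difference grad
    return s_prop + lr * dq          # gradient step on L_psi = -Q(s, tau(s))

def inverse_dynamics(s, s_next):
    return s_next - s                # 1-D world: the action is the displacement

s, s_prop = 0.0, 1.0
for _ in range(200):
    s_prop = grad_ascent_step(s, s_prop)
a = inverse_dynamics(s, s_prop)
print(round(s_prop, 3), round(a, 3))  # both approach the value-maximizing state 5.0
```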
169
+ <figure id="fig:cycle" data-latex-placement="t">
170
+ <embed src="images/cycle_v2_crop.pdf" />
171
+ <figcaption>Illustration of the cycle consistency for training D3G. Given a state <span class="math inline"><em>s</em></span>, <span class="math inline"><em>τ</em>(<em>s</em>)</span> predicts the next state <span class="math inline"><em>s</em><sub><em>τ</em></sub><sup>′</sup></span> (black arrow). The inverse dynamics model <span class="math inline"><em>I</em>(<em>s</em>, <em>s</em><sub><em>τ</em></sub><sup>′</sup>)</span> predicts the action that would yield this transition (blue arrows). Then a forward dynamics model <span class="math inline"><em>f</em><sub><em>ϕ</em></sub>(<em>s</em>, <em>a</em>)</span> takes the action and current state to obtain the next state, <span class="math inline"><em>s</em><sub><em>f</em></sub><sup>′</sup></span> (green arrows). </figcaption>
172
+ </figure>
173
+
174
+ DDPG has been shown to overestimate the values of the critic, resulting in a policy that exploits this bias [@fujimoto2018addressing]. Similarly, with the current formulation of the D3G loss, $\tau(s)$ can suggest non-neighboring states that the critic has overestimated the value for. To overcome this, we regularize $\tau$ by ensuring the proposed states are reachable in a single step. In particular, we introduce an additional function for ensuring cycle consistency, $C(s, \tau_\psi(s))$ (see Algorithm [\[ref:alg_cycle\]](#ref:alg_cycle){reference-type="ref" reference="ref:alg_cycle"}). We use this regularizer as a substitute for training interactions with $\tau$. As shown in Figure [13](#fig:cycle){reference-type="ref" reference="fig:cycle"}, given a state $s$, we use $\tau(s)$ to predict the value maximizing next state $s'_\tau$. We use the inverse dynamics model $I(s, s'_\tau)$ to determine the action $a$ that would yield this transition. We then plug that action into a forward dynamics model $f(s,a)$ to obtain the final next state, $s'_f$. In other words, we regularize $\tau$ to make predictions that are consistent with the inverse and forward dynamics models.
175
+
176
+ To train the forward dynamics model, we compute: $$\begin{equation}
177
+ \mathcal{L}_\phi = \Vert f_\phi(s,a) - s' \Vert.
178
+ \end{equation}$$
179
+
180
+ We can then compute the cycle loss for $\tau_\psi$: $$\begin{align}
181
+ \mathcal{L}_\psi = -Q_\theta(s, C(s, \tau_\psi(s))) + \beta \Vert \tau_\psi(s) - C(s, \tau_\psi(s)) \Vert.
182
+ \end{align}$$ The second regularization term further encourages prediction of neighbors. The final target for training Q becomes: $$\begin{align}
183
+ y = r + \gamma \min_{i=1,2} Q_{\theta_i'}(s', C(s', \tau_{\psi'}(s')))
184
+ \end{align}$$ We train each of these models concurrently. The full training procedure is described in Algorithm [\[ref:alg_d3g\]](#ref:alg_d3g){reference-type="ref" reference="ref:alg_d3g"}.
185
+
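The role of the cycle $C$ can be seen on a 1-D toy where the agent can move at most one unit per step: a proposal the critic might overvalue but that is not reachable in one step gets projected back to a feasible neighbour. All dynamics here are our illustration:

```python
# Cycle-consistency sketch: C(s, tau(s)) = f(s, I(s, tau(s))). Unreachable
# proposals are snapped to something one action can actually achieve.
MAX_STEP = 1.0  # the agent can move at most one unit per step

def inverse_dynamics(s, s_next):
    # clipped: returns the closest feasible action for the transition
    return max(-MAX_STEP, min(MAX_STEP, s_next - s))

def forward_dynamics(s, a):
    return s + a

def cycle(s, s_prop):
    return forward_dynamics(s, inverse_dynamics(s, s_prop))

print(cycle(0.0, 0.7))  # 0.7 -- a reachable proposal passes through unchanged
print(cycle(0.0, 4.0))  # 1.0 -- an overestimated jump is clamped to one step
```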
186
+ We found it useful to train the models $\tau_\psi$ and $f_\phi$ to predict the difference between states $\Delta = s' - s$ rather than the next state, as has been done in several other works [@nagabandi2018neural; @goyal2018recall; @edwards2018forward]. As such, we compute $s'_\tau = s + \tau(s)$ to obtain the next state from $\tau(s)$, and $s'_f = s + f(s,a)$ to obtain the next state prediction for $f(s,a)$. We describe this implementation detail here for clarity of the paper.
2006.10178/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2020-09-23T09:07:58.289Z" agent="5.0 (X11)" version="13.7.4" etag="ctzuBP_8wGKBP0u_pyft" type="google"><diagram id="Eo46S7L1arZ_KfClc4T0">7Zxdc5s4FIZ/jS83A0ji4zK20+7NzmQmF+1edVQj20yx5QU5Jv31K0CAwSQmrZDkEucicICD/Bx9vQfhGVjsss8JPmz/oSGJZ44VZjOwnDmOYwGb/8stL6XFti1QWjZJFApbY3iKfhJhtIT1GIUkbZ3IKI1ZdGgbV3S/JyvWsuEkoaf2aWsat+96wBtyYXha4fjS+iUK2ba0+shq7H+TaLNl9fcTR3a4Olm4SLc4pKfSVJwDHmZgkVDKyq1dtiBxTq/iUjr69MrRumAJ2bMhFzjlBc84PorvJsrFXqovy4t4yDe5UxzHJKabBO9mYH4gSbQjjCTdY4/NgflpGzHydMCr3MOJVwdu27JdzPdsvrmOMlLFN99PWUJ/1EwBt4jykYSR7NXvaNfkeJ0jlN88eeGnVBfYUOAX9Q0KKNapCZ5d2bZngXOFDYv6sql9N0j5hqDaTxhMgrDbJgxchYThFAhbOgkj0wmvozhe0JgmRWHAp+Ijibyjk7x7Y+TnxZ8k8kAneW/K5KFO8r7p5KX05zrnJIHphMes245O8pWCmSZ6oBW9PWX0UCv6SchMy+kg9lUinobO1IrYeKE56rCpFf2tKVC5w6ZW9JOWoFAreuM1qJRh02ojRlAlYuPF5pipLVsr+mmrUJ3oK7/TRA+0op+2CtWKfgoqNPDbhF1LJWHjReiYo6mlFf2tiVO5Exmt6I0Xp1ImLFoRGy9CpUxMtCKegtgMOs81XZXPNSfyYFMrYuM1pZTRTidiYLx2lDLaaUVsvEaUMtppRWy8FhxThiOdy3vArYlEuei1rkW+NZEoF73OZznAeJE4pj5HOvN+YBLiEeoUj2AK4vFiVazaGcs71OM6Jtl9/p4T/+JkH4rN5SrGaRqtuuhazX5dfMrrKn7+neMh33Us1w9chGy/PFzxtu4C4AXAtgCEPj8zeCMAJLx4s+oq/jO6qAduZUtIjFn03HbfR1zc4ZFG/MZnqioI7oAV1B/U35wqjyk9JisinDRh7PXb8uR4HU8MJxvCLjwVFaLGMKyO9MlfN2Z5mGlRqKayuP8daXXgr7QI5T0/wYGHrDnItzbFf16D0SI97vIN4ZEXpnQqTulWRl606JCS640Xp4fyJb2iFStTzbCn9fo99cuX0Hir4a4nMGH0/BtxkRhdtiUMf3t4K8K1uSj0jcYdIIVx71Py8kKWv076fT3z5tk3e+Yt39E4OU/WiUYcbfb5IMG5FkNuTj1a4fheHNhFYZhfPk8ILx3+Xriy8gE877kKSGg+Q8vc15HRdPQsWLU8qupX++aUXk9gHRmBfUf+4GM0/oXRGIG3RmPYbZyDR2ME2qOxjUYbjWFfouOj01fS6XcFZ522VtHp92VZRun0ncl1+hCBdvOtVoYr6fSNz+HIQOwivz1fUvouv/G5mjEzlC5oo1eaHIbG53BGRW930KtMDkPjVwaMmhwO2uiVJofhJFYMIK+NWGlyuJph/+GIO8Om0uQwej3PYIzUqOetP6/OWwdJDqWzWRk15OJdskBhwgL1JSzMrSHX0ll/Zg25WEFuq6whA9ZlnKWv9nSfcwpxus11fMHgjHRuf8SMU94XFscCF10veivhlUXsa87+znY9sf9vsW8BIPaXmQhOsfNytnM2OhS28r6vT2AuEmFlPknAhYMDqepRVaeeoG4Sa3A6DKK2JwA6nuSlw9CAxSfq6te7a4Q50e++Ovjrjya97iOxYLzoD8idfER/QPSvxszItt+X1vl4MC2UQ6c9gx79KylXzXeb35Yt49j8RC94+B8=</diagram></mxfile>
2006.10178/main_diagram/main_diagram.pdf ADDED
Binary file (15.4 kB). View file
 
2006.10178/paper_text/intro_method.md ADDED
@@ -0,0 +1,113 @@
1
+ # Introduction
2
+
3
+ We address the problem of learning representations of spatial environments, perceived through RGB-D and inertial sensors, such as in mobile robots, vehicles or drones. Deep sequential generative models are appealing, as a wide range of inference techniques such as state estimation, system identification, uncertainty quantification and prediction is offered under the same framework (Curi et al., 2020; Karl et al., 2017a; Chung et al., 2015). They can serve as so-called *world models* or environment simulators (Chiappa et al., 2017; Ha & Schmidhuber, 2018), which have shown impressive performance on a variety of simulated control tasks due to their predictive capability. Nonetheless, learning such models from realistic spatial data and dynamics has not been demonstrated. Existing spatial generative representations are limited to simulated 2D and 2.5D environments (Fraccaro et al., 2018).
4
+
5
+ On the other hand, the state estimation problem in spatial environments—SLAM—has been solved in a variety of real-world settings, including cases with real-time constraints and on embedded hardware (Cadena et al., 2016; Engel et al., 2018; Qin et al., 2018; Mur-Artal & Tardós, 2017). While modern visual SLAM systems provide high inference accuracy, they lack a predictive distribution, which is a prerequisite for downstream perception–control loops.
6
+
7
+ Our approach scales the above deep sequential generative models to real-world spatial environments. To that end, we integrate assumptions from multiple-view geometry and rigid-body dynamics commonly used in modern SLAM systems. With that, our model maintains the favourable properties of generative modelling and enables prediction. We use the recently published approach of Mirchev et al. (2019) as a starting point, in which a variational state-space model, called DVBF-LM, is extended with a spatial map and an attention mechanism. Our contributions are as follows:
8
+
9
+ - We use multiple-view geometry to formulate and integrate a differentiable raycaster, an attention model and a volumetric map.
10
+ - We show how to integrate rigid-body dynamics into the learning of the model.
11
+ - We demonstrate the successful use of variational inference for solving direct dense SLAM for the first time, obtaining performance close to that of state-of-the-art localisation methods.
12
+ - We demonstrate strong predictive performance using the learned model, by generating spatially-consistent real-world drone-flight data enriched with realistic visuals.
13
+ - We demonstrate the model's applicability to downstream control tasks by estimating the cost-to-go for a collision scenario.
14
+
15
+ ![](_page_1_Figure_1.jpeg)
16
+
17
+ Figure 1: Illustration of the proposed quadcopter localisation and dense mapping. Left: top-down view of the localisation estimate. Right: generative depth and colour reconstructions for one time step.
18
+
19
+ The contributions allow the reformulated model to tackle realistic RGB-D scenarios with 6 DoF.
20
+
21
+ # Method
22
+
23
+ **Background** We adhere to the graphical model of DVBF-LM (Mirchev et al., 2019), but we introduce novel design choices for every model component and implement the overall inference differently, to allow for real-world 3D modelling. In the following, we will first describe the assumed factorisation and then explain the introduced modifications. The assumed joint distribution of all variables is:
24
+
25
+ $$p(\mathbf{x}_{1:T}, \mathbf{z}_{1:T}, \mathbf{m}_{1:T}, \mathcal{M} \mid \mathbf{u}_{1:T-1})$$
26
+
27
+ $$= p(\mathcal{M})\rho(\mathbf{z}_1) \prod_{t=1}^{T} p(\mathbf{x}_t \mid \mathbf{m}_t) p(\mathbf{m}_t \mid \mathbf{z}_t, \mathcal{M}) \prod_{t=1}^{T-1} p(\mathbf{z}_{t+1} \mid \mathbf{z}_t, \mathbf{u}_t), \tag{1}$$
28
+
29
+ where $\mathbf{x}_{1:T}$ are observations, $\mathbf{z}_{1:T}$ agent states, $\mathbf{m}_{1:T}$ map charts and $\mathbf{u}_{1:T-1}$ conditional inputs (controls). The factorisation defines a traditional state-space model extended with a global map variable $\mathcal{M}$ . For a single step t, an observation $\mathbf{x}_t$ is generated from a map chart $\mathbf{m}_t$ —the relevant extract from the global $\mathcal{M}$ around the current agent pose $\mathbf{z}_t$ (cf. fig. 2a). Chart extraction is given by $p(\mathbf{m}_t \mid \mathbf{z}_t, \mathcal{M})$ , which can be seen as an attention mechanism. In this graphical model, SLAM is equivalent to inference of the agent states $\mathbf{z}_{1:T}$ and the map $\mathcal{M}$ . For the remainder of this work, we assume all observations $\mathbf{x}_t \in \mathbb{R}^{w \times h \times 4}$ are RGB-D images. Next, we will describe the functional forms of the map $\mathcal{M}$ , the attention $p(\mathbf{m}_t \mid \mathbf{z}_t, \mathcal{M})$ , the emission $p(\mathbf{x}_t \mid \mathbf{m}_t)$ and the states $\mathbf{z}_{1:T}$ .
30
+
31
+ ![](_page_3_Figure_1.jpeg)
32
+
33
+ Figure 2: (a) One time step of the proposed probabilistic graphical model. (b) Linear interpolation during ray casting for a single ray in the emission model. $d_k$ is the depth corresponding to the first ray value that exceeds $\tau$ . The output depth d is formed by linearly interpolating between $d_{k-1}$ and $d_k$ based on the occupancy values $\mathbf{o}^{i,j,k-1}$ and $\mathbf{o}^{i,j,k}$ .
34
+
35
+ **Geometric map** The map random variable $\mathcal{M} = (\mathcal{M}^{\text{occ}}, \mathcal{M}^{\text{col}})$ consists of two components. $\mathcal{M}^{\text{occ}} \in \mathbb{R}^{l \times m \times n}$ is a spatially arranged 3D grid of scalar values that represent occupancy. $\mathcal{M}^{\text{col}}$ represents the parameters of a feed-forward neural network $f_{\mathcal{M}^{\text{col}}} : \mathbb{R}^3 \to [0, 255]^3$ . The network assigns an RGB colour value to each point in space. In this work, the network weights are deterministic and point-estimated via maximum likelihood, the fully-Bayesian treatment of the colour map is left for future work. The prior and approximate posterior distributions over the occupancy map are:
36
+
37
+ $$p(\boldsymbol{\mathcal{M}}^{\text{occ}}) = \prod_{i,j,k} \mathcal{N}(\boldsymbol{\mathcal{M}}_{i,j,k}^{\text{occ}} \mid 0,1), \quad q_{\boldsymbol{\phi}}(\boldsymbol{\mathcal{M}}^{\text{occ}}) = \prod_{i,j,k} \mathcal{N}(\boldsymbol{\mathcal{M}}_{i,j,k}^{\text{occ}} \mid \mu_{i,j,k}, \sigma_{i,j,k}^2).$$
38
+
39
+ Here and for the rest of this work $q_{\phi}$ will denote a variational approximate posterior distribution, with all its optimisable parameters summarised in $\phi$. We assume $p(\mathcal{M}^{\text{occ}})$ and $q_{\phi}(\mathcal{M}^{\text{occ}})$ factorise over grid cells. The variational parameters $\mu_{i,j,k}$, $\sigma_{i,j,k}$ are optimised with Bayes by Backprop (Blundell et al., 2015).
40
+
41
+ **Attention** In the proposed model, the composition of the attention $p(\mathbf{m}_t \mid \mathbf{z}_t, \mathcal{M})$ and the emission $p(\mathbf{x}_t \mid \mathbf{m}_t)$ implements volumetric raycasting. We engineer them based on our understanding of geometry to ensure generalisation across unseen environments. The attention $p(\mathbf{m}_t \mid \mathbf{z}_t, \mathcal{M})$ forms latent charts $\mathbf{m}_t$, which correspond to extracts from the map $\mathcal{M}$ around $\mathbf{z}_t$. We identify $\mathbf{m}_t$ with the part of the map contained in the frustum of the current camera view. To attend to that region, first the intrinsic camera matrix $\mathbf{K}$ (assumed to be known) and the agent pose $\mathbf{z}_t$ are used to cast a ray for every pixel $[i,j]^T$ in the reconstructed observation. The ray is then discretised equidistantly along the depth dimension into $r$-many points, resulting in a collection of 3D world coordinates $\mathbf{p}_t \in \mathbb{R}^{w \times h \times r \times 3}$. Depth candidate values $d \in \{k\epsilon\}_{1 \leq k \leq r}$ are associated with each point along a ray, where $\epsilon$ is a resolution hyperparameter. The latent chart $\mathbf{m}_t = (\mathbf{o}_t, \mathbf{c}_t)$ factorises into an occupancy chart $\mathbf{o}_t \in \mathbb{R}^{w \times h \times r}$ and a colour chart $\mathbf{c}_t \in \mathbb{R}^{w \times h \times r \times 3}$. Let $p_t^{ijk} \in \mathbb{R}^3$ be a 3D point in the spanned camera frustum. To form the occupancy chart $\mathbf{o}_t$, cells from the map $\mathcal{M}^{\text{occ}}$ around $p_t^{ijk}$ are combined with a weighted kernel $o_t^{ijk} = \sum_{l,h,s} \mathcal{M}_{l,h,s}^{\text{occ}} \alpha_{l,h,s}(p^{ijk})$. Note that here $l,h,s$ are indices of the occupancy map voxels. We choose a trilinear interpolation kernel for $\alpha$, merging only eight map cells per point. This makes the attention fast and differentiable w.r.t. $\mathbf{z}_t$. The colour chart $\mathbf{c}_t = f_{\mathcal{M}^{\text{col}}}(\mathbf{p}_t)$ is formed by applying $f_{\mathcal{M}^{\text{col}}}$, the colour neural network, *point-wise* to each 3D point. In this work, we keep the chart $\mathbf{m}_t$ deterministic. The full attention procedure can be described as:
42
+
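The trilinear kernel $\alpha$ can be sketched as follows on a unit-spaced grid (a minimal illustration; the real map uses its own resolution and indexing):

```python
# Trilinear interpolation of a 3-D occupancy grid at a continuous query
# point: only the 8 surrounding voxel corners contribute, each weighted
# by its proximity to the query.
def trilinear(grid, p):
    x, y, z = p
    i0, j0, k0 = int(x), int(y), int(z)
    fx, fy, fz = x - i0, y - j0, z - k0
    out = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((fx if di else 1 - fx)
                     * (fy if dj else 1 - fy)
                     * (fz if dk else 1 - fz))
                out += w * grid[i0 + di][j0 + dj][k0 + dk]
    return out

# 2x2x2 grid: occupancy 1.0 at corner (1,1,1), 0.0 elsewhere
grid = [[[0.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]]]
print(trilinear(grid, (0.5, 0.5, 0.5)))  # 0.125 -- the centre weights each corner equally
```

Because each query touches only eight cells, the lookup is cheap and differentiable in the query point, which is what makes the attention differentiable in the pose.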
43
+ $$p(\mathbf{m}_t \mid \mathbf{z}_t, \mathbf{\mathcal{M}}) = \prod_{ijk} \delta(\mathbf{m}_t^{ijk} = f_A(\mathbf{\mathcal{M}}, p^{ijk})), \quad p^{ijk} = \mathbf{T}(\mathbf{z}_t) \mathbf{K}^{-1}[i, j, 1]^T \underbrace{d}_{:=k\epsilon}.$$
44
+
45
+ Here $\mathbf{T}(\mathbf{z}_t) \in \mathbb{SE}(3)$ denotes the rigid camera transformation defined by the current agent state $\mathbf{z}_t$ and i, j, k index the points lying inside the attended camera frustum.
46
+
47
+ **Emission through ray casting** The emission model factorises over the observed pixels:
48
+
49
+ $$p(\mathbf{x}_t \mid \mathbf{m}_t) = \prod_{ij} p(\mathbf{x}_t^{ij} \mid \mathbf{m}_t), \quad p(\mathbf{x}_t^{ij} \mid \mathbf{m}_t) = p(d_t^{ij}, \tilde{\mathbf{c}}_t^{ij} \mid \mathbf{o}_t, \mathbf{c}_t).$$
50
+
51
+ It operates on the extracted chart $\mathbf{m}_t = (\mathbf{o}_t, \mathbf{c}_t)$ . Here $\mathbf{x}_t^{ij} \in \mathbb{R}^4$ denotes an RGB-D pixel value, i.e. for each pixel $[i,j]^T$ we reconstruct a depth $d_t^{ij}$ and a colour value $\tilde{\mathbf{c}}_t^{ij}$ . The mean of the depth value $d_t^{ij}$ is formed by a function $f_E$ :
52
+
53
+ $$f_E(\mathbf{o}_t)^{ij} = \epsilon \cdot \min_{k \in [r]} k \quad \text{s.t.}\ \mathbf{o}_t^{ijk} > \tau.$$
55
+
56
+ $f_E$ traces the ray for pixel $[i,j]^T$ , searching for the minimum depth $d=\epsilon k$ for which the occupancy value $\mathbf{o}_t^{ijk}$ exceeds a threshold $\tau$ (a hyperparameter). Since the above min operation is not differentiable in $\mathbf{o}_t$ , we linearly interpolate between the depth value for the first ray hit and its predecessor to form the mean of the emitted depth (cf. fig. 2b):
57
+
58
+ $$\mu_{d_t}^{ij} = \alpha f_E(\mathbf{o}_t)^{ij} + (1 - \alpha)(f_E(\mathbf{o}_t)^{ij} - \epsilon), \quad \alpha = \frac{\tau - \mathbf{o}_t^{i,j,k-1}}{\mathbf{o}_t^{i,j,k} - \mathbf{o}_t^{i,j,k-1}}.$$
59
+
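Per ray, the emission thus reduces to a first-hit search plus the interpolation above; a sketch with illustrative values for $\epsilon$, $\tau$ and the occupancy profile:

```python
# Per-ray depth emission: find the first sample whose occupancy exceeds
# tau, then linearly interpolate between that hit and its predecessor.
EPS, TAU = 0.1, 0.5

def ray_depth(occ):
    for k, o in enumerate(occ):
        if o > TAU:
            d_hit = EPS * (k + 1)            # list index k is depth sample k+1
            o_prev = occ[k - 1] if k > 0 else 0.0
            alpha = (TAU - o_prev) / (o - o_prev)
            return alpha * d_hit + (1 - alpha) * (d_hit - EPS)
    return EPS * len(occ)                    # no hit: emit the max range

print(ray_depth([0.0, 0.2, 0.4, 0.8]))       # first hit at the 4th sample
```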
60
+ The mean of the emitted colour $\mu_{\tilde{\mathbf{c}}_t}^{ij} = \mathbf{c}_t^{ijk}$ directly corresponds to the k-th element of the attended colour values, where k is the index of the first hit from raycasting above. A heteroscedastic Laplace distribution is assumed for both the emitted depth and colour values:
61
+
62
+ $$p(\mathbf{x}_t^{ij} \mid \mathbf{m}_t) = \text{Laplace}(\mathbf{x}_t^{ij}; (\mu_{d_t}^{ij}, \boldsymbol{\mu}_{\tilde{\mathbf{c}}_t}^{ij}), \text{diag}(\boldsymbol{\sigma}_E^{ij})).$$
63
+
64
+ **Agent states** All agent states are represented as vectors $\mathbf{z}_t = (\boldsymbol{\lambda}_t, \boldsymbol{\omega}_t, \mathbf{z}_t^{\text{rest}}) \in \mathbb{R}^{d_{\mathbf{z}}}$. $\boldsymbol{\lambda}_t \in \mathbb{R}^3$ is the agent location in space. $\boldsymbol{\omega}_t \in \mathbb{H}^4$ is the agent orientation, represented as a quaternion. $\mathbf{z}_t^{\text{rest}} \in \mathbb{R}^{d_{\mathbf{z}}-7}$ is the remainder. Depending on the transition model used, $\mathbf{z}_t^{\text{rest}}$ can be $\dot{\boldsymbol{\lambda}}_t$ alone or it can contain an abstract latent portion not explicitly matching physical quantities. The approximate posterior variational family over the agent states factorises over time:
65
+
66
+ $$q_{\boldsymbol{\phi}}(\mathbf{z}_{1:T}) = \prod_t q_{\boldsymbol{\phi}}(\mathbf{z}_t) = \prod_t \mathcal{N}(\mathbf{z}_t \mid \boldsymbol{\mu}_t^{\mathbf{z}}, \operatorname{diag}(\boldsymbol{\sigma}_t^{\mathbf{z}})^2).$$
67
+
68
+ Here $\mu_t^{\mathbf{z}} \in \mathbb{R}^{d_{\mathbf{z}}}$ and $\sigma_t^{\mathbf{z}} \in \mathbb{R}^{d_{\mathbf{z}}}$ are free variables for each latent state and are optimised with SGVB (Kingma & Welling, 2014). Notably, the above factorisation over states bears similarity to pose-graph optimisation. One can see the individual terms $q_{\phi}(\mathbf{z}_t)$ as graph nodes, and the loss terms induced by the transition and emission in the objective presented next as the edge constraints.
69
+
70
+ **Overall objective** The elements described so far, together with the transition $p(\mathbf{z}_{t+1} \mid \mathbf{z}_t, \mathbf{u}_t)$ discussed in the next section, form the probabilistic graphical model in eq. (1). The assumed variational approximate posterior is
71
+
72
+ $$q_{\phi}(\mathbf{z}_{1:T})q_{\phi}(\mathcal{M}) \approx p(\mathbf{z}_{1:T}, \mathcal{M} \mid \mathbf{x}_{1:T}, \mathbf{u}_{1:T-1}).$$
73
+
74
+ For the optimisation objective we use the negative *evidence lower bound* (ELBO) (Jordan et al., 1999), given as
75
+
76
+ $$\mathcal{L}_{\text{elbo}} = -\mathbb{E}_{q} \left[ \sum_{t=1}^{T} \log p(\mathbf{x}_{t} \mid \mathbf{m}_{t}) \right] + \text{KL}(q_{\phi}(\mathcal{M}) \mid\mid p(\mathcal{M})) + \mathbb{E}_{q} \left[ \sum_{t=2}^{T} \text{KL}(q_{\phi}(\mathbf{z}_{t}) \mid\mid p(\mathbf{z}_{t} \mid \mathbf{z}_{t-1}, \mathbf{u}_{t-1})) \right].$$
77
+ (2)
78
+
79
+ We employ the approximate particle optimisation scheme from (Mirchev et al., 2019) to deal with long data sequences. The only optimised parameters are $\phi$ , containing the parameters of the map and the agent states.
80
+
81
+ **Making image reconstruction tractable** Using the full observations during inference is not feasible, as raycasting for all pixels is too computationally demanding. To ensure tractability of the inference method we therefore use reconstruction sampling (Dauphin et al., 2011), emitting a random part of $\mathbf{x}_t$ at a time, by randomly selecting $c$-many pixel coordinates $[i,j]^T$ for every gradient step. Here $c$ is a constant much smaller than the image size $wh$, speeding up gradient updates by a few orders of magnitude. Note that this results in an unbiased, faster and more memory-efficient Monte Carlo approximation of the original objective, avoiding loss of information due to subsampling or sparse feature selection.
82
+
83
+ <sup>1</sup>$\mathbf{o}_{t}^{ijk}$ is set to 0 for $k \leq 1$ and $f_{E}(\mathbf{o}_{t}) = r\epsilon$ if no value exceeds $\tau$ along the ray.
84
+
85
+ The introduced model factorisation includes a transition $p(\mathbf{z}_{t+1} \mid \mathbf{z}_t, \mathbf{u}_t)$ , which allows the natural inclusion of agent movement priors. This is reflected in the corresponding KL terms in eq. (2). Note that using variational inference lets us integrate any differentiable transition model as-is, without additional linearisation. In the following, we assume the agent has an inertial measurement unit (IMU) providing readings $\ddot{\lambda}_t^{\text{imu}}$ (linear acceleration) and $\dot{\omega}_t^{\text{imu}}$ (angular velocity) over time, which we choose to treat as conditional inputs $\mathbf{u}_t = (\ddot{\lambda}_t^{\text{imu}}, \dot{\omega}_t^{\text{imu}})$ .
86
+
87
+ **Engineering rigid-body dynamics** In the absence of learning, one can use an engineered transition prior that integrates the IMU sensor readings over time. The latent state $\mathbf{z}_t = (\lambda_t, \omega_t, \dot{\lambda}_t)$ then contains the location, orientation and linear velocity of the agent at every time step. The transition is defined as:
88
+
89
+ $$p(\mathbf{z}_{t+1} \mid \mathbf{z}_t, \mathbf{u}_t) = \mathcal{N}(\mathbf{z}_{t+1} \mid f_T(\mathbf{z}_t, \mathbf{u}_t), \operatorname{diag}(\boldsymbol{\sigma}_T)^2).$$
90
+
91
+ The state update $f_T$ implements standard rigid-body dynamics using Euler integration (see appendix D.3). This engineered model will serve as a counterpart for the learned transition model presented next.
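A simplified sketch of such an Euler update. This ignores the body-to-world rotation of the IMU acceleration and uses an illustrative gravity constant and step size; the paper's appendix D.3 defines the full model:

```python
def f_T(z, u, dt=0.01, g=(0.0, 0.0, -9.81)):
    """Minimal Euler-integration sketch of the engineered transition.
    z = (location, orientation, linear velocity),
    u = (imu linear acceleration, imu angular velocity).
    Orientation is integrated directly as Euler angles for brevity."""
    lam, omega, lam_dot = z
    a_imu, w_imu = u
    new_lam = [p + dt * v for p, v in zip(lam, lam_dot)]
    new_omega = [o + dt * w for o, w in zip(omega, w_imu)]
    new_lam_dot = [v + dt * (a + gk) for v, a, gk in zip(lam_dot, a_imu, g)]
    return (new_lam, new_omega, new_lam_dot)

# Hovering agent: measured thrust cancels gravity, small yaw rate.
z0 = ([0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0])
u0 = ([0.0, 0.0, 9.81], [0.0, 0.0, 0.5])
z1 = f_T(z0, u0)
```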
92
+
93
+ **Learning a dynamics model** Engineered models of the agent movement are often imperfect or not available. We therefore provide a method for learning a fully-probabilistic transition model from streams of prerecorded controls and agent pose observations, which we can then seamlessly include as a prior in the full model. We do not learn the transition with per-step, fully-supervised regression. Instead we formulate a generative sequence model for T time steps. This allows us to separate the aleatoric uncertainty in the observed agent states from the uncertainty in the transition itself. We follow the literature on variational state-space models (Fraccaro et al., 2016; Karl et al., 2017a). We assume we have a sequence of locations $\hat{\lambda}_{1:T}$ and orientations $\hat{\omega}_{1:T}$ as observations, and a sequence of IMU readings, as well as per-rotor revolutions per minute (RPM) and pulse-width modulation (PWM) signals, as conditional inputs $\mathbf{u}_{1:T-1} = (\ddot{\lambda}_{1:T-1}^{imu}, \dot{\omega}_{1:T-1}^{imu}, \mathbf{u}_{1:T-1}^{rpm}, \mathbf{u}_{1:T-1}^{pwm})$ . We define the generative state-space model:
94
+
95
+ $$p(\hat{\boldsymbol{\lambda}}_{1:T}, \hat{\boldsymbol{\omega}}_{1:T}, \mathbf{z}_{1:T} \mid \mathbf{u}_{1:T-1}) = \delta(\mathbf{z}_1)p(\hat{\boldsymbol{\lambda}}_1, \hat{\boldsymbol{\omega}}_1 \mid \mathbf{z}_1) \prod_{t=1}^{T-1} p_{\boldsymbol{\theta}_T}(\mathbf{z}_{t+1} \mid \mathbf{z}_t, \mathbf{u}_t)p(\hat{\boldsymbol{\lambda}}_{t+1}, \hat{\boldsymbol{\omega}}_{t+1} \mid \mathbf{z}_{t+1}).$$
96
+
97
+ The objective is to learn generative transition parameters $\boldsymbol{\theta}_T$ , such that the marginal likelihood of observed agent poses $p_{\boldsymbol{\theta}_T}(\hat{\boldsymbol{\lambda}}_{1:T},\hat{\boldsymbol{\omega}}_{1:T}\mid \mathbf{u}_{1:T-1})$ is maximised. The latent state is $\mathbf{z}_t = (\boldsymbol{\lambda}_t, \boldsymbol{\omega}_t, \dot{\boldsymbol{\lambda}}_t, \mathbf{z}_t^{\text{rest}})$ , identifying its first three components with location, orientation and linear velocity. The remainder $\mathbf{z}_t^{\text{rest}}$ acts as an abstract state part. Its role is to absorb any quantities that might affect the transition, for example higher moments of the dynamics or sensor biases accumulated over previous time steps. The transition is implemented as a residual neural network on top of Euler integration:
98
+
99
+ $$\begin{split} &p_{\boldsymbol{\theta}_T}(\mathbf{z}_{t+1} \mid \mathbf{z}_t, \mathbf{u}_t) = \mathcal{N}(\mathbf{z}_{t+1} \mid \boldsymbol{\mu}_{t+1}, \mathrm{diag}(\boldsymbol{\sigma}_{t+1})^2) \\ &\boldsymbol{\mu}_{t+1} = \begin{bmatrix} f_T(\mathbf{z}_t, \mathbf{u}_t) \\ \mathbf{0} \end{bmatrix} + \mathrm{MLP}_{\boldsymbol{\mu}}(\mathbf{z}_t, \mathbf{u}_t), \quad \boldsymbol{\sigma}_{t+1} = \mathrm{MLP}_{\boldsymbol{\sigma}}(\mathbf{z}_t, \mathbf{u}_t), \end{split}$$
100
+
101
+ where $f_T$ is the engineered Euler integration from the previous section and the abstract remainder of the latent state is formed entirely by the network (MLP). This strong inductive bias shapes the transition to resemble regular integration in the beginning of training, exploiting engineering knowledge, while still allowing the MLP to eventually take over and correct biases as necessary.
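The residual construction of the mean can be sketched as follows, with stand-in callables for the Euler step and $\mathrm{MLP}_{\mu}$; a real implementation would use a deep-learning framework with learned weights:

```python
def residual_mean(z, u, f_T, mlp, d_rest):
    """Mean of the learned transition: engineered Euler prediction for the
    physical state part, zeros for the abstract remainder, plus an MLP
    residual over the full latent dimension (illustrative names)."""
    base = f_T(z, u) + [0.0] * d_rest
    res = mlp(z, u)
    return [b + r for b, r in zip(base, res)]

# Toy instantiation: a trivial "integration" step and a zero-initialised
# MLP, so the mean starts out as pure Euler integration, matching the
# inductive bias described in the text.
f = lambda z, u: [zi + u[0] for zi in z]
mlp0 = lambda z, u: [0.0] * 4
mu = residual_mean([1.0, 2.0, 3.0], [0.5], f, mlp0, d_rest=1)
```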
102
+
103
+ The emission isolates the location and orientation from the latent state as its mean:
104
+
105
+ $$p(\hat{\boldsymbol{\lambda}}_t, \hat{\boldsymbol{\omega}}_t \mid \mathbf{z}_t) = \mathcal{N}(\hat{\boldsymbol{\lambda}}_t, \hat{\boldsymbol{\omega}}_t \mid (\boldsymbol{\lambda}_t, \boldsymbol{\omega}_t), \operatorname{diag}(\boldsymbol{\sigma})^2).$$
106
+
107
+ The inference over the latent states uses Gaussian fusion as per (Karl et al., 2017b) and the necessary inverse emission is given by a bidirectional RNN that looks into all observations and conditions:
108
+
109
+ $$\hat{q}(\mathbf{z}_t \mid \hat{\boldsymbol{\lambda}}_{1:T}, \hat{\boldsymbol{\omega}}_{1:T}, \mathbf{u}_{1:T-1}) = \mathcal{N}(\mathbf{z}_t \mid \text{RNN}(\hat{\boldsymbol{\lambda}}_{1:T}, \hat{\boldsymbol{\omega}}_{1:T}, \mathbf{u}_{1:T-1})).$$
110
+
111
+ We minimise the negative ELBO w.r.t. $\theta_T$ , omitting the conditions in q for brevity:
112
+
113
+ $$\mathcal{L}(\boldsymbol{\theta}_T) = -\mathbb{E}_q \left[ \sum_{t=1}^T \log p(\hat{\boldsymbol{\lambda}}_t, \hat{\boldsymbol{\omega}}_t \mid \mathbf{z}_t) \right] + \mathbb{E}_q \left[ \sum_{t=2}^T \mathrm{KL}(q(\mathbf{z}_t) \mid\mid p_{\boldsymbol{\theta}_T}(\mathbf{z}_t \mid \mathbf{z}_{t-1}, \mathbf{u}_{t-1})) \right].$$
2009.04861/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2020-07-31T23:20:35.316Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/13.5.7 Chrome/83.0.4103.122 Electron/9.1.2 Safari/537.36" version="13.5.7" etag="O-NXp4wpmJPeqP-vE4zg" type="device"><diagram id="_lvnzHi1Pyc57fH_6NIa">7Vxbb6M4FP41SLMPEwEGkjw2me5FmpWqnd3Z7tPIDW6ChmAWnDbZX7822NxsCm24TBNG1QgfwODznfMdfyaggfX++EsEw93v2EW+ZuruUQOfNNNcOAv6PzOcUoNjzFPDNvLc1GTkhi/ef4gbdW49eC6KSwcSjH3ihWXjBgcB2pCSDUYRfi4f9oj98lVDuEWS4csG+rL1b88lOz4sc57bf0XedieubDjLdM8eioN5F/EOuvg5NSWDA7caWEcYk3Rrf1wjn/lO+CX1wM81e7Mbi1BA2pxgpic8Qf/Ax8bvi5zEYCN8CFzEjjc0sHreeQR9CeGG7X2m6FLbjux9vhtGGw6XwZq8dxQRdKy9QyMbN40XhPeIRCd6CD/BAhx0ESu8+Zw73rS4bVdwOhDBAjnY26zr3B90g7tE7R7ww7tnPqJ3LIV3HJ9eYfVAN7YkGWFqeMR0OEW/Of8esNjxMU58ckMPMJzwmO8Uvdwe4T6kJ5r6HU1z0Se9vbTb8qWouXD5ClrU0aQMSUwi/B2tsY8jaglwgNhdeb5fMUHf2wa0uaHAIGpfMdg8ygk3fMfec112GWUM5FGis2t+R2Sz440OgsAwy1Ewl6PAALYcBWYHQWAPFQRrHx7iKQZqeXI5Xgw4ihig5GDdf6P3rGs2hS5w+VaAWQzcf8vDJMqRsjRBKi/iNRQ0cmxoJtCTf2yPqN0dIWgbZQSzrC5AuFBxeQcIziUEo0gCAgXuDZs9MTf7MI69TRkWdPTIPdue6YZo/8P8M9MpTaTtT0fusKRxKjTuUOTR+2bgpbaAjuG+2Ej7skUz7yppnUpIILcyhYvxIdpwE+B1ncBoi0gphlvAFSEfEu+p3L3K+fzUO+wlvMdhBnYZZrCo4JfeKT+rOHerdFTNeKMaCOnwpI4ohvBUOCxkB8RStGQjbhVACymA/gpdSJAcRBSYz/CBCgKadoiSPnxIdumVmVLrfNaKU33emZbRWkNGzs+EuG1+LVsy5MXx4pYRRY8cuRiOI8WlzyRJQWsZS+bUpjdxm2BYsZ2e1kSurIsqu7ZgSUcmSRGF/bOkrVeAtuy3sWQWDqIjZ0SWNAwphL7iH54kDf1MiFtnmLwccW6GOcUM+6jPDMtsyrFXpEsxNyxzsNyokuDyrblRWT0wbTCrdDVodsjLLe8hO8yhskO13pLMDDJZJc8dpgnGeROMbPVsiAmGvJbSKf1R9nvbBGPezwRDBHSRRe32iJ3JolX5ZNpv1GFVFgWmNVsW/43IqPLKzHtgVHsoRu1k2aMm0vuPYHNhliPYWb4UeG3jWabA+YgR/E4XFoyhVhYM1dJCxTW0YIZsk4+i6A5VUY4pwIQ/p2Leoy4g0AsSajeStu/DMPZy/252nu9+hid8IOIyotXJ842qGFSsjBqqqmx1UJVNWfbXOpgOiXjQ/4MFTLBt42vZl26Ewz9FRWSGJI9QdPuEWDqlNsVEjOCQ7/TRozj3AROC97wRccdknSZesVf0j45xzZZWbTqaNW0beZv+scMjssYBnZNBL8EMwZg8o7g9wEs1wk0IdvF4wpRVt/SM6r7Vo6NzcC7SCEMAU489+klZ2VHKQUEXsLbCwqzR8wUsnL6gkBW+BMVpgkIQn+Jxfm/IqH7rUEGGzR3jCR3psfsQ8KjE/1SFOqpCpzJQgxQlWex/YOOlJv2nS80na7zCIyth
48q8PEJNkfW1Zq7m1+n3QauFrJprnX4xolDxKLg/UfgK1T2V49qMWTTWg97qr1iwn0RhMaJHqc2ghT6/KlFYA8XwBRy0kOtXJwpfRmfIMg9avJ8wVaG3VqERRCGQVf4H/cJFYRrE4xQeWYPrV+blEWqK4m2BW1OjKbUElyrJG7w/aM1QafILl4aqH/H0Jg3BK7T3VJRrM6bm+fkgVVgl7q9YGoJmmd4XW1ktVPo1ScM6KIYv41YL0X5t0rABnSHLvPg1+lSF+qhCI0hDS9b64nmhcanSMA3icQqPrMQvVRrWeXmEmqL4mMClS8MG7w9aM1Tv8V+4NAT6gNLQeoX2nopybcY4jVWhvyqsEvdXLA2tZpneG1u1UOnXJA3roBi+jNstRPu1ScMGdIYs87ZKuU9VqKMqNII0tGWtL54aXqw0tGumeUPkj6zEL1Wc1Hl5hJqi+v376kr9Pmi1UH3gr9VXB6avDNQCKL1iu5QB7esrA7as8t/zS88AGDOr+5eegb4Y81MotrxS8C5ee7ZrZkWdv/Zs1wtxr/o10Exb3MSnYLOLcIAPTHvAPSOH4CEOiwdnfPUb8xTcEI9OE9VKJbN5tepFZrf2SFThLE9x2SwWHgiO+QeFu2EmUPmUrIKYsiW0M5mJNvMPXac5kn8tHNz+Dw==</diagram></mxfile>
2009.04861/main_diagram/main_diagram.pdf ADDED
Binary file (26 kB). View file
 
2009.04861/paper_text/intro_method.md ADDED
@@ -0,0 +1,153 @@
1
+ # Introduction
2
+
3
+ Tsetlin machines (TMs) [\(Granmo,](#page-9-0) [2018\)](#page-9-0) have recently demonstrated competitive results in terms of accuracy, memory footprint, energy, and learning speed on diverse benchmarks (image classification, regression, natural language understanding, and speech processing) [\(Berge et al.,](#page-9-0) [2019;](#page-9-0) [Yadav et al.,](#page-10-0) [2021a;](#page-10-0) [Abeyrathna et al.,](#page-9-0) [2020;](#page-9-0) [Granmo et al.,](#page-9-0) [2019;](#page-9-0) [Wheeldon et al.,](#page-10-0) [2020;](#page-10-0) [Abeyrathna et al.,](#page-9-0) [2021;](#page-9-0) [Lei](#page-9-0) [et al.,](#page-9-0) [2021\)](#page-9-0). They use frequent pattern mining and resource allocation principles to extract common patterns in the data, rather than relying on minimizing output error, which is prone to overfitting. Unlike the intertwined nature of pattern representation in neural networks, a TM decomposes problems into self-contained patterns, expressed as conjunctive clauses in propositional logic (i.e., in the form if input X satisfies condition A and not condition B then output y = 1). The clause outputs, in turn, are combined into a classification decision through summation and thresholding, akin to a logistic regression function, however, with binary weights and a unit step output function. Being based on the human-interpretable disjunctive normal form [\(Valiant,](#page-10-0) [1984\)](#page-10-0), like Karnaugh maps [\(Karnaugh,](#page-9-0) [1953\)](#page-9-0), a TM can map an exponential number of input feature value combinations to an appropriate output [\(Granmo,](#page-9-0) [2018\)](#page-9-0).
4
+
5
+ **Recent progress on TMs** Recent research reports several distinct TM properties. The TM can be used in convolution, providing competitive performance on MNIST, Fashion-MNIST, and Kuzushiji-MNIST, in comparison with CNNs, K-Nearest Neighbor, Support Vector Machines, Random Forests, Gradient Boosting, BinaryConnect, Logistic Circuits and ResNet [\(Granmo et al.,](#page-9-0) [2019\)](#page-9-0). The TM has also achieved promising results in text classification [\(Berge et al.,](#page-9-0) [2019\)](#page-9-0), word sense disambiguation [\(Yadav et al.,](#page-10-0) [2021b\)](#page-10-0), novelty detection [\(Bhattarai et al.,](#page-9-0) [2021c;b\)](#page-9-0), fake news detection [\(Bhattarai et al.,](#page-9-0) [2021a\)](#page-9-0), semantic relation analysis [\(Saha et al.,](#page-10-0) [2020\)](#page-10-0), and aspect-based sentiment analysis [\(Yadav et al.,](#page-10-0) [2021a\)](#page-10-0) using the conjunctive clauses to capture textual patterns. Recently, regression TMs compared favorably with Regression Trees, Random Forest Regression, and Support Vector Regression [\(Abeyrathna et al.,](#page-9-0) [2020\)](#page-9-0). The above TM approaches have further been enhanced by various techniques. <span id="page-1-0"></span>By introducing real-valued clause weights, it turns out that the number of clauses can be reduced by up to 50× without loss of accuracy [\(Phoulady et al.,](#page-10-0) [2020\)](#page-10-0). Also, the logical inference structure of TMs makes it possible to index the clauses on the features that falsify them, increasing inference- and learning speed by up to an order of magnitude [\(Gorji et al.,](#page-9-0) [2020\)](#page-9-0). Multi-granular clauses simplify the hyper-parameter search by eliminating the pattern specificity parameter [\(Gorji et al.,](#page-9-0) [2019\)](#page-9-0). In [\(Abeyrathna et al.,](#page-9-0) [2021\)](#page-9-0), stochastic searching on the line automata [\(Oommen,](#page-10-0) [1997\)](#page-10-0) learn integer clause weights, performing on-par or better than Random Forest, Gradient Boosting, Neural Additive Models, StructureBoost and Explainable Boosting Machines. Closed form formulas for both local and global TM interpretation, akin to SHAP, were proposed by [Blakely & Granmo](#page-9-0) [\(2020\)](#page-9-0). From a hardware perspective, energy usage can be traded off against accuracy by making inference deterministic [\(Abeyrathna et al.,](#page-9-0) [2020\)](#page-9-0). Additionally, [Shafik et al.](#page-10-0) [\(2020\)](#page-10-0) show that TMs can be fault-tolerant, completely masking stuck-at faults. Recent theoretical work proves convergence to the correct operator for "identity" and "not". It is further shown that arbitrarily rare patterns can be recognized, using a quasi-stationary Markov chain-based analysis. The work finally proves that when two patterns are incompatible, the most accurate pattern is selected [\(Zhang et al.,](#page-10-0) [2021\)](#page-10-0). Convergence for the "XOR" operator has also recently been proven by [Jiao et al.](#page-9-0) [\(2021\)](#page-9-0).
+
+ <sup>\*</sup>Equal contribution (The authors are ordered alphabetically by last name.) <sup>1</sup>Department of Information and Communication Technology, University of Agder, Grimstad, Norway. Correspondence to: Ole-Christoffer Granmo <ole.granmo@uia.no>.
10
+
11
+ **Paper Contributions** In all of the above mentioned TM schemes, the clauses are learnt using Tsetlin automaton (TA)-teams [\(Tsetlin,](#page-10-0) [1961\)](#page-10-0) that interact to build and integrate conjunctive clauses for decision-making. While producing accurate learning, this interaction creates a bottleneck that hinders parallelization. That is, the clauses must be evaluated and compared before feedback can be provided to the TAs.
12
+
13
+ In this paper, we first cover the basics of TMs in Section 2. Then, we propose a novel parallel and asynchronous architecture in Section [3,](#page-3-0) where every clause runs in its own thread for massive parallelism. We eliminate the above interaction bottleneck by introducing local voting tallies that keep track of the clause outputs, per training example. The local voting tallies detach the processing of each clause from the rest of the clauses, supporting decentralized learning. Thus, rather than processing training examples one-by-one as in the original TM, the clauses access the training examples simultaneously, updating themselves and the local voting tallies in parallel. In Section [4,](#page-4-0) we investigate the properties of the new architecture empirically on regression, novelty detection, semantic relation analysis and word sense disambiguation. We show that our decentralized TM architecture copes well with working on outdated data, with no measurable loss in learning accuracy. We further
14
+
15
+ investigate how processing time scales with the number of clauses, uncovering almost constant-time processing over reasonable clause amounts. Finally, in Section [5,](#page-8-0) we conclude with pointers to future work, including architectures for grid-computing and heterogeneous systems spanning the cloud and the edge.
16
+
17
+ The main contributions of the proposed architecture can be summarized as follows:
18
+
19
+ - Learning time is made almost *constant* for reasonable clause amounts (employing from 20 to 7,000 clauses on a Tesla V100 GPU).
20
+ - For sufficiently large clause numbers, computation time increases approximately proportionally to the increase in number of clauses.
21
+ - The architecture copes remarkably well with working on outdated data, resulting in no significant loss in learning accuracy across diverse learning tasks (regression, novelty detection, semantic relation analysis, and word sense disambiguation).
22
+
23
+ Our parallel and asynchronous architecture thus allows processing of more massive data sets and operating with more clauses for higher accuracy, significantly increasing the impact of logic-based machine learning.
24
+
25
+ A TM takes a vector $X = [x_1, \ldots, x_o]$ of $o$ Boolean features as input, to be classified into one of two classes, $y = 0$ or $y = 1$. These features are then converted into a set of literals that consists of the features themselves as well as their negated counterparts: $L = \{x_1, \ldots, x_o, \neg x_1, \ldots, \neg x_o\}$.
26
+
27
+ If there are $m$ classes and $n$ sub-patterns per class, a TM employs $m \times n$ conjunctive clauses to represent the sub-patterns. For a given class<sup>1</sup>, we index its clauses by $j$, $1 \leq j \leq n$, each clause being a conjunction of literals:
28
+
29
+ $$C_j(X) = \bigwedge_{l_k \in L_j} l_k. \tag{1}$$
30
+
31
+ Here, $l_k$, $1 \leq k \leq 2o$, is a feature or its negation. Further, $L_j$ is a subset of the literal set $L$. For example, the particular clause $C_j(X) = x_1 \land \neg x_2$ consists of the literals $L_j = \{x_1, \neg x_2\}$ and outputs 1 if $x_1 = 1$ and $x_2 = 0$.
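A clause of this form can be evaluated directly. The index-set encoding below is one possible representation for illustration, not necessarily the paper's implementation; literal $k$ for $k < o$ is $x_{k+1}$, and literal $o + k$ is its negation:

```python
def clause(X, include):
    """Evaluate a conjunctive clause over the literal list
    L = [x_1, ..., x_o, not x_1, ..., not x_o]; `include` holds the
    0-based indices of the included literals."""
    literals = list(X) + [1 - x for x in X]
    return int(all(literals[k] == 1 for k in include))

# C_j(X) = x_1 AND NOT x_2 with o = 2:
# include literal 0 (x_1) and literal 3 (not x_2).
out = clause([1, 0], include=[0, 3])
```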
32
+
33
+ The number of clauses $n$ assigned to each class is user-configurable. The clauses with odd indexes are assigned positive polarity and the clauses with even indexes are assigned
34
+
35
+ <sup>1</sup>Without loss of generality, we consider only one of the classes, thus simplifying notation. Any TM class is modelled and processed in the same way.
36
+
37
+ <span id="page-2-0"></span>negative polarity. The clause outputs are combined into a classification decision through summation and thresholding using the unit step function $u(v) = 1$ if $v \ge 0$ else $0$:
38
+
39
+ $$\hat{y} = u \left( \sum_{j=1,3,\dots}^{n-1} C_j(X) - \sum_{j=2,4,\dots}^{n} C_j(X) \right).$$
40
+ (2)
41
+
42
+ Namely, classification is performed based on a majority vote, with the positive clauses voting for y=1 and the negative for y=0.
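A sketch of this majority vote, using 0-based list positions so that even positions correspond to the 1-based odd (positive-polarity) clauses:

```python
def predict(clause_outputs):
    """Unit-step majority vote of Eq. (2): positive-polarity clauses add
    their output to the vote sum v, negative-polarity clauses subtract."""
    v = sum(c if j % 2 == 0 else -c for j, c in enumerate(clause_outputs))
    return 1 if v >= 0 else 0

# Two positive clauses fire, no negative clause fires -> predict y = 1.
y_hat = predict([1, 0, 1, 0])
```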
43
+
44
+ ![](_page_2_Figure_4.jpeg)
45
+
46
+ Figure 1. TM learning dynamics for an XOR-gate training sample, with input $(x_1 = 0, x_2 = 1)$ and output target y = 1.
47
+
48
+ TM learning is illustrated in Fig. 1. As shown, a clause $C_j(X)$ is composed by a team of TAs. Each TA has 2N states and decides to Include (from state 1 to N) or Exclude (from state N+1 to 2N) a specific literal $l_k$ in the clause. In the figure, TA refers to the TAs that control the original form of a feature $(x_1 \text{ and } x_2)$ while TA' refers to those controlling negated features $(\neg x_1 \text{ and } \neg x_2)$ . A TA updates its state based on the feedback it receives in the form of Reward, Inaction, and Penalty (illustrated by the features moving in a given direction in the TA-part of the figure). There are two types of feedback associated with TM learning: Type I feedback and Type II feedback, which are shown in Table 1 and Table 2, respectively.
49
+
50
+ **Type I feedback** is given stochastically to clauses with odd indexes when y=1 and to clauses with even indexes when y=0. Each clause, in turn, reinforces its TAs based on: (1) its output $C_j(X)$ ; (2) the action of the TA – *Include* or *Exclude*; and (3) the value of the literal $l_k$ assigned to the TA. As shown in Table 1, two rules govern Type I feedback:
51
+
52
+ • *Include* is rewarded and *Exclude* is penalized with probability $\frac{s-1}{s}$ if $C_j(X)=1$ and $l_k=1$ . This reinforcement is strong<sup>2</sup> (triggered with high probability) and
53
+
54
+ | | | Clause 1, Literal 1 | Clause 1, Literal 0 | Clause 0, Literal 1 | Clause 0, Literal 0 |
+ |---|---|---|---|---|---|
+ | Include Literal | P(Reward) | $\frac{s-1}{s}$ | NA | 0 | 0 |
+ | | P(Inaction) | $\frac{1}{s}$ | NA | $\frac{s-1}{s}$ | $\frac{s-1}{s}$ |
+ | | P(Penalty) | 0 | NA | $\frac{1}{s}$ | $\frac{1}{s}$ |
+ | Exclude Literal | P(Reward) | 0 | $\frac{1}{s}$ | $\frac{1}{s}$ | $\frac{1}{s}$ |
+ | | P(Inaction) | $\frac{1}{s}$ | $\frac{s-1}{s}$ | $\frac{s-1}{s}$ | $\frac{s-1}{s}$ |
+ | | P(Penalty) | $\frac{s-1}{s}$ | 0 | 0 | 0 |
+
+ Table 1. Type I Feedback
65
+
66
+ | | | Clause 1, Literal 1 | Clause 1, Literal 0 | Clause 0, Literal 1 | Clause 0, Literal 0 |
+ |---|---|---|---|---|---|
+ | Include Literal | P(Reward) | 0 | NA | 0 | 0 |
+ | | P(Inaction) | 1.0 | NA | 1.0 | 1.0 |
+ | | P(Penalty) | 0 | NA | 0 | 0 |
+ | Exclude Literal | P(Reward) | 0 | 0 | 0 | 0 |
+ | | P(Inaction) | 1.0 | 0 | 1.0 | 1.0 |
+ | | P(Penalty) | 0 | 1.0 | 0 | 0 |
+
+ Table 2. Type II Feedback
77
+
78
+ makes the clause remember and refine the pattern it recognizes in X.
79
+
80
+ • *Include* is penalized and *Exclude* is rewarded with probability $\frac{1}{s}$ if $C_j(X) = 0$ or $l_k = 0$ . This reinforcement is weak (triggered with low probability) and coarsens infrequent patterns, making them frequent.
81
+
82
+ Above, parameter s controls pattern frequency.
83
+
84
+ **Type II feedback** is given stochastically to clauses with odd indexes when y=0 and to clauses with even indexes when y=1. As captured by Table 2, it penalizes *Exclude* with probability 1 if $C_j(X)=1$ and $l_k=0$ . Thus, this feedback produces literals for discriminating between y=0 and y=1.
85
+
86
+ The "state" is realized as a simple counter per Tsetlin Automaton. In practice, reinforcing Include (penalizing Exclude or rewarding Include) is done by increasing the counter, while reinforcing exclude (penalizing Include or rewarding Exclude) is performed by decreasing the counter.
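A minimal counter-based TA following the counter convention of this paragraph (higher counter values favour Include); the state bounds and starting state are illustrative:

```python
class TA:
    """Tsetlin automaton as a bounded counter with 2N states. Increasing
    the counter reinforces Include; decreasing it reinforces Exclude."""
    def __init__(self, N):
        self.N = N
        self.state = N  # start at the boundary between the two actions

    def action(self):
        return "include" if self.state > self.N else "exclude"

    def reinforce_include(self):  # penalise Exclude / reward Include
        self.state = min(2 * self.N, self.state + 1)

    def reinforce_exclude(self):  # penalise Include / reward Exclude
        self.state = max(1, self.state - 1)

ta = TA(N=3)
ta.reinforce_include()  # one reinforcement crosses the decision boundary
```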
87
+
88
+ As an example of learning, let us consider a dataset with XOR-gate sub-patterns. In particular, consider the input $(x_1 = 0, x_2 = 1)$ and target output y = 1 to visualize the learning process (Fig. 1). We further assume that we have n = 4 clauses per class. Among the 4 clauses, the clauses $C_1$ and $C_3$ vote for y = 1 and the clauses $C_0$ and $C_2$ vote for y = 0. For clarity, let us only consider how $C_1$ and $C_3$ learn a sub-pattern from the given sample of XOR-gate input and output. At Step 1 in the figure, the clauses have not yet learnt the pattern for the given sample. This leads to the wrong class prediction $(\hat{y} = 0)$ , thereby triggering Type I feedback
89
+
90
+ <sup>2</sup>Note that the probability $\frac{s-1}{s}$ is replaced by 1 when boosting true positives.
91
+
92
+ <span id="page-3-0"></span>for the corresponding literals. Looking up clause output $C_1(X) = 0$ and literal value $x_1 = 0$ in Table 1, we note that the TA controlling $x_1$ receives either Inaction or Penalty feedback for including $x_1$ in $C_1$, with probability $\frac{s-1}{s}$ and $\frac{1}{s}$, respectively. After receiving several penalties, with high probability, the TA changes its state to selecting Exclude. Accordingly, literal $x_1$ gets removed from the clause $C_1$. After that, the TA that has excluded literal $\neg x_1$ from $C_1$ also obtains penalties, and eventually switches to the Include side of its state space. The combined outcome of these updates is shown in Step t for $C_1$. Similarly, the TA that has included literal $\neg x_2$ in clause $C_1$ receives Inaction or Penalty feedback with probability $\frac{s-1}{s}$ and $\frac{1}{s}$, respectively. After obtaining multiple penalties, with high probability, $\neg x_2$ becomes excluded from $C_1$. Simultaneously, the TA that controls $x_2$ ends up in the *Include* state, as also shown in Step t. At this point, both clauses $C_1$ and $C_3$ output 1 for the given input, correctly predicting the output $\hat{y} = 1$.
93
+
94
+ **Resource allocation** dynamics ensure that clauses distribute themselves across the frequent patterns, rather than missing some and over-concentrating on others. That is, for any input X, the probability of reinforcing a clause gradually drops to zero as the clause output sum
95
+
96
+ $$v = \sum_{j=1,3,\dots}^{n-1} C_j(X) - \sum_{j=2,4,\dots}^{n} C_j(X)$$
97
+ (3)
98
+
99
+ approaches a user-set margin T for y=1 (and -T for y=0). If a clause is not reinforced, it does not give feedback to its TAs, and these are thus left unchanged. In the extreme, when the voting sum v equals or exceeds the margin T (the TM has successfully recognized the input X), no clauses are reinforced. They are then free to learn new patterns, naturally balancing the pattern representation resources (Granmo, 2018).
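The reinforcement probability implied here can be sketched directly; it is the same clipped error $e/(2T)$ that later appears in the decentralized update algorithm:

```python
def update_probability(v, y, T):
    """Probability of reinforcing a clause: decays linearly to zero as the
    vote sum v approaches the margin T for y=1 (or -T for y=0)."""
    v_c = max(-T, min(T, v))          # clip(v, -T, T)
    e = T - v_c if y == 1 else T + v_c
    return e / (2 * T)

# Halfway to the margin: clauses are updated with probability 1/2.
p = update_probability(v=0, y=1, T=4)
```

At $v = T$ for $y = 1$ the probability reaches zero, freeing those clauses to learn other patterns.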
100
+
101
+ # Method
102
+
103
+ Even though CPUs have been traditionally geared to handle high workloads, they are more suited for sequential processing and their performance is still dependent on the limited number of cores available. In contrast, since GPUs are primarily designed for graphical applications by employing many small processing elements, they offer a large degree of parallelism (Owens et al., 2007). As a result, a growing body of research has been focused on performing general purpose GPU computation or GPGPU. For efficient use of GPU power, it is critical for the algorithm to expose a large amount of fine-grained parallelism (Jiang & Snir, 2005; Satish et al., 2009).
104
+
105
+ While the voting step of the TM (Eq. 3) hinders parallelization, the remainder of the TM architecture is natively parallel. In this section, we introduce our decentralized inference scheme and the accompanying architecture that makes it possible to have parallel asynchronous learning
106
+
107
+ and classification, resolving the voting bottleneck by using local voting tallies.
108
+
109
+ A voting tally that tracks the aggregated output of the clauses for each training example is central to our scheme. In a standard TM, each training example $(X_i,y_i), 1 \leq i \leq q$ , is processed by first evaluating the clauses on $X_i$ and then obtaining the majority vote v from Eq. (3). Here, q is the total number of examples. The majority vote v is then compared with the summation margin T when y=1 and -T when y=0, to produce the feedback to the TAs of each clause, explained in the previous section.
110
+
111
+ ![](_page_3_Figure_11.jpeg)
112
+
113
+ Figure 2. Parallel Tsetlin machine architecture.
114
+
115
+ As illustrated in Fig. 2, to decouple the clauses, we now assume that the particular majority vote of example $X_i$ has been pre-calculated, meaning that each training example becomes a triple $(X_i, y_i, v_i)$, where $v_i$ is the pre-calculated majority vote. With $v_i$ in place, the calculation performed in Eq. (3) can be skipped, and we can go directly to giving Type I or Type II feedback to any clause $C_j$, without considering the other clauses. This opens the door to decentralized learning of the clauses, facilitating native parallelization at all inference and learning steps. The drawback of this scheme, however, is that any time the composition of a clause changes after receiving feedback, all voting aggregates $v_i, 1 \leq i \leq q$, become outdated. Accordingly, the standard learning scheme for updating clauses must be replaced.
116
+
117
+ Our decentralized learning scheme is captured by Algorithm 1. As shown, each clause is trained independently of the other clauses. That is, each clause proceeds with training without taking other clauses into consideration. Algorithm 1 thus supports native parallelization because each clause now can run independently in its own thread.
118
+
119
+ ```
120
+ Input: Example pool P, clause C_j, positive polarity
121
+ indicator p_j \in \{0,1\}, batch size b \in [1,\infty), voting
122
+ margin T \in [1, \infty), pattern specificity s \in [1, \infty).
123
+ Procedure: UpdateClause : C_j, p_j, P, b, T, s.
124
+ for i = 1 to b do
125
+ (X_i, y_i, v_i) \leftarrow \text{ObtainTrainingExample}(P)
126
+ v_i^c \leftarrow \mathbf{clip}\left(v_i, -T, T\right)
127
+ e = T - v_i^c if y_i = 1 else T + v_i^c
128
+ if rand() \leq \frac{e}{2T} then
129
+ if y_i xor p_j then
+ C_j \leftarrow \text{TypeIIFeedback}(X_i, C_j)
+ else
+ C_j \leftarrow \text{TypeIFeedback}(X_i, C_j, s)
132
+ end if
133
+ o_{ij} \leftarrow C_j(X_i)
134
+ o_{ij}^* \leftarrow \text{ObtainPreviousClauseOutput}(i, j)
135
+ if o_{ij} \neq o_{ij}^* then
136
+ AtomicAdd(v_i, o_{ij} - o_{ij}^*)
137
+ StorePreviousClauseOutput(i, j, o_{ij})
138
+ end if
139
+ end if
140
+ end for
141
+ ```
142
+
143
+ Notice further how the clause in focus first obtains a reference to the next training example $(X_i, y_i, v_i)$ to process, including the pre-recorded voting sum $v_i$ (Line 3). This example is retrieved from an example pool P, which is the storage of the training examples (centralized or decentralized).
144
+
145
+ The error of the pre-recorded voting sum $v_i$ is then calculated based on the voting margin T (Line 5). The error, in turn, decides the probability of updating the clause. The updating rule is the standard Type I and Type II TM feedback, governed by the polarity $p_j$ of the clause and the specificity hyper-parameter s (Lines 6-11).
146
+
147
+ The moment clause $C_j$ is updated, all recorded voting sums in the example pool P are potentially outdated. This is because $C_j$ now captures a different pattern. Thus, to keep all of the voting sums $v_i$ in P consistent with $C_j$ , $C_j$ should ideally have been re-evaluated on all of the examples in P.
148
+
149
+ To partially remedy outdated voting aggregates, the clause only updates the *current* voting sum $v_i$. This happens when the calculated clause output $o_{ij}$ is different from the previously calculated clause output $o_{ij}^*$ (Lines 12-17). Note that the previously recorded output $o_{ij}^*$ is a single bit that is stored locally together with the clause. In this manner, the algorithm provides *eventual consistency*. That is, if the clauses stop changing, all the voting sums eventually become correct.
150
+
151
+ A point to note here is that there is no guarantee that the clause outputs in the parallel version will sum up to the same number as in the sequential version. The reason is that the updating of clauses is asynchronous, and the vote sums are not updated immediately when a clause changes. We use the term *eventual consistency* to refer to the fact that if the clauses stop changing, eventually, the tallied voting sums become the exact sum of the clause outputs. Although not analytically proven, experimental results show that the two versions provide consistent final accuracy results after clause summation and thresholding.
152
+
153
+ Employing the above algorithm, the clauses access the training examples simultaneously, updating themselves and the local voting tallies in parallel. There is no synchronization among the clause threads, apart from atomic adds to the local voting tallies (Line 15).
2104.08801/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="math.draw.io" modified="2021-05-07T04:38:28.473Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36" etag="1diHJlWZ4JWaJmLRBeFa" version="14.6.1"><diagram id="polM8Ss2yI4jhA6O7i59" name="Page-1">7V1bc6s4Ev41rtp9OJSugB5PnJmzU7Wzk5zs1s48bRGbxNRxIAPkNr9+hbkYCYGxLQGxnTo1YwQ0oO7+1N1qtWZ4/vT+LfaeV79GS389Q2D5PsPXM4QYsfl/s4aPvIFCkDc8xsEyb4LbhrvgL79oLC97CZZ+IlyYRtE6DZ7FxkUUhv4iFdq8OI7exMseorX41Gfv0W803C28dbP1v8EyXeWtLnK27f/wg8dV+WRos/zMk1deXJBIVt4yesubNh+Hf5rheRxFaf7r6X3ur7O+K/sl74GfW85WLxb7YdrnBvf6l9VqPr+6nr/+9hb6V8Cj118QxjmdV2/9Unxy8brpR9kHcfQSLv2MDJjhq7dVkPp3z94iO/vGmc7bVunTmh9B/jNJ4+hH1VdZS/EAP07999Z3h1WPcEnyoyc/jT/4JcUNNi3EoZCi4lPB25YllcSsauwguGj0CjF4rEhve4r/KDprj46DNjLecQ/Bej2P1lG8oYYf+J/rVlfWzmAbM7zU1NWOK3Q1hLDR15Aq+poa62pX0dMk+0fn6cpPvf99K46rVv89nTlZ/6V+nGTo46XeDNlr/qpX9zH/9Zj9mjnX5V0S4/zlo/9P755DmcAibx08hvz3gvelz/v+KuvpgIPF1+LEU7BcZjSuYj8J/vLuN/Qy3j9HQZhuOoZezeh1F6sKJCturtR6DxYiQiwbO4DzySWOQ5HIUAAsyhB2XBc4Nj+L2/lWPOIme/sthS9QvCN6eEj8tMHn6sWOYH0L529a+LZDxbzkOR8jHoL3TC1lDVss/Xv3vlPDRC1FenQOUmTZEpcQ51IT5FhT78o27XrXA+D8cPk1G2QznVh7SRIsxA5fvMSvG/zLDvjFPwfr8lROK1O0fTut1h9dOBT7ay8NXkXyPYS85Inb5AmlJU9KOkn0Ei/84tb64CtRw6APtdSLH/20QY13sfdRu6wAk9ZXR6T5MFY+bCsROVmt+qoyJgpMXnkZJN+2Y+6xurvw6cPD8LqLmGtBPDXdVQ2Z7bp7v44WP3ppLvikmgsP1lyEG9SYMcVtosQweotUA21uLj1EmzfcCo7950tUnviSbBy2r/wCCJ/ftycrE6thqIGtfZa5d37ox17qZ0baBhoaNhqyvacMCsL7JPufEknym/iX5y9b3ikJ/OapCpO7hIowCn0JV4qm/nafCsREL0CHueAyC1JRwAlVQA4ErsVcBeyYMtVJD2/yZHFHNcYfgTtNm9Ac7jQRcxjcIaTdYMgR4/vFtaurPoXQArDdtYOW7dZcO7u3DA/u2hHawvrb07IPCRJDVxBB7n6D7R9uwPaglmIZoD1TN49QbquLWoQc1wIKDu2L4NTZm7QmPKew8WQMlU82i+5OO7rnRtyNOXdwpFAOdWCD6QQrmT6axp+zc0gIE+EY2ko43lfZCRPpsm6ymhSd2H2ealbJmUKaDLiO32XXMeYvHvivB7iON+frOhLqWIDLavUnupHIVo9PY/qUpeKfJVxR2wxcUTgGXFEyOlzZbW7HqU0pYSjaf7ZiynxIq8NW+RmnPjcwOSacoUWOMbHo1PigMpqOcbqnPo5Bm4gccKTQVO+wKGLdhDQNVZBJL8wGCIc6KlPnMg0zUVsaEmIBKcrCmsAyuMHsoGGk6OKRabERCHe5YM0jE9Pv+Og1AYlqm6ZJgqdg7cVB+vE3bj/PBX7+vW0CpumrJRy00w4Hzn8P0t+zLrd
ocfRH7cz1e8GNzcFHeRDyL6/dlB1Wd2UH29s2R+V9IoNFb3Hzngd4j/z7N8NbD78kH752mnDjzXGKsxkUSiR6J0RJGb3ElgjpGsfL+Y3yORh0v5fkokrXmxn3Idhv3rxAMpWG1PQDdOrH5C1GRCzsbmMDRIxU2QdPrOeERWJ2z4l1bQxHKktPndOs1wvrkUTeTETfYqKufH5XSjJHI+fLQWUGlDoT4RQZ4kyPIabmFeZRmARJ6oeL7PvzRJIgfOxreOpe07Hgf53c16Fu0poORMDOML8K2s1xG7fh4Z0pIPQ3f2PrnQyEuOzi0fQOtwHhnSkEnAYnZAQcnxN7ZuYc50x1G4tbZ6ruSm2dJ7UrNZjzVMHHTu+pEO+xjFoHSUIGHJFEXyPWkRGdSoQ0uU8OEJc2oGIGwaw7dGCGyk7JbwkJ7JTjQmPgHvpykfwdki/jZm/JlwNkMiFNku+OI/lNq/fKW/z4ksZeEG6sVFkNJp7O3KXgrWPxF2ARB5LjBG+A1anK1BQdPsqdv36o8fzMQuI2lQY3xawKUbknxlagU5VRfGH0sYx2gczo5mTHwIxGB1keOowMwWQ4tPRDH6Nit6lQq08yhqkgjbzswBw32VJw+yW16RsdVEF9ffOwN2c6YWq7ong4zeDVwJihmh0dATP6IsBYei1b1IfqtQuQJa1+RQNP31BVFqk+1b49U9V2Jqfah4XgBNWuq2hN0aFd13Tu8QDs7lR3fnTjxwH/uoxbewYTeoz7bErjfqVlew/8suuOBx75lemdmUaLWTK3XVkyn9Kvz/WlKUAfAv0j5URaUCtrv0l/X5U+rQcOXC6jEh4AfN54INsLB+MBJ2QByWJgWQejagK0rDE3FEIocjS/F9mTpwIFLXlqtRAfdaXFTceJzRAAcPTS2Q57wJEBwEEmAWAqwzymku7tMczLao0ZFtQaDTzuH71U8SIeMuofLB4K1B9bPEwFhO7yaUIEltGTF4Tn5zdiOXut6Thi2hTZrtKnR1YH7hET0p9JxliPELHWTDLMRq8O3Lqi13wq7c4eHyGDjICxMze7VvoaTqWdAkPkRLIJMETltuoYd/6de5BnPO5gidV2g9XDBiyVK0g3kafb3uuzJu5u7mCJY1uEMZdhiBhwmSMwCEGHG4AuQ67LCKHlKL1HPQ6B3ICF0B2zk4mXlZi84yESrcgSN8fT5qYV+a1YqX1qasudNebaxOHKSYEtlSWCWdk8AKhLuBlPHFoyykCPm53Yuyyb56yG4orMCahZR1WUSxXaljXrBGyr0EIpjWtTa/KT7DDiqEK6Qn6NXidlvD1GbAvX13yJJXFQGUIZzUtx2iZNT7hAFAbQYhPjg3vO9fagu0NLDsxZQjijK6GkmSJ7CHR/Ahygyh50TSVMXyoZmahkxJDFUE1kREcI2c1w+uA1aKCLzhiXMGjix8FYRIfDIrf1UYbxp2MnqsuegcqNJZBFW/cM5MJmwU+ysQR0VXNvQlrtaRiPBE2+1Dx0OybnTrXg6CfYAoCbuWc8mKr2/MCa9vwATdLqnTe0bwWg2G1khD0/oHt0itwnlizVli+6JAuNJVlUoS5jSBZT2XTtkvX5iyZXfleZbOMcWmwR7yCky9xn0nOKddOG5aIt38SUvTdWIXMmRnJGr2IO2Zjl5MeyuqfHhbYZjCEC52PZ2Gx6Rf1huQXU2QxQpBSoalw5sKo/kVaCmKrqT6j0wkNU9UdANaNyKcg+0WA4AdMs64/KiZzLnMpnECPK5UMWoynU8kflS3UV87/tvUz5Usy/yyXZvcSWtVRCG2p7OU3l/G04TDn/KsO6fM6Ocv42YF3Xmxrwzy1SUaV4HRupoHIEwVCkgtIRIhUI9FgVdsJ7be9KrnEODI2qJsdlWroiXKTxKDzEhDYCqom08xGdxrbNVNM+qPL+qjvo6oIfaViCI+yEisDOHTlPcW5WtFNgNoc+oYlZBEyt1bvECoxsp00tyZKARI0go/p8cL/Ro7m/VHP
dbOWSwXqlok3VQguTqljFKZWmwBJ2MIlfvZ0lAi3Hre3CLpO1IOgX/9Q2FsD9KltdBKSPgGBoREA42eEFBO2XMNRHQKbOTQjFVYgyPPfecE6iQ+RaGMZ5t19KzkW5+yi3Y0S3nTFUe79py4t49BGPhpLrkY8sdXFwAcHo7LCfc0EL9st0Bsf+csrpjHgnaaLNDmMdamjewJzT77NNnnOOHotLpjM87/RbXFPnHZTKOhyMmHIkf2jWqawhvQG4Kub2S/j8kub3/MtLX2JvveFokniPWTXHmXN9w6/P/xV3/faSbu+JgiRj8J8vfpIGUZjfckm+yML4UpnCZl0uiKsdxIeJwZU1TQYQLClQm32T2KQUPFGMbvtIniCrl4Bwll8wQcFT5Y0d6d+VOTy1DB7umEDSncXDD2S3TvAUxc1vSnrmKuDvTOSp1ljvTOQprhxr9HWZaPTCQ0OPLpXCHP3yL7QNv0Tla5pAyf+EycuzH78GCdd/BK6LCpbAW3rPqZfjYMfi78+Ea0svWWktOSvJiALkEFTIukGIU3m5JqQm268xH+qEsXERhUmQpH64+PikMqJBLJDl0Nq8JyiZUkESnMBYaLy+XyER8zhKkoukqCXFsSASJEVy+zGxmjWTh5cV1ZSXEVn5WHAyF1k5RFYwNSsr/DCOMu5trRz+oatfo2VW6OWn/wM=</diagram></mxfile>
2104.08801/main_diagram/main_diagram.pdf ADDED
Binary file (69.6 kB). View file
 
2104.08801/paper_text/intro_method.md ADDED
@@ -0,0 +1,41 @@
+ # Introduction
+
+ In this section, we describe the source and target domain datasets, models for question generation and passage retrieval, and the evaluation metrics.
+
+ :::: table*
+ ::: tabularx
+ [Table layout lost in extraction: taxonomy of question classes and their distribution in **NaturalQuestions** and **MLQuestions**.]
+ :::
+ ::::
+
+ We use the NaturalQuestions dataset [@kwiatkowski2019natural] as our source domain. NaturalQuestions is an open-domain question answering dataset containing questions from Google search engine queries paired with answers from Wikipedia. We use the long form of the answer, which corresponds to passages (paragraphs) of Wikipedia articles. It is the largest dataset available for open-domain QA, comprising 300K training examples, each consisting of a question paired with a Wikipedia passage. We label 200 random questions of NaturalQuestions and annotate them into 5 different classes based on the nature of the question, following @nielsen2008taxonomy. [\[tab:taxonomy-table\]](#tab:taxonomy-table){reference-type="ref+Label" reference="tab:taxonomy-table"} shows these classes and their distribution. As seen, 86% of them are descriptive questions starting with *what, who, when* and *where*. Refer to [8.2](#sec:nq-preprocess){reference-type="ref+Label" reference="sec:nq-preprocess"} for details on dataset pre-processing and [8.4](#sec:taxonomy){reference-type="ref+Label" reference="sec:taxonomy"} for a detailed taxonomy description.
+
+ Our first target domain of interest is machine learning. There is no large supervised QA dataset for this domain, and it is expensive to create one since it requires domain experts. However, it is relatively cheap to collect a large number of ML articles and questions. We collect ML concepts and passages from the Wikipedia machine learning page[^1] and by recursively traversing its subcategories. We end up with 1.7K concepts, such as *Autoencoder* and *word2vec*, and 50K passages related to these concepts.
+
+ For question mining, we piggy-back on Google Suggest's *People also ask* feature to collect 104K questions, using the above machine learning concept terms as seed queries combined with question terms such as *what*, *why* and *how*. However, many questions could belong to the generic domain due to ambiguous terms such as *eager learning*. We employ three domain experts to annotate 1000 questions as in-domain or out-of-domain. Using this data, we train a classifier [@liu2019roberta] to filter out questions with in-domain probability less than 0.8. This resulted in 46K in-domain questions; the filter has 92% accuracy upon analysing 100 questions. Of these, we use 35K questions as unsupervised data. See [8.3](#ood_clf){reference-type="ref+label" reference="ood_clf"} for classifier training details and performance validation.
+
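The filtering step above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: a hypothetical keyword scorer `in_domain_prob` stands in for the fine-tuned RoBERTa classifier's in-domain probability, while the 0.8 threshold matches the text.

```python
# Sketch of the in-domain filtering step. The real pipeline scores each
# mined question with a fine-tuned RoBERTa classifier; the toy keyword
# scorer below is a hypothetical stand-in for that model.
def in_domain_prob(question: str) -> float:
    """Toy stand-in for the classifier's in-domain probability."""
    ml_terms = {"autoencoder", "word2vec", "gradient", "overfitting"}
    words = set(question.lower().replace("?", "").split())
    return 1.0 if words & ml_terms else 0.1

def filter_in_domain(questions, threshold=0.8):
    # Keep only questions whose in-domain probability reaches the threshold.
    return [q for q in questions if in_domain_prob(q) >= threshold]

questions = [
    "what is an autoencoder?",
    "what is eager learning in psychology?",
    "how does word2vec work?",
]
print(filter_in_domain(questions))
```

With a real classifier, only `in_domain_prob` changes; the thresholding logic stays the same.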
+ The remaining 11K questions are used to create supervised data for model evaluation. We use the Google search engine to find answer passages to these questions, resulting in around 11K passages. Among these, we select 3K question-passage pairs as the evaluation set for QG (50% validation and 50% test). For IR, we use the full 11K passages as candidate passages for the 3K questions. We call our dataset *MLQuestions*.
+
+ [\[tab:taxonomy-table\]](#tab:taxonomy-table){reference-type="ref+Label" reference="tab:taxonomy-table"} compares MLQuestions with NaturalQuestions. We note that MLQuestions has a higher diversity of question classes than NaturalQuestions, making the transfer setting challenging.
+
+ Our second domain of interest is biomedicine, for which we use the PubMedQA dataset [@jin2019pubmedqa]. Questions are extracted from PubMed abstract titles ending with a question mark, and passages are the conclusive part of the abstract. As unsupervised data, we utilize the PQA-U(nlabeled) subset containing 61.2K unaligned questions and passages. For supervised data, we use the PQA-L(abeled) subset of 1K question-passage pairs manually curated by domain experts. We use the same 50-50% dev-test split as [@jin2019pubmedqa] as the evaluation set for QG. For IR, in order to have the same number of candidate passages as MLQuestions, we combine 10K randomly sampled passages from PQA-U with the 1K PQA-L passages to get 11K candidate passages for the 1K questions.
+
+ We use BART [@lewis2020bart] to train a supervised QG model on NaturalQuestions. BART is a Transformer encoder-decoder model pretrained to reconstruct original text inputs from noisy text inputs. For QG, BART is further trained to learn a conditional language model $P_\mathcal{S}(q|p)$ that generates a question $q$ given a passage $p$ from the source domain. For experimental details, see [8.1](#sec:train-details){reference-type="ref" reference="sec:train-details"}.
+
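Schematically, the conditional model $P_\mathcal{S}(q|p)$ scores questions given a passage, and generation returns the most likely one. The sketch below is purely illustrative: a hypothetical score table stands in for the fine-tuned BART decoder's log-probabilities.

```python
# Schematic view of conditional QG, P_S(q | p): score candidate questions
# given a passage and pick the most likely one. The SCORES table is a
# hypothetical stand-in for BART's log-probabilities, for illustration only.
import math

SCORES = {
    ("passage about autoencoders", "what is an autoencoder?"): -0.2,
    ("passage about autoencoders", "who invented the telephone?"): -5.0,
}

def generate(passage, candidates):
    # Greedy "decoding": return the candidate with the highest log P(q | p).
    return max(candidates, key=lambda q: SCORES.get((passage, q), -math.inf))

qs = ["what is an autoencoder?", "who invented the telephone?"]
print(generate("passage about autoencoders", qs))
```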
+ We use the Dense Passage Retriever (DPR; @karpukhin2020dense) pretrained on NaturalQuestions. DPR encodes a question $q$ and a passage $p$ separately using a BERT bi-encoder and is trained to maximize the dot product (similarity) between the encodings $E_P(p)$ and $E_Q(q)$, while minimizing similarity with other closely related but negative passages. Essentially, DPR is a conditional classifier $P_S(p|q)$ that retrieves a relevant passage $p$ given a question $q$ from the source domain. For model training details, see [8.1](#sec:train-details){reference-type="ref" reference="sec:train-details"}.
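The bi-encoder scoring can be sketched as follows; toy vectors stand in for the BERT encodings $E_Q(q)$ and $E_P(p)$, and candidate passages are ranked by dot product as described above.

```python
# Minimal sketch of DPR-style retrieval: questions and passages are embedded
# separately and candidates are ranked by dot-product similarity. The toy
# vectors below stand in for the bi-encoder outputs E_Q(q) and E_P(p).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def retrieve(q_vec, passage_vecs, top_k=1):
    # Rank candidate passages by similarity to the question encoding.
    ranked = sorted(passage_vecs.items(),
                    key=lambda kv: dot(q_vec, kv[1]), reverse=True)
    return [pid for pid, _ in ranked[:top_k]]

E_Q = [0.9, 0.1, 0.0]       # encoding of the question
E_P = {
    "p1": [1.0, 0.0, 0.0],  # relevant passage
    "p2": [0.0, 1.0, 0.2],
    "p3": [0.1, 0.2, 1.0],
}
print(retrieve(E_Q, E_P))   # "p1" has the largest dot product (0.9)
```

In the actual system the 11K candidate passages would all be encoded once and scored against each question encoding in the same way.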
2105.04459/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2021-05-07T19:43:53.350Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" etag="29EVbiPe-ZrBsHP5h97w" version="14.6.11" type="google"><diagram id="R2lEEEUBdFMjLlhIrx00" name="Page-1">7Vttc5s4EP41nrn74AwCQ+yPdhJfO532cpd27vqpo4AMaoREhVzb/fUVIGyD5HOuMYYkzHhsWK3enl3ts5Jh4FzF6z84TKL3LEBkYFvBeuBcD2x74jjyOxNsCoHrXhaCkOOgEIGd4A7/QEpoKekSByitKArGiMBJVegzSpEvKjLIOVtV1RaMVHtNYKh6tHaCOx8SpKn9gwMRFdKxu6f9BuEwKnsGliqJYamsBGkEA7aqiNBazBkVaoi3iMeQIipkyXvIHxAfuDeRENlMpwN7Lj+LTPsiZCwkCCY4vfBZLMV+KlXmCxhjksG819BMNSS7c24GzhVnTBRX8foKkcxUpRmKMc0PlG5x4Fm7j6jw4Wv45fM1+vdb9DCdbv4e/sXfgCFQxv8OyVIBrMARmxJxzpY0QFkrYODMVhEW6C6Bfla6ki4mZZGIiSpWzSEu0PrgQMF2+tJLEYuR4Bupoiq4jrKG8lDbU/ernb23Fov2bG1fKiFUBgy3be9wkRcKGjNMNFgsYvd2E3jBu+jP+6+bjxEels5/Ypg0TAzIHYTJsR8Bk2WCyWoKJq+DKNWdaayjBMYGlIDTFEp2B1EaVVFyLANKngmlxnwJaCh9Gn5Aon2ovBpUtgEqU3RqzJ8cDamPK3YnUKJhJacoqoCkgrMHdMUI41JCGZWaswUmpCaCBIdU3voSOMleziwDDEtWnqqCGAdB1o3RAjsbWScywvi4v5oWtd2UDdzn4q0Gkjivt+oJxzVb0RTGCUFvabLUMXs5Xus9gouMiU1Txpg8E7cdtR5kgc5HLz7KjqyavxqCx6ghdzXuVkaaDU7hqU/arAD3OEbNZU5GlBrkoqdBdfmL4e8Ui9qIlL5feQ5r+mlGmBz316YyJ6MN9Hygm9561jzfiNS4e9HPHR1f0s2dQRhR0lOa9lGqcYRxd93YGYT53K+ZA62nwfSYnfV5qdSQ9XUjOrn1DfCo7egE7NdHpp513GPPSqZAPwvqqL+2nvsBfTPxnA4iTmsNEyE1dRBhtkZXNy11qEbtB9pXuGvx6v91GcLHWU8igJ6Mvz4jtH0cBBo8vjxt0Ji0HTTKzir5vkcyxwzwd1ngE5gWnXrfltkjCbP8QYztnVQRWChAS+FtrmKNK2o73EuhX7j1VBby8P43ORT5kfOw9q5+zy7z+tkjFUPVSFaFMh5DUi1eKcwM5QQJuViGqbQvpqFRJVuRQyw9gqomrGRdLRQc0nQha5X1KdoqrBgPqs1r1QPkMw4FZnS//g4jL1S//2kAAjdsKaYcwf9fV0K+jOkRuxRI50/4ZKMETjaPQz0ldYGEgB5utnz2ZZqb2JNZVV7VcUbyl+L4fplyFlM2IVqpPoZiOWextdK1Pu1pIpO3oc9ovnwLM5mRL7qk92myNyW5tIqmqzM9IG4QAOVMZbwfKibIrZQIfUKgvaHq2M67Mxj71XhMfcAcpRdFaN0tHmuxzJlClh0fv7bepSyPNr8ifakpkXb+adgdHDj/bIro9ROrnuh7ou+Jvpthuyf6nuhPRfSgJ/rGiF572Lp9otf/b+mJvif6nui7GbZ7ou+J/kRE3xP6CQgd1N8LapDQ5e3uNb+8bO/VTOfmJw==</diagram></mxfile>
2105.04459/main_diagram/main_diagram.pdf ADDED
Binary file (23.3 kB). View file
 
2105.04459/paper_text/intro_method.md ADDED
@@ -0,0 +1,192 @@
+ # Introduction
+
+ Learning maps between feature vectors or spaces is an important task. Feature vector maps are used to improve representation learning [@chen2020exploring], or to learn correspondences in natural language processing [@blitzer2006domain]. Maps between spaces are important for generative models when using normalizing flows [@kobyzev2020normalizing] (to map between a simple and a complex probability distribution), or to determine spatial correspondences between images, e.g., for optical flow [@horn1981determining] to determine motion from videos [@fortun2015optical], depth estimation from stereo images [@laga2020survey], or medical image registration [@sotiras2013deformable; @tustison2019learning].
+
+ Regular maps are typically desired; e.g., diffeomorphic maps for normalizing flows to properly map densities, or for medical image registration to map to an atlas space [@joshi2004unbiased]. Estimating such maps requires an appropriate choice of transformation model. This entails picking a parameterization, which can be simple and depend on few parameters (e.g., an affine transformation), or which can have millions of parameters for 3D nonparametric approaches [@holden2007review]. Regularity is achieved by 1) picking a simple transformation model with limited degrees of freedom, 2) regularization of the transformation parameters, or 3) implicitly through the data itself. Our goal is to demonstrate and understand how spatial regularity of a transformation can be achieved by encouraging *inverse consistency* of a map. Our motivating example is image registration/optical flow, but our results are applicable to other tasks where spatial transformations are sought.
+
+ Registration problems have traditionally been solved by numerical optimization [@modersitzki2004numerical] of a loss function balancing an image similarity measure and a regularizer. Here, the predominant paradigm is *pair-wise* image registration[^1], where many maps may yield good image similarities between a transformed moving and a fixed image; the regularizer is required for well-posedness to single out the most desirable map. Many different regularizers have been proposed [@holden2007review; @modersitzki2004numerical; @risser2011simultaneous] and many have multiple hyperparameters, making regularizer choice and tuning difficult in practice. Deep learning approaches to image registration and optical flow have moved to learning maps from *many image pairs*, which raises the question of whether explicit spatial regularization is still required, or whether it will emanate as a consequence of learning over many image pairs. For optical flow, encouraging results have been obtained without using a spatial regularizer [@dosovitskiy2015flownet; @ranjan2017optical], though more recent work has advocated for spatial regularization to avoid "vague flow boundaries and undesired artifacts" [@hui2018liteflownet; @hur2019iterative]. Interestingly, for medical image registration, where map regularity is often very important, almost all the existing work uses regularizers as initially proposed for pairwise image registration [@shen19; @yang2017quicksilver; @balakrishnan2019voxelmorph], with the notable exception of [@bhalodia19], where the deformation space is guided by an autoencoder instead.
+
+ Limited work explores if regularization for deep registration networks can be avoided entirely, or if weaker forms of regularization might be sufficient. To help investigate this question, we work with binary shapes (where regularization is particularly important due to the aperture effect [@horn1986robot]) and real images. We show that regularization is necessary, but that carefully encouraging *inverse consistency* of a map suffices to obtain approximate diffeomorphisms. The result is a simple, yet effective, nonparametric approach to obtain well-behaved maps, which requires only limited tuning. In particular, the often highly challenging process of selecting a spatial regularizer in practice is eliminated.
+
+ Our contributions are as follows: (1) We show that *approximate* inverse consistency, combined with off-grid interpolation, results in approximate diffeomorphisms when using a deep registration model trained on large datasets. Foregoing regularization entirely is insufficient; (2) Bottleneck layers are not required and many network architectures are suitable; (3) Affine preregistration is not required; (4) We propose randomly sampled evaluations to avoid transformation flips in texture-less areas and an inverse consistency loss with beneficial boundary effects; (5) We present good results of our approach on synthetic data, MNIST, and a 3D magnetic resonance knee dataset.
+
+ Image registration is typically based on solving optimization problems of the form $$\begin{equation}
+ \theta^* = \underset{\theta}{\text{argmin}}~\mathcal{L}_{\text{sim}}(I^A\circ \Phi^{-1}_\theta,I^B) + \lambda\mathcal{L}_{\text{reg}}(\theta)\enspace,\label{eq:basic_reg}
+ \end{equation}$$ where $I^A$ and $I^B$ are moving and fixed images, $\mathcal{L}_{\text{sim}}(\cdot,\cdot)$ is the similarity measure, $\mathcal{L}_{\text{reg}}(\cdot)$ is a regularizer, $\theta$ are the transformation parameters, $\Phi_\theta$ is the transformation map, and $\lambda\geq 0$. We consider images as functions from $\mathbb{R}^N$ to $\mathbb{R}$ and maps as functions from $\mathbb{R}^N$ to $\mathbb{R}^N$. We write $\|f\|_p$ for the $L^p$ norm of a scalar or vector-valued function $f$.
+
+ Maps, $\Phi_\theta$, can be parameterized using few parameters (e.g., affine, B-spline [@holden2007review]) or nonparametrically with continuous vector fields [@modersitzki2004numerical]. In the nonparametric case, parameterizations are infinite-dimensional (as one deals with function spaces) and represent displacement, velocity, or momentum fields [@balakrishnan2019voxelmorph; @shen19; @yang2017quicksilver; @modersitzki2004numerical]. Solutions to Eq. [\[eq:basic_reg\]](#eq:basic_reg){reference-type="eqref" reference="eq:basic_reg"} are classically obtained via numerical optimization [@modersitzki2004numerical]. Recent deep registration networks are conceptually similar, but *predict* $\tilde{\theta}^*$, i.e., an estimate of the true minimizer $\theta^*$.
+
+ There are three interesting observations: *First*, for transformation models with few parameters (e.g., affine), regularization is often not used (i.e., $\lambda=0$). *Second*, while deep learning (DL) models minimize losses similar to Eq. [\[eq:basic_reg\]](#eq:basic_reg){reference-type="eqref" reference="eq:basic_reg"}, the parameterization is different: it is over network *weights*, resulting in a predicted $\tilde{\theta}^*$ instead of optimizing over $\theta$ directly. *Third*, DL models are trained over *large collections of image pairs* instead of a single $(I^A,I^B)$ pair. This raises the following questions: **Q1**) Is explicit spatial regularization necessary, or can we avoid it for nonparametric registration models? **Q2**) Is using a *single* neural network parameterization to predict *all* $\theta^*$ beneficial? For instance, will it result in simple solutions as witnessed for deep networks on other tasks [@shah2020pitfalls], or capture meaningful deformation spaces as observed in [@yang2017quicksilver]? **Q3**) Does a deep network parameterization itself result in regular solutions, even if only applied to a single image pair, as such effects have, e.g., been observed for structural optimization [@hoyer2019neural]?
+
+ Regularization typically encourages spatial smoothness by penalizing derivatives (or smoothing in the dual space). Commonly, one uses a Sobolev norm or total variation. Ideally, one would like a regularizer adapted to the deformations one expects to see, as it encodes a prior on expected deformations (e.g., as in [@niethammer2019metric]). In consequence, picking and tuning a regularizer is cumbersome and often involves many hyperparameters. While avoiding explicit regularization has been explored for deep registration / optical flow networks [@dosovitskiy2015flownet; @ranjan2017optical], there is evidence that regularization is beneficial [@hui2018liteflownet].
+
+ *Our key idea is to avoid complex spatial regularization and to instead obtain approximate diffeomorphisms by encouraging inverse consistent maps.*
+
+ Assume we eliminate regularization ($\lambda=0$) and use the $p$-th power of the $L^p$ norm of the difference between the warped image, $I^A\circ\Phi_\theta^{-1}$, and the fixed image, $I^B$, as similarity measure. Then, our optimization problem becomes $$\begin{equation}
+ \theta^* = \underset{\theta}{\arg \min}~\int (I^A(\Phi_\theta^{-1}(x))-I^B(x))^p~\mathrm{d}x\,,~p\geq 1, \label{eq:only_similarity}
+ \end{equation}$$ i.e., the image intensities of $I^A$ should be close to the image intensities of $I^B$ *after* deformation. Without regularization, we are entirely free to choose $\Phi_\theta$. Highly irregular minimizers of Eq. [\[eq:only_similarity\]](#eq:only_similarity){reference-type="eqref" reference="eq:only_similarity"} may result, as each intensity value of $I^A$ is simply matched to the closest intensity value of $I^B$ regardless of location. For instance, for a constant $I^B(x)=c$ and a moving image $I^A(y)$ with a unique location $y_c$, where $I^A(y_c)=c$, the optimal map is $\Phi_\theta^{-1}(x) = y_c$, which is not invertible: only *one point* of $I^A$ will be mapped to the *entire* domain of $I^B$. Clearly, more spatial regularity is desirable. Importantly, irregular deformations are common optimizers of Eq. [\[eq:only_similarity\]](#eq:only_similarity){reference-type="eqref" reference="eq:only_similarity"}.
+
+ Optimal mass transport (OMT) is widely used in machine learning and in imaging. Such models are of interest to us as they can be inverse consistent. An OMT variant of the discrete reformulation of Eq. [\[eq:only_similarity\]](#eq:only_similarity){reference-type="eqref" reference="eq:only_similarity"} is $$\begin{equation}
+ \theta^* = \underset{\theta}{\arg\min}~\mathrm{d}x \sum_{i=1}^S (I^A(\Phi_\theta^{-1}(x_i))-I^B(x_i))^p\,,~p\geq 1,\label{eq:only_similarity_discrete}
+ \end{equation}$$ where $i$ indexes the $S$ grid points $x_i$, $\Phi_\theta^{-1}(x_i)$ is restricted to map to the grid $y_i$ of $I^A$, and $\mathrm{d}x$ is the discrete area element. Instead of considering all possible maps, we attach a unit mass to each *intensity* value of $I^A$ and $I^B$ and ask for minimizers of Eq. [\[eq:only_similarity_discrete\]](#eq:only_similarity_discrete){reference-type="eqref" reference="eq:only_similarity_discrete"} which transform the intensity distribution of $I^A$ to the intensity distribution of $I^B$ via *permutations* of the values only. As we only allow permutations, the optimal map will be *invertible* by construction. This problem is equivalent to optimal mass transport for one-dimensional empirical measures [@peyre2019computational]. One obtains the optimal value by ordering all intensity values of $I^A$ ($I^A_1\leq \cdots\leq I^A_S$) and $I^B$ ($I^B_1\leq \cdots\leq I^B_S$). The minimum is the $p$-th power of the $p$-Wasserstein distance ($p\geq 1$), $\mathcal{W}_p^p = \sum_i |I^A_i-I^B_i|^p$. In consequence, minimizers of Eq. [\[eq:only_similarity\]](#eq:only_similarity){reference-type="eqref" reference="eq:only_similarity"} are related to sorting, but do not consider spatial regularity. Note that solutions might not be unique when intensity values in $I^A$ or $I^B$ are repeated. Solutions via sorting were empirically explored for registration in [@Rholfing12] to illustrate that they, in general, do not result in spatially meaningful registrations. At this point, our idea of using inverse consistency (i.e., invertible maps) as the only regularizer appears questionable, given that OMT often provides an inverse consistent model (when a matching, i.e., a Monge solution, is optimal), while resulting in irregular maps (Fig. [1](#fig:matching_examples){reference-type="ref" reference="fig:matching_examples"}).
+
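The sorting solution can be illustrated with a minimal sketch (toy intensity lists, not the paper's data): sorting both intensity vectors and summing the $p$-th powers of the differences gives $\mathcal{W}_p^p$.

```python
# Sketch of the 1D OMT sorting solution described above: matching the sorted
# intensities of I^A to the sorted intensities of I^B is optimal among
# permutations, and the optimum equals sum_i |I^A_i - I^B_i|^p, the p-th
# power of the p-Wasserstein distance.
def wasserstein_p_pow(a, b, p=2):
    assert len(a) == len(b)
    return sum(abs(x - y) ** p for x, y in zip(sorted(a), sorted(b)))

I_A = [3.0, 1.0, 2.0]
I_B = [2.0, 2.0, 4.0]
# Sorted: [1, 2, 3] vs [2, 2, 4] -> |1-2|^2 + |2-2|^2 + |3-4|^2 = 2
print(wasserstein_p_pow(I_A, I_B, p=2))
```

Note that the induced permutation matches values by rank, regardless of where they sit in the image, which is exactly why such solutions need not be spatially regular.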
+ ![Source and target functions for a 1D registration example. Panels (c) and (d) show two possible solutions for mean square error (MSE) and OMT, respectively. In both cases, solutions may not be unique. However, for OMT, matching solutions will be one-to-one, i.e., invertible. OMT imposes a stronger constraint than MSE on the obtainable maps, but irregular maps are still permissible.](src_tgt.pdf){#fig:matching_examples width="0.98\\columnwidth"}
+
+ **Simplicity**. The highly irregular maps in Fig. [1](#fig:matching_examples){reference-type="ref" reference="fig:matching_examples"} occur for *pair-wise* image registration. Instead, we are concerned with training a network over an *entire image population*. Were one to find a global inverse consistent minimizer, a network would need to implicitly approximate the sorting-based OMT solution. As sorting is a continuous piece-wise linear function [@blondel2020fast], it can, in principle, be approximated according to the universal approximation theorem [@leshno1993multilayer]. However, this is a limit argument. Practical neural networks for sorting are either *approximate* [@liu2011learning; @engilberge2019sodeep] or very large (e.g., $O(S^2)$ neurons for $S$ values [@chen1990neural]). Note that deep networks often tend to simple solutions [@shah2020pitfalls] and that we do not even want to sort *all* values for registration. Instead, we are interested in more *local* permutations, rather than the global OMT permutations, which is what we will obtain for neural network solutions with inverse consistency.
+
+ **Invertibility**. Requiring map invertibility implies searching for a matching (a Monge formulation in OMT), which is an optimal permutation, but which may not be continuous[^2]. Instead, our goal is a *continuous and invertible* map. We therefore want to penalize deviations from $$\begin{equation}
+ \Phi_{\theta}^{AB} \circ \Phi_{\theta}^{BA} = \operatorname{Id}\,,\label{eq:inverse_consistency}
+ \end{equation}$$ where $\Phi_{\theta}^{AB}$ denotes a predicted map (by a network with weights $\theta$) to register image $I^A$ to $I^B$; $\Phi_{\theta}^{BA}$ is the network output with reversed inputs and $\operatorname{Id}$ denotes the identity map.
+
+ Inverse consistency of maps has been explored to obtain symmetric maps for pair-wise registration [@hart2009optimal; @christensen2001consistent] and for registration networks [@zhang18; @shen19]. Related losses have been proposed on images (instead of maps) for registration [@boah19; @boah20] and for image translation [@zhang2020cross]. However, none of these approaches study inverse consistency for regularization, likely because it has so far been believed that additional spatial regularization is required for nonparametric registration.
+
+ As we will show next, *approximate inverse consistency* by itself yields regularizing effects in the context of pairwise image registration.
+
+ Denote by $\Phi_{\theta}^{AB}(x)$ and $\Phi_{\theta}^{BA}(x)$ the output maps of a network for images $(I^A,I^B)$ and $(I^B,I^A)$, respectively. As inverse consistency by itself does not prevent discontinuous solutions, we propose to use *approximate* inverse consistency to favor $C^0$ solutions. We add two vector-valued independent spatial white noises $n_1(x),n_2(x)\in\mathbb{R}^N$ ($x \in [0,1]^N$ with $N$=2 or $N$=3 the image dimension) of variance $1$ for each space location and dimension to the two output maps and define $$\begin{align*}
+ \Phi_{\theta \varepsilon}^{AB}(x) & = \Phi_{\theta}^{AB}(x) + \varepsilon n_1(\Phi_{\theta}^{AB}(x))\enspace, \\
+ \Phi_{\theta \varepsilon}^{BA}(x) & = \Phi_{\theta}^{BA}(x) + \varepsilon n_2(\Phi_{\theta}^{BA}(x))\enspace,
+ \end{align*}$$ with $\varepsilon>0$. We then consider the loss $\mathcal{L} = \lambda\mathcal{L}_{\text{inv}} + \mathcal{L}_{\text{sim}}$, with inverse consistency component ($\mathcal{L}_{\text{inv}}$) $$\begin{equation}
+ \begin{split}
+ \mathcal{L}_{\text{inv}} & =
+ \left\| \Phi_{\theta \varepsilon}^{AB}\circ \Phi_{\theta \varepsilon}^{BA}- \operatorname{Id} \right\|^2_2 +
+ \left\| \Phi_{\theta \varepsilon}^{BA}\circ \Phi_{\theta \varepsilon}^{AB}- \operatorname{Id} \right\|^2_2
+ \end{split}
+ \label{EqLossSymmetric:partInv}
+ \end{equation}$$ and similarity component ($\mathcal{L}_{\text{sim}}$)
+
+ $$\begin{equation}
+ \mathcal{L}_{\text{sim}} =
+ \left\| I^A \circ \Phi_{\theta}^{AB}- I^B \right\|^2_2 +
+ \left\| I^B \circ \Phi_{\theta}^{BA}- I^A \right\|^2_2\enspace.
+ \label{EqLossSymmetric:partSim}
+ \end{equation}$$ Importantly, note that there are *multiple* maps that can lead to the same $I^A \circ \Phi_{\theta}^{AB}$ and $I^B \circ \Phi_{\theta}^{BA}$. Therefore, among all these maps, minimizing the loss $\mathcal{L}$ drives the maps towards those that minimize the two terms in Eq. [\[EqLossSymmetric:partInv\]](#EqLossSymmetric:partInv){reference-type="eqref" reference="EqLossSymmetric:partInv"}.
+
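A minimal noise-free ($\varepsilon=0$) discretization of this loss on a 1D index grid can be sketched as follows; maps are index arrays, composition is array indexing, and the identity map is the index sequence itself. This is a simplification of the continuous, interpolated maps used in the actual method.

```python
# Toy 1D discretization of L = lambda * L_inv + L_sim (epsilon = 0):
# maps are index arrays, (phi o psi)(x) = phi[psi[x]], and Id = range(n).
def compose(phi, psi):
    return [phi[i] for i in psi]

def sq_norm(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def loss(phi_ab, phi_ba, I_A, I_B, lam=1.0):
    n = len(phi_ab)
    ident = list(range(n))
    # Inverse consistency: both compositions should equal the identity map.
    l_inv = sq_norm(compose(phi_ab, phi_ba), ident) \
          + sq_norm(compose(phi_ba, phi_ab), ident)
    # Similarity: warp each image by its map, (I o Phi)(x) = I[Phi[x]].
    l_sim = sq_norm([I_A[i] for i in phi_ab], I_B) \
          + sq_norm([I_B[i] for i in phi_ba], I_A)
    return lam * l_inv + l_sim

# A cyclic shift and its inverse are exactly inverse consistent:
phi_ab = [1, 2, 3, 0]
phi_ba = [3, 0, 1, 2]
I_A = [0.0, 1.0, 2.0, 3.0]
I_B = [1.0, 2.0, 3.0, 0.0]  # I_A shifted, so I_A o phi_ab matches I_B
print(loss(phi_ab, phi_ba, I_A, I_B))
```

For this pair both loss terms vanish; replacing either map by one that is not the inverse of the other makes $\mathcal{L}_{\text{inv}}$ strictly positive.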
+
+ **Assumption**. *Both terms in Eq. [\[EqLossSymmetric:partInv\]](#EqLossSymmetric:partInv){reference-type="eqref" reference="EqLossSymmetric:partInv"} can be driven to a small value (of the order of the noise) by minimization.*
+
+ We first Taylor-expand one of the two terms in Eq. [\[EqLossSymmetric:partInv\]](#EqLossSymmetric:partInv){reference-type="eqref" reference="EqLossSymmetric:partInv"} (the other follows similarly), yielding $$\begin{equation}
+ \begin{split}
+ \left\| \Phi_{\theta \varepsilon}^{AB}\circ \Phi_{\theta \varepsilon}^{BA}- \operatorname{Id} \right\|^2_2 \approx
+ & \left\| \Phi_{\theta}^{AB}\circ \Phi_{\theta}^{BA}~+ \right. \\
+ & ~~~~\varepsilon n_1(\Phi_{\theta}^{AB}\circ \Phi_{\theta}^{BA})~+ \\
+ & ~~~\left. \mathrm{d}\Phi_{\theta \varepsilon}^{AB}(\varepsilon n_2(\Phi_{\theta}^{BA})) - \operatorname{Id}\right\|^2_2\enspace.\nonumber
+ \end{split}
+ \end{equation}$$ Defining the right-hand side as $A$, developing the squares and taking expectations, we obtain $$\begin{equation}
+ \begin{split}
+ \mathbb{E}[A] = & \left\| \Phi_{\theta}^{AB}\circ \Phi_{\theta}^{BA}- \operatorname{Id} \right\|^2_2 \\
+ & + \varepsilon^2 \mathbb{E}\left[\left\|n_1\circ(\Phi_{\theta \varepsilon}^{AB}\circ \Phi_{\theta \varepsilon}^{BA})\right\|^2_2\right] \\
+ & + \varepsilon^2\mathbb{E}\left[\left\|\mathrm{d}\Phi_{\theta \varepsilon}^{AB}(n_2) \circ \Phi_{\theta}^{BA}\right\|^2_2\right]\enspace,
+ \end{split}
+ \label{eqn:expectation_of_A}
+ \end{equation}$$ since, by independence, all the cross-terms vanish (the noise terms have $0$ mean value). The second term is constant, i.e., $$\begin{alignat}{1}
+ &\mathbb{E}\left[\left\|n_1\circ(\Phi_{\theta \varepsilon}^{AB}\circ \Phi_{\theta \varepsilon}^{BA})\right\|^2_2\right] = \\
+ &\int \mathbb{E}\left[\|n_1\|^2_2(y)\right] \operatorname{Jac}((\Phi_{\theta \varepsilon}^{BA})^{-1} \circ (\Phi_{\theta \varepsilon}^{AB})^{-1})~\mathrm{d}y = \text{const.}\,, \notag
+ \end{alignat}$$ where we performed a change of variables and denoted the determinant of the Jacobian matrix as $\operatorname{Jac}$. The last equality follows from the fact that the variance of the noise term is spatially constant and equal to $1$. By similar arguments, the last expectation term in Eq. [\[eqn:expectation_of_A\]](#eqn:expectation_of_A){reference-type="eqref" reference="eqn:expectation_of_A"} can be rewritten as $$\begin{multline}
+ \label{EqWhiteNoise}
+ \mathbb{E}\left[\left\|\mathrm{d}\Phi_{\theta \varepsilon}^{AB}(n_2) \circ \Phi_{\theta}^{BA}\right\|^2_2\right] = \\
+ \int \operatorname{Tr}(\mathrm{d}(\Phi_{\theta \varepsilon}^{AB})^{\top} \mathrm{d}\Phi_{\theta \varepsilon}^{AB}) \operatorname{Jac}((\Phi_{\theta}^{BA})^{-1})~\mathrm{d}y\,,
+ \end{multline}$$ where $\operatorname{Tr}$ denotes the trace operator. As detailed in the suppl. material, the identity of Eq. [\[EqWhiteNoise\]](#EqWhiteNoise){reference-type="eqref" reference="EqWhiteNoise"} relies on a change of variables and on the property of the white noise, $n_2$, which satisfies null correlation in space and dimension: $\mathbb{E}[n_2(x) n_2(x')^\top] = \operatorname{Id}_{\mathbb{R}^N}$ if $x=x'$ and $0$ otherwise.
+
96
+ **Approximation & $H^1$ regularization**. We now want to connect the approximate inverse consistency loss of Eq. [\[EqLossSymmetric:partInv\]](#EqLossSymmetric:partInv){reference-type="eqref" reference="EqLossSymmetric:partInv"} with $H^1$ norm type regularization. Our assumption implies that $\Phi_{\theta}^{AB}\circ \Phi_{\theta}^{BA},\Phi_{\theta}^{BA}\circ \Phi_{\theta}^{AB}$ are close to identity, therefore one has $\operatorname{Jac}((\Phi_{\theta}^{BA})^{-1}) \approx \operatorname{Jac}(\Phi_{\theta}^{AB})$. Assuming this approximation holds , we use it in Eq. [\[EqWhiteNoise\]](#EqWhiteNoise){reference-type="eqref" reference="EqWhiteNoise"}, together with the fact that, $\Phi_{\theta \varepsilon}^{AB}\approx \Phi_{\theta}^{AB}+ O(\varepsilon)$ to get at order $\varepsilon^2$ (see suppl. material for details) to approximate $\mathcal{L}_{\text{inv}}$, , $$\begin{equation}
97
+ \begin{split}
98
+ \mathcal{L}_{\text{inv}} & \approx \left\| \Phi_{\theta}^{AB}\circ \Phi_{\theta}^{BA}- \operatorname{Id}\right\|^2_2 + \left\| \Phi_{\theta}^{BA}\circ \Phi_{\theta}^{AB}- \operatorname{Id}\right\|^2_2 \\
99
+ + \varepsilon^2 &\left\| d \Phi_{\theta}^{AB}\sqrt{\operatorname{Jac}(\Phi_{\theta}^{AB})} \right\|^2_2
100
+ + \varepsilon^2 \left\| d \Phi_{\theta}^{BA}\sqrt{\operatorname{Jac}(\Phi_{\theta}^{BA})} \right\|^2_2 \,
101
+ \end{split}
102
+ \label{EqH1regularization}
103
+ \end{equation}$$ We see that approximate inverse consistency leads to an $L^2$ penalty on the gradient of the map, weighted by the Jacobian of the map. This is a type of Sobolev ($H^1$, more precisely) regularization sometimes used in image registration. In particular, the $H^1$ term is likely to control the compression and expansion magnitude of the maps, at least on average over the domain.
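To make the origin of the $\varepsilon^2$ terms explicit, here is a compressed version of the supplementary argument (a sketch under the stated zero-mean, independent-noise assumptions, using $\mathrm{d}\Phi_{\theta \varepsilon}^{AB}\approx \mathrm{d}\Phi_{\theta}^{AB}$ at leading order):

$$\begin{aligned}
\mathbb{E}\left[\left\| \Phi_{\theta \varepsilon}^{AB}\circ \Phi_{\theta}^{BA}- \operatorname{Id}\right\|_2^2\right]
&\approx \mathbb{E}\left[\left\| \Phi_{\theta}^{AB}\circ \Phi_{\theta}^{BA}- \operatorname{Id}+ \varepsilon\, \mathrm{d}\Phi_{\theta}^{AB}(n_2) \circ \Phi_{\theta}^{BA}\right\|_2^2\right] \\
&= \left\| \Phi_{\theta}^{AB}\circ \Phi_{\theta}^{BA}- \operatorname{Id}\right\|_2^2 + \varepsilon^2\, \mathbb{E}\left[\left\|\mathrm{d}\Phi_{\theta}^{AB}(n_2) \circ \Phi_{\theta}^{BA}\right\|_2^2\right],
\end{aligned}$$

since the cross term vanishes by the zero mean of $n_2$; the change of variables of Eq. [\[EqWhiteNoise\]](#EqWhiteNoise){reference-type="eqref" reference="EqWhiteNoise"} together with $\operatorname{Jac}((\Phi_{\theta}^{BA})^{-1}) \approx \operatorname{Jac}(\Phi_{\theta}^{AB})$ then turns the last expectation into the weighted gradient penalty of Eq. [\[EqH1regularization\]](#EqH1regularization){reference-type="eqref" reference="EqH1regularization"}.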
104
+
105
+ **Inverse consistency with no noise and the implicit regularization of inverse consistency**. Setting the noise level to zero also leads to regular displacement fields in our experiments when predicting maps with a neural network. In this case, we observe that inverse consistency is only approximately achieved. Therefore, one can postulate that the error made in computing the inverse induces the $H^1$ regularization derived above. The possible caveat of this hypothesis is that the inverse consistency error might not be independent of the displacement fields, which was assumed in deriving the emerging $H^1$ regularization. Lastly, even when the network has the capacity to exactly satisfy inverse consistency for all data, we conjecture that the implicit bias of the optimization will favor more regular outputs.
106
+
107
+ *A fully rigorous theoretical understanding of the regularization effect due to the data population and its link with inverse consistency is important, but beyond our scope here.*
108
+
109
+ We base our registration approach on training a neural network $F_\theta^{AB}$ which, given input images $I^A$ and $I^B$, outputs a grid of *displacement* vectors, $D_\theta^{AB}$, in the space of image $I^B$, assuming normalized image coordinates covering $[0,1]^N$. We obtain *continuous* maps by interpolation, i.e., $$\begin{equation}
110
+ \Phi_\theta^{AB} = D_{\theta}^{AB} + \operatorname{Id}, \quad D_{\theta}^{AB} = \operatorname{interp}(F_{\theta}^{AB})
111
+ %\Phi_\theta^{AB} = \operatorname{interpolate}[F_\theta^%{AB}] + \operatorname{Id},~D_\theta^{AB} := F_\theta^{AB}\,,
112
+ \label{eq:map_interpolation}
113
+ \end{equation}$$ where $I^A\circ\Phi_\theta^{AB} \approx I^B$. Under the assumption of linear interpolation (bilinear in 2D and trilinear in 3D), $\Phi_\theta^{AB}$ is continuous and differentiable except on a measure zero set. Building on the considerations of Sec. [2](#section:background){reference-type="ref" reference="section:background"} we seek to minimize $$\begin{equation}
114
+ \mathcal{L}(\theta) = \mathbb{E}_{p(I^A,I^B)}\left[\mathcal{L}_{\text{sim}}^{AB} + \lambda \mathcal{L}_{\text{inv}}^{AB}\right],\label{eq:overall_loss}
115
+ \end{equation}$$ where $\lambda\geq 0$ and $p(I^A,I^B)$ denotes the distribution over all possible image pairs. The similarity and invertibility losses depend on the neural network parameters, $\theta$, and are $$\begin{alignat}
116
+ {1}
117
+ \mathcal{L}_{\text{sim}}^{AB} &= \mathcal{L}_{\text{sim}}(I^A \circ \Phi_\theta^{AB}, I^B) + \mathcal{L}_{\text{sim}}(I^B \circ \Phi_\theta^{BA}, I^A) \notag \\
118
+ \mathcal{L}_{\text{inv}}^{AB} &= \mathcal{L}_{\text{inv}}(\Phi_\theta^{AB},\Phi_\theta^{BA}) + \mathcal{L}_{\text{inv}}(\Phi_\theta^{BA},\Phi_\theta^{AB})
119
+ \label{eqn:similarity_and_consistency_loss}
120
+ \end{alignat}$$ with $$\begin{equation}
121
+ \mathcal{L}_{\text{sim}}(I,J) = \|I-J\|_2^2\,,
122
+ \mathcal{L}_{\text{inv}}(\phi,\psi) = \|\phi \circ \psi - \operatorname{Id}\|_2^2\,.
123
+ \end{equation}$$
124
+
125
+ For simplicity, we use the squared $L^2$ norm as similarity measure. Other measures, e.g., normalized cross correlation (NCC) or mutual information (MI), can also be used. When $\mathcal{L}_{\text{inv}}^{AB}$ goes to zero, $\Phi_\theta^{AB}$ will be approx. invertible and continuous due to Eq. [\[eq:map_interpolation\]](#eq:map_interpolation){reference-type="eqref" reference="eq:map_interpolation"}. Hence, we obtain approximate $C^0$ diffeomorphisms without differential equation integration, hyperparameter tuning, or transform restrictions. Our loss in Eq. [\[eq:overall_loss\]](#eq:overall_loss){reference-type="eqref" reference="eq:overall_loss"} is symmetric in the image pairs due to the symmetric similarity and invertibility losses in Eq. [\[eqn:similarity_and_consistency_loss\]](#eqn:similarity_and_consistency_loss){reference-type="eqref" reference="eqn:similarity_and_consistency_loss"}.
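As a concrete illustration of the map construction in Eq. [\[eq:map_interpolation\]](#eq:map_interpolation){reference-type="eqref" reference="eq:map_interpolation"} and the squared-$L^2$ similarity term, the following 2D toy sketch (not the authors' implementation; names and the shift example are ours) warps an image by $\Phi = D + \operatorname{Id}$ in normalized coordinates with linear interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, D):
    # Compute I o Phi with Phi = D + Id; `image` is (H, W), `D` is
    # (2, H, W) in normalized coordinates (channel 0 = row offsets).
    H, W = image.shape
    rows, cols = np.meshgrid(np.linspace(0.0, 1.0, H),
                             np.linspace(0.0, 1.0, W), indexing="ij")
    phi_r = (rows + D[0]) * (H - 1)   # back to pixel coordinates
    phi_c = (cols + D[1]) * (W - 1)
    return map_coordinates(image, [phi_r, phi_c], order=1, mode="nearest")

def l_sim(I_A, I_B, D_AB):
    # squared-L2 similarity between warped source and target
    return np.mean((warp(I_A, D_AB) - I_B) ** 2)

I_A = np.zeros((8, 8)); I_A[2:5, 2:5] = 1.0
I_B = np.roll(I_A, 1, axis=1)          # target: source shifted one pixel right
zero = np.zeros((2, 8, 8))             # identity map
shift = np.zeros((2, 8, 8)); shift[1] = -1.0 / 7.0   # displacement undoing the shift
print(l_sim(I_A, I_B, shift), l_sim(I_A, I_B, zero))
```

The one-pixel displacement field drives the similarity loss to (numerically) zero, while the identity map leaves a residual of the non-overlapping pixels.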
126
+
127
+ **Displacement-based inverse consistency loss**. A general map $\Phi_\theta^{AB}$ may map points in $[0, 1]^N$ to points outside $[0, 1]^N$. Extrapolating maps across the boundary is cumbersome. Hence, we only interpolate displacement fields as in Eq. [\[eq:map_interpolation\]](#eq:map_interpolation){reference-type="eqref" reference="eq:map_interpolation"}. We rewrite the inverse consistency loss as $$\begin{alignat}
128
+ {1}
129
+ \mathcal{L}_{\text{inv}}(\Phi_\theta^{AB},\Phi_\theta^{BA}) & = \left\|(D_\theta^{AB} + \operatorname{Id}) \circ (D_\theta^{BA} + \operatorname{Id}) - \operatorname{Id}\right\|^2_2 \notag \\
130
+ & = \left\|(D_\theta^{AB}) \circ \Phi_\theta^{BA} + D_\theta^{BA} \right\|_2^2
131
+ \end{alignat}$$ and use it for implementation, as it is easier to evaluate.
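A minimal 1D sketch of this displacement-based loss (illustrative only; the test deformation and helper names are our own) shows that a resampled, negated field is approximately inverse consistent while the zero field is not:

```python
import numpy as np

def interp_displacement(D, x):
    # linear interpolation of displacement samples on a uniform grid over [0, 1]
    return np.interp(x, np.linspace(0.0, 1.0, len(D)), D)

def l_inv(D_AB, D_BA):
    # || D_AB o (D_BA + Id) + D_BA ||^2, averaged over the grid
    x = np.linspace(0.0, 1.0, len(D_BA))
    comp = interp_displacement(D_AB, x + D_BA) + D_BA
    return np.mean(comp ** 2)

x = np.linspace(0.0, 1.0, 64)
D_BA = 0.05 * np.sin(2.0 * np.pi * x)            # a smooth test deformation
# An approximately inverse-consistent partner: D_AB(y) = -D_BA(Phi_BA^{-1}(y)),
# obtained here by resampling the negated field onto the grid.
D_AB = -np.interp(x, x + D_BA, D_BA)

print(l_inv(D_AB, D_BA))                  # close to zero
print(l_inv(np.zeros_like(D_BA), D_BA))   # clearly nonzero: 0 is not the inverse
```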
132
+
133
+ <figure id="fig:flips">
134
+ <table>
135
+ <tbody>
136
+ <tr>
137
+ <td style="text-align: center;"></td>
138
+ <td style="text-align: center;"></td>
139
+ </tr>
140
+ </tbody>
141
+ </table>
142
+ <figcaption>The left output is generated by a network trained with inverse consistency, evaluated on a grid instead of randomly. As a result, the loss cannot detect that maps generated by this network flip the pair of pixels in the upper right corner, as that error is not represented in the composed map. The right output is obtained from a network trained with random evaluation off of lattice points.</figcaption>
143
+ </figure>
144
+
145
+ ![In this example, grid points (solid black discs) map to each other inverse consistently. The forward map (a) is inverted by the backward map (b). However, folding of the space occurs as the middle two points swap positions. Off-grid points map under linear interpolation according to (c/d). We see that the interpolated displacements for the small solid red disc ([$\bullet$]{style="color: red"}) do not result in an invertible map. Hence, this mismatch would be penalized by the inverse consistency loss, but only when evaluated off-grid.](fwd_bwd_displacement.pdf){#fig:off_grid_resampling width="\\columnwidth"}
146
+
147
+ **Random evaluation of inverse consistency loss**. $\mathcal{L}_{\text{inv}}^{AB}$ can be evaluated by approximating the $L^2$ norm, assuming constant values over the grid cells. In many cases, this is sufficient. However, as Fig. [2](#fig:flips){reference-type="ref" reference="fig:flips"} illustrates, swapped locations may occur in uniform regions where a registration network only sees uniform background. This swap, composed with itself, is the identity as long as it is only evaluated at the center of pixels/voxels. Hence, the map appears invertible to the loss. However, outside the centers of pixels/voxels, the map is not inverse consistent when combined with linear interpolation. To avoid such pathological cases, we approximate the $L^2$ norm by random sampling. This forces interpolation and therefore results in non-zero loss values for swaps. Fig. [3](#fig:off_grid_resampling){reference-type="ref" reference="fig:off_grid_resampling"} shows why off-grid sampling combined with inverse consistency is a stronger condition than only considering deformations at grid points. In practice, we evaluate the loss
148
+
149
+ $$\begin{align}
150
+ %\begin{split}
151
+ & \mathcal{L}_{\text{inv}}(\Phi_\theta^{AB},\Phi_\theta^{BA})\\
152
+ & ~~= \left\| (D_\theta^{AB}) \circ \Phi_\theta^{BA} + D_\theta^{BA} \right\|_2^2 \nonumber \\
153
+ & ~~= \mathbb{E}_{x \sim \mathcal{U}(0,1)^N} \left[(D_\theta^{AB}) \circ \Phi_\theta^{BA} + D_\theta^{BA}\right]^2(x) \nonumber \\
154
+ & ~~\approx \nicefrac{1}{N_p}\sum\nolimits_{i} \left([(D_\theta^{AB}) \circ (D_\theta^{BA} + \operatorname{Id}) + D_\theta^{BA}] (x_i + \epsilon_i) \right)^2 \nonumber \\
155
+ & ~~= \nicefrac{1}{N_p}\sum\nolimits_{i} \left([D_\theta^{AB} \circ (D_\theta^{BA} \circ (x_i + \epsilon_i) + x_i + \epsilon_i) \right. \nonumber \\
156
+ & ~~~~~~~~~~~~~~~~~~~~~~~~+ \left. D_\theta^{BA} \circ (x_i + \epsilon_i)]\right)^2 \nonumber
157
+ %\end{split}
158
+ \end{align}$$ where $N_p$ is the number of pixels/voxels, $\mathcal{U}(0,1)^N$ denotes the uniform distribution over $[0,1]^N$, $x_i$ denotes the grid center coordinates and $\epsilon_i$ is a random sample drawn from a multivariate Gaussian with standard deviation set to the size of a pixel/voxel in the respective spatial directions.
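The pixel-swap pathology and its detection by off-grid sampling can be reproduced in a simplified 1D analogue (a hedged illustration, not the paper's code): a field that swaps two neighbouring grid points has exactly zero inverse consistency loss at grid centres, but a clearly nonzero Monte Carlo estimate under Gaussian jitter of one pixel:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
h = 1.0 / (n - 1)                        # grid spacing, i.e., one "pixel"
x_grid = np.linspace(0.0, 1.0, n)
D = np.zeros(n)
D[7], D[8] = h, -h                       # this field swaps grid points 7 and 8

def D_interp(x):
    # linear interpolation of the displacement field
    return np.interp(x, x_grid, D)

def inv_loss(points):
    # || D o (D + Id) + D ||^2 estimated at the given sample points
    comp = D_interp(points + D_interp(points)) + D_interp(points)
    return np.mean(comp ** 2)

on_grid = inv_loss(x_grid)               # the swap is invisible on the grid
jitter = rng.normal(0.0, h, 64 * n)      # Gaussian jitter, std = one pixel
samples = np.clip(np.tile(x_grid, 64) + jitter, 0.0, 1.0)
off_grid = inv_loss(samples)             # off-grid sampling exposes the fold
print(on_grid, off_grid)
```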
159
+
160
+ # Method
161
+
162
+ We experiment with four neural network architectures. All networks output displacement fields, $D_\theta^{AB}$. We briefly outline the differences below, but refer to the suppl. material for details. The first network is an **MLP** with 2 hidden layers and ReLU activations. The output layer is reshaped into size $2 \times W \times H$. Second, we use a convolutional encoder-decoder network (**Enc-Dec**) with 5 layers each, reminiscent of a U-Net *without* skip connections. Our third network uses 6 convolutional layers without up- or down-sampling. The input to each layer is the concatenation of the outputs of all previous layers (**ConvOnly**). Finally, we use a **U-Net** with skip and residual connections. The latter is similar to **Enc-Dec**, but uses LeakyReLU activations and batch normalization. In all architectures, the final layer weights are initialized to 0, so that optimization starts at a network outputting a zero displacement field.
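The zero initialization of the final layer can be illustrated independently of the architecture (a toy numpy sketch, not any of the four networks): with zero final-layer weights the output displacement field is exactly zero, so optimization starts at $\Phi = \operatorname{Id}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: arbitrary hidden features followed by a zero-initialized
# final linear layer producing a 2-channel displacement output.
hidden = np.maximum(rng.normal(size=(64, 32)), 0.0)  # ReLU features
W_final = np.zeros((32, 2))                          # zero-initialized weights
D = hidden @ W_final                                 # displacement field

print(np.abs(D).max())  # 0.0 -- the initial map is the identity
```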
163
+
164
+ <figure id="fig:regularity_by_inexact_inverse_consistency">
165
+
166
+ <figcaption>Comparison between <strong>U-Net</strong> results and <strong>direct optimization</strong> (no neural network; over <span class="math inline"><em>Φ</em><sub><em>θ</em></sub><sup><em>A</em><em>B</em></sup></span> and <span class="math inline"><em>Φ</em><sub><em>θ</em></sub><sup><em>B</em><em>A</em></sup></span>) w/ and w/o added noise, using the inverse consistency loss with <span class="math inline"><em>λ</em> = 2,048</span>. Direct optimization w/o noise leads to irregular maps, while adding noise or using the <strong>U-Net</strong> improves map regularity (best viewed zoomed).</figcaption>
167
+ </figure>
168
+
169
+ Sec. [2.3](#subsec:h1_by_approximate_inverse_consistency){reference-type="ref" reference="subsec:h1_by_approximate_inverse_consistency"} formalized that approximate inverse consistency results in regularizing effects. Specifically, when $\Phi_\theta^{AB}$ is approximately the inverse of $\Phi_\theta^{BA}$, the inverse consistency loss $\mathcal{L}^{AB}_{\text{inv}}$ can be approximated based on Eq. [\[EqH1regularization\]](#EqH1regularization){reference-type="eqref" reference="EqH1regularization"}, highlighting its implicit $H^1$ regularization. We investigate this behavior with three experiments: Fig. [4](#fig:regularity_by_inexact_inverse_consistency){reference-type="ref" reference="fig:regularity_by_inexact_inverse_consistency"} shows some sample results, supporting our theoretical exposition of Sec. [2.3](#subsec:h1_by_approximate_inverse_consistency){reference-type="ref" reference="subsec:h1_by_approximate_inverse_consistency"}: Pair-wise image registration without noise results in highly irregular transformations even though the inverse consistency loss is used. Adding a small amount of Gaussian noise with a standard deviation of 1/8th of a pixel (similar to the inverse consistency loss magnitudes we observe for a deep network) to the displacement fields before computing the inverse consistency loss results in significantly more regular maps. Lastly, using a **U-Net** yields highly regular maps. Notably, all three approaches result in approximately inverse consistent maps. The behavior for pair-wise image registration elucidates why inverse consistency has not appeared in the classical (pair-wise) registration literature as a replacement for more complex spatial regularization. The proposed technique *only* results in regularity when inverse consistency errors are present.
170
+
171
+ *In summary, our theory is supported by our experimental results: approximate inverse consistency regularizes maps.*
172
+
173
+ \
174
+
175
+ ![Comparison of networks as a function of $\lambda$. **U-Net** and **MLP** show the best performance due to their ability to capture long and short range dependencies. **Enc-Dec** and **ConvOnly**, which capture only long range and only short range dependencies, resp., also learn regular maps, but for a narrower range of $\lambda$. In all cases, maps become smooth for sufficiently large $\lambda$. Best viewed zoomed.](TrianglesFivesCombined.pdf){#fig:registration_across_architectures width="0.97\\columnwidth"}
176
+
177
+ Sec. [4.3](#subsec:imperfect_inverse_consistency){reference-type="ref" reference="subsec:imperfect_inverse_consistency"} illustrated that approximate inverse consistency yields regularization effects which translate to regularity for network predictions, as networks will, in general, not achieve perfect inverse consistency. A natural next question is: how much do the results depend on a particular architecture? To this end, we assess four different network types, focusing on MNIST and the triangles & circles data. We report two measures on held-out images: the *Dice score* of pixels with intensity greater than $0.5$, and the mean number of *folds*, i.e., pixels where the volume form $\mathrm{d}V$ of $\Phi$ is negative.
178
+
179
+ One hypothesis as to how network design could drive smoothness would be that smoothness is induced by convolutional layers (which can implement a smoothing kernel). If this were the case, we would expect the **MLP** to produce irregular maps with a high number of folds. Conversely, since the **MLP** has no spatial prior, obtaining smooth transforms would indicate that smoothness is promoted by the loss itself. The latter is supported by Fig. [5](#fig:registration_across_architectures){reference-type="ref" reference="fig:registration_across_architectures"}, showing regular maps even for the **MLP** when $\lambda$ is sufficiently large. Note that $\lambda=0$ in Fig. [5](#fig:registration_across_architectures){reference-type="ref" reference="fig:registration_across_architectures"} corresponds to an unregularized MSE solution, as discussed in Sec. [2.1](#subsection:regularization_thoughts_and_sorting){reference-type="ref" reference="subsection:regularization_thoughts_and_sorting"}; maps are, as expected, highly irregular and regularization via inverse consistency is clearly needed.
180
+
181
+ A second hypothesis is that regularity results from a *bottleneck* structure within a network, e.g., a **U-Net**. In fact, Bhalodia et al. [@bhalodia19] show that autoencoders tend to yield smooth maps. To assess this hypothesis, we focus on the **Enc-Dec** and **ConvOnly** type networks; the former has a bottleneck structure, while the latter does not. Fig. [5](#fig:registration_across_architectures){reference-type="ref" reference="fig:registration_across_architectures"} shows some support for the hypothesis that a bottleneck promotes smooth maps: for a specific $\lambda$, **Enc-Dec** appears to have more strongly regularized outputs compared to **U-Net**, with **ConvOnly** being the most irregular. Yet, higher values of $\lambda$ (e.g., 1,024 or 2,048) for **ConvOnly** yield equally smooth maps. Overall, a bottleneck structure does have a regularizing effect, but regularity can also be achieved by appropriately weighting the inverse consistency loss (see Tab. [\[tab:registration_across_architectures\]](#tab:registration_across_architectures){reference-type="ref" reference="tab:registration_across_architectures"}).
182
+
183
+ *In summary, our experiments indicate that the regularizing effect of inverse consistency is a robust property of the loss, and should generalize well across architectures.*
184
+
185
+ For experiments on real data, we focus on the 3D OAI dataset. To demonstrate the versatility of the advocated inverse consistency loss in promoting map regularity, we refrain from affine pre-registration (as typically done in earlier works) and simply compose *multiple U-Nets* instead. In particular, we compose up to four U-Nets. A composition of two U-Nets is initially trained on low-resolution image pairs. Weights are then frozen and this network is composed with a third U-Net, trained on high-resolution image pairs. This network is then optionally frozen and composed with a fourth U-Net, again trained on high-resolution image pairs. During the training of this multi-step approach, the weighting of the inverse consistency loss is gradually increased. We train using ADAM [@Kingma15a] with a batch size of 128 in the low-res. stage, and a batch size of 16 in the high-res. stage. MSE is used as image similarity measure.
186
+
187
+ We compare our approach, [InverseConsistentNet]{style="color: ggreen"}, against the methods of [@shen2019networks], in terms of (1) cartilage Dice scores between registered image pairs [@AmbellanTackEhlkeetal.2018] (based on manual segmentations) and (2) the number of folds. The segmentations are not used during training and allow quantifying whether the network yields semantically meaningful registrations. Tab. [\[tab:oai_results\]](#tab:oai_results){reference-type="ref" reference="tab:oai_results"} lists the corresponding results; Fig. [\[fig:teaser\]](#fig:teaser){reference-type="ref" reference="fig:teaser"} shows several example registrations. Unlike the other methods in Tab. [\[tab:oai_results\]](#tab:oai_results){reference-type="ref" reference="tab:oai_results"}, we do not rely on affine pre-registration, except where explicitly noted. Notably, despite its simplicity, `ICON` yields performance (in terms of Dice score & folds) comparable to more complex, explicitly regularized methods. We emphasize that our objective is not to outperform existing techniques, but to present evidence that regular maps can be learned *without* carefully tuned regularizers.
188
+
189
+ *In summary, using the proposed inverse consistency loss yields (1) competitive Dice scores, (2) acceptable folds, and (3) fast performance.*
190
+
191
+ ::: small
192
+ :::
2105.05391/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-04-11T15:30:22.417Z" agent="5.0 (X11)" etag="H329dB64TKJex2XYGHaa" version="14.5.10" type="google"><diagram id="SFImUzd9079kJpAZp9lu" name="Page-1">7VtZk6M2EP41rto82AWIy49eHzOVSlKTTGp3Jy9bLMhYOxg5gMd4f30kEJcQvhZsT2xcNWO1pJbo/tTqbsk9MF7GD4G1WvyOHej1FMmJe2DSUxTd0MhfStimBGCaKcENkJOS5ILwjH5ARpQYdY0cGFYaRhh7EVpViTb2fWhHFZoVBHhTbTbHXnXUleWyEaWC8GxbHqw1+4ycaJFSTa3U+hEid5GNLEusZmlljRkhXFgO3pRIYNoD4wDjKP22jMfQo7LL5JL2mzXU5hMLoB8d0uFV+qz889fb11+97Zc/puu4//XPRZ9xebO8NXvhnqJ7hN/HOSZsyayjLROF/u8aZxX9MFHUiDQgU4iLSvLNpf/HcsaHTChllVYwWeRclQCvfQfSOUqkerNAEXxeWTat3RBEEdoiWnqkJNOxkeeNsYeDpC8YjyXysDmV6DONfhj9mY1F+4dRgF9hqeU8efJ5laWZiQYGEYxLJCbdB4iXMAq2pElWO2SaZlCXNSUtbwrgGKzJooSZDEsWg6qbcy60Sb4whR6hXKUj5T62w6YZIl0gpwkhOxB1OeRkNmMPckBXyAFdmQXlHGbB0aDpqIcocZY83ShRUc3LLn+1q+V/XiWGpAfy3b8xaT8BF9eqrF92aWr3Hbs1uwvkK9uxjbvd/WklXtzumne728HSvLRLNBRolRM79J0RDTlJyfasMER2VdKpwLIwkoqU9GAhb1pvBVGp3ChG6GRBa4MQS0LSdkA/gJ4VobdqqCuSHBvhCaMEwExHij6s6sjghB/idWBD1qscmvKMhlVGis4xIoJxYVRjlCgyf+3TdZth7a7cZuXWdHKqcmso6Vq5omCmRYM51+hHZA/15Kk7O+lD6C4FEhs0Tw/RghtYDoJFJx/7sB27qkqcXZXrdlWEKkXqyrDK9TjlEVqOh8grs/3uW3Dwrplsl9zG+QB9GFgRkSSvdyK0SLSMObmXNc5Ilodcn9oCogpI6B+pCpBteSNWsUSOQ4cRoqmKt2RjTS2DIrWlZr2iZk2taVk9p08ki8KWn7awvOBgjKIvrDX9/kIFPNBYaRIzeSeFbVbwyfvRTn1pIKkZIe0JQFYu+ialSucnGCAiJYoCRszs/k5lXomdByYfA6mn2fmaZeEZdW3n9ctBTDkQY1IVX4a2F18HQSlV0Q7ZsLgy1cABe+W1YJPPi5raidiUTY6Rfl5s1gP7pw9HR4VqNSqU7NylKIgkCJsMidNR2wS7GY3lc46JVbuYhiSJp9EzxsceMnQxPUMbTYzhETm3X96Bp6K25KkYVU9FVrWaq5IfqlY90q58FVEC50p8lffkWahaW56FcWHPop776cp6G9ORPhUZsrPapcd9R7M3Zb2b9rjGpPpNWW9QWZqK4ATlvNZb6SSXd480298PTs4oqoa8m1HDfkB0bm1LzVa0Qbhjwrp4wgU8U46tbjb5Gjz9+AgIj4/yZFpBKNvRb3ttK7+IPA+twibLU1pNVrhKb+bNUUxtUSf5rqE50MQhXgncQKuDG3AhZXuGSHT/4iRDVEods63y7mAmbADglE6wf5pJ0bgsWI1Rxy6mUs+PPv8vEgR3h7by0lft0O5N2tyun6vxmcihMqjnKYbndHRF6e6bvd37nm6ZnfMK4csq/vT6FL68mb/Z3z+Zs/7WVYT3wg9xTJIrDBmZrVvuinV52dVdlITBDNHZCtyN4pRkkJ+LvGQOzI7QqFFFe487rs2fkaup0PyI9mhvBnDWqrv7NEKAteb5Nvm36sHgkW4VPLJunggePm3bGnhIsfi5U9q8+M0YmP4H</d
iagram></mxfile>
2105.05391/main_diagram/main_diagram.pdf ADDED
Binary file (25.1 kB). View file
 
2105.05391/paper_text/intro_method.md ADDED
@@ -0,0 +1,7 @@
 
 
 
 
 
 
 
 
1
+ # Method
2
+
3
+ **Electra Finetune** stands for an Electra model finetuned on the training set of HLGD, taking the two headlines as input, separated by a separator token. Headline order is chosen randomly at each epoch. Because we train a model for several epochs (see Appendix [8](#appendix:model_details){reference-type="ref" reference="appendix:model_details"}), a model is likely to see pairs in both orders. This model uses only article headlines for prediction, and falls under *Challenge 1*.
4
+
5
+ **Electra Finetune on content** represents a similar model to that described above, with the difference that the model makes predictions based on the first 255 words of the contents of the two news articles, instead of the headline. This evaluates the informativeness of contents in determining headline groups. This experiment requires the contents and falls under *Challenge 3*.
6
+
7
+ **Electra Finetune + Time** corresponds to an Electra model with time information. The model's output goes through a $768 \times 1$ feed-forward layer; the result is concatenated with the day difference of publication, and the pair is run through a $2 \times 2$ feed-forward layer and a `softmax` layer. This model uses headline and time information, and falls under *Challenge 2*.
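Under our reading of this head (the exact wiring is an assumption, and the encoder is replaced by a placeholder vector with illustrative random weights, not the trained model's), the time-aware classification head can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative random weights (NOT the trained model's parameters).
W_text = rng.normal(scale=0.02, size=(768, 1))   # 768 -> 1 projection
W_time = rng.normal(scale=0.5, size=(2, 2))      # 2 -> 2 feed-forward

def classify(cls_vector, day_difference):
    text_score = cls_vector @ W_text                       # shape (1,)
    features = np.concatenate([text_score, [day_difference]])
    return softmax(features @ W_time)                      # 2 class probabilities

cls_vec = rng.normal(size=768)        # placeholder for the Electra output
probs = classify(cls_vec, day_difference=3.0)
print(probs, probs.sum())             # probabilities summing to 1
```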
2111.12918/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2111.12918/paper_text/intro_method.md ADDED
@@ -0,0 +1,126 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Deep learning has shown outstanding results in medical image analysis (MIA) [24, 34, 35]. Compared to computer vision, the labelling of MIA training sets by medical experts is significantly more expensive, resulting in low availability of labelled images, but the high availability of unlabelled
4
+
5
+ ![](_page_0_Figure_9.jpeg)
6
+
7
+ (a) Diagram of our ACPL (top) and traditional pseudo-label SSL (bottom)
8
+
9
+ ![](_page_0_Figure_11.jpeg)
10
+
11
+ (b) Imbalanced distribution on multi-label Chest X-ray14 [39] (left) and multi-class ISIC2018 [36] (right)
12
+
13
+ Figure 1. In (a), we show diagrams of the proposed ACPL (top) and the traditional pseudo-label SSL (bottom) methods, and (b) displays histograms of images per label for the multi-label Chest X-ray14 [39] (left) and multi-class ISIC2018 [36] (right).
14
+
15
+ images from clinics and hospitals databases can be explored in the modelling of deep learning classifiers. Furthermore, differently from computer vision problems that tend to be mostly multi-class and balanced, MIA has a number of multi-class (e.g., a lesion image of a single class) and multi-label (e.g., an image from a patient can contain multiple diseases) problems, where both problems usually contain severe class imbalances because of the variable prevalence of diseases (see Fig. 1-(b)). Hence, MIA semi-supervised learning (SSL) methods need to be flexible enough to work with multi-label and multi-class problems, in addition to handle imbalanced learning.
16
+
17
+ State-of-the-art (SOTA) SSL approaches are usually based on consistency learning of unlabelled data [5, 6, 32] and self-supervised pre-training [25]. Even though consistency-based methods show SOTA results on multi-class SSL problems, pseudo-labelling methods have shown
18
+
19
+ <sup>\*</sup>First two authors contributed equally to this work.
20
+
21
+ <sup>&</sup>lt;sup>1</sup>Supported by Australian Research Council through grants DP180103232 and FT190100525.
22
+
23
+ <sup>&</sup>lt;sup>2</sup>Code is available at https://github.com/FBLADL/ACPL
24
+
25
+ better results for multi-label SSL problems [29]. Pseudo-labelling methods provide labels to confidently classified unlabelled samples that are used to re-train the model [22]. One issue with pseudo-labelling SSL methods is that the confidently classified unlabelled samples represent the least informative ones [30] that, for imbalanced problems, are likely to belong to the majority classes. Hence, this will bias the classification toward the majority classes and most likely deteriorate the classification accuracy of the minority classes. Also, selecting confident pseudo-labelled samples is challenging in multi-class, but even more so in multi-label problems. Previous papers [2, 29] use a fixed threshold for all classes, but a class-wise threshold that addresses imbalanced learning and correlations between classes in multi-label problems would enable more accurate pseudo-label predictions. However, such a class-wise threshold is hard to estimate without knowing the class distributions or if we are dealing with a multi-class or multi-label problem. Furthermore, using the model output for the pseudo-labelling process can also cause confirmation bias [1], whereby the assignment of incorrect pseudo-labels will increase the model confidence in those incorrect predictions, and consequently decrease the model accuracy.
26
+
27
+ In this paper, we propose the anti-curriculum pseudo-labelling (ACPL), which addresses multi-class and multi-label imbalanced learning SSL MIA problems. First, we introduce a new approach to select the most informative unlabelled images to be pseudo-labelled. This is motivated by our argument that there exists a distribution shift between unlabelled and labelled samples for SSL. An effective learning curriculum must focus on informative unlabelled samples that are located as far as possible from the distribution of labelled samples. As a result, these informative samples are likely to belong to the minority classes in MIA imbalanced learning problems. Selecting these informative samples will naturally balance the training process and, given that they are selected before the pseudo-labelling process, we eliminate the need for estimating a class-wise classification threshold, enabling our model to work well on multi-class and multi-label problems. The information content measure of an unlabelled sample is computed with our proposed cross-distribution sample informativeness that outputs how close an unlabelled sample is to the set of labelled anchor samples (anchor samples are highly informative labelled samples). Second, we introduce a new pseudo-labelling mechanism, called informative mixup, which combines the model classification with a K-nearest neighbor (KNN) classification guided by sample informativeness to improve prediction accuracy and mitigate confirmation bias. Third, we propose the anchor set purification method that selects the most informative pseudo-labelled samples to be included in the labelled anchor set to improve the pseudo-labelling accuracy of the KNN classifier in later training stages.
28
+
29
+ To summarise, our ACPL approach selects highly informative samples for pseudo-labelling (addressing MIA imbalanced classification problems and allowing multi-label multi-class modelling) and uses an ensemble of classifiers to produce accurate pseudo labels (tackling confirmation bias to improve classification accuracy), where the main technical contributions are:
30
+
31
+ - A novel information content measure to select informative unlabelled samples named cross-distribution sample informativeness;
32
+ - A new pseudo-labelling mechanism, called informative mixup, which generates pseudo labels from an ensemble of deep learning and KNN classifiers; and
33
+ - A novel method, called anchor set purification (ASP), to select informative pseudo-labelled samples to be included in the labelled anchor set to improve the pseudo-labelling accuracy of the KNN classifier.
34
+
35
+ We evaluate ACPL on two publicly available medical image classification datasets, namely the Chest X-Ray14 for thorax disease multi-label classification [39] and the ISIC2018 for skin lesion multi-class classification [8,36]. Our method outperforms the current SOTA methods in both datasets.
36
+
37
+ # Method
38
+
39
+ To introduce our SSL method ACPL, assume that we have a small labelled training set $\mathcal{D}_L = \{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^{|\mathcal{D}_L|}$ , where $\mathbf{x}_i \in \mathcal{X} \subset \mathbb{R}^{H \times W \times C}$ is the input image of size
40
+
41
+ ```
+ 1: require: labelled set \mathcal{D}_L, unlabelled set \mathcal{D}_U,
+             and number of training stages T
+ 2: initialise \mathcal{D}_A = \mathcal{D}_L, and t = 0
+ 3: warm-up train p_{\theta_t}(\mathbf{x}) with
+        \theta_t = \arg\min_{\theta} \frac{1}{|\mathcal{D}_L|} \sum_{(\mathbf{x}_i, \mathbf{y}_i) \in \mathcal{D}_L} \ell(\mathbf{y}_i, p_{\theta}(\mathbf{x}_i))
+ 4: while t < T or |\mathcal{D}_U| \neq 0 do
+ 5:     build pseudo-labelled dataset using CDSI from (2) and IM from (6):
+        \mathcal{D}_S = \{(\mathbf{x}, \tilde{\mathbf{y}}) | \mathbf{x} \in \mathcal{D}_U, h(f_{\theta_t}(\mathbf{x}), \mathcal{D}_A) = 1,
+                         \tilde{\mathbf{y}} = g(f_{\theta_t}(\mathbf{x}), \mathcal{D}_A)\}
+ 6:     update anchor set with ASP from (7):
+        \mathcal{D}_A = \mathcal{D}_A \bigcup (\mathbf{x}, \tilde{\mathbf{y}}), where
+        (\mathbf{x}, \tilde{\mathbf{y}}) \in \mathcal{D}_S and a(f_{\theta_t}(\mathbf{x}), \mathcal{D}_U, \mathcal{D}_A) = 1
+ 7:     t \leftarrow t + 1
+ 8:     optimise (1) using \mathcal{D}_L, \mathcal{D}_S to obtain p_{\theta_t}(\mathbf{x})
+ 9:     update labelled and unlabelled sets:
+        \mathcal{D}_L \leftarrow \mathcal{D}_L \bigcup \mathcal{D}_S, \mathcal{D}_U \leftarrow \mathcal{D}_U \setminus \mathcal{D}_S
+ 10: end while
+ 11: return p_{\theta_t}(\mathbf{x})
+ ```
65
+
66
+ $H \times W$ with $C$ colour channels, and $\mathbf{y}_i \in \{0,1\}^{|\mathcal{Y}|}$ is the label with the set of classes denoted by $\mathcal{Y} = \{1,...,|\mathcal{Y}|\}$ (note that $\mathbf{y}_i$ is a one-hot vector for multi-class problems and a binary vector in multi-label problems). A large unlabelled training set $\mathcal{D}_U = \{\mathbf{x}_i\}_{i=1}^{|\mathcal{D}_U|}$ is also provided, with $|\mathcal{D}_L| \ll |\mathcal{D}_U|$. We assume the samples from both datasets are drawn from the same (latent) distribution. Our algorithm also relies on the pseudo-labelled set $\mathcal{D}_S$ that is composed of pseudo-labelled samples classified as informative unlabelled samples, and an anchor set $\mathcal{D}_A$ that contains informative pseudo-labelled samples. The goal of ACPL is to learn a model $p_\theta: \mathcal{X} \to [0,1]^{|\mathcal{Y}|}$ parameterised by $\theta$ using the labelled, unlabelled, pseudo-labelled, and anchor datasets.
67
+
68
+ Below, in Sec. 3.1, we introduce our ACPL optimisation that produces accurate pseudo labels to unlabelled samples following an anti-curriculum strategy, where highly informative unlabelled samples are selected to be pseudo-labelled at each training stage. In Sec. 3.2, we present the information criterion of an unlabelled sample, referred to as cross distribution sample informativeness (CDSI), based on the dissimilarity between the unlabelled sample and samples in the anchor set $\mathcal{D}_A$ . The pseudo labels for the informative unlabelled samples are generated using the proposed informative mixup (IM) method (Sec. 3.3) that mixes up the results from the model $p_{\theta}(.)$ and a K nearest neighbor (KNN) classifier using the anchor set. At the end of each training stage, the anchor set is updated with the anchor set purification (ASP) method (Sec. 3.4) that only keeps
69
+
70
+ ![](_page_3_Figure_0.jpeg)
71
+
72
+ Figure 2. Anti-curriculum pseudo-labelling (ACPL) algorithm. The algorithm is divided into the following iterative steps: 1) train the model with $\mathcal{D}_S$ and $\mathcal{D}_L$ ; 2) extract the features from the anchor and unlabelled samples; 3) estimate information content of unlabelled samples with CDSI from (4) with anchor set $\mathcal{D}_A$ ; 4) partition the unlabelled samples into high, medium and low information content using (2); 5) assign a pseudo label to high information content unlabelled samples with IM from (6); 6) update $\mathcal{D}_S$ with new pseudo-labelled samples; and 7) update $\mathcal{D}_A$ with ASP in (7).
73
+
74
+ the most informative subset of pseudo-labelled samples, according to the *CDSI* criterion.
75
+
76
+ Our ACPL optimisation, described in Alg. 1 and depicted by Fig. 2, starts with a warm-up supervised training of the parameters of the model $p_{\theta}(.)$ using only the labelled set $\mathcal{D}_L$ . For the rest of the training, we use the sets of labelled and unlabelled samples, $\mathcal{D}_L$ and $\mathcal{D}_U$ , and update the pseudo-labelled set $\mathcal{D}_S$ and the anchor set $\mathcal{D}_A$ containing the informative unlabelled and pseudo-labelled samples, where $\mathcal{D}_S$ starts as an empty set and $\mathcal{D}_A$ starts with the samples in $\mathcal{D}_L$ . The optimisation iteratively minimises the following cost function:
77
+
78
+ $$\ell_{ACPL}(\theta, \mathcal{D}_L, \mathcal{D}_S) = \frac{1}{|\mathcal{D}_L|} \sum_{(\mathbf{x}_i, \mathbf{y}_i) \in \mathcal{D}_L} \ell(\mathbf{y}_i, p_{\theta}(\mathbf{x}_i)) + \frac{1}{|\mathcal{D}_S|} \sum_{(\mathbf{x}_i, \tilde{\mathbf{y}}_i) \in \mathcal{D}_S} \ell(\tilde{\mathbf{y}}_i, p_{\theta}(\mathbf{x}_i)),$$
79
+ (1)
80
+
81
+ where $\ell(.)$ denotes a classification loss (e.g., cross-entropy), $\theta$ is the model parameter, $\mathbf{y}_i$ is the ground truth, and $\tilde{\mathbf{y}}_i$ is the estimated pseudo label. After optimising (1), the labelled and unlabelled sets are updated as $\mathcal{D}_L = \mathcal{D}_L \bigcup \mathcal{D}_S$ and $\mathcal{D}_U = \mathcal{D}_U \setminus \mathcal{D}_S$ , and a new iteration of optimisation takes place.
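As a concrete illustration, the cost in (1) can be sketched in a few lines of numpy. This is our own minimal sketch, not the authors' implementation: `predict` stands in for the model $p_\theta(.)$ and all helper names are illustrative.

```python
import numpy as np

def cross_entropy(y, p, eps=1e-12):
    """Classification loss l(y, p): cross-entropy against a one-hot/binary target."""
    return -float(np.sum(y * np.log(p + eps)))

def acpl_loss(labelled, pseudo_labelled, predict):
    """Eq. (1): mean loss over D_L plus mean loss over the pseudo-labelled set D_S."""
    loss_l = np.mean([cross_entropy(y, predict(x)) for x, y in labelled]) if labelled else 0.0
    loss_s = np.mean([cross_entropy(y, predict(x)) for x, y in pseudo_labelled]) if pseudo_labelled else 0.0
    return loss_l + loss_s
```

After each optimisation round, the set updates $\mathcal{D}_L \leftarrow \mathcal{D}_L \cup \mathcal{D}_S$ and $\mathcal{D}_U \leftarrow \mathcal{D}_U \setminus \mathcal{D}_S$ amount to moving the newly pseudo-labelled samples into the labelled pool.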
82
+
83
+ The function that estimates if an unlabelled sample has high information content is defined by
84
+
85
+ $$h(f_{\theta}(\mathbf{x}), \mathcal{D}_A) = \begin{cases} 1, & p_{\gamma}(\zeta = \text{high}|\mathbf{x}, \mathcal{D}_A) > \tau, \\ 0, & \text{otherwise,} \end{cases}$$
86
+ (2)
87
+
88
+ where $\zeta \in \mathcal{Z} = \{\text{low, medium, high}\}$ represents the information content random variable, $\gamma = \{\mu_{\zeta}, \Sigma_{\zeta}, \pi_{\zeta}\}_{\zeta \in \mathcal{Z}}$ denotes the parameters of the model $p_{\gamma}(.)$ , and $\tau = \max\{p_{\gamma}(\zeta = \text{low}|\mathbf{x}, \mathcal{D}_A), p_{\gamma}(\zeta = \text{medium}|\mathbf{x}, \mathcal{D}_A)\}$ . The function $p_{\gamma}(\zeta|\mathbf{x}, \mathcal{D}_A)$ can be decomposed into $p_{\gamma}(\mathbf{x}|\zeta, \mathcal{D}_A)p_{\gamma}(\zeta|\mathcal{D}_A)/p_{\gamma}(\mathbf{x}|\mathcal{D}_A)$ , where
89
+
90
+ $$p_{\gamma}(\mathbf{x}|\zeta, \mathcal{D}_A) = n(d(f_{\theta}(\mathbf{x}), \mathcal{D}_A)|\mu_{\zeta}, \Sigma_{\zeta}), \tag{3}$$
91
+
92
+ with $n(.; \mu_{\zeta}, \Sigma_{\zeta})$ denoting a Gaussian function with mean $\mu_{\zeta}$ and covariance $\Sigma_{\zeta}$ , $p_{\gamma}(\zeta|\mathcal{D}_{A}) = \pi_{\zeta}$ representing the ownership probability of $\zeta$ (i.e., the weight of mixture $\zeta$ ), and $p_{\gamma}(\mathbf{x}|\mathcal{D}_{A})$ being a normalisation factor. The probability in (3) is computed with the density of the unlabelled sample $\mathbf{x}$ with respect to the anchor set $\mathcal{D}_{A}$ , as follows:
93
+
94
+ $$d(f_{\theta}(\mathbf{x}), \mathcal{D}_{A}) = \frac{1}{K} \sum_{\substack{(f_{\theta}(\mathbf{x}_{A}), \mathbf{y}_{A}) \in \\ \mathcal{N}(f_{\theta}(\mathbf{x}), \mathcal{D}_{A})}} \frac{f_{\theta}(\mathbf{x})^{\top} f_{\theta}(\mathbf{x}_{A})}{\|f_{\theta}(\mathbf{x})\|_{2} \|f_{\theta}(\mathbf{x}_{A})\|_{2}},$$
95
+ (4)
96
+
97
+ where $\mathcal{N}(f_{\theta}(\mathbf{x}), \mathcal{D}_A)$ represents the set of K-nearest neighbors (KNN) from the anchor set $\mathcal{D}_A$ to the input image feature $f_{\theta}(\mathbf{x})$ , with each element in the set $\mathcal{D}_A$ denoted by $(f_{\theta}(\mathbf{x}_A), \mathbf{y}_A)$ . The F-dimensional input image feature is extracted with $f_{\theta}: \mathcal{X} \to \mathbb{R}^F$ from the model $p_{\theta}(.)$ with $p_{\theta}(\mathbf{x}) = \sigma(f_{\theta}(\mathbf{x}))$ , where $\sigma(.)$ is the final activation function to produce an output in $[0,1]^{|\mathcal{Y}|}$ . The parameters $\gamma$ in (2) are estimated with the expectation-maximisation (EM) algorithm [10], every time after the anchor set is updated.
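A minimal numpy sketch of the density score in (4) and the selection rule in (2), assuming the mixture over the density score has already been fitted: the parameters passed to `is_high_information` are hypothetical stand-ins for the values estimated by EM, and the helper names are ours.

```python
import numpy as np

def density_score(f_x, anchor_feats, k=5):
    """Eq. (4): mean cosine similarity between f(x) and its K nearest anchors."""
    f_x = f_x / np.linalg.norm(f_x)
    a = anchor_feats / np.linalg.norm(anchor_feats, axis=1, keepdims=True)
    sims = a @ f_x                            # cosine similarity to every anchor
    return float(np.sort(sims)[-k:].mean())   # K nearest = K largest similarities

def is_high_information(score, mu, sigma, w):
    """Eq. (2): does the 'high' component win the posterior over {low, medium, high}?"""
    mu, sigma, w = map(np.asarray, (mu, sigma, w))
    lik = w * np.exp(-0.5 * ((score - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    post = lik / lik.sum()                    # p_gamma(zeta | x, D_A); order: low, medium, high
    return bool(post[2] > max(post[0], post[1]))
```

Note that high-information samples sit far from the anchor set, so the "high" component typically has the lowest mean density score.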
98
+
99
+ After selecting informative unlabelled samples with (2), we aim to produce reliable pseudo labels for them. We can provide two pseudo labels for each unlabelled sample $\mathbf{x} \in \mathcal{D}_U$ : the model prediction from $p_{\theta}(\mathbf{x})$ , and the K-nearest neighbor (KNN) prediction using the anchor set, as follows:
100
+
101
+ $$\tilde{\mathbf{y}}_{\text{model}}(\mathbf{x}) = p_{\theta}(\mathbf{x}),
102
+ \tilde{\mathbf{y}}_{\text{KNN}}(\mathbf{x}) = \frac{1}{K} \sum_{(f_{\theta}(\mathbf{x}_{A}), \mathbf{y}_{A}) \in \mathcal{N}(f_{\theta}(\mathbf{x}), \mathcal{D}_{A})} \mathbf{y}_{A}.$$
103
+ (5)
104
+
105
+ where $\mathbf{y}_A$ is the label of the anchor set samples. However, using either of the pseudo labels from (5) can be problematic for model training. The pseudo label $\tilde{\mathbf{y}}_{\text{model}}(\mathbf{x})$ can cause confirmation bias, and the reliability of $\tilde{\mathbf{y}}_{\text{KNN}}(\mathbf{x})$ depends on the size and representativeness of the initial labelled set to produce accurate classification. Inspired by MixUp [42], we propose the **informative mixup** method that constructs the pseudo-labelling function g(.) in (1) with a linear combination of $\tilde{\mathbf{y}}_{\text{model}}(\mathbf{x})$ and $\tilde{\mathbf{y}}_{\text{KNN}}(\mathbf{x})$ weighted by the density
106
+
107
+ ![](_page_4_Figure_0.jpeg)
108
+
109
+ Figure 3. **ASP**: 1) find KNN samples from an informative unlabelled sample to the anchor set $\mathcal{D}_A$ ; 2) find KNN samples from each anchor sample of (1) to the unlabelled set $\mathcal{D}_U$ ; and 3) calculate the number of surviving nearest neighbours. Samples with the smallest values of c(.) are selected to be inserted into $\mathcal{D}_A$ .
110
+
111
+ score from (4), as follows:
112
+
113
+ $$\tilde{\mathbf{y}} = g(f_{\theta}(\mathbf{x}), \mathcal{D}_{A}) = d(f_{\theta}(\mathbf{x}), \mathcal{D}_{A}) \times \tilde{\mathbf{y}}_{\text{model}}(\mathbf{x})
114
+ + (1 - d(f_{\theta}(\mathbf{x}), \mathcal{D}_{A})) \times \tilde{\mathbf{y}}_{\text{KNN}}(\mathbf{x}).$$
115
+ (6)
116
+
117
+ The informative mixup in (6) is different from MixUp [42] because it combines the classification results of the same image from two models instead of the classification from the same model of two images. Furthermore, our informative mixup weights the two classifiers with the density score to reflect the trade-off between $\tilde{\mathbf{y}}_{model}(\mathbf{x})$ and $\tilde{\mathbf{y}}_{KNN}(\mathbf{x})$ . Since informative samples are selected from a region of the anchor set with low feature density, the KNN prediction $\tilde{\mathbf{y}}_{\text{KNN}}(\mathbf{x})$ is less reliable than $\tilde{\mathbf{y}}_{\text{model}}(\mathbf{x})$ , so by default, we should trust the model classification more. The weighting between the two predictions in (6) reflects this observation, where $\tilde{\mathbf{y}}_{\text{model}}(\mathbf{x})$ will tend to have a larger weight given that $d(f_{\theta}(\mathbf{x}), \mathcal{D}_A)$ is usually larger than 0.5, as displayed in Fig. 2 (see the informativeness score histogram at the bottom-right corner). When the sample is located in a high-density region, we place most of the weight on the model prediction given that in such a case, the model is highly reliable. On the other hand, when the sample is in a low-density region, we balance the contributions of the model and KNN predictions more evenly, given the low reliability of the model.
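The two pseudo labels in (5) and their mixup in (6) reduce to a few lines of numpy; this is an illustrative sketch with our own helper names, not the released code.

```python
import numpy as np

def knn_pseudo_label(f_x, anchor_feats, anchor_labels, k=5):
    """Eq. (5): average the labels of the K most cosine-similar anchors."""
    f_x = f_x / np.linalg.norm(f_x)
    a = anchor_feats / np.linalg.norm(anchor_feats, axis=1, keepdims=True)
    idx = np.argsort(a @ f_x)[-k:]            # K nearest anchors by cosine similarity
    return anchor_labels[idx].mean(axis=0)

def informative_mixup(y_model, y_knn, d):
    """Eq. (6): blend model and KNN predictions by the density score d in [0, 1]."""
    return d * y_model + (1.0 - d) * y_knn
```

With `d` close to 1 (high-density region), the mixup output is dominated by the model prediction, matching the discussion above.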
118
+
119
+ After estimating the pseudo label for informative unlabelled samples, we aim to update the anchor set with informative pseudo-labelled samples to keep the density score from (4) accurate in later training stages. However, adding all pseudo-labelled samples would make the anchor set over-sized and increase hyper-parameter sensitivity. Thus, we propose
120
+
121
+ the Anchor Set Purification (ASP) module to select the least connected pseudo-labelled samples to be inserted in the anchor set, as follows (see Fig. 3):
122
+
123
+ $$a(f_{\theta}(\mathbf{x}), \mathcal{D}_{U}, \mathcal{D}_{A}) = \begin{cases} 1, & c(f_{\theta}(\mathbf{x}), \mathcal{D}_{U}, \mathcal{D}_{A}) \leq \alpha, \\ 0, & \text{otherwise,} \end{cases}$$
124
+ (7)
125
+
126
+ where the pseudo-labelled samples with $a(f_{\theta}(\mathbf{x}), \mathcal{D}_U, \mathcal{D}_A) = 1$ and $\tilde{\mathbf{y}} = g(f_{\theta}(\mathbf{x}), \mathcal{D}_A)$ from (6) are inserted into the anchor set. The information content $c(f_{\theta}(\mathbf{x}), \mathcal{D}_U, \mathcal{D}_A)$ of a pseudo-labelled sample $f_{\theta}(\mathbf{x})$ in (7) is computed in three steps (see Fig. 3): 1) find the KNN samples $\mathcal{N}(f_{\theta}(\mathbf{x}), \mathcal{D}_A)$ from $f_{\theta}(\mathbf{x})$ to the anchor set $\mathcal{D}_A$ ; 2) for each of the K elements $(\mathbf{x}_A, \mathbf{y}_A) \in \mathcal{N}(f_{\theta}(\mathbf{x}), \mathcal{D}_A)$ , find the KNN set $\mathcal{N}(f_{\theta}(\mathbf{x}_A), \mathcal{D}_U)$ from $f_{\theta}(\mathbf{x}_A)$ to the unlabelled set $\mathcal{D}_U$ ; and 3) $c(f_{\theta}(\mathbf{x}), \mathcal{D}_U, \mathcal{D}_A)$ is calculated as the number of times that the pseudo-labelled sample **x** appears in the KNN sets $\mathcal{N}(f_{\theta}(\mathbf{x}_A), \mathcal{D}_U)$ for the K elements of set $\mathcal{N}(f_{\theta}(\mathbf{x}), \mathcal{D}_A)$ . The threshold $\alpha$ in (7) is computed with $\alpha = \min_{\mathbf{x} \in \mathcal{D}_S} c(f_{\theta}(\mathbf{x}), \mathcal{D}_U, \mathcal{D}_A)$ .
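The three-step count $c(.)$ used in (7) can be sketched directly; the index-based bookkeeping and helper names below are our own illustrative choices.

```python
import numpy as np

def knn_indices(q, feats, k):
    """Indices of the K rows of feats most cosine-similar to query q."""
    qn = q / np.linalg.norm(q)
    fn = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return np.argsort(fn @ qn)[-k:]

def asp_count(x_idx, unlab_feats, anchor_feats, k=3):
    """c(.) in (7): how often sample x_idx survives the round trip
    D_U -> its K anchor neighbours -> each anchor's K unlabelled neighbours."""
    x = unlab_feats[x_idx]
    count = 0
    for a_idx in knn_indices(x, anchor_feats, k):                 # step 1
        back = knn_indices(anchor_feats[a_idx], unlab_feats, k)   # step 2
        count += int(x_idx in back)                               # step 3
    return count
```

Samples with the smallest counts (the least connected ones) are the candidates inserted into $\mathcal{D}_A$, with $\alpha$ taken as the minimum count over $\mathcal{D}_S$.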
2203.10452/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-11-18T03:48:33.136Z" agent="5.0 (X11; CrOS x86_64 14150.74.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.114 Safari/537.36" etag="zd0NXSnMvSGGTqQPJDkb" version="15.4.3" type="google"><diagram id="Bb-5RMC6s4m-gpOw8fSq" name="Page-1">7V1Zc+M2tv41rko/mEUs3B7bWyY16UnnuidJ90uKlihLt2lRV5Lbdn79BSiCAgGQhChusuCZSkugBFL4PhycDQcX6Prp9ed1uJp/SqZRfAHt6esFurmAMPAB+S9teNs1OBjuGh7Xi+muCewb7hf/RFmjnbU+L6bRpvDBbZLE28Wq2DhJlstosi20het18lL82CyJi3ddhY/ZHe19w/0kjCPpY38uptt51opt7uP/ihaP8+zWvpNdeArzD+8aNvNwmrxw90K3F+h6nSTb3aun1+sopmPHxmXX0V3J1fzB1tFyq/OFzY/ZxH2e4D/9v778jgP7K/QWl2yYf4Txc/aLs6fdvrEhIA++oi8n4Zp0dfUyX2yj+1U4oW0vBHTSNt8+xeQdIC9nyXJ7ncTJOv0ysm3XJWOFrjbbdfI94q7Qdu4KG11gObSXRRxzn727u726vSLtYbx4XJK2OJqRH301XawJ6IuENi2TNe3garPjkE+7XkXrxVO0jdb0eRfLR9JOm6fhZh5Nswembz6HW/KhZdoCbdqaDUu03kavpQMOchgJ/aOE3Gn9Rj6SfQH7zu4rGfUBwrv3L3siYZR1O+c4BH03I3BG3se87z2+5EUG8QFwu22jLeCRjX2ABWw2yXOKjQDrze3d3c2Vihw5bQpQqZjSGXwIYAt4AJMpnf7XLYCJoAwm8IHls4+n/5WxBbgFbCPv7uqX11f4v0/xn/9e/v46f/z1r0sMvHp0a+Cch5P58zr6mbbfUBBXyWJJhvb2Bxm0TTZ7BBRIyzp5Xk5TlGwZ5Ft0B+/uVCAjFwXkBbpK6Ozc0pF1aAePcbjJb/Y92k7mqnlbgqcC9VKIIQDCDLUlUD07kFFEvn08ivPbb9731y/ht7dJMvl08+3+5e7rJVRg6Mbp3FqFywKY7v8907XjarIb0Y/k4vrx4SfyZOT/5O429+oDfUnHzKby+XIWPi3it913SEfh0yq9iBDFfB7FP6LtYhJKV4qd7CY77QLA1atwbfeU9CKRy09hXLz8ko0lvY53z5lejCM6kS83TForvk+g3V5mYodenkSUoMXLC8LGZda9zT1aenG7DpebGemUdb+M8g+8JOtp8e781x/CyffHlOuXwpgTWZ+PNcTB/rXDjfx0sVnFYTbqi2W84G48i5Nwyz8QA5e8etz9i+n/nWuqVzzMLryr6MK7+Zv8+8tv5EV2OaMKYd6OLey7ghSg41Av21uYYIgtedkEw64jS0026QoTrIX5pZSSwFPNMGF8ouX0I9Ud6TJIZdFiUhyZzTZcbzNdlb4nn+fekcVuPYk+y6qHTy6Sbz5G25KLimUw05FKsYimTH8tQYIbaUcx0KxtHcXhdvGjqPWqRj+7w2e6MHBAe8iCRWGKoIBhNi67L/KKqtxXsSNP6CgbQ7GjlA/5Lz+CIi4+RYpEr4vtX3ShTNUi+u5rtmzS1zev/Js39mZJhuov/s3XfQ/07f5r6Tv2vVI67n5Y5eBmEmD3K6s+6TijojjEXlGWiTqANr/ZxCjrqGN+69B7AD1xFsw8eq1ET5wOqScWVzGVmgg8tyM1UT05+oGwhbHzimSHWDF2tmLWAtzV2MkKwJ+7+5aoSQpzN1M60RUdB6Ipxx+zC0+L6ZR+/WodEfU4fEi7stnopr/EubpwqNETPm+TDbckFFifKoHFKZI1tQAJELSyvIsaQSou6K0hgiRE/k
hfngsgPhwXHofpP9k4lCk/uKj94KPUn73CYnm5kpLrORUay1478nj1CFSqR0eoObpKzqhUHICLHsvGKjwQpD4C/ao4zngJnPPQtgFHRPLex9W6OnmTd1pH0JEQyrMtYHN/gpxzAsvnL3vtsA26PSvUZ8K2XPgC1t3eYrR9VCeB6bsTJDBPULvomYC+bRGFtg0CV97GC6yA/8O90lsjrGDofar0Ftd7C+LGHK7WHTpmaSCxFEg0fa/2AxBcUyojG/ZpQCjSCs4YDXdoNGT7WhbhZ4OG7w+Mhkpd5EO8LGi3a3tmDfcRlbbJjPzn9nVFhimakpepo2TDBf6exQ40g4HlSR0nCTvyi6jbsksl9zD2A7uvgJ2L6ZIBvvCu/qBBXcCiue99ijJwUCCD0++clLWHMnDguYGjWL36BQfKykQZOOjcwPGHnjlQ1i3YwkMzfciVWZh5SllKyzWxKBZpys5/ohdFpotzVWjk+5rkQcH9B1AUug+uK3f08er6mlsEd13ka2P+4fwzD2vV5eb3BxCh+vs7N6Wfee8kLokQejKnPQWlu4sPqrzHBRrIOXhiQpxP08ZEQliWde5YB8ASNDOEZH1clRHWnQQr18dzXfq3VbQO01xmpaJ9jho1cKDWNO0ON5Vf8qiVZ7ZYTn86RuLnKwlrIPBwjHkQPydJges85Xb5sFkduf4M8zSuFz4EM42nSf2A2k/z4dwlJy7GBIAri03Qa5IAVBm0zflBQ+HnjXAAi3qQwvDqd2FUGcXHqfat6NVaWv15U8mvjwf06vxyoYJKAhZ0Z8VK/7fnWz4zCC74XZVK/aWY9IYUfmCAMU0PksYFsMdtfWAcHeXzebt63tb4ek9RBa2miv7KKOEY9MptlXuq/S1bA+626mg7Vf1OtEm+nCzT5aRkN1rNlrBBt22pnFlSRKaGLQb/08X/GP/l7V25otOISEbsnA/tZP+2YYthSy1bzAp1fuA3j3BR88WsUIZ2R8mcQgS0k7oFLRmaSHDQae5k7KyWgauxC7TEq3LYiOj7WtiWZkVxnBLfiud3NDpINTqCa+WX5Tl5Vg6POUgo+gp+d+ZZwXIg/ke4PtlE4rZACQpSCMhuzF5du1j2YBKQZEF0XiAFaFwgybG621frIs+KPV+cCjCpCvT1CpMccdvBdPbTqQCTI+dy9woTC7NX6V2pNqyqgaIqi8ihuOJ2kFH9/IGMLLerTIEV6YPe7DovgEoh1aru8bgOp4toX66zaxSLFRRVBRSBUkXEXldAqsKSOx2Q3CCkeu88XG+yHfzMYHnezi79ckP6ANs2M2wzq7bUpK2yZ8uN2TpLttyMrbZhKwzYGus1HweV8cpGs8pw1bFac5M1t1dLjVWlpSq4ww6O4hzmR9H0pRkGHM0ARXZ4je/KTOQRwnjkRO4mLmLgPz34DfZnh33jOMdND5F4w6OT4dHQgYtjc8dV5St7DVw4qgJzJjJYPpnOKjI4tLFleNI5TxRbZdszxoxcGB3eJqfNsOXUctoMVc6HKo0tQ2wy4AztjpZQp2pIsrSC4QxJjULPJhKrOgFAqDALLcXBTD1HY4GiTqGE5njP3SkWoGVF5g8sLD+SurBSRe1AmMG6VWHzByrrqOuznIBOKXgy6PfZ2yh+SF5u9w1XaQO5ME/Wi3/IspR6S/XrHfdDweX0bkEHRjjUyXJs74KrYwwsG7dbx7j+gANvZMx2i4TErNrTwczGviWclgBsy3bRvky3q0V1QqTwjftYtq6U/4Kg+AvY8Wj7mbPrseV5dFiR+7HIZm4qFCZCzSQ4gu/uuE4tAx6yEPbR/q9Ify+wWNr/4TPAkfpyuCL2wrN2Lul16tS/W0mPg0CU9LBnST825suSvnDIAmjKemQBl+vHlycB1GL+sYKfLV0dC345dfyn3WuC9NNDNJ0S3m4+SFPtRBKUjz7SFrKVcbjixW5PB3/zAzqNZuFzvG1pTIUEb4jlDG9VTRdWKbSDivkapmjnBYtAUbTkaf
AFG922FDY6dLsbGY3D4Q7ZdLgf6qMHzBYUcsUxegAoaISdzgZLw/rcH4oK6j17pUeeNjyR9TjH054MB3ieChAB2e3kqwpwdSY7kYZds5mHqxSO8C0dVXHss4XIVYw6UK5eipVJXPLWyTbc8ksgp6ftFlDeF7lZhqsvyU5fUB21y9NGYkqRFbYFnW4YULNdp7N6zgBprI8G41ZnuZxh1jPkquKkBvIOIYeDQ64qV2og7xByRR5pv5Czng3kfUGuWUi2Q8g1ap4YyNuE3Bkccmgg7xdyReXxniHXcL8YyNuE3Bsaclc2xNPzck7Trdzaeit4tGR3Sb+uZqix/J60Q6vMGXkyDi0GiBGdDUWnPgMGk5XKQ/cMxl3O8qGXR6gRdjGQtwn54A4t5TmEBvIOIR/coaU8i9BA3iHkgzu0lMcYGsg7hHxwh5by5DwDeYeQD+7QUp6lZyDvEPLhHVqqWa4+th6c9rH1bUF2KWTs7nPbBsytVE1cNYrQoKhCEYHhUcTNkxZbTOcUklwDWUIp0xOZIOlgWN55eiI++fRERXV9ozd0w4Dhgp3GAuwL47F4cxXnMRjIO4V8cG+u4mwHA3mnkA/uzdU5J8JA3ibkg3tzHZOe2DPkg3tzlYeIGMg7hHxwb65j8i96hnxwb64nu0o+J0lcCvt2Tf79J6GdFmDI23kkarwqs8VrxBeWKIfpWJ9LhiNPtQuI7tK/cncMcz/S/viiai/RZtuLhpdXwpFIoqovATvzp3kaCt8Ii+RkBcxo9SL3gi9ihiEraqYuY0bfaBcQ2T16xfBhVsFlJAVEECjSC/qB5TlHFxBBUCwc1XtRNE/WUv/ISnp8SqbPhKrvUqyly+NONAU2L7cs4OyrdvkOrhCBpfQ+wBWBBV550LIDrqIMksWYwumMOqta4Gl4p3oofiGWV3MUhVZU5xHndSQ7GBkNJ84pB0xy5E82YOKbgMlRqvnhDBjcGvNN/KRnyHu0xua337zvr1/Cb2+TZPLp5tv9y93XS5X53X5Z9cr64/Mo/hHRnI3y2uND12YvL3veZen2YUuqc5k5D7ML7yqiiTnk3w35l6XnDFBp/fiFVZFCgxRTrrNK67mmZaRs51J2uJXURC96XkkHz0TwTfSiZ8gHz0TwTfSiZ8gHz0Twze7RniEfPBPBN7tHe4Z88EwEX/ZOXifLSbgtBd6484935yOmQuVl2aHEA1fBA+haoLOjlAINgX/UYQiji12yA5e+XnCBysMOX6qNUAJnXEccIE88gaPhYTYo8C2xL1ZPs+VTDKRndpFA9y5OMQh0DgcZHbun4WaeR5Ki/LCPC+6gD+5gm/bPsgn8UdEde0VJC0HDEDz2is411HcAPpAjrJ9+/WyW6Q6Xacy22VQs045tucrTFLpcqTVCyuOTTBF38hCZTAWBFPhujUiKVCcP7Zdy7GF+Mbfs3Sl2PaUh5Wr06clHDCxHON8LIek4Im0p6bhWwP35Us++jXuVm9CWgxC/3n/5VCo4Cf5hHEdx8rgOnwThWbg2ZgF6YDZmC7KSHhtZWB8VPktV5g1yyyl7lJSEthyZMMC3D7zrjA74ERzEhAPBwgdYGhWgOipZNMdaHBYN+/6EE9H2sJ9qIhq0jf/1GP/rIQwYyuEKbZNs2BfGI4mXQ9skG/YM+dDxcmibYg09Qz50vBzaplhDz5APHS/PVxYDeV+QDx0vz8MHBvK+IB96rwgE0EDeL+RD79yHQOUq2e0BmC5+FKBn+wnE7RNxNNuqdhvwe07ytt1ukOXDZnWx2wzSYdNPmyhcE2KW7WnIm9NfWmxt/8dPkmW6ayJ9zqoH4ts2z09P4frtw4G/QZizp1xjtcGkkk46VzgkVaVEO9sekvvbTBiz4zBmNYVGEom8dC3P9oGfRQ49UIwdgsAK7H0lBQSEMKJuhBJAKwhc5LHbMK84mxVkzAMP5zcKOotWKncplq87Zpei2aWo2qU4GXaXYhPn3Lh2KUKg4ZEf9TKUZ/N9ze
/eILVPWAY0clmy1IqhFgwHu1ZQlN5YrNOkuyoo+kJ+z4kq4CTzTTkeBo5fVIcwDJqoQ7oUHSxnSqrKD6zAa0Y8PyDWNAM67w9aNuyXfEy8nhj5ek3lz1PJRsJDKrR8rgYUKMov7BRS8YJmBK25Se881fC/jpmnLuSZekktPNiChVe2hmtwelAKe2LWoKjm6dLUs4s0hQPTVMNnPG6aCn6I/P1ALIVDstSnWc2gLKsZgcCyPe5yww1VvphIKVaL7Jy0GnmTxaDBWBMED9/pERTHnjSoTqCyFRyrotNxhinS2ZC5FyLZaJRJEFwUIfgoGbKXFCAoCAo6OboSFIPpWaBqUwNZeizs+KKCdLCeVX0TwkfMXfUEc6OljZYOEKaBy6IGnW61zOuSGqYPzHRESIjKlP22mF59k2ZMb4+Kh3kDDRW7oiLRuXx3zxIsUzHgFP6GOlfNXagWgDgyuj1z8TCPoOFiV1z0FEt+G+wT+x2ab2XHD3+mkSbVGcT72JNAy1EmV9DgI6N/Je8O2SIuQKjIUOz34FrIinDooogMitgfHYpY5ZCvQBEaFLE7PhRlw8ayrNNAqAVEHFZHIF/gHIVbRRXu7xASWcE/K0iEOjajgERHz30PlceW07tFzO6/V6GDABV0aMdp0dte71wf1xFKeeFRlvjnNfQzBOJZTGLpp5acZnkmLXtgH1U+F7QrP9+Ri01xfPSXtxVNQbueR5PvlL7ilBvM1z+aIiYgUJQEUCWnAauzsg2KQ6ANcNImIXd8wCmOcjbAicD5YITAqXJMDsvCTmN3WRYwoEIlfw0PycWekKVukaY3L6OXTvKxTcp1ecr1DvKHfNfRds2lWD8odht1knZ9+CLmCmVtAtkMZipLP/q98rhrM6HMhDqNCeUxc3Q8E+r408wA3bjBJhE1sdhrJzAT6kQnVGpmnsCE8j2NCaU6vra7CXXqW1Hb2QNUv+UHj6t8rSNUaM77PTzNXegI9hx7VZT3Pi0G+rsU4ZyDlu3CGh4etftHg62IZbTW+0B3cA9FYzENnpi8FuYrnTfcXUSz4j27PHMRWk754fSdM/6gg7HfVbqxWDOXJn4EiqPKe803RopK2VUiyKQbdSQNXK8u3zgAXPKRsFDpyoaau1BGchLIEzbWthQ7cT054TibCJ2GQ5CiOLjh+hBcD+oyjlvhevVdmnG9PS6q3GOGi/1rYaAu5ZjucG+miFV3TQnI//Wb8450CuYbAvZAQKRY6htTTuxsaJKpdrOZtNRKM2F0ycVIeUSBSUutRBGOD8XzzoH0WElXLgdSYX73mgSJFKcEnBUmSM5LHR6TU3fK9hYWCMalSIn+VMe2cEPbscaHCt2CD7Xn0+2QTgn698NQrlAqZinSWcq0zVKouyhQUh9pYLnFGpGGYMiZIRYZSSMNHLsbVomiFVJqAg3cHBFPRu18kmg4+95ppMEXJSG1AoGsf/YbadApqW8cDj1IA1gXafDdo+s91NyEEpKTHB1VNvGhItAABCZ3EWgAxrc2DqrjukBDG1SvvkkzqrdHRVNPahxUdGvjDPbxVKy+CaUiX2an32wndODRD4aKXVFRLHzQEvnEboemm6qok4k9VBkOnqinDe+1VhZIN7GHShTFCkMjQFH2qZ6Tn9t3oSAaHZVB3q+fmxW0P1dMfEXsYWhM4Fl5dktRrI89MJE2FpVK9LDS2ENDPQrYovsO9mwz5pPJlITTXnOd0a25ysreBsUqFF17fChqGM3vNIwh1mcHDlImsw0X0jB1g8ex/IolzIGHaQ5laexRvz57NQM9vcD/sRXZgetYNv/XQ+0oZOoQj4TbQuV0oc5HW0yvvksz3rfHxcNSsgwXu+KiUDndlbnIVchmUdBj67MLd3GwZXP1sn3YMxc1drAaLvZfn7019on9Dsw3rArg7mp7UAuIXJnkLpl9sRDywKlzhmtSlIcRK7TQIidp6RDlScQadmNec2T3aNU1R969OSnWiQeO4nipfs3JQyuMm6
CYVCd+DCjKDjr7NPBpAQ/ZKMOWE0iQ9IwIlBA5p2CKWPMdKBxn/UZSsOz9PCtAPNGCGxoQR8NX9k4dmWLlF2pOO/6YHJmOcfaMwqgRS6NQo8bBe6NGsD30y75UE7CnQi/ApSt1z35Mx/iOxkHtoM6P2QbRq2/SjPatUdE9cB+9ichKEdkRGF+OcQCOQqCI5XVkZ7Tjc+7ApjuBq+9CJIoLOeegcD5B5xLFVO8bBxeR7IxuhX1iv0PzTZUZVnBGd+RkRu/TyUzfcxmVmdO+kzpCI1g5WRb0WbotZRsMWywtfjC3pXveXjKxJhCgZzHLruR+HWWuHPA7K0yEmkCjwCTQ2ND9Tp2XYjEJqoIAj3NeyjKsZ+dlYLY4j0IJFsstUCUYIE4JbujUkfoVGNiN81K6q0vXa855iQXmduG8DMyW6XFQG9c5L1shevVdGtG+PSqaXPdxUNGtc3sBzxcpdGwhCdnrBQHHxKBnKppo5TioKGZGtUQ+KeFqYLrJEcTzcVv4+GD1v18XhvJEoTHlx8L36bpsgVtioYrhXZRYcRzSOTlfxEIV1PmicFL26nzBimN7zgoTX3aIDY8JPOc1UeUSG3gdxIry/hIgVG+8z95G8UPycrtvuEobyAUGyU4tHlmpkWm4mac+VlBUth1e2760Ldt1D1a3y8qY7AuXvBbe1ZYxqa0LnW/HqC14ghnlRqL6O9gRpkDDM1hdoSPUkTvPEcNv2dB36r/DivMdbp8eoumUcp3Kguww5zXTBX9Nku/PK2ni7mMLlPc9xhZsCzqCHCXT8S79uygNO0hivgWh6wpOCOTLGe3siOnC+dHCXuUWJa5OZqmOxJ0n68U/RL0eqczdy8WCVNwLydrC/Y7DnB/ZN9Hh0rmUQo1lp1bhfntQKcs4zurRBA1dKq7gS8a2ng/lYCkbFO/DAqcdS1nZBWCkbEOT3CmWr0aeHHpXSlkhEa09Kat12MkZS9lj5KL6AJ8TODodtSQXPWHfYldy0cPF+/QjF3WOQHm3JiH2PcEkJHLrBExC99jpO9SkdEXfoYuKCRNNc6B9T5g73ViIHhJ9n31YiFpF8d/D8pZNTHBRMASqZ2TbdWVrZ19uKtRXoGVe6ZHMPrLCUCWysJghXKyX2HD+ua4ldNxR3TtXKPjIojEdz8C2HKdmBrYyA8uUUo0pCYecgT4srh9EtFuCVXRAIlhxyfMF72pbOY7CffqZcEi26O6jcE0MaGhfk/lDo0biBBwwlETmEHKJwj5VWei3XvqNhM7BLR1Fr63tO0UyOUARGlemWXcWYGKO1Co5+bhOmF9F59fT3IzlMppsM3QuSv1t+VokrBCuYlhYFgE/LF53w6KxNaDGlVTcOTC8Y0kGsIYR2qxGRUVCAR5QcNrtzrkEFeAduJ8Rrl4LaUSzcFL8wjWR+gsioqD9n+hFlTx03A3zXsInyqvlw2a1exC9ptliOf2pNDOJu01tSlUUug+ue2BKlfLHXF8f+kCNR+1a9g6zlunihxojJW6aQ3IkWk2bykazqnmzCpd5m6MYe/K9wmeOGJCPlZjLj8N1cF3zOKRZgaR8k34BseXh7GduAUhXrLrZRdSiqqmxG2fFI2QPT2+/fnz4CdC4k512ZQMP7F87wYfBZ8Ul0CNyw4H+cOis01Z542hGr4whd6otxeCyqO56FiCj6jo+df9hnzmyeUXBVSkKnWl5yrqpopNgtCY/3X4gmP12G9sRqtGvt/WZOjgSH9oltH2LFutgf0J+skgvXXNe0HrFbjrev4AVtUl/e96unmUrm+Lwa/gQxWq504aprUuifMIdbTPaFoBCmA+2Qhev2Knw/WQ220TdAKo4E+hTMiWonYbXZDKJnNnsoguviQOK+Scqt4lKgHToNTls2+RYFg7m96Vx02yH3D4jwa7N/KLvWkzgGtk6AVxA9/Mh9sfOWGekgw3dvg5zfpR11N5K8fDRd4LZ7d3/fIJf/vXbMv5P9Ed8qZMuMD6q7rMMeQ2nWr
vZhzVwwJMbWLbTLrnrU2lYnabaaZBni49kGjjioZdN9SNH3GOoGe5oi/caHtx3yHskEB+ivokf6BLfHVesXSK+3VDge+JGop6Jf1jhnPdBfNcRJT4IeiY+o2E98QN1MHywJBPQFvGBZwG8Tw0rJh0D37Ec4SC79mbC9OEm/uvPf9+8Pf/87e37L7+Hsy1UqD43yVO4WF6o0sc3q2iymJGZoLoYLSfEFltvpJn0XndDQnYiGktWcDKxwnFS5awTd4w1sbmUWJ7Scu7b9H/C0uwVlmbbCny3iYTaC0oX8LmwtrWz6N5VLRLouhZwuWQ3IYPGIaPQMEk87dpzkVfMo2M9Q2T50MOdmWpKjp/Syt0TxwvagOW8R4rTRZOrvCImbwVHUNyroDjGNIDQIcWVyg9QHJfwC+npN9L0KZk+E56LlN/MwxV9uV2Tf/9JaPdXKw7lvJ2Dvm6f12zxGvETo/NknXWyJSxJ6PofpB9iRVfoLneu8jeWPat5Yk8byzouOqMwy0jjya7ItGpjJ9j89pv3/fVL+O1tkkw+3Xy7f7n7eqlKlD8sGm1fsPgz9+oDfbnLhKFR3Vn4tIjfdt/JA8bUmU2Hex7FPyKqoUlXip2IoeHCtd1T0ovLZP0UxsXLL9lQ0ut495zpxTjaEvZcbpioVXyfKpiXmU5JL2dqZeHygnB3mXVvc4+WXiTTY7mZkU5Z90RFZB94SdbT4t35rz+Ek++P6cy4FMY8dW9lY51a/Oy1w438dLFZxWE26otlvOBuPIuTcMs/kLr4UrKSaiwVQ/pqbbxm8rcxj6Bj2UKxBE+eSkBVvwW1oCKrZaujoUB0ncyJgLDMOIGlmRGIuxsYjf04g+0TPqjScw3w2gQGYhlSCSJfsQyA7hDSqP7IFIGX8C0dVXHsszXVVYw6UJrXCtNZtMnTdZu30TnFYyeKea1jswxXX5Kd8lSnWEhMUeoNHTJArrSB+52UGkFUA3krkA+HsUY1FYNxm9MaDj6tNU5SM5C3CTkaGnKdA8sM5G1CjgeHXCOtxEDeJuSKs757hhwayPuF3B0cco2om4G8Tci9oSHXOpzmPRRO2IetaEKGkDxycVTUig9asR0a6oiVTCTdfBPAarjV5psAdrTCSCJeyMOWJ9SH9YVetDdhiEEtMee7pZoK9Jkd5Z3aqqrwI7lHePnf5++Lt/+u79y/f/s7vDbRERMdqYiO3P59P9LwCJJSyuRljZVYaDs0svkxm7jPE/yn/9eX33Fgf4Xe4lJ2Q8nFSez756enMP0ZJZFoOdlLNyNLd5OnCpqislIojEfffA7pLFmmLTBduYTzh20b33DnDytCPBLW2sDmBepr0sJctyNggSrkVb/z2nXFg2KyHevrx+enKP3efRRHk22y5ujQdIvvabCEjkk7LMFCzTLAdMgeWFJS2XQEkVFXKHkIPWixulhDqft5tZZKdT+OF6tNGfH41FeaQ0sfOU2vqbSzRl0DZ0+Xg6NpGbaB5XNHPtpY3uoOFTB3tmExq+xrDPmGhvwhhBhsKutkf4zaDncdT7DDUbvb9hpUH2Rwjf5QAWG3HRQ3Helv23Mtr7hzY7dQceJMyEftOGWUaJ3DL94OHOHizVKtzOKtpMu7WbxNePW4xVufEMPp4Rr5b13LOB+JMs62Atm+73lgdM5jOj8Z55WAfrIyzmQNHSfj9Akx2FRWHC/66/2XT6U4k18dxnEUJ2SyPAljX7g25o0/RwuAow8yRYqUQF8BPDrc+0berhPqQN3r+2TE57u6Vej2/wE=</diagram></mxfile>
2203.10452/paper_text/intro_method.md ADDED
@@ -0,0 +1,13 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Program synthesis is the problem of automatically constructing source code from a specification of what that code should do [@mannaw71; @RISHABHSURVEY]. Program synthesis has long been dogged by the combinatorial search for a program satisfying the specification---while it is easy to write down input-output examples of what a program should do, actually finding such a program requires exploring the exponentially large, discrete space of code. Thus, a natural first instinct is to learn to guide these combinatorial searches, which has been successful for other discrete search spaces such as game trees and integer linear programs [@anthony2017thinking; @nair2020solving].
4
+
5
+ This work proposes and evaluates a new neural network approach for learning to search for programs, called [CrossBeam]{.smallcaps}[^3], based on several hypotheses. First, learning to search works best when it exploits the symbolic scaffolding of existing search algorithms already proven useful for the problem domain. For example, AlphaGo exploits Monte Carlo Tree Search [@ALPHAGO], while NGDS exploits top-down deductive search [@NGDS]. We engineer [CrossBeam]{.smallcaps} around *bottom-up enumerative search* [@TRANSIT], a backbone of several successful recent program synthesis algorithms [@TFCODER; @BUSTLE; @PROBE]. Bottom-up search is particularly appealing because it captures the intuition that a programmer can write small subprograms first and then combine them to get the desired solution. Essentially, a model can learn to do a soft version of a divide-and-conquer strategy for synthesis. Furthermore, bottom-up search enables execution of subprograms during search, which is much more difficult in a top-down approach where partial programs may have unsynthesized portions that impede execution.
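To make the backbone concrete, here is a toy bottom-up enumerative synthesizer over integer arithmetic: it builds larger expressions from smaller ones and deduplicates by execution results (observational equivalence). This is our illustrative sketch of the generic technique, not [CrossBeam]{.smallcaps} itself; the DSL, size accounting, and names are simplified.

```python
def bottom_up_search(inputs, outputs, constants=(1, 2), max_size=4):
    """Enumerate programs by size; prune any program whose outputs on the
    given inputs duplicate an already-seen program (observational equivalence)."""
    terminals = {tuple([c] * len(inputs)): str(c) for c in constants}
    terminals[tuple(inputs)] = "x"                  # the input variable
    by_size = {1: terminals}
    seen = set(terminals)
    ops = (("+", lambda a, b: a + b), ("*", lambda a, b: a * b))
    for size in range(2, max_size + 1):             # size = subexpression count (op cost ignored)
        by_size[size] = {}
        for ls in range(1, size):
            for lv, le in by_size[ls].items():
                for rv, re in by_size[size - ls].items():
                    for op, fn in ops:
                        sig = tuple(fn(a, b) for a, b in zip(lv, rv))
                        if sig in seen:             # prune observationally equivalent programs
                            continue
                        seen.add(sig)
                        by_size[size][sig] = f"({le} {op} {re})"
                        if sig == tuple(outputs):
                            return by_size[size][sig]
    return None
```

Learned guidance (as in [CrossBeam]{.smallcaps}) replaces the exhaustive inner loops with model-proposed combinations, cutting the effective branching factor.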
6
+
7
+ Second, the learned model should take a "hands-on" role during search, meaning that the learned model should be extensively queried to provide guidance. This allows the search to maximally exploit the learned heuristics, thus reducing the effective branching factor and the exponential blowup. This is in contrast to previous methods that run the model only once per problem [@TFCODER; @DEEPCODER], or repeatedly but at lower frequency [@PROBE].
8
+
9
+ Third, learning methods should take the global search context into account. When the model is choosing which part of the search space to explore further, its decision should depend not only on recent decisions, but on the full history of what programs have already been explored and their execution results. This is in contrast to hill-climbing or genetic programming approaches that only keep the "best" candidate programs found so far [@STOKE; @FRANGEL], or approaches that prune or downweight individual candidate programs without larger search context [@GARBAGECOLLECTOR; @BUSTLE]. Search context can be powerful because one subprogram of the solution may not seem useful initially, but its utility may become more apparent after other useful subprograms are discovered, enabling them to be combined. Additionally, the model can learn context-specific heuristics, for example, combining smaller expressions for more breadth earlier in the search, and combining larger expressions when the model predicts that it is closer to a solution.
10
+
11
+ Combining these ideas yields [CrossBeam]{.smallcaps}, a bottom-up search method for programming by example ([\[fig:overview\]](#fig:overview){reference-type="ref+label" reference="fig:overview"}). At every iteration, the algorithm maintains a search context that contains all of the programs considered so far during search, as well as their execution results. New programs are generated by combining previously-explored programs chosen by a pointer network [@vinyals2015pointer]. The model is trained on-policy using beam-aware training [@negrinho2018learning; @negrinho2020empirical], i.e., the training algorithm actually runs the [CrossBeam]{.smallcaps} search, and the loss function encourages the model to progress towards the correct program instead of other candidates proposed by the model. This avoids the potential problems of distribution shift that can arise if the search model is trained off-policy, as in previous works for learning-based synthesis [@ROBUSTFILL; @BUSTLE].
12
+
13
+ On two different domains, we find that [CrossBeam]{.smallcaps} significantly improves over state-of-the-art methods. In the string manipulation domain, [CrossBeam]{.smallcaps} solves 62% more tasks within 50K candidate expressions than [Bustle]{.smallcaps} [@BUSTLE] on the same test sets used in the [Bustle]{.smallcaps} paper. In inductive logic programming, [CrossBeam]{.smallcaps} achieves nearly 100% success rate on tasks with large enough solutions that prior state-of-the-art has a 0% success rate.
2205.15209/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2022-10-04T08:50:07.164Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/19.0.2 Chrome/102.0.5005.63 Electron/19.0.3 Safari/537.36" etag="e4di7tu8_gGiCj7KxECF" version="19.0.2" type="device"><diagram id="P0m93sUgeTYzwMhwn_JG" name="Page-1">7V1tc5s4EP41nvuUDJJ4/Zg4ba9315n2Munblw41is0UmxwmidNff8KAbaQlcQxCtuA+9IwgAvZZrXYfrZYRGc9X7xL/bvYhDmg0wkawGpGrEcaeabN/s4anvMFGXt4wTcIgb0LbhuvwNy0ajaL1PgzosnJhGsdRGt5VGyfxYkEnaaXNT5L4sXrZbRxV73rnT6nQcD3xI7H1Sxiks6IV2d72xJ80nM7KW9uWmZ+Z+5ur84blzA/ix7xp/XbkzYiMkzhO81/z1ZhGmfBKweQieFtzdvNkCV2k+/zBp/dP3+azn+OfFx+/fnfIyn3z/a8z7OTdPPjRffHKI2xHrMPL25j1yx47fSqEYf93H5cnzpZrqC7YBci4Y2hfbs+zX9P8/0wU5u0PnP8o+2VPmHddXoUrd8EpXWXts3QesQbEfi7TJP5Fx3EUJ6xlES9o9hRhFHFNfhROF+xwwiRCWfvlA03SkIF5UZyYh0GQ3ebycRam9PrOn2T3fGSay9qS+H4R0ExYxuaxdqVbCDzrk652mgppv6PxnKbJE7ukOIuJlf9Jofuk0OnHrSLZRdNsR4XKy/xCdaebjrfgsh8Fvq/B2gOw5oS/FQKqEdMOLrsQjDAJfOreTgS82Blj/d/mTDmScEtydjk5E0HOJiBnV5acETSmNJSzbXvn1l6itp3mov73/ae/31Dyxbu8uX08u/nnJo4+nyHDbWy/3GfN18gaL8Pp3D9FG5Y9YjGlopZsmsmPNc/tzKjVaID0GezzKWLfBtYGOS6ssS0bajbar091tLfisVQRx5bq0Y1Mafb94n46p2tUTwhkCSZdcFM9cVLvGHTIUW11nN/0dIAji8faUYu1hxpDPYzvlzB3KphbHcahIOYulob59eer3uONbbOKt6F6EgfgzqzvqjTCbcbGFnUDE4qNCZEZGyNcNazIFAeZLBIC9pUBoevAQRhHJmfIP820+vcfP1CNfh+H5WlD6T0eDMWeo6Wp0ruHKb0pS85QHL5Wet01HjtHpvFQpFSYH6w7GIL5cVUHruRl+0MXwUW2LsmOfkbx5FcViaqM2LVvw6g8VysxGlTWMEV57cjDAuRRtiU08tPwgVY6h4RU3OFjHK699RIOm4PD8c7L4VL2sozvkwkt/nArbaEvYZrn5+/UT6Y0FTpaw7Z58wZIQtN6b5DEJid9qwGS/Nxld4wk5BO0vMCNBjapyjQoM8DSFwiGbIYN2EQ12NDCX/e8glzXn+cVsI0EqXca7yJowV2HGMs4NkE/49r3jlnAtq3W2BS5dfqpvXuY2kujFjavAKg90V3teXpBvdrjejR6xy8QQzUaA79QgcNojV8gBvc4kqNSPPALu2bObYAkP4G5HSMpnV8YclXKyVF1+uEe+fP6DlrCR6INBi2x1Q5aAjmZGnjzglxVe/PkGf/R1N1/JLwBU+3Nkz28jtMkyeyKoC0kLgR2mxSyB1ug70yBbC7h0DIPd+8QB63FdSR5pjBrSeYekHC87JFiB8yso0RX+lNDAhhllqeyXCvIxuVg6M8M8WAACVndggH5WRo4tBhxU4khOrSQoJHhyZI0xMH1ZEIQ0VCt9nXZt2sbxO7K3tq47x0s5eBXBku905Q1n/USFEtx8GdBC/c9cZ4EMBzFuStOrwNEk8+1aEAlmsJaQrdUotN8A+NQfgCmz
9CRbUZ3ICd7qDUibSMj4ZOAlCtAne+v/b4XkxwZle3IprIVBbzCdKZ6BceB1rcHnVei81B2e73TWEi11mfcsdju2oVkz/i1uHJ98C278NwqD69WG2czO3oqjgJ/OdsMslqx517YMy9XvMqReKjC/qMme5n4xHmgL9lO6h5J28eqOUeqEY2yz0y12WfO6xJZXqkOpwVruaa1XV88x4fC6gh9dQqr22tSATtCmZvD85N4voh0O0BdiFToL5I8pXMwjC4SBrdsJKGQcah1JK3WkWqKwO13Yrcwclu0wR0Tu66mVZwJ7/EAi7gQw4CQtDFTu9NR/yV1YnBoAKmL3RaUqUvr+a1/Wo8AhmrGx6vNb9B4i7u5p0EislJ8PIjy1MDy8xSGekHX1hHr4RZ30zCBryl0a250dXm4bdXE3U/x5dV2MHpN1fA7eU3UXqUwE3Ogya4vZfR6U7aJBPEfnspj8X11G/EhQ9dFZb7qu3r797qlTN0GDT/1Nxg0FlE9aGqjd909OEsIXFRXREN7uBWnaMB4JVdvwPpd6pVfwG7kwJUkhyoHDiwaqcGgwXz5opJAf2HQOLLCfQTWYu1LvC98pdK1VMf7aJ8vVQ55Qc+bQsCXOzSBhAi+nHHe7XI1Ast7ZgNT/w1bLi98onx8gsUQhw8etgc6wUf2iUtUWuB+OpaQBTw4gcBRHBmD5RrNnhTP5EtRmYZ6dwesumj2pLYsv7NEPVsBlk4cmCPJWMCfrK9BQq9P9FWLttg2EYQui0QChb7HTH+SvIPBy9kDLD8k6bJuWOuS7vNEzOcZ2G53Li4Ixus4B80cXGRXB4dj4AZL33xfpNpRew4uCOTrNpVpBiQ2WwTSVQsktILXKsvwoaf0ArK4qRBYt+rWz4N8jl643PynNZRPg+AnBgcolEDR/LNxQ2GX1ygA4RVAtVmsrY2mVfzLLUAqDn/BD5xpEf8el5zBT5f1ZdVd+IaT4qmm1PFh1u9wpy6MxEAK7UScqsHo9bZpfj+E04QT4rrqlkkAl3R6gyOfFmk1oYS4rjrGsYMPaaNTjIbasL3cbgxLcfCDT7g42G5ZuWeDjGOxELx5bmAhyItdSTYS+3xFTWdjX+V2SxKrKfnvuhJrFME4QjFJbpSD8AG09ZkxPivsambsI3qb1hj7vKPlnb9oYdYYWeO5n86S+ci5ZO/kXAlzSH6j6s1Z8/pFdJ9ZBF5NWsIVO0ziDJ+tHrJ3mn2IA5pd8T8=</diagram></mxfile>
2205.15209/main_diagram/main_diagram.pdf ADDED
Binary file (37.2 kB). View file
 
2205.15209/paper_text/intro_method.md ADDED
@@ -0,0 +1,353 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Density estimation techniques have proven effective on a wide variety of downstream tasks such as sample generation and anomaly detection [\[3](#page-9-2)[–8\]](#page-9-3). Normalizing flows and autoregressive models perform very well at density estimation but do not easily scale to large dimensions [\[8–](#page-9-3)[10\]](#page-9-4) and have to satisfy strict design constraints to ensure efficient computation of their Jacobians and inverses. Advances in other areas of machine learning cannot be utilized as flow architectures because they are not typically seen as being invertible; this restricts the application of highly optimized architectures from many domains to density estimation, and the use of the likelihood for diagnosing these architectures.
4
+
5
+ Methods using standard convolutions and residual layers for density estimation have been developed for architectures with specific properties [\[11–](#page-9-5)[14\]](#page-10-0). These methods do not provide a recipe for converting general architectures into flows. There is no known correspondence between normalizing flows and the operations defined by linear and convolutional layers.
6
+
7
+ In this paper we show that a large proportion of machine learning models can be trained as normalizing flows. The forward pass of these models remains unchanged apart from the possible addition of uncorrelated noise. To demonstrate our formulation works we apply it to fully connected layers, convolutions and residual connections.
8
+
9
+ The contributions of this paper include:
10
+
11
+ <sup>∗</sup>Equal contribution.
12
+
13
+ - In §3.1 we show that linear layers induce densities as augmented normalizing flows [2] with the multi-scale architecture used in RealNVPs [6]. We also show how these layers can be viewed as funnels [15] to increase their expressivity. We term this process *flowification*.
14
+ - In §3.2 we argue that most ML architectures can be decomposed into simple building blocks that are easy to flowify. As an example, we derive the specifics for two dimensional convolutional layers and residual blocks.
15
+ - In §4 we flowify multi-layer perceptrons and convolutional networks and train them as normalizing flows using the likelihood. This demonstrates that models built from standard layers can be used for density estimation directly.
16
+
17
+ Given a base probability density $\{p_0(z)|z\in Z\}$ and a diffeomorphism $f:X\to Z$ , the pullback along f induces a probability density $\{p(x)|x\in X\}$ on X, where the likelihood of any $x\in X$ is given by $p(x)=p_0(f(x))|\det(J_x^f)|$ , where $J_x^f$ is the Jacobian of f evaluated at x. Thus, the log-likelihoods of the two densities are related by an additive term, which will be referred to as the *likelihood contribution* $\mathcal{V}(x,z)$ [1]. Normalizing flows [3] parametrize a family $f_\theta$ of invertible functions from X to Z. The parameters $\theta$ are then optimized to maximize the likelihood of the training data. A lot of development has gone into constructing flexible invertible functions with easy to calculate Jacobians where both the forward and inverse passes are fast to compute [6–8, 16, 17].
18
+
19
+ As the function f must be invertible, it is required to preserve the dimension of the data. This limits the expressivity of f and makes it expensive to model high-dimensional data distributions. To mitigate these issues, several works have studied dimension-altering variants of flows [2, 1, 18–21].
20
+
21
+ **Reducing the dimensionality** A simple method for altering the dimension of a flow is to take the output of an intermediate layer z' and partition it into two pieces $z' = \{z'_1, z'_2\}$. Multiscale architectures [6] match $z'_2$ directly to a base density and apply further transformations to $z'_1$. Funnels [15] generalize this by allowing $z'_2$ to depend on $z'_1$, i.e. they work with the model $p(z') = p(f'(z'_1))p(z'_2|f'(z'_1))|\det J^{f'}_{z'_1}|$ where the conditional distribution $p(z'_2|f'(z'_1))$ is trainable. It is useful to think of these factorization schemes as dimension reducing mechanisms from $\dim(z')$ to $\dim(z'_1)$.
22
+
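The two factorization schemes can be contrasted in a short numerical sketch. Everything here is illustrative: fixed Gaussians stand in for the trainable base and conditional densities, and $f'$ is taken to be the identity so the Jacobian term vanishes.

```python
import torch
from torch.distributions import Normal

torch.manual_seed(0)
z = torch.randn(4)
z1, z2 = z[:2], z[2:]  # partition the intermediate output z' = {z'_1, z'_2}

# multi-scale: z'_2 is matched directly to the base density, independently of z'_1
log_p_multiscale = Normal(0., 1.).log_prob(z1).sum() + Normal(0., 1.).log_prob(z2).sum()

# funnel: the density over z'_2 may depend on z'_1 (here a Gaussian centred on a
# function of z'_1; in the text this conditional is trainable)
log_p_funnel = Normal(0., 1.).log_prob(z1).sum() + Normal(z1.sum(), 1.).log_prob(z2).sum()
```

The multi-scale case is recovered from the funnel whenever the conditional does not actually use $z'_1$.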
23
+ **Increasing the dimensionality** Dimension increasing flow layers can improve a model's flexibility, as demonstrated by augmented normalizing flows [2]. To increase the dimensionality, x is embedded into a larger dimensional space and data independent noise u is added to the embedding to obtain a distribution with support of nonzero measure. This noise addition $x \mapsto (x, u)$ is similar to dequantization [6, 22], but is orthogonal to the distribution of x and increases its dimension from $\dim x$ to $\dim x + \dim u$. Under an augmentation the likelihood of x can be estimated using
24
+
25
+ $$\log p(x) = \log \int du \, p(x, u) \tag{1}$$
26
+
27
+ $$= \log \int du \, \frac{p(u)p(x,u)}{p(u)} \tag{2}$$
28
+
29
+ $$\geq \int du \, p(u) \log \frac{p(x,u)}{p(u)} \tag{3}$$
30
+
31
+ $$= \mathbb{E}_{u \sim p(u)} \left[ \log \frac{p(x, u)}{p(u)} \right] \tag{4}$$
32
+
33
+ $$= \mathbb{E}_{u \sim p(u)} \left[ \log p(x, u) - \log p(u) \right]. \tag{5}$$
34
+
35
+ In practice, we estimate this expectation by sampling u every time a datapoint is passed through the network. This means the integral is estimated with a single sample, as in SurVAE flows [1].
36
+
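The single-sample estimator can be sketched as follows. The densities are toy stand-ins (independent standard normals, for which the bound happens to be tight), chosen only to make the estimate checkable; none of the names come from the paper's code.

```python
import torch
from torch.distributions import Normal

torch.manual_seed(0)

def log_p_u(u):
    # augmentation noise density p(u); a standard normal here
    return Normal(0., 1.).log_prob(u).sum()

def log_p_xu(x, u):
    # stand-in for the model density p(x, u) on the augmented space
    return Normal(0., 1.).log_prob(torch.cat([x, u])).sum()

x = torch.randn(2)
u = torch.randn(3)                   # one noise sample per forward pass
elbo = log_p_xu(x, u) - log_p_u(u)   # single-sample estimate of the bound in Eq. (5)
```

Because the toy joint factorizes, `elbo` here equals $\log p(x)$ exactly for any sampled u; with a non-trivial joint it is only a lower bound in expectation.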
37
+ Suppose $\mathcal{A}$ is a network architecture with parameter space $\Theta$ . Then for any choice of $\theta \in \Theta$ the network with parameters $\theta$ realizes a function $\mathcal{A}_{\theta}: \mathbb{R}^D \to \mathbb{R}^C$ for some D and C. Similarly, a normalizing flow model $\mathcal{F}$ is a parametric distribution on some $\mathbb{R}^E$ , where any choice of $\gamma$ from its parameter space $\Gamma$ defines a density function $\mathcal{F}_{\gamma}$ on $\mathbb{R}^E$ . In this work we show that a large class of neural network architectures can be thought of as flow models by constructing a map
38
+
39
+ $$\left\{ \begin{array}{c} \text{network} \\ \text{architectures} \end{array} \right\} \rightarrow \left\{ \begin{array}{c} \text{flow} \\ \text{models} \end{array} \right\}.
40
+ \tag{6}$$
41
+
42
+ The embedding of $\mathcal{A}$ to its flowification $\mathcal{F}^{\mathcal{A}}$ results in a flow model that can realize density functions on the augmented space $\mathbb{R}^D \times \mathbb{R}^N$ for some $N \geq 0$ , which in turn induces a density on $\mathbb{R}^D$ by integrating out the component on $\mathbb{R}^N$ . The parameter space of $\mathcal{F}^{\mathcal{A}}$ factorizes as $\Theta \times \Phi$ where $\Theta$ is the parameter space of $\mathcal{A}$ and also that of the forward pass of $\mathcal{F}^{\mathcal{A}}$ , while $\Phi$ parametrizes the inverse pass of $\mathcal{F}^{\mathcal{A}}$ . In the simplest case $\Phi = \emptyset$ , i.e. flowification does not require additional parameters. It is in this sense that we claim that a large fraction of machine learning models are normalizing flows.
43
+
44
+ **Terminology** In what follows we work with conditional distributions such as p(z|x) and it will be practical to think of them as "stochastic functions" $p:x\mapsto z$ , that take an input x and produce an output $z\sim p(z|x)$ . Conversely, we think of a function $f:x\mapsto z$ as the Dirac $\delta$ -distribution $f(z|x)=\delta(z-f(x))$ . These definitions allow us to have a unified notation for deterministic and stochastic functions such that we can talk about them in the same language. Consequently, when we say "stochastic function", it will include deterministic functions as a corner case. Depending on whether f and $f^{-1}$ are deterministic or stochastic, we talk about left, right or two-sided inverses. We will be careful to be precise about this.
45
+
46
+ **Method** In the following we consider the standard building blocks of machine learning architectures and enrich them by defining (stochastic-)inverse functions and calculating the likelihood contribution of each layer. Treating each layer separately allows density estimation models to be built through composition [1]. The stochastic inverse can use the funnel approach, which increases the parameter count, or the multi-scale approach, which does not. For simplicity we will only consider conditional densities in the inverse as this is more general, though it is not required. We will refer to this process as *flowification* and the enriched layers as *flowified*; non-flowified layers will be called *standard layers*. Flowified layers can then be seen as simultaneously being
47
+
48
+ - **Flow** layers that are invertible and whose likelihood contribution is known, and which can therefore be used to train the model to maximize the likelihood.
49
+ - **Standard** layers that can be trained with losses other than the likelihood, but for which the likelihood can be calculated after this training with fixed weights in the forward direction.
50
+
51
+ Let $L_{W,b}: \mathbb{R}^n \to \mathbb{R}^m$ denote the linear layer of a neural network with parameters defined by a weight matrix $W \in \mathbb{R}^{m \times n}$ and bias $b \in \mathbb{R}^m$ . Formally, $L_{W,b}$ is defined as the affine function
52
+
53
+ $$x \mapsto L_{W,b}(x) := Wx + b \qquad x \in \mathbb{R}^n.$$
54
+ (7)
55
+
56
+ <span id="page-2-1"></span>**Definition 1.** Let $\phi(z|x): \mathbb{R}^n \to \mathbb{R}^m$ be a stochastic function. We say that $\phi$ is **linear in expectation** if there exists $W \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$ such that for any $x \in \mathbb{R}^n$ the expected value of $\phi$ coincides with the application of $L_{W,b}$
57
+
58
+ $$\mathbb{E}_{z \sim \phi(z|x)}[z] = L_{W,b}(x). \tag{8}$$
59
+
60
+ Similarly, we say that a stochastic function $\psi(z|x)$ is convolutional in expectation if the deterministic function $x \mapsto \mathbb{E}_{z \sim \psi(z|x)}[z]$ is a convolutional layer.
61
+
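A minimal illustration of Definition 1: adding zero-mean noise to an affine map yields a stochastic function that is linear in expectation. The names and the noise scale are illustrative.

```python
import torch

torch.manual_seed(0)
W = torch.randn(3, 5)
b = torch.randn(3)

def phi(x):
    # stochastic function: affine map plus zero-mean Gaussian noise
    return x @ W.T + b + 0.1 * torch.randn(3)

x = torch.randn(5)
samples = torch.stack([phi(x) for _ in range(5000)])
# the empirical mean approaches L_{W,b}(x) = Wx + b as more samples are drawn
err = (samples.mean(0) - (x @ W.T + b)).abs().max()
```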
62
+ In this section we *flowify* linear layers, by which we mean we construct a pair of stochastic functions, a forward $\mathcal{L}(z|x):\mathbb{R}^n\to\mathbb{R}^m$ and an inverse $\mathcal{L}^{-1}(x|z):\mathbb{R}^m\to\mathbb{R}^n$ such that the forward is linear in expectation and is compatible with the inverse in a way that will be made precise in the following paragraphs.
63
+
64
+ To build a flowified linear layer, the first step is to parametrize the weight matrices by the singular value decomposition (SVD)[23]. This involves writing $W \in \mathbb{R}^{m \times n}$ as a product $W = V \Sigma U$ , where $U \in \mathbb{R}^{n \times n}$ is orthogonal, $\Sigma \in \mathbb{R}^{m \times n}$ is diagonal and $V \in \mathbb{R}^{m \times m}$ is orthogonal. This parametrization is particularly useful for our purposes because the orthogonal transformations are easily invertible and do not contribute to the likelihood, and the non-invertible piece of the transformation is localized to $\Sigma$ .
65
+
66
+ **Parametrizing** U and V We generate elements of the special orthogonal group SO(d) by applying the matrix-exponential to elements of the Lie algebra $\mathfrak{so}(d)$ of skew-symmetric matrices. We parametrize $\mathfrak{so}(d)$ and perform gradient descent there. As the Lie-algebra is a vector space, this is significantly easier than working directly with SO(d). See Appendix G for details.
67
+
68
+ **Parametrizing** $\Sigma$ The matrix $\Sigma$ is of shape $m \times n$ containing the singular values on the main diagonal. We ensure that $\Sigma$ has maximal rank by parametrizing the logarithm of the main diagonal; this way all singular values are greater than 0.
69
+
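The parametrization of $\Sigma$ can be sketched as follows; the shapes are illustrative.

```python
import torch

torch.manual_seed(0)
m, n = 3, 5  # illustrative shapes
log_sigma = torch.randn(min(m, n))  # parametrize the log of the main diagonal
idx = torch.arange(min(m, n))
Sigma = torch.zeros(m, n)
Sigma[idx, idx] = log_sigma.exp()   # all singular values are strictly positive,
                                    # so Sigma has maximal rank min(m, n)
```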
70
+ It is important to note that this parametrization is not fully general. In particular, it does not include matrices of non-maximal rank, nor orientation-reversing ones, where either $U \in O(n) \setminus SO(n)$ or $V \in O(m) \setminus SO(m)$. This implementation detail does not change the general perspective we provide of linear layers as normalizing flows, but simplifies the implementation of flowified layers.
71
+
72
+ **Definition 2.** We call the tuple $(\mathcal{L}(z|x), \mathcal{L}^{-1}(x|z))$ a dimension decreasing flowified linear layer if $\mathcal{L}$ is dimension decreasing, linear in expectation and the following conditions are satisfied
73
+
74
+ - <span id="page-3-1"></span>(i) The forward is deterministic, given by $\mathcal{L}(z|x) = L_{W,b}(x)$ ,
75
+ - <span id="page-3-2"></span>(ii) The layer is right-invertible, $\mathcal{L} \circ \mathcal{L}^{-1} = id_z$ ,
76
+ - <span id="page-3-3"></span>(iii) The likelihood contribution of $\mathcal{L}$ can be exactly computed.
77
+
78
+ To flowify dimension decreasing linear layers, we define the forward function $\mathcal{L}$ as a standard linear layer with parameters W and b,
79
+
80
+ <span id="page-3-0"></span>
81
+ $$\mathcal{L}(z|x) = \delta(z - L_{W,b}(x)). \tag{9}$$
82
+
83
+ Since W is parametrized by the SVD, $W=V\Sigma U$, we need to invert V, U and $\Sigma$ separately. As V and U are rotations, they are invertible in the usual sense. To construct a stochastic inverse to $\Sigma$, we think of it as a funnel [15] and use a neural network $p_{inv}((Ux)_{(m:)}|\Sigma Ux)$ that models the $n-m$ dropped coordinates as a function of the $m$ non-dropped coordinates. Again, this is not required to calculate the likelihood under the model (even a fixed distribution could be used), but introducing some trainable parameters significantly improves the performance of the flow that is defined by the layer. We use $\Sigma^{-1}$ to denote this stochastic inverse to $\Sigma$. The stochastic inverse function $\mathcal{L}^{-1}$ can then be written as
84
+
85
+ $$\mathcal{L}^{-1}(x|z) = U^T \circ \Sigma^{-1} \circ V^T(z-b). \tag{10}$$
86
+
87
+ Since the rotations don't contribute to the log-likelihood, the likelihood of data under a dimension decreasing flowified linear layer is
88
+
89
+ $$\log p(x) = \log p_{inv}((Ux)_{(m:)}|\Sigma Ux) + \log \Sigma + \log p(z), \tag{11}$$
90
+
91
+ where $\log \Sigma$ denotes the sum of the logarithms of the diagonal elements of $\Sigma$ .
92
+
93
+ <span id="page-3-4"></span>**Theorem 3.** The above choices for $\mathcal{L}$ and $\mathcal{L}^{-1}$ define a dimension decreasing flowified linear layer.
94
+
95
+ Sketch of proof. The definition of the forward pass (9) makes the forward pass linear in expectation and satisfies (i) by definition. Unpacking the definitions and decomposing W into its SVD form yields right-invertibility (ii) which in turn implies that the likelihood contribution can be exactly computed (iii).
96
+
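As a sanity check, the dimension decreasing construction can be sketched in a few lines of PyTorch. The conditioner $p_{inv}$ is replaced by a fixed standard Gaussian over the dropped coordinates (a simplification the text explicitly permits), and all names are illustrative rather than taken from the paper's code.

```python
import torch

torch.manual_seed(0)
n, m = 4, 2  # dimension decreasing: R^4 -> R^2

def random_rotation(d):
    A = torch.randn(d, d)
    return torch.matrix_exp(A - A.T)  # exp of a skew-symmetric matrix lies in SO(d)

U, V = random_rotation(n), random_rotation(m)
log_s = torch.randn(m)  # logarithms of the singular values
b = torch.randn(m)

def forward(x):
    # deterministic forward pass: z = V Sigma U x + b
    return V @ (log_s.exp() * (U @ x)[:m]) + b

def inverse(z):
    # stochastic inverse: invert V and Sigma, sample the dropped coordinates, invert U
    kept = (V.T @ (z - b)) / log_s.exp()
    dropped = torch.randn(n - m)  # stand-in for p_inv; a conditioner network in the text
    return U.T @ torch.cat([kept, dropped])

z = torch.randn(m)
assert torch.allclose(forward(inverse(z)), z, atol=1e-5)  # right-invertible: L ∘ L^{-1} = id
```

Left-invertibility fails here by design: the sampled dropped coordinates are forgotten by the forward pass.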
97
+ When the inverse density is not made to be conditional the above ideas can be visualized as a standard multi-scale flow architecture [6] as shown in Fig. 1.
98
+
99
+ <span id="page-4-0"></span>![](_page_4_Figure_0.jpeg)
100
+
101
+ Figure 1: A multi-scale flow with a base density $p(z, z_2')$ on the left. A dimension reducing linear layer with activation $\sigma$ as a multi-scale flow with a base density $p(z, z_2')$ on the right.
102
+
103
+ **Definition 4.** We define the Moore-Penrose pseudoinverse $L_{W,b}^+$ of a linear layer $L_{W,b}: \mathbb{R}^n \to \mathbb{R}^m$ as the affine transformation $\mathbb{R}^m \to \mathbb{R}^n$
104
+
105
+ $$z \mapsto L_{W,b}^+(z) := W^+(z-b) \qquad z \in \mathbb{R}^m$$
106
+ (12)
107
+
108
+ where $W^+$ denotes the Moore-Penrose pseudoinverse of the matrix W.
109
+
110
+ **Definition 5.** We call the tuple $(\mathcal{L}(z|x), \mathcal{L}^{-1}(x|z))$ a dimension increasing flowified linear layer if $\mathcal{L}$ is dimension increasing, linear in expectation and the following conditions are satisfied
111
+
112
+ - <span id="page-4-3"></span>(iv) The inverse $\mathcal{L}^{-1}$ is deterministic, given by $\mathcal{L}^{-1}(x|z) = L_{W,b}^+(z)$ ,
113
+ - <span id="page-4-4"></span>(v) The layer is left-invertible, $\mathcal{L}^{-1} \circ \mathcal{L} = id_x$ ,
114
+ - <span id="page-4-2"></span>(vi) The likelihood contribution of $\mathcal{L}$ can be bounded from below.
115
+
116
+ To construct dimension increasing flowified linear layers, we rely again on the SVD parametrization, where the only nontrivial component is $\Sigma$. In this case $\Sigma$ is a dimension increasing operation and we think of it as an augmentation step [2] composed with diagonal scaling. To augment, we sample $m-n$ coordinates from a distribution $p(u)$ with zero mean and then apply a scaling in $m$ dimensions. The likelihood contribution is then given by
117
+
118
+ $$\log p(x) \ge \mathbb{E}_{u \sim p(u)} \left[ \log p(z) - \log p(u) \right] + \log \Sigma, \tag{13}$$
119
+
120
+ $$\log p(x) = \log \Sigma + \log p(z), \tag{14}$$
121
+
122
+ where $\log \Sigma$ denotes the sum of the logarithms of the m scaling parameters. The inverse function $\mathcal{L}^{-1}$ is the composition of the inverse rotations, the inverse scaling and the dropping of the sampled coordinates. This sequence of steps is visualized in Fig. 2.
123
+
124
+ <span id="page-4-1"></span>![](_page_4_Figure_14.jpeg)
125
+
126
+ Figure 2: A dimension increasing flowified linear layer.
127
+
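A corresponding sketch for the dimension increasing case, again with illustrative names and a standard Gaussian standing in for $p(u)$. The left inverse is deterministic and holds regardless of the sampled noise.

```python
import torch

torch.manual_seed(0)
n, m = 2, 4  # dimension increasing: R^2 -> R^4

def random_rotation(d):
    A = torch.randn(d, d)
    return torch.matrix_exp(A - A.T)  # exp of a skew-symmetric matrix lies in SO(d)

U, V = random_rotation(n), random_rotation(m)
log_s = torch.randn(m)  # logarithms of the m scaling parameters
b = torch.randn(m)

def forward(x):
    u = torch.randn(m - n)  # zero-mean augmentation noise, so E[z] = V Sigma U x + b
    return V @ (log_s.exp() * torch.cat([U @ x, u])) + b

def inverse(z):
    # deterministic left inverse: undo V and the scaling, drop the sampled coordinates
    return U.T @ ((V.T @ (z - b)) / log_s.exp())[:n]

x = torch.randn(n)
assert torch.allclose(inverse(forward(x)), x, atol=1e-5)  # left-invertible: L^{-1} ∘ L = id
```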
128
+ <span id="page-4-5"></span>**Theorem 6.** The above choices for $\mathcal{L}$ and $\mathcal{L}^{-1}$ define a dimension increasing flowified linear layer.
129
+
130
+ Sketch of proof. The augmentation step [2] results in a lower bound on the likelihood contribution, implying (vi). Since $\mathbb{E}_{u \sim p(u)}[u] = 0$, the augmentation does not influence the expected value of z, i.e. the forward pass is linear in expectation. Simple calculations using the SVD then imply both (iv) and (v).
131
+
132
+ Dimension preserving layers are a corner case of both of the above scenarios, where padding and sampling are not needed in either direction and the layer is non-stochastically invertible. All this implies that (i), (ii), (iii), (iv) and (v) are satisfied.
133
+
134
+ Convolutions can be seen as a dimension increasing coordinate repetition followed by a matrix multiplication with weight sharing. In the previous section we derived the specifics of matrix multiplication. We begin this section with the details of coordinate repetition after which we put the pieces together to build a flowified convolutional layer. In Appendix H we describe an alternative approach relying on the Fourier transform.
135
+
136
+ **Repeating coordinates** In this paragraph we focus on the N-fold repetition of a single scalar coordinate x. This significantly simplifies notation, but the technique generalizes in an obvious way. Intuitively, the idea is to expand the one dimensional volume to N dimensions by first embedding and then increasing the volume of the embedding such that the volume in the N-1 directions complementary to the embedding can be controlled.
137
+
138
+ We have seen in §2 that the operation
139
+
140
+ $$x \mapsto (x, \underline{\mathbf{u}}) \qquad \underline{\mathbf{u}} = (u_1, ..., u_{N-1}),$$
141
+ (15)
142
+
143
+ has likelihood contribution $\mathbb{E}_{\underline{\mathbf{u}}}[-\log p(\underline{\mathbf{u}})]$ . Now, we can apply any N-dimensional rotation $R_N$ which maps $(1,\underline{\mathbf{0}})$ to $\frac{1}{\sqrt{N}}(1,\underline{\mathbf{1}})$ to obtain<sup>2</sup>
144
+
145
+ $$R_N(x, \underline{\mathbf{u}}) = R_N(x, \underline{\mathbf{0}}) + R_N(0, \underline{\mathbf{u}}) = \frac{1}{\sqrt{N}}(x, \underline{\mathbf{x}}) + R_N(0, \underline{\mathbf{u}}).$$
146
+ (16)
147
+
148
+ Note that this rotation does not contribute to the likelihood. Finally, we apply a diagonal scaling in N dimensions with factor $\sqrt{N}$ such that
149
+
150
+ <span id="page-5-4"></span><span id="page-5-2"></span>
151
+ $$x \mapsto (x, \underline{\mathbf{x}}) + R_N(0, \sqrt{N}\underline{\mathbf{u}}),$$
152
+ (17)
153
+
154
+ where the final scaling has likelihood contribution $N \log(\sqrt{N}) = (N/2) \log N$ and x is now repeated N times. The overall contribution to the likelihood of the embedding (17) is
155
+
156
+ $$\mathcal{V}(x,z) = \mathbb{E}_{\mathbf{u}}[-\log p(\underline{\mathbf{u}})] + (N/2)\log N. \tag{18}$$
157
+
158
+ <span id="page-5-3"></span>By construction, the padding distribution $R_N(0, \sqrt{N}\underline{\mathbf{u}})$ is orthogonal to the diagonal embedding $x \mapsto (x, ..., x)$ of the data distribution. The inverse function is given by the projection to the diagonal embedding,
159
+
160
+ $$(z_1, ..., z_N) \mapsto \frac{1}{N} \sum_i z_i.$$
161
+ (19)
162
+
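The repetition scheme of Eqs. (15)-(17) and its mean-projection inverse (19) can be checked numerically. In this sketch the rotation $R_N$ is replaced by a Householder reflection sending $(1, \underline{\mathbf{0}})$ to $\frac{1}{\sqrt{N}}(1, \underline{\mathbf{1}})$; a reflection is orthogonal, which is all the construction needs, since $|\det| = 1$ leaves the likelihood contribution unchanged.

```python
import torch

torch.manual_seed(0)
N = 4

# Orthogonal map sending (1, 0, ..., 0) to (1/sqrt(N))(1, ..., 1)
a = torch.zeros(N); a[0] = 1.0
diag = torch.ones(N) / N ** 0.5
v = a - diag
R = torch.eye(N) - 2 * torch.outer(v, v) / (v @ v)  # Householder reflection

x = torch.tensor(1.7)
u = torch.randn(N - 1)
z = N ** 0.5 * (R @ torch.cat([x.view(1), u]))  # Eq. (17): (x, ..., x) + R(0, sqrt(N) u)

# the padding directions sum to zero, so the mean projection of Eq. (19) recovers x exactly
recovered = z.mean()
```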
163
+ **General architectures** Now that the likelihood contribution of arbitrary linear layers and coordinate repetition has been computed, it is possible to flowify more general architectures such as convolutions and residual connections. Note, however, that an architecture performing well on its original task does not guarantee that its flowified version will perform well at density estimation.
164
+
165
+ **Decomposing convolutional layers** To flowify convolutional layers, we decompose them into a sequence of building blocks that are easy to flowify separately. A standard convolutional layer performs the following sequence of steps:
166
+
167
+ - 1. **Padding** of the input image with zeros to increase its size.
168
+ - 2. **Unfolding** of the padded image into tiles. This step replicates the data according to the kernel size and stride.
169
+ - 3. **Applying a linear layer.** Finally, we apply the same linear layer to each of the tiles produced in the previous step. The outputs then correspond to the pixels of the output image.
170
+
171
+ <span id="page-5-1"></span> $<sup>{}^2\</sup>underline{\mathbf{0}}$ and $\underline{\mathbf{1}}$ denote the (N-1)-dimensional vectors (0,...0) and (1,...,1), respectively. Similarly $\underline{\mathbf{x}}$ denotes the (N-1)-dimensional vector (x,...,x).
172
+
173
+ **Flowification** Steps 1 and 3 are already flowified in [§3.1,](#page-2-0) i.e. their likelihood contribution is computed and an inverse is constructed. We denote their flowifications by Pad and Linear, respectively. Step 2 fits into the discussion of repeating coordinates in the previous paragraph, where both its inverse [\(19\)](#page-5-3) and its likelihood contribution [\(18\)](#page-5-4) are given. We will denote this operation by Unfold.
174
+
175
+ **Definition 7.** *Let* Linear, Unfold *and* Pad *be as above and define* $\mathcal{C}$ *and* $\mathcal{C}^{-1}$ *to be the following stochastic functions*
176
+
177
+ $$\mathcal{C} = \operatorname{Linear} \circ \operatorname{Unfold} \circ \operatorname{Pad}$$
178
+ (20)
179
+
180
+ $$\mathcal{C}^{-1} = \operatorname{Pad}^{-1} \circ \operatorname{Unfold}^{-1} \circ \operatorname{Linear}^{-1}$$
181
+ (21)
182
+
183
+ *and call the resulting layer* $(\mathcal{C}, \mathcal{C}^{-1})$ *a flowified convolutional layer.*
184
+
185
+ ![](_page_6_Figure_5.jpeg)
186
+
187
+ <span id="page-6-1"></span>Figure 3: A flowified 1D-convolution with kernel size 2 applied to a vector with 3 features. The $x_2$ component appears in the operation twice, and so it is first duplicated so that the kernel can be applied to non-overlapping tiles.
188
+
189
+ A flowified convolutional layer $(\mathcal{C}, \mathcal{C}^{-1})$ is then *convolutional in expectation* (Definition [1\)](#page-2-1), i.e. there exists a convolutional layer $C_{\theta}$ with parameters $\theta$ such that
190
+
191
+ $$\mathbb{E}_{z \sim \mathcal{C}(z|x)}[z] = C_{\theta}(x). \tag{22}$$
192
+
193
+ The flowification of a convolution without padding can be seen in Fig. [3.](#page-6-1) The Unfold operation is implemented as coordinate duplication and Linear is a flowified linear layer parameterized by the SVD.
194
+
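The Pad/Unfold/Linear decomposition can be verified with standard PyTorch (this uses `torch.nn.functional.unfold` directly, not the paper's flowified layers): a convolution coincides with one shared linear layer applied to the unfolded tiles.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 3, 8, 8)   # one 3-channel 8x8 image
W = torch.randn(5, 3, 3, 3)   # 5 output channels, 3x3 kernel

z1 = F.conv2d(x, W, stride=2)                 # standard convolution, no padding

tiles = F.unfold(x, kernel_size=3, stride=2)  # (1, 3*3*3, #tiles): the Unfold step
z2 = (W.view(5, -1) @ tiles).view_as(z1)      # the shared Linear step, one tile at a time

assert torch.allclose(z1, z2, atol=1e-4)      # the two computations agree
```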
195
+ **Activation functions** Functions that are surjective onto $\mathbb{R}$ and invertible fit well in our framework as they can be used out of the box without any modifications. In our experiments we use LeakyReLU and rational-quadratic splines [\[7\]](#page-9-7) as activation functions. Non-invertible activations can also be used when equipped with additional densities [\[1\]](#page-9-0).
196
+
197
+ **Residual connections** Residual connections can be seen as coordinate duplication followed by two separate computational graphs $\{f_1, f_2\}$ with the outputs recombined in a sum. The sum can be inverted by defining a density over one of the summands $p(f_1(x+u)\,|\,f_1(x+u), f_2(x-u))$ and sampling from this density, which will also define the likelihood contribution. Then, if the likelihood contribution can be calculated for each individual computational graph, the likelihood of the total operation can be calculated [\[1\]](#page-9-0).
198
+
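The mechanism for inverting the sum can be sketched as follows. The branch functions and the density over the sampled summand are illustrative stand-ins for the trainable conditional described above.

```python
import torch

torch.manual_seed(0)

def f1(x): return 2.0 * x   # two illustrative branch functions
def f2(x): return x ** 2

x = torch.randn(3)
s = f1(x) + f2(x)           # forward: duplicate x, run both branches, sum the outputs

# stochastic inverse of the sum: place a density over one summand and sample it;
# the other summand is then determined by subtraction
a = s / 2 + 0.1 * torch.randn(3)  # toy density for the first summand, given s
b = s - a                         # the second summand is fixed
assert torch.allclose(a + b, s)   # the sum is exactly recovered (right inverse)
```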
199
+ # Method
200
+
201
+ ```
+ import torch.nn as nn
+ import flowification as ffn
+
+ model = nn.Sequential(
+     ffn.Flatten(),
+     ffn.Linear(784, 512), RqSpline(),
+     ffn.Linear(512, 256), RqSpline(),
+     ffn.Linear(256, 128), RqSpline(),
+     ffn.Linear(128, 64), RqSpline(),
+     ffn.Linear(64, 32), RqSpline(),
+     ffn.Linear(32, 8)
+ )
+ ```
214
+
215
+ Figure 8: FMLP Architecture for MNIST.
216
+
217
+ ```
+ import torch.nn as nn
+ import flowification as ffn
+
+ model = nn.Sequential(
+     ffn.Flatten(),
+     ffn.Linear(3072, 1024), RqSpline(),
+     ffn.Linear(1024, 512), RqSpline(),
+     ffn.Linear(512, 512), RqSpline(),
+     ffn.Linear(512, 256), RqSpline(),
+     ffn.Linear(256, 128)
+ )
+ ```
229
+
230
+ Figure 9: FMLP Architecture for CIFAR-10.
231
+
232
+ ```
+ import torch.nn as nn
+ import flowification as ffn
+
+ model = nn.Sequential(
+     ffn.Conv2d(3, 27, kernel_size=3, stride=2), RqSpline(),    # 16
+     ffn.Conv2d(27, 64, kernel_size=3, stride=2), RqSpline(),   # 8
+     ffn.Conv2d(64, 100, kernel_size=3, stride=2), RqSpline(),  # 4
+     ffn.Conv2d(100, 112, kernel_size=3, stride=2), RqSpline(), # 2
+     ffn.Conv2d(112, 128, kernel_size=2, stride=2), RqSpline(), # 1
+     ffn.Flatten(),
+     ffn.Linear(128, 128), RqSpline(),
+     ffn.Linear(128, 128), RqSpline(),
+     ffn.Linear(128, 128), RqSpline(),
+     ffn.Linear(128, 128), RqSpline(),
+     ffn.Linear(128, 128), RqSpline(),
+     ffn.Linear(128, 128), RqSpline(),
+     ffn.Linear(128, 128), RqSpline(),
+     ffn.Linear(128, 128), RqSpline(),
+     ffn.Linear(128, 128), RqSpline(),
+     ffn.Linear(128, 128), RqSpline()
+ )
+ ```
254
+
255
+ Figure 10: FCONV1 Architecture for CIFAR-10.
256
+
257
+ ```
+ import torch.nn as nn
+ import flowification as ffn
+
+ model = nn.Sequential(
+     ffn.Conv2d(1, 16, kernel_size=3, stride=2), RqSpline(),   # 14
+     ffn.Conv2d(16, 24, kernel_size=2, stride=2), RqSpline(),  # 7
+     ffn.Conv2d(24, 32, kernel_size=3, stride=2), RqSpline(),  # 3
+     ffn.Conv2d(32, 48, kernel_size=2, stride=1), RqSpline(),  # 2
+     ffn.Conv2d(48, 64, kernel_size=2, stride=1), RqSpline(),  # 1
+     ffn.Flatten(),
+     ffn.Linear(64, 64), RqSpline(),
+     ffn.Linear(64, 64), RqSpline(),
+     ffn.Linear(64, 64), RqSpline(),
+     ffn.Linear(64, 64), RqSpline(),
+     ffn.Linear(64, 64), RqSpline(),
+     ffn.Linear(64, 64), RqSpline(),
+     ffn.Linear(64, 32), RqSpline(),
+     ffn.Linear(32, 32), RqSpline(),
+     ffn.Linear(32, 32), RqSpline(),
+     ffn.Linear(32, 32), RqSpline(),
+     ffn.Linear(32, 32), RqSpline(),
+     ffn.Linear(32, 32), RqSpline(),
+     ffn.Linear(32, 32), RqSpline(),
+     ffn.Linear(32, 8), RqSpline()
+ )
+ ```
284
+
285
+ Figure 11: FCONV1 Architecture for MNIST.
286
+
287
+ The Lie algebra $\mathfrak{so}(d)$ of the rotation group $SO(d)$ is the space of $d \times d$ skew-symmetric matrices
+
+ $$\mathfrak{so}(d) = \{X \in M_d(\mathbb{R}) \mid X = -X^T\}.$$
293
+
294
+ Moreover, since $SO(d)$ is connected and compact, the (matrix-)exponential map $\mathfrak{so}(d) \to SO(d)$ is surjective [\[41,](#page-11-7) Corollary 11.10], i.e. every rotation $U \in SO(d)$ can be written as the exponential of some skew-symmetric $X \in \mathfrak{so}(d)$.
295
+
296
+ Motivated by this result, we propose to parametrize the Lie algebra $\mathfrak{so}(d)$, and use the exponential map to obtain rotations. Since the exponential map is differentiable, we can propagate gradients all the way back to the Lie algebra and do gradient-based optimization there.
297
+
298
+ Note that this significantly eases the optimization process: the Lie group of orthogonal transformations has non-trivial geometry, which makes gradient descent on it cumbersome, whereas the Lie algebra is a vector space, where gradient descent is straightforward.
299
+
300
+ Since PyTorch [\[37\]](#page-11-3) has built-in support for the matrix exponential, random orthogonal transformations can be generated with just a few lines of Python code:
301
+
302
+ ```
+ import torch
+
+ d = 5                    # size of matrix
+ x = torch.randn(d, d)    # random (d, d) matrix
+ S = x - x.T              # S is skew-symmetric
+ U = torch.matrix_exp(S)  # U is a rotation
+ ```
308
+
309
+ Figure 12: Generating a random rotation in PyTorch
310
+
311
+ **Comparison with Householder transformations** The matrix exponential has the advantage of generating the whole rotation group, while many Householder transformations need to be composed to reach the same level of expressivity. This, of course, comes with an increased ($O(d^2)$) computational cost. We did not explore Householder transformations, but we expect that for linear layers with large input or output dimensionality the matrix exponential will simply become too expensive, and it will become necessary to rely on Householder transformations.
312
+
313
+ An alternative way of flowifying convolutional layers is through the convolution theorem, which states that the Fourier transform diagonalizes convolutions. In this section we sketch the idea and discuss its issues.
314
+
315
+ Fig. [13](#page-19-0) displays the PyTorch code for performing convolutions through the convolution theorem. Line 12 of the code shows that in the Fourier basis, we can perform convolution as follows: at each frequency we form the vector of dimension (# of in\_channels) and apply a weight matrix of size (# of in\_channels) × (# of out\_channels) to it. If this weight matrix is parametrized by the SVD, we can guarantee its invertibility. Since the FFT is also invertible, this construction should be a flowification of the convolutional layer if we can also solve the following two issues:
316
+
317
+ If the weight matrix is parametrized in the frequency domain,
318
+
319
+ - (1) its inverse transform might not be real.
320
+ - (2) its inverse transform might not be of the appropriate size (i.e. nonzero outside of the first (kernel\_size)-many pixels).
321
+
322
+ The first of these is easy to solve with the symmetric parametrization induced by
323
+
324
+ $$k \text{ is real} \;\Leftrightarrow\; FFT(k)_i^* = FFT(k)_{N-i}$$
326
+
327
+ where N is the length of the signal. To deal with the second condition, an auxiliary loss term is introduced that minimizes the terms that should be 0. Although the FFT-based idea seems promising, this approach turns out to be slower (probably due to the FFT), performs worse, and is more cumbersome than the flowification presented in [§3.2.](#page-5-0) Fig. [14](#page-19-1) shows some samples from a Fourier-flowified convolutional network trained on MNIST.
328
+
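The symmetry condition can be checked directly for a random real kernel:

```python
import torch

torch.manual_seed(0)
k = torch.randn(8)   # a real kernel
K = torch.fft.fft(k)
N = k.numel()
# a signal is real iff its FFT is conjugate-symmetric: K[i]* = K[(N - i) mod N]
for i in range(N):
    assert torch.allclose(K[i].conj(), K[(N - i) % N], atol=1e-4)
```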
329
+ ```
+ 1  in_channels, out_channels = 4, 3
+ 2  signal_length, kernel_size = 32, 5
+ 3  batch_size = 24
+ 4
+ 5  K = torch.randn(out_channels, in_channels, kernel_size)
+ 6  x = torch.randn(batch_size, in_channels, signal_length)
+ 7  z1 = torch.nn.functional.conv1d(input=x, weight=K)
+ 8
+ 9  x_ = torch.fft.fft(x, norm='ortho')
+ 10 # pad kernel to match the length of x
+ 11 K_ = torch.fft.fft(K.flip(-1), n=x.size()[-1], norm='ortho')
+ 12 z_ = torch.einsum('bix,jix->bjx', (x_, K_))
+ 13 z2 = torch.fft.ifft(z_, norm='forward')
+ 14 z2 = z2[:, :, (kernel_size - 1):]
+ 15
+ 16 print((z1 - z2).abs().max().item())
+ 17 # tensor(2.8610e-06)
+ ```
348
+
349
+ Figure 13: 1D convolutions in PyTorch
350
+
351
+ <span id="page-19-1"></span>![](_page_19_Figure_2.jpeg)
352
+
353
+ Figure 14: Samples from a Fourier-flowified convolutional network.
2205.15307/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2021-12-27T09:38:45.896Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/16.0.2 Chrome/96.0.4664.55 Electron/16.0.5 Safari/537.36" version="16.0.2" etag="Yp7LLleZP02DI3ilml0o" type="device"><diagram id="t7oXXoHcXUX1nYpc9MUl">7Vtbb6M4FP410T6N5fvlcdrO7LysNFKl3Z2nFZPQBJWGLCFtur9+TbDBBichFzLJaFClYOMLfOfzOccfdETuX9a/59Fi9kc2idMRhpP1iDyMMEZQMP1T1rybGoRVVTPNk4mpayoek/9i29XUrpJJvPQaFlmWFsnCrxxn83k8Lry6KM+zN7/ZU5b6sy6iqZkRNhWP4yiNO83+SibFrKql0Gn9JU6ms6J14SWybc0Iy1k0yd6cqcinEbnPs6yozl7W93FagmdhqQb6vOVqfV95PC/6dMBVh9coXZlHM/dVvNtn1R00rLpw9zZLivhxEY3LK2/atLpuVrykuoT0abRcVFg/JetYj3+3LPLsuQaI6hozW5wX8XrrHaMaB02gOHuJi/xdNzEdOOVVF8sdLgEnqj644dZbYxdh0J85JrF1kWHCtJ6oQUufGMDC4JEe4M0nH0u+6dI8m8c+YPqJ8/e/3cI3XYC28LD2Su+2tE6KshMEEBFT3vQDTHFTbrqWBduza4/y8tc4T/Sjx7lpVT1DPJn6XF9mq3xsq6hZc1E+ja3VcG9LOqZhAdPYujxOoyJ59W8jZC8zw9cs0RPXRGGEa0RgfVCfNpADhCRUjHFNH8mYP371uGZIdwG1ZqFUAEGlIIJQxLDCzJ9GSCAVwhxzJqmAHPvTVAh2ptkwr0asFxnpiWQMk6MimhLIIxrHYifRamLDQ4l9JjJeHRc5AYI3TooMwkXOpT8uVZ5rlGIo8rHByOcSD/UknXaM3CMeoHgf98rSMeTDAU8orox9tOX7MJY7edGXb4yUMYcond4IjQQV3iwEQkCYRJASyAmSnA7FPn5qHK4dHWKen5NKHRpQGxIqhdsk5Mew8Ei3eAvEFES2KOMTUx3JS65jMqKac0gKSWl7kkvxUpyJl37wZQdSsj9XroYXXDInqWr5FUx0VGsu8nZK1ZclgjPgUE35LMGQ6Fma7JGQoVgiB4udDVe+uVQJ8qZPrKNXRhOmk2umPLNxCmwSdLjXIIDJjvN5t3MBLhyyED4UH9T5+dAEJeaGJLQnHG3hxFWHlNKKshtCrO+nJ6Q6u4YdjA1W8TpdZPD3Yg0T9sgMgwefUAp9bQGJskB0OEMGLZgEmFl5gBGfVlgRRzxgpOXYzkgytJ9k0zxblYKf/plPSnnPcREBQc8or9F3298YosjjqPi4fEzm09aV/hIgtZTZIgFaa1xAArR3cikB9SlJ0/sszfLN0ORpc/QWVvevocFw6qGVFnkSVazYB5SPC+riAuG9Psr6bF6Y1wcEb9hbaD+QzXX5g4IenTcDWzl+482e42I8M4UnbcM/nfMvpsdJwNcvRABjTmyhLQ2RAOKkHkp1FW67xfFcHweCn8FyfYTFczK8sRhuM95Et36Y74RcO9N26ij1thA3Cb9QHZR3BZiTID5MPhun0XKZjH1kfbfs0p5vYI5yW7EpV/HJqgqlwbRDNp1NXAlfrAzmLbXyCJpyvMpf66XlvLbAvsrC1B41OSCANK9OAISqldmoo9Q+jfDnJLV49klcQm9Brm2PxLX34A6t/chZuh6X9PBI5ZlJsWMWtGeWMyYyISmQp4VZEt6S4v+uMnvhw3KzNj7qBoQv1s1FfTatfmn5x+6jZWHOzbj6lqqhbcPWwvUDzB7/GHB2/oI7owekUhvF94CsrnHIiTWBUJef+ByOr49C
dpuxxRESWlsH4ctOnF8uzhwmNf2KMz8uzlxN/NB7Uk/48FNTrDiQtJu5Hh4/FIB0W/woZwnlxwPEj5D49it+bI0faKuX0yZzLCZl18sNGFdwSDQrQX+22J8EsuP0yGVB5xgB2fr+CXahpQTYPcW5YwgOSUU/DbSul/H5bHfYF0I5JCuV8M5uHGUqIODtr18oQK603t14D4l0SJgqIX67daR5CesPJXFIOdqQ+Lcbx7Z8ydRiMaEXhTakGG1Ye/vQciD8z5UI4gBdFt6QjGASuPIT7u9PI3E3HomHf/RvMtcnt446oYC2PpBlQTWA6axvINBDYsA20LNVcfuoC50CC3lR1HWx+eeCaoPU/IsG+fQ/</diagram></mxfile>
2205.15307/main_diagram/main_diagram.pdf ADDED
Binary file (11.6 kB). View file
 
2205.15307/paper_text/intro_method.md ADDED
@@ -0,0 +1,145 @@
+ # Introduction
+
+ Tensorial Convolutional Neural Networks (TCNNs) are important variants of Convolutional Neural Networks (CNNs). TCNNs usually adopt tensor decomposition techniques to factorize large convolutional kernels into lower-rank tensor nodes, aiming to reduce the number of parameters. For example, the Tensor Ring (TR) format has been utilized to decompose CNNs (Wang et al., 2018), leading to a high compression rate while maintaining comparably good performance. The Tensor Train (TT) format has been used to improve the performance of CNNs for image classification while reducing parameters (Yin et al., 2021). The CP-Higher-Order convolution (CP-HOConv) was proposed to factorize higher-order convolutional neural networks and achieved state-of-the-art results in spatio-temporal facial emotion analysis (Kossaifi et al., 2020).
+
+ In addition to their advantages in reducing model parameters, TCNNs are promising to explore as a more general family of CNNs if the corresponding structures can be represented with hypergraphs. A hypergraph is a tensor diagram with a dummy tensor (as illustrated in Figure 1(c)) and a hyperedge (as illustrated in Figure 1(d)). Equipped with the hypergraph representation, TCNNs include not only factorized CNNs based on tensor decomposition methods (e.g., Tensor Ring (TR) decomposition (Wang et al., 2018), Tensor Train (TT) decomposition (Novikov et al., 2015; Gao et al., 2019; Garipov et al., 2016), CANDECOMP/PARAFAC (CP) decomposition (Lebedev et al., 2015; Pan et al., 2022), Tucker decomposition (Kim et al., 2016; Elhoushi et al., 2019), and Block-Term Tucker decomposition (Ye et al., 2018; 2020)), but also traditional CNN variants (e.g., low-rank convolution (Rigamonti et al., 2013; Idelbayev & Carreira-Perpiñán, 2020), factoring convolution (Szegedy et al., 2016), and even the vanilla convolution), since each of them can be represented as a hypergraph.
+
+ Despite these merits, TCNNs suffer from unstable training due to inappropriate weight initialization (Wang et al., 2018; Elhoushi et al., 2019). A common and direct initialization method generates weights by sampling from a probability distribution (Pan et al., 2019; Li et al., 2021). Unfortunately, this initialization method is sensitive to the choice of distribution variance; the distribution parameters are usually tuned manually, which is inefficient in practice. Another straightforward approach is to borrow adaptive weight initialization methods widely used in CNNs, such as Xavier initialization (Glorot & Bengio, 2010) and Kaiming initialization (He et al., 2015); however, they usually fail to initialize weights at a correct scale for TCNNs (Wang et al., 2020; Chang et al., 2020). In addition, ad-hoc initialization methods, such as the modified Xavier methods proposed in (Wang et al., 2018; Chang et al., 2020) or the method of decomposing corresponding CNN weights (Elhoushi et al., 2019), are either designed for specific TCNNs or dependent on special tensor decomposition methods.
+
+ <sup>1</sup>Harbin Institute of Technology Shenzhen, Shenzhen, China <sup>2</sup>University of Electronic Science and Technology of China, Chengdu, China <sup>3</sup>Tokyo Institute of Technology, Tokyo, Japan <sup>4</sup>State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China <sup>5</sup>School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China <sup>6</sup>Pengcheng Laboratory, Shenzhen, China. Correspondence to: Yu Pan <iperryuu@gmail.com>, Zenglin Xu <zenglin@gmail.com>.
+
+ ![](_page_1_Picture_1.jpeg)
+
+ Figure 1. Tensor graphical instances. (a) A tensor $\mathcal{T} \in \mathbb{R}^{\mathbf{i}_0 \times \mathbf{i}_1 \times \mathbf{i}_2}$; (b) Tensor Contraction; (c) Dummy Tensor; (d) Hyperedge.
+
+ ![](_page_1_Picture_3.jpeg)
+
+ Figure 2. Illustration for vanilla CNN. (a) $G_f$: A hypergraph forward process formulated as $\mathcal{Y} = \mathcal{X} \circledast \mathcal{C} + \mathbf{b}$, where $\mathcal{C} \in \mathbb{R}^{\mathbf{c}_{in} \times \mathbf{c}_{out} \times k \times k}$ denotes a convolutional kernel, $\mathcal{X} \in \mathbb{R}^{\mathbf{c}_{in} \times h \times w}$ denotes the input feature (Cyan Inverted Triangle), $\mathcal{Y} \in \mathbb{R}^{\mathbf{c}_{out} \times h' \times w'}$ denotes the output feature, $\mathbf{b} \in \mathbb{R}^{\mathbf{c}_{out}}$ represents the bias, and $\circledast$ denotes the convolutional operator. $k$ represents the kernel window size, $\mathbf{c}_{in}$ is the input channel, $h$ and $w$ denote the height and width of $\mathcal{X}$, $\mathbf{c}_{out}$ is the output channel, and $h'$ and $w'$ denote the height and width of $\mathcal{Y}$; (b) $G_b$: A hypergraph backward process derived directly from $G_f$; (c) $G_{bt}$: A hypergraph backward process equivalently transformed from $G_b$ with the Reproducing Transformation. $G_{bt}$ is mathematically equivalent to $G_b$. In (b) and (c), the Purple Triangle denotes the input gradient $\Delta \mathcal{Y} \in \mathbb{R}^{\mathbf{c}_{out} \times h' \times w'}$. $\tilde{h}'$ and $\tilde{w}'$ denote the height and width of the transformed $\Delta \mathcal{Y}$. In dummy-tensor-represented graphs, the convolutional kernel vertex should connect to the arrow head of the dummy tensor and the data-flow should connect to the arrow tail. Thus, $\mathbf{G_f}$ and $\mathbf{G_{bt}}$ are convolutions, while $\mathbf{G_b}$ is not.
+
+ Therefore, it is necessary to design a universal initialization scheme for various TCNN variants. To this end, we propose a unified paradigm that studies TCNNs from their topology. In detail, by extracting a backbone graph (BG) from a convolution hypergraph, we can encode an arbitrary TCNN into an adjacency matrix of the backbone graph and then calculate a suitable initial variance through the adjacency matrix, which can then be used to initialize any TCNN.
+
+ ![](_page_1_Picture_7.jpeg)
+
+ Figure 3. The overall workflow of the proposed unified initialization. A TCNN contains a forward hypergraph $\mathbf{G_f}$ and a backward hypergraph $\mathbf{G_b}$, besides the network weights $\mathcal{W}$. The objective is to achieve an acceptable variance $\sigma^2$ for $\mathcal{W}$ in order to keep the magnitude of the data-flow stable across layers. To reach this goal, we derive a unified paradigm to calculate the desired $\sigma^2$. Note that the paradigm is applicable only to a backbone graph (BG) derived from a convolutional hypergraph. As $\mathbf{G_b}$ cannot be converted into the BG representation, we propose a reproducing transformation to transform $\mathbf{G_b}$ into a convolutional representation $\mathbf{G_{bt}}$. With $\mathbf{G_f}$ and $\mathbf{G_{bt}}$, we can initialize TCNNs by regulating the data-flow variance.
+
+ The unified paradigm is applicable to controlling the variance of two data-flow types, i.e., features in the forward process (fan-in mode) and gradients in the backward process (fan-out mode). For the fan-in mode, since the forward hypergraph ($G_f$ in Figure 2(a)) is a dummy-tensor-based convolution, the unified paradigm can be applied directly. However, in the fan-out mode, the backward hypergraph ($G_b$ in Figure 2(b)) cannot represent a convolution process due to the conflict with the dummy tensor definition in Section 2.2. To solve this problem, we propose the Reproducing Transformation to reproduce $\mathbf{G_b}$ as a convolution hypergraph $\mathbf{G_{bt}}$, shown in Figure 2(c). Through the Reproducing Transformation, the unified paradigm becomes applicable to the backward process. The overall workflow is illustrated in Figure 3. In brief, our principled initialization can unify a variety of tensor formats and, meanwhile, fit both forward and backward propagation.
+
+ Through extensive experiments on various image classification benchmarks, we demonstrate that our method produces appropriate initial weights for complicated TCNNs compared with classical initialization methods. Last but not least, we show that our paradigm is intrinsically a generalization of Xavier and related methods (Wang et al., 2018; He et al., 2015; Chang et al., 2020), while working more effectively for arbitrary TCNNs.
+
+ # Method
+
+ In this section, we introduce the necessary preliminaries about tensors and Xavier/Kaiming initialization.
+
+ A tensor diagram mainly consists of two components: the tensor vertex and tensor contraction.
+
+ ![](_page_2_Figure_1.jpeg)
+
+ Figure 4. Reproducing Transformation, an equivalent transformation from $G_b$ to $G_{bt}$. Notations follow Figure 2. According to Section 2.2, the purple gradient vertex should connect to the arrow tails of the dummy tensors; therefore $G_b$ cannot denote a convolution. To overcome this problem, we design the Reproducing Transformation to bond the gradient vertex to the arrow tails for the convolution representation. Taking vanilla convolution as an example, the Reproducing Transformation utilizes an equivalent replacement with a reversal matrix $\mathbf{R}$ and a transformation matrix $\mathbf{T}$ (details in Section 3.1) to exchange the arrow tail entry and the extra entry. Then $G_{bt}$ can be derived by contracting $\mathbf{R}$ and $\mathbf{T}$ with the weight vertex and the gradient vertex, respectively.
+
+ **Tensor Vertex.** A tensor is denoted as a vertex whose order is given by the number of edges connected to it. The integer assigned to each edge denotes the dimension of the corresponding mode. For example, Figure 1(a) shows a 3rd-order tensor $\mathcal{T} \in \mathbb{R}^{\mathbf{i}_0 \times \mathbf{i}_1 \times \mathbf{i}_2}$.
+
+ **Tensor Contraction.** The inner product of two tensors over matching modes denotes tensor contraction. As illustrated in Figure 1(b), a tensor $\mathcal{A} \in \mathbb{R}^{\mathbf{i}_0 \times \mathbf{i}_1 \times \mathbf{i}_2}$ and a tensor $\mathcal{B} \in \mathbb{R}^{\mathbf{j}_0 \times \mathbf{j}_1 \times \mathbf{j}_2}$ can contract at the corresponding position, forming a new tensor in $\mathbb{R}^{\mathbf{i}_0 \times \mathbf{i}_1 \times \mathbf{j}_1 \times \mathbf{j}_2}$, when they have equal dimensions: $\mathbf{i}_2 = \mathbf{j}_0 \triangleq e_0$. The contraction operation can be formulated as
+
+ $$(\mathcal{A} \times_2^0 \mathcal{B})_{i_0, i_1, j_1, j_2} = \sum_{m=0}^{e_0 - 1} \mathcal{A}_{i_0, i_1, m} \mathcal{B}_{m, j_1, j_2}. \tag{1}$$
+
+ To enhance the expressive ability of tensor diagrams in deep models, Hayashi et al. (2019) proposed the hypergraph to represent the forward process of TCNNs through the dummy tensor and the hyperedge.
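
As a sanity check on the notation, the contraction of Eq. (1) can be reproduced with `numpy`; a minimal sketch (the shapes are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

# A ∈ R^{i0×i1×i2} and B ∈ R^{j0×j1×j2} share one mode: i2 = j0 = e0.
i0, i1, e0, j1, j2 = 2, 3, 4, 5, 6
rng = np.random.default_rng(0)
A = rng.normal(size=(i0, i1, e0))
B = rng.normal(size=(e0, j1, j2))

# Contract mode 2 of A with mode 0 of B, as in Eq. (1).
Z = np.tensordot(A, B, axes=([2], [0]))

# Element-wise definition of the same contraction: sum over the shared index m.
Z_ref = np.einsum('abm,mcd->abcd', A, B)
assert Z.shape == (i0, i1, j1, j2)
assert np.allclose(Z, Z_ref)
```

The remaining modes of both tensors survive in order, giving a 4th-order result, which matches the index pattern on the left-hand side of Eq. (1).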
+
+ **Dummy Tensor.** A vertex with an arrow symbol denotes a dummy tensor, which is able to represent a convolutional operation. As depicted in Figure 1(c), for a dummy tensor $\mathcal{P} \in \{0,1\}^{\alpha \times \alpha' \times \beta}$, $\alpha$ is the arrow tail entry, $\beta$ is the arrow head entry, and $\alpha'$ is the extra entry. The relation among the three entries is formulated as $\mathcal{P}_{j,j',k} = 1$ if $j = sj' + k - p$, and $0$ otherwise, where $s$ represents the stride size and $p$ denotes the padding size. The vector convolution in Figure 1(c) can be formulated as $\mathbf{c} = \mathbf{a} \times_0^0 \mathcal{P} \times_1^0 \mathbf{b} = \mathbf{a} \circledast \mathbf{b} \in \mathbb{R}^{\alpha'}$, in which $\circledast$ is the convolutional operator. This formulation represents a convolution in which $\mathbf{a}$ denotes a data-flow and $\mathbf{b}$ denotes a convolutional kernel. For a dummy tensor, the convolutional kernel vertex should connect to the arrow head, and the data-flow should connect to the arrow tail.
+
+ **Hyperedge.** A hyperedge $\varphi$ can connect more than two tensor vertices. As shown in Figure 1(d), the output of a special case, connecting three vectors through a hyperedge, can be calculated as $y = \sum_{k=0}^{\varphi-1} \mathbf{a}_k \mathbf{b}_k \mathbf{c}_k$. There is usually at most one hyperedge in a hypergraph layer, connecting to all weight vertices (Hayashi et al., 2019). A hyperedge $\varphi$ of a hypergraph represents a summation over $\varphi$ sub-structures, i.e., the parts without the hyperedge. For such an additive composite structure, we can derive the initialization of the whole architecture by processing each sub-structure.
+
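The dummy tensor definition above is concrete enough to execute directly; a minimal sketch (the helper name `dummy_tensor` and all sizes are illustrative assumptions) that builds $\mathcal{P}$ from $\mathcal{P}_{j,j',k} = 1$ iff $j = sj' + k - p$ and checks that contracting it with $\mathbf{a}$ and $\mathbf{b}$ reproduces a sliding-window convolution:

```python
import numpy as np

def dummy_tensor(alpha, beta, s=1, p=0):
    """P ∈ {0,1}^{alpha × alpha' × beta} with P[j, j', k] = 1 iff j = s*j' + k - p."""
    alpha_p = (alpha + 2 * p - beta) // s + 1   # output length alpha'
    P = np.zeros((alpha, alpha_p, beta))
    for jp in range(alpha_p):
        for k in range(beta):
            j = s * jp + k - p
            if 0 <= j < alpha:
                P[j, jp, k] = 1.0
    return P

alpha, beta, s, p = 8, 3, 1, 0
rng = np.random.default_rng(0)
a, b = rng.normal(size=alpha), rng.normal(size=beta)
P = dummy_tensor(alpha, beta, s, p)

# c = a ×_0^0 P ×_1^0 b: contract a with the arrow-tail mode and b with
# the arrow-head mode; the remaining extra mode alpha' is the output.
c = np.einsum('j,jik,k->i', a, P, b)

# Direct sliding-window evaluation of the same operation.
c_ref = np.array([sum(a[s * jp + k - p] * b[k] for k in range(beta))
                  for jp in range(len(c))])
assert np.allclose(c, c_ref)
```

With stride $s=1$ and padding $p=0$ the output length is $\alpha' = \alpha - \beta + 1$, as expected for a "valid" convolution window.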
+ Xavier initialization (Glorot & Bengio, 2010) and Kaiming initialization (He et al., 2015) are widely used in CNNs. They aim to control the variance of features and gradients for stable training. We introduce them through a vanilla CNN (Figure 2(a)), formulated as $\mathcal{Y} = \mathcal{X} \circledast \mathcal{C} + \mathbf{b}$, where $\mathcal{C} \in \mathbb{R}^{\mathbf{c}_{in} \times \mathbf{c}_{out} \times k \times k}$ denotes a convolutional kernel, $\mathcal{X} \in \mathbb{R}^{\mathbf{c}_{in} \times h \times w}$ denotes the input, $\mathcal{Y} \in \mathbb{R}^{\mathbf{c}_{out} \times h' \times w'}$ denotes the output, $\mathbf{b} \in \mathbb{R}^{\mathbf{c}_{out}}$ represents the bias, and $\circledast$ denotes the convolutional operator. $k$ represents the kernel window size, $\mathbf{c}_{in}$ is the input channel, $h$ and $w$ denote the height and width of $\mathcal{X}$, $\mathbf{c}_{out}$ is the output channel, and $h'$ and $w'$ denote the height and width of $\mathcal{Y}$.
+
+ Xavier initialization makes the following assumptions: (1) elements of $\mathcal{C}$, $\mathcal{X}$ and $\mathbf{b}$ all satisfy the i.i.d. condition; (2) $\mathbb{E}(\mathcal{C}) = 0$; (3) $\mathbb{E}(\mathcal{X}) = 0$; and (4) $\mathbf{b} = \mathbf{0}$. There are two modes of Xavier initialization: (1) maintaining the variance of the feature $\mathcal{X}$, referred to as the fan-in mode: $\sigma^2(\mathcal{C}) = \frac{1}{k^2\mathbf{c}_{in}}$; (2) maintaining the variance of the gradients, the fan-out mode: $\sigma^2(\mathcal{C}) = \frac{1}{k^2\mathbf{c}_{out}}$. In practice, the harmonic form is preferred: $\sigma^2(\mathcal{C}) = \frac{2}{k^2(\mathbf{c}_{in} + \mathbf{c}_{out})}$.
+
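The fan-in rule can be checked empirically by modeling each convolution output as a dot product between a $k^2\mathbf{c}_{in}$-element input patch and a kernel; a sketch with assumed sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
k, c_in, c_out = 3, 16, 64
fan_in = k * k * c_in

# Xavier fan-in mode: sigma^2(C) = 1 / (k^2 * c_in).
C = rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_in, c_out))

# Sample many unit-variance input patches; each output activation is a
# dot product of one patch with one kernel column, so the output
# variance should stay close to the input variance of 1.
x = rng.normal(0.0, 1.0, size=(20_000, fan_in))
y = x @ C
assert abs(x.var() - 1.0) < 0.1
assert abs(y.var() - 1.0) < 0.1
```

Each output is a sum of `fan_in` products of zero-mean terms, so its variance is `fan_in * sigma^2(C) * 1 = 1`, which is exactly what the fan-in choice is designed to achieve.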
+ Kaiming initialization extends Xavier initialization to incorporate the ReLU activation function. In accordance with Assumption (3) of Xavier initialization, Kaiming initialization requires the distribution of $\mathcal{C}$ to be symmetric. Similarly, Kaiming initialization also contains two modes: (1) the fan-in mode: $\sigma^2(\mathcal{C}) = \frac{2}{k^2\mathbf{c}_{in}}$; (2) the fan-out mode: $\sigma^2(\mathcal{C}) = \frac{2}{k^2\mathbf{c}_{out}}$.
+
+ ![](_page_3_Figure_1.jpeg)
+
+ Figure 5. Reproducing Transformation cases. $\sigma^2$ denotes the initial variance of each weight vertex. (i) Standard Convolution: the most common convolution in CNNs. We observe that the graphical initialization degenerates to Xavier/Kaiming initialization on the standard convolution, as they share the same weight variance formulation. (ii) Hyper Tucker-2 (HTK2) Convolution: Tucker-2 (TK2) is a classical tensor decomposition, known as the bottleneck structure in ResNet (He et al., 2016). We apply a hyperedge to its weight vertices to form the HTK2. (iii) Odd Convolution: We introduce a particularly complicated tensor format (named Odd Tensor here) originally proposed by Li & Sun (2020). The Odd Tensor contains 9 vertices and 14 edges. The connection among these vertices is irregular, making weight initialization a complex problem. By connecting all weight vertices with a hyperedge $\varphi$, it is flexible to construct HOdd (Graph-in: $\frac{1}{\sqrt[9]{p_a \varphi \prod_{i=0}^{1} \prod_{j=0}^{13} k^2 \mathbf{i}_i \mathbf{r}_j}}$; Graph-out: $\frac{1}{\sqrt[9]{p_a \varphi \prod_{i=0}^{1} \prod_{j=0}^{13} k^2 \mathbf{o}_i \mathbf{r}_j}}$). By successfully training Hyper Odd (HOdd) based networks, we can better demonstrate the potential adaptability of our method to diverse TCNNs.
+
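The factor of 2 in Kaiming's rule ($p_{\mathrm{ReLU}} = \frac{1}{2}$ later in the paper) compensates for ReLU zeroing half of each layer's pre-activation second moment; a small simulation (sizes and depth are illustrative assumptions) shows the signal scale surviving a stack of ReLU layers:

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in = 3 * 3 * 16          # k^2 * c_in for an assumed 3x3, 16-channel layer
x = rng.normal(0.0, 1.0, size=(5_000, fan_in))

# Stack several ReLU layers, each in Kaiming fan-in mode:
# sigma^2(C) = 2 / (k^2 * c_in).
for _ in range(8):
    C = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_in))
    x = np.maximum(x @ C, 0.0)

# The second moment of the signal stays on the order of its initial
# value instead of vanishing or exploding after 8 layers.
assert 0.1 < (x ** 2).mean() < 10.0
```

Replacing the gain of 2 with 1 (the Xavier fan-in choice) would shrink the second moment by roughly half per layer, which is the failure mode Kaiming initialization corrects.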
+ In this section, we introduce our proposed unified initialization paradigm designed for various TCNNs. We first introduce our Reproducing Transformation, then demonstrate the derivation of the unified paradigm, and finally provide a simple exemplar initialization method that can be obtained directly from the paradigm.
+
+ We build our unified initialization through a derivation on a convolution hypergraph, whereby we can directly obtain the fan-in mode initialization from the forward hypergraph $G_f$, since it is a natural convolution. However, the backward hypergraph $G_b$ derived directly from $G_f$ cannot represent a convolution, as elaborated in Figure 2, which hinders the derivation of the fan-out mode. To solve this problem, we design the Reproducing Transformation to convert $G_b$ into a convolution hypergraph $G_{bt}$. Before presenting the transformation, we first formulate the forward process.
+
+ In the forward process of a convolutional layer, we denote the output tensor by $\mathcal{Y}$ and the input tensor by $\mathcal{X}$. Then we have $\mathcal{Y} = a(f(\mathcal{X}, \theta)) \triangleq g(\mathcal{X})$, where $f(\cdot)$ is a linear mapping function, $\theta$ denotes the parameters of $f(\cdot)$, and $a(\cdot)$ denotes an activation function (usually a ReLU).
+
+ For the backward propagation, $\mathfrak{L}$ denotes the loss. In this process, we utilize a reversal matrix and a transformation matrix to achieve the equivalent transformation. These two auxiliary matrices only change element positions when they contract with another tensor, which helps calculate the variance of the data-flow and weight vertices.
+
+ **Reversal Matrix.** A reversal matrix $\mathbf{R} \in \mathbb{R}^{r \times r}$ is an anti-diagonal matrix, where $\mathbf{R}_{ij} = 1$ when $i + j = r - 1$, and $\mathbf{R}_{ij} = 0$ otherwise.
+
+ **Transformation Matrix.** A transformation matrix $\mathbf{T} \in \mathbb{R}^{t \times \tilde{t}}$ is an identity-like matrix, where $\tilde{t} = \varepsilon(t-1) + 1$ and $\varepsilon \in \mathbb{N}$ is a coefficient. $\mathbf{T}_{ij} = 1$ when $i = \frac{j}{\varepsilon}$, and $\mathbf{T}_{ij} = 0$ otherwise.
+
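The two auxiliary matrices can be sketched directly from their definitions; a minimal example (helper names and sizes are illustrative assumptions) showing that $\mathbf{R}$ reverses a vector and $\mathbf{T}$ spaces a vector's entries $\varepsilon$ apart:

```python
import numpy as np

def reversal_matrix(r):
    # R is anti-diagonal: R[i, j] = 1 iff i + j = r - 1.
    return np.eye(r)[::-1]

def transformation_matrix(t, eps):
    # T ∈ R^{t × (eps*(t-1)+1)} is identity-like: T[i, j] = 1 iff j = eps*i,
    # i.e. it inserts eps-1 zeros between consecutive entries.
    t_tilde = eps * (t - 1) + 1
    T = np.zeros((t, t_tilde))
    for i in range(t):
        T[i, eps * i] = 1.0
    return T

b = np.array([1.0, 2.0, 3.0])
assert np.array_equal(reversal_matrix(3) @ b, np.array([3.0, 2.0, 1.0]))

g = np.array([4.0, 5.0, 6.0])
# With eps = 2, T inserts one zero between consecutive entries of g.
assert np.array_equal(g @ transformation_matrix(3, 2),
                      np.array([4.0, 0.0, 5.0, 0.0, 6.0]))
```

Both matrices are permutation-like (one nonzero per row), so contracting with them only moves elements around; this is why they leave the variance analysis of the data-flow unchanged.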
+ With these two matrices, we can derive Theorem 3.1.
+
+ **Theorem 3.1.** Given a vector $\mathbf{a} \in \mathbb{R}^{\alpha}$ and a vector $\mathbf{b} \in \mathbb{R}^{\beta}$, let $\mathbf{y} = \mathbf{a} \circledast \mathbf{b} \in \mathbb{R}^{\alpha'}$; then $\Delta \mathbf{a} = \Delta \mathbf{y} \mathbf{T} \circledast \mathbf{R} \mathbf{b}$, where $\mathbf{R} \in \mathbb{R}^{\beta \times \beta}$ denotes a reversal matrix, $\mathbf{T} \in \mathbb{R}^{\alpha' \times \tilde{\alpha}'}$ represents a transformation matrix, $\circledast$ denotes the convolution operation, and $\Delta \bullet \triangleq \frac{\partial \mathfrak{L}}{\partial \bullet}$ denotes the gradient.
+
+ The proof of Theorem 3.1 is provided in Appendix A. Theorem 3.1 corresponds to the equivalent replacement in Figure 4. We implement the Reproducing Transformation by applying the equivalent replacements to the original backward hypergraph $G_b$, and then contracting $\mathbf{R}$ and $\mathbf{T}$ with the weight vertex and the gradient vertex, respectively. Finally, we obtain the transformed backward hypergraph $G_{bt}$, which denotes the backward convolution. We show some Reproducing Transformation cases in Figure 5.
+
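Theorem 3.1 can be checked numerically in a special case; the sketch below assumes stride 1 and zero padding in the forward pass (so $\mathbf{T}$ reduces to the identity) and assumes the backward convolution uses padding $\beta - 1$, and compares the theorem's expression against a direct chain-rule gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 9, 4
a, b = rng.normal(size=alpha), rng.normal(size=beta)

def corr(x, w, pad):
    # The paper's ⊛ at stride 1: c[j'] = sum_k x[j' + k - pad] * w[k].
    xp = np.pad(x, pad)
    n = len(xp) - len(w) + 1
    return np.array([xp[i:i + len(w)] @ w for i in range(n)])

y = corr(a, b, 0)                 # forward: y = a ⊛ b, length alpha - beta + 1

# Pretend the loss is L = sum(g * y), so that Δy = g.
g = rng.normal(size=len(y))

# Reference gradient via the chain rule: Δa_j = Σ_{j'+k=j} Δy_{j'} b_k.
da_ref = np.zeros(alpha)
for jp in range(len(y)):
    for k in range(beta):
        da_ref[jp + k] += g[jp] * b[k]

# Theorem 3.1 (T = identity at stride 1): Δa = Δy ⊛ (R b),
# a convolution of the gradient with the reversed kernel.
R = np.eye(beta)[::-1]
da = corr(g, R @ b, beta - 1)
assert np.allclose(da, da_ref)
```

Reversing the kernel with $\mathbf{R}$ turns the transposed-convolution gradient into an ordinary convolution, which is exactly what lets $G_{bt}$ satisfy the dummy tensor convention (data-flow at the arrow tail).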
+ Here, we derive the unified paradigm through variance analysis. We first give Proposition 3.2 and Proposition 3.3 to describe the relationship between variance and tensor calculation. Then we introduce the Backbone Graph (BG) to illustrate inner products in a hypergraph. At last, we obtain the paradigm in terms of the BG and these two propositions.
+
+ ![](_page_4_Figure_1.jpeg)
+
+ Figure 6. An example of deriving the Graph-in mode for the Hyper Tucker-2 (HTK2) convolution. Step 1: Since a hyperedge $\varphi$ indicates an adding operation over $\varphi$ sub-structures (Tucker-2 here), we can derive the whole architecture initialization by processing each sub-structure. Step 2: Since a convolution only calculates on the kernel window, we can remove the dummy tensors while keeping the kernel size $k$ to derive the Intermediate Graph (IG). Step 3: Since elements of the IG have the same variance, we can further remove the $\mathbf{c}_{out}$ edge while merging repeated edges to derive the Backbone Graph (BG). Then the initial variance of the convolutional weights can be derived as $\frac{1}{\sqrt[3]{p_a \varphi k^2 \mathbf{c}_{in} \mathbf{r}_0 \mathbf{r}_1}}$ in terms of the adjacency matrix of the BG, where $p_a$ denotes the scale of the activation function. The Graph-out case is shown in Figure 12 of the Appendix.
+
+ **Proposition 3.2.** Given tensors $\mathcal{X} \in \mathbb{R}^{\mathbf{i}_0 \times \mathbf{i}_1 \times \cdots \times \mathbf{i}_{m-1}}$ and $\mathcal{Y} \in \mathbb{R}^{\mathbf{i}_0 \times \mathbf{i}_1 \times \cdots \times \mathbf{i}_{m-1}}$, where elements of $\mathcal{X}$ and $\mathcal{Y}$ are independent of each other, the variance of their element-wise sum $\mathcal{Z} = \mathcal{X} + \mathcal{Y}$ is
+
+ $$\sigma^2(\mathcal{Z}) = \sigma^2(\mathcal{X}) + \sigma^2(\mathcal{Y}). \tag{2}$$
+
+ **Proposition 3.3.** A tensor $\mathcal{X} \in \mathbb{R}^{\mathbf{i}_0 \times \mathbf{i}_1 \times \cdots \times \mathbf{i}_{m-1}}$ (i.i.d.) and a tensor $\mathcal{Y}$ contract over $d$ dimensions ($d \leq \min(m, n)$), where $\mathcal{Y} \in \mathbb{R}^{\mathbf{j}_0 \times \mathbf{j}_1 \times \cdots \times \mathbf{j}_{n-1}}$ is i.i.d. and follows a zero-mean symmetric distribution. The $\mathbf{x}_t$-th dimension of $\mathcal{X}$ corresponds to the $\mathbf{y}_t$-th dimension of $\mathcal{Y}$, where $\mathbf{x}_t \neq \mathbf{x}_u$ and $\mathbf{y}_t \neq \mathbf{y}_u$ if $t \neq u$, $\mathbf{x}_t \leq m-1$, and $\mathbf{y}_t \leq n-1$. Without loss of generality, let $\mathbf{i}_{\mathbf{x}_t} = \mathbf{j}_{\mathbf{y}_t} = \mathbf{v}_t$ for $t \in \{0, 1, \dots, d-1\}$. The variance of the contracted tensor $\mathcal{Z} = \mathcal{X} \times_{\mathbf{x}_0, \mathbf{x}_1, \dots, \mathbf{x}_{d-1}}^{\mathbf{y}_0, \mathbf{y}_1, \dots, \mathbf{y}_{d-1}} \mathcal{Y}$ is calculated by
+
+ $$\sigma^{2}(\mathcal{Z}) = \sigma^{2}(\mathcal{X})\,\sigma^{2}(\mathcal{Y})\prod_{t=0}^{d-1}\mathbf{v}_{t}. \tag{3}$$
+
+ The proofs of the two propositions are given in Appendices B and C. It is worth mentioning that $\mathcal{X}$ in Proposition 3.3 is hard to satisfy i.i.d.; however, assuming a non-i.i.d. $\mathcal{X}$ is still applicable in practice, as empirically elaborated in Appendix H.1.
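
Proposition 3.3 is easy to probe by Monte Carlo; a sketch (all sizes and variances are illustrative assumptions) contracting two matrices over both modes, so $d = 2$ and each draw yields a scalar:

```python
import numpy as np

rng = np.random.default_rng(0)
v0, v1 = 7, 8                 # the two contracted dimensions
sx, sy = 0.5, 0.3             # element variances of X and Y

# Many independent draws of X ∈ R^{v0×v1} and Y ∈ R^{v0×v1}, fully
# contracted over both modes; each draw gives one scalar Z.
n = 200_000
X = rng.normal(0.0, np.sqrt(sx), size=(n, v0, v1))
Y = rng.normal(0.0, np.sqrt(sy), size=(n, v0, v1))
Z = np.einsum('nab,nab->n', X, Y)

# Proposition 3.3 predicts sigma^2(Z) = sigma^2(X) * sigma^2(Y) * v0 * v1.
predicted = sx * sy * v0 * v1
assert abs(Z.var() / predicted - 1.0) < 0.05
```

Each scalar $Z$ is a sum of $v_0 v_1$ independent zero-mean products, so its variance multiplies by every contracted dimension, which is the effect the Backbone Graph below is built to track.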
+
+ According to Proposition 3.3, the variance change depends not only on the weight and the input, but also on the contracted dimensions $\mathbf{v}_t$. Therefore, we introduce the Backbone Graph (BG), which only contains contracting edges (i.e., contracted dimensions). Figure 6 shows the process of deriving a BG from a dummy-tensor-based convolution. The adjacency matrix of a $\tau$-vertex BG is defined as $\mathbf{E} \in \mathbb{R}^{\tau \times \tau}$, whose elements satisfy $e_{ij} = e_{ji}$ with diagonal elements $e_{ii} = 1$, where $i, j \in \{0, 1, \ldots, \tau - 1\}$. As shown in Figure 6, this adjacency matrix is specially designed to fit the calculation of the variance: each element denotes the contracted dimension between two nodes. Thus, $e_{ij} = 1$ means the contracting dimension between node $i$ and node $j$ equals 1, suggesting that there is no edge between them. $\mathbf{E}$ is symmetric and each vertex does not connect to itself. A supergraph denotes an output tensor $\mathcal{Y}$. We use $BG(\mathbf{E})$ to denote the Backbone Graph that comes from $\mathcal{Y}$; $BG(\mathbf{E})$ can be regarded as an element $\mathcal{Y}_*$ of $\mathcal{Y}$.
+
+ Since $\mathbf{E} \in \mathbb{R}^{\tau \times \tau}$ is symmetric, we consider only edges $e_{ij}$ with $i < j$. Then, based on Proposition 3.3, we present Theorem 3.4 to reveal the scale of the data-flow after the input passes through a TCNN. The proof of Theorem 3.4 is in Appendix D.
+
+ **Theorem 3.4.** Assume the input $\mathcal{X}$ contracts with $n$ weight vertices $\{\mathcal{W}^{(i)}\}_{i=0}^{n-1}$. If the input variance is $\sigma^2(\mathcal{X})$ and the output variance is $\sigma^2(\mathcal{Y})$, then
+
+ $$\sigma^{2}(\mathcal{Y}) = \sigma^{2}(\mathcal{X}) \prod_{k=0}^{n-1} \sigma^{2}(\mathcal{W}^{(k)}) \prod_{i=0}^{n-1} \prod_{j=i+1}^{\tau-1} e_{ij}. \tag{4}$$
+
+ Next, considering the activation function and a hyperedge $\varphi$, the variance of the final output $\mathcal{Y}_o$ is $\sigma^2(\mathcal{Y}_o) = p_a \varphi \sigma^2(\mathcal{Y})$ according to Proposition 3.2, where $a$ is an activation map, $\varphi$ denotes the hyperedge value, and $p_a$ denotes the scale caused by the activation function; for example, $p_{\mathrm{ReLU}} = \frac{1}{2}$ and $p_{\tanh} = 1$. We set $\sigma^2(\mathcal{Y}_o) = \sigma^2(\mathcal{X})$ to keep the data-flow variance equal across layers. Thus, we can reformulate Eq. (4) as
+
+ $$\frac{\sigma^2(\mathcal{X})}{p_a \varphi} = \sigma^2(\mathcal{X}) \prod_{k=0}^{n-1} \sigma^2(\mathcal{W}^{(k)}) \prod_{i=0}^{n-1} \prod_{j=i+1}^{\tau-1} e_{ij}. \tag{5}$$
+
+ ![](_page_5_Figure_1.jpeg)
+
+ Figure 7. Activation distribution before training and results of the activation propagation analysis. Xavier (NN) and Xavier (HOdd) represent applying Xavier initialization to Linear-5 and HOdd-5, respectively. Graph-in and Graph-out represent applying the proposed initialization to HOdd-5. $\varphi$ denotes a hyperedge. Xavier (NN) works since it maintains activations in the unsaturated region of the tanh activation function, and Graph(-in/-out) also benefits from this. Under $\varphi$=1 and $\varphi$=4, the activations of Graph(-in/-out) are distributed in the unsaturated region, which indicates that Graph(-in/-out) can fit the sophisticated HOdd format and integrate with a hyperedge. Nevertheless, Xavier (HOdd) suffers from activation explosion in the saturated region and fails to train HOdd-5. Orthogonal initialization cannot train HOdd-5 either. By contrast, Graph(-in/-out) successfully trains the model and achieves relatively good results.
+
+ From Eq. (5), we find that $\sigma^2(\mathcal{Y}_o)$ is highly related to $\varphi$ and the edges of the BG, and will change exponentially as the number of edges increases. Notably, Xavier and Kaiming fail since they only consider channel edges and convolutional window size edges, namely, only part of the edges. An expository example is given in Appendix F.2. As a result, we can derive
+
+ $$\prod_{k=0}^{n-1} \sigma^2(\mathcal{W}^{(k)}) = \frac{1}{p_a \varphi \prod_{i=0}^{n-1} \prod_{j=i+1}^{\tau-1} e_{ij}}. \tag{6}$$
+
+ If the initialized weights satisfy Eq. (6), then we can attain the same effects as Xavier and Kaiming achieve, even on multi-vertex tensor graphs. Thus, Eq. (6) can serve as a **unified paradigm** to ensure the effectiveness of weight initialization methods on TCNNs.
+
+ To make Eq. (6) hold, there are plenty of choices for setting the variances of the weight vertices, which indicates potentially numerous weight initialization schemes. To verify the feasibility of our paradigm, we propose an exemplar choice by setting the variances of all weight vertices to the same value through
+
+ $$\sigma^{2}(\mathcal{W}^{(*)}) = \frac{1}{\sqrt[n]{p_{a}\varphi \prod_{i=0}^{n-1} \prod_{j=i+1}^{\tau-1} e_{ij}}}. \tag{7}$$
+
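Eq. (7) reduces to a few lines of code once the BG adjacency matrix is given; a sketch (the helper name, the convention that weight vertices come first in $\mathbf{E}$, and the example sizes are my assumptions) that recovers the Kaiming fan-in variance on a standard convolution, as stated for case (i) of Figure 5:

```python
import numpy as np

def graph_init_std(E, n, p_a=0.5, phi=1):
    """Exemplar rule of Eq. (7): all n weight vertices share
    sigma^2(W) = (1 / (p_a * phi * prod of edges e_ij)) ** (1/n).
    E is the symmetric BG adjacency matrix; e_ij = 1 means no edge.
    Assumed convention: the n weight vertices are indexed 0..n-1."""
    tau = E.shape[0]
    prod = 1.0
    for i in range(n):
        for j in range(i + 1, tau):
            prod *= E[i, j]
    var = (1.0 / (p_a * phi * prod)) ** (1.0 / n)
    return np.sqrt(var)

# Standard k×k convolution with c_in input channels: the BG has two
# vertices (kernel, input) joined by a merged edge of size k^2 * c_in,
# so Eq. (7) with p_ReLU = 1/2 gives sigma^2 = 2 / (k^2 * c_in),
# i.e. Kaiming fan-in.
k, c_in = 3, 16
E = np.array([[1.0, k * k * c_in],
              [k * k * c_in, 1.0]])
std = graph_init_std(E, n=1, p_a=0.5, phi=1)
assert np.isclose(std ** 2, 2.0 / (k * k * c_in))
```

The same function applies unchanged to multi-vertex BGs such as HTK2: only $\mathbf{E}$, $n$, and $\varphi$ change, which is the sense in which Eq. (7) generalizes Xavier/Kaiming.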
+ In this way, we can determine a specific weight initialization method, which we refer to as Graph Initialization. It has two modes, **Graph-in** and **Graph-out**, similar to the fan-in and fan-out modes of Kaiming initialization.
+
+ Graph-in and Graph-out are constructed by applying Eq. (7) to a TCNN's $G_f$ and $G_{bt}$, respectively. We take the derivation of Graph-in for the HTK2 convolution as an instance in Figure 6. After extracting the BG from the HTK2 $\mathbf{G_f}$, we can calculate a suitable initial variance for the weights of the HTK2 convolution by applying Eq. (7) to the BG's adjacency matrix. Graph-out is derived from $\mathbf{G_{bt}}$ in exactly the same way as Graph-in. We show some Graph Initialization demos in Figure 5.
2206.03377/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2022-01-16T08:37:07.186Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/13.9.9 Chrome/85.0.4183.121 Electron/10.1.5 Safari/537.36" version="13.9.9" etag="fRJW5j-iUzSqOyiEJIVC" type="device"><diagram id="W2yLoF5930SujZoaMB68">7V1bc6u4lv41qTrnISrdgcd9PfPQZ6qru6vm9COxSeLejsnYTu9kfv1IGGGQBAhbAjthP3TH2MhGa61P6/Jp6YZ8eXr91zZ9fvx3vszWNxguX2/I1xuMEUFM/E9eeTtciQk9XHjYrpblh44Xfl/9X1ZehOXVl9Uy2zU+uM/z9X713Ly4yDebbLFvXEu32/xn82P3+br5rc/pQ2Zc+H2Rrs2r/7Na7h/Lp2DweP2/stXDo/pmBMt3nlL14fLC7jFd5j9rl8i3G/Jlm+f7w19Pr1+ytZw8NS+H+763vFv9sG222bvcgA83/J2uX8pnK3/X/k097DZ/2Swz+Xl0Qz7/fFzts9+f04V896cQr7j2uH9al29XzwPFi4d1utuVf+9+ZPvFo3qx3+Y/qqkj4sp9vtmXckby9TLdPVbfefj4l3ydb4ufRL5///It+S5vW63X6vom32TlSN/Tp9Va6tYfqyehJxj+d/ZT/Pe3/CndiI+Uz5xt99lr67yhShpCjbP8Kdtv38RH1A0UIkAIrP6V8/ZWqXgCGDlc+3lUEcxiUKr6Y11Bogjg0ibSUjcfqu88ik/8UUrQLk3SL82arOQErIRSf1qvHsS0fL3L9/v8SbyRbZafpJXIa+t88aO4JH7Kf6T4AFMv/yzHKV58fS1le3j1Vr5ykIYu+sNPzpaaCe7T7UNWzhd1Fldt7uPS5LbZOt2v/m6Obpv1crRf85X4kkryCYo1UeOkOcYuf9kusvK2uu3pI/EEkEgfjAPEkuO/5tCHSTCGLvSiemonVaEWVeFrMZmfnxsKw//3RWJRIaXbXSGmT+IDiDy/FrJS74u/HuT//9imm919vn3KtmpA8VOe1dvnoUvd3AUM3KHl8l5qWVpq8EKog/heC2IgGJFEwsNytRUrwiqXHxeCkspxupKeBSEJhTpmxABxAzOU3tYBAyMQ0/PxgoXHi1sIYFxHDAholPSgRvHq12wrRFHIMyCU8AmhBEGsqwDBp2EJgtxAkljOdPUvDoUk3LMLMY01igmkzQnEHBq2qMyzsXgrt+8cS4z6J1HcIPzbrH8C093zwem9X73KSddhc8myeEltKBnjO8L5lDKIdBnEQC2vdTFYIBF5kEI8S8EqhRgBRkeTQjJLwSqFhIEEGq7hCAJRY8wS0ZdYYnPWx5AICiqRS5lfRgGfZn4d8iLvYH5FAFqbXeUijjG/DpmKdzC/JJlKf23h/fub3yQCEZ1Ef83IGQBgTLF42L2WH20saiplaWYxjYyGHnw/rZbLdZvwjvEWLF7t0zLjkZwVSJ8pPpZo4ov64EdlxxqpDx/ic4hZT018vK72hzwpJ7R8LdMeMg8CWXnhmPaQL95qL/SkxzHv2syhVEnY4HnXQwbicIlbEigocdaEABkUAQqgjrBMU7EYgkgb1TmlgiIKLONR0wfvSaoIHUnfah97lh/YdT0V0fNCehnHvEM3ruYd4o/Drzg1y4NsGYpDflcq0o0tZ3yXLn48FFh0uzjAm0wdbx/u/oGZmFjxnbD+xz8LJYSumeZvm/1qL6f1SeidgLddLd98+EktKWep4r+kd9laW+acIXebiZ+X3hXjSWMr5SkGZ59v2FeLba3l132upsNjyaoscZa/5qaCx4Y58k5khiKI4Q3tuUVerJMCFacorUQgxs1R8vv7XXZuChI5JG7OxnOSkBqeIyB
rfkPhPAwsM3xhsJwY5VA9R+qe145iAElzNJZY40LPKGw8BKU9IAx12O69w4Bt7Y7zYduWTPNR5+sFf9eBvv72Tbz9bbPIl42SYRPC50pip2udYLOSnHCAO4MjcVPp3jTCoxjQDoN3dbGxQ87wbEhW/vSfdfy1g7HyoyvCwp/1d0b1opVN1uE6jieEa4QhBbGmP1SEtJbgejCAY0LNgS1hn//CJHZIkl5DZRITDswFNarMtKc+GTNAPHAF1OSNUgTgME1ExGGBX/w14lDZZu36ffFvSiElppA4nKhi48IEm4WlZaOEU4knEVbYZO27FFYcT1R5c+FMzcJqLvhoqsK1CzdpFpbmnU1VE7RwoOaaSp8EhbSozkdO+rCRWiToo6ziwp+6CoebJ4ATYwaPJhJZebpY3GQxDe5hYsekRF0hjkUYYCN8TaqQdgTwIg7pjo8soZgCw4rGFVBYxtTVCyiJbAKaxhNQJPhZVlZZEUyBkVObTFZzpqFTVsRqV2MCn5lduAq3GsUT+tUEcsPECASJmXHFyFpE8bGZM9zuLH91jYnqFYwA4aRX/zSWjtybq7iCg+sVBDJQ9/25ZezYeNs3A0j+CsOllV9de+pY06bzKsUkHCfu2tWNIAhILaI2Qg0G4lNJZnJsHLWPDWMAa1+t4Yq/Yhlp53fd3VwEUeAGf+6nB9y10gPMBc4n4St92ee7Un8t6jxSUwIChTLp6aA47iGh40DpIBKOlXX9iIKqZNExb0dBpNn3ABhBYkVsb0cRmWN7xI52ktHZVv9bMdPCI8RwIduJrO5XHTShS7L5mi8LzVCmYhy1E4yaqdKRIASzxCA9M9oXXNqCFx8QQsdgEdGoTuyEABYh5qk8fcAYvalxjJBYv/u6pATud1AnHylbrZOPIjItFupgxTXNGYSChvJaORgBGqrYEoxDmZYm/H369IdkY18g/3Gi0BxxJiy0KWSUcJCY3ZYK3LIAU2SshSdhk0OW8uxuS3Uk6d/1ozANgYqbfrhR9hW5DK46N+GHTuqKwViHHz3r5r5hSAFXtWWfaEN52yCEoP1Ht/82s9vQ0DvUrnNfGQcasF3Z0RBggyQsVufhm/DCWMK0ao8tlHHMUFfzMfdlONJb6aglPsCy60CeuwbOAYkIiLSIEMm9L8kxqLPwfZk1++yjbkDDEt2mmmVu8GVQjECCO8ObQKUZGpaedjlTjCED1MhcjzLFYzbWusLypE1YNALIbBw3hrDG7L/1ToRFaNUJcgwJzUS0wRISCzsdr9zPbBkznxs+77ZehrmoVMYFbeWkEQX6puACkS0wzBhQHTl8J15Zezbr7JT+t7+zYoTfskW+XYo/vmbdm3817biorP44WXkac4MOoK/TJiO5aiPiXTtOS30VTTd+FbNbTmAVwRvya+mXUYd4XcT7/PlG7ywvk16NjvRVW3l4eOdZ/tin1wd5TAFY5bsIrBb5ZgeW+eLlqRDORKXcOMFmgRARwEwpq71+dSETUgX3Z8nZIUOjz+Ld89MGrPNcQ2mBquvVRqCvOpHBUhP7Lv4liWXOT8RnH5bHiYgOieENH40Q9+yyxzFQhfCGhDzssWe23EcokP5Xtsm2ZSn2CnB6LGDG5hYqhHs3J7JQwDz3pTvJadczb5hjQM2CErIk23jVGvkswbmkgo557gMVYmFmLuuTU0uCI35TT4JTntwESII3mv4fKlDH2hUR2HhCFfwwC939NFROvF5UYic11LDZJfOUcReLiF6cls1WTj09RGCIXp+BbtWmE7Ls7ITub64rze8SSjZCmELMT3fZcrnaPFxyL7d6zmGkBm6su4GbMDeEkBY0elFaBpIIQqMCUX7HLcJVhzev/dyY942kg8lXEy1GTC/sokQI11yIrJxNAriHzabMluM7wdZbXco/3p6lsX+5bjrfOP4lg8f87lElegtWoQJ/3p5dPL8RnIq5hYZIQZ+sFFP4sq3n3jUPsqsfgjdRYiEmgDRhXMQrwiXsJHiG6hj
AzTTjHK8M7qMtdyxFXfFmqD7a3JYH9Lbd4zDQ7jndWAfSWgrDQx9h9b9GM+HDvfKTm3z7lK5r7/0sp0S+SeEh4oDrbC+05VZ89UI4o+adUh9vS9WS71Xapd5bCbXZlKPC8gEP7+zV+Xhq1EJNpbjy7bL5jdWNy9XueS2JVvLqaiPzaOUTrPN0rw/kOsFZNcMv51WGnmpQXYxlVHoOMtTk+mLfV/RxxOo6v6gxv3eW+XVZIB1oTxPkz4yWuSLW6VmLbPlulHih9HJbvnvGs+vAs90MODPgOAAOMbaIECvtdSTICVnDUSck1DbR/ZqunONtVdork1N16dmEu9unWyVSDAvL2uxTYdzb8iZhaev0ebc6RuGLx9V6+Uv6lr/s1feoVwcFKX+MNdZvZoCGnDxeS/gc8lo3euVYTxqV9vWwTZerzJr/m0CXJQMA4tZNn5gmQHnojc16HDB0vItYlDuOQII9KLcDh1gpmXju/Spd/yZzoZsHF30z1Wm5zZ//UEWAKsuTbYvMk9IHi1gL7kDx5jq7V/cqdsIh9ivnxkwdfZYrhdyaw77KJYNJ9FSvi7zSc74V+rIR+pWuCsFm6W7/M9vt3RTlpBDygCqm5rw1Y7+BWuElZHQodJ2jEvVkuhdp52KqhV8ikeVxtRTLcUixMbvYxgjlHYjSs1zsQExMeqei8HkXk0NlZBaTXUyRyZ1Se7C8i8mBJz0vfOMvfHS6hS9yaGXwcS03mWzhi8wSxCyXPrmMv/BFtlLDLCYnMY248EUDGMPzwjfewsdPXPhiDyrhsEP7w1ruwV6GL3w+5BI4OfMu5TJw4fMhpjlhcrKYXBc+H2IKyJDtLR0OLvkPLJEN/aFnPzFqLVF0FabOnh9fjAhv8/DoMg+ujIuLpleOVTKRnaNrlGa9ZMIJIGZ35zrMW5hwepenk/DDltib8WPGj7MGamfRzvhxGn7EcTd+4J6j7oLhR8AevB8PP37M+DH7HxP4H/LckW4qUij8iG3liRk/ZvyY/Y+Lwo8EduNHBCLzNJEx8CNgwyBv9rG9Fvw4LX6ZDUQejtfjoDfPMmDjGYitgDkbyKgL7GwgvQZCYN9JksEMpH3zjTfH6Gy9K3an9mpew3I+oI5hBoHa2HDcZByfkFr1wUlQ++dnxbp+xSrSb5piyQ4ZeDxtspXN5z2Cvveq3VXNH2+1hy7aRMvnLboRF38w9eiBd0rPOwu9SuuUaOi97UcUHpfRQJDYz+EYZ0NibOObzPh2HXugZ6Tqm3fXmcQfDXM4AxxPhTlTkqc+ju7PmHPZmEM+GuZEyXS9XuIpCVcfR/dnzLlszPnr9qNFVxSyCkBGgpopuVkfR+VnqLlwqPloQINRTyktIOYkJp9r7qzaVxrliXk6qP1ocZzYDl4lQG0vOkt0Ic/eUi3Bvr0KU1+UTcHSzVJObfa8zXZisuZTXrSeyZFxckP/GS/WE/58FKBUY67APbhrPePqujIrhboBEaI681eIH0Ug6jwNCkWhtMJT69J2xDiARO20jm+bRS4P65iVoqYU9ASoCKcUA7beL16267fPws5/ZA5ulpfei4Qkyffv1Tt+ei9Od1ZcnFBD9rRX9nEo2Q/Y3j/L/mzDh1FkEb2N5ERtnqMXiQ/oFDBL/PxoIaYAcyehY8qAaoPiXeoDGje+R6lv8u2YUheGDrDu9jGb+x/Ozh16QMpz0VTzZDlB+UO+Sdffjlc16R4/80txdGtx8a9sv38r5126V92Zlur0P9kKqXb0H7rpPvbPWXCdhwCqjov1MwCpu3idz0lzFpJDB0jLEcnK8a2ORq4d+ni3zhc/bo7nOkKQCAevfq5jhNFNgHMd7bmWTnFQaopDHaQ47EhG5R+df/wiq85ErqijEbO5Z4OPYqTKgT+eYqGdvuzvKEYEzcyeaf5HrSkhtHEOVYcVmwdon26fl3P2JqO84ghX1Wcdi91P3rSMVvX3DCFxhxaVw45DnCa/SlEk5s0wt0ogFPXQARAGiir
USJn7WFWRSt932tV6vXretSW165mN3fPhmPP71auUin4CJYdpgiKbw4S/RhwqYLYdfD+Z8GLAIl1itOf0c1sSFHmRlpnvmusbvSK07u2q03F6pCnEbSlZJdURleeJ1MxWzSI9D1IpaR4NaO4Hr1l1E1K9VCERdMhCzaDaNEPGbfyUcEjqcvT8LKKGiHgvUgaTlkO6Z5ZWU1px3MPDCCctl1TN6dK6nDmmEPVVPoPNsUOm5X3MMZrM20YuOYd3MceYT6XHyCHKfx9zTMd1cFSqxk/CzFLH0BPhVX4W8qSRn5VdogfnZ1uW5MXL9u9jccY81jBJoL/sepWCupyUHuGWlT6x+mXD87lSa7kRNh3THuJrtOioJd0nlCp9q32s5K20Pxa19185xujUkiNrf5Ce0ahtA87Rrg6//uS8JXI4DMSr4akKVFVyOphdVY5qqUDVCioRaRhsROKzDTaFsPCIzdJoBOGXL7opezbYegGmSnFejhVTFgOIYpXNq7s6EfFgyMwWDnSChDdDpp2mJ8nhQwy5ezQSex2NDxuNdI/GOkbzADIOHLfLBpkEXTvIXBCeEARgLIYrlU13PSMGYK3gq1Vp3XGFaAVfktBwLkEPknTZ13DbTzocDA/W6pIPDuCLo+ayzjoNbjaudqDnrN24KAxlXAiEsq3utUvCiceVMAm7Erok8kOshHIJS5qroTxW03k9RM0ombHh62Ex9mUvkk1P/PKMm3YbN/Nk3NS5nDK2rcPYp613BeoebN2lDBTG1iFnmq3HsaOt34q7mysxZSSIrS9ZFi/pZGmyS7d11m3r3JOt26JvO8lk7DRawnym0SIe1NbbO9W4H/Ng7Ov77dOnP25xbe/ec+vGvWHcwqYh3qHl8l5amEG+MYwRwYgkBUydtqkjSP2CG75oVLU2rlkrt1grFqEoabdN9/rFOFTyqAHMjOPhSYkgVPIT8dMXb5ziHq/BWjAcjpW6okl3xC3oOQVTsENB9xooxYwSAHU4npZHrJ5iZui0SIwChnWJTcZsUMKZSadDRMhAYorrUnjEeKaGe8HRycnD2KW0MyNpw/ZGJg/jmd89WETTkYfxTPUeLK3pyMPYJcd3bYRA2xxPSB7G75GgbZ3j6cjD+D0StK1zPB15mDjE8+9jjkcmDxOHaF65/7+kd9n613y3Kh36KvOmxwf7XJtx1aDl6fVBSOMR3KW71QKIe37cnNOHpaui07r2NlfxwyGB4noue7/upRjYoILJmSrAEeBaUy2Z35nKzMxswWja8PrhVQECFBlib0Q+E2nFMJ7zQkpqtbDlHpxKsEUVNbpp1GAZqz7SVoO1VU2P+X/FiPqz9k4LO8pPqVRVtRqlUpWAvaBSqQCbuHZspd6zQuBTAo23h6f/k+6vQZ1f47EwQNoPy3TuI4ps9cYCDudWoU3vJqLtB/Fi0tM6kgbqHCmgdF7lLnWVi6bzfRySR7NWTKMV0yWpyDAimg/fB1Z0s9L3QbjP91EscABxs2Me62aCn8hysPpaLk6Rgu+GU0TiS3OKGIlAwlrV8ZBrwlW66VROBEtigGH710AKktomMa126NMpcsgSXgNbgkMOuNay9gy2hJ+TShAJmx+89qoEhzFARzdRsaEugDtBzaTjXGjvlWYCmLbbdWrCBDVznLMcT4DSyQkT1CE9OSNpK5KOzJ2gLmnDWVpt0pqORkFnXtI5gpsuWKVh2UoXOd0TkitoWLrRZU73dDwL+h75Qn3TPR3lgr5H6lDfdI/MvqAuqYEQrdsaSUI8qF3ERFtPLzBLyLr9AE/N2qTK1kujBlskXLM2ltiC+UEd2qxDhG3Lxsy0TVijOr9jEsMNg4xP6BBhFHQYzKzNIISifv0cqhnE5VkpxxBwFLxjEkdCz7ExzhEMwvVP4sQ0sYFNk2xDBO6UxFy2EV7Y0lcZVd/SV1qYJ6MacM7XaEaVAFRrQ6Yfy+CpU1KvUQXrmyQf0LCpYc2SbEME7pCkyMRX1yGJaFXrqmO
S364pEy+KF171lotAl1F76pDESdOfnaZZktXAh3VIkkN0lO9l1xTzQfzZ+lTNh6Fm5+5uLkRNN5eecJrjRcWil27QtNugPbVB4rTboMfpiGQNNIe1QZJDdBq0LfXpz6Dnqss5mTs2WdWFjd3z9XwkTjhvIHHCuqG4zY+yQfS4sZEVii8wYEIRiGPIYjsZnsYikklghS6nB0wR4HFkODYy4YaOhEW37RUn+FTdfEbO2vFz+GgEB0XjqZrNVhzeo2FD51BK0odZ07RlB6B3FEsp075010soL5f2riktikFMPdi5GJ6adk5iSdoKb+e2hCMaFjuRbleLo6DGPVV32Vu5zUFrL8sR7bFup6DHarMhIqHmSksuzvKgCLt151Cqk5YpHLCmWkjlMQEJ0TmN/kIZ0r3ydRjH4NFkSjKkqbmwCAKto0gryPFowEaaSJXglIccqJGz8AiL81K63eZPDEIaPIORXJwxS8eVUq3kLUkxGIt5K1WWn2rZslc7imOjvyYDDLZu7/G2itJu/5bBQd5y92iS+xvSym08Fh+7qn9NhZph+O31Od0sV5uHeYN1DdgjBHjd59JEjgmg5p76muUmoTZY85EZGM7JLQ/uz+WAI8WdpzhSHhm7Lga0a45EJNm6eV++fa7zc86GRT4yeWB47PsRFCwKqGDRxAqGx1WwDoevxV/8CAoWR6C2//lkP69P1+TXoCmVzVbKHeg9yTMwCrdJc6C+5LcHKeWb832ndXa/fw+eE40wQDVekcF0wD2k+yiY5+ShPVGHKogodbfPNiIMnHXBURc47qkrxsF0wVZX9AkL99l21oUhuiDeTtA0umArRx3EthOh8Vm64E2pik31LbpUXT783A8cthNZ14xavREcx4DUOuVxQ8lIMCWzlUU86Uavti4OPq4cZPtw9w/xPOIHQ/W/fxZDwMO3He6Vn9zk26d0XXvvZzkn8k0KD22J4DrbC9253cnWVpsH806pdbelosn3Kl1T7602y0K68k1YPuDhnf023ezuxVhq1CIOkPLKt8vmN1Y3Lle757XMtsmrq816pe65X+fpXh/IdYIrGN+9uJ+aZhvoqW6sL8+9Jqy+9e6Di9V1fnFjfu8s8+uyCjs0YJoA26B+tDGW/fxqaGa2DLH1fUGeOi/x9pMFZ0CbAW0GNC+ARj4UoMU2SvFogOahzjYD2gxoH1OsrvO7/0iARiDv6cMdEtBUIrVeVlk+ZL+XL/Pt/jF/yDfp+tvxapMqc/zIL7lstlxc/Eso3ls5uzLqb059xfK5aWw6vOni9zRKO5ayn/x3o1d68Dly7iTwlGniOn2HOuuCc9nGXYy2euy8Ll3HurSbF445tHdYOAgwWs9QwKdbOmwl+hlzZsx5H/M+R992zOFoSnc1IFNjxpwZc64Ec+gHw5w4ntLPCUgJmjFnrJzfjDlnYg56x5iDI2geBEzMWimqoKkONCYL+zScCUg3m3GmuvEuXfx4KI78uNUeGsuOy/J5sWwLXPzB1KPP6HRN0przRbJQSoHBtCcRwCYRXztAxkS4GCDuA+HaSZQzwk1tMzPCXZO05uyUPB8wAvUjrXW2uAC7uMbzRSbYMYAsgaMXqJupvJdrPDPUXZO0XOf/r9v3HKISFAED4GTfzy6As6XFKKBeAG6m9s5Zsfc7786g854hR0aL+pZwcowF654Ut+TfiQ+YCXt2c/9E6nPvZWIjDqjWIEiWK3jtbNcR++eqTZNeTx+vz9nDOt3JH1gQ9B7TZdGyoXjxI9svHm/Mxly2fg4G6Y+QJPn+fUR7iFB1SrISGxLXEmIuvNCWRuFepOXSeqWXwtmU5wkcTl3Gtc5tsNllHkc3g7vMe+FoKrJ+naTJ3MV+dnMNo68Z0eTv3ECDIX0o3fC9tT409luU5515a1UW22hj/hC9pVWnAR4xviNFE+5p1oCEaDVuFHMbkIQD/flw6CHiwZCBuNbMzVIbDCapsOcSXMj8irh0KvfH4fiAQe5Pw8Wp+z5
Nd6bH93nYpstVdjSK8vJUrBymJR8wI1XJvO742OtHCEQ+ooHYoSn85YpqcgcWMm5Qq0SMpw4+qbe4QKYQIwoSH3tV4t7U+KKaomO4rbb4mLmMzX61l0/5JP86p8tWUz5KiKa4Xbvu2HTvqJ1m/DMcibVFk3KaFI1WfCA00xzBKkwco91O3JtdPElHZs0IoRkYQkBoVx46nJ707scepCc7mVQ8qyPXrCmdmiKTXMG6domX21zK8xiaigl7/He+zOQn/h8=</diagram></mxfile>
2206.03377/main_diagram/main_diagram.pdf ADDED
Binary file (59.8 kB). View file
 
2206.03377/paper_text/intro_method.md ADDED
@@ -0,0 +1,123 @@
1
+ # Introduction
2
+
3
+ Event extraction (EE) aims to detect events in text and extract the corresponding arguments as different roles, so as to provide structured information for numerous downstream applications, such as recommendation [@Li-Gao-re; @Chun-Yi-CPMF], knowledge graph construction [@Xindong-graph; @Antoine-graph] and intelligent question answering [@Jordan-QA; @Qingqing-QA].
4
+
5
+ ![An example document for the event type of Equity Pledge, including selected sentences that are involved in multiple event records and where the event arguments scatter across sentences. We can observe that the relations between these entity mentions have intuitive patterns that could be leveraged to enhance the event extraction task. More information of entity color and complete event-related relations can be found in Appendix [7.2](#section:A.2){reference-type="ref" reference="section:A.2"}.](figures/example_table.png){#fig:example width="\\textwidth"}
6
+
7
+ Most previous methods focus on sentence-level event extraction (SEE) [@David-see; @Shasha-see; @Qi-see; @Yubo-see; @Thien-see; @Yue-see; @Lei-see; @Haoran-see; @Xinya-see; @Fayuan-see; @Giovanni-see; @Yaojie-see], extracting events from a single sentence. However, SEE is often inconsistent with real-world situations, where event arguments may scatter across different sentences. As illustrated in Figure [1](#fig:example){reference-type="ref" reference="fig:example"}, the event argument \[ORG1\] of event role *Pledger* is mentioned in Sentence 4, while the corresponding argument \[ORG2\] of event role *Pledgee* appears in Sentences 5 and 6. We call this the **across-sentence issue**. Another situation involves the **multi-event issue**: multiple events may exist in the same document. In the example in Figure [1](#fig:example){reference-type="ref" reference="fig:example"}, where two event records coincide, we should recognize that they may partially share common arguments.
8
+
9
+ Recently, document-level event extraction (DEE) has attracted great attention from both academia and industry, and is regarded as a promising direction for tackling the above issues [@DCFEE; @Doc2EDAG; @GIT; @DE-PPN; @PTPCG]. However, we observe that the relations between event arguments follow patterns that are an important indicator for guiding event extraction, and this information is neglected by existing DEE methods. Intuitively, relation information can build long-range knowledge of event roles across multiple sentences, which relieves the across-sentence issue. For the multi-event issue, shared arguments within one document can be assigned to different roles based on prior relation knowledge. As illustrated in Figure [1](#fig:example){reference-type="ref" reference="fig:example"}, \[ORG1\] and \[ORG2\] follow a prior relation pattern of *Pledger* and *Pledgee*, as do \[ORG1\] and \[SHARE1\] for the relation between *Pledger* and its *Pledged Shares*. Therefore, relation information, if well modeled, can increase DEE accuracy.
10
+
11
+ In this paper, we propose a novel DEE framework, called Relation-augmented Document-level Event Extraction (ReDEE), which models the relation information between arguments with a tailored transformer structure. This structure can cover multi-scale and multi-amount relations and generalizes to different relation modeling situations. We name this structure the Relation-augmented Attention Transformer (RAAT). To fully leverage the relation information, we introduce a relation prediction task into the ReDEE framework and adopt a multi-task learning method to optimize the event extraction task. We conduct extensive experiments on two public datasets. The results demonstrate the effectiveness of modeling the relation information, as well as of our proposed framework and method.
12
+
13
+ In summary, our contributions are as follows:
14
+
15
+ - We propose a Relation-augmented Document-level Event Extraction (ReDEE) framework. To the best of our knowledge, this is the first work to exploit relation information in document-level event extraction.
16
+
17
+ - We design a novel Relation-augmented Attention Transformer (RAAT). This network is general enough to cover multi-scale and multi-amount relations in DEE.
18
+
19
+ - We conduct extensive experiments, and the results demonstrate that our method outperforms the baselines and achieves state-of-the-art performance, with absolute F1 improvements of 1.6% and 2.8% on the two datasets respectively.
20
+
21
+ # Method
22
+
23
+ First, we clarify several key concepts in event extraction tasks: 1) ***entity***: a real-world object, such as a person, organization, or location. 2) ***entity mention***: a text span in a document referring to an entity object. 3) ***event role***: an attribute corresponding to a pre-defined field in an event. 4) ***event argument***: an entity playing a specific event role. 5) ***event record***: a record expressing an event itself, consisting of a series of event arguments.
24
+
25
+ In the document-level event extraction task, one document can contain multiple event records, and an event record may miss a small set of event arguments. Furthermore, an entity can have multiple entity mentions.
26
+
27
+ ![Overview of our proposed ReDEE framework.](figures/architecture.png){#fig:model structure width="\\textwidth"}
28
+
29
+ In this section, we first introduce the proposed architecture and then describe its key components in detail.
30
+
31
+ End-to-end training methods for DEE usually adopt a pipeline paradigm comprising three sub-tasks: named entity recognition, event role prediction and event argument extraction. In this paper, we propose the Relation-augmented Document-level Event Extraction (ReDEE) framework, which follows this paradigm. Our framework leverages relation dependency information in both the encoding and decoding stages. Moreover, a relation prediction task is added to the framework to fully utilize the relation knowledge and enhance the event extraction task.
32
+
33
+ More specifically, as shown in Figure [2](#fig:model structure){reference-type="ref" reference="fig:model structure"}, there are four key components in our ReDEE framework: Entity Extraction and Representation (EER), Document Relation Extraction (DRE), Entity and Sentence Encoding (ESE), and Event Record Generation (ERG). In the following, we introduce each component in detail.
34
+
35
+ We treat entity extraction as a sequence labeling task. Given a document $D$ with multiple sentences $\{s_1,s_2,...,s_i\}$, we use a native transformer encoder to represent the token sequence. Specifically, we use the BERT [@bert] encoder pre-trained in the RoBERTa setting [@roberta]. We then use a Conditional Random Field (CRF) [@CRF] to classify token representations into named entity labels, adopting the classical BIOSE sequence labeling scheme. The labels are predicted by the following calculation: $\hat{y}_{ne} = CRF(Trans(D))$. All intermediate embeddings of extracted entity mentions and sentences are then concatenated into a matrix $M_{ne+s}\in \mathbb{R}^{(j+i)\times{d_e}}$ via a max-pooling operation over each sentence and entity mention span, where $j$ and $i$ are the numbers of entity mentions and sentences, and $d_e$ is the embedding dimension. The loss function for named entity recognition is: $$\begin{equation}
36
+ \small
37
+ \mathcal{L}_{ne}= -\sum_{s_i \in D} logP(y_i|s_i)
38
+ \end{equation}$$ where $s_i$ denotes the $i^{th}$ sentence in the document, and $y_i$ is the corresponding ground-truth label sequence.
39
+
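The span pooling that builds $M_{ne+s}$ can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the NumPy realization are assumptions.

```python
import numpy as np

def build_mention_sentence_matrix(token_emb, sentence_spans, mention_spans):
    """Max-pool token embeddings over each entity-mention span and sentence.

    token_emb: (num_tokens, d_e) encoder outputs for the whole document.
    sentence_spans / mention_spans: lists of (start, end) token indices,
    end exclusive.  Returns M of shape (num_mentions + num_sentences, d_e),
    mention rows first, matching M_{ne+s} in the text.
    """
    rows = [token_emb[s:e].max(axis=0) for s, e in mention_spans]
    rows += [token_emb[s:e].max(axis=0) for s, e in sentence_spans]
    return np.stack(rows)
```

Each row is simply the element-wise maximum of the token embeddings inside one span, so the matrix has one row per entity mention plus one per sentence.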
40
+ The DRE component takes the document text ($D$) and the entities ($\{e_1,e_2,...,e_j\}$) extracted in the previous step as inputs, and outputs the relation pairs among entities in the form of triples ($\{[e_1^h,e_1^t,r_1],[e_2^h,e_2^t,r_2],...,[e_k^h,e_k^t,r_k]\}$). \[$e_k^h,e_k^t,r_k$\] denotes the head entity, the tail entity and the relation of the $k^{th}$ triple respectively.
41
+
42
+ An important question is how to define and collect the relations from data. Here we assume that every two arguments in an event record are connected by a relation. For example, *Pledger* and *Pledgee* in the *EquityPledge* event have a relation named *Pledger2Pledgee*, and the order of head and tail entities is determined by the pre-defined order of event arguments [@Doc2EDAG]. In this way, every event record with $n$ arguments creates $C_n^2$ relation samples. Note that this way of building relations generalizes to event extraction tasks in various domains, and the supervision for relations comes from the event record data itself, without any extra human labeling. We compute statistics of the relation types for the ChiFinAnn dataset. Table [1](#tab:stats snippet){reference-type="ref" reference="tab:stats snippet"} shows a snippet of the statistics, and the full version can be found in Appendix [7.3](#section:A.3){reference-type="ref" reference="section:A.3"}.
43
+
44
+ ::: {#tab:stats snippet}
45
+ **Relation Type** **#Train** **#Dev** **#Test**
46
+ ---------------------------- ------------ ---------- -----------
47
+ Pledger2PledgedShares 20002 2567 2299
48
+ Pledger2Pledgee 20002 2567 2299
49
+ PledgedShares2Pledgee 20002 2567 2299
50
+ Start2EndDate 19615 2239 1877
51
+ Pledger2TotalHoldingShares 18552 2412 2173
52
+
53
+ : The five most frequent relation types in the ChiFinAnn dataset. The complete statistics can be found in Appendix [7.3](#section:A.3){reference-type="ref" reference="section:A.3"}.
54
+ :::
55
+
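The pairwise relation construction described above, where an event record with $n$ arguments yields $C_n^2$ triples named after the head and tail roles, can be sketched as follows; the function and role names are illustrative.

```python
from itertools import combinations

def record_to_relation_triples(arguments):
    """Turn one event record into C(n, 2) relation triples.

    arguments: list of (role, entity) pairs in the record's pre-defined
    role order, so the earlier role supplies the head entity.
    Returns triples of (head_entity, tail_entity, relation_name).
    """
    return [(ent_h, ent_t, f"{role_h}2{role_t}")
            for (role_h, ent_h), (role_t, ent_t) in combinations(arguments, 2)]
```

For a record with three arguments this produces exactly three triples, e.g. `("ORG1", "ORG2", "Pledger2Pledgee")`, and no human relation labeling is needed beyond the event records themselves.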
56
+ To predict the argument relations in this step, we adopt the structured self-attention network [@SSAN], a recent method for document-level relation extraction. However, unlike previous work that uses a multi-class binary cross-entropy loss, we use the standard cross-entropy loss to predict exactly one label for each entity pair. The relation type is inferred by: $$\begin{equation}
57
+ \small
58
+ \hat{y}_{i,j}= argmax(e_i^TW_re_j)
59
+ \end{equation}$$ where $e_i, e_j \in \mathbb{R}^d$ denote entity embeddings from the encoder module of DRE and $d$ is the embedding dimension. $W_r\in \mathbb{R}^{d \times c \times d}$ denotes a biaffine tensor trained by the DRE task, with $c$ the total number of relations. The loss function to optimize the relation prediction task is: $$\begin{equation}
60
+ \small
61
+ \mathcal{L}_{dre}= -\sum_{y_{i,j} \in Y} logP(y_{i,j}|D)
62
+ \end{equation}$$ where $y_{i, j}$ denotes the ground-truth label between the $i^{th}$ and $j^{th}$ entities, $D$ the document text, and $Y$ the set of all relation pairs among entities.
63
+
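The biaffine scoring $\hat{y}_{i,j}= argmax(e_i^TW_re_j)$ can be sketched as below. This is a minimal NumPy illustration of the scoring rule only, assuming a trained $W_r$; it is not the SSAN encoder itself.

```python
import numpy as np

def predict_relation(e_i, e_j, W_r):
    """Biaffine relation scoring: score_k = e_i^T W_r[:, k, :] e_j.

    e_i, e_j: (d,) entity embeddings; W_r: (d, c, d) biaffine tensor,
    one (d, d) slice per relation type.  Returns the argmax relation index.
    """
    scores = np.einsum("d,dce,e->c", e_i, W_r, e_j)
    return int(np.argmax(scores))
```

Training would push the correct slice of $W_r$ to score the gold pair highest under the cross-entropy loss above.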
64
+ At this point, we have the embeddings of entity mentions and sentences from the EER component and the list of predicted relation triples from the DRE component. The ESE component encodes these data and outputs embeddings effectively integrated with relation information. In this subsection, we introduce the method that translates relation triples into calculable matrices, and the novel RAAT structure for encoding all the above data.
65
+
66
+ ![RAAT structure. First, each relation between entities and sentences is represented as a matrix. The matrices are then clustered by head entity. Finally, the clustered matrices are integrated into the transformer structure for attention computation.](figures/raan.png){#fig:raan structure height="8.5cm" width="7.5cm"}
67
+
68
+ First, we introduce the entity and sentence dependency mechanism, which not only includes relation triples but also describes links among sentences and entities beyond triples.
69
+
70
+ *Co-relation* and *Co-reference* represent entity-entity dependencies. For the former, two entities have a *Co-relation* dependency if they belong to a predicted relation triple; entity pairs have different *Co-relation* dependencies if their triples have different relations. *Co-reference* denotes the dependency between entity mentions pointing to the same entity: if an entity has several mentions across the document, every two of them have a *Co-reference* dependency. In the case where the head and tail entities of a relation triple are the same (e.g., *StartDate* and *EndDate* share the same entity in some event records), both *Co-relation* and *Co-reference* hold between them.
71
+
72
+ We use *Co-existence* to describe the dependency between an entity and the sentence its mention comes from; that is, an entity mention and the sentence containing it have a *Co-existence* dependency. All remaining entity-entity and entity-sentence pairs without any of the above dependencies are uniformly assigned the *NA* dependency.
73
+
74
+ Table [2](#tab:dependency system){reference-type="ref" reference="tab:dependency system"} shows the complete dependency mechanism. *Co-relation* differs from *NA*, *Co-reference*, and *Co-existence* in that it has several sub-types, whose number equals that of the relation types defined in the document relation extraction task.
75
+
76
+ ::: {#tab:dependency system}
77
+ **sentence** **entity**
78
+ -------------- ----------------- -----------------------------
79
+ **sentence** NA Co-existence/NA
80
+ **entity** Co-existence/NA Co-relation/Co-reference/NA
81
+
82
+ : All types of dependency among sentences and entities
83
+ :::
84
+
85
+ To effectively encode entity and sentence dependencies, we design the RAAT, which represents the dependencies as a calculable matrix and integrates it into attention computation. As shown in Figure [3](#fig:raan structure){reference-type="ref" reference="fig:raan structure"}, RAAT inherits from the native transformer but has a distinct attention module made up of two parts: self-attention and relation-augmented attention.
86
+
87
+ Given a document $D = \{s_1, s_2, ... s_j\}$, all entity mentions in this document $E^m = \{e^m_{1}, e^m_{2}, ..., e^m_t\}$, where $e^m_i$ denotes an entity mention (the superscript $m$ for mention and the subscript $i$ for index), and a list of triples $\{[e_1^h,e_1^t,r_1],[e_2^h,e_2^t,r_2],...,[e_k^h,e_k^t,r_k]\}$, we build a matrix $T\in \mathbb{R}^{c \times (t + j) \times (t + j)}$, where $c$ is the number of dependencies, and $t$ and $j$ are the numbers of entity mentions and sentences respectively. $T$ is comprised of $c$ matrices of the same dimensions $R\in \mathbb{R}^{(t + j) \times (t + j)}$, and each $R$ represents one type of dependency $r \in \{Co-relation_k, Co-reference, Co-existence, NA\}, k = 1, 2, ... N$, with $N$ the number of relation types. For an element of $T$, $t_{k, i, j}$ represents the dependency between $node_i$ and $node_j$: specifically, $t_{k, i, j} = 1$ if they have the $k^{th}$ dependency, and $t_{k, i, j} = 0$ otherwise. Here, a node $\in \{e^m_{1}, e^m_{2}, ..., e^m_t, s_1, s_2, ... s_j\}$ can be either an entity mention or a sentence.
88
+
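Building the binary dependency tensor $T$ can be sketched as below; the indexing scheme (mentions first, then sentences) and the function name are illustrative assumptions.

```python
import numpy as np

def build_dependency_tensor(num_mentions, num_sentences, dep_index, pairs):
    """Binary tensor T[k, i, j] = 1 iff node_i and node_j share dependency k.

    Nodes 0..num_mentions-1 are entity mentions; the rest are sentences.
    dep_index: dict mapping dependency names (e.g. 'Co-reference') to a
    channel index k.  pairs: iterable of (dep_name, node_i, node_j);
    dependencies are symmetric, so both (i, j) and (j, i) are set.
    """
    n = num_mentions + num_sentences
    T = np.zeros((len(dep_index), n, n), dtype=np.int8)
    for dep, i, j in pairs:
        k = dep_index[dep]
        T[k, i, j] = T[k, j, i] = 1
    return T
```

Each channel of the tensor corresponds to one dependency type, so it can later be contracted against a per-channel weight matrix inside the attention computation.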
89
+ However, $T$ would be huge and sparse under the above strategy. To compress $T$ and reduce the number of training parameters, we cluster *Co-relation* dependencies by the type of the head entity of the relation triple. For example, *Pledger2Pledgee* and *Pledger2PledgedShares* are clustered into one *Co-relation* dependency, and the two corresponding matrices $R_a$ and $R_b$ are merged into one matrix. As a result, we finally get $T \in \mathbb{R}^{(3 + H) \times (t+j) \times (t + j)}$, where $H$ denotes the number of head entity types in *Co-relation*, and 3 covers *NA*, *Co-reference*, and *Co-existence*. Let $X \in \mathbb{R}^{(t+j) \times d}$ be the input embeddings of the attention module and $W_{rq}, W_{rk}, W_q, W_k, W_v \in \mathbb{R}^{d \times d}$, $M \in \mathbb{R}^{(3 + H) \times d \times d}$ be weight matrices; we compute relation-augmented attention in the following steps: $$\begin{equation}
90
+ \small
91
+ Q_r=XW_{rq}, K_r=XW_{rk}
92
+ \end{equation}$$ $$\begin{equation}
93
+ \small
94
+ S_a = \frac{\sum_{i=1}^{3+H} Q_rM[i, :, :]K^T_r \cdot T[i,:,:]}{\sqrt{d}} + bias_i
95
+ \end{equation}$$ where $S_a$ denotes the score matrix of relation-augmented attention and $\cdot$ denotes element-wise multiplication. We compute the self-attention score and combine it with $S_a$ as follows: $$\begin{equation}
96
+ \small
97
+ Q=XW_q, K=XW_k, V=XW_v
98
+ \end{equation}$$ $$\begin{equation}
99
+ \small
100
+ S_b = \frac{QK^T}{\sqrt{d}}
101
+ \end{equation}$$ $$\begin{equation}
102
+ \small
103
+ O = (S_a + S_b)V
104
+ \end{equation}$$ where $O$ is the output of the attention module. Similar to the native transformer, RAAT stacks multiple identical blocks layer by layer. Furthermore, $T$ is extensible, since the number of *Co-relation* dependencies can be selected, and RAAT adapts to the input length, which equals the total number of sentences and entity mentions.
105
+
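A single relation-augmented attention block following the equations above can be sketched as follows. This is an assumption-laden NumPy illustration: the bias term is omitted for brevity, and the dictionary-of-weights interface is not the paper's implementation.

```python
import numpy as np

def relation_augmented_attention(X, T, W):
    """One relation-augmented attention block (bias term omitted).

    X: (n, d) embeddings of entity mentions and sentences.
    T: (C, n, n) binary dependency tensor with C = 3 + H channels.
    W: dict with (d, d) matrices W_rq, W_rk, W_q, W_k, W_v and a
       (C, d, d) per-channel weight tensor M.
    """
    d = X.shape[1]
    Q_r, K_r = X @ W["W_rq"], X @ W["W_rk"]
    # S_a: per-channel bilinear scores, masked by the dependency tensor.
    S_a = sum((Q_r @ W["M"][i] @ K_r.T) * T[i] for i in range(T.shape[0])) / np.sqrt(d)
    # S_b: ordinary scaled dot-product self-attention scores.
    Q, K, V = X @ W["W_q"], X @ W["W_k"], X @ W["W_v"]
    S_b = (Q @ K.T) / np.sqrt(d)
    return (S_a + S_b) @ V  # O = (S_a + S_b) V
```

When a dependency channel is empty (all zeros in $T$), the corresponding relation term contributes nothing and the block reduces to plain dot-product attention scores applied to $V$.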
106
+ Taking the embeddings of entities and sentences output by the previous component, the ERG component includes two sub-modules: an event type classifier and an event record decoder.
107
+
108
+ Given the embeddings of sentences, we apply a binary classifier for every event type to predict whether the corresponding event occurs. If any classifier identifies an event type, the following event record decoder is activated to iteratively generate every argument for that event type. The loss function to optimize this classifier is: $$\begin{equation}
109
+ \small
110
+ \mathcal{L}_{pred}= - \sum_i log(P(y_i|S))
111
+ \end{equation}$$ where $y_i$ denotes the label of the $i^{th}$ event type: $y_i = 1$ if there exists an event record of event type $i$, and $y_i = 0$ otherwise. $S$ denotes the input sentence embeddings.
112
+
113
+ To iteratively generate every argument for a specific event type, we follow the entity-based directed acyclic graph (EDAG) method [@Doc2EDAG]. EDAG is a sequence of iterations whose length equals the number of roles for a given event type; the objective of each iteration is to predict the event argument for one event role. The inputs of each iteration are built from the entity and sentence embeddings, and the arguments predicted so far become part of the inputs of the next iteration. Unlike EDAG, however, we substitute its vanilla transformer with our proposed RAAT structure (i.e. RAAT-2 as shown in Figure [2](#fig:model structure){reference-type="ref" reference="fig:model structure"}). More specifically, EDAG uses a memory structure to record extracted arguments and adds the role type representation to predict the current iteration's arguments, but this procedure hardly captures the dependencies among the entities in memory, the argument candidates, and the sentences. In our method, the RAAT structure connects the entities in memory and the candidate arguments via the relation triples extracted by the DRE component, constructing an explicit representation of the dependencies. In detail, before predicting the event argument for the current iteration, the matrix $T$ is constructed as described above so that the dependencies are integrated into attention computation. After an argument is extracted, it is added into memory, and a new $T$ is generated for the next iteration's prediction.
114
+
115
+ Therefore, RAAT strengthens the relation signal in attention computation. RAAT-2 has the same structure as RAAT-1 but independent parameters. The loss function for the event record decoder is: $$\begin{equation}
116
+ \small
117
+ \mathcal{L}_a= -\sum_{v \in V_D}\sum_elog(P(y_e|(v, s)))
118
+ \end{equation}$$ where $V_D$ denotes the node set of the event record graph, $v$ denotes the event arguments extracted so far, $s$ denotes the embeddings of sentences and event argument candidates, and $y_e$ denotes the label of argument candidate $e$ at the current step: $y_e = 1$ if $e$ is the ground-truth argument for the current event role, and $y_e = 0$ otherwise.
119
+
120
+ To train the above four components, we leverage multi-task learning [@Ronan-multi-task] and combine the four corresponding loss functions as follows: $$\begin{equation}
121
+ \small
122
+ \mathcal{L} = \lambda_1\mathcal{L}_{ne} + \lambda_2\mathcal{L}_{dre} + \lambda_3\mathcal{L}_{pred} + \lambda_4\mathcal{L}_{a}
123
+ \end{equation}$$ where the $\lambda_i$ are pre-set weights that balance the four components.
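The combined objective is a straightforward weighted sum, as in this sketch; the default weights shown are placeholders, not the paper's tuned values.

```python
def total_loss(l_ne, l_dre, l_pred, l_a, lambdas=(1.0, 1.0, 1.0, 1.0)):
    """Multi-task objective L = λ1·L_ne + λ2·L_dre + λ3·L_pred + λ4·L_a.

    The λ values are pre-set hyperparameters; the defaults here are
    illustrative only.
    """
    return sum(lam * loss for lam, loss in zip(lambdas, (l_ne, l_dre, l_pred, l_a)))
```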
2206.13452/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-09-13T01:56:47.368Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36" version="14.8.6" etag="oLogBemFcz9zOVyti6ms" type="google"><diagram id="YT7QkFDqH4t8oUqwP09p">5V1bc6M2FP41ntm+ZIxu4MeNk15m2plO89Ddpx0Csk2LLQ8mG6e/viIIbEDILDFH2M5DxgiQxXcO37mCJ3i+3v+S+NvVHyLk8QRNw/0EP0wQmk0d+T8beMsHKGL5wDKJwnzIOQw8Rf9xNThVoy9RyHeVA1Mh4jTaVgcDsdnwIK2M+UkiXquHLURc/datv+SNgafAj5ujf0dhuspHPeQexn/l0XJVfLPDZvmetV8crKbYrfxQvOZD7xeHHyd4ngiR5p/W+zmPM+wKXHIEfm7ZWy4s4Zu0ywkoP+G7H7+oa1PrSt+Ki03Eyybk2fHOBN+/rqKUP239INv7KqUrx1bpOla7d2ki/uVzEYvk/WzsoWfMmNyziOK4GN+IDS8PLhCU136vVsOTlO9br8gpcZL6xcWap8mbPESdQKZKSZRuOYjm268HSTlYHbM6llJxoq+0Y1nOfQBQflAY6vHEQ+MZurNnuVB7eCJQPMnQeLLA488Li3hiUDxpAz4eSmpTm+q6D4hmiiaSdCWWYuPHvwuxVTj+w9P0TRGz/5KKKsp8H6Vfjj5/zaa6o2rrYa9mft94UxsLsUnVhA4roc5WVyVM8ZIEakjRbeonS65A8jqLI+Gxn0bfq7N/BFoGA+1GLutLAWC28fV444Dt+9bbEdJ1gTgA4nC63x6d5aFO/VNEcsbytkJejfZZ7W7JV6rOqkm1XEYnQbsaTpI8RXYT+uh8m7j36cR9yIeaKhHH0knhp4nK321zz2UR7TN9aRHIh6hIclEVs4J2jqiIaJiInIGIvNMgonvnQoDEyB6Qs9EyegtV5WcZyKoH9RSBwdDc01UoxXq06o0umiM8ONUuJGhC8YJJAhJJXZyXQeZ/S8+P3ZEnLT3shRfwIND53s8eJbT9jj+r3kKCjUdLyX2olWqodWaRWjWqCoVuD3NX+uxnNHY6idg0dtRA0/iSjR0C9OMcoPgRiDW8po4W9s+Kjro3zxoaiVjlcVMESC6aNSB9jdlp7r1cFxmSf5EpZCMXjySgTpbscxWWDCGNJWP2eLNYj1ZP6SXzJoa8268qRkOaiOBQjLagozcfo2klYpM1TDEau2jWgLRs7DT3Xq6PAMq/piIau3gkIXXSG60l61er7hPbany0AToB9JVn3Gg46lZ5bk5UL7XUJ8ov7xwl7BFXDXuIHztN8WNkz9gWPHq7rSBaiVCoO5IQWr2RhusFKS7UcjNIreQWUu6FZGJqxzyD1SPTGsyA/kNxe4+gf8QO9p5F7McbOwO2nGACQ3GdpaJrlYVvORkHEwH639gUWcM2qYyDiSCxb4vHr7atpa7pkGBbLKEOEDPMmvRNsEX6thjGjyNlqpOIVYPaobQKT+qLBWd62imfDBqAZiALsQQoeobhGaJJTRFiT6uJxeLsOHhGKxGLzF+sx3IjzDh4BtCfIbpnNe20ztjB3rPI8aZgFbbZZhzYQ+r9VT0DSliTzanFRlPSoURsIRXgc2+h1ery+ecB2ByyoEyuKjolmliIWmzxJzcfnWolYpNnTNEpXEPPOHgG0HoWMh9BC5Ad7D17HF8w4AiahsaBPaTeo9HaV7A2I6zxNYfoM2npamC1QkvfPiNar5YN12dE7ZWKzT0t59GZ0N+tyjcPdVAg7VNxA3QF6+WOPHLnEORgyhB2sTNlFTVAM3rnUddjBLsMMW9G+qkXxvp5iq9x9asYQPksdqGfPySgmo5yarGjn
Jrq3hM6z151F/ixNMpzaZCvu/+AurWnraaAhtmUcGiRwzX3IzBqURZtXe1X249Q13yHAKI93s73Hvxe9Kge8zuzmPKhFlvEx5Hy0UlkCIurd6MYmx67UWYvajgnipnyHRXz8pdFMw9TtWmYeQZHdsyU+2iRwzVX0BpmHlIW482F9KE5rDE8FjtUmKkuX9H032wGFiAZvzrjlA8aQmi5qUbfIodrzr7WGQdUFuOt2YNlX6nm/cKA7hhuaXP/0fQYc09MdEbfzd4rxcaYfdU1/g1Rwm3Lvs5M2VeMzGnRH3iI2JR9xcSc5D2j8tlryBij8mnD+wG8rBblmxlT/+dTPmPqH1D5dC+Xq2vjJvyc/RbMQRkrMj3SsY7GqoMYj1wFqnEVirGPvpGg9i58gqd306M/p6d0a9MOaLh0zS03Iz5KBxFffdrhxOc2c0af/J8aEpSueFqVVNV/V2LV/DCLH0fLjdwMpMC4HL/PHPtIBiOf1Y51FIZxW6RRtTlDvA7m9LunkUZ96mX4PoGC20wTfXq+IejrvQ2Q0KMm9MENQc86lILOhL3cPPxEWk5Qh9+Zw4//Aw==</diagram></mxfile>
2206.13452/main_diagram/main_diagram.pdf ADDED
Binary file (39.2 kB). View file
 
2206.13452/paper_text/intro_method.md ADDED
@@ -0,0 +1,118 @@
1
+ # Introduction
2
+
3
+ Model-based Reinforcement Learning (MBRL) enables an agent to predict what would happen if it executed various actions, thus allowing it to learn from *imagined* experience (Hafner et al., 2019). However, this approach relies on the model being learned accurately. Many model-based approaches rely on dense models that predict the next-step value of each variable from the action and all variables in the current state, as shown in Fig. 1 (a). Such dense models are sensitive to spurious correlations, which lead to poor generalization. For example, when door B is at angles unseen during training or the clock shows unseen times, the prediction for door A can be inaccurate due to the model's unnecessary dependence on those variables.
4
+
5
+ Proceedings of the 39<sup>th</sup> International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).
6
+
7
+ ![](_page_0_Figure_9.jpeg)
8
+
9
+ Figure 1. With two doors that the robot can open and go through and a clock on the wall, (a) dense dynamics models predict the dynamics of each state variable unnecessarily using all variables; (b) based on a pre-defined reward (e.g., for navigation), existing state abstractions learn to omit the clock but still use a dense model for the remaining variables; (c) our causal model reasons about and keeps only the necessary dependencies (i.e., doors A and B depend on the action individually) and derives a state abstraction independent of any reward function.
10
+
12
+
13
+ Observing that unnecessary dependencies are the source of poor generalization, existing state abstractions mitigate this problem by removing some of those dependencies. Specifically, state abstractions group many states into an abstract state by omitting some state variables (Chapman & Kaelbling, 1991; McCallum, 1996; Jong & Stone, 2005). For example, bisimulation (Zhang et al., 2020b), which is particularly related to this paper, omits variables irrelevant to a pre-defined reward function. Though doing so removes unnecessary dependence on omitted variables, as shown in Fig. 1 (b), the same generalization issues persist as dense models are still used for the remaining variables, leaving unnecessary dependencies in the abstract state. Furthermore, despite bisimulation improving sample efficiency of task learning by reducing the learning space, such methods only find problem-dependent abstractions: they may omit variables that are irrelevant to the current task but that the agent can control and utilize for future tasks.
+
+ <sup>1</sup>Department of Electrical and Computer Engineering, <sup>2</sup>Department of Computer Science, The University of Texas at Austin, Austin, USA; <sup>3</sup>Sony AI. Correspondence to: Zizhao Wang <zizhao.wang@utexas.edu>.
+
+ Given that good generalization requires keeping only the necessary dependencies, this paper introduces Causal Dynamics Learning (CDL) for Task-Independent State Abstraction, which learns a *causal* model that explicitly reasons about which actions and variables affect which variables from collected data, as shown in Fig. 1 (c). For example, by not depending on door B and the clock, the causal model's predictions about door A are unaffected by those variables and likely to be more accurate than those of dense models. Specifically, we prove that each variable's dependence on other variables (or actions) can be determined by a single conditional independence test. Such a test is then carried out by a novel architecture which estimates the conditional mutual information while learning the dynamics model.
+
+ Furthermore, by revealing all unnecessary dependencies, certain state variables which no other variables depend on (e.g., the clock) can be omitted for planning, forming a new form of state abstraction. Specifically, our model partitions state variables into those that the agent can change with its actions (*controllable variables*, e.g., doors A and B), those that it cannot change but that influence the actions' results on those it can (*action-relevant variables*, e.g., an obstacle that may block door A's motion), and the remainder (*action-irrelevant variables*, e.g., the clock), which have no influence on the others and thus can be omitted during planning. Also, in the abstract state, the dynamics model is still free of unnecessary dependencies and exhibits the same generalization benefits. Derived purely from dynamics, our state abstraction includes all controllable variables that the agent can use in the future, enabling it to solve a wider range of tasks than bisimulation, which only retains variables specific to a single task.
+
+ CDL is compared against state-of-the-art dense models in two simulated environments: a chemical environment with causal relationships of different complexities, and a tabletop manipulation environment with challenging rigid body dynamics. We find that CDL learns causal relationships accurately and retains similar prediction accuracy on unseen states to the accuracy on seen states, while the prediction accuracies of dense models drop 60 ∼ 90% in some complex environments. When applied to downstream tasks, policies with the proposed causal state abstraction learn with higher sample efficiency and also generalize better than those with dense models.
+
+ # Method
+
+ We begin with a formal definition of controllable, action-relevant, and action-irrelevant state variables. Suppose we are given the causal graphical model of a Markov Process with $V = \{s_t^{1:d_S}, a_t, s_{t+1}^{1:d_S}\}$ as nodes and $E$ as edges describing causal relationships from $s_t^{1:d_S}$ and $a_t$ to $s_{t+1}^{1:d_S}$. The ancestors of a state variable $s^i$ are defined as all nodes that have a directed path leading to node $s^i$ (not necessarily from the immediate previous time step, but possibly from any previous step). For example, for the causal dynamics model shown in Fig. 2 (a), $s^4$ is an ancestor of $s^2$ as there is a path $s_t^4 \rightarrow s_{t+1}^3 \rightarrow s_{t+2}^2$. Descendants of nodes are defined in the same way but in the opposite direction. Then we have:
+
+ **Definition 1** (Controllable State Variables) $s^{\mathcal{C}}$ are the descendants of the action $a_t$ .
+
+ **Definition 2** (Action-Relevant State Variables) $s^{\mathcal{R}}$ are ancestors of controllable state variables, excluding those already belonging to $s^{\mathcal{C}}$ .
+
+ **Definition 3** (Action-Irrelevant State Variables) $s^{\mathcal{I}}$ are those that belong to neither $s^{\mathcal{C}}$ nor $s^{\mathcal{R}}$ .
+
+ In the above definitions, $\mathcal{C}$, $\mathcal{R}$, and $\mathcal{I}$ are the sets of state dimension indices for controllable, action-relevant, and action-irrelevant state variables, respectively.
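Under the assumptions stated later (every variable has a self-edge, and edges only point one step forward in time), multi-step ancestry reduces to reachability in the stationary one-step graph, so Definitions 1-3 can be computed by plain graph search. A minimal sketch, assuming the one-step parent sets are given as a dict; the `partition_variables` helper and the node name `"a"` for the action are illustrative, not from the paper:

```python
# Hypothetical sketch: partition state variables into controllable (C),
# action-relevant (R), and action-irrelevant (I) sets, given the one-step
# causal graph as a dict {child: set(parents)}. Parents are state-variable
# names plus the special node "a" (the action). Every state variable is
# assumed to appear as a key (guaranteed by the self-edge assumption A3).
def partition_variables(parents):
    children = {}  # invert the graph: node -> set of one-step children
    for child, pas in parents.items():
        for p in pas:
            children.setdefault(p, set()).add(child)

    def reachable(start, adj):
        # BFS on the stationary one-step graph = multi-step descendancy/ancestry
        seen, frontier = set(), [start]
        while frontier:
            node = frontier.pop()
            for nxt in adj.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    controllable = reachable("a", children)  # Def. 1: descendants of the action
    ancestors_of_c = set()
    for v in controllable:
        ancestors_of_c |= reachable(v, parents)
    action_relevant = (ancestors_of_c - controllable) - {"a"}  # Def. 2
    action_irrelevant = set(parents) - controllable - action_relevant  # Def. 3
    return controllable, action_relevant, action_irrelevant
```

On the Fig. 1 example (two doors driven by the action, a self-evolving clock), this yields the doors as controllable and the clock as action-irrelevant.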
+
+ Given these definitions, the type of each state variable in the example causal dynamics model is shown in Fig. 2 (b). Further, in (c), one may notice that the causal graph can be split into three parts, allowing us to rewrite the transition probabilities as $\mathcal{P}(s_{t+1}|s_t, a_t) = p(s_{t+1}^{\mathcal{C}}|s_t^{\mathcal{C}}, s_t^{\mathcal{R}}, a_t) \cdot p(s_{t+1}^{\mathcal{R}}|s_t^{\mathcal{R}}) \cdot p(s_{t+1}^{\mathcal{I}}|s_t^{\mathcal{R}}, s_t^{\mathcal{I}})$, where the edge from $s_t^{\mathcal{R}}$ to $s_{t+1}^{\mathcal{I}}$ is optional, as it does not change the split of state variables ($s^{\mathcal{I}}$ are still neither $a_t$'s descendants nor $s^{\mathcal{C}}$'s ancestors).
+
+ Then CDL forms the state abstraction $\phi$ by omitting action-irrelevant state variables, i.e., $\phi(s_t) = (s_t^{\mathcal{C}}, s_t^{\mathcal{R}})$, and the dynamics in the abstract space can be expressed by removing the subgraph involving action-irrelevant state variables; the result remains a causal dynamics model itself:
+
+ $$\mathcal{P}(\phi(s_{t+1})|\phi(s_t), a_t) = p(s_{t+1}^{\mathcal{C}}|s_t^{\mathcal{C}}, s_t^{\mathcal{R}}, a_t) \cdot p(s_{t+1}^{\mathcal{R}}|s_t^{\mathcal{R}}).$$
+
+ This state abstraction $\phi$ can be used to solve any *actively-accomplishable* downstream task. Here, downstream tasks are those defined in the same Markov Process, so that the agent can use $\phi$ to solve the task by learning from the provided rewards. Meanwhile, *actively-accomplishable* means that the reward function of the task depends only on controllable and action-relevant state variables $(s^{\mathcal{C}}, s^{\mathcal{R}})$. Additionally, if the task involves extra variables $V$, e.g., a varying goal $g_t$ for certain controllable state variables to reach, those variables should also be provided so that the agent can learn the reward accurately. Notice that actively-accomplishable tasks do not cover those involving action-irrelevant state variables (e.g., for the example in Fig. 1, a reward of +1 for opening door A when the clock shows 1 pm and 0 otherwise). However, as the reward function of a task can be arbitrarily designed using any state variable, being able to solve all tasks would mean that no state variable can be omitted (i.e., no state abstraction). We assume that, in practice, it is relatively uncommon for a task's reward function to involve action-irrelevant state variables, making $\phi$ fairly generally applicable.
+
+ So far, we have defined three types of state variables and the derived state abstraction for a known causal dynamics model. However, for real-world problems, such a model is usually not accessible. Instead, the agent can only collect transition data via its interactions with the environment. Hence, this paper introduces a novel method that: (1) learns a causal dynamics model $F_{\theta}: \mathcal{S} \times \mathcal{A} \to \mathcal{S}$, (2) learns a transition collection policy $\pi: \mathcal{S} \to \mathcal{A}$ to learn $F_{\theta}$ efficiently, (3) derives the state abstraction $\phi: \mathcal{S} \to \mathcal{S}^{\mathcal{C}} \times \mathcal{S}^{\mathcal{R}}$ and the dynamics $F_{\theta}^{\phi}$ in the abstract space, (4) learns a reward predictor for any actively-accomplishable task in the abstract space, $R_{\varphi}: \phi(\mathcal{S}) \times \mathcal{A} (\times \mathcal{V}) \to \mathbb{R}$, and (5) uses planning methods to solve the task with the learned $F^{\phi}_{\theta}$ and $R_{\varphi}$.
+
+ The key challenge when learning a causal dynamics model is to determine whether a causal edge exists between two state variables, i.e., $s_t^i \to s_{t+1}^j$ . First, adapting the work from Mastakouri et al. (2021), we present a method for inferring whether such a causal relationship exists, based on the following assumptions about the ground truth dynamics model:
+
+ A1. Causal Markov condition and Faithfulness in the underlying dynamics (definitions in appendix Sec. A.1).
+
+ A2. The state is fully observable and the dynamics are Markovian.
+
+ A3. The edge $s_t^i \to s_{t+1}^i$ exists for all state variables $s^i$ .
+
+ A4. No simultaneous or backward edges in time, i.e., for all $i, j$: $s_t^i \not\to s_t^j$ and $s_t^i \not\to s_{t-1}^j$.
+
+ A5. The transitions for each state variable are independent of each other, i.e., $\mathcal{P}(s_{t+1}|s_t,a_t)=\prod_{j=1}^{d_{\mathcal{S}}}p(s_{t+1}^j|s_t,a_t)$.
+
+ Except for A4, which requires no redundant information in state variables (e.g., state variables that include both joint angles and the end-effector of a robot arm), and A5, which does not necessarily hold for rich observation spaces (e.g., images), these assumptions are commonly made for causal inference and dynamic systems. Moreover, for partially observable or high-dimensional state spaces, low-dimensional disentangled representations that encode the space can be learned to adhere to A2 and A5.
+
+ **Theorem 3.1.** Assuming A1–A5, we define the conditioning set $\{a_t, s_t \setminus s_t^i\} = \{a_t, s_t^1, \dots, s_t^{i-1}, s_t^{i+1}, \dots, s_t^{d_{\mathcal{S}}}\}$. Then, for any two state variables at indices $i$ and $j$, if $s_t^i \not\perp \!\!\! \perp s_{t+1}^j | \{a_t, s_t \setminus s_t^i\}$, then $s_t^i \rightarrow s_{t+1}^j$. Similarly, if $a_t \not\perp \!\!\! \perp s_{t+1}^j | s_t$, then $a_t \to s_{t+1}^j$.
+
+ Proof in appendix Sec. A.2. This result shows that the causal relationship between state variables (or between the action and any state variable) can be inferred with one Conditional Independence Test (CIT). For simplicity, in the remainder of the paper, we will only describe the CIT between two state variables, $s_t^i \not\perp \!\!\! \perp s_{t+1}^j | \{a_t, s_t \setminus s_t^i\}$. For the test between the action and a state variable, $a_t \not\perp \!\!\! \perp s_{t+1}^{j} | s_t$, the same method applies by changing $s_t^i$ to $a_t$ and the conditioning set according to Theorem 3.1.
+
+ In theory, the CIT between $s_t^i$ and $s_{t+1}^j$ can be performed by measuring the Conditional Mutual Information, $\mathrm{CMI}^{ij}$:
+
+ $$\mathbb{E}_{s_{t},a_{t},s_{t+1}^{j}} \left[ \log \frac{p(s_{t}^{i}, s_{t+1}^{j} | \{a_{t}, s_{t} \setminus s_{t}^{i}\})}{p(s_{t}^{i} | \{a_{t}, s_{t} \setminus s_{t}^{i}\})\, p(s_{t+1}^{j} | \{a_{t}, s_{t} \setminus s_{t}^{i}\})} \right] = \mathbb{E}_{s_{t},a_{t},s_{t+1}^{j}} \left[ \log \frac{p(s_{t+1}^{j} | a_{t}, s_{t})}{p(s_{t+1}^{j} | \{a_{t}, s_{t} \setminus s_{t}^{i}\})} \right], \tag{2}$$
+
+ where the expectation is over the joint distribution of $\{s_t, a_t, s_{t+1}^j\}$, all $p$ are the ground truth conditional densities, and the derivation of Eq. (2) is in the appendix Sec. A.3. If $\mathrm{CMI}^{ij} \geq \epsilon$, where $\epsilon$ is a pre-defined threshold, then $s_t^i$ is necessary to predict $s_{t+1}^j$ accurately, $s_t^i \not\perp \!\!\! \perp s_{t+1}^j | \{a_t, s_t \setminus s_t^i\}$ holds, and the causal edge $s_t^i \to s_{t+1}^j$ exists.
+
+ In practice, as the ground truth joint distribution and conditional densities are unknown, the expectation is computed over the collected transition data $\mathcal{D}$ and CDL learns predictive models $\hat{p}(s_{t+1}^j|a_t,s_t), \hat{p}(s_{t+1}^j|\{a_t,s_t\setminus s_t^i\})$ to approximate conditional densities.
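In practice, this estimate is simply an average log-ratio of the two learned predictors over held-out transitions. A minimal sketch, where `full_logprob` and `masked_logprob` are hypothetical stand-ins for the learned densities and the threshold value is illustrative:

```python
def estimate_cmi(full_logprob, masked_logprob, validation_data):
    r"""Monte-Carlo estimate of CMI^{ij} (Eq. 2) over held-out transitions.

    full_logprob(s_t, a_t, s_next_j)   ~ log p^(s^j_{t+1} | a_t, s_t)
    masked_logprob(s_t, a_t, s_next_j) ~ log p^(s^j_{t+1} | {a_t, s_t \ s^i_t})
    Both callables are hypothetical stand-ins for the learned predictors.
    """
    total = 0.0
    for s_t, a_t, s_next_j in validation_data:
        total += full_logprob(s_t, a_t, s_next_j) - masked_logprob(s_t, a_t, s_next_j)
    return total / len(validation_data)

def edge_exists(cmi_value, epsilon=0.02):
    # keep the edge s^i_t -> s^j_{t+1} when the CMI estimate clears the
    # pre-defined threshold (epsilon = 0.02 is an illustrative value)
    return cmi_value >= epsilon
```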
+
+ ![](_page_4_Figure_1.jpeg)
+
+ Figure 3. The predictive model for the state variable $s_{t+1}^j$. Different conditional densities can be represented by applying different masks $M_j$. $\hat{p}(s_{t+1}^j|\{a_t,s_t\setminus s_t^i\})$ is shown as an example in the figure.
+
+ However, computing $\mathrm{CMI}^{ij}$ for all state variable pairs requires training $(d_{\mathcal{S}})^2$ predictive models, which is intractable. Instead, we develop a novel architecture and training paradigm which reduces the requirement to $d_{\mathcal{S}}$ models. As shown in Fig. 3, to predict each state variable $s_{t+1}^{j}$: (1) the action and all current state variables are individually mapped to features $f_j^a(a_t), f_j^1(s_t^1), \dots, f_j^{d_{\mathcal{S}}}(s_t^{d_{\mathcal{S}}})$, where each feature is a $d_f$-dimensional vector; (2) certain features are masked to $-\infty$ according to a binary mask $M_j$; (3) the overall feature $h_j$ is computed by taking the element-wise max of all features; and (4) a predictive network $q_j$ takes $h_j$ as input and predicts the distribution of $s_{t+1}^j$. This architecture can represent different conditional densities via different masks $M_j$: (1) for $\hat{p}(s_{t+1}^j|a_t,s_t)$, none of the features is masked; (2) for $\hat{p}(s_{t+1}^{j}|\{a_t, s_t \setminus s_t^i\})$, the feature $f_j^i(s_t^i)$ is masked to $-\infty$ so that it will not be used to predict $s_{t+1}^j$, as shown in Fig. 3; and (3) for $\hat{p}(s_{t+1}^{j}|\mathbf{PA}_{s^j})$, after deriving the parents of $s_{t+1}^j$ from $\{\mathrm{CMI}^{ij}\}_{i=1}^{d_{\mathcal{S}}}$, all non-parent features are masked to $-\infty$.
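A toy numpy sketch of steps (1)-(4) above, with random linear maps standing in for the learned feature extractors (all shapes and values here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_S, d_f = 3, 4  # number of state variables, feature dimension

# (1) one feature extractor per input (action + each state variable); plain
# linear maps stand in for the learned networks f_j^a, f_j^1, ..., f_j^{d_S}
W = rng.normal(size=(d_S + 1, d_f))  # row k maps scalar input k to a d_f feature

def overall_feature(a_t, s_t, mask):
    inputs = np.concatenate([[a_t], s_t])            # action first, then state
    feats = W * inputs[:, None]                      # (d_S+1, d_f) individual features
    feats = np.where(mask[:, None], feats, -np.inf)  # (2) mask dropped inputs to -inf
    return feats.max(axis=0)                         # (3) element-wise max over inputs

s_t, a_t = np.array([0.1, -0.3, 0.7]), 0.5
full = overall_feature(a_t, s_t, np.ones(d_S + 1, dtype=bool))  # no input masked
drop_s1 = np.array([True, False, True, True])  # mask out s^1 for p^(.|{a_t, s_t \ s^1_t})
masked = overall_feature(a_t, s_t, drop_s1)
# (4) a predictive network q_j would then map these features to a distribution
# over s^j_{t+1}; it is omitted here.
```

Because the max is taken over a subset of inputs, every coordinate of the masked feature is bounded above by the unmasked one, so a single set of extractors serves all masks.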
+
+ The architecture described above predicts one state variable $s_{t+1}^j$ , and the whole causal dynamics model $F_\theta$ consists of $d_{\mathcal{S}}$ such models, where $\theta$ parameterizes all feature extractors $\{f_j^a, f_j^{1:d_{\mathcal{S}}}\}_{j=1}^{d_{\mathcal{S}}}$ and predictive networks $q_{1:d_{\mathcal{S}}}$ . To train $\theta$ , we maximize the following log-likelihood:
+
+ $$\mathcal{L}_{\theta} = \sum_{j=1}^{d_{\mathcal{S}}} \left[ \log \hat{p}(s_{t+1}^{j} | a_{t}, s_{t}) + \log \hat{p}(s_{t+1}^{j} | \{a_{t}, s_{t} \setminus s_{t}^{i}\}) + \log \hat{p}(s_{t+1}^{j} | \mathbf{PA}_{s^{j}}) \right], \tag{3}$$
+
+ where $i$ is uniformly sampled from $\{1,\ldots,d_{\mathcal{S}}\}$ for each $j$, and $\mathbf{PA}_{s^j}$ are inferred from the $\{\mathrm{CMI}^{ij}\}_{i=1}^{d_{\mathcal{S}}}$ estimates learned so far. In Equation 3, the first two predictive likelihoods train the models needed to evaluate $\mathrm{CMI}^{ij}$, and the last one fine-tunes the performance of the inferred causal dynamics model $\hat{p}(s_{t+1}^j|\mathbf{PA}_{s^j})$. We split the collected transition data $\mathcal{D}$ into a training part used to maximize $\mathcal{L}_{\theta}$ and a validation part for evaluating CMI.
+
+ After training, CDL evaluates the causal graph by checking whether each $\mathrm{CMI}^{ij} \geq \epsilon$ and derives the learned state abstraction $\phi(s) = (\hat{s}^{\mathcal{C}}, \hat{s}^{\mathcal{R}})$ from the learned causal graph, according to Definitions 1–3 (for a ground truth variable or distribution $x$, we denote $\hat{x}$ as its learned prediction or distribution). The dynamics in the abstract space $F^{\phi}_{\theta}$ can also be derived by omitting the prediction networks for action-irrelevant state variables $\hat{s}^{\mathcal{I}}$.
+
+ The collected transition data $\mathcal{D}$ are important for accurate causal dynamics learning, as they are used in predictive model training (Eq. 3) and CMI evaluation (Eq. 2). An ideal transition collection policy $\pi$ would cover all state-action pairs to expose causal relationships thoroughly and actively explore states where the causal dynamics model is not yet accurate.
+
+ To this end, an RL agent is used for transition collection with a reward function that is the prediction difference between the dense predictor and the causal predictor learned so far:
+
+ $$r_t = \tanh\left(\tau \cdot \sum_{j=1}^{d_S} \log \frac{\hat{p}(s_{t+1}^j | s_t, a_t)}{\hat{p}(s_{t+1}^j | \mathbf{P} \mathbf{A}_{s^j})}\right), \tag{4}$$
+
+ where $\tau$ is the scaling factor and tanh is used to keep the reward bounded. This reward motivates taking transitions where the dense predictor is better than the causal predictor, which usually suggests the learned causal graph is inaccurate.
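A minimal sketch of this reward computation (Eq. 4), with hypothetical per-variable log-probabilities as inputs and an illustrative value for $\tau$:

```python
import math

def exploration_reward(dense_logprobs, causal_logprobs, tau=0.1):
    """Eq. 4: reward transitions where the dense predictor beats the causal one.

    dense_logprobs[j]  ~ log p^(s^j_{t+1} | s_t, a_t)
    causal_logprobs[j] ~ log p^(s^j_{t+1} | PA_{s^j})
    tau = 0.1 is illustrative, not a value prescribed by the paper.
    """
    # sum of per-variable log ratios; tanh keeps the reward bounded in (-1, 1)
    log_ratio = sum(d - c for d, c in zip(dense_logprobs, causal_logprobs))
    return math.tanh(tau * log_ratio)
```

When the two predictors agree, the reward is zero; when the dense model assigns much higher likelihood, the reward saturates toward 1, steering the agent to transitions that the inferred causal graph currently mispredicts.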
+
+ When solving actively-accomplishable downstream tasks, like many MBRL algorithms, CDL simultaneously (1) learns a reward predictor with the abstract state, action, and any provided extra variables as input, $R_{\varphi}:\phi(\mathcal{S})\times\mathcal{A}\left(\times\mathcal{V}\right)\to\mathbb{R}$ , and (2) uses a planning algorithm with the learned dynamics and reward predictor for action selection. As the reward predictor is learned in an abstract space rather than the full state space, it is more sample-efficient and less vulnerable to spurious correlations brought about by excessive information (i.e., action-irrelevant state variables). Meanwhile, planning in the abstract space also reduces the computation cost by relieving the need to roll out action-irrelevant state variables.
+
+ The reward predictor is modeled as a neural network and trained by minimizing the prediction error,
+
+ $$\varphi^* = \arg\min_{\varphi} \mathbb{E}_{(s_t, a_t, g_t, r_t) \sim \mathcal{B}}\, \mathcal{L}(R_{\varphi}(\phi(s_t), a_t, g_t), r_t), \tag{5}$$
+
+ where $\mathcal{B}$ is the task data collected so far, and $\mathcal{L}$ can take any loss function (we experiment with the absolute value of the prediction error).
+
+ ![](_page_5_Picture_1.jpeg)
+
+ Figure 4. (a) Different types of causal graphs. (b) Illustration of the chemical environment (left: ground truth causal graphs; right: transitions after applying the action).
+
+ For planning, we use the cross-entropy method (CEM; Rubinstein, 1997), a population-based optimization algorithm, to search for the best action with the learned dynamics and reward predictor. For each time step $t$, depending on whether the action is continuous or discrete, CEM initializes a time-dependent belief over the optimal action sequence, $a_{t:t+L} \sim \mathcal{N}(\mu_{t:t+L}, \sigma^2_{t:t+L})$ or $\text{Categorical}(\alpha_{t:t+L})$, where $L$ is the planning length. Starting from a unit normal distribution for $\mathcal{N}$ or a uniform distribution for Categorical, it repeatedly samples $J$ candidate action sequences, evaluates them based on cumulative rewards, and updates the action belief to the distribution of the top $K$ candidates. After $N$ iterations, the planner returns $\mu_t$ or $\arg\max \alpha_t$ as the current optimal action.
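A minimal sketch of the continuous-action CEM loop described above. The quadratic `reward_fn` and all hyperparameter values are illustrative; in CDL, the score of a candidate sequence would come from rolling out $F^{\phi}_{\theta}$ and accumulating $R_{\varphi}$:

```python
import numpy as np

def cem_plan(reward_fn, horizon=5, iters=4, J=64, K=8, seed=0):
    """Continuous CEM sketch: refine a Gaussian belief over an action
    sequence a_{t:t+L}. reward_fn scores one whole candidate sequence."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(horizon), np.ones(horizon)  # unit-normal initial belief
    for _ in range(iters):
        candidates = rng.normal(mu, sigma, size=(J, horizon))  # sample J sequences
        scores = np.array([reward_fn(a) for a in candidates])
        elites = candidates[np.argsort(scores)[-K:]]           # keep the top K
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu[0]  # execute the first action of the optimized sequence

# toy objective whose best constant action is 1.0 at every step
best = cem_plan(lambda a: -np.sum((a - 1.0) ** 2))
```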
2208.00147/paper_text/intro_method.md ADDED
+ # Introduction
+
+ In recent years, the computer vision community has witnessed astonishing performance breakthroughs in many traditional vision tasks. These breakthroughs are mainly due to the emergence of deep learning models and algorithms, publicly available large data sets for training, and powerful GPU computing devices. Despite their popularity, current deep learning techniques mostly rely on large-scale supervised data to train accurate models. A deep neural network (DNN) with tens of thousands of parameters cannot be easily adapted to a new task by training on just a few examples. In addition, conventional deep learning models lack the capability of preserving previous knowledge while adapting to new tasks. When a neural network is fine-tuned to learn a new task, its performance on previously trained tasks will significantly deteriorate, a problem known as catastrophic forgetting [@goodfellow2013empirical; @mccloskey1989catastrophic]. Exploring the fast learning and memorizing capability of deep learning models is an important step toward improving their practical application ability.
+
+ In this paper, we tackle this significant research direction --- Few-Shot Class-Incremental Learning (FSCIL). FSCIL requires the trained model to not only quickly adapt to continually arriving new tasks, but also to retain the old knowledge about previously learned tasks. Considering real-life application, an ideal FSCIL model needs to have the following characteristics: 1) The model needs to perform well on all classes equally, no matter what the training presentation sequence is; and 2) the model needs to be robust to extreme data scarcity, such as the one-shot scenario. However, current SOTA methods mainly use sole class-wise average accuracy to evaluate the model performance which cannot assess whether there is a prediction bias due to class imbalance and data imbalance. As there are normally more base classes than incremental classes and only limited data is provided for each incremental class, prediction bias towards base classes can easily happen. In addition, current SOTA methods rarely consider the extreme one-shot setting which can happen in the real world due to incremental data collection and rare data types. A well-established task setup is a cornerstone for the development of this task since an improper task setup will misguide the method design and lead to methods with limited application. Thus, before designing our method, we reformulate the setup for the FSCIL task.
+
+ Considering the paucity of incremental session data and the absence of old session data, we think the feature extractor trained on the base session should not be limited to extracting discriminative features for the base categories. The ability of representing new unseen samples from future novel classes is also critical. On the one hand, we are motivated by the similarity between FSCIL and face recognition tasks. The face recognition system learns to distinguish and recognize new faces quickly via its deep metric learning framework. The capability of handling new identities without the need for retraining is a major achievement of modern face recognition methods and is also what the FSCIL task desires. On the other hand, we are motivated by the intuitive connection between FSCIL and data augmentation. Data augmentation focuses on improving the generalization of a DNN. The capability of extracting diverse features that is transferable across base and incremental classes is important for the FSCIL task. Hence in this work, we adopt some ideas from both modern face recognition and data augmentation to design our method.
+
+ **The contributions** of this paper are: **(1)** We reevaluate the current benchmark task settings of FSCIL and propose additional experimental settings and evaluation metrics to more comprehensively assess the capability of FSCIL methods. **(2)** We solve the FSCIL task from a new perspective of the open-set problem. We analyze the angular penalty loss from face recognition and adapt it to FSCIL to improve the discrimination of the model. **(3)** We further analyze how data processing, such as class augmentation, data augmentation, and balanced data embedding affect FSCIL performance and aim to improve the generalization of the model. **(4)** Significant improvements on three benchmark datasets, CIFAR100, miniImageNet, and CUB200, demonstrate the effectiveness of our method against SOTA methods.
+
+ # Method
+
+ <figure id="fig: framework" data-latex-placement="tbp">
+ <img src="framework" style="width:12cm" />
+ <figcaption>The framework of our proposed method. On the one hand, with sufficient base task data available, angular penalty loss, class augmentation, and data augmentation are utilized to obtain a general open-set feature extractor. On the other hand, as only limited incremental task data is available, the few-shot new class data and the same number of carefully chosen base class data are utilized to generate the balanced class-wise prototypes. Nearest class mean and cosine similarity are adopted to do the final classification.</figcaption>
+ </figure>
+
+ The FSCIL task comprises a base task with sufficient training data and multiple incremental tasks with limited training data. During the learning of each new task, only the data for the current task is available, and the model is required to learn the new task while retaining old task knowledge.
+
+ To be specific, assume an FSCIL task with $m$ incremental steps. Let $\{D^0_{train}, D^1_{train}, ..., D^m_{train}\}$ and $\{D^0_{test}, D^1_{test}, ..., D^m_{test}\}$ denote the training and testing data for sessions $\{0, 1, ..., m\}$, respectively. Session $i$ has training data $D^i_{train}$ with the corresponding label space $C^i$. Training data from different sessions have no overlapping classes, so when $i \neq j$, $C^i \cap C^j = \varnothing$. During testing, the model is evaluated on all classes seen so far, so for session $i$, its testing data $D^i_{test}$ has the corresponding label space $C^0 \cup C^1 \cup ... \cup C^i$. In addition, for the base session ($i = 0$), a sufficient amount of training data is provided, and for the following incremental sessions ($i > 0$), only a limited amount of data is provided.
+
+ Most papers about FSCIL [@cheraghian2021semantic; @zhao2020mgsvf; @dong2021few; @zhang2021few; @zhu2021self; @cheraghian2021synthesized] follow the task setting proposed by Tao *et al.* [@tao2020few]. As FSCIL focuses on mimicking real-life situations, we think some aspects of the current benchmark experimental protocol are not sufficient to evaluate the efficiency of an FSCIL method. Thus, before proposing our method, we propose a more comprehensive and practical setup for the FSCIL task.
+
+ **Number of Few-Shot Data.** Current benchmark experiments are performed with 5-shot, 10-shot, or more data available for each incremental step. The extreme data-scarcity condition of 1-shot, which can easily occur in the real world due to extremely rare data types, is seldom considered.
+
+ **Evaluation Metric.** Current benchmark evaluation metrics mainly use class-wise average accuracy to evaluate the performance of an FSCIL model. As there are normally more base classes than incremental new classes, using average accuracy cannot indicate if there is a prediction bias between base and incremental classes. A method cannot be regarded as a good FSCIL method if its good performance is mainly determined by the base class performance.
+
+ **Dataset.** The similarity between base classes and new classes strongly affects model performance, since high re-usability of base features, as in fine-grained datasets, naturally reduces the challenge of catastrophic forgetting. An optimal FSCIL model needs to perform well not only on fine-grained datasets with a high distributional match but also on datasets with a low distributional match.
+
+ To sum up, to comprehensively simulate the real-world FSCIL condition and evaluate the robustness of an FSCIL method, we consider both benchmark 5-shot and 1-shot settings. Also, for the evaluation metric, we propose to use both average accuracy and harmonic accuracy to evaluate not only the overall performance but also the performance balance between base and incremental classes. In addition, we perform experiments on both general (CIFAR100 and mini-ImageNet) and fine-grained (CUB200) datasets to remove the possible performance benefit due to high similarity between base and incremental classes.
+
+ In this section, we propose the FSCIL method ALICE using angular penalty, class and data augmentation, and data balancing. First, for the base session, we apply the angular penalty loss to train the feature extractor to obtain compact intra-class clustering and wide inter-class separation. Class augmentation and data augmentation are also adopted to improve the generalization of the feature extractor. Then, for the incremental sessions, specifically chosen balanced data are utilized to generate prototypes for each class. Nearest class mean and cosine similarity are combined to perform the classification. Figure [1](#fig: framework){reference-type="ref" reference="fig: framework"} demonstrates the framework of our method.
+
+ <figure id="fig: angular_penalty" data-latex-placement="tbp">
+ <img src="angular_penalty_loss" style="width:12cm" />
+ <figcaption>An illustration of feature distributions of a cross-entropy loss trained model and an angular penalty loss trained model. The light color arrows represent examples of different class features on the latent feature space. The dark color arrows represent the average feature prototype of corresponding classes. Angular penalty loss provides more compact intra-class clustering and wider inter-class separation than cross-entropy loss. Compact clustering leaves more room on the latent feature space to accommodate the new classes.</figcaption>
+ </figure>
+
+ Under the FSCIL setting, we want to obtain a feature extractor which can rapidly adapt to continually coming new tasks, as well as be stable to overcome catastrophic forgetting for the previously learned tasks. Thus, we want to use a loss function that: 1) minimizes the distance between intra-class feature vectors, and 2) maximizes the distance between inter-class feature vectors. The compact intra-class clustering and wide inter-class separation will leave more room in the latent feature space for the incrementally arriving new classes and hence lead to better open-set classification. Figure [2](#fig: angular_penalty){reference-type="ref" reference="fig: angular_penalty"} illustrates an example. As many innovative angular penalty losses have been explored and proposed for face recognition studies [@wang2018cosface; @deng2019arcface] and considering the similarity between FSCIL and face recognition tasks, we adapt the cosFace penalty strategy [@wang2018cosface] to FSCIL training.
+
+ First, we use cosine similarity as the distance metric to measure data similarity and compute scores. It has two effects: 1) it makes training focus on the angles between normalized features instead of absolute distance in the latent feature space, and 2) the normalized weight parameters of the fully connected layer can be regarded as the center of each category. To calculate cosine similarity in the final fully connected layer, we fix the bias to 0 for simplicity. Then the data prediction procedure can be written as:
+
+ ::: small
+ $$\begin{equation}
+ f = \mathcal F(x)
+ \label{eq: feature_extractor}
+ \end{equation}$$
+ :::
+
+ ::: small
+ $$\begin{equation}
+ y_i = W_i^T f = \|W_i\| \|f\| \cos(\theta_i) = \cos(\theta_i), \nonumber
+ \end{equation}$$ $$\begin{equation}
+ \|W_i\| = \|f\| = 1
+ \end{equation}$$ []{#eq: cosine_similarity label="eq: cosine_similarity"}
+ :::
+
+ where $f$ is the feature obtained from the input image $x$ through the feature extractor $\mathcal F$. The feature $f$ and the weight parameter $W_i$ are normalized by $\ell_2$ normalization, so their magnitude is 1. The quantity $y_i$ is the calculated cosine similarity between the feature $f$ and the weight parameter $W_i$ for class $i$. It measures the angular similarity of image $x$ to class $i$, which indicates the likelihood that image $x$ belongs to class $i$.
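This normalized scoring step can be sketched in plain numpy (the function name and shapes are assumptions for illustration):

```python
import numpy as np

def cosine_logits(f, W):
    """Cosine-similarity scores: l2-normalize the feature f and each class
    weight W_i (bias fixed to 0), then take dot products, so y_i = cos(theta_i).
    Shapes: f is (d,), W is (C, d); returns a (C,) score vector."""
    f = f / np.linalg.norm(f)
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    return W @ f  # each score lies in [-1, 1]
```

For example, with `f = [3, 4]` and class weights `[[1, 0], [0, 2], [-3, -4]]`, the scores are the cosines of the angles between `f` and each class center, independent of vector magnitudes.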
+
+ Normally, the cosine similarity prediction is used with cross-entropy loss to separate features from different classes by maximizing the probability of the ground-truth class. The loss function is:
+
+ ::: small
+ $$\begin{equation}
+ \begin{aligned}
+ L & = -\frac{1}{N}\sum_{j=1}^{N}\log(p_j) = -\frac{1}{N}\sum_{j=1}^{N}\log(\frac{e^{y_{j}}}{\sum^{C}_{i=1}e^{y_i}}),\\
+ & = -\frac{1}{N}\sum_{j=1}^{N}\log(\frac{e^{\|W_{j}\| \|f\| \cos(\theta_{j})}}{\sum_{i=1}^{C}e^{\|W_i\| \|f\| \cos(\theta_i)}}),\\
+ & = -\frac{1}{N}\sum_{j=1}^{N}\log(\frac{e^{\cos(\theta_{j})}}{\sum_{i=1}^{C}e^{\cos(\theta_i)}})
+ \end{aligned}
+ \label{eq: cross_entropy}
+ \end{equation}$$
+ :::
72
+
73
+ where $N$ is the number of training images and $C$ is the number of classes. The quantity $p_j$ describes the softmax probability for image $j$. The quantity $y_j$ describes the cosine similarity towards its ground truth class for image $j$.
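The cross-entropy loss over cosine logits can be written compactly as follows. This is a hedged sketch; the function name and the example values are illustrative.

```python
import numpy as np

def cosine_cross_entropy(cos_logits, labels):
    """cos_logits: (N, C) cosine similarities; labels: (N,) ground-truth class ids."""
    z = cos_logits - cos_logits.max(axis=1, keepdims=True)    # numerical stability
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))  # log softmax
    return -log_p[np.arange(len(labels)), labels].mean()

# Even a feature perfectly aligned with its class (cos = 1) yields a nonzero loss,
# because cosine logits are bounded in [-1, 1] and the other classes still compete.
logits = np.array([[1.0, 0.0, 0.0]])
loss = cosine_cross_entropy(logits, np.array([0]))
```

This bounded-logit behavior is one motivation for the re-scaling factor $s$ introduced next.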
74
+
75
+ To make the features better clustered, we introduce a cosine margin $m$ to the classification boundary, inspired by CosFace [@wang2018cosface]. With the help of this extra margin, intra-class features become more compactly clustered and inter-class features become more widely separated. Following CosFace, we also re-scale the normalized feature by a preset scale factor $s$. The loss function is:
76
+
77
+ ::: small
78
+ $$\begin{equation}
79
+ \begin{split}
80
+ L_{AP} = -\frac{1}{N}\sum_{j=1}^{N}\log(\frac{e^{s(\cos(\theta_{j}) - m)}}{e^{s(\cos(\theta_{j}) - m)} + \sum_{i\neq j}e^{s\cos(\theta_i)}})
81
+ \label{eq: angular_penalty}
82
+ \end{split}
83
+ \end{equation}$$
84
+ :::
85
+
86
+ The scale factor $s$ is set to 30 and the cosine margin $m$ is set to 0.4 for all experiments.
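The angular-penalty loss $L_{AP}$ with the stated defaults $s=30$ and $m=0.4$ can be sketched as below (a minimal illustration; the function name is ours, not from the paper):

```python
import numpy as np

def angular_penalty_loss(cos_logits, labels, s=30.0, m=0.4):
    """L_AP: subtract margin m from the ground-truth cosine, rescale all logits
    by s, then apply softmax cross-entropy."""
    idx = np.arange(len(labels))
    z = s * cos_logits.copy()
    z[idx, labels] = s * (cos_logits[idx, labels] - m)  # margin on ground-truth cosine
    z = z - z.max(axis=1, keepdims=True)                # numerical stability
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[idx, labels].mean()
```

On the same logits, the margin strictly increases the loss relative to the margin-free version, which is what pushes the network toward tighter intra-class clusters.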
87
+
88
+ A diverse and transferable representation is key for open-set problems. Exposure to a large number of classes is one way to obtain such a feature extractor. To this end, a simple and effective method is to introduce auxiliary classes.
89
+
90
+ ::: wrapfigure
91
+ r0.5 ![image](class_fusion){width="47%"}
92
+ :::
93
+
94
+ Inspired by Mixup [@zhang2017mixup] and IL2A [@zhu2021class], we randomly combine pairs of different class examples from the base session data to synthesize auxiliary new class data. The new class data generating function is:
95
+
96
+ ::: small
97
+ $$\begin{equation}
98
+ \begin{split}
99
+ x_{k} = \lambda x_i + (1 - \lambda) x_j
100
+ \label{eq: class_augmentation}
101
+ \end{split}
102
+ \end{equation}$$
103
+ :::
104
+
105
+ where $x_i$ and $x_j$ are two training samples from two different classes $i$ and $j$ randomly picked from the $C$ base session classes, $\lambda$ is the interpolation coefficient, and $x_{k}$ is the generated new class data. Figure [\[fig: class_fusion\]](#fig: class_fusion){reference-type="ref" reference="fig: class_fusion"} shows an example. In our experiments, following IL2A [@zhu2021class], we restrict $\lambda$ to a randomly chosen value in $[0.4, 0.6]$ to reduce the overlap between the augmented and original classes. For a $C$-class classification task, pairwise combination generates $C \times (C - 1) / 2$ new classes, so the original $C$-class classification task becomes a $(C + C \times (C - 1) / 2)$-class classification task.
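The class-fusion step and the resulting class count can be sketched as follows. This is an illustrative sketch; the function name is ours, and $C=60$ matches the CIFAR100/miniImageNet base-session setting mentioned later in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_classes(x_i, x_j, rng):
    """Synthesize an auxiliary-class sample; lambda is drawn from [0.4, 0.6]."""
    lam = rng.uniform(0.4, 0.6)
    return lam * x_i + (1 - lam) * x_j

C = 60                               # base-session classes (illustrative)
n_aux = C * (C - 1) // 2             # one auxiliary class per unordered pair
total_classes = C + n_aux            # size of the augmented classification task
```

With $C=60$ this yields 1770 auxiliary classes and a 1830-way classification task, so the extractor sees many more decision boundaries than the base task alone provides.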
106
+
107
+ Exposure to varied image conditions during training is another good way to obtain a general feature extractor. Inspired by self-supervised learning [@chen2020simple; @chen2021exploring], we use two augmentations of each image to enhance training data diversity. Figure [1](#fig: framework){reference-type="ref" reference="fig: framework"} shows the augmentation procedure. During training, for each input image, we randomly generate two augmentations from a set of preset transformation strategies: random resized crop, horizontal flip, color jitter, and grayscale. Both transformed images are sent to the backbone network, and the losses from the two augmentations are averaged and back-propagated to update the model parameters. In addition, to keep the feature extractor from over-specializing to base session data, following SimCLR [@chen2020simple], we add extra projection layers before the final fully connected layer. By leveraging this nonlinear projection head, more general information can be retained in the feature extractor.
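The two-view, averaged-loss training step can be sketched as below. The transform here is a hypothetical stand-in (flip plus brightness jitter); a real pipeline would use an image library's resized crop, color jitter, and grayscale transforms.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_augment(img, rng):
    """Hypothetical stand-in for the preset transformation set."""
    out = img[:, ::-1] if rng.random() < 0.5 else img       # horizontal flip
    return np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)   # brightness jitter

def averaged_loss(img, loss_fn, rng):
    """Two independent views of the same image; their losses are averaged."""
    v1, v2 = random_augment(img, rng), random_augment(img, rng)
    return 0.5 * (loss_fn(v1) + loss_fn(v2))
```

The averaged loss is then back-propagated once, so each image contributes two gradient signals per step.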
108
+
109
+ After base session training, the projection head and the augmented classification head are discarded. Only the feature extractor remains, and it is frozen to avoid both overfitting and catastrophic forgetting. During testing, classification is performed by the nearest class mean under cosine similarity. As only limited data is provided for each incremental session, to alleviate possible prediction bias due to data imbalance, we use the same amount of few-shot data as the following incremental steps to generate the base class prototypes. To select suitable examples, we first use all base session data to calculate the class-wise mean for each base class. Then, for each base session class, the required few-shot number of examples with the smallest cosine distance to the calculated mean is used to generate the final prototype.
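The prototype-selection step can be sketched as follows: keep the $k$ features closest (in cosine distance) to the class mean and average them. This is a minimal illustration; the function name is ours.

```python
import numpy as np

def base_class_prototype(feats, k, eps=1e-12):
    """Select the k features with the largest cosine similarity (i.e. smallest
    cosine distance) to the class mean, then average them. feats: (n, d)."""
    feats_n = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + eps)
    mean = feats.mean(axis=0)
    mean_n = mean / (np.linalg.norm(mean) + eps)
    cos_sim = feats_n @ mean_n
    idx = np.argsort(-cos_sim)[:k]   # top-k most similar examples
    return feats[idx].mean(axis=0)
```

Selecting only $k$ examples per base class matches the few-shot budget of the incremental sessions, which is what removes the base-vs-incremental imbalance at prediction time.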
110
+
111
+ For the evaluation metric, current SOTA methods generally report the class-wise average accuracy. However, we argue that the class-wise average accuracy is not enough to evaluate the performance of an FSCIL method, since the number of classes from the base session is often a large fraction of the total number of classes. Following the experimental settings of benchmark papers, for CIFAR100 [@krizhevsky2009learning] and miniImageNet [@russakovsky2015imagenet], 60 out of 100 (60%) categories are used as base classes. For CUB200 [@wah2011caltech], 100 out of 200 (50%) categories are used as base classes. A model with good performance on the base session and poor performance on the following incremental sessions can still have a good average accuracy due to the high ratio of base classes to overall classes. For example, with 60 base classes and a single incremental step of 5 classes, an algorithm that achieves 100% accuracy on base classes and 0% on incremental classes would be rated 92.3% using average accuracy, yet it would have demonstrated no learning on the new task. To compensate for this deficiency of average accuracy, we adopt the harmonic accuracy metric, which requires well-balanced performance across both base and incremental classes. The formula for harmonic accuracy ($A_{h}$) is:
112
+
113
+ ::: small
114
+ $$\begin{equation}
115
+ \begin{split}
116
+ A_{h} = \frac{2 \times A_{b} \times A_{i}}{A_{b} + A_{i}}
117
+ \label{eq: harmonic_accuracy}
118
+ \end{split}
119
+ \end{equation}$$
120
+ :::
121
+
122
+ where $A_{b}$ is the average accuracy for base session classes and $A_{i}$ is the average accuracy for the following incremental session classes. In the simple example above, the harmonic accuracy would be 0%, which is much more appropriate since the network has indeed learned nothing at all. An ideal balanced FSCIL classifier will have equally high performance on both average accuracy and harmonic accuracy. If a model has good average accuracy but poor harmonic accuracy, its good performance is mainly due to the base session classes and the model has poor incremental learning capability overall.
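The metric and the worked example above can be checked in a few lines (a simple sketch; the function name is ours):

```python
def harmonic_accuracy(a_base, a_inc):
    """A_h = 2 * A_b * A_i / (A_b + A_i); defined as 0 when both accuracies are 0."""
    if a_base + a_inc == 0:
        return 0.0
    return 2 * a_base * a_inc / (a_base + a_inc)

# Worked example from the text: 100% on 60 base classes, 0% on 5 incremental classes.
avg = (60 * 1.0 + 5 * 0.0) / 65       # class-wise average accuracy, about 0.923
a_h = harmonic_accuracy(1.0, 0.0)     # harmonic accuracy is 0
```

The average accuracy hides the failure on new classes, while the harmonic mean collapses to zero as soon as either term does.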
2209.09338/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-09-17T10:37:20.304Z" agent="5.0 (X11)" etag="p9lhn9AIDbaG1rL_x6zB" version="20.3.2" type="github"><diagram id="VV5_KS7O6Hbvfwi9dA4s" name="Page-2">7V1bV9u4Fv41PMZL98tjgdJ5mZmu04fT05cuQwxxJ8RMYlrorz9yfIklK0QJvshg2tUmirEdf5+29v60tXWGL+6fPq3Dh8WfyTxaniEwfzrDl2dI/UCp/stanvMWTkDecLeO53kT3DV8iX9HRWN52GM8jzbagWmSLNP4QW+8SVar6CbV2sL1OvmlH3abLPWrPoR3xRXBruHLTbiMGof9N56ni7xVIL5r/yOK7xbllSErvvB9WB5cnGKzCOfJr9q18MczfLFOkjR/df90ES2zh1c+l/yGrvZ8Wt3YOlqlLr/wr/z9/Vv8bf719stnyP7zmTxfP86Ks/wMl4/FFy5uNn0un8A6eVzNo+wk4Ayf/1rEafTlIbzJPv2lMFdti/R+qd5B9fI2Xi4vkmWy3v4uvqXZn6w9WaW19vxHtW/SdfJPVPuEbX/UJ/Nws9heFVaHlQBkvziP1wruOFmp96tknbWfN59I+fWidRo91ZqKJ/QpSu6jdP2sDik+nRFQ8KDgK4YyoHnLrx3+rDxqUcMes6IxLDh3V51+B4t6USBzBEqobZQUGkVXQ6SJGiPi6urKhg4EHMvoNHQSdTtxmj1WCloCCwKsgUUxDBhuoIWEaKIFsegILTyhtQctqnctDI5ASwaYdwQYmQDbA5jQAIMUB9ACmMQWwCjqCC3a7ZB19ZFBAGzoXBNGAfEFHUT1gQoK0EAGc9ZEphrhWkeGHUbm5nH9s3p00Wr+IXPRts9pFemw6BiqQ6/i7G7Kd0X/Uu6lDQID04hCYsf06uqc8hcRScP1XZS+xMfiW0bz0pPcA1wNGGpxH8q2dbQM0/in7n/asCqu8DmJ1S3vei3nAaJEWRkgJOeUMd3ochRgAaof4/yb5HF9ExWnrDuT5lUgQAEisvqhum1HRAQUYCQ5AIgjYpju/KE2LqPYED7XDnvIDti88F0rK1NeVtLSQu2InJ91R+sKktOZziem+8B0EgD1fSWFKmQQUEjdGWSyHaZD8CLTKQoIFIIyRCWjUuBuqF71oaojw+55Lt4iz3PkX/jWpWYxvg5BRWCwk3VFeyIDJiQFnEjCCOvKwEu9U6sRpXvWy8OsbxJdcwprrM/aP4dpGq1X2xYEcIPgqNE7NuoBpsYVTjfcJaF94SmlARe7P1IPMwglgeA75jF6IoUZ10+MpTqxEfXvIWpbVCplzLfDJUS94hLEMIAQAy4BlIAjrru7RPkIELXAJSGIYVqxMrbQbbBvjU0Ocm2NTTfLcLOJb14ahY/ihieQMz3wZQAEapziGHEqiTBlPVeEJTLwzc6r6JSRCrLMyewXatQH1NFTnH7NPg9o8e5/pSenXl8+ld5Z9ua5eOM7PaiuCROQiVavDgMUPaTc6w2ZFyG0X7I4SM4d2oWDDjXGXjFkJg0TggAJEDuNGJjigIIm9NWUhLIrcPcx6pcYDtL2oMTwy5mYScSMUYAo/6IjauBs4EIccyQExJQB1i83HIT047ixlwcHncoyKcATHhBsOgNEoIAQQQFiCjDJT3UjJVDjCKCYA0GwFFLXdJjyVvv2MxxE+4kFZY8lIsCQCIgEJuqA06MJhgMJIWeEAyoEItplOGzanK5p4KBoDzlQlNQ4LHVAv0ikAkc9IqUSBKc6oRTr9oJAGnCYRblQQgChowrXGmkc5OEhSeNZYKJsLTAMCgWyoSE4RyR6nIMgDxhBAgEKs39xz56mTTRly3Q7H/5T4wT79zHLedumScw2W9X/gzoAkYenLdzl5+rVXfH/sjz+lSci2V96keXm3YTLM37+1xm//B4XHxTXUd8/v5R+
edW8/SZlq0H0NHpKdWrrUxMFm+vzGEVTuIzvMj3vRhE8Uu3nWRJArO7vQ/HBfTyfZ5exZjLoHairtDgpTYdIuSqW5B1Bm92HVKxvPzHOA3n1dPvkmywvAAu4Mc0DSXCqojajPICIZbZIKH9JMt3LoYRr0my/Y1fVB7y3V7dn9KOyVMsZzGyVelUYsOvb/HW6iNKw+Cg/KPvbg0WbR7fhY/5Na0Zt1zoCuwZ1tkPObQluZUhW73NcdmXS0CkmzXGa/c34UtLUaTDYjUfHz3XTpt3jgUAQQQExwMqz7lfQRccJugdJYJX5K2V/lDI/fFmBPzntQVGhnvVAmpehnACCMcoGr755cZyee4gXvmNcfrvajBykAiAEgKTQnGRxx1iY3Z0q/1adj1NMMlWHk35RtSmxXvoiU+x0vI8hzJwsT0Kn44Rfz0In37JQkHxpepgSPaPpxIAKYqifFolA7M9w7dpq2STjyWq9EauFqalVYuVzl9lVA5qt4zRn38wW98pszaSKmkxXSEU+5hyk84S3oUkTxgZMfkAOebz7stcd5irMVX57WXFwzsqzLAjIjewYAgNBGBCAgMzz5iemQ8y40PNxKeWuGZTHJo4jY6hEXJQ+V6ep49hBkPaDc36lZDU5hxVeghIOgPqXmWnb7gkXQj8xJc0Zt5Y4h4GxQgejfjg3Gim7rlwvMpW6rmxPPtnBWThMjE4CbHI1tqyf70yuLqeZR06+HxP5DpEPmVPADFCbjtEv/WxC+XH0423Rb/+JCvo9LOLvtTm74rVqnebsjil8wA0VBEJbXEoqumruCgtKDb19LraebL2bvINvytljpimhkgYCAQElUW4TE4a5cPf2zIRa1pwV7DjqxK8X8yeLNDaLpLMuCzYtFolCq0WiHVqk1pO7W7NInkkejAcqfFNDCc9S8yXQHW0s5EtCu7sAwqDpQ9Vc+L4MlOep3qX/eDjVG3mW6s05CCTECEvACaVGcToO0cmJ36x0oEsLg0DAkGCEYMagZMZ42TWDbBq8r0PcXXh/H1oGuW37NMwdMcxRYpSDE6Iqp6D1sD3DXFdTQvi4eh19jnLCKwPVEFmhCKQQhAjEqVB2xEjBPSLRVw/JuHQW9tuyScQmvPtqkx42vYidb8DmEIOyWDYNDrEaHDM7rL1qoa0X4XA0N86zy2NbaDkTDAXIUATY6cvkbOdjAva9tJbYpHGDKH3MBR5eh1KWCRjbgEY5bazQZYo2bURqekhPedN5P3rasHWCHRC/66NOlLm9P87QRWxzeP2f1TDZ3tUsB2S6K4PKXlyf4bAwurPa78SmKo9lgm05zeoepBw3ChapYD8oDfJQ02rEJmJnOH7YAeolRsvwOlqehzf/3G3bjSvlnybrebQ2b2tnXnBpbmpFRfPB5KxRx7szK0R1vQfWslYGM0Q23TDnhOrwihbnsDa0vFV25AX9h2aHWeKMVfVCBmOHTRPM2bGaONE/JxiQ1uSMXjnhQSHdvc92bLnaRAXEjEqOKSaSQ4b1sjJEgNPrlBlp25jBIFsSV/3op+04bKYOKbRZX31w7zbV9mDhdXkGYMeySiQwJ+qgQLYRuJz51b2ywKwP1t7mJw7iU63P2MzfZhE+ZG/jTXLxeB1lXea6Mn9/P6bLOOuG219Xh3xY3W3PC6lhvBS5tsZrnaRh0TSTwLDXbrvdNIF8mRhHLO9voFjpQnUB0bJLCu9qs6GyZvyE4EkIcstubP3i57A0f8JvL36z3YKjwRB0yNObENyP4K5lKAS5Qx/s2kVo1AvPqrs5eggQkWocav/hvGN6V8Q4nd4MDW6guE19mxB0RbD0sAaDzyHrcoJvL3xYDt8BHVIkJwT3IlguvB4MPocaARN8PgfJ3EE5nBD0N0gWDhrehJ/PQbJ4x0JjGwgOHyRLBxvadZCMzV0jGXGOkQHsLkaGYHQGymFfckd6V8QYtY8OwegslFcQDu2kw7Ia1ITfaV4686ALjm4uxCsILUUL+oVvdEqx
V/B54KVDMDqp2C8Ih3fTIXCQi3M/3ZoL13xYrXnvFDeqQ2LqnAbT6SQXBKOTaNsj/o4yo57mgmB0Oq1fGA490QXB6GRavwD0IYyG71gJaQPDweNo+I51kDYA9GC6C8J3rIW0geHgE14QvmMppA0EfQim4TvWQ1rB0INo2mVX+2GiaV7OOe3yRRvPqv842mX/d784z+Qlhpc2zoNIACGO4HxFlnHH0S6bt08YehxHu2ykPgHoeRw9Oi3ELwwHj6Nd9hKfAPQ7jrbu6j1hOKI4ugRsQnC8cbTL7tUThn7H0dadpr0sO3arbeSTF3eu6pCVRZ+Lj/KDppLPznsXm442tVaVKatS9FKbDI5nu+yJmt1RsxnFZ1KMbdeNXrlpk9OmImkDFMTCTK9Wj3f7lw1VDwu+sGW2s8WB+ywOP99rMw7yzJ0/60jdSyF5Z/AV1W3VeamyaJkLFT6mSX6/R1G4FczLlMCqGjAIynIIB0An3YG+f7eMFkC/fPegI4Z00LFrR+8Qc5sYOFVD7M34I2FywtkQdGf9rXtDt2UIkLopEATBWSZjXrx7o4DLXa6qkQAFgjcIwHq1CtaNmovK2fFqi9xkGjo3DSYzvDANNtXTy5C2iGAVhZTD+de0b7cj54zhyG6NhG0bm2pzm9ewbh4tWfz77o+/Hzez5x93z9+/ff0xe/W28Ri4U+X89dtOeDA0mWXi9zDBwpf9i2wgMIUMTGwiG5IvFBdunRzOi5L2PQF7dtQxT2Ym9YSpI9YetVSC1/pkxpds41oY4wCP9/PE10li6x2PL9fGI/z6nCC23u/4smw8Qq/fyWG70zG+LBufAOx1ZtgOoEOKTd+eAaVuqdRdugXjW8LTKbGh19lj9lseX9qKTwgO7hqMb/mOV/j54ByMrpqJXwh64B14oBuYsgFxcw7aWmdlfy6TatCkia+rrOy3POkGr0GwxzVW9vudhIPXwNfvCivrLY9wfY5PCPa4vsoO3/iW5vgEnwfxcQnYhOBJCA4eH49wWY5P+HkQH1sX5UwIjig+tq5dGTY+pj7Ex9Z1E++X2flzHlV8bF3bMCE4lvjYukphgm9M8fGkcLwGwaHjY+vqgAm+EcXH1vT+CcHRxMfWJPwJvxHFx3hSOF6F4PDxMXZQOLqOj81N6wgjtrGly03r7M9mdNkRzuVpT2F3TpVRueh4dCqHVwgO7qKPTuLwCr5+d6yz3/LoVA6vEOxvwzr77Y5O4fAKPQ8cdDI6kcMvBLt00NXbdZIt460++6Se+OLPRN26avw/</diagram></mxfile>
2209.09338/main_diagram/main_diagram.pdf ADDED
Binary file (29.8 kB). View file
 
2209.09338/paper_text/intro_method.md ADDED
@@ -0,0 +1,13 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Current advancements in Graph Neural Networks (GNNs) are being evaluated on a small range of tasks and accompanying datasets. Though these datasets are sourced from different domains, they require preprocessing the raw data into a computationally digestible format, referred to as *embeddings*, to be usable by GNNs. In this work we focus on node classification and thus *node embeddings*.
4
+
5
+ Common node classification datasets [\[2,](#page-4-0) [5,](#page-4-1) [9,](#page-4-2) [12\]](#page-4-3) focus on text classification, with the primary node embedding being Bag of Words (BoW). Though this is a suitable method for text, it results in current GNNs being optimised for BoW. Equally, this form of node embedding is not always applicable (to image data, for example), and so GNNs are only being optimised for limited forms of data, mainly text. Existing literature has focused on the shortcomings of GNN training and the effect that the dataset can have on model performance [\[9\]](#page-4-2), but there is no comment on how different node data preprocessing may affect performance.
6
+
7
+ To demonstrate this problem we introduce three new datasets as alterations of existing datasets that are commonly used in the literature. Each dataset is accompanied by a set of node embeddings. To evaluate the effect of node embeddings on GNN performance we train and test standard GNN architectures: Graph Convolution Network [\[5\]](#page-4-1), Graph Attention Network [\[11\]](#page-4-4) (with GATv2 [\[1\]](#page-4-5)), and GraphSAGE [\[2\]](#page-4-0) with two different samplers. For these models we find that their performance and relative rank is dependent on the embeddings used. In this work we make the following contributions:
8
+
9
+ - We put forward three new datasets and a rich set of accompanying embeddings to better test the performance of GNNs.
10
+ - We demonstrate that GNN performance depends on the embedding used. The choice of embedding provides large variance and prevents a fair comparison of different architectures.
11
+ - We demonstrate that current GNN architecture design overfits to limited styles of embedding.
12
+
13
+ S. Purchase et al., Revisiting Embeddings for Graph Neural Networks (Extended Abstract). Presented at the First Learning on Graphs Conference (LoG 2022), Virtual Event, December 9–12, 2022.
2301.11647/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2301.11647/paper_text/intro_method.md ADDED
@@ -0,0 +1,418 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Time series are ubiquitous in many areas such as finance, economics, robotics, agriculture, and healthcare. One is typically interested in modelling the evolution of a target quantity through time, which is known to be affected by a set of time-evolving features. For example, pollution levels in a city are driven by quantities such as temperature, pressure, traffic, or economic activity measured through time. Mathematically, one wishes to model the evolution of a quantity $y_t \in {\mathbb{R}}^p$, $p \geq 1$, as a function of some time evolving features $x_t \in {\mathbb{R}}^d$, $d \geq 1$, for $t \in [0,1]$. In other words, the goal is to learn the dynamics that link the target to the features.
4
+
5
+ Such an interaction is typically modelled via differential equations, which are a common choice of model in natural sciences [@zwillinger1998handbook]. In this article, we assume that there exists a function $\mathbf{G}: {\mathbb{R}}^p \times {\mathbb{R}}^d \to {\mathbb{R}}^p$ such that $$\begin{equation}
6
+ \label{eq:ode}
7
+ y_t = y_0 + \int_0^t \mathbf{G}(y_s, x_s)ds
8
+ \end{equation}$$ or equivalently $$dy_t = \mathbf{G}(y_t,x_t)dt, \quad y_0 \in \mathbb{R}^p.$$ The value $y_t$ depends on the trajectory of the features time series $x_s$ up to time $t$. Learning the dynamics of the system can be framed as learning the solution map of [\[eq:ode\]](#eq:ode){reference-type="eqref" reference="eq:ode"}, i.e., a function $\Psi$ which, given a time $t$, an initial point $y_0 \in \mathbb{R}^p$, and the history of the path up to time $t$, denoted by $x_{[0,t]} = (x_s)_{s \in [0,t]}$, outputs the value of $y$ at time $t$.
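A forward Euler discretization makes the dynamics $dy_t = \mathbf{G}(y_t, x_t)\,dt$ concrete. This is a generic numerical sketch, not the paper's method; the choice of $\mathbf{G}$ below is an arbitrary illustration.

```python
import numpy as np

def euler_solve(G, y0, x, t):
    """Forward Euler scheme for dy = G(y, x) dt.
    t: (k,) increasing times; x: (k, d) features sampled on t. Returns y on t."""
    y = np.empty((len(t), np.size(y0)))
    y[0] = y0
    for i in range(1, len(t)):
        y[i] = y[i - 1] + G(y[i - 1], x[i - 1]) * (t[i] - t[i - 1])
    return y

# Illustrative check: with G(y, x) = -y the solution is exponential decay.
t = np.linspace(0.0, 1.0, 1001)
x = np.zeros((len(t), 1))                       # features unused by this toy G
y = euler_solve(lambda y, x: -y, np.array([1.0]), x, t)
```

At $t=1$ the numerical solution is close to $e^{-1}$, and the scheme accepts any individual-specific, irregular grid $t$.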
9
+
10
+ If we know $\Psi$, we gain access to the values of $y$ at any point in time provided we know the values of $x$ up to this point; this encompasses many tasks such as forecasting or interpolating between points of $y$. We specifically have in mind applications where we have easy access to $x$ but limited access to $y$.
11
+
12
+ This problem is extremely common in healthcare. For example, in obstetrics, the lactic acidosis (LA) of the fetus, which is a proxy for fetal distress, is a quantity of high medical interest for predicting complications in the first hours after birth. This biomarker cannot be measured during pregnancy but only at birth, because the measurement is highly invasive. Some vitals such as heart rate and fetal movement are however easy to measure during pregnancy. In this case, $x$ is the non-invasive measurements made during pregnancy, while $y$ is the invasive measurement of LA at birth. Predicting the value of $y$ at any time $t$ (both before and at birth) would allow for early diagnosis. Similarly, after surgery, patients are often monitored to detect hemorrhage. While some vitals such as heart rate or saturation are monitored in continuous time, haemoglobin---which is highly predictive of hemorrhage---is only measured from blood samples taken a few times a day, which can significantly delay hemorrhage diagnosis.
13
+
14
+ In practice, the functions $x$ and $y$ are measured on discrete grids and take the form of time series. These often present a lot of heterogeneity, both within and across individuals.
15
+
16
+ - For every individual, the time between any two measurements can vary, and thus individuals may not be recorded on the same grid.
17
+
18
+ - The number of total sampling points might vary between individuals.
19
+
20
+ - Each measurement in time might be corrupted by measurement noise.
21
+
22
+ Mathematically, we consider $n$ pairs of functions $\{(x^{1}, y^{1}), \dots, (x^{n}, y^{n})\}$. Each $x^i$ deterministically produces a specific $y^i$ through the Ordinary Differential Equation (ODE) [\[eq:ode\]](#eq:ode){reference-type="eqref" reference="eq:ode"}. We call $x^{i}$ the feature path and $y^{i}$ the target path. Both $x^{i}$ and $y^{i}$ are only observed at a finite set of times specific to every individual. We denote by $$D^{i} = \left(t^i_1,\dots,t^i_{k_i} \right), \quad i= 1, \dots, n,$$ the sampling grid of $x^{i}$ and by $\bar{D}^{i}$ the sampling grid of $y^{i}$. We stress that both the number of sampling times $k_i$ and the sampling times $t^i_{1},\dots,t^i_{k_i}$ themselves are individual specific, as described in $(i)$ and $(ii)$. Moreover, the observations are corrupted by additive noise, such that we observe $$X^i_t = x^i_t + \xi^i_t$$ for all $t \in D^i$, and similarly $Y_t^i = y^i_t + \varepsilon^i_t$ for every $t \in \bar{D}^i$, where the $\xi^i_t$ and $\varepsilon^i_t$ are sub-gaussian i.i.d. random vectors. Each input may therefore be written as a matrix $\mathbf{X}^i = (X^i_t)_{t \in D^{i}} \in \mathbb{R}^{k_i \times d}$ which we call the feature time series. Similarly, the quantity of interest is a matrix $\mathbf{Y}^{i}=(Y^i_t)_{t \in \bar{D}^{i}} \in \mathbb{R}^{m_i \times p }$ (where $m_i$ is the length of $\bar{D}^{i}$) and is called the target time series. The grid $\bar{D}^{i}$ is assumed to be a subset of $D^i$: in our setup $y^{i}$ is hard to sample and therefore measured at only a few points (and sometimes only one) while $x^{i}$ is easy to access and measured at high frequency. Our goal is to approximate the dynamics linking $x$ and $y$ from the irregular, heterogeneous, and fuzzy data $\mathbf{X}^{i}$ and $\mathbf{Y}^{i}$.
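The observation model above can be illustrated by generating one individual's irregular, noisy feature time series. Grid sizes, the latent path, and the noise scale below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

k_i = int(rng.integers(10, 30))                  # individual-specific number of samples
t = np.sort(rng.uniform(0.0, 1.0, size=k_i))     # individual-specific sampling grid D^i
# Latent 2-dimensional feature path x^i, evaluated on the grid.
x = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
# Observed series X^i_t = x^i_t + xi^i_t with i.i.d. (here Gaussian) noise.
X = x + 0.05 * rng.normal(size=x.shape)
```

Each draw produces a different grid length and different sampling times, reproducing the across-individual heterogeneity described in the bullet points.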
23
+
24
+ Such heterogeneity is difficult to handle with classical machine learning algorithms such as Long Short-Term Memory networks [LSTM, @hochreiter1997long], which assume that the data is regularly sampled. Some more recent approaches  [@rubanova2019latent; @de2019gru; @kidger2020neural; @herrera2021neural] have adapted these models by introducing continuously evolving hidden states to account for the irregular spacing between observation times.
25
+
26
+ We build upon the approach of Neural Controlled Differential Equations  [Neural CDE, @kidger2020neural; @morrill2021neuralrough], which have proven to be very successful for time series classification and online prediction tasks [@morrill2021neural]. The key idea of Neural CDE is that under some fairly mild assumptions, any general ordinary differential equation of the form [\[eq:ode\]](#eq:ode){reference-type="eqref" reference="eq:ode"} can be rewritten as $$\begin{equation}
27
+ \label{eq:cde}
28
+ y_t = y_0 + \int_0^t \mathbf{F}(y_s) dx_s,
29
+ \end{equation}$$ where $\mathbf{F}: {\mathbb{R}}^p \to {\mathbb{R}}^{p \times d}$ is a matrix-valued vector field, such that the right-hand side of [\[eq:cde\]](#eq:cde){reference-type="eqref" reference="eq:cde"} is a matrix-vector product [see, e.g., @fermanian2021framing Proposition 2, for a proof]. The function $x$ is often called the driver of the CDE. In a Neural CDE setting, the driver $x$ is a continuous interpolation of the feature time series, $y$ corresponds to a continuously-evolving state, and $\mathbf{F}$ is chosen to be a neural network. This network is then trained such that the values of $(y_t)$ can be used as features for classification or regression tasks. While Neural CDEs have been shown to outperform other architectures with limited memory usage, their training time is considerable and no statistical guarantees exist.
30
+
31
+ We model the interactions between the target and the feature paths through a CDE of the form [\[eq:cde\]](#eq:cde){reference-type="eqref" reference="eq:cde"}. This modelling choice encapsulates a broad variety of settings, since the vector field $\mathbf{F}$ can be any (regular enough) function. A priori, the solution map $\Psi$ of this CDE is a complex function of time and the history of $x$ up to $t$; however, by linearizing the model, we are able to approximate $\Psi$ by a simple scalar product between a deterministic transformation of the history of $x$, called the signature of $x$ at order $N \geq 1$ and denoted by $S_N(x_{[0,t]})$, and a time independent matrix $\theta_N^\ast$. Informally, we have $$\Psi\big( x_{[0,t]}, t \big) \approx S_N\big(x_{[0,t]} \big)^\top \theta_N^\ast.$$ Two striking features of this linearized model are *(i)* that $\theta^*_N$ can be learned on any time horizon $[0,t]$ since it is independent of time, and *(ii)* that once it has been learned, the model can be called at any time $t$.
32
+
33
+ Our contributions are threefold. First, we frame the task of learning the interactions between two time series as learning the flow of a CDE, which can be linearized in the signature space. While the connection between CDEs and signatures is well-known, this is the first time CDEs are used as a statistical model. We then leverage this linearization to derive statistical guarantees on the prediction error with an explicit dependence on both sampling irregularities and the noise affecting measurements. To our knowledge, this is the first bound of this type for signature-based models, allowing for better understanding of the dependencies between prediction performance and sampling roughness. Finally, the resulting algorithm, called SigLasso, is shown to be computationally cheap and competitive compared to existing baselines on a wide range of simulated data and a real-world example of hospitalization growth rate prediction during the Covid pandemic.
34
+
35
+ # Method
36
+
37
+ Let $(E, \left\lVert\cdot\right\rVert_E)$ be a normed vector space and $x:[0,1]\to E$. The supremum norm of $x$ is defined for all $t\in [0,1]$ as $$\left\lVert x\right\rVert_{\infty,[0,t]} = \sup_{s \in [0,t]} \left\lVert x_s\right\rVert_E.$$
38
+
39
+ When referring to the total variation $\left\lVert x\right\rVert_{\textnormal{1-var},[0,1]}$ of a path $x:[0,1]\to \mathbb{R}^d$ over the whole domain, depending on the mathematical context, we will sometimes drop the time subscript and simply write $\left\lVert x\right\rVert_{\textnormal{1-var}}$.
40
+
41
+ When referring to a matrix $A = (A_{ij}) \in \mathbb{R}^{n \times p}$, we classically define the infinity and Frobenius norms by $$\left\lVert A\right\rVert_{\infty} = \max_{\substack{i=1,\dots,n \\ j=1,\dots,p}} \abs{A_{ij}} \quad \text{and} \quad \left\lVert A\right\rVert_F = \sqrt{\sum\limits_{\substack{i=1,\dots,n \\ j=1,\dots,p}} \abs{A_{ij}}^2}.$$
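These two matrix norms can be checked numerically in a couple of lines (the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, -3.0], [2.0, 0.5]])
inf_norm = np.abs(A).max()            # entrywise max, as defined above
fro_norm = np.sqrt((A ** 2).sum())    # Frobenius norm
```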
42
+
43
+ We now introduce some notations to take advantage of the structure of $\theta^*_N$. The true parameter of the Taylor expansion of the model CDE, defined in Equation [\[eq:linear_problem\]](#eq:linear_problem){reference-type="eqref" reference="eq:linear_problem"}, can be written in block notation as
44
+
45
+ $$\begin{align}
46
+ \label{eq:theta_matrix}
47
+ \theta^*_N = \left[\begin{array}{ccc}
48
+ \begin{matrix}
49
+ \theta^*_{[0],1} & \cdots & \theta^*_{[0],p} \\
50
+ \hline
51
+ &&\\
52
+ \theta^*_{[1],1} & \cdots & \theta^*_{[1],p} \\
53
+ &&\\
54
+ \hline \\
55
+ && \\
56
+ & & \\
57
+ \theta^*_{[2],1} & \cdots & \theta^*_{[2],p} \\
58
+ && \\
59
+ &&\\
60
+ \hline
61
+ && \\
62
+ & \vdots &\\
63
+ && \\
64
+ \hline
65
+ &&\\
66
+ && \\
67
+ &&\\
68
+ \theta^*_{[N],1} & \cdots & \theta^*_{[N],p} \\
69
+ && \\
70
+ && \\
71
+ &&\\
72
+ \end{matrix}
73
+ \end{array}\right] \in \mathbb{R}^{s_d(N) \times p}, \quad \text{where} \quad \theta^\ast_{[k], \ell} \in {\mathbb{R}}^{d^k \times 1}, k=0, \dots, N, \, \ell=1, \dots, p.
74
+ \end{align}$$
75
+
76
+ Every column of $\theta_N^*$ corresponds to a dimension of the target, while blocks of rows correspond to signature layers. Thus for every $k=0,\dots, N$ and $\ell = 1, \dots, p$, $\theta^*_{[k], \ell}$ is a column vector of size $d^k$.
77
+
78
+ Similarly, for a general $\theta \in \mathbb{R}^{s_d(N) \times p}$ and the SigLasso estimator $\widehat{\theta}_{N,M}$, we will refer to the blocks forming these matrices as $\theta_{[k],\ell}$ and $\widehat{\theta}_{[k],\ell}$ respectively, for $k=0,\dots, N$ and $\ell=1,\dots, p$.
79
+
80
+ Likewise, the signature feature matrix $\mathbf{S}_N^\mathcal{D} \in \mathbb{R}^{M \times s_d(N)}$ can be written in block notation as $$\mathbf{S}_N^\mathcal{D} = \begin{bmatrix}
81
+ \, 1 & \vline & \mathbf{S}^\mathcal{D}_{\cdot,[1]} & \vline & \mathbf{S}^\mathcal{D}_{\cdot,[2]} & \vline & \cdots & \vline & \mathbf{S}^\mathcal{D}_{\cdot,[N]}
82
+ \end{bmatrix} =
83
+ \begin{bmatrix}
84
+ \, 1 & \vline & \mathbf{S}^\mathcal{D}_{1,[1]} & \vline && \mathbf{S}^\mathcal{D}_{1,[2]} && \vline & & \vline &&& \mathbf{S}^\mathcal{D}_{1,[N]} &&& \\
85
+ \, \vdots & \vline & \vdots & \vline && \vdots && \vline & \cdots & \vline &&& \vdots &&& \\
86
+ \, 1 & \vline & \mathbf{S}^\mathcal{D}_{n,[1]} & \vline && \mathbf{S}^\mathcal{D}_{n,[2]} && \vline & & \vline &&& \mathbf{S}^\mathcal{D}_{n,[N]} && \end{bmatrix},$$ where for any $k=1, \dots, N$, $\mathbf{S}^\mathcal{D}_{\cdot,[k]} \in {\mathbb{R}}^{M \times d^k}$ and, for every individual $i=1,\dots,n$, $\mathbf{S}^\mathcal{D}_{i,[k]} \in {\mathbb{R}}^{m_i \times d^k}$ (recall that $m_i$ is the number of measurements of the target path $y^i$). More precisely, given her target sampling grid $\Bar{D}^i = (\Bar{t}_1^i,\dots, \Bar{t}^i_{m_i})$, the individual-specific signature block of depth $k$ is equal to $$\mathbf{S}^\mathcal{D}_{i,[k]} = \begin{bmatrix}
87
+ 1 & S^{(1)}(X^i_{[0,\Bar{t}^i_1]}) & \cdots & S^{(d,\dots,d)}(X^i_{[0,\Bar{t}^i_1]})\\
88
+ \vdots & \vdots & & \vdots
89
+ \\
90
+ 1 & S^{(1)}(X^i_{[0,\Bar{t}^i_{m_i}]}) & \cdots & S^{(d,\dots,d)}(X^i_{[0,\Bar{t}^i_{m_i}]})\\
91
+ \end{bmatrix},$$ where the path $t \to X^{i}_t$ is a linear interpolation of the observed time series $\mathbf{X}^{i}$. The same notations will be used for the true signature feature matrix $\mathbf{S}_N$. We use the bracket notation $[\cdot]$ both in $\theta^*_N$ and $\mathbf{S}_N^\mathcal{D}$ to emphasise that both the columns of the feature matrix and the rows of the learned parameter correspond to words of the alphabet $\{1,\dots,d\}$.
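For a linearly interpolated time series, the signature entries above can be computed segment by segment. The sketch below (not the paper's implementation) computes the depth-2 signature using Chen's relation; dedicated signature libraries handle arbitrary depth.

```python
import numpy as np

def signature_depth2(X):
    """Depth-2 signature of the linear interpolation of X, shape (k, d).
    Returns (S1, S2) with S1[i] = S^(i) and S2[i, j] = S^(i, j)."""
    d = X.shape[1]
    S1, S2 = np.zeros(d), np.zeros((d, d))
    for t in range(1, len(X)):
        dx = X[t] - X[t - 1]
        # Chen's relation at level 2: concatenate the path so far with one segment.
        S2 += np.outer(S1, dx) + 0.5 * np.outer(dx, dx)
        S1 += dx
    return S1, S2

# For a single linear segment from 0 to a: S1 = a and S2 = a a^T / 2.
a = np.array([1.0, 2.0])
S1, S2 = signature_depth2(np.stack([np.zeros(2), a]))
```

Rows of the signature feature matrix $\mathbf{S}^\mathcal{D}_{i,[k]}$ are obtained by running such a computation on $X^i$ restricted to $[0, \bar{t}^i_j]$ for each target sampling time.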
92
+
93
+ The unobserved matrix of true target values is written as
94
+
95
+ $$\begin{align}
96
+ \label{eq:true_target_matrix}
97
+ \mathbf{y} = \begin{bmatrix}
98
+ \mathbf{y}^1 \\
99
+ \vdots \\
100
+ \mathbf{y}^n
101
+ \end{bmatrix} =
102
+ \begin{bmatrix}
103
+ \mathbf{y}^1_{1} & \cdots & \mathbf{y}^1_{p}\\
104
+ \vdots & & \vdots \\
105
+ \mathbf{y}^n_{1} & \cdots & \mathbf{y}^n_{p}
106
+ \end{bmatrix} =
107
+ \begin{bmatrix}
108
+ y^1_{1,\Bar{t}^1_1} & \cdots & y^1_{p,\Bar{t}^1_1}\\
109
+ \vdots & & \vdots \\
110
+ y^1_{1,\Bar{t}^1_{m_1}} & \cdots & y^1_{p,\Bar{t}^1_{m_1}}\\
111
+ \vdots & & \vdots\\
112
+ y^n_{1,\Bar{t}^n_{1}} & \cdots & y^n_{p,\Bar{t}^n_{1}}\\
113
+ \vdots & & \vdots \\
114
+ y^n_{1,\Bar{t}^n_{m_n}} & \cdots & y^n_{p,\Bar{t}^n_{m_n}}
115
+ \end{bmatrix} \in \mathbb{R}^{M \times p}
116
+ \end{align}$$
117
+
118
+ and the measurement matrix $\mathbf{Y} \in \mathbb{R}^{M \times p}$ can be written in a similar fashion as $$\begin{align}
119
+ \label{eq:target_matrix}
120
+ \mathbf{Y} = \begin{bmatrix}
121
+ \mathbf{Y}^1 \\
122
+ \vdots \\
123
+ \mathbf{Y}^n
124
+ \end{bmatrix} =
125
+ \begin{bmatrix}
126
+ \mathbf{Y}^1_{1} & \cdots & \mathbf{Y}^1_{p}\\
127
+ \vdots & & \vdots \\
128
+ \mathbf{Y}^n_{1} & \cdots & \mathbf{Y}^n_{p}
129
+ \end{bmatrix} =
130
+ \begin{bmatrix}
131
+ Y^1_{1,\Bar{t}^1_1} & \cdots & Y^1_{p,\Bar{t}^1_1}\\
132
+ \vdots & & \vdots \\
133
+ Y^1_{1,\Bar{t}^1_{m_1}} & \cdots & Y^1_{p,\Bar{t}^1_{m_1}}\\
134
+ \vdots & & \vdots\\
135
+ Y^n_{1,\Bar{t}^n_{1}} & \cdots & Y^n_{p,\Bar{t}^n_{1}}\\
136
+ \vdots & & \vdots \\
137
+ Y^n_{1,\Bar{t}^n_{m_n}} & \cdots & Y^n_{p,\Bar{t}^n_{m_n}}
138
+ \end{bmatrix} =
139
+ \begin{bmatrix}
140
+ y^1_{1,\Bar{t}^1_1} + \varepsilon^1_{1,\Bar{t}^1_1} & \cdots & y^1_{p,\Bar{t}^1_1} + \varepsilon^1_{p,\Bar{t}^1_1}\\
141
+ \vdots & & \vdots \\
142
+ y^1_{1,\Bar{t}^1_{m_1}} + \varepsilon^1_{1,\Bar{t}^1_{m_1}}& \cdots & y^1_{p,\Bar{t}^1_{m_1}} + \varepsilon^1_{p,\Bar{t}^1_{m_1}}\\
143
+ \vdots & & \vdots\\
144
+ y^n_{1,\Bar{t}^n_{1}} + \varepsilon^n_{1,\Bar{t}^n_{1}}& \cdots & y^n_{p,\Bar{t}^n_{1}} + \varepsilon^n_{p,\Bar{t}^n_{1}}\\
145
+ \vdots & & \vdots \\
146
+ y^n_{1,\Bar{t}^n_{m_n}} + \varepsilon^n_{1,\Bar{t}^n_{m_n}}& \cdots & y^n_{p,\Bar{t}^n_{m_n}} + \varepsilon^n_{p,\Bar{t}^n_{m_n}}
147
+ \end{bmatrix}
148
+ %\in \mathbb{R}^{M \times p}.
149
+ \end{align}$$
150
+
151
+ Using the definition of $\Lambda_k(\mathbf{F})$ (see Equation [\[eq:def_Lambda_k\]](#eq:def_Lambda_k){reference-type="eqref" reference="eq:def_Lambda_k"}), we get the following proposition, which makes the oracle bound's dependence on the regularity of $\mathbf{F}$ explicit.
152
+
153
+ ::: {#prop:norm_theta_star .proposition}
154
+ **Proposition 14**. *Let $\theta^\ast_N$ be defined as in Equation  [\[eq:linear_problem\]](#eq:linear_problem){reference-type="eqref" reference="eq:linear_problem"}. Then []{#lemma:bound_norm_theta label="lemma:bound_norm_theta"} $$\left\lVert\theta^*_N\right\rVert_\textnormal{F}^2 \leq \sum\limits_{k=0}^N d^k \Lambda_k(\mathbf{F})^2,$$ and, for all $k =0,\dots,N$ and $\ell=1,\dots,p$, $$\big\|\theta^*_{[k],\ell}\big\|_1 \leq d^k\Lambda_k(\mathbf{F}).$$*
155
+ :::

::: proof
*Proof.* By definition, $$\left\lVert\theta^*_N\right\rVert_\textnormal{F}^2 = \sum \limits_{k=0}^N \quad \sum\limits_{1 \leq i_1,\dots,i_k \leq d} \left\lVert F^{i_1}\star \dots \star F^{i_k} (y_0)\right\rVert_2^2.$$

Since for all $(i_1,\dots,i_k) \in \left\{1,\dots,d \right\}^k$, $$\left\lVert F^{i_1}\star \dots \star F^{i_k} (y_0)\right\rVert_2^2 \leq \Lambda_k(\mathbf{F})^2,$$ we get $$\begin{align*}
\sum \limits_{k=0}^N \quad \sum\limits_{1 \leq i_1,\dots,i_k \leq d} \left\lVert F^{i_1}\star \dots \star F^{i_k} (y_0)\right\rVert_2^2 & \leq \sum \limits_{k=0}^N \quad \sum\limits_{1 \leq i_1,\dots,i_k \leq d} \Lambda_k(\mathbf{F})^2\\
& \leq \sum \limits_{k=0}^N d^k \Lambda_k(\mathbf{F})^2.
\end{align*}$$

We now turn to the second inequality. For $k=0$, the inequality holds by definition. For $k=1,\dots,N$ and $\ell=1,\dots,p$, by definition of the $\ell_1$ norm, $$\begin{align*}
\left\lVert\theta^*_{[k],\cdot}\right\rVert_1 = \sum\limits_{1 \leq i_1,\dots,i_k \leq d} \left\lVert\Phi^{I}_\mathbf{F}(y_0)\right\rVert_1,
\end{align*}$$ where $I = (i_1,\dots,i_k)$. Since each of the $d^k$ summands is bounded by $\Lambda_k(\mathbf{F})$, this yields $$\left\lVert\theta^*_{[k],\cdot}\right\rVert_1 \leq d^k \Lambda_k(\mathbf{F})$$ and thus $$\left\lVert\theta^*_{[k],\ell}\right\rVert_1 \leq d^k \Lambda_k(\mathbf{F})$$ for $\ell=1,\dots,p$. ◻
:::
169
+
170
+ The following lemma is needed to leverage classical proof techniques to bound the prediction error of the Lasso estimator.
171
+
172
+ ::: {#lemma:word_bound .lemma}
173
+ **Lemma 15**. *Let $x \in C^{\textnormal{1-var}}_L([0,1],\mathbb{R}^d)$. Then conditionally on $A_\xi(\delta)$, for a given signature layer $k \geq 1$, the maximum among all signature coefficients and individuals is bounded from above, that is $$\left\lVert\mathbf{S}^\mathcal{D}_{\cdot, [k]}\right\rVert_{\infty} \leq \frac{1}{k !}.$$*
174
+ :::
175
+
176
+ ::: proof
177
+ *Proof.* It is well known [see, e.g., @fermanian2022functional Proposition 3] that if $\mathbb{X}^k$ is the signature of a path $x \in C_L^{\textnormal{1-var}}([0,1], \mathbb{R}^d)$, then $$\begin{align*}
178
+ \left\lVert\mathbb{X}^k\right\rVert_{(\mathbb{R}^d)^{\otimes k}} \leq \frac{\left\lVert x\right\rVert_{\textnormal{1-var}}^k}{k!}.
179
+ \end{align*}$$ As a consequence, for every word $I$ of size $k$, one gets $$\begin{align*}
180
+ \big|S^I(x))\big| \leq \frac{\left\lVert x\right\rVert_{\textnormal{1-var}}^k}{k !}. %\leq \frac{L^k}{k !}.
181
+ \end{align*}$$ The matrix $\mathbf{S}_N^\mathcal{D}$ is constructed by taking signatures of linear interpolations of the $\mathbf{X}^{i}$s normalized by their total variation. It therefore contains only signatures of paths of total variation bounded by 1. Taking the maximum on $I \in \{1,\dots,d \}^k$ and individuals $i=1,\dots,n$, we get $$\left\lVert\mathbf{S}^\mathcal{D}_{\cdot, [k]}\right\rVert_{\infty} \leq \frac{1}{k !}.$$ ◻
182
+ :::
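As a sanity check on this factorial bound (sketch ours, not part of the paper's code; it assumes nothing beyond Chen's identity for concatenated paths), the snippet below computes exact signatures of a piecewise-linear path in pure Python, storing each level as a dictionary from words to coefficients. After normalizing the path by its total variation, every coefficient of layer $k$ indeed stays below $1/k!$:

```python
import itertools
import math
import random

def segment_sig(delta, depth):
    """Signature of a single linear segment with increment vector `delta`:
    level k is delta^{(tensor) k} / k!, stored as {word: coefficient}."""
    d = len(delta)
    return [
        {w: math.prod(delta[i] for i in w) / math.factorial(k)
         for w in itertools.product(range(d), repeat=k)}
        for k in range(1, depth + 1)
    ]

def chen(a, b, depth):
    """Chen's identity: signature of the concatenation of two paths."""
    out = []
    for k in range(1, depth + 1):
        lvl = dict(a[k - 1])                  # i = k term (empty word from b)
        for w, v in b[k - 1].items():         # i = 0 term (empty word from a)
            lvl[w] = lvl.get(w, 0.0) + v
        for i in range(1, k):                 # cross terms a^i (tensor) b^{k-i}
            for wa, va in a[i - 1].items():
                for wb, vb in b[k - i - 1].items():
                    lvl[wa + wb] = lvl.get(wa + wb, 0.0) + va * vb
        out.append(lvl)
    return out

def path_sig(points, depth):
    """Signature levels 1..depth of the piecewise-linear path through `points`."""
    sig = None
    for p, q in zip(points, points[1:]):
        seg = segment_sig([qi - pi for pi, qi in zip(p, q)], depth)
        sig = seg if sig is None else chen(sig, seg, depth)
    return sig

# Random piecewise-linear path in R^2, normalized to total variation 1.
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(8)]
L = sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))  # total variation
sig = path_sig([(x / L, y / L) for x, y in pts], 4)
for k in range(1, 5):
    worst = max(abs(v) for v in sig[k - 1].values())
    print(f"level {k}: max |S^I| = {worst:.3e} <= 1/k! = {1 / math.factorial(k):.3e}")
```

For real workloads a dedicated signature library such as `iisignature` or `signatory` is preferable; the dictionary representation above is only meant to make Chen's identity explicit.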

With this final inequality in hand, we can now return to the proof of Lemma [3](#lemma:estimation_error){reference-type="ref" reference="lemma:estimation_error"}. We prove it in full generality for $p \geq 1$. In this proof, we make extensive use of the notation introduced in Subsection [8.1](#appendix:proof_notations){reference-type="ref" reference="appendix:proof_notations"} and refer the reader to it whenever a notation is unclear.

::: proof
*Proof.* Throughout the proof, we place ourselves on the set $A_\xi(\delta)$ defined by Equation [\[eqn:a_xi\]](#eqn:a_xi){reference-type="eqref" reference="eqn:a_xi"}, which ensures that the matrix $\mathbf{S}_N^{\mathcal{D}}$, seen as a random quantity, is well defined. Recall that we have two sources of randomness: the feature noises $\xi^{i}_t$ on the $\mathbf{X}^{i}$s and the target noises $\varepsilon^{i}_t$ on the $\mathbf{Y}^{i}$s. The feature noises appear only in $\mathbf{S}_N^{\mathcal{D}}$ and make it a random quantity. For $\mathbf{S}_N^{\mathcal{D}}$ to be well defined, we need the total variation of the linear interpolation of the feature time series $\mathbf{X}^{i}$ to be finite. This holds on the set $A_\xi(\delta)$, since all noises are then bounded.

Recall that we have defined $\widehat{\theta}_{N,M}$ as $$\begin{equation*}
\widehat{\theta}_{N,M} \in \mathop{\mathrm{arg\,min}}\limits_{\theta\in \mathbb{R}^{ s_d(N) \times p}} \frac{1}{2M}\left\lVert\mathbf{Y}-\mathbf{S}_N^{\mathcal{D}}\theta\right\rVert_\textnormal{F}^2 + \Omega(\theta).
\end{equation*}$$ Note that $$\begin{equation*}
\frac{1}{2M}\left\lVert\mathbf{Y}-\mathbf{S}_N^{\mathcal{D}}\theta\right\rVert_\textnormal{F}^2 + \Omega(\theta) = \sum_{\ell=1}^p \frac{1}{2M}\left\lVert\mathbf{Y}_{\ell}-\mathbf{S}_N^{\mathcal{D}}\theta_{[\cdot],\ell}\right\rVert_2^2 + \Omega(\theta_{[\cdot],\ell}),
\end{equation*}$$ where $\mathbf{Y}_{\ell} \in \mathbb R^M$ is the $\ell$-th column of the target measurement matrix defined in Equation [\[eq:target_matrix\]](#eq:target_matrix){reference-type="eqref" reference="eq:target_matrix"}, and $\theta_{[\cdot],\ell}\in \mathbb{R}^{s_d(N)}$ is the $\ell$-th column of the parameter matrix defined in Equation [\[eq:theta_matrix\]](#eq:theta_matrix){reference-type="eqref" reference="eq:theta_matrix"}.

By definition of $\widehat{\theta}_{N,M}$, for any $\theta \in {\mathbb{R}}^{s_d(N) \times p}$ and $\ell = 1,\dots,p$, we have $$\begin{align*}
\frac{1}{2M}\left\lVert\mathbf Y_{\ell}-\mathbf{S}_N^{\mathcal{D}} \widehat{\theta}_{[\cdot],\ell}\right\rVert_2^2 \leq \frac{1}{2M}\left\lVert\mathbf Y_{\ell}-\mathbf{S}_N^{\mathcal{D}} \theta_{[\cdot],\ell} \right\rVert_2^2 + \Omega(\theta_{[\cdot],\ell}) - \Omega(\widehat{\theta}_{[\cdot],\ell}).
\end{align*}$$ Moreover, letting $\bm{\varepsilon}_\ell = (\varepsilon^1_{\ell, \Bar{t}^1_1},\dots, \varepsilon^n_{\ell, \Bar{t}^n_{m_n}})^\top \in \mathbb R^M$ be a vector of i.i.d. noises (see Equation [\[eq:target_matrix\]](#eq:target_matrix){reference-type="eqref" reference="eq:target_matrix"}), we have $\mathbf{Y}_\ell = \mathbf{y}_\ell + \bm{\varepsilon}_\ell$. Expanding the square then yields, for any $\theta \in {\mathbb{R}}^{s_d(N)}$, $$\begin{align*}
\left\lVert\mathbf Y_{\ell}-\mathbf{S}_N^{\mathcal{D}} \theta \right\rVert_2^2 = \left\lVert\mathbf y_{\ell}-\mathbf{S}_N^{\mathcal{D}} \theta \right\rVert_2^2 + \left\lVert\bm{\varepsilon}_\ell\right\rVert_2^2 + 2 \langle \bm{\varepsilon}_\ell, \mathbf y_{\ell}-\mathbf{S}_N^{\mathcal{D}} \theta \rangle.
\end{align*}$$ Applying this identity to $\theta_{[\cdot],\ell}$ and $\widehat{\theta}_{[\cdot],\ell}$, we obtain $$\begin{equation}
\label{eqn:thm2step1}
\frac{1}{2M}\left\lVert\mathbf y_{\ell}-\mathbf{S}_N^{\mathcal{D}} \widehat{\theta}_{[\cdot],\ell}\right\rVert_2^2 \leq \frac{1}{2M}\left\lVert\mathbf y_{\ell}-\mathbf{S}_N^{\mathcal{D}} \theta_{[\cdot],\ell}\right\rVert_2^2 + \frac{1}{M} \langle \bm{\varepsilon}_{\ell} , \mathbf{S}_N^{\mathcal{D}} (\widehat{\theta}_{[\cdot],\ell} - \theta_{[\cdot],\ell}) \rangle + \Omega(\theta_{[\cdot],\ell}) - \Omega(\widehat{\theta}_{[\cdot],\ell}).
\end{equation}$$ We now work at each layer of the signature matrix $\mathbf{S}_N^{\mathcal{D}}$. Towards that end, we rewrite $$\begin{equation*}
\mathbf{S}_N^{\mathcal{D}} \big(\widehat{\theta}_{[\cdot],\ell} - \theta_{[\cdot],\ell} \big) = \sum_{k=0}^N \mathbf{S}_{\cdot,[k]}^{\mathcal{D}} \big(\widehat{\theta}_{[k],\ell} - \theta_{[k], \ell }\big),
\end{equation*}$$ and bound $$\begin{equation*}
\big\langle \bm{\varepsilon}_{\ell} , \mathbf{S}_N^{\mathcal{D}} (\widehat{\theta}_{[\cdot],\ell} - \theta_{[\cdot],\ell}) \big\rangle = \sum_{k=0}^N \big\langle \bm{\varepsilon}_{\ell} , \mathbf{S}_{\cdot, [k]}^{\mathcal{D}} (\widehat{\theta}_{[k],\ell} - \theta_{[k],\ell}) \big\rangle \leq \sum_{k=0}^N \| \bm{\varepsilon}_{\ell}^\top \mathbf{S}_{\cdot,[k]}^{\mathcal{D}} \|_{\infty} \|\widehat{\theta}_{[k],\ell} - \theta_{[k],\ell}\|_1
\end{equation*}$$ by $\ell_1$--$\ell_\infty$ duality. We fix $k$ and study the term $\| \bm{\varepsilon}_{\ell}^\top \mathbf{S}_{\cdot,[k]}^{\mathcal{D}} \|_{\infty}$. Lemma [15](#lemma:word_bound){reference-type="ref" reference="lemma:word_bound"} ensures that each coefficient of the signature layer of depth $k$ is bounded by $1/k!$. As a consequence, by Lemma [13](#lemma:sum_subgaussian){reference-type="ref" reference="lemma:sum_subgaussian"}, under Assumption [6](#assump:target_noise){reference-type="ref" reference="assump:target_noise"}, every element of the vector $\bm{\varepsilon}_{\ell}^\top \mathbf{S}_{\cdot,[k]}^{\mathcal{D}}$ is $v_\varepsilon M / (k!)^2$-subgaussian. It follows that, for any real number $\mu>0$, $$\begin{equation*}
207
+ % \mathbb P_{A_\xi(\delta)}
208
+ \mathbb P \Big( \| \bm{\varepsilon}_{\ell}^\top \mathbf{S}_{\cdot,[k]}^{\mathcal{D}} \|_{\infty}> \mu \Big)\leq 2 d^k \exp \Big( - \frac{(k!)^2 \mu^2 }{v_\varepsilon M}\Big).
209
+ \end{equation*}$$ We furthermore place ourselves on $A_\varepsilon(\Bar{\delta})$ defined by $$\begin{align*}
210
+ A_\varepsilon(\Bar{\delta}) = \bigcap_{\ell=1}^p \bigcap_{k=0}^N \Big\{ \| \bm{\varepsilon}_{\ell}^\top \mathbf{S}_{\cdot,[k]}^{\mathcal{D}} \|_{\infty} \leq \frac{1}{k!}\sqrt{v_\varepsilon M \log(2pNd^k/\Bar{\delta})}\Big\}.
211
+ \end{align*}$$ We have just seen that, under Assumption [6](#assump:target_noise){reference-type="ref" reference="assump:target_noise"} (and still conditionally on $A_\xi(\delta)$), one has $\mathbb{P}(A_\varepsilon(\Bar{\delta})) \geq 1-\Bar{\delta}$. Putting together all terms in Equation [\[eqn:thm2step1\]](#eqn:thm2step1){reference-type="eqref" reference="eqn:thm2step1"} and plugging the definition of $\Omega$ given in Equation [\[eq:penalty\]](#eq:penalty){reference-type="eqref" reference="eq:penalty"}, we obtain that, on the set $A_\varepsilon(\Bar{\delta}) \cap A_\xi(\delta)$, for all $\theta \in \mathbb{R}^{s_d(N)\times p}$, $$\begin{align*}
212
+ \frac{1}{2M}\left\lVert\mathbf y-\mathbf{S}_N^{\mathcal{D}} \widehat{\theta}_{N,M}\right\rVert_\textnormal{F}^2 &\leq \frac{1}{2M}\left\lVert\mathbf y-\mathbf{S}_N^{\mathcal{D}} \theta\right\rVert_\textnormal{F}^2 \\
213
+ & \quad + \sum_{\ell=1}^p\sum_{k=0}^N \Big( \frac{1}{M} \| \bm{\varepsilon}_{\ell}^\top \mathbf{S}_{\cdot,[k]}^{\mathcal{D}} \|_{\infty} \|\widehat{\theta}_{[k],\ell} - \theta_{[k],\ell}\|_1 + \frac{C_k(\bar{\delta})}{k!\sqrt{M}}%\sqrt{v_\varepsilon \log(2pNd^k/\Bar{\delta})}
214
+ \big( \|\theta_{[k],\ell}\|_1 - \|\widehat{\theta}_{[k],\ell}\|_1 \big) \Big)\\
215
+ & \leq \frac{1}{2M}\left\lVert\mathbf y-\mathbf{S}_N^{\mathcal{D}} \theta\right\rVert_\textnormal{F}^2 \\
216
+ & \quad + \sum_{\ell=1}^p\sum_{k=0}^N \frac{1}{k!\sqrt{M}}\sqrt{v_\varepsilon \log(2pNd^k/\Bar{\delta})} \big(\|\widehat{\theta}_{[k],\ell} - \theta_{[k],\ell}\|_1 + \|\theta_{[k],\ell}\|_1 - \|\widehat{\theta}_{[k],\ell}\|_1 \big).
217
+ % \\&\leq \frac{1}{2M}\norm{\mathbf y-\mathbf{S}_N^{\mathcal{D}} \theta}_2^2 + \frac{\sqrt{v_\varepsilon M\big(\bar \delta + \log d^N \big)}}{M} \sum_{k=1}^N \Big( \frac{1}{k!} \|\widehat{\theta}_{N,M}^{(k)} - \theta^{(k)}\|_1 + \Omega(\theta^{(k)}) - \Omega(\widehat{\theta}_{N,M}^{(k)})\Big)
218
+ \end{align*}$$ Choosing $\theta=\theta_N^\ast$, by the triangular inequality, $$\begin{align*}
219
+ \|\widehat{\theta}_{[k],\ell} - \theta^\ast_{[k],\ell}\|_1 + \|\theta^\ast_{[k],\ell}\|_1 - \|\widehat{\theta}_{[k],\ell}\|_1 \leq 2 \|\theta^\ast_{[k],\ell}\|_1,
220
+ \end{align*}$$ which finally gives us $$\begin{align*}
221
+ \frac{1}{2M}\left\lVert\mathbf y-\mathbf{S}_N^{\mathcal{D}} \widehat{\theta}_{N,M}\right\rVert_\textnormal{F}^2
222
+ &\leq \frac{1}{2M}\left\lVert\mathbf y-\mathbf{S}_N^{\mathcal{D}} \theta_N^\ast\right\rVert_\textnormal{F}^2 + \frac{2}{\sqrt{M}}\sqrt{v_\varepsilon \log(2pNd^N/\Bar{\delta})}\sum_{\ell=1}^p\sum_{k=0}^N \frac{ \| \theta^*_{[k],\ell}\|_1 }{k!}
223
+ \\ &\leq \frac{1}{2M}\left\lVert\mathbf y-\mathbf{S}_N^{\mathcal{D}} \theta_N^\ast\right\rVert_\textnormal{F}^2 + \frac{2p}{\sqrt{M}}\sqrt{v_\varepsilon \log(2pNd^N/\Bar{\delta})}\sum_{k=0}^N \frac{d^k \Lambda_k(\mathbf F)}{k!}\\
224
+ & = \frac{1}{2M}\left\lVert\mathbf y-\mathbf{S}_N^{\mathcal{D}} \theta_N^\ast\right\rVert_\textnormal{F}^2 + \frac{2pC_N(\Bar{\delta})}{\sqrt{M}}\sum_{k=0}^N \frac{d^k \Lambda_k(\mathbf F)}{k!},
225
+ \end{align*}$$ where the second inequality comes from Proposition [14](#prop:norm_theta_star){reference-type="ref" reference="prop:norm_theta_star"}. To conclude the proof, we just need to compute the probability of the set $A_\xi(\delta) \cap A_\varepsilon(\Bar{\delta})$. It is an immediate consequence of Lemma [12](#lemma:concentration_max_subgaussian){reference-type="ref" reference="lemma:concentration_max_subgaussian"} that $\mathbb P(A_\xi(\delta)) \geq 1 - \delta$, and we have seen that $\mathbb P (A_\varepsilon(\bar{\delta}) | A_\xi(\delta)) \geq 1 - \bar{\delta}$, which yields that $$\mathbb P(A_\xi(\delta) \cap A_\varepsilon(\bar{\delta})) \geq (1-\bar{\delta})(1-\delta).$$ ◻
226
+ :::
227
+
228
+ This proof relies on bouding the remainder of the Taylor expansion of the CDE.
229
+
230
+ ::: proof
231
+ *Proof.* For every $i=1,\dots,n$ and a given point $t_i \in \Bar{D}^i$, one has, using the upper bound of the approximation error of a CDE by its Taylor expansion provided by @fermanian2021framing [Proposition 4] $$\left\lVert y^i_{t_i} - S_N(x^i_{[0,t_i]}) \theta_N^*\right\rVert \leq \frac{d^{N+1}\Lambda_{N+1}(\mathbf{F})}{(N+1)!}.$$ This immediately gives $$\begin{align*}
232
+ \frac{1}{M} \left\lVert\mathbf{y}-\mathbf{S}_N\theta_N^*\right\rVert^2_{\textnormal{F}} = \frac{1}{M} \sum\limits_{i=1}^n \sum\limits_{t_i \in \Bar{D}^i} \left\lVert y^i_{t_i} - S_N(x^i_{[0,t_i]}) \theta_N^*\right\rVert^2 & \leq \frac{1}{M} \sum\limits_{i=1}^M \Big(\frac{d^{N+1}\Lambda_{N+1}(\mathbf{F})}{(N+1)!}\Big)^2 = \Big(\frac{d^{N+1}\Lambda_{N+1}(\mathbf{F})}{(N+1)!}\Big)^2,
233
+ \end{align*}$$ which concludes the proof. ◻
234
+ :::
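To give a sense of scale (numbers ours, purely for illustration, under the assumption that the $\Lambda_k(\mathbf{F})$ are bounded by $1$): for $d = 2$, the remainder bound decays super-exponentially in the truncation order, since the factorial eventually dominates the exponential,

```latex
\frac{d^{N+1}\Lambda_{N+1}(\mathbf{F})}{(N+1)!}
\leq \frac{2^{N+1}}{(N+1)!},
\qquad\text{e.g.}\quad
\frac{2^{2}}{2!} = 2,\quad
\frac{2^{4}}{4!} \approx 0.67,\quad
\frac{2^{6}}{6!} \approx 0.089,\quad
\frac{2^{8}}{8!} \approx 0.0063 .
```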

We now prove that signature layers are locally Lipschitz mappings. We start with the following proposition.

::: {#lemma:1varklayer .proposition}
**Proposition 16**. *Let $x \in C_L^{\textnormal{1-var}}([0,1],\mathbb{R}^d)$. Then for all $t \in [0,1]$, the path $t \mapsto \mathbb{X}^k_{[0,t]}$ has $1$-variation bounded by $$\begin{align*}
\left\lVert\mathbb{X}^k\right\rVert_{\textnormal{1-var}, [0,t]} \leq \frac{L^k}{k!}.
\end{align*}$$*
:::

::: proof
*Proof.* By definition of the total variation, $$\left\lVert\mathbb{X}^k\right\rVert_{\textnormal{1-var}, [0,t]} = \sup_D \sum\limits_{i=1}^{m-1} \left\lVert\mathbb{X}^k_{[0,t_{i+1}]}-\mathbb{X}^k_{[0,t_{i}]}\right\rVert_{(\mathbb{R}^d)^{\otimes k}} = \sup_D \sum\limits_{i=1}^{m-1} \left\lVert\mathbb{X}^k_{[t_i,t_{i+1}]}\right\rVert_{(\mathbb{R}^d)^{\otimes k}},$$ since $\mathbb{X}^k_{[0,t]} = \int_0^t dx_{u_1}\otimes \dots \otimes dx_{u_k}$, and where the supremum is taken over finite dissections $D=\{0=t_1,\dots,t_m = t\}$ of $[0,t]$. Notice that the signature layer of depth $k$ is here written as an element of $(\mathbb{R}^d)^{\otimes k}$, which is more convenient for this proof. Then $$\sup_D \sum\limits_{i=1}^{m-1} \left\lVert\mathbb{X}^k_{[t_i,t_{i+1}]}\right\rVert_{(\mathbb{R}^d)^{\otimes k}} \leq \sup_D \sum\limits_{i=1}^{m-1} \frac{\left\lVert x\right\rVert_{\textnormal{1-var},[t_i,t_{i+1}]}^k}{k!} \leq \frac{1}{k!} \sup_D \Big(\sum\limits_{i=1}^{m-1} \left\lVert x\right\rVert_{\textnormal{1-var},[t_i,t_{i+1}]} \Big)^k = \frac{1}{k!} \left\lVert x\right\rVert_{\textnormal{1-var},[0,t]}^k \leq \frac{L^k}{k!},$$ where the second inequality follows from the multinomial theorem and the equality comes from the fact that for all $s<u<t$, $\left\lVert x\right\rVert_{\textnormal{1-var},[s,u]}+ \left\lVert x\right\rVert_{\textnormal{1-var},[u,t]} = \left\lVert x\right\rVert_{\textnormal{1-var},[s,t]}$. This ends our proof. ◻
:::
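This bound is in fact attained (worked example ours, assuming the tensor norms are cross norms): for the straight line $x_t = tv$ with $\left\lVert v\right\rVert = L$, the $k$-th layer moves monotonically along the fixed direction $v^{\otimes k}$, so its $1$-variation telescopes to

```latex
\mathbb{X}^k_{[0,t]} = \frac{t^k\, v^{\otimes k}}{k!},
\qquad
\left\lVert\mathbb{X}^k\right\rVert_{\textnormal{1-var},[0,1]}
= \frac{\left\lVert v^{\otimes k}\right\rVert_{(\mathbb{R}^d)^{\otimes k}}}{k!}
= \frac{\left\lVert v\right\rVert^k}{k!} = \frac{L^k}{k!}.
```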

We now state a bound on the difference between the $k$-th layers of the signatures of two different paths.

::: {#thm:layerbound .theorem}
**Theorem 17**. *Let $x,z \in C_L^{\textnormal{1-var}}([0,1],\mathbb{R}^d)$. Then for all $k \geq 2$, the difference in supremum norm between the paths $t \mapsto \mathbb{X}^k_{[0,t]}$ and $t \mapsto \mathbb{Z}^k_{[0,t]}$ is bounded by $$\left\lVert\mathbb{X}^k - \mathbb{Z}^k\right\rVert_{\infty,[0,t]} \leq 2L^{k-1} \sum\limits_{j=1}^{k-1} \frac{1}{j!}\left\lVert x-z\right\rVert_{\infty,[0,t]} \leq 2eL^{k-1}\left\lVert x-z\right\rVert_{\infty,[0,t]}$$ and $$\left\lVert \mathbb{X}^1_{[0,t]}-\mathbb{Z}^1_{[0,t]}\right\rVert \leq 2 \left\lVert x-z\right\rVert_{\infty,[0,t]}.$$*
:::

::: proof
*Proof.* Our proof works by induction. Let $x,z \in C_L^{\textnormal{1-var}}([0,1],\mathbb{R}^d)$, and for $t \in [0,1]$ denote by $\mathbb{X}^k_{[0,t]}$ (resp. $\mathbb{Z}^k_{[0,t]}$) the $k$-th layer of the signature of $x$ (resp. $z$). For $k=1$ and $t \in [0,1]$, remark that $$\mathbb{X}^1_{[0,t]}-\mathbb{Z}^1_{[0,t]} = \int_0^t d(x_u - z_u) = x_t-z_t -(x_0 - z_0),$$ such that $$\left\lVert \mathbb{X}^1_{[0,t]}-\mathbb{Z}^1_{[0,t]}\right\rVert \leq \left\lVert x-z\right\rVert_{\infty,[0,t]} + \left\lVert x_0-z_0\right\rVert \leq 2 \left\lVert x-z\right\rVert_{\infty,[0,t]}.$$

Consider now $k \geq 2$. We have $$\mathbb{X}^k_{[0,t]} - \mathbb{Z}^k_{[0,t]} = \int_0^t \mathbb{X}^{k-1}_{[0,s]} \otimes dx_s - \int_0^t \mathbb{Z}^{k-1}_{[0,s]} \otimes dz_s = \int_0^t \mathbb{X}^{k-1}_{[0,s]} \otimes d(x_s-z_s+z_s) - \int_0^t \mathbb{Z}^{k-1}_{[0,s]} \otimes dz_s,$$ and thus $$\mathbb{X}^k_{[0,t]} - \mathbb{Z}^k_{[0,t]} = \int_0^t \mathbb{X}^{k-1}_{[0,s]} \otimes d(x_s-z_s) + \int_0^t \big(\mathbb{X}^{k-1}_{[0,s]} -\mathbb{Z}^{k-1}_{[0,s]} \big)\otimes dz_s.$$ We now bound each of these terms separately. First, $$\left\lVert\int_0^t \big(\mathbb{X}^{k-1}_{[0,s]} -\mathbb{Z}^{k-1}_{[0,s]} \big)\otimes dz_s\right\rVert_{(\mathbb{R}^d)^{\otimes k}} \leq \left\lVert\mathbb{X}^{k-1} -\mathbb{Z}^{k-1}\right\rVert_{\infty,[0,t]} \left\lVert z\right\rVert_{\textnormal{1-var},[0,t]} \leq \left\lVert\mathbb{X}^{k-1} -\mathbb{Z}^{k-1}\right\rVert_{\infty,[0,t]}L.$$ Moving to the first integral, integration by parts yields $$\int_0^t \mathbb{X}^{k-1}_{[0,s]} \otimes d(x_s-z_s) = \mathbb{X}^{k-1}_{[0,t]}\otimes(x_t-z_t) -\mathbb{X}^{k-1}_{[0,0]}\otimes(x_0-z_0) - \int_0^t (x_s-z_s) \otimes d\mathbb{X}^{k-1}_{[0,s]}.$$ We stress that Proposition [\[prop:int_by_part_riemann\]](#prop:int_by_part_riemann){reference-type="eqref" reference="prop:int_by_part_riemann"} applies since the integral over the tensor product is taken coordinate-wise. Since $\mathbb{X}^{k-1}_{[0,0]} = 0$, we are left with $$\int_0^t \mathbb{X}^{k-1}_{[0,s]} \otimes d(x_s-z_s) = \mathbb{X}^{k-1}_{[0,t]}\otimes(x_t-z_t) -\int_0^t (x_s-z_s) \otimes d\mathbb{X}^{k-1}_{[0,s]}.$$ Using Proposition [16](#lemma:1varklayer){reference-type="ref" reference="lemma:1varklayer"} and submultiplicativity of the tensor norms, this can thus be bounded by $$\begin{align*}
\left\lVert\int_0^t \mathbb{X}^{k-1}_{[0,s]} \otimes d(x_s-z_s)\right\rVert_{(\mathbb{R}^d)^{\otimes k}} & \leq \left\lVert\mathbb{X}^{k-1}_{[0,t]}\right\rVert_{(\mathbb{R}^d)^{\otimes (k-1)}}\left\lVert x-z\right\rVert_{\infty,[0,t]} + \left\lVert x-z\right\rVert_{\infty,[0,t]} \left\lVert\mathbb{X}^{k-1}\right\rVert_{\textnormal{1-var}, [0,t]}\\
& \leq \frac{2L^{k-1}}{(k-1)!} \left\lVert x-z\right\rVert_{\infty,[0,t]}.
\end{align*}$$ Finally, we are left with $$\begin{align*}
\left\lVert\mathbb{X}^k - \mathbb{Z}^k\right\rVert_{\infty,[0,t]} \leq \frac{2L^{k-1}}{(k-1)!} \left\lVert x-z\right\rVert_{\infty,[0,t]} + \left\lVert\mathbb{X}^{k-1} -\mathbb{Z}^{k-1}\right\rVert_{\infty,[0,t]}L,
\end{align*}$$ which can be recursively bounded by $$\left\lVert\mathbb{X}^k - \mathbb{Z}^k\right\rVert_{\infty,[0,t]} \leq 2L^{k-1}\left\lVert x-z\right\rVert_{\infty,[0,t]} \sum\limits_{j=1}^{k-1} \frac{1}{j!} \leq 2L^{k-1}e \left\lVert x-z\right\rVert_{\infty,[0,t]}.$$ ◻
:::

Note that this inequality implies that if $z$ is chosen as the linear interpolation of a discretization of $x$ on a grid $D$, then as the grid gets finer, all signature layers converge at speed $\left\lVert x-z\right\rVert_{\infty,[0,t]}$, but the multiplicative constant increases with depth (if $L \geq 1$). Figure [4](#fig:sig_convergence){reference-type="ref" reference="fig:sig_convergence"} illustrates this phenomenon.
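This behavior is easy to reproduce numerically. The sketch below is ours (the sine path, grid sizes, and depth are arbitrary choices, and the norm used is the Hilbert--Schmidt one); it computes exact signatures of two piecewise-linear paths via Chen's identity, a finely sampled one and its coarse subsampling, and compares the layer-wise difference with the final bound $2eL^{k-1}\left\lVert x-z\right\rVert_{\infty}$, where $L$ is the larger of the two total variations:

```python
import itertools
import math

def segment_sig(delta, depth):
    # Level k of a linear segment: delta^{(tensor) k} / k!
    d = len(delta)
    return [
        {w: math.prod(delta[i] for i in w) / math.factorial(k)
         for w in itertools.product(range(d), repeat=k)}
        for k in range(1, depth + 1)
    ]

def chen(a, b, depth):
    # Chen's identity for the signature of two concatenated paths
    out = []
    for k in range(1, depth + 1):
        lvl = dict(a[k - 1])
        for w, v in b[k - 1].items():
            lvl[w] = lvl.get(w, 0.0) + v
        for i in range(1, k):
            for wa, va in a[i - 1].items():
                for wb, vb in b[k - i - 1].items():
                    lvl[wa + wb] = lvl.get(wa + wb, 0.0) + va * vb
        out.append(lvl)
    return out

def path_sig(points, depth):
    sig = None
    for p, q in zip(points, points[1:]):
        seg = segment_sig([qi - pi for pi, qi in zip(p, q)], depth)
        sig = seg if sig is None else chen(sig, seg, depth)
    return sig

def tv(points):
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def interp(pts, t):
    # Evaluate a piecewise-linear, time-augmented path at time t
    for (t0, y0), (t1, y1) in zip(pts, pts[1:]):
        if t0 <= t <= t1:
            lam = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
            return (t0 + lam * (t1 - t0), y0 + lam * (y1 - y0))
    return pts[-1]

DEPTH = 4
fine = [(i / 200, math.sin(2 * math.pi * i / 200)) for i in range(201)]
coarse = fine[::20]  # keep every 20th sample point

# sup distance between the two piecewise-linear paths (attained at breakpoints)
sup_diff = max(math.dist(p, interp(coarse, p[0])) for p in fine)
L = max(tv(fine), tv(coarse))

sig_x, sig_z = path_sig(fine, DEPTH), path_sig(coarse, DEPTH)
for k in range(1, DEPTH + 1):
    diff = math.sqrt(sum((sig_x[k - 1][w] - sig_z[k - 1][w]) ** 2
                         for w in sig_x[k - 1]))
    bound = 2 * sup_diff if k == 1 else 2 * math.e * L ** (k - 1) * sup_diff
    print(f"level {k}: ||X^k - Z^k|| = {diff:.4f}, bound = {bound:.4f}")
```

Since $L > 1$ for this path, the bound loosens rapidly with the depth $k$, exactly the effect visible in Figure 4.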

<figure id="fig:sig_convergence" data-latex-placement="ht">
<p><embed src="figures/sig_convergence_4.pdf" style="width:30.0%" /> <embed src="figures/sig_convergence_w_008_noise.pdf" style="width:30.0%" /> <embed src="figures/sig_convergence_w_05_noise.pdf" style="width:30.0%" /></p>
<figcaption>Difference between the signature of a continuous path <span class="math inline"><em>x</em></span> and the signature of its discretized and noisy counterpart <span class="math inline"><em>X</em></span>, without noise on the discretization points (left), with noise of variance <span class="math inline"><em>v</em><sub><em>ξ</em></sub> = 0.08<sup>2</sup></span> (middle), and with noise of variance <span class="math inline"><em>v</em><sub><em>ξ</em></sub> = 0.5<sup>2</sup></span> (right). For every number of sampling points, we average the distance between the two signatures over <span class="math inline">50</span> randomly chosen discretizations of the interval <span class="math inline">[0, 1]</span>. The discretized path is generated as in the well-specified setting (see Appendix <a href="#appendix:details_well_specified" data-reference-type="ref" data-reference="appendix:details_well_specified">9.4</a>).</figcaption>
</figure>

First, recall that for a generic path $x:[0,1] \rightarrow \mathbb{R}^d$, a modulus of continuity is a continuous function $\omega_x:\mathbb{R}_{\geq 0} \rightarrow \mathbb{R}_{\geq 0}$ vanishing at $0$ such that for all $s, t \in [0,1]$, $$\left\lVert x_t-x_s\right\rVert \leq \omega_x(\abs{t-s}).$$

Also recall that, by Heine's theorem, such a modulus of continuity exists for every continuous mapping from $[0,1]$ to $\mathbb{R}^d$.
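Two standard instances (examples ours) are the Lipschitz and Hölder cases,

```latex
\left\lVert x_t - x_s\right\rVert \leq \omega\,\abs{t-s}
\;\Longrightarrow\; \omega_x(s) = \omega s,
\qquad
\left\lVert x_t - x_s\right\rVert \leq C\,\abs{t-s}^{\alpha}
\;\Longrightarrow\; \omega_x(s) = C s^{\alpha},
```

the first of which is the $\omega$-Lipschitz setting used in the proof of Lemma 5 below.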

We start by giving a general lemma that bounds the difference between the signature layers of a path and its discretized version. Its proof is based on the results of the previous section.

::: {#lemma:distance_two_sigs .lemma}
**Lemma 18**. *Let $x \in C_L^{\textnormal{1-var}}([0,1],\mathbb{R}^d)$ and $\omega_x:\mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ its modulus of continuity. Let $x^D:[0,1] \rightarrow \mathbb{R}^d$ be the path obtained by linear interpolation of the discretization of $x$ on a grid $D$ corrupted by additive noise $\xi$. Let $\mathbb{X}^{k}_{[0,t]}$ and $\mathbb{X}^{k,D}_{[0,t]}$ be their respective $k$-th signature layers on $[0,t]$. Then for all $k\geq 2$, $$\left\lVert\mathbb{X}^{k} - \mathbb{X}^{k,D}\right\rVert_{\infty, [0,1]} \leq 2L^{k-1}\sum\limits_{j=1}^{k-1}\frac{1}{j!} \Big(\max\limits_{0 \leq s \leq \abs{D}} \omega_x(s) + \max\limits_{t \in D} \left\lVert\xi_t\right\rVert\Big),$$ and for $k=1$, $$\left\lVert\mathbb{X}^{1} - \mathbb{X}^{1,D}\right\rVert_{\infty,[0,1]} \leq 2\Big(\max\limits_{0 \leq s \leq \abs{D}} \omega_x(s) + \max\limits_{t \in D} \left\lVert\xi_t\right\rVert\Big).$$*
:::

::: proof
*Proof.* Theorem [17](#thm:layerbound){reference-type="ref" reference="thm:layerbound"} yields, for $k \geq 2$, $$\left\lVert\mathbb{X}^{k} - \mathbb{X}^{k,D}\right\rVert_{\infty, [0,1]} \leq 2L^{k-1} \sum\limits_{j=1}^{k-1}\frac{1}{j!} \left\lVert x-x^D\right\rVert_{\infty,[0,1]}.$$

Now, remark that $$\begin{align}
\left\lVert x-x^D\right\rVert_{\infty,[0,1]} \leq \left\lVert x - \Tilde{x}\right\rVert_{\infty,[0,1]} + \max\limits_{t \in D} \left\lVert\xi_t\right\rVert
\end{align}$$ by the triangle inequality, where $\Tilde{x}$ is the piecewise linear path obtained by linear interpolation of $x_{0},x_{t_1},\dots, x_{t_j}$. Since the paths $x$ and $\Tilde{x}$ coincide on $0,t_1,\dots,t_j$, we have $$\begin{align*}
\left\lVert x - \Tilde{x}\right\rVert_{\infty,[0,1]} = \max\limits_{i=0,\dots,j-1} \left\lVert x-\Tilde{x}\right\rVert_{\infty,[t_i,t_{i+1}]} \leq \max\limits_{i=0,\dots,j-1} \omega_x\big(\abs{t_{i+1} - t_{i}}\big) \leq \max\limits_{0 \leq s \leq \abs{D}} \omega_x(s).
\end{align*}$$

This gives us $$\begin{align*}
\left\lVert\mathbb{X}^{k}- \mathbb{X}^{k,D}\right\rVert_{\infty,[0,1]} \leq 2L^{k-1} \sum\limits_{j=1}^{k-1} \frac{1}{j!}\Big(\max\limits_{0 \leq s \leq \abs{D}} \omega_x(s) + \max\limits_{t \in D} \left\lVert\xi_t\right\rVert\Big).
\end{align*}$$

For the case $k=1$, we immediately get $$\left\lVert\mathbb{X}^{1} - \mathbb{X}^{1,D}\right\rVert_{\infty,[0,1]} \leq 2\Big(\max\limits_{0 \leq s \leq \abs{D}} \omega_x(s) + \max\limits_{t \in D} \left\lVert\xi_t\right\rVert\Big)$$ using the same technique as above. ◻
:::

This result is illustrated in Figure [4](#fig:sig_convergence){reference-type="ref" reference="fig:sig_convergence"}. As predicted by our theoretical bounds, signatures of high order converge at the same rate as signatures of lower order, but the multiplicative constant controlling the tightness of the bound increases with $N$, leading to slower convergence as $N$ increases. Strong noise hinders the convergence of the signature of the discretized path because the noise variance is independent of the number of sampling points: adding more sampling points means adding more noise. There are therefore two trade-offs when learning with signatures. The first is between sampling frequency and order: with paths sampled at low resolution, one should prefer lower-order signatures, which trade model complexity against precise features. The second is between sampling and noise: if the feature time series are very noisy, the precision of the features increases only up to a certain point, past which noise prevails.

With this result in hand, we can now prove Lemma [5](#lemma:disc_error){reference-type="ref" reference="lemma:disc_error"}.

::: proof
*Proof.* We restrict ourselves to the $\omega$-Lipschitz case.

In our setup, after linearly interpolating the time series to obtain $x^D$, we normalize it by its total variation $\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}$, which is standard practice when learning with signatures [@morrill2020generalised]. This means that we compute the signature of the path $$\begin{align}
\frac{1}{\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,t_j]}}\,x^D.
\end{align}$$

Theorem [17](#thm:layerbound){reference-type="ref" reference="thm:layerbound"} gives us, for $k \geq 2$, $$\begin{align}
\left\lVert\mathbb{X}^{k} - \mathbb{X}^{k,D}\right\rVert_{\infty, [0,1]} & \leq 2L^{k-1} \sum\limits_{j=1}^{k-1}\frac{1}{j!} \left\lVert x- \frac{1}{\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}} x^D\right\rVert_{\infty,[0,1]}\\
& \leq 2L^{k-1} \sum\limits_{j=1}^{k-1}\frac{1}{j!} \left\lVert x- x^D\right\rVert_{\infty,[0,1]} + 2L^{k-1} \sum\limits_{j=1}^{k-1}\frac{1}{j!} \left\lVert x^D- \frac{1}{\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}}x^D\right\rVert_{\infty,[0,1]}.
\end{align}$$

The first term can be bounded by using the fact that in our setting, $\omega_x(s) = \omega s$, which gives $$\begin{align}
2L^{k-1} \sum\limits_{j=1}^{k-1}\frac{1}{j!} \left\lVert x- x^D\right\rVert_{\infty,[0,1]} \leq 2L^{k-1} \sum\limits_{j=0}^{k-1} \frac{1}{j!} \Big( \omega \abs{D}+ \max\limits_{t \in D} \left\lVert\xi_t\right\rVert\Big) \leq 2L^{k-1}e \Big( \omega \abs{D}+ \max\limits_{t \in D} \left\lVert\xi_t\right\rVert\Big).
\end{align}$$

The second term can be bounded by $$\begin{align}
2L^{k-1} \sum\limits_{j=1}^{k-1}\frac{1}{j!} \left\lVert x^D- \frac{1}{\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}}x^D\right\rVert_{\infty,[0,t]} & \leq
2L^{k-1} \sum\limits_{j=1}^{k-1}\frac{1}{j!} \left\lVert\Big(1- \frac{1}{\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}}\Big)x^D\right\rVert_{\infty,[0,t]}\\
& \leq 2L^{k-1} \sum\limits_{j=1}^{k-1}\frac{1}{j!} \Big|1-\frac{1}{\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}}\Big| \left\lVert x^D\right\rVert_{\infty,[0,t]}.
\end{align}$$

In order to bound $$\Big|1-\frac{1}{\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}}\Big| = \Bigg|\frac{\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}-1}{\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}}\Bigg|,$$ we need both an upper bound on $\big|1-\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}\big|$ and a lower bound on $\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}$.

Remark that $$\begin{align}
\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,t_j]} = \sum\limits_{t_u,t_{u-1} \in D} \left\lVert x_{t_u}+ \xi_{t_{u}}-x_{t_{u-1}}-\xi_{t_{u-1}}\right\rVert.
\end{align}$$ Recall that we assume the path $(x_t)$ to be time-augmented, and that the measurement times are not noisy. This means that $$\begin{align}
\sum\limits_{t_u,t_{u-1} \in D} \left\lVert x_{t_u}+ \xi_{t_{u}}-x_{t_{u-1}}-\xi_{t_{u-1}}\right\rVert \geq \sum\limits_{t_u,t_{u-1} \in D} \abs{t_u-t_{u-1}} \geq t_2-t_1 = t_2,
\end{align}$$ since $t_1 = 0$ and Assumption [4](#assump:sampling_grid){reference-type="ref" reference="assump:sampling_grid"} guarantees that there are at least two sampling points in every grid. This gives us $$\begin{align}
\frac{1}{\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}} \leq \frac{1}{t_2} \leq \frac{1}{\eta},
\end{align}$$ since Assumption [4](#assump:sampling_grid){reference-type="ref" reference="assump:sampling_grid"} requires the second sampling time to be at least $\eta$. Turning to the upper bound, we get $$\begin{align*}
\Big| 1-\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]} \Big| \leq 1 - L + \sum\limits_{t_u, t_{u-1}\in D} \left\lVert\xi_{t_u} - \xi_{t_{u-1}}\right\rVert
\end{align*}$$ by definition of the total variation of a piecewise linear path. Finally, $$\begin{align}
\sum\limits_{t_u, t_{u-1}\in D} \left\lVert\xi_{t_u} - \xi_{t_{u-1}}\right\rVert \leq 2 \# D \max_{t \in D} \left\lVert\xi_t\right\rVert.
\end{align}$$

Putting everything together gives us $$\begin{align}
\Big|1-\frac{1}{\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}}\Big| \leq \frac{1 - L + 2 \# D \max\limits_{t \in D} \left\lVert\xi_t\right\rVert}{\eta}.
\end{align}$$

In the end, we get $$\begin{align}
2L^{k-1} \sum\limits_{j=1}^{k-1}\frac{1}{j!} \Big|1-\frac{1}{\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}}\Big| \left\lVert x^D\right\rVert_{\infty,[0,1]} \leq 2L^{k-1} e\,\frac{1 - L + 2 \# D \max\limits_{t \in D} \left\lVert\xi_t\right\rVert}{\eta} \left\lVert x^D\right\rVert_{\infty,[0,1]}.
\end{align}$$

Now, remark that since the path $x^D$ is piecewise linear, $$\begin{align}
\left\lVert x^D\right\rVert_{\infty,[0,1]} = \left\lVert x^D-x_0 + x_0\right\rVert_{\infty,[0,1]} & \leq \max\limits_{t \in D} \left\lVert x_t + \xi_t - x_0\right\rVert + \left\lVert x_0\right\rVert\\
& \leq \max\limits_{t \in D} \left\lVert x_t - x_0\right\rVert + \max\limits_{t \in D} \left\lVert\xi_t\right\rVert + \left\lVert x_0\right\rVert \\
& \leq \left\lVert x_0\right\rVert + L + \max\limits_{t \in D} \left\lVert\xi_t\right\rVert,
\end{align}$$ where the inequality $$\max\limits_{t \in D} \left\lVert x_t - x_0\right\rVert \leq L$$ follows from the definition of the total variation.
353
+
354
+ This means that $$\begin{align}
355
+ 2L^{k-1} \sum\limits_{j=1}^{k-1}\frac{1}{j!} \Big|1-\frac{1}{\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}}\Big| \left\lVert x^D\right\rVert_{\infty,[0,1]} \leq 2L^{k-1} e\frac{1 - L + 2 \# D \max\limits_{t \in D} \left\lVert\xi_t\right\rVert}{\eta} \Big(\left\lVert x_0\right\rVert + L + \max\limits_{t \in D} \left\lVert\xi_t\right\rVert\Big).
356
+ \end{align}$$
357
+
358
+ We have written these inequalities for a generic random variable. Let us now turn to the individual observations in our dataset.
359
+
360
+ On the set $A_\xi(\delta)$, one has $$\begin{align}
361
+ \max\limits_{i=1,\dots,n,t\in D^i}\left\lVert\xi^i_t\right\rVert \leq v_\xi \sqrt{d} + v_\xi \sqrt{c^{-1}\log(\delta^{-1}\# \mathcal{D})}.
362
+ \end{align}$$
363
+
364
+ To simplify notation, let us write $$\begin{align}
365
+ C_\delta := v_\xi \sqrt{d} + v_\xi \sqrt{c^{-1}\log(\delta^{-1}\# \mathcal{D})}.
366
+ \end{align}$$
367
+
368
+ We get $$\begin{align}
369
+ 2L^{k-1} \sum\limits_{j=1}^{k-1}\frac{1}{j!} \left\lVert x^i- x^{i,D}\right\rVert_{\infty,[0,1]} \leq 2L^{k-1}e \Big( \omega \abs{\mathcal{D}}+C_\delta \Big),
370
+ \end{align}$$ where we recall that $\mathcal{D}$ is the collection of individual grids, and $\abs{\mathcal{D}}$ is the largest sampling gap across individuals. Similarly, $$\begin{align}
371
+ 2L^{k-1} \sum\limits_{j=1}^{k-1}\frac{1}{j!} \Big|1-\frac{1}{\left\lVert x^D\right\rVert_{\textnormal{1-var},[0,1]}}\Big| \left\lVert x^D\right\rVert_{\infty,[0,1]} \leq 2L^{k-1}e \frac{1-L+2\#\mathcal{D}C_\delta}{\eta}\big(\left\lVert x_0\right\rVert+L+C_\delta\big)
372
+ \end{align}$$
373
+
374
+ Now moving to the feature matrices, we have $$\begin{align*}
375
+ \frac{1}{M}\left\lVert(\mathbf{S}_N - \mathbf{S}^\mathcal{D}_N)\theta^\ast_N\right\rVert^2_F & \leq \frac{1}{M} \sum\limits_{i=1}^n \sum\limits_{k=0}^N \left\lVert(\mathbf{S}_{i,[k]} - \mathbf{S}^\mathcal{D}_{i,[k]})\theta^*_{[k],\cdot}\right\rVert_F^2\\
376
+ & \leq \frac{1}{M} \sum\limits_{i=1}^n \sum\limits_{t \in \bar{D}^i} \sum\limits_{k=0}^N d^k \Lambda_k(\mathbf{F})^2\big(2eL^{k-1}\big(\omega \abs{\mathcal{D}} + C_\delta + \frac{1-L+ 2\# \mathcal{D} C_\delta}{\eta}\big(\left\lVert x_0\right\rVert+L+C_\delta\big)\big)\Big)^2\\
377
+ & \leq 4 e^2 \Bigg(\omega \abs{\mathcal{D}} + C_\delta + \frac{1-L+ 2\# \mathcal{D} C_\delta}{\eta}\big(\left\lVert x_0\right\rVert+L+C_\delta\big)\Bigg)^2 L^2 \sum\limits_{k=0}^N \frac{d^k \Lambda_k(\mathbf{F})^2}{k!^2} \times k!^2\\
378
+ & \leq 4 e^2 N!^2 \Bigg(\omega \abs{\mathcal{D}} + C_\delta + \frac{1-L+ 2\# \mathcal{D} C_\delta}{\eta}\big(\left\lVert x_0\right\rVert+L+C_\delta\big)\Bigg)^2 L^2 \sum\limits_{k=0}^N \frac{d^k \Lambda_k(\mathbf{F})^2}{k!^2}.
379
+ \end{align*}$$
380
+
381
+ Writing $$C_{\mathcal{D},N}(\delta) = 4e^2 L^2 N!^2 \Bigg(\omega \abs{\mathcal{D}} + C_\delta + \frac{1-L+ 2\# \mathcal{D} C_\delta}{\eta}\big(\left\lVert x_0\right\rVert+L+C_\delta\big)\Bigg)^2,$$ one finally gets with probability $1-\delta$ that $$\begin{align*}
382
+ \frac{1}{M}\left\lVert(\mathbf{S}_N - \mathbf{S}^\mathcal{D}_N)\theta^\ast_N\right\rVert^2_F \leq C_{\mathcal{D},N}(\delta) \sum \limits_{k=0}^N \frac{d^k \Lambda_k(\mathbf{F})^2}{k!^2}.
383
+ \end{align*}$$ ◻
384
+ :::
385
+
386
+ We finally combine all Lemmas to obtain the desired oracle bound.
387
+
388
+ ::: proof
389
+ *Proof.* First, we have from Lemma [3](#lemma:estimation_error){reference-type="ref" reference="lemma:estimation_error"} that on $A_\varepsilon(\Bar{\delta})$, $$\begin{align*}
390
+ \frac{1}{2M}\left\lVert\mathbf{y} - \mathbf{S}_N^{\mathcal{D}}\widehat \theta_{N,M}\right\rVert_\textnormal{F}^2 \leq \frac{1}{2M}\left\lVert\mathbf{y}-\mathbf{S}_N^{\mathcal{D}} \theta^\ast_N\right\rVert_\textnormal{F}^2 + \frac{2pC_N(\Bar{\delta})}{\sqrt{M}}\sum_{k=0}^N \frac{d^k \Lambda_k(\mathbf F)}{k!}.
391
+ \end{align*}$$ The first term of the right-hand side of this inequality is bounded by $$\begin{align*}
392
+ \frac{1}{2M}\left\lVert\mathbf{y}-\mathbf{S}_N^{\mathcal{D}} \theta^\ast_N\right\rVert_\textnormal{F}^2 \leq \frac{1}{M} \left\lVert\mathbf{y} - \mathbf{S}_N \theta^\ast_N\right\rVert_\textnormal{F}^2
393
+ & + \frac{1}{M}\left\lVert\mathbf{S}_N \theta^\ast_N - \mathbf{S}_N^{\mathcal{D}} \theta^\ast_N \right\rVert_\textnormal{F}^2.
394
+ \end{align*}$$
395
+
396
+ By Lemma [4](#lemma:bias_bound){reference-type="ref" reference="lemma:bias_bound"} and Lemma [5](#lemma:disc_error){reference-type="ref" reference="lemma:disc_error"}, this can in turn be bounded on $A_\varepsilon (\Bar{\delta}) \cap A_\xi(\delta)$ by $$\begin{align*}
397
+ \frac{1}{2M}\left\lVert\mathbf{y}-\mathbf{S}_N^{\mathcal{D}} \theta^\ast_N\right\rVert_\textnormal{F}^2 \leq \Bigg(\frac{d^{N+1}\Lambda_{N+1}(\mathbf{F})}{(N+1)!}\Bigg)^2 + C_{\mathcal{D},N}(\delta) \sum \limits_{k=0}^N \frac{d^k \Lambda_k(\mathbf{F})^2}{k!^2}
398
+ \end{align*}$$ Combining all the pieces, this finally gives us, on $A_\varepsilon (\Bar{\delta}) \cap A_\xi(\delta)$, $$\begin{align*}
399
+ \frac{1}{2M}\left\lVert\mathbf{y} - \mathbf{S}_N^{\mathcal{D}}\widehat \theta_{N,M}\right\rVert_\textnormal{F}^2 & \leq \Bigg(\frac{d^{N+1}\Lambda_{N+1}(\mathbf{F})}{(N+1)!}\Bigg)^2 \\
400
+ & +C_{\mathcal{D},N}(\delta) \sum \limits_{k=0}^N \frac{d^k \Lambda_k(\mathbf{F})^2}{k!^2} \\
401
+ & + \frac{2pC_N(\Bar{\delta})}{\sqrt{M}}\sum_{k=0}^N \frac{d^k \Lambda_k(\mathbf F)}{k!}.
402
+ \end{align*}$$ ◻
403
+ :::
404
+
405
+ We briefly discuss the asymptotic behaviour of the upper bound of the oracle inequality.
406
+
407
+ A natural question is whether the bias of our estimator vanishes as $N \to \infty$. If we have perfect sampling, i.e., the limit case where $\abs{\mathcal{D}} = 0$ and $v_\xi = 0$, our bound on the prediction error becomes, on $A_\varepsilon(\Bar{\delta})$, $$\begin{align*}
408
+ \frac{1}{2M}\left\lVert\mathbf{y} - \mathbf{S}_N^{\mathcal{D}}\widehat \theta_{N,M}\right\rVert_\textnormal{F}^2 & \leq \Bigg(\frac{d^{N+1}\Lambda_{N+1}(\mathbf{F})}{(N+1)!}\Bigg)^2 \\
409
+ & + \frac{2pC_N(\Bar{\delta})}{\sqrt{M}}\sum_{k=0}^N \frac{d^k \Lambda_k(\mathbf F)}{k!}.
410
+ \end{align*}$$
411
+
412
+ The first term of this bound vanishes as an immediate consequence of Assumption [3](#assump:decay_derivatives_F){reference-type="ref" reference="assump:decay_derivatives_F"}, while the second term is a statistical error term that behaves like $\frac{\sqrt{\log (Nd^N)}}{\sqrt{M}}$. In order to obtain asymptotic convergence, we thus need $N\log(dN) = o(M)$.
413
+
414
+ In the more realistic setting where $\abs{\mathcal{D}} > 0$, the discretization bias behaves like $L^{N-1} N! \abs{\mathcal{D}}$. It is thus sufficient to assume that $\abs{\mathcal{D}} = o(1/N!)$. If $v_\xi > 0$, our estimator remains biased because of the measurement noise, and this bias grows as $N \to \infty$. This is due to a "propagation of chaos" phenomenon: the difference between the unobserved feature path and the interpolated feature time series is amplified by taking the successive iterated integrals that define the signature. This advocates for using simple, low-order signature models in the presence of noise: the gain in precision obtained by taking a higher $N$ and reducing the truncation bias is at some point lost to the amplified noise.
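This trade-off can be sketched numerically. The snippet below is a hedged illustration only: the constants of the bound are dropped, and `Lam` stands in for the $\Lambda_k(\mathbf{F})$ bound as a fixed assumption.

```python
import math

def bias_terms(N, L, gap, Lam=1.0, d=1.0):
    """Illustrative magnitudes only (constants of the bound are dropped):
    truncation bias ~ d^(N+1) * Lam / (N+1)!  decays with N,
    discretization bias ~ L^(N-1) * N! * gap  grows with N."""
    trunc = d ** (N + 1) * Lam / math.factorial(N + 1)
    disc = L ** (N - 1) * math.factorial(N) * gap
    return trunc, disc

# Increasing the truncation order N shrinks one term and inflates the other.
small_N = bias_terms(2, 0.9, 1e-6)
large_N = bias_terms(6, 0.9, 1e-6)
```

Even with a tiny sampling gap, the factorial growth of the discretization bias eventually dominates the shrinking truncation bias as $N$ increases.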
415
+
416
+ Our oracle bound only depends on $p$ through the statistical error term. This term is proportional to $p \sqrt{\log p}$, which is expected in multitask regression.
417
+
418
+ Our oracle bound exhibits multiple dependencies on $d$. First, the truncation bias grows polynomially with $d$. Similarly, the discretization bias also depends polynomially on $d$. Finally, the statistical error term is proportional to $\log d$ times a term that is polynomial in $d$.
2302.05259/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2023-01-25T15:38:53.824Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36" etag="w7PBN1hIgmYXxySJii2l" version="20.8.11" type="device"><diagram name="Page-1" id="C09qoJ6vz-g3rqimnUE8">7Vzfc5s4EP5r/JgMkgDDY+Kk7cN15u7SuaZ9uVGMbNPIyIdlx85ffxJI/I7j2BioTSaToJW0ht3v0+4iJQM0mm8+h3gx+8o8QgfQ8DYDdDeA0LQc8VMKtrEAIjsWTEPfi0UgFTz4r0QJDSVd+R5Z5gZyxij3F3nhmAUBGfOcDIche8kPmzCa/9QFnpKS4GGMaVn63ff4LJY6cJjKvxB/OtOfDGw37pljPVipWM6wx15iUfRw6H6ARiFjPL6ab0aESttpu8QW+PRGb3JjIQn4PhP8n1749ftrMDOdv57//sdc278er/RzLPlWPzHxhAFUM2CB+HUrbn0hm8rILBSy8SpcE6kaiEbIVoEXtQzRYiGfsSkLMP2DsYUa8otwvlX+xSvOhGjG51T1ko3PHzPXP5QqeX23yTa2uhHwcPuYbfzINtJJUUvPoviJ0Fs8fp5GtzxiVD7MnUcmeEW5fFQesmdSlk9YwD/huU+lpi+Ergn3x1h1qMcCQLXL00ng3Ug0SiNSvFz6YyGMTS/t/aZLNXbYKhyTHX7U1MDhlPAd41ACPEFYwuZEGEfMCwnF3F/n7wMr6kyTcWqqeBK8zQxYMD/gy4zmP6VADFCLAHAUmdUaYJo5pIqLWKNuZW4tFUVo/giynR7Zl4Rssw1k6zU3QTZsANmW1SP7kpAN3VagbeyEdmk8sgrjURNUsHsqXBIVkNEJKoDdVND9jUYF9dBrTFfKDGVqUCrKJsmIl5nPycMCR055EYVbHs54uYhrqYm/kbS4nfiUalQMIPIs4nhmCW6ix4FPyLYTcKxJyMlmNzzK3tQTisFVpY3GS1qIadEsU4NpWZX/Mz74uIlRxwKvcW1l1huw53oTT/otVxwPL2eJGWtcftC+kfjYJPM4+J0dw0sLa+sM71g+0TO8Joab+zLcapPh5vkx/I1XP60x3KwwaasM/3jFALLsBj27dVzeg95vgLUZdsNG6Y2JMxlX0dseO+RpcpoAbsG2Azjq6X1+9Lb2pPexuxvHQc/sMPSSPPIj4Mskkj38Ol8e6ts8p/DidCy8QPfsMnTkdixDV/sG52Rjc9g1Gzf6triRtaKI49bXCgTPzsZFHLdvY2cPG2eSsDSduk+lx6dcB2Uex+5ilXdgzYJ3kHvt2HktcbKjJqbGL+uC7+uK86GSrro2tMx93ndfjG+Lq5vwhwvc5Ms50M1FQu9We2qPd6WCuthSvJlTVmUYokJ+5BoH4hka14YBAbJsgW1kuHZer1Pd3RS+u7J/fKH4PsGqbCVRMUmJrMOD7hvw1KpBrhuAZsFblWqZ8tsaeUwen4kaRYCLZJMXUlnqTwPpTOEcIsEtU1IBCHqjOua+58Wxmyz9V/wUqZIIVOd0hF7rdmDdSV0C20uFoBICFZ+ymbIS1ZAKI2DmlxYTlFJhpyIVRqdKha2qkq73T8rLtv2zO3vtXw9fRsgoldAy44aHxQsTva/rxFHB6vfbfjNE73McDu5ZDdRzMr2CI+B6CIaObaKhDW3HtQpLOcx2uweS551SQdxDi6WC1ZVSuD+I1mq0MNIVPVfDpl/gQPiXNSdvc5vCeMf+9rTHeDsYdyQSS4CuGe47P6Rx5FfV0nbi5xwn7P9WTHdcxbXUjRgA4GITeU33i6tp/FvWeZt/DV3wxXrFfcaq9agLqQOBkU8eTL2tkKkDh43WgVVHCGr2Peh9H/U6HfO
9XXW0oWbfw973shfaXfN91fu5mn0/GN5+uwKD4V2PgciXtv0uBtxGMQBPj4Fvve8jX1tF/sOW+V91nKFW33/u43513Lf00djWfF/1lx81+76P+5Vxv33fVx3Lrtn3fdzfGferMNBs3LdPj4E+7lfGfUvvedfPf9FM/3tf/Goo/ReI6P5/</diagram></mxfile>
2302.05259/main_diagram/main_diagram.pdf ADDED
Binary file (12.1 kB). View file
 
2302.05259/paper_text/intro_method.md ADDED
@@ -0,0 +1,194 @@
1
+ # Introduction
2
+
3
+ Deep generative models have shown outstanding sample quality in a wide variety of modalities. Generative Adversarial Networks (GANs) [@goodfellow2014generative; @karras2021alias], autoregressive models [@ramesh2021zero], Variational Autoencoders [@kingma2013auto; @rezende2014stochastic], Normalizing Flows [@grathwohl2018ffjord; @chen2019residual] and energy-based models [@xiao2020vaebm] show an impressive ability to synthesize realistic samples. However, GANs are not robust to the choice of architecture and optimization method [@arjovsky2017wasserstein; @gulrajani2017improved; @karras2019style; @brock2018large], and they often fail to cover all modes of the data distribution [@zhao2018bias; @thanh2020catastrophic]. Likelihood-based models avoid mode collapse but may overestimate the probability in low-density regions [@zhang2021understanding].
4
+
5
+ Recently, diffusion probabilistic models [@sohl2015deep; @ho2020denoising] have received a lot of attention. These models generate samples using a trained Markov process that starts with white noise and iteratively removes noise from the sample. Recent works have shown that diffusion models can generate samples comparable in quality or even better than GANs [@song2020score; @dhariwal2021diffusion], while they do not suffer from mode collapse by design, and also they have a log-likelihood comparable to autoregressive models [@kingma2021variational]. Moreover, diffusion models show these results in various modalities such as images [@saharia2021image], sound [@popov2021grad; @liu2022diffgan] and shapes [@luo2021diffusion; @zhou20213d].
6
+
7
+ The main principle of diffusion models is to destroy information during the forward process and then restore it during the reverse process. In conventional diffusion models like denoising diffusion probabilistic models (DDPM), the destruction of information occurs through the injection of Gaussian noise, which is reasonable for some types of data, such as images. However, for data distributed on manifolds, in bounded volumes, or with other structural features, the injection of Gaussian noise can be unnatural, breaking the data structure. Unfortunately, it is not clear how to replace the noise distribution within traditional diffusion models. The problem is that we have to maintain a connection between the distributions defining the Markov noising process that gradually destroys information and its marginal distributions. While some papers explore other distributions, such as delta functions [@bansal2022cold] or the Gamma distribution [@nachmani2021denoising], they provide ad hoc solutions for special cases that are not easily generalized.
8
+
9
+ In this paper, we present Star-Shaped Denoising Diffusion Probabilistic Models (SS-DDPM), a new approach that generalizes Gaussian DDPM to an exponential family of noise distributions. In SS-DDPM, one only needs to define marginal distributions at each diffusion step (see Figure [1](#fig:teaser){reference-type="ref" reference="fig:teaser"}). We provide a derivation of SS-DDPM, design efficient sampling and training algorithms, and show its equivalence to DDPM [@ho2020denoising] in the case of Gaussian noise. Then, we outline a number of practical considerations that aid in training and applying SS-DDPMs. In Section [5](#sec:experiments){reference-type="ref" reference="sec:experiments"}, we demonstrate the ability of SS-DDPM to work with distributions like von Mises--Fisher, Dirichlet and Wishart. Finally, we evaluate SS-DDPM on image and text generation. Categorical SS-DDPM matches the performance of Multinomial Text Diffusion [@hoogeboom2021argmax] on the `text8` dataset, while our Beta diffusion model achieves results, comparable to a Gaussian DDPM on CIFAR-10.
10
+
11
+ <figure id="fig:teaser" data-latex-placement="t">
12
+ <div class="minipage">
13
+ <img src="images/teaser_ddpm.png" />
14
+ </div>
15
+ <div class="minipage">
16
+ <img src="images/teaser_ss.png" />
17
+ </div>
18
+ <figcaption>Markovian forward processes of DDPM (left) and the star-shaped forward process of SS-DDPM (right).</figcaption>
19
+ </figure>
20
+
21
+ We start with a brief introduction of DDPMs. The Gaussian DDPM [@ho2020denoising] is defined as a forward (diffusion) process $q^{\scriptscriptstyle\mathrm{DDPM}}(x_{0:T})$ and a corresponding reverse (denoising) process $p_\theta^{\scriptscriptstyle\mathrm{DDPM}}(x_{0:T})$. The forward process is defined as a Markov chain with Gaussian conditionals: $$\begin{align}
22
+ q^{\scriptscriptstyle\mathrm{DDPM}}(x_{0:T})&= q(x_0){\prod_{t=1}^T} q^{\scriptscriptstyle\mathrm{DDPM}}(x_t|x_{t-1}),\label{eq:ddpm-1}\\
23
+ q^{\scriptscriptstyle\mathrm{DDPM}}(x_t|x_{t-1})&= \mathcal{N}\left(x_t; \sqrt{1-\beta_t}x_{t-1}, \beta_t\mathbf{I}\right),\label{eq:ddpm-2}
24
+ \end{align}$$ where $q(x_0)$ is the data distribution. Parameters $\beta_t$ are typically chosen in advance and fixed, defining the noise schedule of the diffusion process. The noise schedule is chosen in such a way that the final $x_T$ no longer depends on $x_0$ and follows a standard Gaussian distribution $q^{\scriptscriptstyle\mathrm{DDPM}}(x_T)=\mathcal{N}\left(x_T;0,\mathbf{I}\right)$.
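As a minimal sketch (scalar $x_t$ for brevity; the function name is illustrative, not from the paper), the Markov forward chain above can be simulated step by step:

```python
import math
import random

def ddpm_forward(x0, betas, rng=random):
    """Simulate the DDPM forward chain for a scalar x0:
    x_t ~ N(sqrt(1 - beta_t) * x_{t-1}, beta_t), one draw per step."""
    xs, x = [], x0
    for beta in betas:
        x = math.sqrt(1.0 - beta) * x + math.sqrt(beta) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs
```

With a schedule whose $\beta_t$ approach 1, the final $x_T$ is approximately standard Gaussian noise, independent of $x_0$.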
25
+
26
+ The reverse process $p_\theta^{\scriptscriptstyle\mathrm{DDPM}}(x_{0:T})$ follows a similar structure and constitutes the generative part of the model: $$\begin{align}
27
+ p_\theta^{\scriptscriptstyle\mathrm{DDPM}}(x_{0:T})&= q^{\scriptscriptstyle\mathrm{DDPM}}(x_T){\prod_{t=1}^T}p_\theta^{\scriptscriptstyle\mathrm{DDPM}}(x_{t-1}|x_t),\\
28
+ p_\theta^{\scriptscriptstyle\mathrm{DDPM}}(x_{t-1}|x_t)&= \mathcal{N}\left(x_{t-1}; \mu_\theta(x_t, t), \Sigma_\theta(x_t, t)\right).
29
+ \end{align}$$ The forward process $q^{\scriptscriptstyle\mathrm{DDPM}}(x_{0:T})$ of DDPM is typically fixed, and all the parameters of the model are contained in the generative part of the model $p_\theta^{\scriptscriptstyle\mathrm{DDPM}}(x_{0:T})$. These parameters are tuned to maximize the variational lower bound (VLB) on the likelihood of the training data: $$\begin{gather}
30
+ \mathcal{L}^{\scriptscriptstyle\mathrm{DDPM}}(\theta)=\mathbb{E}_{q^{\scriptscriptstyle\mathrm{DDPM}}}\left[\log p_\theta^{\scriptscriptstyle\mathrm{DDPM}}(x_0|x_1)-{\sum_{t=2}^{T}}{D_{KL}\left({q^{\scriptscriptstyle\mathrm{DDPM}}(x_{t-1}|x_t, x_0)}\,\middle\|\,{p_\theta^{\scriptscriptstyle\mathrm{DDPM}}(x_{t-1}|x_t)}\right)}\right]
31
+ \label{eq::KL}\\
32
+ \mathcal{L}^{\scriptscriptstyle\mathrm{DDPM}}(\theta)\to\max_{\theta}
33
+ \end{gather}$$
34
+
35
+ One of the main challenges in defining DDPMs is the computation of the posterior $q^{\scriptscriptstyle\mathrm{DDPM}}(x_{t-1}|x_t, x_0)$. Specifically, the transition probabilities $q^{\scriptscriptstyle\mathrm{DDPM}}(x_t|x_{t-1})$ have to be defined in such a way that this posterior is tractable. Specific DDPM-like models are available for Gaussian [@ho2020denoising], Categorical [@hoogeboom2021argmax] and Gamma [@kawar2022denoising] distributions. Defining such models remains challenging in more general cases.
36
+
37
+ <figure id="fig:model-structure" data-latex-placement="t">
38
+ <figure>
39
+ <img src="images/ddpm.png" />
40
+ <figcaption>Denoising Diffusion Probabilistic Models</figcaption>
41
+ </figure>
42
+ <figure>
43
+ <img src="images/ss.png" />
44
+ <figcaption>Star-Shaped Denoising Diffusion Probabilistic Models</figcaption>
45
+ </figure>
46
+ <figcaption>Model structure of DDPM and SS-DDPM.</figcaption>
47
+ </figure>
48
+
49
+ As previously discussed, extending the DDPMs to other distributions poses significant challenges. In light of these difficulties, we propose to construct a model that only relies on marginal distributions $q(x_t|x_0)$ in its definition and the derivation of the loss function.
50
+
51
+ We define star-shaped diffusion as a *non-Markovian* forward process $q^{\scriptscriptstyle\mathrm{SS}}(x_{0:T})$ that has the following structure: $$\begin{equation}
52
+ \label{eq:ss-forward}
53
+ q^{\scriptscriptstyle\mathrm{SS}}(x_{0:T})= q(x_0){\prod_{t=1}^T} q^{\scriptscriptstyle\mathrm{SS}}(x_t|x_0),
54
+ \end{equation}$$ where $q(x_0)$ is the data distribution. We note that in contrast to DDPM all noisy variables $x_t$ are conditionally independent given $x_0$ instead of constituting a Markov chain. This structure of the forward process allows us to utilize other noise distributions, which we discuss in more detail later.
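For contrast with the DDPM chain, a star-shaped forward pass draws every noisy variable directly from its marginal. The sketch below is a minimal Gaussian instance (scalar $x_0$; the schedule `alpha_bars` is an assumption matching the Gaussian schedule used later, and the function name is illustrative):

```python
import math
import random

def star_shaped_forward_gaussian(x0, alpha_bars, rng=random):
    """Star-shaped forward process with Gaussian marginals (scalar x0):
    each x_t ~ N(sqrt(abar_t) * x0, 1 - abar_t) is sampled independently
    given x0 -- there is no chaining between consecutive steps."""
    return [math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * rng.gauss(0.0, 1.0)
            for ab in alpha_bars]
```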
55
+
56
+ In DDPMs the true reverse model $q^{\scriptscriptstyle\mathrm{DDPM}}(x_{0:T})$ has a Markovian structure [@ho2020denoising], allowing for an efficient sequential generation algorithm: $$\begin{align}
57
+ q^{\scriptscriptstyle\mathrm{DDPM}}(x_{0:T})&=q^{\scriptscriptstyle\mathrm{DDPM}}(x_T){\prod_{t=1}^{T}} q^{\scriptscriptstyle\mathrm{DDPM}}(x_{t-1}|x_t).
58
+ \end{align}$$ For the star-shaped diffusion, however, the Markovian assumption breaks: $$\begin{align}
59
+ q^{\scriptscriptstyle\mathrm{SS}}(x_{0:T})&=q^{\scriptscriptstyle\mathrm{SS}}(x_T){\prod_{t=1}^{T}} q^{\scriptscriptstyle\mathrm{SS}}(x_{t-1}|x_{t:T}).
60
+ \end{align}$$ Consequently, we now need to approximate the true reverse process by a parametric model which is conditioned on the whole tail $x_{t:T}$. $$\begin{equation}
61
+ p_\theta^{\scriptscriptstyle\mathrm{SS}}(x_{0:T})= p_\theta^{\scriptscriptstyle\mathrm{SS}}(x_T){\prod_{t=1}^T} p_\theta^{\scriptscriptstyle\mathrm{SS}}(x_{t-1}|x_{t:T}).
62
+ \end{equation}$$
63
+
64
+ ::: wrapfigure
65
+ r0.5 ![image](images/ReverseProcessComparison.png){width="\\linewidth"}
66
+ :::
67
+
68
+ It is crucial to use the whole tail $x_{t:T}$ rather than just one variable $x_t$ when predicting $x_{t-1}$ in a star-shaped model. As we show in Appendix [8](#app:true-reverse){reference-type="ref" reference="app:true-reverse"}, if we try to approximate the true reverse process with a Markov model, we introduce a substantial irreducible gap into the variational lower bound. Such a sampling procedure fails to generate realistic samples, as can be seen in Figure [\[fig:markov-vs-general\]](#fig:markov-vs-general){reference-type="ref" reference="fig:markov-vs-general"}.
69
+
70
+ Intuitively, in DDPMs the information about $x_0$ that is contained in $x_{t+1}$ is nested into the information about $x_0$ that is contained in $x_t$. That is why knowing $x_t$ allows us to discard $x_{t+1}$. In star-shaped diffusion, however, all variables contain independent pieces of information about $x_0$ and should all be taken into account when making predictions.
71
+
72
+ We can write down the variational lower bound as follows: $$\begin{equation}
73
+ \mathcal{L}^{\scriptscriptstyle\mathrm{SS}}(\theta)=\mathbb{E}_{q^{\scriptscriptstyle\mathrm{SS}}}\left[\log p_\theta(x_0|x_{1:T})-{\sum_{t=2}^T}{D_{KL}\left({q^{\scriptscriptstyle\mathrm{SS}}(x_{t-1}|x_0)}\,\middle\|\,{p_\theta^{\scriptscriptstyle\mathrm{SS}}(x_{t-1}|x_{t:T})}\right)}\right]
74
+ \label{eq:ss-elbo}
75
+ \end{equation}$$
76
+
77
+ With this VLB, we only need the marginal distributions $q(x_{t-1}|x_0)$ to define and train the model, which allows us to use a wider variety of noising distributions. Since conditioning the predictive model $p_\theta(x_{t-1}|x_{t:T})$ on the whole tail $x_{t:T}$ is typically impractical, we propose a more efficient way to implement the reverse process next.
78
+
79
+ Instead of using the full tail $x_{t:T}$, we would like to define some statistic $G_t=\mathcal{G}_t(x_{t:T})$ that would extract all information about $x_0$ from the tail $x_{t:T}$. Formally speaking, we call $G_t$ a *sufficient tail statistic* if the following equality holds: $$\begin{equation}
80
+ q^{\scriptscriptstyle\mathrm{SS}}(x_{t-1}|x_{t:T})=q^{\scriptscriptstyle\mathrm{SS}}(x_{t-1}|G_t).
81
+ \end{equation}$$ One way to define $G_t$ is to concatenate all the variables $x_{t:T}$ into a single vector. This, however, is impractical, as its dimension would grow with the size of the tail $T-t+1$.
82
+
83
+ The Pitman--Koopman--Darmois [@pitman_1936] theorem (PKD) states that exponential families admit a sufficient statistic with constant dimensionality. It also states that no other distribution admits one: if such a statistic were to exist, the distribution has to be a member of the exponential family. Inspired by the PKD, we turn to the exponential family of distributions. In the case of star-shaped diffusion, we cannot apply the PKD directly, as it was formulated for i.i.d. samples and our samples are not identically distributed. However, we can still define a sufficient tail statistic $G_t$ for a specific subset of the exponential family, which we call an *exponential family with linear parameterization*:
84
+
85
+ ::: restatable
86
+ theoremmemory []{#theorem:memory label="theorem:memory"} Assume the forward process of a star-shaped model takes the following form: $$\begin{align}
87
+ \!\!q^{\scriptscriptstyle\mathrm{SS}}(x_t|x_0)&=h_t(x_t)\exp\left\{\eta_t(x_0)^\mathsf{T}\mathcal{T}(x_t)-\Omega_t(x_0)\right\},\label{eq:exponential-family}\\
88
+ \eta_t(x_0)&=A_tf(x_0)+b_t.\label{eq:linear-parameterization}
89
+ \end{align}$$ Let $G_t$ be a tail statistic, defined as follows: $$\begin{align}
90
+ G_t=\mathcal{G}_t(x_{t:T})&={\sum_{s=t}^T} A_s^\mathsf{T}\mathcal{T}(x_s).
91
+ \label{eq:tail-statistic}
92
+ \end{align}$$ Then, $G_t$ is a sufficient tail statistic: $$\begin{align}
93
+ q^{\scriptscriptstyle\mathrm{SS}}(x_{t-1}|x_{t:T})&=q^{\scriptscriptstyle\mathrm{SS}}(x_{t-1}|G_t).
94
+ \end{align}$$
95
+ :::
96
+
97
+ Here definition [\[eq:exponential-family\]](#eq:exponential-family){reference-type="eqref" reference="eq:exponential-family"} is the standard definition of the exponential family, where $h_t(x_t)$ is *the base measure*, $\eta_t(x_0)$ is the vector of *natural parameters* with corresponding *sufficient statistics* $\mathcal{T}(x_t)$, and $\Omega_t(x_0)$ is *the log-partition function*. The key assumption added is the *linear parameterization* of the natural parameters [\[eq:linear-parameterization\]](#eq:linear-parameterization){reference-type="eqref" reference="eq:linear-parameterization"}. We provide the proof in Appendix [\[app:proof-1\]](#app:proof-1){reference-type="ref" reference="app:proof-1"}. When $A_t$ is scalar, we denote it as $a_t$ instead.
98
+
99
+ For the most part, the premise of Theorem [\[theorem:memory\]](#theorem:memory){reference-type="ref" reference="theorem:memory"} restricts the parameterization of the distributions rather than the family of the distributions involved. As we discuss in Appendix [12](#app:different-families){reference-type="ref" reference="app:different-families"}, we found it easy to come up with linear parameterization for a wide range of distributions in the exponential family. For example, we can obtain a linear parameterization for the Beta distribution $q(x_t|x_0)=\mathrm{Beta}(x_t; \alpha_t, \beta_t)$ using $x_0$ as the mode of the distribution and introducing a new concentration parameter $\nu_t$: $$\begin{align}
100
+ \alpha_t&=1+\nu_tx_0,\\
101
+ \beta_t&=1+\nu_t(1-x_0).
102
+ \end{align}$$ In this case, $\eta_t(x_0)=\nu_tx_0$, $\mathcal{T}(x_t)=\log\frac{x_t}{1-x_t}$, and we can use equation [\[eq:tail-statistic\]](#eq:tail-statistic){reference-type="eqref" reference="eq:tail-statistic"} to define the sufficient tail statistic $G_t$. We provide more examples in Appendix [12](#app:different-families){reference-type="ref" reference="app:different-families"}. We also provide an implementation-ready reference sheet for a wide range of distributions in the exponential family in Table [1](#tab:different-families){reference-type="ref" reference="tab:different-families"}.
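A minimal sketch of this Beta noising step (the function name is illustrative; `random.betavariate` is the standard-library Beta sampler):

```python
import math
import random

def beta_noise_sample(x0, nu, rng=random):
    """Draw x_t ~ Beta(1 + nu*x0, 1 + nu*(1 - x0)) -- the linear
    parameterization above, with mode x0 and concentration nu --
    and return the sample together with its sufficient statistic
    T(x) = log(x / (1 - x))."""
    x = rng.betavariate(1.0 + nu * x0, 1.0 + nu * (1.0 - x0))
    return x, math.log(x / (1.0 - x))
```

A large $\nu_t$ keeps $x_t$ concentrated near $x_0$ (early steps), while $\nu_t \to 0$ recovers a uniform $\mathrm{Beta}(1,1)$ that carries no information about $x_0$.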
103
+
104
+ We suspect that, just like in the PKD, this trick is only possible for a subset of the exponential family. In the general case, the dimensionality of the sufficient tail statistic $G_t$ would have to grow with the size of the tail $x_{t:T}$. It is still possible to apply SS-DDPM in this case; however, crafting the (now only approximately) sufficient statistic $G_t$ would require more careful consideration, and we leave it for future work.
105
+
106
+ To maximize the VLB [\[eq:ss-elbo\]](#eq:ss-elbo){reference-type="eqref" reference="eq:ss-elbo"}, each step of the reverse process should approximate the true reverse distribution: $$\begin{equation}
107
+ \begin{split}
108
+ p_\theta^{\scriptscriptstyle\mathrm{SS}}(x_{t-1}|x_{t:T})\approx q^{\scriptscriptstyle\mathrm{SS}}(x_{t-1}|x_{t:T})
109
+ =\int q^{\scriptscriptstyle\mathrm{SS}}(x_{t-1}|x_0)q^{\scriptscriptstyle\mathrm{SS}}(x_0|x_{t:T})dx_0.
110
+ \end{split}
111
+ \end{equation}$$ Similarly to DDPM [@ho2020denoising], we choose to approximate $q^{\scriptscriptstyle\mathrm{SS}}(x_0|x_{t:T})$ with a delta function centered at the prediction of some model $x_\theta(\mathcal{G}_t(x_{t:T}), t)$. This results in the following definition of the reverse process of SS-DDPM: $$\begin{equation}
112
+ p_\theta^{\scriptscriptstyle\mathrm{SS}}(x_{t-1}|x_{t:T})=\left.q^{\scriptscriptstyle\mathrm{SS}}(x_{t-1}|x_0)\right|_{x_0=x_{\theta}(\mathcal{G}_t(x_{t:T}), t)}.
113
+ \end{equation}$$
114
+
115
+ The distribution $p_\theta^{\scriptscriptstyle\mathrm{SS}}(x_0|x_{1:T})$ can be fixed to some small-variance distribution $p^{\scriptscriptstyle\mathrm{SS}}_\theta(x_0|\hat{x}_0)$ centered at the final prediction $\hat{x}_0=x_\theta(\mathcal{G}_1(x_{1:T}), 1)$, similar to the dequantization term, commonly used in DDPM. If this distribution has no trainable parameters, the corresponding term can be removed from the training objective. This dequantization distribution would then only be used for log-likelihood estimation and, optionally, for sampling.
116
+
117
+ Together with the forward process [\[eq:ss-forward\]](#eq:ss-forward){reference-type="eqref" reference="eq:ss-forward"} and the VLB objective [\[eq:ss-elbo\]](#eq:ss-elbo){reference-type="eqref" reference="eq:ss-elbo"}, this concludes the general definition of the SS-DDPM model. The model structure is illustrated in Figure [2](#fig:model-structure){reference-type="ref" reference="fig:model-structure"}. The corresponding training and sampling algorithms are provided in Algorithms [\[alg:training\]](#alg:training){reference-type="ref" reference="alg:training"} and [3](#alg:sampling){reference-type="ref" reference="alg:sampling"}.
118
+
119
+ <figure id="alg:sampling">
120
+ <div class="minipage">
121
+ <div class="algorithm">
122
+ <div class="algorithmic">
123
+ <p><span class="math inline"><em>x</em><sub>0</sub> ∼ <em>q</em>(<em>x</em><sub>0</sub>)</span> <span class="math inline"><em>t</em> ∼ Uniform(1, …, <em>T</em>)</span> <span class="math inline"><em>x</em><sub><em>t</em> : <em>T</em></sub> ∼ <em>q</em><sup>SS</sup>(<em>x</em><sub><em>t</em> : <em>T</em></sub>|<em>x</em><sub>0</sub>)</span> <span class="math inline">$G_t = \sum_{s=t}^TA_s^\mathsf{T}\mathcal{T}(x_s)$</span> Move along <span class="math inline">∇<sub><em>θ</em></sub>KL(<em>q</em><sup>SS</sup>(<em>x</em><sub><em>t</em> − 1</sub>|<em>x</em><sub>0</sub>)∥<em>p</em><sub><em>θ</em></sub><sup>SS</sup>(<em>x</em><sub><em>t</em> − 1</sub>|<em>G</em><sub><em>t</em></sub>))</span></p>
124
+ </div>
125
+ </div>
126
+ </div>
127
+ <div class="minipage">
128
+ <div class="algorithm">
129
+ <div class="algorithmic">
130
+ <p><span class="math inline"><em>x</em><sub><em>T</em></sub> ∼ <em>q</em><sup>SS</sup>(<em>x</em><sub><em>T</em></sub>)</span> <span class="math inline"><em>G</em><sub><em>T</em></sub> = <em>A</em><sub><em>T</em></sub><sup>T</sup>𝒯(<em>x</em><sub><em>T</em></sub>)</span> <span class="math inline"><em>x̃</em><sub>0</sub> = <em>x</em><sub><em>θ</em></sub>(<em>G</em><sub><em>t</em></sub>, <em>t</em>)</span> <span class="math inline"><em>x</em><sub><em>t</em> − 1</sub> ∼ <em>q</em><sup>SS</sup>(<em>x</em><sub><em>t</em> − 1</sub>|<em>x</em><sub>0</sub>)|<sub><em>x</em><sub>0</sub> = <em>x̃</em><sub>0</sub></sub></span> <span class="math inline"><em>G</em><sub><em>t</em> − 1</sub> = <em>G</em><sub><em>t</em></sub> + <em>A</em><sub><em>t</em> − 1</sub><sup>T</sup>𝒯(<em>x</em><sub><em>t</em> − 1</sub>)</span> <span class="math inline"><em>x</em><sub>0</sub> ∼ <em>p</em><sub><em>θ</em></sub><sup>SS</sup>(<em>x</em><sub>0</sub>|<em>G</em><sub>1</sub>)</span></p>
131
+ </div>
132
+ </div>
133
+ </div>
134
+ <figcaption>SS-DDPM sampling</figcaption>
135
+ </figure>
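The sampling algorithm above can be sketched as follows (scalar $a_t$ in place of $A_t$ for brevity; `model` and `sample_q` are hypothetical stand-ins for the trained network $x_\theta$ and the marginal sampler):

```python
def ss_ddpm_sample(model, sample_q, a, T_fn, x_T, T):
    """Ancestral SS-DDPM sampling driven by the running tail statistic G_t.

    model(G, t)     -> predicted x0 (plays the role of x_theta(G_t, t))
    sample_q(x0, t) -> a draw from the marginal q(x_t | x0)
    a, T_fn         -> scalar coefficients a_t and sufficient statistic T(.)
    """
    G = a[T - 1] * T_fn(x_T)             # G_T = a_T * T(x_T)
    for t in range(T, 1, -1):            # t = T, ..., 2
        x0_hat = model(G, t)
        x_prev = sample_q(x0_hat, t - 1)  # x_{t-1} ~ q(. | x0_hat)
        G = G + a[t - 2] * T_fn(x_prev)   # G_{t-1} = G_t + a_{t-1} * T(x_{t-1})
    return model(G, 1)                    # final prediction of x0
```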
136
+
137
+ The resulting model is similar to DDPM in spirit. We follow the same principles when designing the forward process: starting from a low-variance distribution, centered at $x_0$ at $t=1$, we gradually increase the entropy of the distribution $q^{\scriptscriptstyle\mathrm{SS}}(x_t|x_0)$ until there is no information shared between $x_0$ and $x_t$ at $t=T$.
138
+
139
+ We provide concrete definitions for Beta, Gamma, Dirichlet, von Mises, von Mises--Fisher, Wishart, Gaussian and Categorical distributions in Appendix [12](#app:different-families){reference-type="ref" reference="app:different-families"}.
140
+
141
+ While the variables $x_{1:T}$ follow a star-shaped diffusion process, the corresponding tail statistics $G_{1:T}$ form a Markov chain: $$\begin{equation}
142
+ G_t={\sum_{s=t}^T} A_s^\mathsf{T}\mathcal{T}(x_s)=G_{t+1}+A_t^\mathsf{T}\mathcal{T}(x_t),
143
+ \end{equation}$$ since $x_t$ is conditionally independent from $G_{t+2:T}$ given $G_{t+1}$ (see Appendix [11](#sec:general-case){reference-type="ref" reference="sec:general-case"} for details). Moreover, we can rewrite the probabilistic model in terms of $G_t$ and see that variables $(x_0, G_{1:T})$ form a (not necessarily Gaussian) DDPM.
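The telescoping recursion for the tail statistic can be verified numerically. A minimal sketch for the Gaussian case, where $\mathcal{T}(x)=x$ and the matrices $A_t$ reduce to scalars (the weights below are arbitrary placeholders, not the paper's schedule):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10
a = rng.uniform(0.1, 1.0, size=T + 1)   # placeholder scalar weights A_t
x = rng.normal(size=T + 1)              # x_1 .. x_T stored at indices 1..T

# Full-sum definition: G_t = sum_{s=t}^T a_s * x_s  (T(x) = x here)
def G_full(t):
    return sum(a[s] * x[s] for s in range(t, T + 1))

# Markov recursion: G_T = a_T x_T, then G_t = G_{t+1} + a_t x_t
G = np.zeros(T + 2)
for t in range(T, 0, -1):
    G[t] = G[t + 1] + a[t] * x[t]
```

Both constructions agree for every timestep, which is exactly why $G_{1:T}$ can be grown step by step during sampling.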
144
+
145
+ In the case of Gaussian distributions, this duality makes SS-DDPM and DDPM equivalent. This equivalence can be shown explicitly:
146
+
147
+ ::: restatable
148
+ theoremequivalence []{#theorem:equivalence label="theorem:equivalence"} Let $\overline{\alpha}_t^{\scriptscriptstyle\mathrm{DDPM}}$ define the noising schedule for a DDPM model ([\[eq:ddpm-1\]](#eq:ddpm-1){reference-type="ref" reference="eq:ddpm-1"}--[\[eq:ddpm-2\]](#eq:ddpm-2){reference-type="ref" reference="eq:ddpm-2"}) via $\beta_t=(\overline{\alpha}^{\scriptscriptstyle\mathrm{DDPM}}_{t-1}-\overline{\alpha}^{\scriptscriptstyle\mathrm{DDPM}}_t)/\overline{\alpha}^{\scriptscriptstyle\mathrm{DDPM}}_{t-1}$. Let $q^{\scriptscriptstyle\mathrm{SS}}(x_{0:T})$ be a Gaussian SS-DDPM forward process with the following noising schedule and sufficient tail statistic: $$\begin{align}
149
+ q^{\scriptscriptstyle\mathrm{SS}}(x_t|x_0)&=
150
+ \mathcal{N}\left(x_t; \sqrt{\overline{\alpha}_t^{\scriptscriptstyle\mathrm{SS}}}x_0, 1-\overline{\alpha}_t^{\scriptscriptstyle\mathrm{SS}}\right),\\
151
+ \mathcal{G}_t(x_{t:T})&=
152
+ \frac{1-\overline{\alpha}^{\scriptscriptstyle\mathrm{DDPM}}_{t}}{\sqrt{\overline{\alpha}^{\scriptscriptstyle\mathrm{DDPM}}_{t}}}\sum_{s=t}^T\frac{\sqrt{\overline{\alpha}^{\scriptscriptstyle\mathrm{SS}}_s}x_s}{1-\overline{\alpha}^{\scriptscriptstyle\mathrm{SS}}_s},\text{ where}\\
153
+ \frac{\overline{\alpha}^{\scriptscriptstyle\mathrm{SS}}_t}{1-\overline{\alpha}^{\scriptscriptstyle\mathrm{SS}}_t}&=
154
+ \frac{\overline{\alpha}^{\scriptscriptstyle\mathrm{DDPM}}_{t}}{1-\overline{\alpha}^{\scriptscriptstyle\mathrm{DDPM}}_{t}}-\frac{\overline{\alpha}^{\scriptscriptstyle\mathrm{DDPM}}_{t+1}}{1-\overline{\alpha}^{\scriptscriptstyle\mathrm{DDPM}}_{t+1}}.
155
+ \label{eq:schedule-transform}
156
+ \end{align}$$ Then the tail statistic $G_t$ follows a Gaussian DDPM noising process $q^{\scriptscriptstyle\mathrm{DDPM}}(x_{0:T})|_{x_{1:T}=G_{1:T}}$ defined by the schedule $\overline{\alpha}_t^{\scriptscriptstyle\mathrm{DDPM}}$. Moreover, the corresponding reverse processes and VLB objectives are also equivalent.
157
+ :::
158
+
159
+ We show this equivalence in Appendix [10](#app:gaussian-equivalence){reference-type="ref" reference="app:gaussian-equivalence"}. We make use of this connection when choosing the noising schedule for other distributions.
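The schedule transform in the theorem is straightforward to compute. The sketch below converts a cosine DDPM schedule (used here as an assumed input) into the equivalent Gaussian SS-DDPM schedule via the odds difference in Eq. (schedule-transform); the telescoping sum of SS odds recovers the DDPM odds:

```python
import numpy as np

# Hypothetical input: a cosine DDPM schedule, padded by one extra step so
# that alpha_bar at t+1 is defined for t = T.
T = 100
steps = np.arange(T + 2)
alpha_bar_ddpm = np.cos((steps / (T + 1) + 0.008) / 1.008 * np.pi / 2) ** 2

# Schedule transform, written in terms of the "odds" abar / (1 - abar):
# odds_ss(t) = odds_ddpm(t) - odds_ddpm(t + 1).
odds = alpha_bar_ddpm / (1.0 - alpha_bar_ddpm)
odds_ss = odds[:-1] - odds[1:]
alpha_bar_ss = odds_ss / (1.0 + odds_ss)
```

Because the DDPM odds are strictly decreasing in $t$, the resulting SS odds are positive, so every $\overline{\alpha}^{\mathrm{SS}}_t$ lies in $(0,1)$.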
160
+
161
+ This equivalence means that SS-DDPM is a direct generalization of Gaussian DDPM. While admitting the Gaussian case, SS-DDPM can also be used to implicitly define a non-Gaussian DDPM in the space of sufficient tail statistics for a wide range of distributions.
162
+
163
+ While the model is properly defined, there are several practical considerations that are important for the efficiency of star-shaped diffusion.
164
+
165
+ <figure id="fig:ss-vs-ddpm-schedule-viz" data-latex-placement="t">
166
+ <div class="minipage">
167
+ <embed src="images/schedule-gauss.pdf" />
168
+ </div>
169
+ <div class="minipage">
170
+ <img src="images/ForwardProcessesComparison.png" />
171
+ </div>
172
+ <figcaption> Top: samples <span class="math inline"><em>x</em><sub><em>t</em></sub></span> from a Gaussian DDPM forward process with a cosine noise schedule. Bottom: samples <span class="math inline"><em>G</em><sub><em>t</em></sub></span> from a Beta SS-DDPM forward process with a noise schedule obtained by matching the mutual information. Middle: corresponding samples <span class="math inline"><em>x</em><sub><em>t</em></sub></span> from that Beta SS-DDPM forward process. The tail statistics have the same level of noise as <span class="math inline"><em>x</em><sub><em>t</em></sub><sup>DDPM</sup></span>, while the samples <span class="math inline"><em>x</em><sub><em>t</em></sub><sup>BetaSS</sup></span> are diffused much faster. </figcaption>
173
+ </figure>
174
+
175
+ It is important to choose the right noising schedule for an SS-DDPM model. The appropriate schedule depends significantly on the number of diffusion steps $T$ and behaves differently from the noising schedules typical of DDPMs. This is illustrated in Figure [\[fig:ss-vs-ddpm-schedule\]](#fig:ss-vs-ddpm-schedule){reference-type="ref" reference="fig:ss-vs-ddpm-schedule"}, where we show the noising schedules for Gaussian SS-DDPMs that are equivalent to DDPMs with the same cosine schedule.
176
+
177
+ Since the variables $G_t$ follow a DDPM-like process, we would like to reuse those DDPM noising schedules that are already known to work well. For Gaussian distributions, we can transform a DDPM noising schedule into the corresponding SS-DDPM noising schedule analytically by equating $I(x_0; G_t)=I(x_0; x_t^{\scriptscriptstyle\mathrm{DDPM}})$. In the general case, we look for schedules whose mutual information $I(x_0; G_t)$ approximately matches the corresponding mutual information $I(x_0; x_t^{\scriptscriptstyle\mathrm{DDPM}})$ of a DDPM model for all timesteps $t$. We estimate the mutual information using the Kraskov [@kraskov2004estimating] and DSIVI [@molchanov2019doubly] estimators and build a look-up table to match the noising schedules. This procedure is described in more detail in Appendix [13](#app:noise-schedule){reference-type="ref" reference="app:noise-schedule"}. The resulting schedule for the Beta SS-DDPM is illustrated in Figure [4](#fig:ss-vs-ddpm-schedule-viz){reference-type="ref" reference="fig:ss-vs-ddpm-schedule-viz"}. Note how, with the right schedule, appropriately normalized tail statistics $G_t$ look and function similarly to the samples $x_t$ from the corresponding Gaussian DDPM. We further discuss this in Appendix [14](#app:tail-norm){reference-type="ref" reference="app:tail-norm"}.
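A minimal sketch of the look-up-table matching, with a made-up monotone MI table standing in for the Kraskov/DSIVI estimates (only the Gaussian DDPM target MI is exact here, for $x_0 \sim \mathcal{N}(0,1)$):

```python
import numpy as np

# Target: per-step mutual information of a Gaussian DDPM with x0 ~ N(0, 1),
# available in closed form: I(x0; x_t) = 0.5 * log(1 + abar / (1 - abar)).
T = 50
t = np.arange(1, T + 1)
abar = np.cos((t / T + 0.008) / 1.008 * np.pi / 2) ** 2
target_mi = 0.5 * np.log1p(abar / (1.0 - abar))

# Made-up look-up table for a non-Gaussian family: schedule parameters on a
# grid, each with an MI estimate obtained offline (a monotone placeholder).
param_grid = np.linspace(0.01, 100.0, 1000)
mi_table = 0.5 * np.log1p(param_grid)

# Invert the table: for each timestep, pick the parameter whose estimated MI
# matches the DDPM target (np.interp requires increasing mi_table).
schedule = np.interp(target_mi, mi_table, param_grid)
```

Since the target MI decreases with $t$ and the table is monotone, the matched schedule parameters decrease monotonically over the diffusion.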
178
+
179
+ During sampling, we can grow the tail statistic $G_t$ without any overhead, as described in Algorithm [3](#alg:sampling){reference-type="ref" reference="alg:sampling"}. However, during training, we need to sample the tail statistic for each object to estimate the loss function. For this we need to sample the full tail $x_{t:T}$ from the forward process $q^{\scriptscriptstyle\mathrm{SS}}(x_{t:T}|x_0)$, and then compute the tail statistic $G_t$. In practice, this does not add a noticeable overhead and can be computed in parallel to the training process if needed.
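Tail sampling during training vectorizes naturally. A Gaussian-case sketch with a hypothetical schedule (the weights follow the form of the tail statistic in the theorem, omitting the time-dependent normalizing prefactor):

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 100, 16
abar = np.linspace(0.999, 1e-4, T)   # hypothetical SS-DDPM noising schedule
x0 = rng.normal(size=D)              # one data point

def tail_statistic(t):
    """Sample the tail x_{t:T} from q^SS(x_s | x0) and aggregate it into the
    Gaussian tail statistic G_t, with weights a_s = sqrt(abar_s)/(1 - abar_s)."""
    ab = abar[t:]
    noise = rng.normal(size=(len(ab), D))
    xs = np.sqrt(ab)[:, None] * x0 + np.sqrt(1.0 - ab)[:, None] * noise
    a = np.sqrt(ab) / (1.0 - ab)
    return a @ xs

G_t = tail_statistic(10)
```

All $T - t + 1$ tail samples are drawn independently given $x_0$, so the whole computation is a single batched matrix operation per training example.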
180
+
181
+ We can sample from DDPMs more efficiently by skipping some timesteps. This does not directly carry over to SS-DDPM, because changing the number of steps would require changing the noising schedule and, consequently, retraining the model.
182
+
183
+ However, we can still use a similar trick to reduce the number of function evaluations. Instead of skipping the timesteps $x_{t_1+1:t_2-1}$, we can draw them from the forward process using the current prediction $x_\theta(G_{t_2}, t_2)$, and then use these samples to obtain the tail statistic $G_{t_1}$. For Gaussian SS-DDPM this is equivalent to skipping these timesteps in the corresponding DDPM. In the general case, it amounts to approximating the reverse process with a different reverse process: $$\begin{equation}
184
+ p_\theta^{\scriptscriptstyle\mathrm{SS}}(x_{t_1:t_2}|G_{t_2})={\prod_{t=t_1}^{t_2}} \left.q^{\scriptscriptstyle\mathrm{SS}}(x_t|x_0)\right|_{x_0=x_\theta(G_{t}, t)}\approx{\prod_{t=t_1}^{t_2}} \left.q^{\scriptscriptstyle\mathrm{SS}}(x_t|x_0)\right|_{x_0=x_\theta(G_{t_2}, t_2)}.
185
+ \label{eq:reducing}
186
+ \end{equation}$$ We observe a similar dependence on the number of function evaluations for SS-DDPMs and DDPMs.
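A sketch of this reduced-NFE step in the Gaussian case, with a stand-in denoiser (`x_theta` and the schedule are placeholders, not the trained model):

```python
import numpy as np

rng = np.random.default_rng(1)
abar = np.linspace(0.999, 1e-4, 101)     # hypothetical SS-DDPM schedule

def x_theta(G, t):
    # Stand-in for the trained denoiser; a real model predicts x0 from (G_t, t).
    return np.tanh(0.01 * G)

def skip_steps(G_t2, t1, t2):
    """Instead of calling the model at every step in (t1, t2), sample the
    skipped x_t from the forward process conditioned on a single prediction
    x_theta(G_t2, t2), accumulating the Gaussian tail statistic down to G_t1."""
    x0_hat = x_theta(G_t2, t2)
    G = G_t2
    for t in range(t2 - 1, t1 - 1, -1):
        x_t = np.sqrt(abar[t]) * x0_hat \
            + np.sqrt(1.0 - abar[t]) * rng.normal(size=x0_hat.shape)
        G = G + (np.sqrt(abar[t]) / (1.0 - abar[t])) * x_t
    return G

G_t1 = skip_steps(np.zeros(4), 90, 100)
```

Only one network evaluation is spent on the whole block of steps; the intermediate samples cost a few random draws each.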
187
+
188
+ As defined in Theorem [\[theorem:memory\]](#theorem:memory){reference-type="ref" reference="theorem:memory"}, the tail statistics can have vastly different scales at different timesteps. The values of the coefficients $a_t$ can range from thousandths as $t$ approaches $T$ to thousands as $t$ approaches zero. To make the tail statistics suitable for use in neural networks, proper normalization is crucial. In most cases, we collect the time-dependent means and variances of the tail statistics across the training dataset and normalize the tail statistics to zero mean and unit variance. We further discuss this issue in Appendix [14](#app:tail-norm){reference-type="ref" reference="app:tail-norm"}.
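The normalization step might look like this (synthetic statistics standing in for a real training set):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, D = 20, 1000, 8
# Synthetic tail statistics: scale varies over several orders of magnitude
# across timesteps, mimicking the behaviour of the coefficients a_t.
G = np.logspace(-3, 3, T)[:, None, None] * rng.normal(size=(T, N, D))

# Collect time-dependent statistics over the training set once, then use them
# to normalize the tail statistics to zero mean and unit variance.
mean = G.mean(axis=1, keepdims=True)
std = G.std(axis=1, keepdims=True)
G_norm = (G - mean) / std
```

The per-timestep `mean`/`std` tables are computed once and reused at sampling time, so normalization adds no per-step cost.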
189
+
190
+ To make training the model easier, we make some minor adjustments to the neural network architecture and the loss function.
191
+
192
+ Our neural networks $x_\theta(G_t, t)$ take the tail statistic $G_t$ as an input and are expected to produce an estimate of $x_0$ as an output. In SS-DDPM the data $x_0$ might lie on some manifold, like the unit sphere or the space of positive definite matrices. Therefore, we need to map the neural network output to that manifold. We do that on a case-by-case basis, as described in Appendices [15](#app:synthetic-exp){reference-type="ref" reference="app:synthetic-exp"}--[18](#app:image-exp){reference-type="ref" reference="app:image-exp"}.
193
+
194
+ Different terms of the VLB can have drastically different scales. For this reason, it is common practice to train DDPMs with a modified loss function like $L_{simple}$ rather than the VLB to improve the stability of training [@ho2020denoising]. Similarly, we can optimize a reweighted variational lower bound when training SS-DDPMs.
2302.06091/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2022-11-18T09:06:28.733Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" version="20.5.3" etag="5bDhxMK5Vih4rgZLjny_" type="device"><diagram id="lJXDkkAOv7T4iDxHcm2y" name="Page-1">7Vxbc6O4Ev41qZp5sAuJ++NMNtlN1dna1E6dmd1HbBSbHYw8QCbx/vojgYTRxRhsge2qk5cgIWHU/an1davFnX2/ef81j7br33GM0jtoxe939i93EDo+9Mg/WrOra1zbBXXNKk/ius7aV3xJ/kV1JeC1r0mMClZXV5UYp2WyFSuXOMvQshTqojzHb2KzF5zGQsU2WiGl4ssyStXab0lcruta27as/Y3fULJas592vIB12US8NWtarKMYv7Wq7Ic7+z7HuKyvNu/3KKXiEwXzeOBu82Y5yspeHayQvUi548NDMRktK6J0gd8e9hWf1+UmJfWAXL7grGTKgQ4po/ek/Ivdo9d/k2tr7rLSL1T/Fi/sWEF9XzaEAr/mS/ZCHqujL9ZqxEb0K8IbVOY70iBHaVQmP0U1RUzbq6Zd0/UZJ+RnocWg6dt+3YUBEwDpEWWUr1DJeu2l+inPo12r2ZY2KDp+xw2E34EObD+OXNRP5KXWGPdVlSL1SuXy+hmlr0wWipJz/JrFKGZqeFsnJfqyjSqJv5FpK2k6SdN7nOK86ms/Pj549/ekvihz/B3xOxnOUJdKf6K8RO+dCmR3Z1BUBPQZSt/2083no1y3ZhoIwsNKb8m3U3y/v7gPP+K3+M37tPv2p239+BYVMwCczomC83KNVziL0vZs2UuZinHf5j8Yb1nlP6gsd2waRa8lFiXP55Q1YE4Ruee7v9qFVi9a3HerSsPmIjNc9VQYa372VRV/m7GQfn//4D4+Toj0mdsP6GS5OV96gdcJaTbKNhobZM2h2wYXMAGtQEUWH+X0yAo0wPLSkkJhG2WC0Lwfr3S5/ryIlt9XFdpmyxoon+ibrRYfoEWMB4Eq+VULwnB/7bof9/3J1Yr+3+ICkZuA/yB51fo369sdAAenA9wAlm3LFrAcAgXLwHJVLNvB+eqygSKWI1A2aFh9I5Y1VOEPjMP/JL4y8yRaJHFKpfncFTp49mF6o/SGltQbQGlNr4WmsLBTiFI3+e00gL5gAK2xEOCMggBF6sBztDo+Rn1PEHp40LJSh4L6Y9FSNKxrlP5EZbKMVFs5mVHGebJKCHPjP7jI5VdQzHRTXY+rrk421Lm8VhPO3VJuwS3Fgge+asAd6zDc+hpwAC+44OYoTpYlkT/0og2VdrYotlUDi+nr9lZjly+/O664ucotuYIFbboGtGkr2twk2QdSk+KiALUy6CVt9LFDhmdQdhPTwZeXMqA6osDTcBrXgB8KPM2UkCRF4b/tP9ImJhYt+BOsTgkAR/RPAo1F0GAIQBMmwTk+/mIdbellSWCB/sW06+ctyhPyYyhv1z/vK48D6h3xqGNVTpPtV3ad45IsszgjxVlo6dD3+BiGevR5B5Ry2DkE4TwU/kSCDVVlNIJvayMwoQxXUQbKljgmApV1QkZXihLVes9tybGqKE1WVLRLIrRKU1RWZN1PP7EbmySOq8CKToWi1TChAGDNLeFPnAyOalA9jfhtA+K3Vej3829mdAg8Olu7Kx70T3Jz4DE3hz6lNc/6E1/A5DqB76MwX9sBko2H/jhh3xmQ/CjfbT9O1156M9uRMDPMkZKDo+YcKZ9vZ/SF5yBYjO78SIsc8EdzfviQbsz7IctoVrzgfEM5sqX6QIwlD3GEFMBcC3GWfCDgTucEcYJznPHVrOQ3ZmTbDCVOcsLvaopCJhN9ZYG1gKDTLJ9P
GR15F01HUxwtTRFXWwNRXNibQvcTUqOg86UkomzmaIRkjcWsoeqeKWK5JLOm64lm1/EgtYbDqbU0y6ek0lD1a2J0W1R6uMCBKO8JqbPrD+QmDf9tCmfun7ZpreuOwmmGclEvEFMQHLbh1Y9aeoE/9xw3sNmfzFKlvUlzPJNbydbUaQXvrHrnzCCvaDId+q+XB2eA50JRTECz16sJJclk9JQ54ARD5wDfHmu2xP5u3Tkjl4elughzgmt1+l3eZptjsGCGJDmNmpDhaHaNwosJlL+NMEHxP4yRsnB6cbVzFPqiWbQB1ETN+V6ZMEsNRM0dNcamSGoIWx0tCqzdGNKzd+Pc1elB6W+JuzZKvwnu6qjxi1vjrsMFfkHuqmJ7zOVpbj6py9VwYD6Fx47rQUsOgdqupBRzkT13cDLdCRTLOJO4QtfEEaPkDksY6eeZSJ1tpwmhm3dGOLQvH8ulmuPxW2p2+LXlaGK5xS4r16ggK1l8p4vl1k7UxUK5JuncjGeVNHQOaOicdrH0fQPW2zZiEqxpTYLgXHga6x2aNhO95anyY5pD0gG/M842GICfLcHP4eVjQV7bQHK3e6rLb8KzNZMRy8PUE3CHoYuUCySPiCUgH9rLBT7san/2kR+ebzRR/vM8CD1R43boHFF5VTotPcDR4ABeBQ5AIOHA697Td6Hb1f5sHPiHOcmUaZSHt4gteJBbGCQRYyZSAmfKTEpuN07gEPBKSITOjAPj07e3QE8nZf4VC5RP/AuclOo+GGk8RuGPcPIMqgK94NEzeEEbXp89uzkjLZ89A9w5n+LwWXC6SYGGt9d0VInj6QJIds3Y2omDYkcFOpE8/1ug/I8F3Twj/dJogVLRGrAok2oN6I1ZUe1VUDMAgu27ZqpHeVRT8eozFHmSrTrjO9uUzOs1TmOUi9/PyMhj6oqv5CnVnlJV36AgaImHIeQK9gkwaf2SVh/fWJN2KDNjiaBry/nE3J1v2yJOKs/cMSDFFkr0szCceIEe4WCk5qMDIzmAakQ5EFeWEbcQQjW7JYpj1ZXKcNKZ5zIw6mXuawZ2KEY5bMubQ9VXcjTr8P542jkLTqg7tj8d9qWEMTNRMO4qXeHBcN+SP5jTfTLct846Gu4F8lGNvmfD1Ud58qM8u4HqCGn/PB3iOoB59CRPL2Byl/MagclzbKUTHQeBCeVFm38Yq3+PQYdzNFDueTqnB5Rtey6nI5mEsi5LiS1R8kbheQz1iYYMZ8/EJZ09sejhh6fnp48V5orvB9lqU7046Ldea3LK+R8Qgspy2+zYmc5OAZYuF2sUKOxBUOGBIuH56fn/SBCQIFoBh4c2BChoUhdNQMHpsb6NfVZdZuq6LMVQM34zaYnwuABu6LC6cwL8xNSbSbMS1fNMt3Y4fbjAL3Y4nRT3HyitGcT+Q6/2w/8A</diagram></mxfile>
2302.06091/main_diagram/main_diagram.pdf ADDED
Binary file (25.8 kB). View file
 
2302.06091/paper_text/intro_method.md ADDED
@@ -0,0 +1,81 @@
1
+ # Introduction
2
+
3
+ Single-particle cryo-electron microscopy (cryo-EM) is one of the most important structural biology techniques [\[1\]](#page-8-0). This technique can be divided into four stages: biological sample preparation, electron microscopy image collection, 3-dimensional (3D) reconstruction, and atomic structural model building. 3D reconstruction of molecular volumes from 2-dimensional (2D) electron microscopy images is the most challenging and time-consuming step in cryo-EM data analysis. There are two major challenges in cryo-EM 3D reconstruction: unknown projection poses (orientations and positions) and low signal-to-noise ratio (SNR). During electron microscopy imaging, the 3D poses of the biological molecules in the sample cannot be directly measured. The SNR of a typical cryo-EM image is very low, varying from -10 dB to -20 dB (average around -10 dB) [\[2\]](#page-8-1), which makes it extremely challenging to accurately estimate poses and perform the 3D reconstruction.
4
+
5
+ Currently, many machine-learning (ML)-based methods have been proposed for cryo-EM 3D reconstruction [\[3\]](#page-8-2), utilizing architectures like GANs [\[4\]](#page-8-3) and autoencoders [\[5\]](#page-8-4). Nevertheless, ML-based methods for cryo-EM 3D reconstruction are still at an early stage. The highest possible cryo-EM reconstruction resolution (2 pixels) at FSC threshold 0.5 has not been achieved by ML-based methods, even on simulated datasets without noise. On experimental datasets like the 80S, some amortized inference methods failed to reconstruct objects of certain sizes, or could not reach the highest possible resolution (2 pixels) at the half-map FSC [\[6\]](#page-8-5) threshold of 0.143 [\[7\]](#page-8-6). In the widely used autoencoder architecture, an image-to-pose encoder extracts the projection poses from 2D input cryo-EM images, while a pose-to-image decoder generates images to match the inputs. However, as the poses are intermediate latent variables without a supervised loss, the estimated poses can be inaccurate and easily trapped in local minima of the orientation space. These pose estimation errors lead to inferior resolution in the 3D reconstruction output.
6
+
7
+ To improve the pose estimation and 3D reconstruction quality, here we propose a new framework called ACE-EM (ACE for Asymmetric Complementary autoEncoder). In particular, ACE-EM consists of two training tasks: (1) Image-pose-image (IPI). The task is the same as previous work, which takes projection images as inputs, and outputs predicted images, by an image-to-pose encoder followed by a pose-to-image decoder. (2) Pose-image-pose (PIP). The task explicitly learns the pose estimation in a self-supervised fashion, which takes randomly sampled poses as inputs, and outputs predicted poses, using the same encoder and decoder as in IPI but reversing their order. The two tasks complement each other and achieve a more balanced training of the autoencoder parameters.
8
+
9
+ Our main contributions are listed below.
10
+
11
+ - As far as we know, ACE-EM is the first deep-learning model in cryo-EM reconstruction that enhances the pose estimation by the self-supervised PIP task. With better pose estimation, ACE-EM can converge much faster than previous methods, efficiently cover more pose spaces, and achieve better cryo-EM 3D reconstruction quality.
12
+ - ACE-EM can boost performance regardless of decoder types. For example, some decoders, that failed in previous autoencoder architectures, can be successfully used in ACE-EM.
13
+ - Experimental results demonstrate that ACE-EM performs well on both simulated and real-world experimental datasets. In particular, on the 80S experimental dataset ACE-EM outperformed all baseline methods and reached the Nyquist resolution of 2 pixels (the highest possible resolution) both at FSC threshold 0.5 and at the half-map FSC threshold of 0.143, making it the only amortized-inference architecture to reach the Nyquist resolution.
14
+
15
+ # Method
16
+
17
+ ACE-EM is an autoencoder-based model and is trained with two unsupervised learning tasks. The autoencoder contains an image-to-pose encoder ($E_{IP}$) and a pose-to-image decoder ($D_{PI}$). The $E_{IP}$ takes projection images and outputs projection pose parameters. The $D_{PI}$ can be viewed as a cryo-EM image projection physics simulator: it takes the pose parameters and outputs projection images corresponding to the given poses, which are post-processed by applying the CTF [\[28\]](#page-9-10). The first task of ACE-EM is the image-pose-image (IPI) task, which follows the standard unsupervised autoencoder architecture. Through the pipeline of $E_{IP}$ and $D_{PI}$, reconstructed images are generated corresponding to the given projection images. The second task is the pose-image-pose (PIP) task, a self-supervised learning task that uses the same encoder and decoder as IPI but in reverse order. With the reversed pipeline, a corresponding pose is predicted from a randomly selected pose, and the gap between the two poses is minimized during training. The IPI and PIP tasks can be trained simultaneously or in alternating steps. With either training strategy, the $E_{IP}$ and $D_{PI}$ parameters are shared between the two tasks. As a result, the two tasks of ACE-EM complement each other and yield a more balanced training of the $E_{IP}$ and $D_{PI}$ parameters.
18
+
19
+ The $E_{IP}$ represents a function that maps an input image $Y_i$ to its corresponding projection pose parameters $(R_i,t_i)$ . $R_i \in SO(3) \subset \mathbb{R}^{3 \times 3}$ is a rotation matrix for mapping a reference orientation to the projection orientation. $t_i \in \mathbb{R}^2$ is the 2D translation vector to account for the 2D image shift in the input image $Y_i$ . The $D_{PI}$ takes pose $(R_i,t_i)$ and predicts the projection image $Y_i^{pred}$ . The trainable parameters of $D_{PI}$ contain the cryo-EM volume in an explicit or implicit way, and the volume can be obtained from the $D_{PI}$ after training. CTF was applied to the output images to create a more realistic projection image $Y_i^{pred}$ . The details of the encoder and decoder structure can be found in appendix A.
20
+
21
+ $$E_{IP}: Y_i \mapsto (R_i, t_i) \tag{1}$$
22
+
23
+ $$D_{PI}: (R_i, t_i) \mapsto Y_i^{pred} \tag{2}$$
24
+
25
+ **Choices of decoders** ACE-EM provides an architecture that can work with different encoders and decoders. Cryo-EM 3D reconstruction can use either real-space or frequency-space representations in the spatial domain, and either neural-network or voxel-grid representations for the volume parameters. To show that ACE-EM is universal and can boost performance regardless of the decoder type, we tested a real-space voxel grid decoder VoxelGrid<sub>R</sub>, which was used in cryoGAN [15] and partially in CryoPoseNet [22], and a frequency-space neural network decoder FourierNet, which was shown to outperform other similar methods [7].
26
+
27
+ The IPI task follows the standard autoencoder architecture as shown in Figure 1. Using the notations defined earlier, the IPI task can be formally defined as follows.
28
+
29
+ $$IPI: Y_i \mapsto Y_i^{pred} \tag{3}$$
30
+
31
+ $$f_{IPI}(Y_i) := (D_{PI} \circ E_{IP})(Y_i) \tag{4}$$
32
+
33
+ **IPI loss function** Since both the input and output are images, the objective of the IPI task is to minimize their difference with a mean squared error (MSE) loss, computed for each training batch of size $B$ with image edge length $L$.
34
+
35
+ $$\mathcal{L}_{image} := \frac{1}{BL^2} \sum_{i=1}^{B} \left\| Y_i - Y_i^{pred} \right\|_2 \tag{5}$$
36
+
37
+ Cryo-EM 3D reconstruction is prone to spurious 2-fold planar mirror symmetry [7]. A tentative solution is to use the "symmetric loss" employed in cryoAI [7], where $\Gamma_{\rm cryoAI}$ is the cryo-AI autoencoder pipeline and $R_{\pi}$ represents an in-plane rotation of angle $\pi$ operation applied to the input image $Y_i$ .
38
+
39
+ $$\mathcal{L}_{\text{symm}}^{\text{cryoAI}} := \frac{1}{BL^2} \sum_{i=1}^{B} \min(\|Y_i - \Gamma_{\text{cryoAI}}(Y_i)\|_2, \|R_{\pi}[Y_i] - \Gamma_{\text{cryoAI}}(R_{\pi}[Y_i])\|_2) \tag{6}$$
41
+
42
+ As a form of data augmentation, we generalize the above symmetric loss to include in-plane image rotations by arbitrary angles as well as mirror transformations, yielding the "generalized symmetric loss" below. $A_1[Y_i]$ and $A_2[Y_i]$ are two different affine transformations with random image rotations and/or mirror flipping. We found that the generalized symmetric loss still works with the voxel-grid-based decoder $VoxelGrid_R$ but fails with FourierNet, so it is only applied when using $VoxelGrid_R$.
43
+
44
+ $$\mathcal{L}_{\text{symm}}^{G} := \frac{1}{BL^{2}} \sum_{i=1}^{B} \min(\|A_{1}[Y_{i}] - f_{IPI}(A_{1}[Y_{i}])\|_{2}, \|A_{2}[Y_{i}] - f_{IPI}(A_{2}[Y_{i}])\|_{2}) \tag{7}$$
47
+
48
+ In addition to the image loss, we also add an L1-regularization term on the predicted image shifts to avoid unrealistically large shift predictions and keep the reconstructed 3D object near the origin of the coordinate system.
49
+
50
+ $$\mathcal{L}_{IPI} := \mathcal{L}_{\text{symm}}^{G} + \frac{1}{2B} \sum_{i=1}^{B} \left\| t_{i}^{pred} \right\|_{1} \tag{8}$$
53
+
54
+ <span id="page-3-0"></span>**IPI warm-up labeling** High-frequency features are difficult to learn at an early training stage, especially for projection images with low SNR. We found that training results improve when low-pass filtered input images are used as training labels instead of the original images, as a training warm-up. The training image label $\tilde{Y}_i$ is defined below. The details of $f_{\text{filter-k}}$ can be found in appendix A. $f_{\text{filter-1}}$ is the first Gaussian filter with the lowest Gaussian convolution kernel variance, while $f_{\text{filter-k}}$ is the k-th Gaussian filter with a higher kernel variance. $N_{\text{warm-up}}^{IPI}$ is the iteration threshold for switching to the original input images. In practice, k is set to 3, 4, or 5.
55
+
56
+ $$\tilde{Y}_i = \gamma Y_i + (1 - \gamma) f_{\text{filter-k}}(Y_i) \tag{9}$$
57
+
58
+ $$\gamma = \begin{cases}
+ 0, & \text{if iteration} < N_{\text{warm-up}}^{IPI} \\
+ 1, & \text{if iteration} \geq N_{\text{warm-up}}^{IPI}
+ \end{cases} \tag{10}$$
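A sketch of the warm-up labeling rule, with a simple separable Gaussian filter standing in for the paper's $f_{\text{filter-k}}$ (the kernel width and $\sigma$ are assumptions):

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian low-pass filter, a stand-in for f_filter-k."""
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, rows)

def warmup_label(Y, iteration, n_warmup):
    """Eq. (9)-(10): low-pass filtered label before the warm-up threshold,
    the original image afterwards (gamma is 0 or 1)."""
    gamma = 0.0 if iteration < n_warmup else 1.0
    return gamma * Y + (1.0 - gamma) * gaussian_blur(Y)
```

Before the threshold the model only has to fit the smoothed image; afterwards the label switches back to the raw input.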
63
+
64
+ The PIP task is similar to the IPI task but with $E_{IP}$ and $D_{PI}$ placed in reverse order. In addition, Gaussian noise $\epsilon \sim \mathcal{N}(\mu, \sigma^2 I)$ is added to the output image of $D_{PI}$ to create more realistic inputs for $E_{IP}$. The $\mu$ and $\sigma$ of the Gaussian noise are sampled based on the background of the input images or on a given SNR. Compared to the IPI task, the inputs of PIP are no longer images but synthesized projection parameters $(R_i, t_i)$ drawn from chosen distributions. The loss function is defined over the rotation matrices $\{R_i\}_{i\in\mathcal{B}}$ and translation vectors $\{t_i\}_{i\in\mathcal{B}}$ for each batch $\mathcal{B}$ of size $B$ as shown below, where $\|\cdot\|_F$ is the Frobenius matrix norm.
65
+
66
+ $$PIP: (R_i^{syn}, t_i^{syn}) \mapsto (R_i^{pred}, t_i^{pred}) \tag{11}$$
67
+
68
+ $$f_{PIP}(R_i^{syn}, t_i^{syn}) := E_{IP}(D_{PI}(R_i^{syn}, t_i^{syn}) + \epsilon) \tag{12}$$
70
+
71
+ $$\mathcal{L}_{PIP} = \frac{1}{B} \sum_{i=1}^{B} \left( \frac{1}{9} \left\| R_i^{syn} - R_i^{pred} \right\|_2 + \frac{1}{2} \left\| t_i^{syn} - t_i^{pred} \right\|_1 \right) \tag{13}$$
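Generating the synthetic PIP inputs requires sampling poses over the whole pose space. One standard way to draw approximately uniform rotations on SO(3) is QR decomposition of a Gaussian matrix; this is shown as an assumed implementation detail (the sampler and the translation range are not specified in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_poses(batch_size, max_shift=0.05):
    """Synthetic PIP inputs: rotations ~ (approximately) uniform on SO(3)
    via QR of a Gaussian matrix, plus small uniform 2-D translations."""
    R = np.empty((batch_size, 3, 3))
    for i in range(batch_size):
        Q, r = np.linalg.qr(rng.normal(size=(3, 3)))
        signs = np.where(np.diag(r) < 0, -1.0, 1.0)
        Q = Q * signs                   # fix column signs (Haar-uniform on O(3))
        if np.linalg.det(Q) < 0:        # reflect into SO(3) if needed
            Q[:, 0] = -Q[:, 0]
        R[i] = Q
    t = rng.uniform(-max_shift, max_shift, size=(batch_size, 2))
    return R, t

R_syn, t_syn = sample_poses(4)
```

Each sampled matrix is orthogonal with determinant +1, i.e. a valid rotation for the $D_{PI}$ simulator.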
73
+
74
+ With only IPI, the pose estimates of $E_{IP}$ do not cover the whole pose space, and some parts of the pose space are never reached. By applying PIP, in which the input pose is randomly sampled from the whole pose space, $E_{IP}$ is forced to match image-pose pairs over all possible poses. PIP therefore gives $E_{IP}$ a proper output distribution and exposes the model to an unlimited supply of pose-image pairs rather than only the images in the dataset. With a substantially correct volume representation in $D_{PI}$, PIP effectively turns the unsupervised task into a supervised image-pose matching task. When training the PIP task, one can either freeze the $D_{PI}$ parameters and train only $E_{IP}$, or allow both the $E_{IP}$ and $D_{PI}$ parameters to be updated.
75
+
76
+ The IPI and PIP tasks can be trained together or in succession. When trained together, we designed the following training schedule with a warm-up period of $N_{\text{warm-up}}^{\text{train}}$ iterations, which allows the IPI task to learn an approximate description of the underlying 3D object before adding the PIP task. Although we found the following simple schedule was sufficient in our benchmark tests, other schedules of $\beta$ are also possible.
77
+
78
+ $$\mathcal{L}_{total} = \mathcal{L}_{IPI} + \beta \mathcal{L}_{PIP} \tag{14}$$
79
+
80
+ $$\begin{cases} \beta = 0, & \text{if iteration} < N_{\text{warm-up}}^{\text{train}} \\ \beta > 0, & \text{if iteration} \geq N_{\text{warm-up}}^{\text{train}} \end{cases} \tag{15}$$
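The training schedule above amounts to a few lines (a sketch; the names are illustrative):

```python
def total_loss(loss_ipi, loss_pip, iteration, n_warmup, beta_max=1.0):
    """IPI-only warm-up, then joint training with PIP weight beta (Eq. 14-15)."""
    beta = 0.0 if iteration < n_warmup else beta_max
    return loss_ipi + beta * loss_pip
```

During the warm-up iterations only the IPI term is optimized; afterwards the PIP term is switched on with a fixed positive weight.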
2302.08712/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-12-31T15:43:41.247Z" agent="5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" etag="NiN6tvKKEhR7uj_CZqgt" version="16.1.0" type="device"><diagram id="prtHgNgQTEPvFCAcTncT" name="Page-1">7VzbcqM4EP0aV+0+ZApJiMtjnGQ2W7WzlarMdV9SjFFsajAigMf2fP1KIAyScNlOuGWcJ6MGBDrdp1vdEp6gq+Xmr8SLFx+oT8IJNPzNBF1PIAQGctgPl2yFxDaFZJ4EvpBVgvvgFylvFdJV4JNUujCjNMyCWBbOaBSRWSbJvCSha/myRxrKT429OdEE9zMv1KVfAj9bFFIH2pX8lgTzRflkYLnFmaVXXixGki48n65rInQzQVcJpVlxtNxckZCjV+JS3Pd+z9ndiyUkyo654faT+e+9u4pu/l477z//F38iFzcXopefXrgSA86EKM22JQYJXUU+4d0YEzRdL4KM3MfejJ9dM7Uz2SJbhqwF2KHokCQZ2ex9U7AbP7McQpckS7bsEnEDFoht5ea6wh+UoC5q2FtC5gmVz3cdV6iwAwHMCSBBHSQ4OEjAHhlKSEcJDY4SdEeGkqmjZA6OkglGhhLWUcKDo4TRyFCydJSswVGyxua9bR0le3CU7LF5b0dHyRkcJWds3tvVUXIHRwkYY3Pf5fvUcNJQIpF/yefnrBXRiAmnvpcuctiADBGX33lZRpIol0ADMWmaJfTHbk7O4JnKuBcPJL42vT8Iaw023IBaKUtI6GXBT7n7JijFE+5owB5caQ2YstaAoagjpatkRsRt9Ym92hM0DvSUecmcZFpPuW53A3+BupuyCEPTOLPwTNasFwZzrtMZUwlJmIDzIGDJ16U4sQx8n98+TUga/PK+511x7cZ8LPno8HSCr3lfq4ymRfoIduZxRUOaVBb2GIShImqDf8DFEv62zj/UYEiwM/rpCcsb/TTSQFlpL6Cfc6CnrunXkHkBPYv/bekHx0Y/PcebQCvk4D/S3HIqrVhPK1qeuCjwu2QXACfeVCfZ0Vz8eks+f4i+p3HeNt5EpehjsCTs544kAfVLvJn+CshLAM+EE8jRObCb7vVDAj3t0tBPF17MD2erJNxOE2/2g2SHZ8xVnOGtxzCIb8tjpmpRQQYOj1xBQmZZQKMc24QD0Qa4SlICUIO/AQ1Ydzfd1pM3wYYvQeTT9SuJBar62ogNphwaoIE1XYGm4IA605WeQg7KCzbLaYsXSuEH6rQAhtOnDyo73scLHflz4QWGqq6G5gXUc0hJV/pSwrnoyrFHp6sj8stXGttNNLbgDvXs7ncB23ZGB7aeuv3x9BDmUyjWo/H0sGBvAf48W2cEFH7ABpWBpkSjO2ekrxW+qUyeluGxqUxfuCxUhuoqQ+erMgdbisrQ0CrT0/lcY0z0z/3HD5qmTloD68JPAWVNZBcz6rHF7DO2oCMCea0qPwu9NA1m+0K20YAb2QTZV37uHUJAtL/lbddFon29ETfnjW2twStnbKScRYUsYqP+Wm8UfeGyWXWVt8q+ikEdrvyzgecV9iMcRVFAP2iewy3lKVFxN2k5eS0BOHvM9sBaArMab1u7THi3va/sGq78HFfaPMgOih5bXahAut/vjAEmhjUGgHdGiwyw+6QAfhUUUPdGIbWocywDHNVvu90QANu2/MJiltQtAY4oir+QAI0mCw7Ya0UbhrdMG/Nk1rRp/OarMH4LtuT+LbXU05H3B5Zi/LAP76+vUhQT/VczhzTNweeQR6wetBVBbcPtLILCPiPosU6kMNChnIhpthRBTWV9crcfomUnApHsREzcQwQtQeqFAAif1xRyWAJYbRHA6okAVlmaLwn
g9EEAfZ2sbQI8M++vE8eWiQNP5k2b5n9sEeHN/E8rIdimbP52D5NIE/Zn/pXFf5MMvnuHbR9psRgOabFA/QTx2SYLMJDXKFBXiY/6oF7qXmbnld/jjW/sNmWWaeLOq1jPtillflpui2/fpuzGV+7WpgYqJbVZ/x+7LULlu6rnfyKgVoi0hbq2bFF9ZdPowb/hpqB84oZ32LThPf+KSOz0zrvL1xwf4oQZLTRowm2odsm8uITXlMQ1x+0CZ0MM4pQcLjJ5aVz8x8RjsOG86WbLqqVE1abP+Zp2xQC17Nje59idBjB4gtMYzBdgu5lYz/AFeyja1+dCWN/jdBnRpRduNZ2Ofr8FbIlzlgFlzrk653r9fALru5rqvu9c9aQWSZu+9ep1iznWl7SlAHS2ikLyDibT0XcwtaQo1qz+RKlwktV/UaGb/wE=</diagram></mxfile>
2302.08712/main_diagram/main_diagram.pdf ADDED
Binary file (17 kB). View file
 
2302.08712/paper_text/intro_method.md ADDED
@@ -0,0 +1,131 @@
1
+ # Introduction
2
+
3
+ Anomalies indicate a departure of a system from its normal behaviour. In industrial systems, they often lead to failures. By definition, anomalies are rare events. As a result, from a machine learning standpoint, collecting and classifying anomalies poses significant challenges. For example, when anomaly detection is posed as a classification problem, it leads to extreme class imbalance (the data paucity problem). Though several current approaches use semi-supervised neural networks to detect anomalies [@Forero:2019; @Sperl2020], these approaches still require some labeled data. More recently, there have been approaches that model the normal dataset and treat any deviation from it as an anomaly. For instance, the autoencoder-based family of models [@Jinwon:2015] uses some form of threshold to detect anomalies. Another class of approaches relies on the reconstruction error [@Sakurada:2019] as an anomaly score: if the reconstruction error of a datapoint is higher than a threshold, the datapoint is declared an anomaly. However, the threshold value can be specific to the domain and the model, and choosing a threshold on the reconstruction error can be cumbersome.
4
+
5
+ In this paper, we have introduced the notion of *quantiles* in multiple versions of the LSTM-based anomaly detector. Our proposed approach rests on the following principles:\
6
+
7
+ - training models on a normal dataset
8
+
9
+ - modeling temporal dependency
10
+
11
+ - proposing an adaptive solution that does not require manual tuning of the activation function
12
+
13
+ Since our proposed model tries to capture the normal behavior of an industrial device, it does not require any expensive dataset labeling. Our approach also does not require re-tuning of threshold values across multiple domains and datasets. We show through empirical results later in the paper (see Table [\[table:datasetschar\]](#table:datasetschar){reference-type="ref" reference="table:datasetschar"} of Appendix [11](#appendix:datasetchar){reference-type="ref" reference="appendix:datasetchar"}) that distributional variance does not impact prediction quality. Our contributions are threefold:\
14
**(1)** Introduction of *quantiles*, free from assumptions on the data distribution, in the design of quantile-based LSTM techniques and their application to anomaly identification.\
**(2)** Proposal of the *Parameterized Elliot Function* as a 'flexible-form, adaptive, learnable' activation function in LSTM, where the parameter is learned from the dataset. Therefore, it does not require any manual retuning when the nature of the dataset changes. We show empirically that the modified LSTM architecture with [pef]{acronym-label="pef" acronym-form="singular+short"} performed better than the [ef]{acronym-label="ef" acronym-form="singular+short"}, and that such behavior might be attributed to the slower saturation rate of [pef]{acronym-label="pef" acronym-form="singular+short"}.\
**(3)** Demonstration of *superior performance* of the proposed [lstm]{acronym-label="lstm" acronym-form="singular+short"} methods over state-of-the-art (SoTA) deep learning (Autoencoder [@Yin], DAGMM [@zong2018deep], DevNet [@pang2019deep]) and non-deep learning algorithms ([if]{acronym-label="if" acronym-form="singular+short"} [@iforest], Elliptic Envelope [@envelope]).

The rest of the paper is organized as follows. The proposal and discussion of various LSTM-based algorithms are presented in Section [2](#sec:varlstm){reference-type="ref" reference="sec:varlstm"}. Section [3](#sec:background){reference-type="ref" reference="sec:background"} describes the LSTM structure and introduces the [pef]{acronym-label="pef" acronym-form="singular+short"}. This section also explains the intuition behind choosing a parameterized version of the activation function and the better variability it affords. Experimental results are presented in Section [4](#sec:experiment){reference-type="ref" reference="sec:experiment"}. Section [5](#related){reference-type="ref" reference="related"} discusses relevant literature in anomaly detection. We conclude the paper in Section [6](#sec:conclusion){reference-type="ref" reference="sec:conclusion"}.

Since *distribution-independent* and *domain-independent* anomaly detection are the two key motivations behind this work, we borrow the concept of quantiles from descriptive and inferential statistics to address this challenge.

# Method

Quantiles are used as a robust alternative to classical conditional means in econometrics and statistics [@koenker]. In a previous work, Tambwekar et al. [@Tambwekar:2022] extended the notion of conditional quantiles to the binary classification setting, allowing one to quantify the uncertainty in the predictions and to interpret the functions learnt by the models via a new loss called the binary quantile regression loss (sBQC). The estimated quantiles are leveraged to obtain individualized confidence scores that provide an accurate measure of a prediction being misclassified. Since quantiles naturally quantify uncertainty, they are a natural candidate for anomaly detection. To the best of our knowledge, however, quantile-based methods have not been used for anomaly detection, natural as they seem for the task.

Empirically, if the data being analyzed are not actually distributed according to an assumed distribution, or if there are potential sources of anomalies far removed from the mean, then quantiles may be more useful descriptive statistics than means and other moment-related statistics. Quantiles can be used to identify the probable range of normal data instances, so that data lying outside the defined range are conveniently identified as anomalies.
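As a toy illustration of this idea (not the authors' pipeline; the synthetic data and the [0.1, 0.9] quantile range are assumptions for the example), flagging points outside fixed sample quantiles takes only a few lines:

```python
import random
import statistics

random.seed(0)
# Mostly "normal" readings around 10, with two injected anomalies.
data = [random.gauss(10.0, 1.0) for _ in range(1000)]
data[100] = 25.0
data[500] = -5.0

# Decile cut points: deciles[0] is the 0.1 quantile, deciles[-1] the 0.9 quantile.
deciles = statistics.quantiles(data, n=10)
q_low, q_high = deciles[0], deciles[-1]

# Anything outside the [q_low, q_high] range is flagged as an anomaly.
flagged = [i for i, x in enumerate(data) if x < q_low or x > q_high]
print(len(flagged))  # both injected points are among the flagged indices
```

Note that fixed 0.1/0.9 thresholds necessarily flag roughly 20% of the points; the quantile-LSTM methods proposed below avoid this by forecasting the quantile band adaptively at each time step.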

An important aspect of distribution-free anomaly detection is that the anomaly threshold is agnostic to the data from different domains. Simply stated, once a threshold is set (in our case, the 0.1-0.9 quantile range), we do not need to tune it in order to detect anomalous instances in different datasets. Quantiles allow using distributions for many practical purposes, including constructing confidence intervals. Quantiles divide a probability distribution into areas of equal probability, i.e., they let us quantify the chance that a given parameter lies inside a specified range of values. This allows us to determine the confidence level of an event (anomaly) actually occurring.

Though the mean of a distribution is a useful measure when the distribution is symmetric, there is no guarantee that actual data distributions are symmetric. If potential sources of anomalies are far removed from the mean, then medians are more robust than means, particularly for skewed and heavy-tailed data. It is well known that quantiles minimize the check loss [@Horowitz:1992], a generalized version of the Mean Absolute Error (MAE) arising from medians rather than means. Thus, quantiles are less susceptible to long-tailed distributions and outliers than the mean [@Dunning2021].

Therefore, it makes practical sense to investigate the power of quantiles in detecting anomalies in data distributions. Unlike the anomaly detection methods in the literature, the quantile-based thresholds applied in our proposed quantile-LSTM are generic and not specific to a domain or dataset. The ability to isolate anomalies independently of the underlying distribution is significant, since it allows us to detect anomalies irrespective of assumptions on the underlying data distribution. In this paper, we introduce the notion of quantiles in multiple versions of the LSTM-based anomaly detector, namely (i) quantile-LSTM, (ii) iqr-LSTM, and (iii) Median-LSTM. All the LSTM versions are based on estimating quantiles instead of the mean behaviour of an industrial device. Note that the median is the $50\%$ quantile.

Before we discuss quantile-based anomaly detection, we describe the data structure and processing setup, with some notation. Let $x_i, i=1,2,\ldots,n$ be the $n$ time-series training datapoints. Let $T_k = \{ x_i: i=k,\cdots,k+t-1 \}$ be a time period of $t$ datapoints, and let $T_k$ be split into $w$ disjoint windows, each of integer size $m=\frac{t}{w}$, so that $T_k = \{ T_k^1,\cdots,T_k^w \}$. Here, $T_k^j = \{ x_{k+m(j-1)},\ldots,x_{k+mj-1}\}$. In Figure [1](#fig:movement){reference-type="ref" reference="fig:movement"}, we show the sliding characteristics of the proposed algorithm on a hypothetical dataset, with $t=9, m=3$. Let $Q_{\tau}(D)$ be the sample $\tau$-quantile of the datapoints in the set $D$. The training data consists of, for every $T_k$, $X_{k,\tau} \equiv \{Q_{\tau}(T_k^j)\}, j=1,\cdots,w$ as predictors, with $y_{k,\tau} \equiv Q_{\tau}(T_{k+1})$, the sample quantile at a future time step, as the label or response. Let $\hat{y}_{k,\tau}$ be the value predicted by an LSTM model.
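A minimal sketch of this training-set construction in plain Python (the names `sample_quantile` and `make_training_pairs` are ours, not the paper's; an LSTM would then be fit on the resulting pairs):

```python
def sample_quantile(values, tau):
    """Sample tau-quantile by linear interpolation over the sorted window."""
    s = sorted(values)
    pos = tau * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def make_training_pairs(x, t=9, w=3, tau=0.5):
    """For each period T_k = x[k:k+t], the predictors X_{k,tau} are the
    tau-quantiles of its w disjoint windows of size m = t // w; the label
    y_{k,tau} is the tau-quantile of the next period, shifted by one step."""
    m = t // w
    pairs = []
    for k in range(len(x) - t):
        period = x[k:k + t]
        feats = [sample_quantile(period[j * m:(j + 1) * m], tau) for j in range(w)]
        label = sample_quantile(x[k + 1:k + 1 + t], tau)
        pairs.append((feats, label))
    return pairs

pairs = make_training_pairs(list(range(1, 21)))  # x_1..x_20, t=9, m=3
# first instance: window medians of [1..9] -> [2, 5, 8]; label: median of [2..10] -> 6
```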

<figure id="fig:movement" data-latex-placement="!htp">
<img src="Figures/wtwme.png" style="width:80.0%" />
<figcaption>Sliding movement of a time period</figcaption>
</figure>


The general recipe we propose to detect anomalies is to: (i) estimate the quantile $Q_{\tau}(x_{k+t+1})$ with $\tau \in (0,1)$, and (ii) define a statistic that measures the outlier-ness of the data, given the observation $x_{k+t+1}$. Instead of using global thresholds, the thresholds are adaptive, i.e., they change at every time point depending on the quantiles.

As the name suggests, in quantile-LSTM we forecast two quantiles, $q_{low}$ and $q_{high}$, to detect the anomalies present in a dataset. We assume that the quantile values of the next time period, obtained after sliding the time period by one position, depend on the quantile values of the current time period.

<figure id="fig:mediumlstm" data-latex-placement="!htb">
<figure id="fig:quantilelstm">
<img src="Figures/quantile-lstm_v2.png" style="width:85.0%" />
<figcaption>Anomaly detection process using quantile-LSTM</figcaption>
</figure>
<figure id="fig:mediumlstm">
<img src="Figures/median-lstm_v1.png" style="width:85.0%" />
<figcaption>Anomaly detection process using median-LSTM</figcaption>
</figure>
<figcaption> The sigmoid function has been applied as a recurrent activation, on the outcome of the forget gate (<span class="math inline"><em>f</em><sub><em>t</em></sub> = <em>σ</em>(<em>W</em><sub><em>f</em></sub> * [<em>h</em><sub><em>t</em> − 1</sub>, <em>x</em><sub><em>t</em></sub>] + <em>b</em><sub><em>f</em></sub>)</span>) as well as the input gate (<span class="math inline"><em>i</em><sub><em>t</em></sub> = <em>σ</em>(<em>W</em><sub><em>i</em></sub> * [<em>h</em><sub><em>t</em> − 1</sub>, <em>x</em><sub><em>t</em></sub>] + <em>b</em><sub><em>i</em></sub>)</span>). <span data-acronym-label="pef" data-acronym-form="singular+short">pef</span> decides the information to store in the cell: <span class="math inline">$\hat{c}_{t}=PEF(W_c*[h_{t-1},x_t]+b_c)$</span>.</figcaption>
</figure>


It is further expected that the nominal range of the data can be gleaned from $q_{low}$ and $q_{high}$. Using the $q_{low}$ and $q_{high}$ values of the current time windows, we can forecast the $q_{low}$ and $q_{high}$ values of the next time period, obtained after sliding by one position. This requires building two LSTM models, one for $q_{low}$ (LSTM$_{qlow}$) and another for $q_{high}$ (LSTM$_{qhigh}$). Consider the hypothetical dataset of Figure [2](#fig:quantilelstm){reference-type="ref" reference="fig:quantilelstm"} as a training set. It has three time windows in the time period $x_1\cdots x_9$. Table [1](#table:firsttp){reference-type="ref" reference="table:firsttp"} defines the three time windows of the time period $x_1\cdots x_9$ and the corresponding $q_{low}$ and $q_{high}$ values for each time window.

::: {#table:firsttp}
  TW              $q_{low}$                           $q_{high}$
  --------------- ----------------------------------- --------------------------------------
  $x_1,x_2,x_3$   $X_{1,low} \equiv Q_{low}(T_1^1)$   $X_{1,high} \equiv Q_{high}(T_1^1)$
  $x_4,x_5,x_6$   $X_{2,low} \equiv Q_{low}(T_1^2)$   $X_{2,high} \equiv Q_{high}(T_1^2)$
  $x_7,x_8,x_9$   $X_{3,low} \equiv Q_{low}(T_1^3)$   $X_{3,high} \equiv Q_{high}(T_1^3)$

  : The first time period and its corresponding time windows
:::

The number of inputs to the LSTM equals the number of time windows $w$, with one output. Since three time windows have been considered for a time period in this example, both LSTM models will have three inputs and one output. For example, the LSTM predicting the lower quantile would have $X_{1,low}$, $X_{2,low}$, $X_{3,low}$ as its inputs and $y_{1,low}$ as its output, for one time period. A total of $n-t+1$ instances will be available for training the LSTM models, assuming no missing values.

After building the LSTM models, for each time period we predict the corresponding quantile values and slide one position to the next time period on the test dataset. quantile-LSTM applies the following anomaly identification approach: if the observed value $x_{k+t+1}$ falls outside the predicted $(q_{low}, q_{high})$ range, then the observation is declared an anomaly. For example, the observed value $x_{10}$ will be detected as an anomaly if $x_{10}< \hat{y}_{1,low}$ or $x_{10} > \hat{y}_{1,high}$. Figure [2](#fig:quantilelstm){reference-type="ref" reference="fig:quantilelstm"} illustrates the anomaly identification technique of quantile-LSTM on a hypothetical test dataset.
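The detection rule itself is simple once the two forecasts are available; a sketch, with hand-made predicted series standing in for the LSTM$_{qlow}$ and LSTM$_{qhigh}$ outputs:

```python
def quantile_lstm_flags(observed, low_preds, high_preds):
    """Flag index i when the observed value falls outside the predicted
    (q_low, q_high) band for that time step."""
    return [i for i, (x, lo, hi) in enumerate(zip(observed, low_preds, high_preds))
            if x < lo or x > hi]

# With these hypothetical forecasts, only the spike at index 2 is flagged:
print(quantile_lstm_flags([9.8, 10.1, 17.3, 9.9],
                          [9.0, 9.0, 9.0, 9.0],
                          [11.0, 11.0, 11.0, 11.0]))  # -> [2]
```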

iqr-LSTM is a special case of quantile-LSTM where $q_{low}$ is the 0.25 quantile and $q_{high}$ is the 0.75 quantile. In addition, another LSTM model predicts the median $q_{0.5}$ as well. Effectively, at every time index $k$, three predictions are made: $\hat{y}_{k,0.25},\hat{y}_{k,0.5}, \hat{y}_{k,0.75}$. Based on these, we define the Inter-Quartile Range (IQR) as $\hat{y}_{k,0.75} - \hat{y}_{k,0.25}$. Using the IQR, the following rule identifies an anomaly: when $x_{t+k+1} > \hat{y}_{k,0.5} + \alpha (\hat{y}_{k,0.75} - \hat{y}_{k,0.25})$ or $x_{t+k+1} < \hat{y}_{k,0.5} - \alpha (\hat{y}_{k,0.75} - \hat{y}_{k,0.25})$.
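For a single time step, the iqr-LSTM rule can be sketched as follows (the default $\alpha = 1.5$ is a placeholder borrowed from the classical boxplot convention; the value used in the paper's experiments is not stated in this excerpt):

```python
def iqr_lstm_is_anomaly(x, q25, q50, q75, alpha=1.5):
    """Flag x when it lies more than alpha * IQR away from the predicted median."""
    iqr = q75 - q25
    return x > q50 + alpha * iqr or x < q50 - alpha * iqr

print(iqr_lstm_is_anomaly(20.0, 4.0, 5.0, 6.0))  # IQR = 2, band = [2, 8] -> True
print(iqr_lstm_is_anomaly(5.5, 4.0, 5.0, 6.0))   # inside the band -> False
```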

Median-LSTM, unlike quantile-LSTM, does not identify the range of the normal datapoints; rather, based on a single LSTM, the distance between the observed value and the predicted median ($x_{t+k+1}-\hat{y}_{k,0.5}$) is computed, as depicted in Figure [4](#fig:mediumlstm){reference-type="ref" reference="fig:mediumlstm"}, and running statistics are computed on this derived data stream. The training set preparation is similar to quantile-LSTM.

To detect the anomalies, Median-LSTM uses an implicit adaptive threshold. It is not reasonable to have a single threshold value for an entire time series when the dataset exhibits seasonality and trends. We introduce some notation to make the description concrete. Adopting the same conventions as before, define $d_k \equiv x_{t+k+1}-Q_{0.5}(T_{k+1}), k=1,2,\hdots,n-t$ and partition the difference series into $s$ sets of size $t$ each, i.e., $D \equiv \{D_p, p=1,\hdots,s\}$, where $D_p = \{ d_i: i=(p-1)t+1,\hdots,pt \}$. After computing the differences on the entire dataset, for every window $D_p$, the mean ($\mu_p$) and standard deviation ($\sigma_p$) of the individual time period $D_p$ are computed. As a result, $\mu_p$ and $\sigma_p$ will differ from one time period to another. Median-LSTM detects the anomalies using the upper and lower threshold parameters of a particular time period $D_p$, computed as follows: $$T_{p,higher}=\mu_p+w\sigma_p; \quad T_{p,lower}=\mu_p-w\sigma_p$$ An anomaly is flagged for $d_k \in D_p$ when either $d_k > T_{p,higher}$ or $d_k < T_{p,lower}$. Now, what should be the probable value of $w$? If we consider $w=2$, any datapoint more than two standard deviations away from the mean on either side is considered an anomaly. This is based on the intuition that the differences of the normal datapoints should be close to the mean value, whereas the anomalous differences will be far from it: roughly 95.45% of datapoints lie within two standard deviations of the mean, so anomalies are likely to fall among the remaining 4.55%. We could consider $w=3$, where 99.7% of datapoints lie within three standard deviations. However, this may miss borderline anomalies, which are relatively close to the normal datapoints, and detect only the prominent ones. Therefore we have used $w=2$ across the experiments.
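The per-window thresholding can be sketched as follows (pure Python; `median_lstm_flags` is our illustrative name, operating on an already-computed difference series $d_k$):

```python
import statistics

def median_lstm_flags(diffs, t, w=2):
    """Partition the difference series into windows of size t and flag any d_k
    lying outside mu_p +/- w * sigma_p of its own window (w = 2 as in the text)."""
    flags = []
    for start in range(0, len(diffs), t):
        window = diffs[start:start + t]
        if len(window) < 2:          # too short to estimate a spread
            flags.extend([False] * len(window))
            continue
        mu = statistics.fmean(window)
        sigma = statistics.pstdev(window)
        flags.extend(d > mu + w * sigma or d < mu - w * sigma for d in window)
    return flags

diffs = [0.1, -0.1, 0.05, -0.05, 10.0, 0.0, 0.1, -0.1, 0.0, 0.05]
print(median_lstm_flags(diffs, t=10))  # only the 10.0 difference is flagged
```

Because $\mu_p$ and $\sigma_p$ are recomputed per window, the effective threshold adapts to local level and spread, which is the point of the implicit adaptive threshold described above.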

In this subsection, we analyze different datasets by computing the probability of occurrence of anomalies using the quantile approach. We have considered the 0.1, 0.25, 0.75, 0.9, and 0.95 quantiles and computed the probability of anomalies beyond these values, as shown in Table [\[table:probabilitybound\]](#table:probabilitybound){reference-type="ref" reference="table:probabilitybound"} of the appendix. The multivariate datasets are not considered, since every feature may follow a different quantile threshold; hence it is not possible to derive a single quantile threshold for all the features. It is evident from Table [\[table:probabilitybound\]](#table:probabilitybound){reference-type="ref" reference="table:probabilitybound"} of Appendix [7](#appendix:pobabilitybound){reference-type="ref" reference="appendix:pobabilitybound"} that the probability of a datapoint being an anomaly is high if the datapoint's quantile value is either higher than 0.9 or lower than 0.1. However, if we increase the threshold to 0.95, the probability becomes 0 across the datasets. This emphasizes that too high a quantile threshold fails to detect anomalies. It is necessary to identify the appropriate threshold value, and it is apparent from the table that most of the anomalies lie near the 0.9 and 0.1 quantile values. Table [\[table:probabilitybound\]](#table:probabilitybound){reference-type="ref" reference="table:probabilitybound"} also demonstrates the different nature of the anomalies present in the datasets. For instance, the anomalies of Yahoo Dataset$_1$ to Yahoo Dataset$_6$ lie near the 0.9 quantile value, whereas the anomalies in Yahoo Dataset$_7$ to Yahoo Dataset$_9$ are close to both the 0.9 and 0.1 quantile values. Therefore, it is possible to detect anomalies via two extreme quantile values. We can treat these extreme quantile values as higher and lower quantile thresholds and derive a lemma, whose proof is given in the appendix.

**Lemma 1:** For a univariate dataset $\mathcal{D}$, the probability of an anomaly is $\mathcal{P(A)}=\mathcal{P}(\mathcal{E} > \alpha_{high}) +\mathcal{P}(\mathcal{F}<\alpha_{low})$, where $\alpha_{high}, \alpha_{low}$ are the higher- and lower-level quantile thresholds respectively.

The lemma captures the fact that anomalies are trapped outside the high and low quantile threshold values. The bound is independent of the data distribution, as quantiles make only minimal distributional assumptions.

We introduce the novel Parameterized Elliot activation function ([pef]{acronym-label="pef" acronym-form="singular+short"}), an adaptive variant of the usual activation, and modify the LSTM architecture by replacing the activation function of the LSTM gates with [pef]{acronym-label="pef" acronym-form="singular+short"} as follows.

A single LSTM block is composed of four major components: an input gate, a forget gate, an output gate, and a cell state. We apply the Parameterized Elliot Function (PEF) as the activation.

[pef]{acronym-label="pef" acronym-form="singular+short"} is represented by $$\begin{equation}
\label{eq:pef}
f(x)= \frac{\alpha x}{1+|x|}
\end{equation}$$ with first-order derivative $f'(x)=\frac{\alpha}{(|x|+1)^2}$. At the origin, the function equals 0 and its derivative equals the parameter $\alpha$. After the introduction of the PEF, the hidden state equation is $h_t=O_t\,PEF(C_t)$, where $PEF(C_t)=\alpha_c\,Elliot(C_t)$. By the chain rule, $$\frac{\partial J}{\partial \alpha_c}=\frac{\partial J}{\partial h_t}O_t * Elliot(C_t)$$ After each iteration, $\alpha_c$ is updated by gradient descent: $\alpha_c^{(n+1)}=\alpha_c^{(n)}-\delta*\frac{\partial J}{\partial \alpha_c}$ (see Appendix [9](#appendix:backpropa){reference-type="ref" reference="appendix:backpropa"} for the back-propagation of the [lstm]{acronym-label="lstm" acronym-form="singular+short"} with [pef]{acronym-label="pef" acronym-form="singular+short"}). Salient features of the PEF are:

1. The $\alpha$ in equation [\[eq:pef\]](#eq:pef){reference-type="ref" reference="eq:pef"} is learned during back-propagation like the other weight parameters of the LSTM model. Hence this parameter, which controls the shape of the activation, is learned from data. Thus, if the dataset changes, so does the final form of the activation, which saves the "parameter tuning" effort.

2. The saturation of standard activation functions impedes training and prediction, and is an important barrier to overcome. While the PEF derivative also saturates as $|x|$ increases, its saturation rate is lower than that of other activation functions such as $\tanh$ and sigmoid.

3. [pef]{acronym-label="pef" acronym-form="singular+short"} further decreases the rate of saturation in comparison to the non-parameterized Elliot function.
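A self-contained sketch of the PEF and its two partial derivatives (plain Python; in the actual model, $\alpha$ is a trainable weight inside the LSTM, updated alongside the others, and the names below are ours):

```python
def pef(x, alpha):
    """Parameterized Elliot function: f(x) = alpha * x / (1 + |x|)."""
    return alpha * x / (1.0 + abs(x))

def pef_dx(x, alpha):
    """df/dx = alpha / (1 + |x|)^2; equals alpha at the origin."""
    return alpha / (1.0 + abs(x)) ** 2

def pef_dalpha(x):
    """df/dalpha = x / (1 + |x|), the plain Elliot function --
    the factor appearing in the gradient update of alpha."""
    return x / (1.0 + abs(x))

# One illustrative gradient-descent step on alpha with learning rate delta,
# given a hypothetical upstream gradient and cell value c_t:
alpha, delta, upstream_grad, c_t = 1.0, 0.1, 0.2, 3.0
alpha -= delta * upstream_grad * pef_dalpha(c_t)
print(round(alpha, 4))  # -> 0.985
```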

To the best of our knowledge, insights on 'learning' the parameters of an activation function are not available in the literature, beyond the standard smoothness or saturation properties activation functions are expected to possess. It is therefore worthwhile to investigate the possibility of learning an activation function within a framework or architecture, using the inherent patterns and variances of the data.

<figure id="fig:betaplot" data-latex-placement="!htb">
<figure id="fig:activationcomp">
<img src="Figures/activationcomp_v1.png" />
<figcaption>Derivative comparison of various activation functions.</figcaption>
</figure>
<figure id="fig:pefplot">
<img src="Figures/pef_plot_v2.png" />
<figcaption>LSTM layer values for 4 layers and 50 epochs using PEF as the activation function, on AWS2.</figcaption>
</figure>
<figure id="fig:sigmoidplot">
<img src="Figures/sigmoid_v2.jpg" />
<figcaption>LSTM layer values for 4 layers and 50 epochs using sigmoid as the activation function, on AWS2.</figcaption>
</figure>
<figure id="fig:tanhplot">
<img src="Figures/tanh_v2.jpg" />
<figcaption>LSTM layer values for 4 layers and 50 epochs using tanh as the activation function, on AWS2.</figcaption>
</figure>
<figure id="fig:sub-first">
<p><img src="Figures/tablediffalpha.png" alt="image" /> <span id="fig:sub-first" data-label="fig:sub-first"></span></p>
</figure>
<figure id="fig:betaplot">
<img src="Figures/plot_beta_v1.png" />
<figcaption>The final <span class="math inline"><em>α</em></span> value learned on each dataset. We can see that the final <span class="math inline"><em>α</em></span> value differs across datasets.</figcaption>
</figure>
<figcaption>Slow saturation rate, and behavioral comparison of the different layers of the <span data-acronym-label="lstm" data-acronym-form="singular+short">lstm</span> model after the introduction of <span data-acronym-label="pef" data-acronym-form="singular+short">pef</span>, against other activation functions. Also shown is the final value of the learned parameter <span class="math inline"><em>α</em></span> on various datasets.</figcaption>
</figure>


The derivative of the PEF can also be written as $f'(x)=\frac{\alpha}{x^2}EF^2(x)=\frac{\alpha}{(1+|x|)^2}$, where $EF(x)=\frac{x}{1+|x|}$ is the Elliot function. While the derivatives of sigmoid and tanh depend only on $x$, the derivative of the PEF depends on both $\alpha$ and $x$. Even if $\frac{EF^2(x)}{x^2}$ saturates, the learned parameter $\alpha$ helps the [pef]{acronym-label="pef" acronym-form="singular+short"} escape saturation. The derivatives of sigmoid and tanh saturate when $x>5$ or $x<-5$; this is not true of the PEF, as is evident from Figure [5](#fig:activationcomp){reference-type="ref" reference="fig:activationcomp"}.
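This comparison is easy to check numerically (a quick sketch; $\alpha=1$ for the PEF):

```python
import math

def d_sigmoid(x):
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

def d_tanh(x):
    return 1.0 - math.tanh(x) ** 2

def d_pef(x, alpha=1.0):
    return alpha / (1.0 + abs(x)) ** 2

# The PEF gradient decays only quadratically in |x|, whereas the sigmoid
# and tanh gradients decay exponentially, so past |x| = 5 the PEF gradient
# is the largest of the three.
for x in (1.0, 5.0, 10.0):
    print(f"x={x}: sigmoid'={d_sigmoid(x):.2e}  tanh'={d_tanh(x):.2e}  PEF'={d_pef(x):.2e}")
```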

As empirical evidence, the layer values at every epoch of the model are captured for various activation functions, namely [pef]{acronym-label="pef" acronym-form="singular+short"}, sigmoid, and tanh. We observe that after about 10 epochs the layer values become more or less constant for sigmoid and tanh (Figure [7](#fig:sigmoidplot){reference-type="ref" reference="fig:sigmoidplot"} and Figure [8](#fig:tanhplot){reference-type="ref" reference="fig:tanhplot"}), indicating that their values have already saturated, whereas for the PEF, variation can be seen up to 50 epochs (Figure [6](#fig:pefplot){reference-type="ref" reference="fig:pefplot"}). This shows that, in comparison to sigmoid and tanh, the PEF escapes saturation owing to its learned parameter $\alpha$. *The parameter $\alpha$ in [pef]{acronym-label="pef" acronym-form="singular+short"}* changes its value as the model trains on the training dataset. Since it is a self-trained parameter, it takes different values for different datasets at the end of training. These values are documented in Table 2 and plotted in Figure [11](#fig:betaplot){reference-type="ref" reference="fig:betaplot"}. Table 2 demonstrates the variation in $\alpha$ values across multiple datasets as these values get updated.