Add files using upload-large-folder tool
- 2003.14247/main_diagram/main_diagram.drawio +1 -0
- 2003.14247/main_diagram/main_diagram.pdf +0 -0
- 2003.14247/paper_text/intro_method.md +5 -0
- 2006.16668/main_diagram/main_diagram.drawio +1 -0
- 2006.16668/paper_text/intro_method.md +672 -0
- 2007.14166/main_diagram/main_diagram.drawio +0 -0
- 2007.14166/paper_text/intro_method.md +163 -0
- 2008.04115/main_diagram/main_diagram.drawio +0 -0
- 2008.04115/paper_text/intro_method.md +151 -0
- 2009.08257/main_diagram/main_diagram.drawio +1 -0
- 2009.08257/paper_text/intro_method.md +180 -0
- 2012.07791/main_diagram/main_diagram.drawio +1 -0
- 2012.07791/main_diagram/main_diagram.pdf +0 -0
- 2012.07791/paper_text/intro_method.md +68 -0
- 2012.15355/main_diagram/main_diagram.drawio +1 -0
- 2012.15355/main_diagram/main_diagram.pdf +0 -0
- 2012.15355/paper_text/intro_method.md +60 -0
- 2101.09178/main_diagram/main_diagram.drawio +1 -0
- 2101.09178/main_diagram/main_diagram.pdf +0 -0
- 2101.09178/paper_text/intro_method.md +155 -0
- 2104.04466/main_diagram/main_diagram.drawio +1 -0
- 2104.04466/main_diagram/main_diagram.pdf +0 -0
- 2104.04466/paper_text/intro_method.md +24 -0
- 2105.01203/main_diagram/main_diagram.drawio +0 -0
- 2105.01203/paper_text/intro_method.md +21 -0
- 2106.03357/main_diagram/main_diagram.drawio +1 -0
- 2106.03357/main_diagram/main_diagram.pdf +0 -0
- 2106.03357/paper_text/intro_method.md +169 -0
- 2106.03632/main_diagram/main_diagram.drawio +1 -0
- 2106.03632/main_diagram/main_diagram.pdf +0 -0
- 2106.03632/paper_text/intro_method.md +249 -0
- 2107.10140/main_diagram/main_diagram.drawio +1 -0
- 2107.10140/paper_text/intro_method.md +30 -0
- 2108.09376/main_diagram/main_diagram.drawio +0 -0
- 2108.09376/paper_text/intro_method.md +143 -0
- 2111.00162/main_diagram/main_diagram.drawio +1 -0
- 2111.00162/main_diagram/main_diagram.pdf +0 -0
- 2111.00162/paper_text/intro_method.md +81 -0
- 2112.07194/main_diagram/main_diagram.drawio +1 -0
- 2112.07194/main_diagram/main_diagram.pdf +0 -0
- 2112.07194/paper_text/intro_method.md +80 -0
- 2201.02767/main_diagram/main_diagram.drawio +0 -0
- 2201.02767/paper_text/intro_method.md +78 -0
- 2202.09265/main_diagram/main_diagram.drawio +0 -0
- 2202.09265/main_diagram/main_diagram.pdf +0 -0
- 2202.09265/paper_text/intro_method.md +175 -0
- 2203.01228/main_diagram/main_diagram.drawio +1 -0
- 2203.01228/paper_text/intro_method.md +40 -0
- 2203.05238/main_diagram/main_diagram.drawio +0 -0
- 2203.05238/paper_text/intro_method.md +113 -0
2003.14247/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="www.draw.io" modified="2019-11-08T07:03:25.812Z" agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36" version="12.2.0" etag="4AjtzeMTxLncVqPyVjhc" type="device" pages="1"><diagram id="R0hvayW9W_pjeeZjDFjh">7V1bc+I4Fv41ecSl++Wxk97efZjdmtqZqn3cIuCkqSFxFpzpzv76lcAylizbAiRjsiRV3cSAwN85OvdzdIcfXn7+dTN/+/73Ypmv7xBY/rzDX+8QwhRI9Z++8rG/Aon+U1953qyW1bXDhd9W/82ri6C6+r5a5lvrhWVRrMvVm31xUby+5ovSujbfbIof9sueirX9qW/z57x14bfFfN2++q/Vsvy+vyoQP1z/W756/m4+GbLqjl/m5sXVnWy/z5fFj8Yl/Jc7/LApinL/6OXnQ77W6Blc9u/71vFs/cU2+WsZ8oYK9z/n6/fq3qrvVX6Ym90U76/LXL8e3uH7H99XZf7b23yhn/2h6KuufS9f1tXTT8Vr+VCsi83uvfh+96uuP6/nWw06UI/re969YbVeN97A2D36pr7sfaE+YlVqBiH6PdXXzDdl/rPzVmENoGK9vHjJy82Hekn1BsZBBXrFdoQTvL/w40BECGC18vcGBRXPVtxTcc5zvfwBXPWgwtePNR7G+lmB/dZ5rxU7zx/Ny8HxGCCQURsFQepLFg4GrSYOisXNa89BgkTmuiDuwL3IzKDNHBi2eYNQDyRERGANOgyI2jVv+uHTOv/5Rcswdd/567J6+HWhN9hq4exGe3MBIOZgtwPLTfFHbj1DOTjss3zZkn+DODZw8sFkrm3y9bxc/Wkv74Ou+oRfi5X64JpMUCKbTIC4DLkt3jeLvHpfU/QNLIUosxcq55vnvGwttKNlfeNB5GXh5F28b9Yf95v54o+8HGZ8e5do4VspS0h2z5YK6+JV/T2TB7oblYXi7BwooCNSMKIeiYI8XAFRhM3DQ+Vqa0MswSLPkW9DUEYh/0sLsl5hEyKe+6EkzMKRUpRBpvgSUs4wpbAtpSn1KKsImIorwpQPCiRIEsEkbzCFwGR061XgJC+IExzG6SDyQYA5boOZ7358YGKGJV4eB6ZldgI/atU7JMuY9dOSZERk2PpJhPBRDs8VIdyB3wUQDnBzrgvhC2Doc5DYuqxsPAtM9p/3wjwx2+6svy/qBQi+/Tw8qR496/+V/FvMS7OW+hr75fZPtsikwCodezNXn1BJT026N20c726U3t/Rr+rK/L0stpUNqv9cr561/bnOn/RSmgKrxXz9pbpcFprYW+3rvz7/rv/4OiPJDFWmDFVGIEDVv8SythhiGefqVZwIzgQRrR0kPBZsDAMWHuH+xfIPmM8/eN7Ml6vciuA87n7b2/CJ6l/fNqwEfCIicuQQjbfIlMzRgAF+3JBL3iWbpuxyQxtyDDMAEYYUcSAx5faC4e63vSyVMpMASwKEYExKbC8bzxmHAf5ilGALBfmTP9gCv94/XAHlObFjYgKAU4MtraVoa6mIFA7wXs8xQZ7EIl8sfKR9FFQHCRPJPiREBptWNLZ3EEXKyu4LF+zNl0Q2S4AvfA7oS5qLJfGBLtAjTqdwEJaZaJp8cEKgowDH+ixOf3pCfk5fskdGk4GulIMTU7wwzgGO+fm5mwFMbAnqs3wgqaPhNgJWGBFwEgGRq3WkB1Jkknli18YZiM5XV+ssD6DY4ShXvNviR9Dm5ZFc7eNykddAgQliHODfppaeSo074pMO6BNImVeYEmUOcIIFUa4uIRKxCAAFOJbTNFj3pA2O+14sKhmSJJymdTqAsIufCR1eAOLUXlcyW7QL4gtgGJxQTGhnQsfhsXU3JpZmIe1UjrZCrV0v04C
FA5yf88EKygcSb3or+g2nTg9CJca4b4dJxvH8HG9vMqkVnNpvGR3DgyKwYkMXs/Zwap/mUggrY8aH3wUQHk4PmpJpkwCE7VygecujRZqudCL1pRPxbNnIJT66Kx+fX2xmqCIVdiIhbP2FPcUHnhh6FEL5HJ89Nqtu1J/mL6v1xx539dT85W2HBMbavizn34uXeeuyfunuX4duGPjo9qf5FlutHPfXmpRsXD68siavukN15wBab/ARfzXBzHKDwxCt/v5WAf7199VLvlXf9B/5D/XvPxXQr5HimaxVIymFJ87Ee5I5ZzGiz8G8SQw/sTClbvDZUwyQTGb4PNUbqbr6GVxKHRIHY9DK5/LeaNWhiGVrW0lPFicVqXye9dWrYrhXxfpGZ59BJUM0mkrGzOFGJj2hDJiIHYkvdnFchZ/LT+Bp70Adrv1uGLRD4NTMkHuY8C2cCQ9cGM59U69AbDLliHaidLIUNKwKkcRgSV90aQIS8gLMOUkuHE80Iu6aVYxKj1nl48QowtEXo7t+TsQ3TjzfFWPM160sU3HiUbHMM/rjnRjnw+7nrtkGT1O1wVMmfe3f3jb4KHomoOJiMs3OHViOU3/LZKtbXxHLIUFoBa5ezFkKt5r549XgklFqPvpZnSNuF2sSgUkGESVMQEa49EXwPclKEiMERwJqPM4HhPRXC0Dhanal2jPMCcVQYIoAQ93cfNbdX1PR/Z5Uo+/64B394adLxM07HNca9E65z4L6ulKEWT2+77qOENjNB7oqXxC3fMEHRa1Vvqmsn5aGB7sfs0h1N8bT6tfjA3t5Rs2mqvhhxnVJGxGcAwy54L7omq83KopSH6VwZQiRunHSIEIy1pD2mLST4lip2GbGFnhytrvA2rkImd2aGKHhapXx7jigXMVswV/mj/n6V7VXq37Ex6Isi5fOPdpsqKu0xsvPZy1Nsse5UhJZ8Vi+b/N/K2Ezf33WH3a/Gz8GMuitHIBf9O+dr3LA0TDfdj96vdVGEWP/bfP5VsuTZj8lDjbXaX/VAfISq2Jw6S/sb1IbekrXo3RD0ot76eSW3B7HSR9g0Rl0XErY1jw+Dz2G4qE+D/3/NwU3RCnp2v/QDGMcIQdHE/dapCvCHoAVQmSVzErbDKFkUEyPVDlGL16Q5JfZvsjq8TIb3YR2OM860+9m0Nf35s1/RhHbU4iGzDCDVuG2tB0qjjOK9YwDAqHavdwzLXSkym8aPOPuHLjocNxjtBtO3EySroiYdsxZm1qZNk3crz8+wuNjyBK331+MSydT6s5Gabwf1BO8z7wTxIqgM48a5cK7gBU9jhBxMdH76+sjZf3NzMo4mYgFzbr9zU9gQeObBR3AlCaj6t/U3TGQZNY0C3CsF++bP+sqhcG0WgNRXsPVnzQbbqJNmCqfYSwySaUUjHOi5LBtyUPAM8qlIFgZ9RQyU2F0bBJ94GOYzDBUUhhyPV+QmKTIQIJOUWL+0XhZtU167pUq9axWlwAIoAQiczhPDzKEmFACsOQMVmnu7nuSMINQLaccIEyFQGetRoBwQkxCZIQwPRUc8AqwE78L1fF7CSCQnHAuQOV9dK/WD1T/curBnjKnZlFZQAnENBvih+wh4vA9R0OO8ljaebjPaSi666rXyGXVx+vko1IRV5RKj9p+N5grR265Px222n3qOkYxDBslmjNUCmQC/0bOC3sPe8eLJZp7zhJHe+qkracsqCrPONVvmU4sInE8Z3wMOybcXCxixhNHey6G8GSmNHFftGcCznWUkgJ0jB6fosKO6lsPaadWe96MMLuYK6wLIIZ7bWaOXzYICQVxc/eMZ2AoQZVIZfOjOiOmJAz31JyCyuaJ6yHGx3ByKjuxC34xhKejsrs97utX2bdw+FEKyjm+DkqciX4XO5nG7h6i0gqWtMgaGimC3jrAb51BmN7iwG5GMgXOpRJG+X8LfYv3b/lG0bHMN83rvx4uDomxE4XSUD6EOecXYkiVENod50EBNyM4mvT3SCU
CMx2NxYhWh4xEYIeA+MJxB7ccIJty+6A5i6umh06D1IM7uZOeCO8kdNdFgmVi15sLpKBmpHb8viQeEONIcgDPpemI3OAlVrKVHg7hOfEMHu/KDGXEnMPDAUpFSxEQTUnfIGruz9w9pG1LimKPD0U0Ac6XTGKU+hYx0GLiuJUm0NXEwDsIB8XBYBT3WvSzwsw5oGgGeTuBT/0HgKMIGMQeNNCJlYVJh5PxYb2jCYC3f1D0iLRgACIceYj+D4887CJhPVCI9J15iJSBxmu9jaCnQTLVoYciwDVPYpN1ITaOMq+d7vqsdNF37F2oJm8ty0DfIX0R1Xj0MxGvhIzOZAmIM91gLLkyoCShjJ5KR2ddTDLG9fJcYgiwe9pIRELeXOZehdrs9/2R7/p9E4nsVpwfM5ZByQEhew4zQnx8R1rETtS3IVzrTvD7YrPMN76BSN3HKHUdvBSDJsQdQ4X87VI+TyFGlFUEOL4j4153IHT1LETZC8wRh8ST4koFugzwUD8js6N2BScfk9llgE/8SXG3xf6YvB7gg6fFPAqCrcGVCI7LugFu/Ke0hm3vAxF5iCEqp8Y1PU71apQz2ox6pvNqZEBmfh+Ssqa+QNE79SUoTNXP4Yeoo4HEO/C+rqSKzuCjjPXbw9/LzenuMPZZjCNon3AWtFAe7thJJyhjn8d4NMpxMTuUrY2obKZy+gREYhLDb4asA+Imu2aesplEs2+kz4O7EauTWBjanh8U7WxEKlrV46xuxAojlrgkrbprxselFaIsNa2aMUhlQ2ts42w24UrGOq7fHCWYqFgLmiT81Ob9+2oI34JrCG9n85zOkMzuL1UMGjqcKo5MmcpQwRHkfzKZAm2JQjxV4OkkSnc1waeQKNWXeH3cvtnMcJMsQ6aKw5XmxPURjrWB4OLTF8PPtVmGc+VnOWEpakP6YJLXtpjD6m5iNJ9DkKRiw8Ltl/fFaqm4EjwUr9tiN37bnSjTdRDO/qtMONZN2iN0XeEQGt4mTrGHnucddiDLsSNj2l/aSLRYw04gOOKEj8X7Zv1xv5kv/sj1Xh0I7dl1j8d0WrVOB2mcxRCDWU+LIQ7VGpgpbiYN2JYNvipMiJ3SsdOEQ3dM8YoNJ3xzxU6346kj7dCIx6TWQ0o8/Nii26kTfswq3U6Vz/+aIl+MacEQU7FtZkB5mnxTuXeG3YJ0zabYboeVTEUHkAEECeKEC2WUMUm10W4JekJtzGUsi9AZTubpljN3aZVix5D60BdDdfBM14XS5BgzldjSbIw70z9j3LIv5hiHhSLxS0WVboYx4wXrMXsjMkxAnct1oQdJu6g2HXyJBz+cNBnwNGN2CGdHS2DU1hKJ0uMQBtSqjH7cSPOMqHQwC5Qxj8QcBfXY9TMxZlCPgzrEfkU1CuwXL6hJVDQ7KMgldogwamU+hJOpwplEpcDglnE7KUaswYGw25vsiW64sYlm3OK0yEfkADz6LAH4VmzuDPd1iBFnduzVcxBLquE09emLn4wN8Y0Nz2ZDaNT2KHx48XGbt7qbkYO9Q+yIzckIhh/pqIU3hvs/F0eiG0fG40gOxuXI4VKwQXXNfSxlMrR/1adWn26kX4D8uJVleCjeN6t8UzFAK19q5V1/q+4GhnFHL28wTqA7v4pDlEkGqIQQq0/29EaxZPo0uGus69YjjLQChw1SQ+IZZTTaKfAQjdIyVoHfm10Y8Z4DgmBm5/2iwyq/qi1aBagei7IsXjq3ZjNEU0XYX34+ayGSPc63q0VWPJbv2/zf5WY1f33elTMsNZz6rJs7T5RGH3wPv9wFlE/Uw26bJaP5fDcKoxlgw+HxNdTfToW85KrYWg4eCYdNbMbKJsUYXgRR77SUaY29DTdUzJEzNwPlCCad1e1C3Se/pXPhJhN9nEinUgC1WvV7pshkjBAk8oUgz4nTx0hKRUEWKsOr5zxESqZyhGddrzk14e2Lvx0vvNFNeh/BtWbwST3205s9SnVwJ8S+ENzYLgMi9mkC9r4
Vxl9qAsLY4IHGpOPM3vPw8gWIIorPdCccV6RuU2FyRxzXlcSfUz7eDnUIYst6qP9UDjmGeAKnHF94ELyeQ0iA4mGk5/naTbJUCJwhQDGEBAvsrB4+Er7/MwjJJBZASAmoMP515G4VJpHIoJKBQEooEXYUkgQ6LiCkDtYQWpWnd96PkDjDQq2FMWDElH2euBZv6UaSAQ45AIwjwPqPRh74KgLRTABCuIRY0ipk102mXoj61jq/1QcnLqxLd7DxkKyjrm+oGF5YevlySnnyRxsfr4qPCjRdUYYkah/GUAqECodjscdeT3WUMcSfORB6c6WP8CKJeU+ddvL01icLhOKLNy4m5MOby3IEH0Lsji+mkklPVCcdL37Wad0Dqkia6EzDeGqrIm9oPUbS1Sx8bTMlB/gZyxY/c+nj53Q17uTiE7mTIAthq3pEEm/BWyRk1Z+bQquHg+elE/d/L5a5fsX/AA==</diagram></mxfile>
2003.14247/main_diagram/main_diagram.pdf
ADDED
Binary file (92.9 kB)
2003.14247/paper_text/intro_method.md
ADDED
@@ -0,0 +1,5 @@
# Method

We use four popular networks for fair comparison: ConvNet, ResNet12, ResNet18 and WRN, which are used in EGNN [@kim2019edge], MetaOptNet [@lee2019meta], CloserLook [@chen19closerfewshot] and LEO [@rusu2018meta], respectively. ConvNet mainly consists of four Conv-BN-ReLU blocks; the last two blocks also contain a dropout layer [@srivastava2014dropout]. ResNet12 and ResNet18 are the same as those described in [@he2016deep]. They mainly have four blocks, which include one residual block for ResNet12 and two residual blocks for ResNet18, respectively. WRN was first proposed in [@zagoruyko2016wide]. It mainly has three residual blocks, and the depth of the network is set to 28 as in [@rusu2018meta]. The final features of all backbone networks are processed by global average pooling, followed by a fully-connected layer with batch normalization [@ioffe2015batch] to obtain a 128-dimensional instance embedding.

We perform data augmentation before training, such as horizontal flip, random crop, and color jitter (brightness, contrast, and saturation), as mentioned in [@gidaris2018dynamic; @ye2018learning]. We randomly sample 28 meta-task episodes in each iteration for meta-training. The Adam optimizer is used in all experiments with an initial learning rate of $10^{-3}$. We decay the learning rate by 0.1 every 15000 iterations and set the weight decay to $10^{-5}$.
2006.16668/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2020-06-09T05:04:10.020Z" agent="5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36" version="13.1.14" etag="Mj8mvjc8D7Wt8P6TslC_" type="google"><diagram id="aKa0BxgOmKbT9YWBdAXV">7V1dc6M6Ev01rrr3YVyIbz8mmeTefZipqZ3d2r2P2Mg2tdh4gUyc/fUrDMJGLScCN1hxmJqaMQKEULdOt45azcR62Oz/SIPd+lsS0nhiGuF+Yn2dmKY1swz2X1HyWpaYtmGVJas0Cssyciz4Gf2PVoXVjavnKKRZ48I8SeI82jULF8l2Sxd5oyxI0+SledkyiZtP3QUrCgp+LoIYlv4rCvN1Weo7xrH8Txqt1vzJxKjObAJ+cVWQrYMweWkU0X3+lGzzqol/T+ZJnrDib8k2mTiP6zwvXvJuYj6xv8viwukqSVYxDXZRNl0kG1a8yNglT8tgE8VFF/M67g91sIdYjxPrIU2SvPy12T/QuBAR7/yyJU9nztZvn9JtrnKDWd7wK4ifqw6sXj5/5T2aJs/bkBbXk4l1/7KOcvpzFyyKsy9MiVjZOt/E1ellFMcPSZykh3stevjDyrM8Tf5DT85YLlO2sLiD9dOT2B1ll5YnKx0jPjuuWkvTnO7PvjGp+5GpOU02NE9f2SX8BtuupFmpuMU14OWoL5Zbla1PdGXGLwwqBVjVlR97mP2oOlne4db7HU634V0xEtjRIg6yLFo0+5i9aPr6b3ZgTB1++Nfpua/Fqxr10Ss/2kf5yW3s6K+TM8ebigN+TyvplELmw86s5UVDYcxmyXO6qIpMv0KJIF3RqqdNR1mqJ0JzJDLjZSmNgzz61WyGTI7VE34kEXvwUWmY8BtK43uCLpSvVN12OuDEmixB/XxTqKnsCFDTQbHqF1fSNfvzDW7DaPau4ygObpsgDG7neoO7n4F6OiiJujQGGpTEbwrbFQFaeVASYXh7ojbgDUr3lnXE011FfFGwyipSe7i8Jqs3FfGAijxRhtGm8ZSkL0HKfrkx66f7edrQG/e/z4XbeJDkl+wgSuaLGsTf7Q/S4+fZr1Xx/9PTd1R7EAbUXy5k9sBd+HS+xNLACy2EewY0TjTStCUqSTDcPx+I9i4sJMtkEmyK7i3/ZSXfk3SDKp7lcmkupOIJ3bnruHqKx3EVxSMOxy7SmV0Pm/V0zjnm6QvoR4+vvdEXqpr1Z/R5zSea9bft7rloDd3MaRhG21V2gIF7JHD/kWRRHiXbIBaecRGiDO7OC2gw8xXRwEFAA+7wngjtcbtIQpoiySh5zg868KFFYg8qEgW+Sh2hT/H5BK6v5j37uoOtJxpZVaydiVZ9pga1TJDB68llu+KC7I0WG6a8xWeNgHiD4zmCQpZt6Iz8kPD79hzn0Zc/aYDlyN/lOVMYBvYXQonoL1L3jL/ozeYG2pDAhaN6gjeIO08gxTb682/Kx/MU5YPhzxNUQk43c8GVT197MbOR7IUvDta+7AVvsbK98HxkewH5we97oLRsQOZNNW2CwDbZUgExqqIgjlbbQteZyjBH1rovhne0COK76sQmCsPiMVI0OuJVaxXHABOLiNbdAWAi029RDTthCWTlLsAS3Yhbvi6pD5a4wkBzOZi3nue7lqA1plAT4jwf8nuDKYmeDBLv64bhmumubL6oIurK5gg12f0pG6Qrx3UCNMfVEVaSLclKcn8TC66P48RCWT6qEz+MiYUJqcH2ON8Br/VyIBo4b0qcCu0ILRHnidHdqxBnDiK/iQf0vG/H1YPLIILUsDoEVy0J9RuXD4BMajgeRCQKEXEflw/iyq0v3NZrRa3R1hatvTilxyKE7DPLW+dbJtxgmbiEkCSo
cFxA6NGPVA44wfHzId03+vlvy0fVhqP4+aikn3YGw9XdYLjiOnFng+EouucXGwzeZGWD4VjIBgNykN+SR0wuqKwpe55fVBXhFbEXOtRVFk+nU8SHPMofgouj/oLKcXTuO7ajqZ3zbUUcdTFwFFKVo5l7WzyqYckYZo7XcaNmTvvlhlnn3WSimVPdlXCxmeNNVjZzPsE1cxakYMd5UY+ARIxBF0AsyHmOFuMdAQ25AmKhbmTWzWRw7dPXZBxjrS62GcQYikyrG61sNYiBTKdZkAL+RxpssyWrpDAWNUUv6PLHirgyXSSQEcM3iSo7hkHXWxLqs5jJGqPEWpiFQSXWbndz1d9hkK1rU33Z4hRAdX0QHMCuiYfg4hwdb+3ZaseAfiqJ+jaaREFVPUpUIUbx1tKC+EJgumWYABR7zPkz7iueNKOFnKoDTn1uW7toIU8IHPa9rsFCnpiGor8QZA4k2sepn1GThkrotkBlguwRYvYmVZUwDa9ZE9hbiagSGNGKN4U/tiTpmK3bnB8oGyB31ZVNTF/WX7Ai78YxKr0P0m8mJi904A6o/khZG5J+Iyn7tnyIonwwOFkbI7zxpqLSbcnuI0e3eSTA+eMm0/ZALy7TOf05mpCdQ4xKVw9JP1ZdXvhbtj5YGYP8fiEADT1LFcGD1MGf76GHODHphB6Qu+ser342WN24HfEMGrpu33Qkoq3b9mMApLbfde4vUnuW4o7Q1qttM0Pe5LMtE2+wmslyLl5rsyWRiGOIRn/OpuMOOhkYY/raykfVnGNMBjgQ3qjB0C2mDzKDMyyD4Sj68BcbDN5kZYPh2LgGw4FEZbng33/YOS48fYiIcBGe6lQVQ0SE8xWg0XqoiscbMhOyM6YtHFS4A0fXOmPewtYCGpLJdSC3dkPOG9c+fZ03QtCm+8QYar5fN1rZfyMG8ozfgRziWxGbbcyI6Z4xIy8R0xD2hB//nBSflUtpoVkxA6NNoVwcz7bzbCdz8z5nlKhJxHByiXNhm2+MuIvQTYHLvLEItGNn1hFosMP7i0BzxryEk+bKoCexS65uny9izRQj0LouDFogw2FvESDOFcMd+3deOMeqj5K4Io3TNacZu1OYcvYXpsihbkQkjkgcfRrKpl2sgqhsnTOlmmJa3h4zpbqQVzwfk/aJw8uOCbrq8LIhWQl35ATbymfILb8uxpbfmwovc20I2dxD0xeyuyc9BVX1mPTUhRSlXuFljx8sfgmAh3o6VIzwMhdSmr2Gl3148QwaXuaifpZYN8KZOxb6YnLnzKimJboEPSVzMG1b3uSzLRNvwM6M6kIacVyj7NHZHDYzqjt+KrmtfIbMjOqiknzaGQzd9iJDkq9rZlRgMPrKjArw/73MqOAG7MyofGrWZ3jZmHD0HDwNmnDUg4zfaD3eFM+g4WUeZPpuyHp42rP2zEPDMh99ZRwF1qBus7L98JC/5eyNQZGDQtLAQZHeGBTZWkBDBkV6Nx0UybVPY6PROeUoMBq9pRyFVuO9lKMSO4McFOndNLfKN3Loo7cwnYLRNZgXbJExjalx8sfsRYklcbpGy8hedNfnil+/5mvGfP34RPOvGK7jWxIM127tF28swLVfntYQf+3X0yFcVS9tk0Qa6JfIBnGaCRPZ9BZp4Ek2ssfxlzz5cne47muU7YJ8sYYqGMfRLqMKzv7Qa8lE7D3J0oAl89NFt6yLn859vduJ/9VnhIGUgGLQZGc498QFBbwB5l8x/+StKwSIxxVHcGfE7VMhzLcR9yHZzKMt/cCAW3feIIALeck/mFJtV6ADL6GoFiGd+3MZRXW9XVX86yC1nZMsYniSXsego3xIF36SXq/zDF2j11FJwFvYN+J8BFKG+MJuD490tFMgl7/b36cSfMjcfY4hThxfGOKSCURvQ1yBa8rWwa74uYzpvhrr9+8N+1ad2H5L8Zmhqs8Q9IQk4N2/IOa/VxPiEFSggkZlaK8M9c6ui5XBfq8mRGWATM0x7UOGtAJPptPp
I9Cxj5UnAs3VM4UdwcTwFFd8MXYlcJiRyruQ4Simah40a5prQiTfBSB8sR5dTJDfaYhpHE2VmFywOC0JcKlTs6CLCbIuv31nN5m/70cBnaFxJPtrfYl4MFicGWRxRvE0ZyWeeUXxQLpnFE9z9HjiksKQ4pHxQqU/GEa/jmH4ZVHx9ipOouPJNpsyB/Ekyr+sq1k/K248ddQPmTM5k5DkWL4kO0yTQmrHqQbrg/W3JKTFFf8H</diagram></mxfile>
2006.16668/paper_text/intro_method.md
ADDED
@@ -0,0 +1,672 @@
# Introduction

Scaling neural networks brings dramatic quality gains over a wide array of machine learning problems [1, 2, 3, 4, 5, 6]. For computer vision, increasing the model capacity has led to better image classification and detection accuracy for various architectures [7, 8, 9]. Similarly, in natural language processing, scaling Transformers [10] yielded consistent gains on language understanding tasks [4, 11, 12], cross-lingual downstream transfer [4, 13] and (massively) multilingual neural machine translation [14, 15, 16]. This general tendency motivated recent studies to scrutinize the factors playing a critical role in the success of scaling [17, 18, 19, 20, 3], including the amount of training data, the model size, and the computation being utilized. While the final model quality was found to have a power-law relationship with the amount of data, compute and model size [18, 3], the significant quality gains brought by larger models also come with various practical challenges. Among the most important of these is *training efficiency*, which we define as the amount of compute and training time used to achieve a model quality superior to the best existing system; it is oftentimes left out.

![Figure 1](_page_1_Figure_0.jpeg)

Figure 1: Multilingual translation quality (average ∆BLEU compared to bilingual baselines) improves as the MoE model size grows up to 600B parameters, while the end-to-end training cost (in TPU v3 core-years) increases only sublinearly. Increasing the model size from 37.5B to 600B (16x) increases the computation cost from 6 to 22 core-years (3.6x). The 600B-parameter model that achieved the best translation quality was trained with 2048 TPU v3 cores for 4 days, a total cost of 22 TPU v3 core-years. In contrast, training all 100 bilingual baseline models would have required 29 TPU v3 core-years. Our best-quality dense single Transformer model (2.3B parameters), achieving a ∆BLEU of 6.1, was trained with GPipe [15] on 2048 TPU v3 cores for 6 weeks, a total of 235.5 TPU v3 core-years.

Here we enumerate the major practical challenges faced when training massive-scale models that are orders of magnitude larger than the memory capacity of a single accelerator (e.g., a GPU or TPU).

**Architecture-specific model parallelism support** There is a lack of support for efficient model parallelism algorithms under commonly used deep learning frameworks such as TensorFlow [21] and PyTorch [22]. Naive model parallelism with graph partitioning is supported, but it leads to severe under-utilization due to the sequential dependencies of the network and gradient-based optimization. To scale up existing models efficiently, users typically need to invest significant engineering work, for example, migrating the model code to special frameworks [23, 15].

**Super-linear scaling of computation cost vs. model size** Straightforward scaling of the model size by increasing the depth or width [6, 15] generally results in at least a linear increase in training step time. Model parallelism, splitting layer weights and computation across multiple devices, generally becomes necessary, leading to network communication overhead and device under-utilization. The under-utilization stems from imbalanced assignment and the sequential dependencies of the underlying neural network. This super-linear relationship between computation cost and model size cannot be resolved by simply using more devices, making the training of massive models impractical.

**Infrastructure scalability for giant model representation** A naive graph representation for a massive-scale model distributed across thousands of devices may become a bottleneck for both deep learning frameworks and their optimizing compilers. For example, adding D times more layers with inter-op partitioning, or increasing model dimensions with intra-op partitioning across D devices, may result in a graph with O(D) nodes. Communication channels between devices could further increase the graph size by up to O(D<sup>2</sup>) (e.g., partitioning gather or transpose). Such an increase in graph size would result in an infeasible amount of graph-building and compilation time for massive-scale models.

**Non-trivial efforts for implementing partitioning strategies** Partitioning a model to run efficiently on many devices is challenging, as it requires coordinating communication across devices. For graph-level partitioning, sophisticated algorithms [15, 24] are needed to reduce the overhead introduced by the sequential dependencies between different partitions of the graph allocated to different devices. For operator-level parallelism, different partitioned operators require different communication patterns, depending on their semantics, e.g., whether partial results need to be accumulated or data shards rearranged. In our experience, manually handling these issues in the model requires a substantial amount of effort, given that frameworks like TensorFlow have a large set of operators with ad-hoc semantics. In all cases, implementing model partitioning is a particular burden for practitioners, as changing the model architecture requires changing the underlying device communication, causing a ripple effect.

In this paper, we demonstrate how to overcome these challenges by building a 600-billion-parameter sequence-to-sequence Transformer model with Sparsely-Gated Mixture-of-Experts layers, which enjoys sub-linear computation cost and O(1) compilation time. We trained this model with 2048 TPU v3 devices for 4 days on a multilingual machine translation task and achieved far superior translation quality compared to prior art when translating 100 languages to English with a single non-ensemble model. We conducted experiments with various model sizes and found that translation quality increases as the model gets bigger, yet the total wall-clock time to train increases only sub-linearly with respect to the model size, as illustrated in Figure 1. To build such an extremely large model, we made the following key design choices.

**Sub-linear Scaling** First, the model architecture should be designed to keep the computation and communication requirements sublinear in the model capacity. Conditional computation [25, 16, 26, 27] satisfies training and inference efficiency by activating a sub-network on a per-input basis. Scaling the capacity of RNN-based machine translation and language models by adding Position-wise Sparsely Gated Mixture-of-Experts (MoE) layers [16] achieved state-of-the-art results with sublinear computation cost. We therefore present our approach to extending the Transformer architecture with MoE layers in Section 2.

**The Power of Abstraction** Second, the model description should be separated from the partitioning implementation and optimization. This separation of concerns lets model developers focus on the network architecture and flexibly change the partitioning strategy, while the underlying system applies semantic-preserving transformations and implements efficient parallel execution. To this end we propose a module, GShard, which only requires the user to annotate a few critical tensors in the model with partitioning policies. It consists of a set of simple APIs for annotations and a compiler extension in XLA [28] for automatic parallelization. Model developers write models as if there were a single device with huge memory and computation capacity, and the compiler automatically partitions the computation for the target based on the annotations and its own heuristics. We provide more annotation examples in Section 3.2.

**Scalable Compilers** Third, the system infrastructure, including the computation representation and compilation, must scale with thousands of devices for parallel execution. For example, Figure 2 illustrates two different ways of partitioning a dot-product operation across 4 devices (color-coded). With the usual MPMD (Multiple Program Multiple Data) approach in Figure 2a, scaling becomes challenging since the number of nodes in the graph increases linearly with the number of devices. Instead, we developed a compiler technique for SPMD (Single Program Multiple Data) transformation that generates a single program to run on all devices, keeping the compilation time constant independent of the number of devices, as illustrated in Figure 2b. We discuss our SPMD framework in more detail in Section 3.3.
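As a concrete illustration of partitioning a Dot along the contracting dimension, the following NumPy sketch simulates what each device computes and the AllReduce that combines the partial results. The device loop stands in for the single program that runs on every shard; all names here are illustrative, not part of GShard's API.

```python
import numpy as np

def partitioned_dot(x, w, num_devices):
    """Simulate partitioning a Dot ([M, K] x [K, N] = [M, N]) along the
    contracting dimension K across `num_devices` devices.

    Each "device" holds one shard of K, computes a local [M, N] partial
    product, and an AllReduce (here: a plain sum) globally combines the
    partials into the full result on every device.
    """
    k = x.shape[1]
    assert k % num_devices == 0, "K must divide evenly across devices"
    shard = k // num_devices
    # Local partial products: conceptually the same program on every shard.
    partials = [
        x[:, d * shard:(d + 1) * shard] @ w[d * shard:(d + 1) * shard, :]
        for d in range(num_devices)
    ]
    # AllReduce: every device ends up with the globally combined [M, N].
    return sum(partials)

# The sharded computation matches the unpartitioned one.
x = np.random.rand(8, 16)   # [M, K]
w = np.random.rand(16, 4)   # [K, N]
assert np.allclose(partitioned_dot(x, w, 4), x @ w)
```

Because each shard runs the identical program on a different slice, the graph size stays constant as `num_devices` grows, which is the point of the SPMD transformation.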

The rest of the paper is organized as follows. Section 2 describes our Transformer architecture with the Sparsely-Gated MoE layer in more detail. Section 3 introduces our development module GShard. Section 4 demonstrates the application of our mixture-of-experts models to the multilingual machine translation task over 100 language pairs. Section 5 presents performance and memory measurements of our implementation. Section 6 discusses related work.

![Figure 2](_page_3_Figure_8.jpeg)

Figure 2: Comparison between MPMD and our proposed SPMD partitioning of a Dot operator ([M, K] × [K, N] = [M, N]) across 4 devices. In this example, both operands are partitioned along the contracting dimension K: each device computes a local result and globally combines it with an AllReduce. MPMD partitioning generates separate operators for each device, limiting its scalability, whereas SPMD partitioning generates one program to run on all devices. Note that the compilation time with our SPMD partitioning is independent of the number of devices being used.

# Method

The Transformer [10] architecture has been widely used for natural language processing and has become the de-facto standard for many sequence-to-sequence tasks, such as machine translation. Transformer makes use of two computational blocks, an encoder and a decoder, both implemented by stacking multiple Transformer layers. A Transformer encoder layer consists of two consecutive layers, namely a self-attention layer followed by a position-wise feed-forward layer. The decoder adds a third cross-attention layer, which attends over the encoder output. We sparsely scale Transformer with conditional computation by replacing every other feed-forward layer with a Position-wise Mixture-of-Experts (MoE) layer [16] with a variant of top-2 gating in both the encoder and the decoder (Figure 3). We vary the number of Transformer layers and the number of experts per MoE layer in order to scale the model capacity.

Each training example consists of a pair of sequences of subword tokens. Each token activates a sub-network of the MoE Transformer during both training and inference. The size of the sub-network is roughly independent of the number of experts per MoE layer, allowing sublinear scaling of the computation cost as described in the previous section. Computation complexity is further analyzed in Section 3.1 and training performance in Section 5.

The Mixture-of-Experts (MoE) layer used in our model is based on [16], with variations in the sparse gating function and the auxiliary loss being used. A MoE layer for Transformer consists of E feed-forward networks $\text{FFN}_1 \dots \text{FFN}_E$:

$$\mathcal{G}_{s,E} = \text{GATE}(x_s) \tag{1}$$

$$\text{FFN}_e(x_s) = wo_e \cdot \text{ReLU}(wi_e \cdot x_s) \tag{2}$$

$$y_s = \sum_{e=1}^{E} \mathcal{G}_{s,e} \cdot \text{FFN}_e(x_s) \tag{3}$$

![Figure 3](_page_4_Figure_4.jpeg)

Figure 3: Illustration of scaling the Transformer Encoder with MoE layers. The MoE layer replaces every other Transformer feed-forward layer; the decoder modification is similar. (a) The encoder of a standard Transformer model is a stack of self-attention and feed-forward layers interleaved with residual connections and layer normalization. (b) By replacing every other feed-forward layer with a MoE layer, we get the model structure of the MoE Transformer Encoder. (c) When scaling to multiple devices, the MoE layer is sharded across devices, while all other layers are replicated.

where $x_s$ is the input token to the MoE layer, and $wi_e$ and $wo_e$ are the input and output projection matrices of the $e$-th feed-forward network (an expert). The vector $\mathcal{G}_{s,E}$ is computed by a gating network. It has one non-negative entry per expert, most of which are zeros, meaning the token is not dispatched to those experts; we let each token be dispatched to at most two experts. The corresponding non-zero entries represent how much each selected expert contributes to the final network output. Every expert $\text{FFN}_e$ applies to $x_s$ a fully-connected 2-layer network with a ReLU [29] activation function. The output of the MoE layer, $y_s$, is the weighted average of the outputs from the selected experts.
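A minimal NumPy sketch of Eqs. (1)-(3) may make the dispatch concrete. This is an illustrative dense simulation, not the paper's sharded implementation: the gate here simply keeps each token's top-2 softmax values and zeroes the rest (without the load-balancing machinery described below), and all weight names (`wi`, `wo`, `wg`) are assumed shapes for the sketch.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_layer(xs, wi, wo, wg, k=2):
    """Position-wise MoE layer forward pass (Eqs. 1-3), densely simulated.

    xs: [S, M] tokens; wi: [E, M, H] and wo: [E, H, M] expert weights;
    wg: [M, E] gating weights. Each token keeps only its top-k gate
    values, so G_{s,e} is zero for every non-selected expert.
    """
    gates = softmax(xs @ wg)                  # Eq. (1): G_{s,E} = GATE(x_s)
    topk = np.argsort(gates, axis=-1)[:, -k:] # indices of the k largest gates
    mask = np.zeros_like(gates)
    np.put_along_axis(mask, topk, 1.0, axis=-1)
    gates = gates * mask                      # sparse dispatch weights
    ys = np.zeros_like(xs)
    for e in range(wi.shape[0]):
        ffn_e = np.maximum(xs @ wi[e], 0.0) @ wo[e]  # Eq. (2): FFN_e(x_s)
        ys += gates[:, e:e + 1] * ffn_e              # Eq. (3): weighted sum
    return ys
```

In the real model only the two selected experts run per token, which is what keeps the per-token cost roughly independent of E; the loop over all experts here is purely for clarity.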
|
| 56 |
+
|
| 57 |
+
The gating function GATE(·) is critical to the MoE layer, which is modeled by a softmax activation function to indicate the weights of each expert in processing incoming tokens. In other words, to indicate how good an expert is at processing the incoming token. Furthermore, the gating function must satisfy two goals:
|
| 58 |
+
|
| 59 |
+
- Balanced load It is desirable that the MoE layer to sparsely activate the experts for a given token. A naive solution would be just to choose the top-k experts according to the softmax probability distribution. However, it is known that this approach leads to load imbalance problem for training [16]: most tokens seen during training would have been dispatched to a small number of experts, amassing a very large input buffer for only a few (busy) experts leaving other experts untrained, slowing down the training. Meanwhile many other experts do not get sufficiently trained at all. A better design of the gating function would distribute processing burden more evenly across all experts.
|
| 60 |
+
- Efficiency at scale It would be rather trivial to achieve a balanced load if the gating function is done sequentially. The computation cost for the gating function alone is at least O(NE) for all N tokens in the input batch given E experts. However, in our study, N is in the order of millions and E is in the order of thousands, a sequential implementation of the gating function would keep most of the computational resources idle most of the time. Therefore, we need an efficient parallel implementation of the gating function to leverage many devices.
|
| 61 |
+
|
| 62 |
+
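The sparse combination in Equations (1)-(3) can be sketched in a few lines of NumPy. The dimensions and weights below are toy placeholders, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
E, M, H = 4, 8, 16               # experts, model dim, hidden dim (toy sizes)

x_s = rng.normal(size=M)         # input token representation
wg = rng.normal(size=(M, E))     # gating weights
wi = rng.normal(size=(E, M, H))  # per-expert input projections
wo = rng.normal(size=(E, H, M))  # per-expert output projections

# Eq. (1): gating network, a softmax over experts.
logits = x_s @ wg
gates = np.exp(logits) / np.exp(logits).sum()

# Sparsify: keep only the top-2 experts and renormalize their weights.
top2 = np.argsort(gates)[-2:]
G_s = np.zeros(E)
G_s[top2] = gates[top2] / gates[top2].sum()

# Eq. (2): each expert is a fully-connected 2-layer network with ReLU.
def ffn(e, x):
    return np.maximum(x @ wi[e], 0.0) @ wo[e]

# Eq. (3): output is the gate-weighted average of the selected experts.
y_s = sum(G_s[e] * ffn(e, x_s) for e in top2)

assert y_s.shape == (M,)
assert np.count_nonzero(G_s) == 2 and np.isclose(G_s.sum(), 1.0)
```

Only the two selected entries of $\mathcal{G}_{s,E}$ are non-zero, so only two of the E experts run for this token.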
We designed the following mechanisms in the gating function GATE(·) to meet the above requirements (details illustrated in Algorithm 1):
- Expert capacity To ensure the load is balanced, we enforce that the number of tokens processed by one expert stays below some uniform threshold, which we define as the expert capacity. Assuming that the total number of tokens in a training batch is N, and each token is dispatched to at most two experts, the expert capacity is set to O(N/E). GATE(·) keeps a running counter $c_e$ of how many tokens have been dispatched to each expert. When both experts selected by a token have already exceeded their capacity, the token is considered an *overflowed* token, and $\mathcal{G}_{s,E}$ degenerates into a zero vector. Such tokens have their representation $x_s$ passed on to the next layer via residual connections.

- Local group dispatching GATE(·) partitions all tokens in a training batch evenly into G groups, i.e., each group contains S = N/G tokens. All groups are processed independently in parallel. Each group is given a fractional capacity of each expert, 2N/(G · E), and ensures that at most that many of its tokens are dispatched to an expert. In this way, expert capacity is still enforced and the overall load is balanced.

- Auxiliary loss It is important that the gating function does not always choose the same few experts, as this would lead to a capacity overflow for those experts and under-utilization of the remaining ones. Following [16], we define an auxiliary loss term $\ell_{aux}$ to enforce this constraint. It is added to the overall loss function of the model, $L = \ell_{nll} + k \cdot \ell_{aux}$, with a constant multiplier k. The particular form of $\ell_{aux}$ in line (13) of Algorithm 1 is motivated by the following consideration: the term $c_e/S$ represents the fraction of input routed to each expert, and we want to minimize the mean square of $c_e/S$. But because $c_e$ is derived from a top-2 operation and is not differentiable, we use the mean gates per expert $m_e$ as a differentiable approximation and replace $(c_e/S)^2$ with $m_e(c_e/S)$, which can now be optimized with gradient descent.

- Random routing Intuitively, because $y_s$ is a weighted average of what the selected experts return, if the weight of the 2nd expert is very small, we can simply ignore the 2nd expert to conserve overall expert capacity. Hence, in addition to respecting the expert capacity constraint, GATE(·) dispatches to the 2nd-best expert only with a probability proportional to its weight $g_2$.
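The substitution of $m_e(c_e/S)$ for $(c_e/S)^2$ in the auxiliary loss can be checked numerically on toy data; for simplicity, the sketch below counts only top-1 dispatch decisions:

```python
import numpy as np

rng = np.random.default_rng(0)
S, E = 32, 4                      # tokens per group, experts (toy sizes)
logits = rng.normal(size=(S, E))
g = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # gates

# c_e: how many tokens pick expert e as their top choice (not differentiable).
c = np.bincount(g.argmax(axis=1), minlength=E)
# m_e: mean gate per expert (differentiable).
m = g.mean(axis=0)

# Non-differentiable target vs. the differentiable surrogate used as l_aux.
target = np.mean((c / S) ** 2)
l_aux = np.mean((c / S) * m)

# In the perfectly balanced case, both expressions reduce to 1/E**2.
c_bal = np.full(E, S // E)
m_bal = np.full(E, 1 / E)
assert np.isclose(np.mean((c_bal / S) * m_bal), 1 / E**2)
assert np.isclose(np.mean((c_bal / S) ** 2), 1 / E**2)
```

Gradients flow into $m_e$ through the softmax, so minimizing the surrogate pushes the mean gates, and hence the routing, toward balance.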
This section describes the implementation of the model in Section 2 that runs efficiently on a cluster of TPU devices.
The first step is to express the model in terms of linear algebra operations, for which our software stack (TensorFlow [21]) and the hardware platform (TPU) are highly tailored and optimized. Most of the model is straightforward to code up in terms of linear algebra in the same way as the original Transformer. However, it requires some effort to express the MoE layer, in particular the GATE(·) function presented in Algorithm 1, due to its sequential nature, and we describe the details in Section 3.1.
Next, we annotate the linear algebra computation to express parallelism. Each tensor in the computation can be annotated for replication or distribution across a cluster of devices using the sharding APIs in Section 3.2. Using sharding annotations enables separation of concerns between the model description and the efficient parallel implementation, and allows users to flexibly express diverse parallelization strategies. For example, (1) the attention layer is parallelized by splitting along the batch dimension and replicating its weights to all devices. On the other hand, (2) experts in the MoE layer cannot feasibly be replicated on all devices due to their sheer size, and the only viable strategy is to shard them across many devices. Furthermore, the whole model alternates between these two modes (1)-(2). Using annotations frees model developers from the system optimization effort and avoids baking the parallel implementation and low-level details into the model code.
Finally, the compiler infrastructure takes a (partially) annotated linear algebra computation and produces an efficient parallel program that scales to thousands of devices. As will be described in Section 3.3, the compiler applies an SPMD (Single Program Multiple Data) partitioning transformation to express per-device computation, inserts necessary cross-device communication, handles irregular patterns such as uneven partitions, and finally generates a single program to be launched on all devices for parallel execution.

**Algorithm 1:** Group-level top-2 gating with auxiliary loss.

```
Data: x_S, a group of tokens of size S
Data: C, Expert capacity allocated to this group
Result: \mathcal{G}_{S,E}, group combine weights
Result: \ell_{aux}, group auxiliary loss
(1)  c_E \leftarrow 0                              \triangleright gating decisions per expert
(2)  g_{S,E} \leftarrow softmax(wg \cdot x_S)      \triangleright gates per token per expert, wg are trainable weights
(3)  m_E \leftarrow \frac{1}{S} \sum_{s=1}^{S} g_{s,E}   \triangleright mean gates per expert
(4)  for s \leftarrow 1 to S do
(5)      g1, e1, g2, e2 = top_2(g_{s,E})           \triangleright top-2 gates and expert indices
(6)      g1 \leftarrow g1/(g1+g2)                  \triangleright normalized g1
(7)      c \leftarrow c_{e1}                       \triangleright position in e1 expert buffer
(8)      if c < C then
(9)          \mathcal{G}_{s,e1} \leftarrow g1      \triangleright e1 expert combine weight for x_s
(10)         c_{e1} \leftarrow c + 1               \triangleright incrementing e1 expert decisions count
(11)     end
(12) end
(13) \ell_{aux} = \frac{1}{E} \sum_{e=1}^{E} \frac{c_e}{S} \cdot m_e
(14) for s \leftarrow 1 to S do
(15)     g1, e1, g2, e2 = top_2(g_{s,E})           \triangleright top-2 gates and expert indices
(16)     g2 \leftarrow g2/(g1+g2)                  \triangleright normalized g2
(17)     rnd \leftarrow uniform(0,1)               \triangleright dispatch to second-best expert with probability \propto 2 \cdot g2
(18)     c \leftarrow c_{e2}                       \triangleright position in e2 expert buffer
(19)     if c < C \wedge 2 \cdot g2 > rnd then
(20)         \mathcal{G}_{s,e2} \leftarrow g2      \triangleright e2 expert combine weight for x_s
(21)         c_{e2} \leftarrow c + 1
(22)     end
(23) end
```
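Algorithm 1's sequential gating loop can be rendered as a toy NumPy function, a sketch for intuition rather than the parallel implementation described in Section 3:

```python
import numpy as np

def top2_gating(gates, C, rng):
    """Toy, sequential rendition of group-level top-2 gating:
    returns combine weights G[S, E] and the auxiliary loss."""
    S, E = gates.shape
    c = np.zeros(E, dtype=int)           # per-expert dispatch counters
    G = np.zeros((S, E))
    m = gates.mean(axis=0)               # mean gates per expert
    order = np.argsort(gates, axis=1)    # ascending sort of gates
    e1, e2 = order[:, -1], order[:, -2]  # top-2 expert indices per token
    g1 = gates[np.arange(S), e1]
    g2 = gates[np.arange(S), e2]
    denom = g1 + g2
    for s in range(S):                   # first pass: best expert
        if c[e1[s]] < C:
            G[s, e1[s]] = g1[s] / denom[s]
            c[e1[s]] += 1
    l_aux = np.mean((c / S) * m)         # uses counts from the first pass
    for s in range(S):                   # second pass: random routing
        if c[e2[s]] < C and 2 * (g2[s] / denom[s]) > rng.uniform():
            G[s, e2[s]] = g2[s] / denom[s]
            c[e2[s]] += 1
    return G, l_aux

rng = np.random.default_rng(0)
S, E, C = 16, 4, 8                       # C = 2S/E for this toy group
logits = rng.normal(size=(S, E))
gates = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
G, l_aux = top2_gating(gates, C, rng)

assert (np.count_nonzero(G, axis=1) <= 2).all()  # at most two experts/token
assert ((G > 0).sum(axis=0) <= C).all()          # capacity respected
```

Overflowed tokens simply end up with an all-zero row in G, matching the residual-connection fallback described earlier.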
Our model implementation (Algorithm 2) views the whole accelerator cluster as a single device and expresses its core mathematical algorithm in a few tensor operations independent of the concrete setup of the cluster. Einstein summation notation [30] (i.e., tf.einsum) is a powerful construct to concisely express the model and we use it extensively in our implementation. The softmax gates computation is trivially expressed by one einsum followed by the softmax function. Dispatching of inputs to selected experts is expressed by a single einsum between the dispatching mask and the input. All $FFN_e$ weights are combined into single 3-D tensors wi and wo, and the computation by $FFN_1 \dots FFN_E$ is expressed using 3 operators (two einsums and one relu). Finally, taking the weighted average of all expert outputs into the final output is expressed in another einsum.
Top2Gating in Algorithm 2 computes the union of all group-local $\mathcal{G}_{S,E}$ described in Algorithm 1. combine\_weights is a 4-D tensor with shape [G, S, E, C]. The value combine\_weights[g, s, e, c] is non-zero when the input token s in group g is sent to the input buffer of expert e at buffer position c. For a specific g and s, the slice combine\_weights[g, s, :, :] contains at most two non-zero values. The binary dispatch\_mask is produced from combine\_weights by simply setting all non-zero values to 1.
We need to choose the number of groups G and the number of experts E properly so that the algorithm can scale to a cluster with D devices. It is worthwhile to analyze its overall computation complexity (the total number of floating point operations) for a training step given a training batch of N tokens.
**Algorithm 2:** Forward pass of the position-wise MoE layer. The underscored letter (e.g., **G** and **E**) indicates the dimension along which a tensor will be partitioned.
```
gates = softmax(einsum("GSM,ME->GSE", inputs, wg))
combine_weights, dispatch_mask = Top2Gating(gates)
dispatched_expert_inputs = einsum(
    "GSEC,GSM->EGCM", dispatch_mask, reshaped_inputs)
h = einsum("EGCM,EMH->EGCH", dispatched_expert_inputs, wi)
h = relu(h)
expert_outputs = einsum("EGCH,EHM->GECM", h, wo)
outputs = einsum("GSEC,GECM->GSM", combine_weights, expert_outputs)
```
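As a toy sanity check of this einsum formulation, the following NumPy sketch runs the dispatch/FFN/combine chain on a hand-built top-1 dispatch mask (all shapes and the routing rule are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
G_, S, E, C, M, H = 2, 4, 4, 2, 8, 16  # groups, tokens, experts, capacity, dims

inputs = rng.normal(size=(G_, S, M))
wi = rng.normal(size=(E, M, H))
wo = rng.normal(size=(E, H, M))

# Toy one-hot dispatch: token s goes to expert s % E, buffer slot s // E.
dispatch_mask = np.zeros((G_, S, E, C))
for g in range(G_):
    for s in range(S):
        dispatch_mask[g, s, s % E, s // E] = 1.0
combine_weights = dispatch_mask  # combine weight 1.0 for this top-1 routing

dispatched = np.einsum("GSEC,GSM->EGCM", dispatch_mask, inputs)
h = np.maximum(np.einsum("EGCM,EMH->EGCH", dispatched, wi), 0.0)
expert_outputs = np.einsum("EGCH,EHM->GECM", h, wo)
outputs = np.einsum("GSEC,GECM->GSM", combine_weights, expert_outputs)

assert outputs.shape == (G_, S, M)
# Token (0, 1) was processed by expert 1 alone; check it end to end.
y = np.maximum(inputs[0, 1] @ wi[1], 0.0) @ wo[1]
assert np.allclose(outputs[0, 1], y)
```

The same subscript strings drive the real implementation; only the gating that produces the mask differs.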
We analyze the computation complexity of Algorithm 2 as it scales with the number of devices D under the following assumptions: a) the number of tokens per device $\frac{N}{D} = O(1)$ is constant<sup>1</sup>; b) G = O(D), S = O(1) and N = O(GS) = O(D); c) M = O(1), H = O(1); d) E = O(D); and e) $C = O(\frac{2S}{E}) = O(\frac{1}{D})$, where D < S so that C is a positive integer<sup>2</sup>.
The total number of floating point operations FLOPS in Algorithm 2:
$$\begin{split} FLOPS_{\text{Softmax}} + FLOPS_{\text{Top2Gating}} + FLOPS_{\text{Dispatch|Combine}} + FLOPS_{\text{FFN}} = \\ O(GSME) & + O(GSEC) & + O(GSMEC) & + O(EGCHM) = \\ O(D \cdot 1 \cdot 1 \cdot D) + O(D \cdot 1 \cdot D \cdot \frac{1}{D}) + O(D \cdot 1 \cdot 1 \cdot D \cdot \frac{1}{D}) & + O(D \cdot D \cdot \frac{1}{D} \cdot 1 \cdot 1) = \\ O(D^2) & + O(D) & + O(D) & + O(D) \end{split}$$
and consequently per-device FLOPS/D = O(D) + O(1) + O(1) + O(1). The per-device softmax complexity $FLOPS_{\text{Softmax}}/D = O(D)$ is linear in the number of devices, but in practice it is dominated by the other terms since D << H and D < S. As a result, FLOPS/D can be considered O(1), satisfying the sublinear scaling design requirement. Section 5 verifies this analysis empirically.
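Under assumptions a)-e), the scaling of each per-device term can be checked with simple arithmetic (constants dropped, sizes symbolic):

```python
# Per-device FLOPS terms under the scaling assumptions:
# G = D, S = 1, M = 1, H = 1, E = D, C = 2S/E = 2/D.
def per_device_terms(D):
    G, S, M, H, E, C = D, 1, 1, 1, D, 2 / D
    softmax = G * S * M * E / D        # O(D): grows with device count
    gating = G * S * E * C / D         # O(1)
    dispatch = G * S * M * E * C / D   # O(1)
    ffn = E * G * C * H * M / D        # O(1)
    return softmax, gating, dispatch, ffn

t8, t512 = per_device_terms(8), per_device_terms(512)
assert t512[0] / t8[0] == 512 / 8                # softmax term linear in D
assert all(t512[i] == t8[i] for i in (1, 2, 3))  # other terms constant
```

Scaling D from 8 to 512 grows only the (small) softmax term; the gating, dispatch, and FFN terms stay flat per device.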
In addition to the computation cost, we have non-constant cross-device communication cost, but it grows at a modest rate $O(\sqrt{D})$ when we increase D (Section 5).
Due to the daunting size and computation demand of tensors in Algorithm 1, we have to parallelize the algorithm over many devices. One immediate solution for how to shard each tensor in the algorithm is illustrated by the underscored letters in Algorithm 2. The *sharding* API in GShard allows us to annotate tensors in the program to selectively specify how they should be partitioned. This information is propagated to the compiler so that the compiler can automatically apply transformations for parallel execution. We use the following APIs in TensorFlow/Lingvo [31] in our work.
- **replicate(tensor)** annotates tensor to be replicated across partitions, and returns the annotated tensor. This is often used for the non-MoE layers in our model to replicate the weights.
- **split(tensor, split\_dimension, num\_partitions)** annotates tensor to be partitioned along split\_dimension, and returns the annotated tensor. Partition *i* is placed on the *i*'th device, and num\_partitions must not exceed the number of devices on the system.
- **shard(tensor, device\_assignment)** generalizes split() to allow partitioning multiple dimensions and specifying the placement of each partition. Appendix A.3 describes this API with more details.
<sup>1</sup> This is oftentimes necessary in practice to avoid overflowing device memory.
<sup>2</sup> Scaling D > S would require a different use of fractional expert capacity.
Note that invocations of split or shard only add annotations and do not change the logical shapes in the user program. The user still works with full shapes and does not need to worry about issues like uneven partitioning.
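The contract that annotations leave logical shapes untouched can be mimicked with a minimal mock (a hypothetical wrapper for illustration, not the actual GShard/Lingvo API):

```python
from dataclasses import dataclass, field

@dataclass
class AnnotatedTensor:
    """Mock tensor that records sharding annotations without
    altering the logical shape the user program sees."""
    shape: tuple
    sharding: dict = field(default_factory=dict)

def split(tensor, split_dimension, num_partitions):
    # Annotation only: the logical shape is unchanged; the compiler
    # later derives the per-device shape shape[dim] / num_partitions.
    tensor.sharding = {"dim": split_dimension, "parts": num_partitions}
    return tensor

def replicate(tensor):
    tensor.sharding = {"replicated": True}
    return tensor

inputs = AnnotatedTensor(shape=(64, 512, 1024))  # [G, S, M], toy sizes
inputs = split(inputs, 0, 64)
wg = replicate(AnnotatedTensor(shape=(1024, 2048)))

assert inputs.shape == (64, 512, 1024)           # logical shape untouched
assert inputs.sharding == {"dim": 0, "parts": 64}
```

The user program keeps manipulating full shapes; only the compiler consumes the attached sharding metadata.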
GShard is general in the sense that the simple APIs apply to all dimensions in the same way. The sharded dimensions could include batch (data-parallelism), feature, expert, and even spatial dimensions in image models, depending on the use case. Also, since the sharding annotation is per tensor, different parts of the model can be partitioned in different ways. This flexibility enables us to partition the giant MoE weights and switch partition modes between MoE and non-MoE layers, and also supports use cases beyond this paper, e.g., spatial partitioning of large images [32] (Appendix A.4).
With the above sharding APIs, we can express the sharding strategy shown in Algorithm 2 as below, where D is the device count. The input tensor is split along the first dimension and the gating weight tensor is replicated. After computing the dispatched expert inputs, we apply split to change the sharding from the group (G) dimension to the expert (E) dimension.
```
1    # Partition inputs along group (G) dim.
2  + inputs = split(inputs, 0, D)
3    # Replicate the gating weights.
4  + wg = replicate(wg)
5    gates = softmax(einsum("GSM,ME->GSE", inputs, wg))
6    combine_weights, dispatch_mask = Top2Gating(gates)
7    dispatched_expert_inputs = einsum(
8        "GSEC,GSM->EGCM", dispatch_mask, reshaped_inputs)
9    # Partition dispatched inputs along expert (E) dim.
10 + dispatched_expert_inputs = split(dispatched_expert_inputs, 0, D)
11   h = einsum("EGCM,EMH->EGCH", dispatched_expert_inputs, wi)
12   ...
```
Per-tensor sharding assignment As shown in the example above, users are not required to annotate every tensor in the program. Annotations are typically only required on a few important operators like Einsums in our model, and the compiler uses its own heuristics to infer sharding for the rest of the tensors<sup>3</sup>. For example, since the input tensor is partitioned along G and the weight tensor is replicated, the compiler chooses to partition the einsum output along the same G dimension (Line 5). Similarly, since both inputs are partitioned along the G dimension for the input dispatch einsum (Line 7), the output sharding is inferred to be split along the G dimension, and then we add the split annotation on the output to reshard along the E dimension. Some annotations in the above example could also be determined by the compiler (e.g., replicate(wg)), but it is recommended to annotate the initial input and final output tensors of the computation.
The compiler currently uses an iterative data-flow analysis to propagate sharding information from an operator to its neighbors (operands and users), starting from the user-annotated operators. The analysis tries to minimize the chance of resharding by aligning the sharding decisions of adjacent operators. There could be other approaches such as integer programming or machine-learning methods, but improving the automatic sharding assignment is not the focus of this paper and we leave it as future work.
Mixing manual and automatic sharding Automatic partitioning with sharding annotations is often enough for common cases, but GShard also has the flexibility to mix manually partitioned operators with auto-partitioned operators. This gives users more control over how operators are partitioned, which is useful when the user has run-time knowledge beyond the operators' semantics. For example, neither XLA's nor TensorFlow's Gather operator definition conveys information about the index bounds for different ranges in the input, but the user might know that a specific Gather operator shuffles data only within each partition. In that case, the user can trivially partition the operator by simply shrinking the dimension size and performing a local Gather; otherwise, the compiler would have to be conservative about the index range and add unnecessary communication overhead. For example, the dispatching Einsum (Line 3 in Algorithm 2), which uses a one-hot matrix to dispatch inputs, can alternatively be implemented with a Gather operator using trivial manual partitioning, while the rest of the model is partitioned automatically. Below is the pseudocode illustrating this use case.

<sup>3</sup> It is also important for the compiler to infer missing shardings since the backpropagation computation is often automatically generated by the frontend framework and users don't have access to those tensors.
```
# input has shape [G, S, M]. split() does not change the logical shape.
input = split(input, 0, num_devices)
# s_indices has shape [E, G, C, 1]. Values: indices into S in input.
s_indices = split(s_indices, 1, num_devices)

# Begin manual partitioning.
# partitioned_input has shape [G/num_devices, S, M]
partitioned_input = auto_to_manual_spmd_partition(input)
# partitioned_s_indices has shape [E, G/num_devices, C, 1]
partitioned_s_indices = auto_to_manual_spmd_partition(s_indices)
# Concat with G indices in partitioned_input: Iota on the G dimension.
partitioned_gs_indices = concat(
    iota([E, G/num_devices, C, 1], 1), partitioned_s_indices, 3)
# partitioned_data has shape [E, G/num_devices, C, M]
partitioned_data = gather(
    partitioned_input, partitioned_gs_indices)

# Switch back to auto partitioning.
# data has shape [E, G, C, M]
data = manual_to_auto_spmd_partition(partitioned_data)
...
```
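The within-partition Gather scenario can be simulated in NumPy: if the indices only address tokens inside each G-shard, gathering shard by shard reproduces the global gather with no communication (names and shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
G_, S, M, E, C = 4, 8, 5, 2, 6
num_devices = 2

inp = rng.normal(size=(G_, S, M))
# s_idx[e, g, c]: which token of group g fills expert e's buffer slot c.
s_idx = rng.integers(0, S, size=(E, G_, C))

# Global (unpartitioned) gather: data[e, g, c] = inp[g, s_idx[e, g, c]].
global_data = inp[np.arange(G_)[None, :, None], s_idx]

# Manual partitioning along G: each device gathers within its own groups.
shards = []
for d in range(num_devices):
    gs = slice(d * G_ // num_devices, (d + 1) * G_ // num_devices)
    local_inp, local_idx = inp[gs], s_idx[:, gs]
    local_g = np.arange(local_inp.shape[0])[None, :, None]
    shards.append(local_inp[local_g, local_idx])
manual_data = np.concatenate(shards, axis=1)

assert manual_data.shape == (E, G_, C, M)
assert np.allclose(manual_data, global_data)
```

Because every index stays within its shard's G range, no cross-device index lookup is ever required, which is exactly the run-time knowledge the compiler cannot infer from the operator alone.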
This section describes the compiler infrastructure that automatically partitions a computation graph based on sharding annotations. Sharding annotations inform the compiler about how each tensor should be distributed across devices. The SPMD (Single Program Multiple Data) partitioner (or "partitioner" for simplicity) is a compiler component that transforms a computation graph into a single program to be executed on all devices in parallel. This makes the compilation time near constant regardless of the number of partitions, which allows us to scale to thousands of partitions.<sup>4</sup>
We implemented the partitioner in the XLA compiler [28]. Multiple frontend frameworks including TensorFlow, JAX, PyTorch and Julia already have lowering logic to transform their graph representation to XLA HLO graph. XLA also has a much smaller set of operators compared to popular frontend frameworks like TensorFlow, which reduces the burden of implementing a partitioner without harming generality, because the existing lowering from frontends performs the heavy-lifting to make it expressive. Although we developed the infrastructure in XLA, the techniques we describe here can be applied to intermediate representations in other machine learning frameworks (e.g., ONNX [33], TVM Relay [34], Glow IR [35]).
XLA models a computation as a dataflow graph where nodes are operators and edges are tensors flowing between operators. The core of the partitioner is per-operation handling that transforms a full-sized operator into a partition-sized operator according to the sharding specified on the input and output. When a computation is partitioned, various patterns of cross-device data transfers are introduced. In order to maximize the performance at large scale, it is essential to define a core set of communication primitives and optimize those for the target platform.
Since the partitioner forces all the devices to run the same program, the communication patterns are also regular and XLA defines a set of collective operators that perform MPI-style communications [36]. We list the common communication primitives we use in the SPMD partitioner below.
<sup>4</sup> An alternative is MPMD (Multiple Program Multiple Data), which does not scale as shown in Figure 2.
CollectivePermute This operator specifies a list of source-destination pairs, and the input data of a source is sent to the corresponding destination. It is used in two places: changing a sharded tensor's device order among partitions, and halo exchange as discussed later in this section.
AllGather This operator concatenates tensors from all participants following a specified order. It is used to change a sharded tensor to a replicated tensor.
AllReduce This operator performs an elementwise reduction (e.g., summation) over the inputs from all participants. It is used to combine partially reduced intermediate tensors from different partitions. In a TPU device network, AllReduce has a constant cost as the number of partitions grows (Section 5.2). It is also a commonly used primitive with efficient implementations in other types of network topology [37].
AllToAll This operator logically splits the input of each participant along one dimension, then sends each piece to a different participant. On receiving data pieces from others, each participant concatenates the pieces to produce its result. It is used to reshard a sharded tensor from one dimension to another dimension. AllToAll is an efficient way for such resharding in a TPU device network, where its cost increases sublinearly when the number of partitions grows (Section 5.2).
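AllToAll's resharding semantics can be simulated with plain arrays: each participant splits its local block along one dimension and exchanges the pieces, converting a tensor sharded along G into one sharded along E (a toy model of the collective, not the XLA implementation):

```python
import numpy as np

def all_to_all(local_blocks, split_axis, concat_axis):
    """Simulate AllToAll over P participants: participant p splits its
    block into P pieces along split_axis; piece q goes to participant q,
    which concatenates the received pieces along concat_axis."""
    P = len(local_blocks)
    pieces = [np.split(b, P, axis=split_axis) for b in local_blocks]
    return [np.concatenate([pieces[p][q] for p in range(P)],
                           axis=concat_axis)
            for q in range(P)]

rng = np.random.default_rng(0)
P, E, G_, C, M = 4, 4, 8, 2, 3
full = rng.normal(size=(E, G_, C, M))      # logically full [E, G, C, M]

g_sharded = list(np.split(full, P, axis=1))          # sharded along G
e_sharded = all_to_all(g_sharded, split_axis=0, concat_axis=1)

# After AllToAll, participant q holds expert q's slice for all groups.
for q in range(P):
    assert np.allclose(e_sharded[q], full[q:q+1])
```

Each participant sends and receives 1/P of its data to every peer, which is why the collective resharding is cheap compared to a full AllGather.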
The core of the partitioner is the per-operator transformation from a full-sized operator into a partition-sized operator according to the specified sharding. While some operators (e.g., elementwise) are trivial to support, we discuss several common cases where cross-partition communications are required.
There are a few important technical challenges in the general case, which we will cover in Section 3.3.3. To keep the discussion relevant to the MoE model, this section focuses on Einsum partitioning to illustrate a few communication patterns. To keep it simple for now, we assume that all tensors are evenly partitioned, i.e., the size of the dimension to partition is a multiple of the partition count.
Einsum Case Study Einsum is the most critical operator in implementing the MoE model. It is represented as a Dot operation in XLA HLO, where each operand (LHS or RHS) consists of three types of dimensions:
- Batch dimensions are the embarrassingly parallel dimensions. The same set of batch dimensions must exist in all of LHS, RHS and the output, and each element in the output only depends on the corresponding batch in LHS and RHS.
- Contracting dimensions only exist in the operands. LHS and RHS must have the same set of contracting dimensions, and they are summed up and collapsed in the output.
- Non-contracting dimensions are also parallel dimensions that exist in one of the operands and the output. Each of LHS and RHS has its own set of non-contracting dimensions, which are inherited by the output.
Sharding propagation prioritizes choosing the same sharding on batch dimensions of LHS, RHS and output, because that would avoid any cross-partition communication. However, that is not always possible, and we need cross-partition communication in the following three cases.
- Resharding. In the MoE model we built, the expert dispatching logic (Line 3 in Algorithm 2) requires switching the partitioned dimension after an Einsum. Since resharding is efficient (Section 5.2) with AllToAll, we first execute the Einsum locally, then reshard it to the desired dimension, as shown in Figure 4a.
- Accumulating partial results. If the inputs are partitioned along contracting dimensions, the local result is partial and we need to use an AllReduce to combine them and produce the final result, as shown in Figure 4b.
- Slicing in a loop. For certain scenarios, we also implemented an algorithm similar to Cannon's algorithm [38], in order to limit the size of tensors on each partition. For example,
|
| 259 |
+
|
| 260 |
+
Einsum: GSEC, GSM->EGCM
|
| 261 |
+
|
| 262 |
+
(S omitted)
|
| 263 |
+
|
| 264 |
+
(M omitted)
|
| 265 |
+
|
| 266 |
+
(M omitted)
|
| 267 |
+
|
| 268 |
+
(M omitted)
|
| 269 |
+
|
| 270 |
+
(M omitted)
|
| 271 |
+
|
| 272 |
+
(A omitted)
|
| 273 |
+
|
| 274 |
+
(A omitted)
|
| 275 |
+
|
| 276 |
+
(B omitted)
|
| 277 |
+
|
| 278 |
+
(C omitted)
|
| 279 |
+
|
| 280 |
+
(A omitted)
|
| 281 |
+
|
| 282 |
+
(B omitted)
|
| 283 |
+
|
| 284 |
+
(C omitted)
|
| 285 |
+
|
| 286 |
+
(A omitted)
|
| 287 |
+
|
| 288 |
+
(B omitted)
|
| 289 |
+
|
| 290 |
+
(C omitted)
|
| 291 |
+
|
| 292 |
+
(A omitted)
|
| 293 |
+
|
| 294 |
+
(B omitted)
|
| 295 |
+
|
| 296 |
+
(A omitted)
|
| 297 |
+
|
| 298 |
+
(B omitted)
|
| 299 |
+
|
| 300 |
+
(C omitted)
|
| 301 |
+
|
| 302 |
+
(A omitted)
|
| 303 |
+
|
| 304 |
+
(B omitted)
|
| 305 |
+
|
| 306 |
+
(C omitted)
|
| 307 |
+
|
| 308 |
+
(A omitted)
|
| 309 |
+
|
| 310 |
+
(B omitted)
|
| 311 |
+
|
| 312 |
+
(C omitted)
|
| 313 |
+
|
| 314 |
+
(B omitted)
|
| 315 |
+
|
| 316 |
+
(C omitted)
|
| 317 |
+
|
| 318 |
+
(C omitted)
|
| 319 |
+
|
| 320 |
+
(C omitted)
|
| 321 |
+
|
| 322 |
+
(C omitted)
|
| 323 |
+
|
| 324 |
+
(C omitted)
|
| 325 |
+
|
| 326 |
+
(C omitted)
|
| 327 |
+
|
| 328 |
+
(M omitted)
|
| 329 |
+
|
| 330 |
+
(M omitted)
|
| 331 |
+
|
| 332 |
+
(M omitted)
|
| 333 |
+
|
| 334 |
+
(M omitted)
|
| 335 |
+
|
| 336 |
+
(M omitted)
|
| 337 |
+
|
| 338 |
+
(M omitted)
|
| 339 |
+
|
| 340 |
+
(M omitted)
|
| 341 |
+
|
| 342 |
+
(M omitted)
|
| 343 |
+
|
| 344 |
+
(M omitted)
|
| 345 |
+
|
| 346 |
+
(M omitted)
|
| 347 |
+
|
| 348 |
+
(M omitted)
|
| 349 |
+
|
| 350 |
+
(M omitted)
|
| 351 |
+
|
| 352 |
+
(M omitted)
|
| 353 |
+
|
| 354 |
+
(M omitted)
|
| 355 |
+
|
| 356 |
+
(M omitted)
|
| 357 |
+
|
| 358 |
+
(M omitted)
|
| 359 |
+
|
| 360 |
+
(M omitted)
|
| 361 |
+
|
| 362 |
+
(M omitted)
|
| 363 |
+
|
| 364 |
+
(M omitted)
|
| 365 |
+
|
| 366 |
+
(M omitted)
|
| 367 |
+
|
| 368 |
+
(M omitted)
|
| 369 |
+
|
| 370 |
+
(M omitted)
|
| 371 |
+
|
| 372 |
+
(M omitted)
|
| 373 |
+
|
| 374 |
+
(M omitted)
|
| 375 |
+
|
| 376 |
+
(M omitted)
|
| 377 |
+
|
| 378 |
+
(M omitted)
|
| 379 |
+
|
| 380 |
+
(M omitted)
|
| 381 |
+
|
| 382 |
+
(M omitted)
|
| 383 |
+
|
| 384 |
+
(M omitted)
|
| 385 |
+
|
| 386 |
+
(M omitted)
|
| 387 |
+
|
| 388 |
+
(M omitted)
|
| 389 |
+
|
| 390 |
+
(M omitted)
|
| 391 |
+
|
| 392 |
+
(M omitted)
|
| 393 |
+
|
| 394 |
+
(M omitted)
|
| 395 |
+
|
| 396 |
+
(M omitted)
|
| 397 |
+
|
| 398 |
+
(M omitted)
|
| 399 |
+
|
| 400 |
+
(M omitted)
|
| 401 |
+
|
| 402 |
+
(M omitted)
|
| 403 |
+
|
| 404 |
+
(M omitted)
|
| 405 |
+
|
| 406 |
+
(M omitted)
|
| 407 |
+
|
| 408 |
+
(M omitted)
|
| 409 |
+
|
| 410 |
+
(M omitted)
|
| 411 |
+
|
| 412 |
+
(M omitted)
|
| 413 |
+
|
| 414 |
+
(M omitted)
|
| 415 |
+
|
| 416 |
+
(M omitted)
|
| 417 |
+
|
| 418 |
+
(M omitted)
|
| 419 |
+
|
| 420 |
+
(M omitted)
|
| 421 |
+
|
| 422 |
+
(M omitted)
|
| 423 |
+
|
| 424 |
+
(M omitted)
|
| 425 |
+
|
| 426 |
+
(M omitted)
|
| 427 |
+
|
| 428 |
+
(M omitted)
|
| 429 |
+
|
| 430 |
+
(M omitted)
|
| 431 |
+
|
| 432 |
+
(M omitted)
|
| 433 |
+
|
| 434 |
+
(M omitted)
|
| 435 |
+
|
| 436 |
+
(M omitted)
|
| 437 |
+
|
| 438 |
+

(a) A partitioned Einsum operator. Colored letters (G and E) represent the partitioned dimension of each tensor. The partitioner decides to first execute a batch-parallel Einsum along the G dimension, then reshard the result to the E dimension.

(b) A simple Einsum (Matmul) partitioned on the contracting dimension.

(c) An Einsum (Matmul) where we use collective-permute in a loop to compute one slice at a time. There is no full-sized tensor during the entire process.

Figure 4: Examples of Einsum partitioning with cross-device communication.

if both operands are partitioned on a non-contracting dimension, we cannot compute the local Einsum directly, since the operands have different non-contracting dimensions. Replicating one of the operands would not cause redundant computation, but it requires the replicated operand to fit in device memory. Therefore, if the size of the operand is too large, we instead keep both operands partitioned and use a loop to iterate over each slice of the result, using CollectivePermute to communicate the input slices (Figure 4c).
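The loop-and-permute strategy can be sketched on a single host, with array indexing standing in for CollectivePermute and illustrative sizes (this is a simulation of the idea, not the partitioner's implementation):

```python
import numpy as np

# Matmul where A is partitioned on its rows and B on its columns (both
# non-contracting). Each "device" d builds its result slice C_d one column
# block at a time, rotating B shards around a ring instead of ever
# materializing a full-sized tensor.
p = 2                                  # number of partitions (illustrative)
A = np.arange(8.0).reshape(4, 2)       # [m, k]
B = np.arange(8.0).reshape(2, 4)       # [k, n]
A_shards = np.split(A, p, axis=0)      # each device holds [m/p, k]
B_shards = np.split(B, p, axis=1)      # each device holds [k, n/p]

C_shards = [np.zeros((A.shape[0] // p, B.shape[1])) for _ in range(p)]
ncols = B.shape[1] // p
for step in range(p):
    for d in range(p):
        # after `step` rotations, device d holds the B shard that started
        # on device (d + step) % p; compute the matching column block
        j = (d + step) % p
        C_shards[d][:, j * ncols:(j + 1) * ncols] = A_shards[d] @ B_shards[j]
    # a CollectivePermute would rotate the B shards here; the modular
    # indexing above simulates that rotation

# concatenating the per-device row slices recovers the full product
```

Rotating the shards keeps per-device memory at the shard size; only the result slice being computed is materialized at each step.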

We solved several additional challenges to enable the SPMD partitioner to support a complete set of operators without extra constraints on tensor shapes or operator configurations. These challenges often involve asymmetric compute or communication patterns between partitions, which are particularly

Figure 5: Halo exchange examples.

hard to express in SPMD, since the single program needs to be general enough for all partitions. We cannot simply create many branches in the single program based on the run-time device ID, because that would lead to an explosion in program size.

**Static shapes and uneven partitioning** XLA requires tensor shapes to be static.<sup>5</sup> However, when a computation is partitioned, it is not always the case that all partitions have the same input/output shapes, because dimensions may not be evenly divisible by the number of partitions. In those cases, the size of the shape is rounded up to the next multiple of the partition count, and the data in that padded region can be arbitrary.

When computing an operator, we may need to fill the padded region with a known value for correctness. For example, if we need to partition a Reduce-Add operator, the identity value of zero needs to be used. Consider an example where the partitioned dimension (15) cannot be evenly divided into 2 (the partition count), so Partition 1 has one more column than needed. We create an Iota operator of range [0, 8), add the partition offset (calculated as PartitionId × 8), and compare with the full shape offset (15). Based on the predicate value, we select either from the operand or from zero, and the result is the masked operand.
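The masking recipe above can be sketched in NumPy (the sizes 15, 2 and 8 come from the example in the text; `mask_shard` is an illustrative helper, not part of XLA):

```python
import numpy as np

# Masking the padded region of an unevenly partitioned dimension:
# full dimension 15, 2 partitions, shards padded to ceil(15 / 2) = 8.
full_size, num_partitions, shard_size = 15, 2, 8

def mask_shard(shard, partition_id):
    iota = np.arange(shard_size)                 # iota over [0, 8)
    global_pos = iota + partition_id * shard_size  # add PartitionId * 8
    valid = global_pos < full_size               # compare with full offset 15
    # select from the operand where valid, else the identity value (0 for Add)
    return np.where(valid, shard, 0)

shard1 = np.ones(shard_size)          # partition 1 holds global columns 8..15
masked = mask_shard(shard1, partition_id=1)
# the final column (global position 15) lies in the padded region and is zeroed
```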

**Static operator configurations** XLA operators have static configurations, like the padding, stride, and dilation defined in Convolution. However, different partitions may not execute with the same operator configuration. E.g., for a Convolution, the left-most partition applies padding to its left while the right-most partition applies padding to its right. In such cases, the partitioner may choose configurations that make some partitions produce slightly more data than needed, then slice out the irrelevant parts. Appendix A.4 discusses examples for Convolution and similar operators.

**Halo exchange** Certain operators have a communication pattern that involves partial data exchange with neighboring partitions, which we call *halo exchange*. We use the CollectivePermute operator to exchange halo data between partitions.

The most typical use case of halo exchange is partitioning window-based operators (e.g., Convolution, ReduceWindow), because neighboring partitions may require overlapping input data (Figure 5a). In practice, halo exchange for these operators often needs to be coupled with proper padding, slicing, and masking due to advanced use of window configurations (dilation, stride, and padding), as well as uneven halo sizes. We describe various scenarios in Appendix A.4.
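A minimal single-host sketch of halo exchange for a size-3 window with stride 1 (one element of halo per side; the zero padding at the outer edges and the 1-D sizes are illustrative assumptions, with Python lists standing in for per-device buffers):

```python
import numpy as np

def halo_exchange(shards, halo=1):
    """Give each shard `halo` elements from each neighbor (zeros at edges)."""
    exchanged = []
    for i, s in enumerate(shards):
        # in a real partitioner these two reads would be CollectivePermutes
        left = shards[i - 1][-halo:] if i > 0 else np.zeros(halo)
        right = shards[i + 1][:halo] if i < len(shards) - 1 else np.zeros(halo)
        exchanged.append(np.concatenate([left, s, right]))
    return exchanged

x = np.arange(8.0)
shards = [x[:4], x[4:]]               # two partitions of a 1-D input
padded = halo_exchange(shards)
# each partition now holds the overlapping input needed for a size-3 window
```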

Another use of halo exchange is for data formatting operators that change the size of the shape. For example, after a Slice or Pad operator, the shape of the tensor changes, and so do the boundaries between partitions. This requires us to realign the data on different partitions, which can be handled as a form of halo exchange (Figure 5b).

Other data formatting operators, although logically not changing the size of the shape, may also need halo exchange, specifically due to the static shape constraint and uneven partitioning. For example, the Reverse operator reverses the order of elements in a tensor, but if it is partitioned unevenly, we need to shift data across partitions to keep the padding logically to the right of the result tensor. Another example is Reshape. Consider reshaping a tensor from [3, 2] to [6], where the input is

<sup>5</sup>The limited dynamism in the intermediate representation is often necessary to efficiently target accelerators.

unevenly partitioned in 2 ways on the first dimension (partition shape [2, 2]), and the output is also partitioned in 2 ways (partition shape [3]). There is padding on the input due to uneven partitioning, but after Reshape, the output tensor no longer has padding; as a result, halo exchange is required in a similar way to Slice (Figure 5c).

**Compiler optimizations** The SPMD partitioner creates various data formatting operators in order to perform slicing, padding, concatenation, masking and halo exchange. To address this, we leverage XLA's fusion capabilities on TPU, as well as code motion optimizations for slicing and padding, to largely hide the overhead of data formatting. As a result, the run-time overhead is typically negligible, even for convolutional networks where masking and padding are heavily used.

We chose multilingual neural machine translation (MT) [39, 40, 41] to validate our design for efficient training with GShard. Multilingual MT, which is an inherently multi-task learning problem, aims at building a single neural network for the goal of translating multiple language pairs simultaneously. This extends our line of work [15, 14, 16] towards a universal machine translation model [42], i.e. a single model that can translate between more than a hundred languages, in all domains. Such massively multilingual translation models are not only convenient for stress testing models at scale, but have also been shown to be practically impactful in real-world production systems [43].

In massively multilingual MT, there are two criteria that define success in terms of model quality: 1) improvements attained on languages that have large amounts of training data (high-resource), and 2) improvements for languages with limited data (low-resource). As the number of language pairs (tasks) to be modeled within a single translation model increases, *positive language transfer* [44] starts to deliver large gains for low-resource languages. Given the number of languages considered, M4 has a clear advantage on improving the low-resource tasks. On the contrary, for high-resource languages the increased number of tasks limits per-task capacity within the model, resulting in lower translation quality compared to a model trained on a single language pair. This *capacity bottleneck* for high-resource languages can be relaxed by increasing the model size to massive scale in order to satisfy the need for additional capacity [14, 15].

Massively multilingual, massive MT consequently aims at striking a balance between increasing *positive transfer* by massive multilinguality and mitigating the *capacity bottleneck* by massive scaling. While doing so, scaling the model size and the number of languages considered has to be coupled with a convenient neural network architecture. In order to amplify the *positive transfer* and reduce the *negative transfer*<sup>6</sup>, one can naturally design a model architecture that harbours shared components across languages (shared sub-networks), along with some language-specific ones (unshared, language-specific sub-networks). However, the search space in model design (deciding what to share) grows rapidly as the number of languages increases, making heuristic-based search for a suitable architecture impractical. Thereupon, the need for approaches based on learning the wiring pattern of the neural networks from the data emerges as a scalable and practical way forward.

In this section, we show how conditional computation [45, 46] with sparsely gated mixture of experts [16] fits the desiderata detailed above, and we demonstrate its efficacy by scaling neural machine translation models beyond 1 trillion parameters while keeping the training time of such massive networks practical. E.g. a 600B GShard model for M4 can process 1T tokens<sup>7</sup> in 250k training steps in under 4 days. We experiment with increasing the model capacity by adding more and more experts into the model and study the factors playing a role in convergence, model quality and training efficiency. Further, we demonstrate how conditional computation can speed up the training [25] and how sparsely gating/routing each token through the network can efficiently be learned without any prior knowledge on task or language relatedness, exemplifying the capability of learning the routing decision directly from the data.

<sup>6</sup>Negative transfer is the notion of unrelated tasks sharing model capacity, which in turn hurts the quality of such *interfering* tasks.

<sup>7</sup>Source side tokens after sub-word segmentation.

The premise of progressively larger models to attain greater quality necessitates large amounts of training data to begin with [3]. Following the prior work on dense scaling for multilingual machine translation [15, 14], we committed to the realistic test bed of MT *in the wild*, and use a web-scale in-house dataset. The training corpus, mined from the web [47], contains parallel documents for 100 languages, to and from English, adding up to a total of 25 billion training examples. A few characteristics of the training set are worth mentioning. Having been mined from the web, the joint corpus is considerably noisy while covering a diverse set of domains and languages. Such large coverage comes with a heavy imbalance between languages in terms of the amount of examples per language pair. This imbalance follows a sharp power law, ranging from billions of examples for high-resource languages to tens of thousands of examples for low-resource ones. While the above-mentioned characteristics constitute a challenge for our study, they also make the overall attempt as realistic as possible. We refer the reader to [15, 14] for additional details of the dataset being used.

We focus on improving the translation quality (measured in terms of BLEU score [48]) from all 100 languages to English. This resulted in approximately 13 billion training examples used for model training<sup>8</sup>. In order to form our baselines, we trained separate bilingual Neural Machine Translation models for each language pair (e.g. a single model for German-to-English), tuned depending on the available training data per language<sup>9</sup>. Rather than displaying individual BLEU scores for each language pair, we follow the convention of placing the baselines along the x-axis at zero, and report the ∆BLEU trendline of each massively multilingual model trained with GShard (see Figure 6). The x-axis in Figure 6 is sorted from left to right in decreasing order of the amount of available training data, where the left-most side corresponds to high-resource languages and the right-most side to low-resource languages. To reiterate, our ultimate goal in universal machine translation is to amass the ∆BLEU trendline of a single multilingual model above the baselines for all languages considered. We also include a variant of a dense 96-layer Transformer Encoder-Decoder network T(96L) trained with GPipe pipeline parallelism on the same dataset as another baseline (dashed trendline in Figure 6). Training to convergence took over 6 weeks on 2048 TPU v3 cores<sup>10</sup>; it outperforms the original GPipe T(128L)<sup>11</sup> [15] and is the strongest single dense model baseline we use in our comparisons.

Scaling the Transformer architecture has been an exploratory research track recently [49, 50, 51]. Without loss of generality, emerging approaches scale the Transformer by stacking more and more layers [49, 15], widening the governing dimensions of the network (i.e. model dimension, hidden dimension or number of attention heads) [4, 11] and, more recently, learning the wiring structure with architecture search [52]<sup>12</sup>. For massively multilingual machine translation, [15] demonstrated best practices of scaling using GPipe pipeline parallelism, in which a 128-layer Transformer model with 6 billion parameters is shown to be effective at improving high-resource languages while exhibiting the highest positive transfer towards low-resource languages. Although very promising, and satisfying our desiderata for universal translation, dense scaling of the Transformer architecture has practical limitations, which we referred to in Section 1 under training efficiency.

We aim for practical training time and seek architectures that warrant training efficiency. Our strategy has three pillars: increase the depth of the network by stacking more layers similar to GPipe [15], increase the width of the network by introducing multiple replicas of the feed-forward networks (experts) as described in Section 2.2, and make use of learned routing modules to (sparsely) assign tokens to experts as described in Section 2.1. With these three constituents, we obtain an

<sup>8</sup>Compared to prior work using the same dataset, Kazakh and Latin to English language pairs were excluded from evaluation.

<sup>9</sup>We tuned batch-size and different values of regularization methods (e.g. dropout) in a Transformer-Big or Transformer-Base layout, for high or low-resourced languages respectively.

<sup>10</sup>T(96L) measured to be processing 1+ trillion tokens at 300k steps, processing around 4M tokens/step, a total budget of 235.5 TPU v3 core-years.

<sup>11</sup>64 encoder + 64 decoder layers, 16384 hidden dim, 32 attention heads.

<sup>12</sup>Since the approaches utilizing architecture search are compute intensive, they are not considered within the scope of this work.

| Id | Model | BLEU avg. | ∆BLEU avg. | Weights |
|-----|-----------------|-----------|------------|----------|
| (1) | MoE(2048E, 36L) | 44.3 | 13.5 | 600B |
| (2) | MoE(2048E, 12L) | 41.3 | 10.5 | 200B |
| (3) | MoE(512E, 36L) | 43.7 | 12.9 | 150B |
| (4) | MoE(512E, 12L) | 40.0 | 9.2 | 50B |
| (5) | MoE(128E, 36L) | 39.0 | 8.2 | 37B |
| (6) | MoE(128E, 12L) | 36.7 | 5.9 | 12.5B |
| * | T(96L) | 36.9 | 6.1 | 2.3B |
| * | Baselines | 30.8 | - | 100×0.4B |

Figure 6: Translation quality comparison of multilingual MoE Transformer models trained with GShard and monolingual baselines. Positions along the x-axis represent languages, ranging from high- to low-resource. ∆BLEU represents the quality gain of a single multilingual model compared to a monolingual Transformer model trained and tuned for a specific language. MoE Transformer models trained with GShard are reported with solid trend-lines. The dashed trend-line represents a single 96-layer multilingual Transformer model T(96L) trained with GPipe on the same dataset. Each trend-line is smoothed by a sliding window of 10 for clarity. (Best seen in color)

easy to scale, efficient to train and highly expressive architecture, which we call Sparsely-Gated Mixture-of-Experts Transformer, or MoE Transformer in short.

**Model Details** To detail the model specifics, each expert is designed to have the same shape as a regular Transformer feed-forward network, and experts (MoE layers) are placed once in every other Transformer layer. We tied the number of devices used for training to the number of experts per MoE layer for simplicity, although this is not a requirement. During training, we use float32 for both model weights and activations in order to ensure training stability. We ran additional scalability experiments with MoE(2048E, 60L) with bfloat16 [53] activations and a total of 1 trillion model weights. Although trainable with careful and manual diagnostics, we encountered several trainability issues with numerical stability for the deep 1-trillion-weight model, and hence did not include the results for the sake of reproducibility. For more model and training details, please see Appendix A.2.
2007.14166/main_diagram/main_diagram.drawio
ADDED
2007.14166/paper_text/intro_method.md
ADDED
# Introduction

Adaptive gradient methods have been widely used in deep learning. Although stochastic gradient descent (SGD) was one of the most preferred algorithms for many years, it has difficulty overcoming serious problems such as ill-conditioning and the time required for large-scale datasets when training deep neural networks. It requires manual tuning of the learning rate and is difficult to parallelize [16]. Thus, the problems of SGD led to the invention of more advanced algorithms. Nowadays, the optimization algorithms used for deep learning adapt their learning rates during training. Basically, adaptive gradient methods adjust the learning rate for each parameter: when the gradients for some parameters are large, their learning rates are reduced, and vice versa.

Until recently, many adaptive methods have been proposed, and they have become the most commonly used alternatives to SGD. In addition to their high performance on training deep models, another advantage is that they are first-order optimization algorithms, just like SGD. Thus, they are computationally efficient for training deep neural networks. This work aims to present the most widely used adaptive optimization algorithms that have proven their superiority and to compare their working principles. To this end, image processing, one of the most important areas of deep learning, is considered. Firstly, the effects of adaptive gradient methods are observed for the image classification task by using convolutional neural networks (CNNs). Secondly, as an unsupervised task, convolutional autoencoders (CAEs), one of the quintessential examples of unsupervised learning, are used for image reconstruction. Besides, the effects of the algorithms are examined by using denoising autoencoders. In this way, the behaviours of the algorithms during training are analyzed in addition to their performances on both supervised and unsupervised learning tasks.

The rest of the paper is organized as follows. In Section 2, studies about adaptive gradient methods and their most recent variants are reviewed briefly. In Section 3, widely used optimization algorithms in deep learning are explained by showing their update rules and their solutions to the challenges of training deep networks. SGD and its momentum variants are also mentioned in this section. The experiments and comparative results are presented in Section 4, and the conclusion is given in Section 5.

# Method

The choice of the algorithm used to optimize a neural network is one of the most important steps. In machine learning, there are three main kinds of optimization methods. The first, called batch or deterministic gradient methods, processes all training examples simultaneously in a large batch. The second, called stochastic or online methods, uses only one example at a time. Today, most algorithms are a blend of the two: during training, they use only a part of the training set at each iteration. These algorithms are called minibatch methods. In the deep learning era, minibatch methods are mostly preferred for two major reasons. Firstly, they accelerate the training of neural networks. Secondly, as the minibatches are selected randomly and are independent, an unbiased estimate of the expected gradient can be computed [8].

In this paper, the most widely used minibatch-based adaptive algorithms are examined in detail. Besides, SGD, which was conventionally preferred for a long time, is briefly explained alongside its momentum variants.

Basically, SGD [24] follows the gradient of randomly selected minibatches downhill. In order to train a neural network using SGD, the gradient estimate is first computed by using a loss function. Then, the update at iteration $k$ is applied to the parameters $\theta$. The calculations for each minibatch of $m$ examples from the training set $x^{(1)}, \ldots, x^{(m)}$ with corresponding targets $y^{(i)}$ are as follows:

$$\hat{g} \leftarrow \frac{1}{m} \nabla_{\theta} \sum_{i} L(f(x^{(i)}; \theta), y^{(i)}) \tag{1}$$

$$\theta \leftarrow \theta - \epsilon_k \hat{g} \tag{2}$$

Here, the learning rate $\epsilon_k$ is a very important hyperparameter. The magnitude of the update depends on the learning rate. If it is too large, updates depend too much on recent instances. If it is too small, many updates may be needed for convergence [1]. This hyperparameter can be chosen by trial and error. One way is to choose, among several candidates, the learning rate that results in the smallest loss function value. This is called line search. Another way is to monitor the first several epochs and use a learning rate that is higher than the best-performing learning rate. In Equation 2, the learning rate is denoted as $\epsilon_k$ at iteration $k$ because, in practice, it is necessary to gradually decrease the learning rate over time [8].
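As an illustration, Equations 1 and 2 can be sketched in NumPy on a toy least-squares problem (the data, batch size, and learning rate are illustrative choices, not values from the paper):

```python
import numpy as np

# Minibatch SGD (Equations 1-2) on a toy noiseless least-squares problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # inputs x^(i)
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta                      # targets y^(i)

theta = np.zeros(3)
lr = 0.1                                # learning rate epsilon_k (kept constant here)
for step in range(500):
    idx = rng.choice(len(X), size=10)   # sample a minibatch of m = 10 examples
    Xb, yb = X[idx], y[idx]
    # gradient of the mean squared-error loss over the minibatch (Equation 1)
    g = Xb.T @ (Xb @ theta - yb) / len(idx)
    theta -= lr * g                     # theta <- theta - epsilon * g  (Equation 2)
```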

SGD has difficulty reaching the global optimum because of its tendency to oscillate, especially on steep surface curves. Noisy or small gradients may also be problematic. The method of momentum [22] is designed to accelerate learning in such cases. It aims primarily to solve two problems: poor conditioning of the Hessian matrix and variance in the stochastic gradient. The idea behind this algorithm is to take a running average by incorporating the previous update into the current change, as if there is a momentum due to previous updates [1]. When SGD is used with momentum, it can converge faster with reduced oscillation.

SGD with momentum uses a variable $v$ called velocity. The velocity is the direction and speed at which the parameters move through parameter space. It is set to an exponentially decaying average of the negative gradient. Also, SGD with momentum requires a new hyperparameter $\alpha \in [0, 1)$ called the momentum parameter, which determines how quickly the contributions of previous gradients exponentially decay. The parameters are updated after the velocity update is computed:

$$v \leftarrow \alpha v - \epsilon \frac{1}{m} \nabla_{\theta} \left( \sum_{i=1}^{m} L(f(x^{(i)}; \theta), y^{(i)}) \right) \tag{3}$$

$$\theta \leftarrow \theta + v \tag{4}$$

The velocity $v$ accumulates the gradient elements. The larger $\alpha$ is relative to $\epsilon$, the more previous gradients affect the current direction. Common values of $\alpha$ used in practice are 0.5, 0.9 and 0.99 [8]. However, a disadvantage of this algorithm is the requirement of a momentum hyperparameter in addition to the learning rate.
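The update of Equations 3 and 4 differs from plain SGD only in the velocity lines; a sketch on the same kind of toy problem (hyperparameter values are illustrative):

```python
import numpy as np

# SGD with momentum (Equations 3-4) on a toy noiseless least-squares problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta

theta, v = np.zeros(3), np.zeros(3)    # v is the velocity
lr, alpha = 0.05, 0.9                  # epsilon and momentum parameter alpha
for step in range(300):
    idx = rng.choice(len(X), size=10)
    Xb, yb = X[idx], y[idx]
    g = Xb.T @ (Xb @ theta - yb) / len(idx)
    v = alpha * v - lr * g             # v <- alpha*v - epsilon*g  (Equation 3)
    theta = theta + v                  # theta <- theta + v        (Equation 4)
```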

SGD with Nesterov momentum [29] is proposed as a variant of the standard momentum method, taking inspiration from Nesterov's accelerated gradient method [21]. The idea is to measure the gradient of the loss function not at the local position but slightly ahead in the direction of the momentum [7]. That is, with Nesterov momentum the gradient is evaluated after the current velocity is applied. Therefore, SGD with Nesterov momentum begins with an interim update for a minibatch [8]:

$$\tilde{\theta} \leftarrow \theta + \alpha v \tag{5}$$

Then, the gradient is computed at the interim point. Using this gradient, the velocity update is computed. Finally, the parameters are updated:

$$g \leftarrow \frac{1}{m} \nabla_{\tilde{\theta}} \sum_{i=1}^{m} L(f(x^{(i)}; \tilde{\theta}), y^{(i)}) \tag{6}$$

$$v \leftarrow \alpha v - \epsilon g \tag{7}$$

$$\theta \leftarrow \theta + v \tag{8}$$
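A sketch of Equations 5-8; the only change from standard momentum is evaluating the gradient at the interim point (toy data and hyperparameters are illustrative):

```python
import numpy as np

# SGD with Nesterov momentum (Equations 5-8): the gradient is evaluated at
# the interim point theta + alpha*v rather than at theta.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta

theta, v = np.zeros(3), np.zeros(3)
lr, alpha = 0.05, 0.9
for step in range(300):
    idx = rng.choice(len(X), size=10)
    Xb, yb = X[idx], y[idx]
    interim = theta + alpha * v                    # Equation 5
    g = Xb.T @ (Xb @ interim - yb) / len(idx)      # Equation 6
    v = alpha * v - lr * g                         # Equation 7
    theta = theta + v                              # Equation 8
```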

One of the optimization algorithms that individually adapts the learning rates of model parameters is AdaGrad [6]. The parameters with the largest partial derivative of the loss have a rapid decrease in their learning rate, while parameters with small partial derivatives have a relatively small decrease in their learning rate [8]. This is performed by using all the historical squared values of the gradient.

AdaGrad uses an additional variable $r$ for gradient accumulation. At the beginning of the algorithm, the gradient accumulation variable is initialized to zero and the gradient is computed for a minibatch:

$$g \leftarrow \frac{1}{m} \nabla_{\theta} \sum_{i} L(f(x^{(i)}; \theta), y^{(i)}) \tag{9}$$

Using this gradient, the squared gradient is accumulated. Then, the update is computed by scaling the learning rates of all parameters inversely proportionally to the square root of the sum of all the historical squared values of the gradient. Finally, this update is applied to the model parameters:

$$r \leftarrow r + g \odot g \tag{10}$$

$$\Delta\theta \leftarrow -\frac{\epsilon}{\delta + \sqrt{r}} \odot g \tag{11}$$

$$\theta \leftarrow \theta + \Delta\theta \tag{12}$$

where $\epsilon$ is the global learning rate and $\delta$ is a small constant for numerical stability. However, AdaGrad has serious disadvantages. Generally, it performs well for simple quadratic problems, but it often stops too early when training neural networks: the learning rate gets scaled down so much that the algorithm ends up stopping entirely before reaching the global optimum [7]. Also, for training deep neural networks, the accumulation of squared gradients from the beginning of training can result in an excessive decrease in the effective learning rate. AdaGrad performs well for some, but not all, deep learning models [8].
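Equations 9-12 can be sketched in NumPy on a toy problem (values are illustrative); note that $r$ grows monotonically, which is the source of the shrinking effective learning rate discussed above:

```python
import numpy as np

# AdaGrad (Equations 9-12): per-parameter learning rates scaled by the
# inverse square root of the accumulated squared gradients.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta

theta = np.zeros(3)
r = np.zeros(3)                        # gradient accumulation variable
lr, delta = 0.5, 1e-7                  # global learning rate and stability constant
for step in range(500):
    idx = rng.choice(len(X), size=10)
    Xb, yb = X[idx], y[idx]
    g = Xb.T @ (Xb @ theta - yb) / len(idx)       # Equation 9
    r += g * g                                    # Equation 10
    theta += -(lr / (delta + np.sqrt(r))) * g     # Equations 11-12
```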

The underlying idea of the AdaDelta algorithm is to remedy the two main drawbacks of AdaGrad: the continual decay of learning rates throughout training and the need for a manually selected global learning rate. To this end, AdaDelta restricts the window of past gradients to some fixed size $w$ instead of accumulating the sum of squared gradients over all time. As mentioned in the previous section, AdaGrad accumulates the squared gradients from each iteration starting at the beginning of training. This accumulated sum continues to grow during training, effectively shrinking the learning rate on each dimension. After many iterations, the learning rate becomes infinitesimally small. With the windowed accumulation, AdaGrad becomes a local estimate using recent gradients instead of accumulating to infinity. Thus, learning continues to make progress even after many iterations of updates have been done [31].

Since storing $w$ previous squared gradients is inefficient, AdaDelta implements this accumulation as an exponentially decaying average of the squared gradients. Assuming this running average is $E[g^2]_t$ at time $t$, the gradient accumulation is computed as follows:

$$E[g^{2}]_{t} = \rho E[g^{2}]_{t-1} + (1 - \rho)g_{t}^{2} \tag{13}$$

where $\rho$ is a decay constant similar to that used in the momentum method. Since the square root of this quantity is required in the parameter updates, this effectively becomes the root mean square (RMS) of previous squared gradients up to time $t$:

$$RMS[g]_t = \sqrt{E[g^2]_t + \delta} \tag{14}$$

where $\delta$ is again a small constant. Based on this RMS, the parameter update is computed, the updates are accumulated, and the parameters are updated, respectively:

$$\Delta \theta_t = -\frac{RMS[\Delta \theta]_{t-1}}{RMS[g]_t} g_t \tag{15}$$

$$E[\Delta \theta^2]_t = \rho E[\Delta \theta^2]_{t-1} + (1 - \rho)\Delta \theta_t^2 \tag{16}$$

$$\theta_{t+1} = \theta_t + \Delta \theta_t \tag{17}$$

The advantage of AdaDelta is that it requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architectures, various data modalities and the selection of hyperparameters [31].
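A sketch of Equations 13-17 on a toy problem (the $\rho$ and $\delta$ values and the data are illustrative); note that no global learning rate appears in the update:

```python
import numpy as np

# AdaDelta (Equations 13-17): the step size is the ratio of two running
# RMS estimates, so no learning rate hyperparameter is needed.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta

theta = np.zeros(3)
Eg2 = np.zeros(3)                      # running average of squared gradients
Edx2 = np.zeros(3)                     # running average of squared updates
rho, delta = 0.95, 1e-6
for step in range(2000):
    idx = rng.choice(len(X), size=10)
    Xb, yb = X[idx], y[idx]
    g = Xb.T @ (Xb @ theta - yb) / len(idx)
    Eg2 = rho * Eg2 + (1 - rho) * g * g                      # Equation 13
    dx = -np.sqrt(Edx2 + delta) / np.sqrt(Eg2 + delta) * g   # Equation 15
    Edx2 = rho * Edx2 + (1 - rho) * dx * dx                  # Equation 16
    theta = theta + dx                                       # Equation 17
```

AdaDelta's steps start tiny (of order $\sqrt{\delta}$) and grow as the update RMS builds up, which is why it needs noticeably more iterations than the other methods on this toy problem.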

Another algorithm that modifies AdaGrad is RMSProp [10]. It is proposed to perform better in the nonconvex setting by changing the gradient accumulation into an exponentially weighted moving average. As mentioned in Section 3.2, AdaGrad shrinks the learning rate according to the entire history of the squared gradient. Instead, RMSProp uses an exponentially decaying average to discard history from the extreme past, so that it can converge rapidly after finding a convex bowl [8].
|
| 87 |
+
|
| 88 |
+
To implement RMSProp, the squared gradient is accumulated after computing the gradient:
$$r \leftarrow \rho r + (1 - \rho)g \odot g \tag{18}$$
where ρ is the decay rate. Then the parameter update is computed and applied as follows:
$$\Delta\theta = -\frac{\epsilon}{\sqrt{\delta + r}} \odot g \tag{19}$$
$$\theta \leftarrow \theta + \Delta\theta \tag{20}$$
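A corresponding NumPy sketch of the RMSProp updates in Eqs. 18-20, again on an illustrative toy quadratic with arbitrarily chosen hyperparameters:

```python
import numpy as np

def rmsprop(grad_fn, theta, eps=0.01, rho=0.9, delta=1e-8, steps=1000):
    """RMSProp updates following Eqs. 18-20."""
    r = np.zeros_like(theta)                          # squared-gradient accumulator
    for _ in range(steps):
        g = grad_fn(theta)
        r = rho * r + (1 - rho) * g * g               # Eq. 18
        theta = theta - eps / np.sqrt(delta + r) * g  # Eqs. 19-20
    return theta

# Toy problem: minimize f(theta) = ||theta||^2.
theta = rmsprop(lambda th: 2.0 * th, np.array([2.0, -1.5]))
```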
Adam is one of the most widely used optimization algorithms in deep learning. The name Adam is derived from adaptive moment estimation, because it computes individual adaptive learning rates for different parameters from estimates of the first and second moments of the gradients. Adam combines the advantages of AdaGrad, which works well with sparse gradients, and RMSProp, which works well in online and non-stationary settings [13].
There are some important properties of Adam. Firstly, momentum is incorporated directly as an estimate of the first-order moment of the gradient. Also, Adam includes bias corrections to the estimates of both the first-order and second-order moments to account for their initialization at the origin [8]. The algorithm updates exponential moving averages of the gradient $m_t$ and the squared gradient $u_t$, where the hyperparameters $\rho_1$ and $\rho_2$ control the exponential decay rates of these moving averages. The moving averages themselves are estimates of the first moment (the mean) and the second raw moment (the uncentered variance) of the gradient [13].
The Adam algorithm requires first and second moment variables m and u. After computing the gradient, the biased first and second moment estimates are updated at time step t, respectively:
$$m_t \leftarrow \rho_1 m_{t-1} + (1 - \rho_1) g_t \tag{21}$$
$$u_t \leftarrow \rho_2 u_{t-1} + (1 - \rho_2)\, g_t \odot g_t \tag{22}$$
Then, the bias in the first and second moments is corrected. Using the corrected moment estimates, the parameter update is calculated and applied:
$$\hat{m}_t \leftarrow \frac{m_t}{1 - \rho_1^t} \tag{23}$$
$$\hat{u}_t \leftarrow \frac{u_t}{1 - \rho_2^t} \tag{24}$$
$$\Delta\theta = -\epsilon \frac{\hat{m}_t}{\sqrt{\hat{u}_t} + \delta} \tag{25}$$
$$\theta_t \leftarrow \theta_{t-1} + \Delta\theta \tag{26}$$
Adam has many advantages. First of all, it requires little tuning of the learning rate. It is also straightforward to implement and invariant to diagonal rescaling of gradients. It is computationally efficient and has low memory requirements. Besides, Adam is appropriate for non-stationary objectives and for problems with very noisy and/or sparse gradients [13].
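The Adam updates in Eqs. 21-26 can be sketched the same way (the toy objective and hyperparameter values are arbitrary illustrations):

```python
import numpy as np

def adam(grad_fn, theta, eps=0.01, rho1=0.9, rho2=0.999, delta=1e-8, steps=1000):
    """Adam updates following Eqs. 21-26."""
    m = np.zeros_like(theta)   # first moment estimate
    u = np.zeros_like(theta)   # second moment estimate
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = rho1 * m + (1 - rho1) * g                           # Eq. 21
        u = rho2 * u + (1 - rho2) * g * g                       # Eq. 22
        m_hat = m / (1 - rho1 ** t)                             # Eq. 23
        u_hat = u / (1 - rho2 ** t)                             # Eq. 24
        theta = theta - eps * m_hat / (np.sqrt(u_hat) + delta)  # Eqs. 25-26
    return theta

# Toy problem: minimize f(theta) = ||theta||^2.
theta = adam(lambda th: 2.0 * th, np.array([2.0, -1.5]))
```

The bias corrections in Eqs. 23-24 matter most in the first few steps, when the zero-initialized moving averages would otherwise underestimate both moments.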
AdaMax is proposed as an extension of Adam; it is a variant of Adam based on the infinity norm. In Adam, the update rule for individual weights is to scale their gradients inversely proportionally to the $L^2$ norm of their individual current and past gradients. AdaMax is based on the idea that this $L^2$-norm-based update rule can be generalized to an $L^p$-norm-based update rule.
The AdaMax algorithm begins by calculating the gradient w.r.t. the stochastic objective at time step t, as usual. Then, the biased first moment estimate and the exponentially weighted infinity norm are computed. Using them, the model parameters are updated. These steps are defined below, respectively:
$$g_t \leftarrow \nabla_{\theta} f_t(\theta_{t-1}) \tag{27}$$
$$m_t \leftarrow \rho_1 m_{t-1} + \left(1 - \rho_1\right) g_t \tag{28}$$
$$\gamma_t \leftarrow \max(\rho_2 \gamma_{t-1}, |g_t|) \tag{29}$$
$$\theta_t \leftarrow \theta_{t-1} - \left(\epsilon/\left(1 - \rho_1^t\right)\right) m_t/\gamma_t \tag{30}$$
It is shown that if AdaMax is preferred as the optimization algorithm, there is no need to correct for initialization bias. Besides, the magnitude of parameter updates has a simpler bound with AdaMax than with Adam [13].
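A matching NumPy sketch of the AdaMax steps in Eqs. 27-30 (the toy setup is again an arbitrary illustration):

```python
import numpy as np

def adamax(grad_fn, theta, eps=0.01, rho1=0.9, rho2=0.999, steps=1000):
    """AdaMax updates following Eqs. 27-30."""
    m = np.zeros_like(theta)       # first moment estimate
    gamma = np.zeros_like(theta)   # exponentially weighted infinity norm
    for t in range(1, steps + 1):
        g = grad_fn(theta)                                   # Eq. 27
        m = rho1 * m + (1 - rho1) * g                        # Eq. 28
        gamma = np.maximum(rho2 * gamma, np.abs(g))          # Eq. 29
        theta = theta - (eps / (1 - rho1 ** t)) * m / gamma  # Eq. 30
    return theta

# Toy problem: minimize f(theta) = ||theta||^2 (nonzero start, so gamma > 0).
theta = adamax(lambda th: 2.0 * th, np.array([2.0, -1.5]))
```

Compared with Adam, the second-moment average is replaced by the elementwise max in Eq. 29, which is why no bias correction is needed for it.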
Nadam (Nesterov-accelerated adaptive moment estimation) modifies Adam's momentum component with Nesterov's accelerated gradient. Thus, Nadam aims to improve the speed of convergence and the quality of the learned models [5].
Similar to Adam, after computing the gradient, the first and second moment variables are updated as in Equations 31 and 32. Then, the corrected moments are computed and the parameters are updated as in the following equations:
$$m_t \leftarrow \rho_t m_{t-1} + (1 - \rho_t) g_t \tag{31}$$
$$u_t \leftarrow v u_{t-1} + \left(1 - v\right) g_t^2 \tag{32}$$
$$\hat{m} \leftarrow \left(\rho_{t+1} m_t / \left(1 - \prod_{i=1}^{t+1} \rho_i\right)\right) + \left(\left(1 - \rho_t\right) g_t / \left(1 - \prod_{i=1}^{t} \rho_i\right)\right) \tag{33}$$
$$\hat{u} \leftarrow v u_t / (1 - v^t) \tag{34}$$
$$\theta_t \leftarrow \theta_{t-1} - \frac{\epsilon_t}{\sqrt{\hat{u}_t + \delta}} \hat{m}_t \tag{35}$$
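A NumPy sketch of the Nadam updates in Eqs. 31-35. Note that Eq. 33 involves products over a time-varying momentum schedule $\rho_t$; this sketch assumes a constant schedule $\rho_t = \rho$, so those products reduce to powers of $\rho$ (the toy objective and hyperparameters are arbitrary):

```python
import numpy as np

def nadam(grad_fn, theta, eps=0.01, rho=0.9, v=0.999, delta=1e-8, steps=1000):
    """Nadam updates following Eqs. 31-35 under a constant momentum
    schedule rho_t = rho, so products over rho_i reduce to powers."""
    m = np.zeros_like(theta)
    u = np.zeros_like(theta)
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = rho * m + (1 - rho) * g                           # Eq. 31
        u = v * u + (1 - v) * g * g                           # Eq. 32
        m_hat = (rho * m / (1 - rho ** (t + 1))               # Eq. 33
                 + (1 - rho) * g / (1 - rho ** t))
        u_hat = v * u / (1 - v ** t)                          # Eq. 34
        theta = theta - eps / np.sqrt(u_hat + delta) * m_hat  # Eq. 35
    return theta

# Toy problem: minimize f(theta) = ||theta||^2.
theta = nadam(lambda th: 2.0 * th, np.array([2.0, -1.5]))
```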
Another exponential moving average variant is AMSGrad [23]. The purpose of developing AMSGrad is to guarantee convergence while preserving the benefits of Adam and RMSProp. In the AMSGrad algorithm, the first and second moment variables are updated as in Equations 36 and 37. The key difference between AMSGrad and Adam is shown in Equation 38: AMSGrad maintains the maximum of all $u_t$ up to the present time step and uses this maximum value for normalizing the running average of the gradient, instead of $u_t$ as in Adam. By doing this, AMSGrad results in a non-increasing step size. Finally, the parameters are updated as in Equation 39, where $\hat{U}_t$ denotes $\operatorname{diag}(\hat{u}_t)$.
$$m_t \leftarrow \rho_1^t m_{t-1} + (1 - \rho_1^t) g_t \tag{36}$$
$$u_t \leftarrow \rho_2 u_{t-1} + (1 - \rho_2) g_t^2 \tag{37}$$
$$\hat{u}_t \leftarrow \max(\hat{u}_{t-1}, u_t) \tag{38}$$
$$\theta_{t+1} \leftarrow \Pi_{F,\sqrt{\hat{U}_t}} \left(\theta_t - \epsilon_t m_t / \sqrt{\hat{u}_t}\right) \tag{39}$$
On the other hand, the difference between AMSGrad and both Adam and AdaGrad is that AMSGrad neither increases nor decreases the learning rate and, furthermore, never decreases $\hat{u}_t$; in Adam, a decrease of $u_t$ can potentially lead to a non-decreasing learning rate even if the gradient is large in future iterations [23].
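A NumPy sketch of the AMSGrad updates in Eqs. 36-39. The projection $\Pi$ in Eq. 39 is omitted here, i.e., the sketch covers only the unconstrained case, and a constant $\rho_1$ is used; the toy objective and hyperparameters are arbitrary:

```python
import numpy as np

def amsgrad(grad_fn, theta, eps=0.01, rho1=0.9, rho2=0.999, delta=1e-8, steps=1000):
    """AMSGrad updates following Eqs. 36-39, with a constant rho1 and
    the projection onto the feasible set omitted (unconstrained case)."""
    m = np.zeros_like(theta)
    u = np.zeros_like(theta)
    u_hat = np.zeros_like(theta)                        # running max of u
    for _ in range(steps):
        g = grad_fn(theta)
        m = rho1 * m + (1 - rho1) * g                       # Eq. 36
        u = rho2 * u + (1 - rho2) * g * g                   # Eq. 37
        u_hat = np.maximum(u_hat, u)                        # Eq. 38
        theta = theta - eps * m / (np.sqrt(u_hat) + delta)  # Eq. 39
    return theta

# Toy problem: minimize f(theta) = ||theta||^2.
theta = amsgrad(lambda th: 2.0 * th, np.array([2.0, -1.5]))
```

Because `u_hat` is non-decreasing, the effective per-dimension step size can only shrink over time, which is the property the convergence guarantee relies on.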
2008.04115/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
2008.04115/paper_text/intro_method.md
ADDED
@@ -0,0 +1,151 @@
# Introduction
Recent advancements in Generative Adversarial Networks (GANs) (Choi et al., 2019; Zakharov et al., 2019; Karras et al., 2019b; Shaham et al., 2019) enable the generation of realistic images, which has now become feasible through few-shot or single-shot learning. Some GANs manage to further reduce visible artifacts and patterns, such as blurred object shapes, checkerboard artifacts, semantically strange objects, and unnatural backgrounds. For these reasons, even high-resolution images produced by the latest GANs are hardly distinguishable from real images by human inspection.

Proceedings of the 37<sup>th</sup> International Conference on Machine Learning, Online, PMLR 119, 2020. Copyright 2020 by the author(s).

Figure 1. Sample data used for our experiment. Images inside the green border are real images, while those inside the red border are GAN-images. The left-hand side images in row order are from CelebA, StarGAN, FFHQ, and StyleGAN. The right-hand side images in row order are from PGGAN and StyleGAN2.
A typical way of detecting GAN-images is to train Convolutional Neural Networks (CNNs) and a binary classifier with a large number of images generated from GANs. Some researchers (Marra et al., 2019a; Zhang et al., 2019; Yu et al., 2019) have shown that the detection performance can be improved by analyzing artifacts and patterns in GAN-images. Many of the existing methods have achieved high performance in detecting GAN-images when the model is tested on the same dataset used during the training phase (Tariq et al., 2019; 2018; Jeon et al., 2019). Moreover, this binary classifier can be realized using existing and well-structured CNN architectures (Tan & Le, 2019; Jeon et al., 2020).
<sup>1</sup>Department of Artificial Intelligence, Sungkyunkwan University, Suwon, S. Korea <sup>2</sup>Computer Science and Engineering Department, Sungkyunkwan University, Suwon, S. Korea <sup>3</sup>Department of Applied Data Science, Sungkyunkwan University, Suwon, S. Korea. Correspondence to: Simon S. Woo <swoo@g.skku.edu>.
<span id="page-1-0"></span>

Figure 2. Overview of our T-GD network. For efficient transfer learning, our network uses $L^2$-*Starting Point* (red) and *Self-training* (blue).
However, the above methods are ineffective at improving transfer learning performance. That is, when the CNN classifier is trained on one dataset, it shows poor performance on other datasets. ForensicTransfer [\(Cozzolino et al.,](#page-8-0) [2018\)](#page-8-0) introduced an autoencoder for GAN-image detection: it detects GAN-images through the reconstruction error. This learning method has the advantage of lower data usage when the model is well trained. Although ForensicTransfer showed promise for model transferability, its performance remains mediocre. In previous research, artifacts, patterns, or augmentations were utilized individually for successful transfer learning, yet it is possible to combine them to transfer knowledge of GAN-image detection.
In this paper, our objective is to maintain a high detection performance during transfer learning on the source and target datasets, without suffering *catastrophic forgetting*. While many studies on transfer learning have already shown impressive performance, their methods have not been applied to GAN-image detection. Therefore, in this work, we propose a novel regularization method with self-training for transfer learning, combining and transforming regularization, augmentation, self-training, and learning strategies to improve the transferability of GAN-image detection. In particular, our approach is inspired by the starting point as the reference (SPAR), $L^2$-SP [\(Li et al.,](#page-8-0) [2018\)](#page-8-0), which regularizes the weight variation of a target model by referring to the weights pre-trained on the source dataset. The limitation of the latter method is that it cannot provide an optimal solution for transfer learning: as the regularization strength changes, the control of the regularization can easily be lost. Our approach overcomes this issue through self-training, where the teacher model automatically helps control the strength of regularization while the student model learns from the target dataset.
In addition, we introduce a novel augmentation method to solve the over-fitting problem by transforming Cutmix [\(Yun](#page-9-0) [et al.,](#page-9-0) [2019\)](#page-9-0), which randomly mixes up a rectangular patch of training images. Note that the original Cutmix mixes up inter-class images; our experiment showed that this renders the learning process highly unstable for a binary classifier, and thus we transform the inter-class Cutmix into an intra-class Cutmix to increase the stability of the learning process. Also, our method combines Gaussian blur [\(Xuan](#page-9-0) [et al.,](#page-9-0) [2019\)](#page-9-0) and Joint Photographic Experts Group (JPEG) compression [\(Wang et al.,](#page-9-0) [2019\)](#page-9-0), which were previously studied for GAN-image detection.
For transfer learning, we apply learning strategies such as Weight Standardization [\(Qiao et al.,](#page-8-0) [2019\)](#page-8-0) (WS), Group Normalization [\(Wu & He,](#page-9-0) [2018\)](#page-9-0) (GN), and the tuning of the learning rate and momentum rate [\(Li et al.,](#page-8-0) [2020\)](#page-8-0). WS and GN achieve comparable performance on image classification without depending on batch size statistics. Also, we implement transfer learning using low learning and momentum rates for stochastic gradient descent (SGD), inspired by [\(Li](#page-8-0) [et al.,](#page-8-0) [2020\)](#page-8-0). These strategies have been tested on object detection transfer, but we demonstrate that they also work well for transfer learning across different domains. Finally, we integrate these approaches into one framework, the Transferable GAN-generated image Detection framework (T-GD). Compared to general methods of transfer learning and recent GAN-image detection, we show that T-GD is equipped with robust transferability and achieves high performance.
# Method
The first step of T-GD is to train the pre-trained models, namely the binary classifiers that predict whether an image is GAN-generated or not.
CNN binary classifier. We chose CNN binary classifiers as classifiers for the source dataset. This choice has three advantages: (1) it is easy to reuse pre-trained models, (2) many previously studied CNN architectures can be utilized, and (3) it shows more stable performance in binary classification than other methods such as autoencoders. We pre-train the CNN binary classifier on the source dataset and transfer<span id="page-3-0"></span> (fine-tune) this pre-trained model to the target dataset. For instance, EfficientNet (the CNN classifier) is trained on the PGGAN-dataset (the source) and fine-tuned on the StyleGAN-dataset (the target).
EfficientNet. We implemented EfficientNet-B0 [\(Tan &](#page-9-0) [Le,](#page-9-0) [2019\)](#page-9-0) and used it as the CNN classifier. Although EfficientNet-B0 has the lowest number of parameters (about four million) among the EfficientNets, it performs well in GAN-image detection in our experiment, compared to Inception-V3 [\(Szegedy et al.,](#page-9-0) [2017\)](#page-9-0) and Xception [\(Chollet,](#page-8-0) [2017\)](#page-8-0). Another change we make to the model is the use of WS [\(Qiao et al.,](#page-8-0) [2019\)](#page-8-0) and GN [\(Wu & He,](#page-9-0) [2018\)](#page-9-0) instead of batch normalization (BN) [\(Ioffe & Szegedy,](#page-8-0) [2015\)](#page-8-0), since they are more efficient for transfer learning than batch statistics.
ResNext. We implemented ResNext32×4d [\(Xie et al.,](#page-9-0) [2017\)](#page-9-0) and used it as the CNN classifier, where ResNext32×4d has more parameters (about twenty million) than EfficientNet-B0. We also replace BN with WS and GN.
The next step is transfer learning. The weights of the pre-trained model from the source dataset are used as the SPAR. In particular, we use $L^2$-SP for transfer learning. Regularization can lead to better optimization by preventing over-fitting when learning from scratch; $L^2$-SP differs in that the starting point from a well pre-trained source dataset guides the learning process by referring to the information of the pre-trained source dataset. This method requires neither freezing the weights of the pre-trained model nor using weight decay. Our method regularizes the convolution layers and fully-connected (FC) layers independently.
General form of regularization. Let $w$ be the weight parameters, and $J(\hat{y}_i, y_i)$ be the loss function of the neural networks, where $\hat{y}_i$ is the $i$-th score predicted by the models and $y_i$ is the $i$-th label. $\Omega(w)$ is the $p$-norm function of the weight $w$ as a general form of regularization loss, $f_w(x_i)$ is the neural network function with the $i$-th data point $x_i$, and $n$ is the dataset size. Equation 1 indicates the general form of the loss function with a weight regularization component:
$$\min_{w} \frac{1}{n} \sum_{i=1}^{n} J(\hat{y}_i, y_i) + \lambda \cdot \Omega(w),$$
$$\hat{y}_i = f_w(x_i), \tag{1}$$
where $\lambda$ balances the regularization and the loss function, $J$ is the cross-entropy function, and $\Omega$ is the $L^1$- or $L^2$-norm of the parameter $w$.
$L^2$ regularization. $L^2$ regularization is used in transfer learning to avoid over-fitting and to overcome the forgetting of the learned information, or *catastrophic forgetting*.
$$\Omega_{l2}(w) = ||w||_2^2. \tag{2}$$
Equation 2 is the $\Omega$ function, namely the $L^2$-norm of $w$.
$$\min_{w} \frac{1}{n} \sum_{i=1}^{n} J(\hat{y}_{i}, y_{i}) + \beta \cdot \Omega_{l2}(w_{fc}). \tag{3}$$
In Eq. 3, the first term is the same as in Eq. 1, representing the cross-entropy loss function. The second term is the $\Omega_{l2}$ function, i.e., the $L^2$ regularization term (Eq. 2) of $w_{fc}$, the weights of the FC layers, scaled by $\beta$, which is equivalent to $\lambda$ in Eq. 1. Note that the $L^2$ regularization is applied solely to the FC layers, since over-fitting and forgetting are thereby delayed, but not completely prevented, in the course of learning.
$L^2$-SP. Let $w'$ be the pre-trained weights from the source dataset, as shown in Section 3.1, serving as the starting point (SP) as the reference, as well as a regularization point that provides guidance for transfer learning when fine-tuning. Using the $L^2$-norm, we define $L^2$-SP as follows:
$$\Omega_{sp}(w, w') = \|w_{conv} - w'_{conv}\|_{2}^{2}, \tag{4}$$
where $w_{conv}$ denotes the weights of the convolution layers, excluding those of the FC layers. Equation 4 indicates that $L^2$-SP is a one-to-one mapping between the convolution layers of the source and target models, e.g., the PGGAN-classifier (the source) to the StyleGAN-classifier (the target).
Loss function. We combine Eq. 4, sharing the architecture of the source and target models, with the second term of Eq. 3, accounting for the FC layer (final layer), as follows:
$$\min_{w} \frac{1}{n} \sum_{i=1}^{n} J(\hat{y}_{i}, y_{i}) + \alpha \cdot \Omega_{sp}(w, w') + \beta \cdot \Omega_{l2}(w_{fc}),$$
$$J(\hat{y}_{i}, y_{i}) = -y_{i} \log(\hat{y}_{i}) - (1 - y_{i}) \log(1 - \hat{y}_{i}), \tag{5}$$
where $J$ is the negative log-likelihood loss function, and $\alpha$ and $\beta$ are tunable hyperparameters for which [\(Li et al.,](#page-8-0) [2019\)](#page-8-0) use values in the range from 0.1 to 0.01. The difference from $L^2$-SP is that we transform $\alpha$ and $\beta$ into $\gamma$, a parameter which adjusts itself according to the learning situation. More details about the transformed parameters are provided in Section 3.3.
We transform the transfer learning framework into a self-training framework. In other words, the source/target model is changed into a teacher/student model. In addition to the role of a typical source model, which serves as SPAR and a regularizer to guide the learning process, the teacher model has the role of adjusting the parameters based on the learned target dataset (training loss). That is, the teacher model directly controls $\alpha$ and $\beta$, as shown in Eq. 5.

<span id="page-4-0"></span>

Figure 3. Overview of our self-training method. Note that the numbering of the processes shown in this figure matches that of the ordered processes demonstrated in Alg. 1.
Let $\{(x_1,y_1),(x_2,y_2),...,(x_n,y_n)\}$ be the labeled source dataset and $\{(\tilde{x}_1,\tilde{y}_1),(\tilde{x}_2,\tilde{y}_2),...,(\tilde{x}_m,\tilde{y}_m)\}$ the labeled target dataset. In a typical self-training process, unlabeled data is used to increase the generalizability for single dataset performance, e.g., ImageNet, by learning from extra training data. However, we assume the usage of additional data is highly limited. Hence, we only use labeled data for transfer learning.
$$J(\hat{y}_i, \tilde{y}_i) = -\tilde{y}_i \log(\hat{y}_i) - (1 - \tilde{y}_i) \log(1 - \hat{y}_i),$$
$$\hat{y}_i = f_{w'}^{noised}(\tilde{x}_i^{noised}). \tag{6}$$
In Eq. 6, w' denotes the weights of the teacher model, and $f_{w'}$ denotes the pre-trained models from the source dataset. $J(\hat{y_i}, \tilde{y_i})$ denotes the binary cross-entropy loss, but the input data, $\tilde{x}_i^{noised}$ , is from the target dataset with noise injection.
$$\gamma := s\,\sigma\left(-\frac{1}{m}\sum_{i=1}^{m}J(\hat{y}_{i},\tilde{y}_{i})\right),$$
$$\sigma(x) = 1/(1 + e^{-x}). \tag{7}$$
In Eq. 7, $s$ is a hyperparameter taking values from 0.1 to 2.0, and the $\gamma$ score is obtained through the sigmoid function $\sigma$ to help stabilize training, whose input is the negative mean loss function described in Eq. 6. In the transfer phase, we use the same noised target data for both the teacher and student models. The teacher is evaluated on the data (Eq. 6), and the negative value of the result is transformed by the sigmoid function into $\gamma$ in Eq. 7, where $\gamma$ regulates the intensities of both $L^2$-SP (Eq. 4) and the $L^2$-norm of the FC layer (Eq. 3). An analysis of $\gamma$ and the error amplification of self-training is presented in Supp. A.

Figure 4. Examples of different noisy data augmentation techniques applied to the original input image.
Final loss function. We replace $\alpha$ and $\beta$ with $\gamma$ in Eq. 5 to act as a changeable balancing parameter for regularization as follows:
$$\min_{w} \frac{1}{n} \sum_{i=1}^{n} J(\hat{y}_{i}, y_{i}) + \gamma \cdot \Omega_{sp}(w, w') + \gamma \cdot \Omega_{l2}(w_{fc}). \tag{8}$$
The final loss function, as shown in Eq. 8, is composed of a cross-entropy term and an $L^2$ -SP term for the self-training of the student model. Figure 2 shows an overview of this entire pipeline, and Supp. Alg. 1 presents the detailed algorithm.
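A rough NumPy sketch of the objective in Eq. 8 (array names and toy values are illustrative, not from the paper; in T-GD the regularizers act on the convolution and FC weights of the actual networks):

```python
import numpy as np

def sigmoid(x):
    """Sigmoid of Eq. 7."""
    return 1.0 / (1.0 + np.exp(-x))

def bce(y_hat, y):
    """Binary cross-entropy, as in Eqs. 5 and 6."""
    return float(-np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)))

def tgd_loss(y_hat, y, w_conv, w0_conv, w_fc, teacher_loss, s=1.0):
    """Final objective of Eq. 8: cross-entropy plus gamma-weighted
    L2-SP (Eq. 4) and L2 regularization of the FC weights (Eq. 2)."""
    gamma = s * sigmoid(-teacher_loss)            # Eq. 7, gamma in (0, s)
    omega_sp = np.sum((w_conv - w0_conv) ** 2)    # Eq. 4
    omega_l2 = np.sum(w_fc ** 2)                  # Eq. 2
    return bce(y_hat, y) + gamma * omega_sp + gamma * omega_l2

# Toy call with made-up predictions and weight vectors.
loss = tgd_loss(
    y_hat=np.array([0.9, 0.2]), y=np.array([1.0, 0.0]),
    w_conv=np.array([1.0, 2.0]), w0_conv=np.array([1.1, 1.9]),
    w_fc=np.array([0.5]), teacher_loss=0.3)
```

The sketch makes the self-training coupling explicit: a low teacher loss pushes `gamma` up, tightening the pull toward the pre-trained weights.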
Augmentation and noised model. Noise is injected into both the data and the model. For data augmentation, we use JPEG compression (Wang et al., 2019), Gaussian blur (Xuan et al., 2019), random horizontal flips, and a transformed version of Cutmix (Yun et al., 2019), called intra-class Cutmix. We apply dropout (Srivastava et al., 2014) to the FC layer at a stronger rate than for the pre-trained model. In addition, we apply stochastic depth (Huang et al., 2016), randomly dropping the paths of residual layers, also at a stronger rate than for the pre-trained model.
<span id="page-5-0"></span>
| Method | Category | Zero-shot (Pre-trained model) | | | | Transfer Learning | | | |
|-----------------------|-----------|-------------------------------|---------|----------|-----------|-------------------|---------------|---------------|---------------|
| | Dataset | PGGAN | StarGAN | StyleGAN | StyleGAN2 | PGGAN | StarGAN | StyleGAN | StyleGAN2 |
| GeneralTransfer | PGGAN | 99.91% | 56.81% | 49.47% | 49.32% | 99.86% | 87.06% | 54.17% | 54.18% |
| EfficientNet-B0 | StarGAN | 66.47% | 99.88% | 52.01% | 52.10% | 95.90% | 89.87% | 99.03% | 99.04% |
| (Base model) | StyleGAN | 49.80% | 50.04% | 99.96% | 99.97% | 66.89% | 51.12% | 99.94% | 99.95% |
| | StyleGAN2 | 45.23% | 49.00% | 99.99% | 99.99% | 91.33% | 88.16% | 45.26% | <u>47.37%</u> |
| ForensicTransfer† | PGGAN | 97.15% | 50.27% | 53.57% | 53.27% | 69.35% | 72.40% | 76.50% | 76.50% |
| | StarGAN | 47.09% | 85.34% | 49.51% | 49.48% | 90.14% | 51.32% | 53.14% | 53.14% |
| | StyleGAN | 49.23% | 49.66% | 99.12% | 99.97% | 76.57% | 58.93% | 65.83% | 65.85% |
| | StyleGAN2 | 49.22% | 49.66% | 99.12% | 99.12% | 76.58% | 58.94% | 65.84% | 65.84% |
| T-GD | PGGAN | 99.91% | 56.81% | 49.47% | 49.32% | 95.87% | 91.61% | 98.12% | 98.13% |
| EfficientNet-B0 | StarGAN | 66.47% | 99.88% | 52.01% | 52.10% | 94.94% | <u>97.32%</u> | 97.29% | 93.34% |
| (Base model) | StyleGAN | 49.80% | 50.04% | 99.96% | 99.97% | 84.92% | 90.00% | <u>97.83%</u> | 97.71% |
| | StyleGAN2 | 45.23% | 49.00% | 99.99% | 99.99% | 84.91% | 90.01% | 97.83% | <u>97.71%</u> |
| T-GD | PGGAN | 99.81% | 61.25% | 49.76% | 49.91% | 94.91% | 93.21% | 87.37% | 87.58% |
| ResNext32×4d | StarGAN | 41.43% | 99.78% | 48.37% | 48.50% | 98.88% | 96.15% | 91.48% | 91.26% |
| (Base model) | StyleGAN | 41.05% | 49.16% | 99.99% | 99.99% | 85.93% | 79.69% | 94.31% | 94.31% |
| | StyleGAN2 | 38.90% | 50.31% | 99.90% | 99.88% | 87.20% | 80.19% | 98.39% | 95.38% |

Table 1. Performance results. "†" indicates our implementation. All 4 GAN datasets are evaluated with the 4 baseline models as well as our models. The Dataset column indicates the source dataset of each pre-trained model, and the Dataset row indicates the target test set for transfer learning. The evaluation metric is AUROC (%). The underlined results are the source dataset performance after transfer learning. The best results are highlighted in bold. The Zero-shot category represents the performance of a pre-trained model without any additional training, and the Transfer Learning category represents each pre-trained model transferred from the source to the target dataset.
| Dataset | Train | Validation | Test | Transfer |
|-----------|---------|------------|--------|----------|
| PGGAN | 64,202 | 16,051 | 18,799 | 2,000 |
| StarGAN | 137,239 | 15,260 | 50,000 | 2,000 |
| StyleGAN | 33,739 | 3,900 | 30,000 | 2,000 |
| StyleGAN2 | 42,356 | 3,900 | 30,000 | 2,000 |

*Table 2.* GAN-generated datasets used in our experiment; the Train, Validation, and Test columns refer to the source data, and the Transfer column refers to the target data. We use only 2,000 images for transfer learning.
Intra-class Cutmix algorithm. The pseudo-code of the intra-class Cutmix algorithm is shown in Supp. Algorithm 2. First, the mini-batch data is shuffled, and the indices at which the target label $Y_m$ equals the shuffled target label $Y_m'$, i.e., real to real and GAN-image to GAN-image, are denoted as same\_index in Algorithm 2. If a random variable $\rho$ drawn from a uniform distribution between 0 and 1 is greater than the fixed Cutmix parameter (0.2 when pre-training and 0.5 when transfer learning), then the input $X_m$ and the shuffled input $X_m'$ are mixed by replacing a randomly cropped region of the input $X_m$ with the corresponding region of the shuffled input $X_m'$. Cutmix (Yun et al., 2019) mixes the target label $Y_m$ through interpolation; intra-class Cutmix does not mix the label, because the input data $X_m$ still belongs to the same class after the replacement.
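The intra-class Cutmix step described above can be sketched in NumPy as follows; the grayscale (N, H, W) layout, the patch-size sampling, and all names are illustrative assumptions, with Supp. Algorithm 2 being the authoritative version:

```python
import numpy as np

def intra_class_cutmix(X, Y, cutmix_p=0.5, seed=0):
    """Replace a random rectangular patch of each input with the
    corresponding patch from a shuffled input of the SAME class;
    labels are left untouched. X has shape (N, H, W)."""
    rng = np.random.default_rng(seed)
    X = X.copy()
    perm = rng.permutation(len(X))            # shuffle the mini-batch
    same = np.nonzero(Y == Y[perm])[0]        # same_index: partner shares the class
    if rng.uniform() > cutmix_p:              # apply when rho > Cutmix parameter
        N, H, W = X.shape
        h, w = rng.integers(1, H + 1), rng.integers(1, W + 1)        # patch size
        y0, x0 = rng.integers(0, H - h + 1), rng.integers(0, W - w + 1)
        X[same, y0:y0 + h, x0:x0 + w] = X[perm[same], y0:y0 + h, x0:x0 + w]
    return X, Y                               # labels are not mixed

# Toy batch of four 8x8 "images" with binary real/GAN labels.
X_mix, Y_mix = intra_class_cutmix(
    np.arange(4 * 8 * 8, dtype=float).reshape(4, 8, 8),
    np.array([0, 0, 1, 1]))
```

Since the pasted patch always comes from a sample of the same class, no label interpolation is needed, which is exactly why the method stays stable for a binary classifier.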
|
2009.08257/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="www.draw.io" modified="2020-05-29T13:36:58.648Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36" etag="bn6D-KX113L_nLGZTKeC" version="12.9.12" type="device"><diagram id="BxVAG3pVIGHMpmKLNRZm" name="Page-1">7Z1Ld6M4Fsc/Sy+8dA4Pg/EyTiWdRXq6q9OnazlHBtnWFEZukONkPv1IIPxAymtsIWJu1SJYyMLo99fV1UUSA/9m9fxrjtbL32iC04HnJM8D/9vA81wvcvgfkfIiUxzPr1IWOUlk2j7hkfwX1xll6oYkuDjKyChNGVkfJ8Y0y3DMjtJQntPtcbY5TY+vukYLeUVnn/AYoxQr2X6QhC2r1Cg4yH2PyWJZX9l15JkVqjPLhGKJEro9SPJvB/5NTimrjlbPNzgVtVfXS/W9u1fO7n5YjjP2kS9kztMo/P4yx+E9+n2WbjfLeDT0xxLHE0o38pYHXpjyEqdzygvmv5u9yMoI/9nQ+sSwKFFd8wxuuH7en+RHC/H3+wYXjNCsLo3/sKrA6rSsk13ZHq+etThMEEOPjOYll+l2SRh+XKNYnNpyjfG0JVul/JPLDxNULHFSfyA5V4C4pP+toBtR/dM5SdMbmtK8vIg/D8R/nl6wnP7EB2fC8p+8uYP06p9Ml/J0Rb6C/yiSLeS15acpZYyueJqzT/uLrmWCKOIOrUgqGsM9Tp8wIzGqy5ZVIdXyhHOGn19F7e4ExJsepivM8heeRX5hPJGlyFY3rDW43UvYG8u05YF8fZmGZKtZ7IreC4sfSG19QmdeODKksz+RQC7aKwitdaF5E+8qeFdq/iRoUWqhr5NaQwQ53WTJDug78HeWW9TtIkVFIY+PVBHTFYlrIkfUM5phjUTmcy+ONcCpwMlEfY6djvMPo2ND4zkq/V2PeEh/ZI5+qDM0vL5dp7IaB4elvfhlOHxkKGd33BKs+H0Ph3vTUeXg9Zt92ECJrznViSq/OJPRfIXSg3NPKCeI/03JArFNLjycN/PFaK3JkmLGcD7cKaR5nubrJcrkF70qjcNmQ8Svm1XJMb9lnB+cI7xhCPripFPfUHmG5bywOS+/vlIp64p26WgdXGZL8+T4h+3KGm7x7CfhxYkyq5YylHpRr5ngmJtoYWwP65NkhJH6Lpv54qqNKfnmKUWs+dMTUqxT9FJnTwk/4Tm/kNWa5gxlUgaNPkf6ubv+phLIsWhmOUjmq0umyR0MwiXRPaNBuM14tcZYdB/vmAXe29xmibav0fZR3fNbkgBHyehr+y2u2znHxXXH5xkh6fRZkvzg+OhTEmtqA+FoHmuHPXGEZ/NDUbpfbKzj+k3JKIqJ2hxSu16kCGb2clacJ4xivyzmoHOYJ+bsQsqFAWbhJL2Mgo7ppS7ZhF6mNOWewxP5WYBsTpONP+6abFxzsmFL8D5OjrR2TS+eOb3QOcjlRLm43ZLL7veYkMvV1RXo5cRAfsfMixeqgxvQS3f00jGn13cN9kYwSDpZLlHn9KJOe/mxpGfl2cfgybhjoxrfpJdajWrQSsggmxXrwT7KD3bi/9PPpGPBN390pulxEEwxGpp3OjbcGZm0Ow84Ix+eVgmSeWXA0zXFRAbD/AP/DvRyqonp2Ag5cAJFMO5Zcd5Ftze3Nzqcd+PpZOqpDqxT/hs0Jhh8WeZNGzG2jly1EQ4gN/mQ3w1tM9cEToG5UebWm7mrPooDy262mWumf7XMXB0wAHOjE3vsN3M1tgDIjc7NsY9cXZoEyI3Oq7GPHIZphpE3p8bYR64uQQPkRqe32EeuTt4H5Ebdtw647OoUFWBu1H/rAHM1AgfMjTpw9pnX8TZg3pYH1wHmEIJr2YXrAHM1BAeRdrM+
nPWnK54agwPmZn04+8zVIBwwN+vD2WeuRuGAuVkfzj5zNQxX1jhgN+nG2ceuhuLKBT9A3dxyMevxV81eGADdMPQOjNjUaBxQN03duoHXbHABsZnLfoiu2ZwC/LjLfqKqWUEDtv3c8bjOmXY1HAfQzUK339DVcBwwN9zQrXvuPsTjDCNvbrlgv51rwnGA3OTuC/aRw6y4llt5Byw7zIpruZl3gDmsS227nVsfqtUbob/1lh2cLHBdCzRnS7oQb2q63adO96IQ1bnP80DLOhYg/oMZe5Gc0IbRhlBewd4yXHGnb6PlFUM3eYzfrNNXRJDjFDHydHwFHVL51T8oKXeMqMXTfEbXVAVD+QIz+a2GMHY/4xStQPiu7T7Bvn2A+F3r22VZd/g122UBdMPQ7bt/I4jgtU/dvoFXQ3hA/czUlY3t7Ft4NYoH1E1T74CJh0l1FrDbt/FqMA8GbhfuwsMK17ZdOevWvQ4uQjPvj/seqDE56NEv3n0PICrXR/c9gLhcH933AFa69s/Cw35zPbTvalROYQ4P448aw7sP44OxXgUnPox33VFDPW0/jQ/UWB70ChfvCmgiec7ZF8Nz7FrusygQ+4C8g/Q1UXQadXNg71snrZmM1x7p6cWSVjYvsk66NiomSc+jGMfad/pcbptWtiyyT1ozgc5xzuvXz+dzT0/69TbdAWfvvBsV2SetBuigTZ+BtLI9kX3Spl//8EaDvg5E/P8CG7SyHZF9zG1Mjuuf361sS2IfdBvz4foHunu+WAvbyvWxh1aWONknrQmU7SI70Emfb7GKfdJtbB/XP9utzHqxD7qNHeP6B1oNeFsnXdsY6KYvvU2P25jJBm26C6S1wTGImlwiapixZh675xxjtz55aQwrSdunbn/60hiWklrAbn1OyhjWkvbRxsNS0l7aeHhDQy9tvBpw+xdeIEZoBuwv29DXVztA/8hvhqywnNgM7C/X3EdqUO73PME5gL9wgx+pMTqIxpogbT1EF7URogPSXSCthuUe/3w4K+kkwFEy0pG+4LB7B0nDexkMQ+/ci5YiNQoHC4Yv/FX3ESwBbZu5fce8hRWgfZwQ070lB/XUeLDoxlYfdM6iT2CL/baZW7foEzXU0ub89ItdFaqQtm/RYTZUC5PguvZS+wnMhmqfege6cpgNZQG7/d4cZkP10cbDbKhe2niYDdVLGw8vyuybhQ9rWwPRuP7Y99CBcFzvrHvoqPE4mBBjgrTteFzowNSnvpA2vYV+Hx+ZdpGz6pkPvDBl8u6PeIf/bGh9YliULK55Bjdc8/ub7s/zo4X4+2NJ67L4j6uKq86cVUkJwtFcq6QwjvBs/pV7++aWQZ5tvdSPiEzoZfYCcjnvXs325aIzLw2Y4o55daQPaIbTP2hBykVS/reZrNNpneE6JQtxgtEGYc5rLQpbPS84/eXVDBUkvqIztinwv1lOULYoX/SQiGpyrsZilHAsoR1xgWuRoqKQx0eDhpiuSFxDPlJPRjOs6ePqh8hUKIIJKOOyUJLjWN7lFhdMo7Kv2L0puwprRia+r9FfZEx/tdxMmKvvG46uXM9XFTjLTyruRqiOzDmaw0Lfs4ScFjtuDDnmF0MzSdWfiteVFFJZ4qNsRDGHinNN61qRJClby1q8iaMkEkwHwbeGvKXitc3g+IUqn29br4m/0Yy+kmUO670edpZZbRuuq2kbI3NtQx3MgW3W2uaUrO9f8Qa+op0eh007HXzMTtcaNqDFD7zIqW0tghCNCzF632EYa3QYmLOJ6hwU6zq8cvwOSvEC5PeRma3tym+kG1035Heso70KduSdT3l5TS2iPKfbwrva/LXJs2vxQZQtasgt3UeRco+RuJCoW65RT2ssD0X5WR1OJ7fOdSAKyVFC8F5ZdfYPiu9jFvXvWncH1rXLsnXVd2+p/bdOt7vHgwaEq/MlzzTOeqALwschEBo676Jbc7Eh/jGnAuHu3K/CtPxGEyxy/A8=</diagram></mxfile>
2009.08257/paper_text/intro_method.md
ADDED
@@ -0,0 +1,180 @@
# Introduction

It has recently been recognized in the research community that neural network models generally do not exploit the compositionality of language, often relying on superficial features instead[^1]. Compositionality refers to the fact that linguistic constituents combine hierarchically into phrases to compose meaning. Contextualized word embeddings (BERT, [@devlin2018bert]; RoBERTa, [@liu2019roberta]; DistilBERT, [@sanh2019distilbert]; etc.) can be expected to be limited in their ability to learn such complex aspects of language, since the models are usually trained with cloze-filling and next-sentence-prediction objectives and are not directly exposed to semantic relations between phrases.

While larger models yield higher performance, they still lack generalization ability [@talmor2019olmpics] and are computationally expensive, which has led to an increasing interest in reducing model size. For instance, DistilBERT is built by applying knowledge distillation [@bucila2006model; @hinton2015distilling] to BERT, which yields a lighter and faster model that loses little performance on the majority of the tested tasks. At the same time, state-of-the-art models use vast amounts of training data: 16GB of text for BERT and ten times more for RoBERTa.

This study tackles the question of what type of linguistic knowledge is missing from contextualized word embeddings, comparing models on the basis of their training set size (BERT vs. RoBERTa) as well as their model size (BERT vs. DistilBERT).

The tasks of machine reading comprehension (MRC) and dialogue are particularly fitting for this purpose, since they require a system to interpret language within a context and to perform semantic and pragmatic inference between sentences. The task in the **Co**nversational **Q**uestion **A**nswering dataset [CoQA; @reddy-etal-2019-coqa] is MRC combined with dialogue: the input to the system is a context document and a dialogue of questions and answers about that text, which lead up to the question that the system is required to answer. An example from CoQA follows.

**Background:** \[\...\] At the time, the name did [not]{.underline} *describe* a single political entity [or]{.underline} *a distinct population of people* \[\...\]\
**Question n-1:** Did the name describe a political body?\
**Answer n-1:** No\
**Question n:** Did it *describe a people group*?\
**Answer n:** No

In order to answer question $n$ without relying on superficial features, one needs to interpret the logical operators "*and*" and "*or*" (underlined) and their scopes, as well as determine that the italicized phrases are synonymous. This study tackles such cases with linguistically enhanced models. Our assumption in this paper is that if a model performs poorly on the classes of question-answer ([qa]{.smallcaps}) pairs that require certain linguistic knowledge [x]{.smallcaps} (e.g. negation, disjunction and synonymy in the example above) for their solution, and if its performance improves when it explicitly learns [x]{.smallcaps} (e.g. through a multitask setting with an auxiliary linguistic task), this can be considered evidence for the original model's lack of linguistic representations of [x]{.smallcaps}.

# Method

The CoQA dataset[^2] is used as a case study in this paper. It covers several domains and amounts to 127,000+ samples, each consisting of a story, a [qa]{.smallcaps} pair and the dialogue history. The answers to the questions are based on the context document, but they can be paraphrases of it. The training data also contains *rationales*: the spans of the background text containing both the answer and the context required to determine the answer. The test set is composed of the Reddit and Science domains, while the rest of the domains are split between train, development and test (see Table [1](#tab:coqa){reference-type="ref" reference="tab:coqa"}). Covering various domains makes CoQA diverse with regard to style and content, and the addition of the dialogue history makes the dataset interesting in that it combines different language modes: a written paragraph and a conversation. Such diversity allows for a robust analysis of linguistic relations, since it gives access to negation in questions as well as statements, fictional settings with unusually flipped semantic roles, counting of abstract or concrete objects, etc. The state-of-the-art models on this dataset [@ju2019technical] use RoBERTa, while the dataset has not received much attention with smaller or distilled models such as DistilBERT.

:::: center
::: {#tab:coqa}
| Domain | #Passages | #[qa]{.smallcaps} pairs | Passage length | #Turns per passage |
|---|---|---|---|---|
| Children's Stories | 750 | 10.5k | 211 | 14.0 |
| Literature | 1,815 | 25.5k | 284 | 15.6 |
| School exams | 1,911 | 28.6k | 306 | 15.0 |
| News | 1,902 | 28.7k | 268 | 15.1 |
| Wikipedia | 1,821 | 28.0k | 245 | 15.4 |
| Reddit | 100 | 1.7k | 361 | 16.6 |
| Science | 100 | 1.5k | 251 | 15.3 |
| Total | 8,399 | 127k | 271 | 15.2 |

: CoQA dataset details [@reddy-etal-2019-coqa].
:::
::::

The input to the model is a concatenation of the background story, the latest dialogue history of 64 tokens, and the current question. The length of the input is limited to 512 tokens. We build the baseline RoBERTa, BERT and DistilBERT base models for CoQA as extractive models, within the framework of @Wolf2019HuggingFacesTS and following @2019arXiv190311848W, who produce the highest results with a BERT-based extractive model on CoQA. An extractive model does not generate the answer as an abstractive model would, but selects the span in the document that best matches the gold answer. In order to train our extractive models, the substrings of the rationales which are most similar to the gold answers (as measured by F1) are selected as the training labels.
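The F1-based label selection can be sketched as follows. This is an illustrative reimplementation, not the authors' code: it scores every rationale substring (up to an assumed maximum span length) by token-level F1 against the gold answer and keeps the best-scoring span as the training label.

```python
from collections import Counter

def token_f1(pred_tokens, gold_tokens):
    """Token-level F1 between two token lists, as in standard QA evaluation."""
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def best_span_label(rationale, gold_answer, max_len=30):
    """Select the rationale token span with the highest F1 vs. the gold answer."""
    tokens = rationale.split()
    gold = gold_answer.split()
    best, best_f1 = (0, 0), -1.0
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + 1 + max_len, len(tokens) + 1)):
            f1 = token_f1(tokens[i:j], gold)
            if f1 > best_f1:
                best, best_f1 = (i, j), f1
    return " ".join(tokens[best[0]:best[1]]), best_f1
```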
Following standard practice, a linear classifier head with ReLU activation is added on top of BERT, which classifies every token in the input sequence as the start or end of the answer span. Another linear classifier predicts whether each token in the input falls within the rationale span or outside of it. Finally, one more classifier predicts whether the example is [freeform]{.smallcaps}, has a [yes/no]{.smallcaps} answer, or is [unanswerable]{.smallcaps}. The [yes/no/unanswerable]{.smallcaps} answers are used instead of the predicted span if the model predicts one of the latter classes with higher confidence than the start and end tokens of the answer. Models are trained for 4 epochs (taking a few hours on a single GeForce GTX 1080 Ti GPU) with a learning rate of $3\mathrm{e}{-5}$ and the AdamW optimizer [@loshchilov2017decoupled].
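The override rule can be sketched as below. The score names and their scale are assumptions; the text only states that a [yes/no/unanswerable]{.smallcaps} prediction wins when its confidence exceeds that of the start and end tokens.

```python
def resolve_answer(span_text, start_score, end_score, type_scores):
    """Override the extracted span with a class answer (yes/no/unanswerable)
    when the answer-type classifier is more confident than the span scores.
    `type_scores` maps answer types to confidences (hypothetical names)."""
    span_conf = min(start_score, end_score)
    best_type = max(type_scores, key=type_scores.get)
    if best_type != "freeform" and type_scores[best_type] > span_conf:
        return best_type
    return span_text
```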
:::: table*
::: center
| class | overall | [num]{.smallcaps} | [1-5]{.smallcaps} | [neg]{.smallcaps} | [yes]{.smallcaps} | [no]{.smallcaps} | [sent]{.smallcaps} | [ant]{.smallcaps} | [ord]{.smallcaps} | [srl-]{.smallcaps} | [srl+]{.smallcaps} | [hum]{.smallcaps} | [loc]{.smallcaps} | [ent]{.smallcaps} | [surp]{.smallcaps} |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| size | 7983 | 972 | 128 | 436 | 790 | 682 | 2443 | 185 | 6817 | 3175 | 1242 | 1418 | 624 | 1089 | 433 |
| **RoBERTa** | 81.2 | 76.4 | 40.1 | 72.9 | 89.5 | 85.9 | 83.0 | 82.3 | 81.8 | 80.7 | 83.5 | 83.2 | 83.6 | 83.1 | 78.6 |
| **BERT** | 76.9 | 77.2 | 41.9 | 68.9 | 85.3 | 77.3 | 79.0 | 74.8 | 77.2 | 76.4 | 78.5 | 77.4 | 80.8 | 79.4 | 74.1 |
| **DistilBERT** | 66.6 | 69.6 | 36.8 | 56.7 | 82.1 | 71.4 | 69.0 | 71.3 | 67.2 | 65.8 | 70.0 | 64.2 | 65.4 | 70.0 | 58.9 |
:::
::::

On the development set[^3], the RoBERTa model reaches 81.2 F1[^4], BERT scores 76.9 F1 and falls two points short of the @2019arXiv190311848W implementation, and DistilBERT scores 66.6 F1, which establishes a baseline, as this is the first work using DistilBERT on CoQA (see Table [\[tab:baseresults\]](#tab:baseresults){reference-type="ref" reference="tab:baseresults"}).
To determine what types of linguistic inference are the hardest for the baseline models, several potentially difficult [qa]{.smallcaps} classes are analyzed. They are defined based on the findings of previous research as well as on a qualitative evaluation of the errors made by the BERT model. A quantitative evaluation of how the baseline models perform on each class is then carried out. There is ample variation in how the models score on the various example classes (see Table [\[tab:baseresults\]](#tab:baseresults){reference-type="ref" reference="tab:baseresults"}). Nonetheless, a noticeable trend emerges of the three models failing on similar classes, with DistilBERT lagging behind BERT in most classes, by up to 15 F1 points in some, and RoBERTa beating BERT by a smaller margin.

The first expected source of error for the baseline models is the inability to count listed phrases. Since the models are extractive, counting cannot fall within their capabilities. In a rationale listing "*a poor man Ti, his son Dicky and their alien dog CJ7*", for example, the models cannot chunk the text into noun phrases and then count the chunks in order to answer '*three*' to the question of how many characters there are. While model performance is satisfactory on a wide range of questions with numerical answers ([num]{.smallcaps}), the models fail consistently on questions whose answers are integers between 1 and 5 ([1-5]{.smallcaps}). The [num]{.smallcaps} class is defined using a state-of-the-art rule-based question classification system [@madabushi2016high], which evaluates each [qa]{.smallcaps} pair based on the question alone. The contrast between the scores on the two classes can be explained by the fact that while extracting numerical answers such as dates is easy for the models, they struggle with counting linguistic objects, whose counts are usually low-value integers.

The second expected problematic area is negation. The example below illustrates two ways in which a model can fail in the face of negation cues.

**Rationale**: Something looked like a bird's belly \[\...\] it was not a bird's belly \[\...\] a bottle floated there

**Question**: What looked like a bird's belly?

**Answer**: A bottle

**Wrong answer 1**: A bird's belly

**Wrong answer 2**: Not a bird's belly

The most general type of error is neglecting the negation cue altogether and answering the question with **Wrong answer 1**. This reflects the model's inability to determine that the noun phrase "*a bird's belly*" falls under the scope of the negation cue "*not*". **Wrong answer 1** would be the correct answer if the phrase were not negated. The second and more rarely observed type of error reflects a lack of pragmatic rather than semantic or syntactic knowledge. With **Wrong answer 2** the model could be argued to have answered correctly, as it is technically true that what looked like a bird's belly was not a bird's belly. However, assuming Grice's maxim of quantity, which states that one should be as informative as required [@grice1989studies], the answer is not satisfactory: **Wrong answer 2** is not informative at all, as it is already implied by the question. We define the [neg]{.smallcaps} [qa]{.smallcaps} class as containing answers that are embedded under negation. To recognize such answers, negation cues and their scopes are detected with a BERT-based model following @khandelwal2019negbert, trained on the Sherlock dataset [@morante2012sem]. We reproduce the results on that dataset before using the model to detect negated spans in the background documents of CoQA and find the [neg]{.smallcaps}-type answers. Our baseline models perform worse on the [neg]{.smallcaps} class than overall, and score much higher on questions with [yes]{.smallcaps} answers than on those with [no]{.smallcaps} answers, which suggests that the models do not interpret negation correctly. The effect of negation on performance is particularly stark in the case of DistilBERT.
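Given the scopes produced by the negation detector, membership in the [neg]{.smallcaps} class reduces to an overlap test between the answer span and the detected scopes. A minimal sketch, assuming character-offset spans:

```python
def in_negation_scope(answer_start, answer_end, neg_scopes):
    """Flag a QA pair as class NEG when the answer span (character offsets)
    overlaps any detected negation scope. `neg_scopes` is a list of
    (start, end) offsets produced by a negation-scope detector."""
    return any(s < answer_end and answer_start < e for s, e in neg_scopes)
```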
Furthermore, the [ant]{.smallcaps} class is composed of examples in which the rationale contains antonyms (according to WordNet [@wordnet]) of the words in the question. Explicit negation is not necessarily involved here; rather, this [qa]{.smallcaps} class tests the model's ability to reason over semantic polarity. Our baseline results on class [ant]{.smallcaps} are in line with previous conclusions that BERT is not good at representing antonymy, as it scores lower on this class than overall. Interestingly, however, both DistilBERT and RoBERTa perform better on this subclass of questions than overall. We conjecture that lexical semantics is BERT's strongest suit, so DistilBERT likely retains most of the lexical information, such as antonymy, through the process of distillation, while RoBERTa learns more about lexical features such as antonymy from its much larger training set.

In addition, [sent]{.smallcaps} is a [qa]{.smallcaps} class in which the sentiment of the sentence containing the rationale differs from the sentiment of the question. The class items are determined by sentence splitting [@spacy2] and sentence-level sentiment classification [@Wolf2019HuggingFacesTS]. This class is intended to capture examples where the polarity between the question and the answer is expressed not only by negation or antonymy but also by other means, for example pragmatics. However, a qualitative analysis of the examples in the [sent]{.smallcaps} class shows that the examples with contradictory sentiments between question and answer mostly do not require determining the sentiment in order to answer the question correctly. For instance, the question "*How much later did he get his next job?*" has a slightly negative connotation of a long job hunt. In contrast, the rationale takes a positive outlook: "*Nearly four years later, as Obama seeks reelection, Casillas has finally landed his first full-time job, emerging out of the group known as the long-term unemployed*". The answer is "*four years*", regardless of whether that is considered too long. Accordingly, none of the baseline models struggle to answer [sent]{.smallcaps} questions.

Moreover, the order of questions in the CoQA dataset follows the natural order of the text, in that later questions generally refer to information presented towards the end of the background story. Hence, the one answering the questions ought to make inferences about what has already been discussed and where in the story they are when a given question is posed. In some cases this knowledge can be crucial for reaching the correct answer. For instance, if the story describes how "*Hans had made his way back into West Germany on foot*" and then asks whether he was in East Germany or West, one has to determine whether the question refers to the time prior to the journey or after it. In this case the answer is East Germany, even though that part of the country is never mentioned in the text, which makes the example very challenging with regard to pragmatic inference. In order to evaluate how our models perform with regard to following the dialogue flow, they are evaluated on items which do in fact follow the order of the document, so that the answer to question $n$ occurs in the text after the answer to question $n-1$ ([ord]{.smallcaps}). It appears that all baseline models are able to infer this order to some extent and perform better on such questions than on those that jump back to previous passages in the text.
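The [ord]{.smallcaps} test itself is a simple monotonicity check over answer positions; a sketch, assuming character offsets of the gold answers in dialogue order:

```python
def is_ord(answer_positions):
    """Class ORD: each answer occurs in the document after the answer to the
    previous question. `answer_positions` lists character offsets of the gold
    answers in dialogue order."""
    return all(b > a for a, b in zip(answer_positions, answer_positions[1:]))
```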
Furthermore, examples are classified according to whether the order of the semantic roles mentioned in the question is the same as ([srl+]{.smallcaps}) or different from ([srl-]{.smallcaps}) the semantic role order in the sentence containing the rationale. SRL is performed with an AllenNLP [@shi2019simple] model. To illustrate, Figure [1](#fig:flow){reference-type="ref" reference="fig:flow"} shows an example where the roles of agent (Arg0) and patient (Arg1) are reversed in the question by means of the passive voice. All three models fail on such examples, scoring lower on the [srl-]{.smallcaps} class than overall or on [srl+]{.smallcaps}. These results show that the models find the correct answer more often when they can rely on word order, avoiding the need to reason over semantic roles.
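One way to implement the [srl+]{.smallcaps}/[srl-]{.smallcaps} split is to compare the order of the roles shared between the question and the rationale sentence. This sketch assumes role label sequences from an SRL tagger and ignores duplicated roles:

```python
def same_role_order(question_roles, rationale_roles):
    """Class SRL+ when the roles shared between the question and the rationale
    sentence appear in the same order in both; SRL- otherwise. Role labels are
    assumed to come from an SRL tagger (e.g. ARG0, V, ARG1)."""
    shared = [r for r in question_roles if r in rationale_roles]
    filtered = [r for r in rationale_roles if r in question_roles]
    return shared == filtered
```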
Finally, some of the observed errors stem from the model choosing prominent entities as answers regardless of their actual relation to the question at hand. For instance, one document tells a fictional children's story in which foods and utensils are anthropomorphised, describing how "*cereal is winning the race in a bowl of milk*", and the question is "*who is a good swimmer?*". Instead of answering "*cereal*", the baseline BERT model chooses a human entity that is mentioned by name at the beginning of the text. In contrast, if "*cereal*" is substituted with a common name such as "*Mark*" in the background document, the model correctly chooses it as the answer. This suggests that the model relies on lexical semantics and on biases about the types of entities denoted by nouns more than it analyzes the semantic relations in the relevant sentence. Therefore, we define a [qa]{.smallcaps} class where the rationale contains entities that have high entropy and are thus surprising given the rest of the sentence [@hale2001probabilistic; @levy2008expectation; @smith2008optimal], like "*cereal*" in the above example. In order to detect such entities, proper nouns (as tagged by spaCy [@spacy2]) are masked and BERT is used to evaluate the likelihood of the original word being the filler for the mask. Words that fall below the likelihood threshold of $5\mathrm{e}{-5}$ are deemed surprising entities[^5]. All three models perform worse on questions about surprising entities ([surp]{.smallcaps}) than overall, with DistilBERT exhibiting the largest margin.
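With the masked-token probabilities computed separately (e.g. by running BERT as a masked LM over each sentence), the thresholding step is a simple filter. A sketch, where `mask_fill_probs` is an assumed precomputed mapping from each proper noun to the probability BERT assigns to it in its masked slot:

```python
SURPRISAL_THRESHOLD = 5e-5  # likelihood threshold used in the paper

def surprising_entities(proper_nouns, mask_fill_probs, threshold=SURPRISAL_THRESHOLD):
    """Flag proper nouns whose masked-LM probability falls below the threshold.
    Unseen words default to probability 0.0, i.e. maximally surprising."""
    return [w for w in proper_nouns if mask_fill_probs.get(w, 0.0) < threshold]
```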
Moreover, the classes of human ([hum]{.smallcaps}), location ([loc]{.smallcaps}) and general entities ([ent]{.smallcaps}), as classified by @madabushi2016high, test the models' ability to answer questions about entity roles. RoBERTa's and BERT's performance on these classes is higher than their overall performance. DistilBERT, on the other hand, fails on [hum]{.smallcaps} and [loc]{.smallcaps} entity questions more than on other [qa]{.smallcaps} types. For many [hum]{.smallcaps} and [loc]{.smallcaps} questions there are multiple entities in the text that fit the entity type. Together with the results on the [surp]{.smallcaps} class, this indicates that DistilBERT relies on entity type more than the larger models do.

The baseline results on the various classes corroborate most of the findings of previous research on BERT's shortcomings. Moreover, they show that DistilBERT mostly repeats the same mistakes, often more gravely, except in some cases of lexical semantics. DistilBERT appears to lose more of BERT's already limited representations of the formal aspects of language and to have stronger biases. Finally, RoBERTa also exhibits a lack of compositional reasoning ability and reaches the highest scores on the more lexical [qa]{.smallcaps} types.

<figure id="fig:flow" data-latex-placement="t!">
<div class="minipage">
<img src="draw2.png" style="width:16cm" />
</div>
<figcaption>Illustration of the order of semantic roles in a CoQA example.</figcaption>
</figure>

The methods for defining the [qa]{.smallcaps} classes are also used as sources of linguistic knowledge that is incorporated into the baseline models to enhance their performance on the respective classes. Firstly, besides the existing [freeform]{.smallcaps}, [yes/no]{.smallcaps} and [unknown]{.smallcaps} classifiers, five additional classifiers for the integer answers between 1 and 5 are defined within the model, since it would be impossible for the models to answer counting questions extractively. This results in the *base#* model[^6]. Then, four additional enhanced multitask models are built in order to tackle the issues observed above.

For every enhanced model (*negation#*, *order#*, *sentiment#*, *srl#*), the training data is tagged with annotations of the relevant linguistic information that was also used for defining the problematic classes. For *negation#*, tokens are labelled as under the scope of negation (1) or not (0); for *order#*, as occurring after the answer to question $n-1$ (1) or not (0); for *sentiment#*, as part of a sentence with negative sentiment (1) or not (0); for *srl#*, a multi-label setup is used where every token is labelled as either taking a particular semantic role (1) or not (0). Each of these sets of labels is then used as an additional training objective for the model, whose loss is added to the main loss.
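A minimal sketch of the combined objective. The equal weighting is an assumption; the text states only that the auxiliary loss is added to the main span-extraction loss:

```python
def multitask_loss(main_loss, aux_losses, aux_weight=1.0):
    """Multitask objective sketch: the auxiliary token-labelling losses are
    added to the main span-extraction loss. `aux_weight` is a hypothetical
    knob; the paper does not specify a weighting scheme."""
    return main_loss + aux_weight * sum(aux_losses)
```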
In addition to the multitask approach, other architectures were explored for incorporating the information from the four knowledge sources. These include supplying the information as an additional feature added or concatenated either to the BERT inputs themselves or to the BERT outputs. However, experiments with these methods showed no considerable increase in model performance, so the multitask approach was adopted for the enhanced models. The multitask approach is also beneficial in that the model can be applied to other test sets without the overhead of extracting the linguistic knowledge for the new set.

One more enhanced model is produced by augmenting the training data by means of surprising-word substitution (*surprisal#*). Supplementary data samples are produced by substituting surprising entities in the CoQA training set with entities that, according to BERT, would be very likely to take their place. In order to ensure that the sentence structure is not affected and that an entity is substituted with another entity, a substitution is only accepted if the new word is also tagged as a proper noun in the newly produced sentence. This procedure yields 5,880 additional training samples. These items are added on top of the training set, rather than replacing the surprising examples, in order to provide rare entities with better contexts instead of ignoring them. Many models in NLP suffer from strong social biases, and this approach attempts to level the playing field for rare entities by introducing them in the same contexts as common entities. Such a method could potentially also be applied to larger datasets and at earlier stages such as pretraining.

Finally, the enhanced models are combined into an *ensemble* intended to unite the strongest points of each model. In order to use the specialized knowledge of each model where it is relevant, the ensemble selects, for each prediction, the answer of the model with the highest confidence.
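The selection rule can be sketched as follows, assuming each model reports a single confidence per prediction:

```python
def ensemble_predict(model_outputs):
    """Ensemble rule: for each example, keep the answer of the specialized
    model with the highest prediction confidence.
    `model_outputs` maps model names to (answer, confidence) pairs."""
    best_model = max(model_outputs, key=lambda m: model_outputs[m][1])
    return model_outputs[best_model][0]
```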
:::: table*
::: center
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
| class | overall | [num]{.smallcaps} | [1-5]{.smallcaps} | [neg]{.smallcaps} | [yes]{.smallcaps} | [no]{.smallcaps} | [sent]{.smallcaps} | [ant]{.smallcaps} | [ord]{.smallcaps} | [srl-]{.smallcaps} | [srl+]{.smallcaps} | [hum]{.smallcaps} | [loc]{.smallcaps} | [ent]{.smallcaps} | [surp]{.smallcaps} |
+:=============+:===========+:==================+:==================+:==================+:==================+:=================+:===================+:==================+:==================+:===================+:===================+:==================+:==================+:==================+:===================+
| model | RoBERTa |
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
| *base* | 81.2 | 76.4 | 40.1 | 72.9 | 89.5 | 85.9 | 83.0 | 82.3 | 81.8 | 80.7 | 83.5 | 83.2 | 83.6 | 83.1 | 78.6 |
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
| *base#* | 82.1 | 81.4 | 68.8 | 73.5 | 89.6 | 86.3 | 83.8 | 82.4 | 82.5 | 81.6 | 83.9 | 83.0 | 85.0 | 84.2 | 80.5 |
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
| *negation#* | 81.7 | 81.2 | 69.4 | 73.0 | 89.5 | 85.0 | 83.9 | 86.0 | 82.2 | 81.0 | 83.2 | 82.4 | 83.1 | 83.2 | 79.8 |
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
| *sentiment#* | 81.7 | 81.2 | 71.5 | 71.8 | 90.3 | 84.1 | 83.6 | 83.5 | 82.1 | 81.3 | 83.2 | 82.1 | 85.4 | 83.2 | 80.7 |
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
| *order#* | 82.3 | 82.7 | 76.0 | 74.3 | 90.0 | 85.1 | 84.2 | 85.8 | 82.8 | 82.3 | 84.4 | 83.6 | 85.4 | 83.8 | 80.3 |
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
| *srl#* | 82.2 | 82.6 | 73.3 | 72.9 | 88.6 | 85.7 | 84.5 | 83.5 | 82.6 | 81.6 | 83.6 | 83.4 | 85.2 | 83.6 | 80.4 |
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
| *surprisal#* | 82.1 | 81.7 | 73.2 | 72.7 | 89.3 | 86.0 | 83.6 | 84.5 | 82.4 | 81.8 | 83.9 | 82.3 | 84.2 | 83.7 | 79.4 |
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
| *ensemble* | 83.9 | 82.7 | 82.2 | 81.7 | 81.7 | 82.1 | 83.0 | 82.9 | 82.6 | 82.1 | 82.3 | 82.4 | 83.0 | 82.6 | 83.3 |
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
| model | BERT |
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
| *base* | 76.9 | 77.2 | 41.9 | 68.9 | 85.3 | 77.3 | 79.0 | 74.8 | 77.2 | 76.4 | 78.5 | 77.4 | 80.8 | 79.4 | 74.1 |
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
| *base#* | 76.9 | 79.6 | 63.5 | 69.7 | 86.3 | 80.2 | 79.5 | 79.6 | 77.4 | 75.7 | 79.6 | 77.1 | 79.6 | 77.9 | 72.6 |
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
| *negation#* | 77.0 | 79.5 | 63.2 | 79.7 | 84.5 | 81.0 | 79.0 | 79.7 | 77.4 | 76.1 | 79.5 | 76.3 | 81.0 | 78.4 | 76.5 |
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 139 |
+
| *sentiment#* | 75.2 | 76.9 | 45.6 | 66.2 | 81.4 | 79.1 | 76.7 | 76.2 | 75.6 | 75.1 | 76.8 | 74.8 | 78.9 | 76.9 | 71.6 |
|
| 140 |
+
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 141 |
+
| *order#* | 76.3 | 76.6 | 45.2 | 69.0 | 83.2 | 81.1 | 78.0 | 78.3 | 76.9 | 75.9 | 78.9 | 77.3 | 80.4 | 78.5 | 72.8 |
|
| 142 |
+
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 143 |
+
| *srl#* | 77.2 | 80.8 | 67.0 | 68.5 | 84.4 | 81.2 | 79.5 | 86.1 | 77.5 | 78.6 | 75.8 | 76.0 | 81.5 | 79.2 | 74.6 |
|
| 144 |
+
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 145 |
+
| *surprisal#* | 76.7 | 80.5 | 70.4 | 68.5 | 84.1 | 81.6 | 78.6 | 82.7 | 77.0 | 75.9 | 78.8 | 76.2 | 78.8 | 77.1 | 73.7 |
|
| 146 |
+
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 147 |
+
| *ensemble* | 79.2 | 80.8 | 62.2 | 70.8 | 86.2 | 82.8 | 81.1 | 81.5 | 79.6 | 78.7 | 81.3 | 79.8 | 82.7 | 80.7 | 76.2 |
|
| 148 |
+
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 149 |
+
| model | DistilBERT |
|
| 150 |
+
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 151 |
+
| *base* | 66.6 | 69.6 | 36.8 | 56.7 | 82.1 | 71.4 | 69.0 | 71.3 | 67.2 | 65.8 | 70.0 | 64.2 | 65.4 | 70.0 | 58.9 |
|
| 152 |
+
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 153 |
+
| *base#* | 66.9 | 69.5 | 37.0 | 58.0 | 82.8 | 72.4 | 69.5 | 70.9 | 67.4 | 65.9 | 69.7 | 65.0 | 66.2 | 70.2 | 60.0 |
|
| 154 |
+
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 155 |
+
| *negation#* | 65.3 | 70.1 | 45.4 | 58.3 | 80.2 | 73.2 | 68.4 | 68.9 | 66.0 | 64.6 | 69.6 | 62.2 | 63.3 | 67.2 | 58.7 |
|
| 156 |
+
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 157 |
+
| *sentiment#* | 64.9 | 68.6 | 35.4 | 54.9 | 80.7 | 72.4 | 68.1 | 66.8 | 65.4 | 64.6 | 67.5 | 62.0 | 61.5 | 67.6 | 60.9 |
|
| 158 |
+
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 159 |
+
| *order#* | 64.5 | 69.3 | 34.9 | 53.5 | 81.4 | 72.5 | 67.1 | 69.2 | 65.3 | 64.3 | 66.8 | 60.2 | 62.3 | 66.5 | 59.9 |
|
| 160 |
+
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 161 |
+
| *srl#* | 65.8 | 69.0 | 39.1 | 56.4 | 83.4 | 72.0 | 68.4 | 72.2 | 66.5 | 65.3 | 68.1 | 62.9 | 62.8 | 68.7 | 57.9 |
|
| 162 |
+
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 163 |
+
| *surprisal#* | 66.8 | 70.3 | 35.6 | 59.2 | 82.9 | 72.0 | 69.0 | 72.0 | 67.1 | 65.9 | 69.2 | 63.8 | 65.8 | 70.2 | 60.4 |
|
| 164 |
+
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 165 |
+
| *ensemble* | 68.8 | 65.7 | 65.8 | 65.3 | 64.9 | 66.8 | 67.4 | 66.1 | 65.7 | 66.9 | 64.5 | 67.2 | 67.8 | 67.3 | 67.5 |
|
| 166 |
+
+--------------+------------+-------------------+-------------------+-------------------+-------------------+------------------+--------------------+-------------------+-------------------+--------------------+--------------------+-------------------+-------------------+-------------------+--------------------+
|
| 167 |
+
:::
::::

The results of all the enhanced models on all [qa]{.smallcaps} classes on the development set are presented in Table [\[tab:enhanceresults\]](#tab:enhanceresults){reference-type="ref" reference="tab:enhanceresults"}. BERT and RoBERTa gain the most in terms of F1 with the counting model on counting questions (*base#* on [1-5]{.smallcaps}), while DistilBERT improves markedly on these questions only with the *ensemble* model, requiring more auxiliary resources than the larger models for this level of abstraction.

Moreover, BERT appears to learn formal aspects of semantics in the multitask setting. The *negation#* model improves the results on the answers that require interpreting negation ([neg]{.smallcaps} and [no]{.smallcaps}), while the *srl#* model improves on the [qa]{.smallcaps} class with semantic roles in a different order in the question and the answer ([srl-]{.smallcaps}). Meanwhile, RoBERTa improves on neither. One might say that RoBERTa has learnt the abstract linguistic representations already; however, its *base* results show that it makes many of the same mistakes as BERT and DistilBERT on [neg]{.smallcaps} and [srl-]{.smallcaps}. In fact, BERT outperforms RoBERTa on the [neg]{.smallcaps} class when enhanced with explicit information about negation (the *negation#* model). This, combined with the fact that RoBERTa gets a big improvement on [neg]{.smallcaps} only when the various linguistic features are combined into an *ensemble*, suggests that RoBERTa mostly relies on better lexical representations for its higher scores, an advantage that is only outweighed when many compositional semantics cues are provided. Similarly, DistilBERT only gains a small boost over the baseline on the [neg]{.smallcaps} and [no]{.smallcaps} [qa]{.smallcaps} classes with *negation#* and also requires an *ensemble* to improve on [srl-]{.smallcaps}.

On the other hand, BERT and DistilBERT improve on the [hum]{.smallcaps} and [loc]{.smallcaps} classes with the *ensemble* models, demonstrating an ability to improve their lexical representations. RoBERTa yields no improvement in this case; however, even its *base* model already performs relatively well on these classes.

Furthermore, BERT and DistilBERT do not get a boost in the cases of pragmatics, namely *sentiment#* and *order#*. In contrast, RoBERTa gets a boost on the [ant]{.smallcaps} class from *sentiment#*, and gains the largest increases across almost all classes from *order#*. It appears that RoBERTa can improve on its already high score on items containing antonymy by relying on more pragmatic aspects of lexical semantics, and is also the most receptive to the pragmatic aspects of dialogue in CoQA.

Moreover, the model trained on the dataset augmented through surprising word substitution (*surprisal#*) improves over *base#* on the class with surprising entities ([surp]{.smallcaps}) for BERT and DistilBERT. This shows that the method helps the models generalize better to new examples with surprising entities and shed some of their biases about entities. Interestingly, in the case of BERT the largest boosts on the [surp]{.smallcaps} class are produced by the *negation#* and *srl#* models, showing that focusing on compositional information such as semantic roles and negation helps the model to be less biased towards the very prominent lexical information of stereotypical entities, as discussed in Section [5](#sec:error){reference-type="ref" reference="sec:error"}. In contrast, in the case of RoBERTa, *surprisal#* does not yield an improvement on the [surp]{.smallcaps} class. RoBERTa requires all of the enhanced models to be combined into an *ensemble* in order to shed the biases that all three models exhibit, suggesting that its reliance on (biased) lexical representations is stronger than BERT's or DistilBERT's.

Finally, the *ensemble* models perform better on virtually all classes and provide a better overall score. This is to be expected as the enhanced models, while performing at a similar level, make different errors and complement each other with their respective specializations.
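This excerpt does not spell out how the *ensemble* combines the specialized models. As a minimal illustration only, a majority vote over the per-model answer strings could look as follows; the function name and the tie-breaking rule are our own assumptions, not the authors' method:

```python
from collections import Counter

def ensemble_answer(candidate_answers):
    """Majority vote over the answers proposed by the specialized models.

    `candidate_answers` is one predicted answer string per enhanced model;
    ties are broken in favour of the answer proposed first.
    """
    counts = Counter(candidate_answers)
    answer, _ = counts.most_common(1)[0]
    return answer
```

For example, `ensemble_answer(["no", "no", "yes"])` returns `"no"`. A logit-averaging ensemble would require access to each model's span scores, which this excerpt does not describe.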
|
2012.07791/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2020-11-24T01:55:31.267Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36" version="13.10.0" etag="Z8KZqURtxzkpD27kldHR" type="device"><diagram id="rTfyPfweLHNglZuo0xs8">xZTNcoMgFIWfxj3CJLHbpEmbRVdZdE3lVphBcQhG06fvVUDjmMy03XThCN/9Qc+5mrBd2b1YXss3I0AnlIguYc8JpSkjFG89uXqyzgIorBIhaQIn9QUBkkAbJeA8S3TGaKfqOcxNVUHuZoxba9p52qfR81NrXsACnHKul/RdCSc9zVZk4q+gChlPTkmIlDwmhxZnyYVpPRpy2D5hO2uM86uy24HuxYu6+EaHB9HxwSxU7icFQfcL1014t0MzJBzLXgOsXpMjXglda2y4/bC4KvrVf5NBPXeNlljTVAL6tyIYbqVycKp53kdbHEJk0pUad+lYfQHroHsoXDragXMMpgRnr5gSC6LZYYTTzcrv22kgaMyRt8OQBcjDEBZj78knXASr7tvGFrb9TcboMk5c7c3eDmb/t7hPc3EpSxfiZne03fxeWtxOX9sQu/lnsf03</diagram></mxfile>
2012.07791/main_diagram/main_diagram.pdf
ADDED
Binary file (7.84 kB). View file
2012.07791/paper_text/intro_method.md
ADDED
@@ -0,0 +1,68 @@
# Method

We elaborate on our pose conversion algorithms, mentioned in Sec. [3.2](#sec:pose_conversion){reference-type="ref" reference="sec:pose_conversion"}. Algorithm [\[alg:poseconversion\]](#alg:poseconversion){reference-type="ref" reference="alg:poseconversion"} starts with an initial pose $\mathbf{h}^{prop}$ estimated relative to an image crop, $B$ (see Fig. [4](#fig:initial_pose){reference-type="ref" reference="fig:initial_pose"}), and produces the final converted pose, $\mathbf{h}^{img}$, relative to the whole image, $I$ (see Fig. [6](#fig:final_pose){reference-type="ref" reference="fig:final_pose"}).

At a high level, our pose conversion algorithm, Algorithm [\[alg:poseconversion\]](#alg:poseconversion){reference-type="ref" reference="alg:poseconversion"}, consists of the following two steps:

The first step is a rescaling step (from Fig. [4](#fig:initial_pose){reference-type="ref" reference="fig:initial_pose"} to Fig. [5](#fig:intermediate_pose){reference-type="ref" reference="fig:intermediate_pose"}), where we adjust the camera to view the entire image, $I$, not just the crop, $B$. After the first step, we obtain an intermediate pose representation, $\mathbf{h}^{intermediate}$, relative to the camera location assumed in Fig. [5](#fig:intermediate_pose){reference-type="ref" reference="fig:intermediate_pose"}.

The second step is a translation step (from Fig. [5](#fig:intermediate_pose){reference-type="ref" reference="fig:intermediate_pose"} to Fig. [6](#fig:final_pose){reference-type="ref" reference="fig:final_pose"}), where we translate the principal / focal point of the camera from the center of the crop region to the image center. After this step, each converted global pose, $\mathbf{h}^{img}$, from a different crop, $B_i$, is estimated based on a consistent camera location, as shown in Fig. [6](#fig:final_pose){reference-type="ref" reference="fig:final_pose"}.

<figure id="fig:fig" data-latex-placement="!t">
<figure id="fig:pose_photo">
<img src="images/pose_conversion_1_new.png" />
<figcaption>An example photo</figcaption>
</figure>
<figure id="fig:initial_pose">
<img src="images/pose_conversion_2_new.png" />
<figcaption>Initial pose estimation <span class="math inline"><em>h</em><sup><em>p</em><em>r</em><em>o</em><em>p</em></sup></span></figcaption>
</figure>
<p><br />
</p>
<figure id="fig:intermediate_pose">
<img src="images/pose_conversion_3_new.png" />
<figcaption>Intermediate pose estimation <span class="math inline"><em>h</em><sup><em>i</em><em>n</em><em>t</em><em>e</em><em>r</em><em>m</em><em>e</em><em>d</em><em>i</em><em>a</em><em>t</em><em>e</em></sup></span> (Line 2 - 3)</figcaption>
</figure>
<figure id="fig:final_pose">
<img src="images/pose_conversion_4_new.png" />
<figcaption>Final pose estimation <span class="math inline"><em>h</em><sup><em>i</em><em>m</em><em>a</em><em>g</em><em>e</em></sup></span> (Line 4 - 8)</figcaption>
</figure>
<figcaption>Illustrating the pose conversion method. See Sec. <a href="#sec:append:poseconvert" data-reference-type="ref" data-reference="sec:append:poseconvert">7</a> for more details.</figcaption>
</figure>

Each pose, $\mathbf{h}^{prop}$, $\mathbf{h}^{intermediate}$, and $\mathbf{h}^{img}$, is associated with a specific assumed camera location and thus a specific intrinsic camera matrix, $\mathbf{K}$, $\mathbf{K}_{box}$, and $\mathbf{K}_{img}$, respectively, which we define again here.

Here, we assume $f$ equals the image crop height, $h_{bb}$, plus width, $w_{bb}$; $c_x$ and $c_y$ are the $x, y$ coordinates of the image crop center, respectively; and $w$, $h$ are the full image width and height, respectively.

$$\mathbf{K} = \begin{bmatrix}
f & 0 & c_x\\
0 & f & c_y \\
0 & 0 & 1
\end{bmatrix},\label{eq:append:intrinsic}$$

$$\mathbf{K}_{box} = \begin{bmatrix}
w+h & 0 & c_x + x\\
0 & w+h & c_y + y\\
0 & 0 & 1
\end{bmatrix},\label{eq:append:cropproj}$$

$$\mathbf{K}_{img} = \begin{bmatrix}
w+h & 0 & w/2\\
0 & w+h & h/2 \\
0 & 0 & 1
\end{bmatrix}.\label{eq:append:imgproj}$$

The input to Algorithm [\[alg:poseconversion\]](#alg:poseconversion){reference-type="ref" reference="alg:poseconversion"}, $\mathbf{h}^{prop}$, is estimated based on the camera matrix, $\mathbf{K}$, whose principal point is at the center of the image crop $B$, $(c_x, c_y)$, and whose focal length $f$ is $w_{bb} + h_{bb}$, as visualized in Fig. [4](#fig:initial_pose){reference-type="ref" reference="fig:initial_pose"}. Step 1 of the algorithm, lines 2-3, first rescales the image. This *zoom-out* operation pushes the object further away from the camera by multiplying the translation along the $z$ axis, $t_z$, by the factor $(w+h) / (w_{bb} + h_{bb})$. This extra factor in $z$ adjusts the projected coordinates, $\mathbf{p}$, on the image plane to reflect the relative ratio of the image crop to the whole image (since the original pose estimate $\mathbf{h}^{prop}$ is estimated assuming each image crop is of constant size).

Then we adjust the camera matrix accordingly, from $\mathbf{K}$ to $\mathbf{K}_{box}$. This transformation of the intrinsic camera matrix adjusts the principal point, and thus moves the origin of the image coordinate system from the top left corner of the image crop to the top left corner of the whole image.

Step 2 of the algorithm, lines 4-8, translates the camera, so that every pose estimate, $\mathbf{h}^{img}$, is based on the camera settings shown in Fig. [6](#fig:final_pose){reference-type="ref" reference="fig:final_pose"}, with the principal point at the image center and focal length $w+h$.

The methodology here is to first adjust the camera matrix from $\mathbf{K}_{box}$ to $\mathbf{K}_{img}$, in order to compensate for the translation of the desired principal point, and then solve for the associated pose, $\mathbf{h}^{img}$. Since the image coordinate system does not change in Step 2, the following equalities must hold: $$\mathbf{p} = \mathbf{K}_{box} [\mathbf{R} | \mathbf{t}] \mathbf{P},$$ $$\mathbf{p} = \mathbf{K}_{img} [\mathbf{R'} | \mathbf{t'}] \mathbf{P}.$$ In other words, $$\mathbf{K}_{box} [\mathbf{R} | \mathbf{t}] = \mathbf{K}_{img} [\mathbf{R'} | \mathbf{t'}].$$ So we can obtain the rotation matrix $\mathbf{R'}$ and translation vector $\mathbf{t'}$ from the following equations: $$\mathbf{R'} = (\mathbf{K}_{img})^{-1} \mathbf{K}_{box} \mathbf{R},$$ $$\mathbf{t'} = (\mathbf{K}_{img})^{-1} \mathbf{K}_{box} \mathbf{t}.$$ The new pose, $\mathbf{h}^{img}$, can then be extracted from $\mathbf{R'}$ and $\mathbf{t'}$ using standard approaches [@rodrigues; @hartley2003multiple].
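The two steps can be sketched in a few lines of code. This is a minimal re-implementation under the intrinsics defined above, not the authors' code: it operates directly on the rotation matrix $\mathbf{R}$ (leaving rotation-vector conversion to the standard Rodrigues routines cited), and all names are illustrative.

```python
def convert_pose_to_global(R, t, crop_xywh, img_wh):
    """Crop-relative pose (R, t) -> image-relative pose: a zoom-out step,
    then a principal-point translation step. Illustrative sketch only."""
    x, y, w_bb, h_bb = crop_xywh
    w, h = img_wh
    f = float(w + h)                          # shared focal length w + h
    cx, cy = x + w_bb / 2.0, y + h_bb / 2.0   # crop centre in full-image
                                              # coordinates (c_x + x in K_box)

    # Step 1 (lines 2-3): zoom out by scaling t_z with (w + h) / (w_bb + h_bb).
    t = [t[0], t[1], t[2] * f / (w_bb + h_bb)]

    # Step 2 (lines 4-8): solve K_box [R|t] = K_img [R'|t'], i.e.
    # R' = K_img^{-1} K_box R and t' = K_img^{-1} K_box t. Because K_img and
    # K_box share the focal length f, M = K_img^{-1} K_box is a simple shear.
    sx, sy = (cx - w / 2.0) / f, (cy - h / 2.0) / f
    R_new = [[R[0][j] + sx * R[2][j] for j in range(3)],
             [R[1][j] + sy * R[2][j] for j in range(3)],
             list(R[2])]
    t_new = [t[0] + sx * t[2], t[1] + sy * t[2], t[2]]
    return R_new, t_new
```

When the crop coincides with the whole image, $\mathbf{K}_{box} = \mathbf{K}_{img}$ and the pose is returned unchanged, which is a convenient sanity check.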

The conversion from global pose, $\mathbf{h}^{img}$, to local pose, $\mathbf{h}^{prop}$, follows the exact same methodology. For completeness, we provide pseudo-code for this step in Algorithm [\[alg:poseconversionrev\]](#alg:poseconversionrev){reference-type="ref" reference="alg:poseconversionrev"}.

:::: algorithm
::: algorithmic
$\mathbf{V} = \mathbf{K}_{img} [t_x, t_y, t_z]^T$
$[t'_x, t'_y, t'_z]^T = (\mathbf{K}_{box})^{-1} \mathbf{V}$
$\mathbf{R} = \text{rot\_vec\_to\_rot\_mat}([r_x, r_y, r_z])$
$\mathbf{R'} = (\mathbf{K}_{box})^{-1} \mathbf{K}_{img} \mathbf{R}$
$(r'_x, r'_y, r'_z) = \text{rot\_mat\_to\_rot\_vec}(\mathbf{R'})$
$f \leftarrow w + h$
$t'_z = t'_z / f * (w_{bb} + h_{bb})$
**return** $\mathbf{h}^{prop} = (r'_x, r'_y, r'_z, t'_x, t'_y, t'_z)$
:::
::::
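The reverse direction can be sketched the same way: apply the inverse shear $\mathbf{K}_{box}^{-1}\mathbf{K}_{img}$, then undo the zoom-out. Again a hypothetical re-implementation operating on the rotation matrix directly, with illustrative names:

```python
def convert_pose_to_local(R, t, crop_xywh, img_wh):
    """Image-relative pose (R, t) -> crop-relative pose; the inverse of the
    forward conversion. Illustrative sketch only, not the authors' code."""
    x, y, w_bb, h_bb = crop_xywh
    w, h = img_wh
    f = float(w + h)
    cx, cy = x + w_bb / 2.0, y + h_bb / 2.0

    # Inverse shear M^{-1} = K_box^{-1} K_img: same form, offsets negated.
    sx, sy = (w / 2.0 - cx) / f, (h / 2.0 - cy) / f
    R_new = [[R[0][j] + sx * R[2][j] for j in range(3)],
             [R[1][j] + sy * R[2][j] for j in range(3)],
             list(R[2])]
    t_new = [t[0] + sx * t[2], t[1] + sy * t[2], t[2]]

    # Undo the zoom-out (line 7): t'_z = t'_z / f * (w_bb + h_bb).
    t_new[2] *= (w_bb + h_bb) / f
    return R_new, t_new
```

Composing this with the forward conversion is the identity, which makes a convenient round-trip test.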
|
2012.15355/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-01-31T21:17:05.807Z" agent="5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36" etag="qSN6WC5nb4ytcv4Gwm-S" version="14.1.3" type="device"><diagram id="kTgjqAyGO_KolCsvfGrs" name="Page-1">7VpZd5u8Fv01eWwW8/BowOB5nvAbg8CYGcRgfv0nbBzHCWnT26TN6q0fjHQ0IO2trXMQPJCiXyqJFh3GoQm8BwIzywdSeiAInmLQf204XQw0SVwMduKYFxN+MyydCjRGrLFmjgnSu4owDD3oRPdGIwwCYMA7m5YkYXFfzQq9+7tGmg1eGZaG5r22bh0THi5WjsZu9h5w7MP1zjjWlPjatXJjSA+aGRbPTGT3gRSTMISXlF+KwKuxu+JyaSe/Ufo0sAQE8D0NLHduuMUg2jizI+sdFWo3yL81ZOSalzUTbgYLT1cEkjALTFB3gj+QQnFwIFhGmlGXFohyZDtA32uKLcfzxNALk3Nb0qQBZ1LInsIkdMGzEo7QSYZBJa+n0cwsBwkE5TNTMy0FhD6AyQlVuZZyDcTNGntaPMWNMYptbIfnbF0bas0qsZ/6vgGJEg2W7biKkwUbH0lhbC6Z7TDgvd5h/A3/Ma7ARAutyYYJPIR2GGhe92YVbshjKHerMwrDqMH7CCA8NarRMhjes/EmtmmYJcZ1hauneS9QBsE+G7AeM6oGcXkdP9QSG8Dv1LuquZ7Md6lKgKdBJ79X1a/A/r1h38HOeLBemCGC4Dn+TJyF14Jv6RnBDqqAc1F5K0Qpu77OEvBtlWhBaoWJDxJUD210Geqp6R0N9nKDS/WP1ZAGOMto0xBjcEC3PkZDaOaP9J2KuBYR4S0iYj9AQ61ktu1NF7jTSAtuWH8EwffkjrTT+fqtrv8mx0/m++F8IPWWBRijlXqT5XUM+yDqaf5++yRfM0/w9Gvmyc9invoSzBP/f8yzf5r5a8e/10+C0oG7ujnaAi859VmJVDY9nzOnayZA833WqM6qz8tuzc65a7v/3SfT7/TJ7JdyyfQnueS3dTv5HV75zwiUpP60QHH6n0B/ILwfChTHv5RC2c8KmsMU/rGo2eIM0K5PnaMp+qP0+Spqpog/HTbjzD+FvoUN916FfimBcp8k0H4QZfDnxYj9tBgBbtKAbRMjz7Ck9kHHQCRFPxL3amw5CMIJ4rHFYVKfJkf+nxx/5Ah/rEfuSwkS/6xzpmkG/y5JkuwL/0i3SZL6zZJ8+2TpvfSRWBt9j4+P7+UOAQjvCbonIggD8IK1xqR5jh2grIFoQUEVKdR0OIbmdZoC3zHN8y7StiLu18xHUMy9pJjEWkKglgiI+DR+2xzmC/zTgxbVScc/vwB6AnGk6cBDkasDnbAGUw8hDH1UwasLBM1w7TOEd4+C9a+FCFhv1oKWRpcXU5ZT1sAL51t2rlbsakFpU4MaWl+XLCFHgf1AiM5GmC4KbKjYYQf9Jsv1obu2UUoc13kgdtQ64880I0YJwV173flmoYqA2PaNXFwLS2WPy9uFeQgmY+KkCygSFxD1Qro+7nRhL/S7pMqrCldwq/HElc25FQp7WbYLZb7ukRYs5DEowbRnOIcDppITvMqmqHnFlydqjcYpSKFVzMWqr+6OxmnPsn4187gywwkKVXMHfIUqbQepWYbTlJG7WyHyA1WX9iKlLVwtOqInEHnOjlJf0icOsxOr2FzZBdbrMCu+v2DjrczkC2pccBLqSVBPsjXhe7MIterpkroSHcNbc0q6cXLXOLrdudVBMbBA2lzfriS8gp47j318Gx+2Mb7k+0scOwrpzI7lTp7wuSd23GipGBuM7MGVrLq9RZffD4tqqa/2i+Fg
VQCJ6XYGS1fuYuvFZIrNdEVFN1A2/QKtVmF90qHMzXLOdfgiX4wsfaHjgKYGRoKDcUlMjZVjZqvNUGTHZQapaRzHh/XwKEy6qDVOKOi/iAmEqUwibuTJHhl4xL0wWdsF20/3hWJjhT8/zbXZYj6S1aOcK2y8OGVqqSWqLqAVQGz4uV1sT+SWlYKZaeMpAm28x6fUzgWIIAnQMTZIlfkqUsPKF6jI3033Y7boTSW4r2aSUUNrzTpSUu38g4LP69XVWa4308WQFtV+v17tH7Nl8PjLQI3B8Fd7BvlbDx7/nWu8jQ3xzjCN+FJRGtnm5v8+N1CLv9PRL25AoIYBauLVyc3FEfQlQCxzJO4MuYKVLBlyR1MOqdbnCyNNxFDzaSIKzVInDha39qKJy/v2oHT3ukLH9U6ki3FIcxlLiflQPmFwR65Ksjeru0S36nA720VXEM8tY6NIQB/g8oyjnaNz3nrMJDWdeNDx/F1hLVBFnd3vY3TNsJzKJlu0gev8DPLV1KzgBEKpo0rHrnqsFLUa9kOxqARg9PmlKFHgyAt4T7UkW9N6xanrLo2BkxrCKVvOeX9dgEgwPGGI+t5plTMtOIMN5qxQHDvuCMNXZJwu57lUcVOcmDOzhS1WI28mVytmWc8jmVqlIfK7ejJdYE1lSlwMjBEOGFXwtoulpJtwvTtRu9XQXAVdJpgQQrTpj3K+yjTkK4TQcUS0s7s7E3gGk2/EbFZmymKdO7nAjnRTPwC4sTMtSrvYSov3Ja9NEpOkrSHV9XzV86VdNRubCNYcM+bCdJ0mG4ZG7nctirI6ntW+YqdO6E1vvZ71QzTMXD9ON/wwm247n7hZ0y/iO7zlzfBVWnfxHfvrwm79aOWznr7GmoPEg/3KkeVfE9bT7MuTTaLl3UNbWI8zP++jUfb2IdS57NnXZGT3Pw==</diagram></mxfile>
|
2012.15355/main_diagram/main_diagram.pdf
ADDED
|
Binary file (14.3 kB). View file
|
|
|
2012.15355/paper_text/intro_method.md
ADDED
@@ -0,0 +1,60 @@
# Introduction

In recent years, large-scale pre-trained language models [\(Radford et al.,](#page-9-0) [2019;](#page-9-0) [Devlin et al.,](#page-8-1) [2018;](#page-8-1) [Liu et al.,](#page-8-2) [2019b\)](#page-8-2) trained with transformers [\(Vaswani et al.,](#page-9-1) [2017\)](#page-9-1) have become standard building blocks of modern NLP systems, helping to improve generalization when task-specific annotations are limited. In practice, it has been found that deeper transformers generally yield better results with sufficient training data [\(Lan et al.,](#page-8-3) [2019\)](#page-8-3), especially on tasks involving reasoning and structural understanding. This suggests that additional transformer layers should be employed in conjunction with pre-trained models, instead of the simple and shallow neural components, such as a classifier head, currently used by models of many NLP tasks. However, the common belief in the literature is that training deep transformers from scratch requires large datasets, and, to the best of our knowledge, few attempts have been made on small datasets. One implication is that although extra transformer layers on top of pre-trained models should help with more challenging problems in principle, this does not work in practice due to limited training data. We show that after resolving several optimization issues with the method proposed in this work, it is possible to train very deep transformers with improved generalization even on small datasets.

One advantage of pre-trained models is the reduced computational resources needed when finetuning on small datasets. For instance, they allow practitioners to finetune on a single GPU and obtain strong performance on a downstream task. However, the large size of pre-trained models limits the batch size that can be used in training new transformer layers on a small computational budget. Despite their broad applications, training transformer models is known to be difficult [\(Popel and Bojar,](#page-9-2) [2018\)](#page-9-2). The standard transformer training approach leverages learning rate warm-up, layer normalization [\(Ba et al.,](#page-8-4) [2016\)](#page-8-4) and a large batch size, and models typically fail to learn when any one of these components is missing. The restricted batch size aggravates the training difficulties. Even if a large batch size can be feasibly employed, poorer generalization results are often observed [\(Keskar et al.,](#page-8-5) [2016\)](#page-8-5), especially when the dataset size is only several times larger than the batch size. Furthermore, many recent works have noticed a performance gap in this training approach due to layer normalization (Xu et al., 2019; Nguyen and Salazar, 2019; Zhang et al., 2019a; Wang et al., 2019b; Liu et al., 2020; Huang et al., 2020).

<span id="page-0-0"></span><sup>∗</sup>Work done while the author was an intern in Borealis AI. <sup>1</sup>The code to reproduce our results can be found in: <https://github.com/BorealisAI/DT-Fixup>

Inspired by the recent T-Fixup by Huang et al. (2020), which eliminates the need for learning rate warm-up and layer normalization to train vanilla transformers, we derive a data-dependent initialization strategy by applying different analyses to address several key limitations of T-Fixup. We call our method the Data-dependent Transformer **Fixed-up**date initialization scheme, *DT-Fixup*. In the mixed setup of additional yet-to-be-trained transformers on top of pre-trained models, DT-Fixup enables the training of significantly deeper transformers, and is generally applicable to different neural architectures. Our derivation also extends beyond vanilla transformers to transformers with relational encodings (Shaw et al., 2018), allowing us to apply the results to one variant called relation-aware transformer (Wang et al., 2019a). By applying DT-Fixup on different tasks, we show that the impression that deep transformers do not work on small datasets stems from the optimization procedure rather than the architecture. With proper initialization and optimization, training extra transformer layers is shown to facilitate the learning of complex relations and structures in the data.

We verify the effectiveness of DT-Fixup on Spider (Yu et al., 2018), a complex and cross-domain Text-to-SQL semantic parsing benchmark, and ReClor (Yu et al., 2020b), a reading comprehension dataset requiring logical reasoning. While Text-to-SQL semantic parsing is inherently different from reading comprehension, the two tasks share similar characteristics, requiring a certain level of reasoning and structural understanding ability. Meanwhile, both datasets contain fewer than 10k training samples, which is tiny by deep learning standards and renders large-batch training undesirable due to poor generalization<sup>2</sup>.

On both datasets, DT-Fixup consistently outperforms the standard approach with better generalization and allows the training of significantly deeper transformer models. For Spider, we successfully apply DT-Fixup to train a Text-to-SQL parser containing 48 transformer layers, with 24 relation-aware layers trained from scratch on top of 24 pre-trained layers from RoBERTa (Liu et al., 2019b). Our parser achieves 70.9% exact match accuracy on the Spider test set, which is the state of the art at the time of writing. At the same time, it requires fewer training steps and no task-specific pre-training compared to the prior art (Yu et al., 2020a). For ReClor, we rank second on the public leaderboard by simply adding 4 transformer layers on top of RoBERTa. Further error analysis shows that the performance improvements from increasing the depth mainly come from better generalization on the harder cases requiring reasoning and structural understanding. Even the failed predictions from the deep models are more reasonable than those from the shallow ones.

In this section, we present the necessary background by first introducing the relation-aware transformer layer, which outperforms the vanilla transformer layer with limited data by injecting additional inductive bias (Wang et al., 2019a). Then, we introduce the T-Fixup technique (Huang et al., 2020) for optimizing deeper vanilla transformers and discuss why it does not directly apply in the mixed transformer optimization setup.

Consider a set of inputs $X = [x_1, ..., x_n]$ where $x_i \in \mathbb{R}^{d_x}$. A *transformer*, introduced by Vaswani et al. (2017), is a stack of blocks, with each block consisting of a multi-head *self-attention layer*, layer normalizations, a multi-layer perceptron and skip connections. Each block (with one head in self-attention for notational simplicity) transforms each $x_i$ into $y_i \in \mathbb{R}^{d_x}$ as follows:

$$\alpha_{ij} = \operatorname{softmax} \left( \boldsymbol{x}_i \boldsymbol{q} (\boldsymbol{x}_j \boldsymbol{k})^\top \middle/ \sqrt{d_z} \right) \tag{1}$$

<span id="page-1-2"></span><span id="page-1-1"></span>
$$\boldsymbol{z}_i = \sum_{j=1}^n \alpha_{ij} \boldsymbol{x}_j \boldsymbol{v} \tag{2}$$

$$\tilde{\boldsymbol{y}}_i = \text{LayerNorm}(\boldsymbol{x}_i + \boldsymbol{z}_i \boldsymbol{w}^\top) \tag{3}$$

<span id="page-1-3"></span>
$$\boldsymbol{y}_i = \text{LayerNorm}(\boldsymbol{\tilde{y}}_i + \text{MLP}(\boldsymbol{\tilde{y}}_i)) \tag{4}$$

where the softmax operation is applied across the index $j$, MLP is a two-layer perceptron, LayerNorm is a *layer normalization* (Ba et al., 2016) layer, and $\boldsymbol{q}, \boldsymbol{k}, \boldsymbol{v}, \boldsymbol{w} \in \mathbb{R}^{d_x \times d_z}$.
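Eqs. (1)-(4) can be traced end-to-end in a short, dependency-free sketch. The function below is our own illustration of a single-head block (names are not from the paper), with the two-layer MLP passed in as a callable:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(v - m) for v in xs]
    s = sum(es)
    return [e / s for e in es]

def vecmat(x, M):  # row vector (d,) times matrix (d x m) -> (m,)
    return [sum(x[i] * M[i][j] for i in range(len(x))) for j in range(len(M[0]))]

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def layer_norm(v, eps=1e-5):
    mu = sum(v) / len(v)
    var = sum((u - mu) ** 2 for u in v) / len(v)
    return [(u - mu) / math.sqrt(var + eps) for u in v]

def transformer_block(X, q, k, v, w, mlp):
    """One single-head block per Eqs. (1)-(4). q, k, v, w are d_x x d_z
    matrices (row-major); mlp is any callable R^{d_x} -> R^{d_x}."""
    d_z = len(q[0])
    Q = [vecmat(x, q) for x in X]   # x_i q
    K = [vecmat(x, k) for x in X]   # x_j k
    V = [vecmat(x, v) for x in X]   # x_j v
    Y = []
    for i, x in enumerate(X):
        # Eq. (1): scaled dot-product attention weights over j
        alpha = softmax([dot(Q[i], K[j]) / math.sqrt(d_z) for j in range(len(X))])
        # Eq. (2): z_i = sum_j alpha_ij x_j v
        z = [sum(alpha[j] * V[j][c] for j in range(len(X))) for c in range(d_z)]
        # Eq. (3): residual + LayerNorm; z_i w^T maps R^{d_z} back to R^{d_x}
        zw = [dot(z, w[r]) for r in range(len(w))]
        y_tilde = layer_norm([x[r] + zw[r] for r in range(len(x))])
        # Eq. (4): MLP sub-block with residual + LayerNorm
        Y.append(layer_norm([a + b for a, b in zip(y_tilde, mlp(y_tilde))]))
    return Y
```

Each output row has zero mean by construction of the final LayerNorm, which makes a simple sanity check.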

<span id="page-1-0"></span><sup>2</sup>For a comparison, T-Fixup applies batch sizes of more than 1k on machine translation to stabilize the training, which would hurt generalization significantly on our datasets, whose sizes are less than 10k.

In order to bias the transformer toward some pre-existing relational features between the inputs, Shaw et al. (2018) described a way to represent *relative position information* in a self-attention layer by changing Equations 1-2 as follows:
$$\alpha_{ij} = \operatorname{softmax} \left( \frac{\boldsymbol{x}_i \boldsymbol{q} (\boldsymbol{x}_j \boldsymbol{k} + \boldsymbol{r}_{ij}^k)^\top}{\sqrt{d_z}} \right) \tag{5}$$
$$\boldsymbol{z}_i = \sum_{j=1}^n \alpha_{ij} (\boldsymbol{x}_j \boldsymbol{v} + \boldsymbol{r}_{ij}^v) \tag{6}$$
Here the $\mathbf{r}_{ij} \in \mathbb{R}^{d_z}$ terms encode the known relationship between two elements $\mathbf{x}_i$ and $\mathbf{x}_j$ of the input. Wang et al. (2019a) adapted this framework to effectively encode the schema information using the $\mathbf{r}_{ij}$'s for Text-to-SQL parsers, calling the result the relation-aware transformer (RAT).
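The relation-aware attention of Equations 5–6 can be sketched in NumPy as follows. This is an illustrative sketch only: `Rk` and `Rv` hold the relation embeddings $\boldsymbol{r}_{ij}^k$ and $\boldsymbol{r}_{ij}^v$, which in RAT would be looked up from schema relations rather than passed in directly:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relation_aware_attention(X, q, k, v, Rk, Rv):
    """Single-head relation-aware self-attention (Eqs. 5-6).
    X: (n, d_x); q, k, v: (d_x, d_z); Rk[i, j], Rv[i, j]: the d_z-dim
    relation embeddings r_ij^k and r_ij^v."""
    d_z = q.shape[1]
    Q, K, V = X @ q, X @ k, X @ v
    # score_ij = x_i q (x_j k + r_ij^k)^T / sqrt(d_z)      (Eq. 5)
    scores = np.einsum('id,ijd->ij', Q, K[None, :, :] + Rk) / np.sqrt(d_z)
    alpha = softmax(scores, axis=-1)
    # z_i = sum_j alpha_ij (x_j v + r_ij^v)                (Eq. 6)
    return np.einsum('ij,ijd->id', alpha, V[None, :, :] + Rv)
```

With `Rk` and `Rv` set to zero this reduces exactly to the vanilla attention of Equations 1–2.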
# Method
We now follow the analysis framework of T-Fixup (Huang et al., 2020), but derive the conditions to bound the gradient updates of the self-attention block in the presence of a pre-trained model. Based on the derivation, we propose a data-dependent initialization strategy for the mixed setup of the new transformers on pre-trained encodings.
Our analysis applies to the general architecture type illustrated in Figure 1, where the input passes through a pre-transformer, a main transformer, and a post-transformer module before the output. The pre- and post-transformer modules can be any architectures that can be stably trained with Adam (Kingma and Ba, 2014), including MLPs, LSTMs, CNNs, or a pre-trained deep transformer module that can be stably fine-tuned with a learning rate significantly smaller than the main learning rate used for the main transformer module. For this work, we consider only the case of the main transformer containing the encoder for simplicity, while our decoder is an LSTM that can be viewed as part of the post-transformer module. Extending our analysis to include a deep transformer decoder is straightforward following the framework of Huang et al. (2020).
We use $f_e$ to denote the pre-transformer module ($e$ for pre-trained encoder), with parameters $\boldsymbol{\theta}_e$; similarly, $f_o$ denotes the post-transformer module ($o$ for output), with parameters $\boldsymbol{\theta}_o$. The main transformer module $f_G$ is a stack of $N$ transformer blocks, each consisting of a self-attention block and an MLP block. Let $G_l$, $l=1,\ldots,2N$, denote the individual self-attention or MLP layers in the blocks (the $G_l$'s do not include the skip connections), with parameters $\boldsymbol{\theta}_l$, and let $L=2N$. $f_G$'s parameters are denoted by $\boldsymbol{\theta}_G=\bigcup_{l=1}^L \boldsymbol{\theta}_l$.
2101.09178/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2020-06-02T11:23:19.784Z" agent="5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36" version="13.1.12" etag="MyH3Dj0Zw1B8thMrY99p" type="device"><diagram id="Bd2J230uUhJCWlP7Yhj7">7ZpNs9ogFIZ/TbZ3AuTLZf1ou+nKRdcYMGGaBAexan99yZWoEW69mbnFOJcZF+GFfHCenOTklQDN6sM3gTflD05oFcCQHAI0DyAEEYRB+wvJ8aSkyeQkFIIRPegiLNkfqsVQqztG6LY3UHJeSbbpizlvGprLnoaF4Pv+sDWv+mfd4IIawjLHlan+ZESWJzWD6UX/TllRdmcG3fxq3A3WM9mWmPD9lYQWAZoJzuVpqz7MaNUGr4vLab+vb/SeL0zQRr5nBw3iN652em76uuSxm6zgu4bQdnwYoOm+ZJIuNzhve/cKr9JKWVeqBdTmmjdS8wJJ22ZVNeMVF6/HQgTTbJ0rfSsF/0WvepI8o6v1uacLqwrItKjwdqtPn/Oa5Xr7HLu2oedBhaSHN2MBzhFWtyblNZXiqIboHVCioei7Mgl1e39hnGYnqbzC22lY31XF+ciXwKsNHXs7B+SWwzxeZPPIxiGDK5S0exQCE6Yi1/U1vKEPxRNF9/EgCx70AXgij+cunuwGT+QOT+yfYm+mSQzdPcUSnyZ38YQ3eBJ3aZJ6PENrAFv2/C88mYEnfIkNQmpuso+hH14dwmsWWsIVKxrVzFUsqNKnbaSYqmq/6I6aEdKexsq9f2cY6Nu2vkjwSXhNPK+nevx1X48e2JNU3QB4YE9VhwPTTgAe16D8cvkCA6br4HENyy6nuEwXwuMamF0uyw3TlfC4BmaXS1ymeRF6XP/EhRJ4UxtCd7hMM8PjGvbt5bSUt5gbHted7HpgIW/xNjyuYdnlEBc0nQ2D1kc6u+ssp7n1D5BVFkdxGIzKwnVYRECrY+EtixETMy2Lz5s6hjvr8OvWsgQifIl86oyYmONVERSQmKa21JkkKcLJY1Pn1id3+fo3nQbvk48amOOFEuPOnNs/LBx+llqWRPjMGREw1byshX3tu1pRjBZ/AQ==</diagram></mxfile>
2101.09178/main_diagram/main_diagram.pdf
ADDED
Binary file (6.12 kB).
2101.09178/paper_text/intro_method.md
ADDED
@@ -0,0 +1,155 @@
# Introduction
Traditionally, game theory is applied in situations where the game is fully known. More recently, empirical game theory addresses the setting where this is not the case: the game is initially unknown and must be interacted with by sampling (Wellman 2006). One area in which this is becoming increasingly common is the ranking of trained agents relative to one another. Specifically, in the field of Reinforcement Learning, game-theoretic rankings are used not just as a metric for measuring algorithmic progress (Balduzzi et al. 2018), but also as an integral component of many population-based training methods (Muller et al. 2020; Lanctot et al. 2017; Vinyals et al. 2019). In particular, two popular solution concepts for ranking have recently emerged: Nash averaging (Balduzzi et al. 2018; Nash 1951) and α-rank (Omidshafiei et al. 2019).
In this paper, we aim to estimate the α-rank of a game using as few samples as possible. We use the α-rank solution concept for two reasons. First, it admits a unique solution whose computation easily scales to K-player games. Second, unlike older schemes such as Elo (Elo 1978), α-rank is designed with intransitive interactions in mind. Because measuring payoffs can be very expensive, it is important to do it using as few samples as possible. For example, playing a match of chess (Silver et al. 2017) can take roughly 40 minutes (assuming a typical game length of 40 and up to 1 minute per move as used during evaluation), and playing a full game of Dota 2 can take up to 2 hours (Berner et al. 2019). *Our objective is thus to accurately estimate the α-rank using a small number of payoff queries.*
Rowland et al. (2019) proposed ResponseGraphUCB (RG-UCB) for this purpose, inspired by pure exploration bandit literature. RG-UCB maintains confidence intervals over payoffs. When they don't overlap, it draws a conclusion about their ordering, until all comparisons relevant for the computation of α-rank have been made. While this is provably sufficient to determine the true α-rank with a high probability in the infinite-α regime, their approach has two important limitations. First, since the frequentist criterion is indirect, relying on payoff ordering rather than the α-rank, the obtained payoffs aren't always used optimally. Second, it is nontrivial to include useful domain knowledge about the entries or structure of the payoff matrix.
To remedy these problems, we propose a Bayesian approach. Specifically, we utilize a Gaussian Process to maintain an epistemic belief over the entries of the payoff matrix, providing a powerful framework in which to supply domain knowledge. This payoff distribution induces an epistemic belief over α-ranks. We determine which payoff to sample by maximizing information gain between the α-rank belief and the obtained payoff. This allows us to focus our sampling on the entries that are expected to have the largest effect on our belief over possible α-ranks.
Contributions: Theoretically, we justify the use of information gain by showing a regret bound for a version of our criterion in the infinite-α regime. Empirically, our contribution is threefold. First, we compare to RG-UCB on stylized games, showing that maximizing information gain provides competitive performance by focusing on sampling the more relevant payoffs. Second, we evaluate another objective based on minimizing the Wasserstein divergence, which offers competitive performance while being computationally much cheaper. Finally, we demonstrate the benefit of building in prior assumptions.
<sup>\*</sup>Work done during an internship at MSR Cambridge. Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
**Games and $\alpha$-Rank** A game with $K$ players, each of whom can play $S_k$ strategies, is characterized by its expected payoffs $M \in \mathcal{R}^{S_1 \times \ldots \times S_K}$ (Fudenberg and Tirole 1991). Letting $\mathcal{S} = S_1 \times \ldots \times S_K$ be the space of pure strategy profiles, the game also specifies a distribution over the payoffs associated with each player when $s \in \mathcal{S}$ is played. The $\alpha$-rank of a game is computed by first defining an irreducible Markov Chain whose nodes are pure strategy profiles in $\mathcal{S}$. We denote the stochastic matrix defining this chain as $C$. The transition probabilities of the chain $C$ are calculated as follows: let $\sigma, \tau \in \mathcal{S}$ be such that $\tau$ differs from $\sigma$ only in a single player's strategy, and let $\eta = (\sum_{k=1}^K (|S_k| - 1))^{-1}$ be the reciprocal of the total number of those distinct $\tau$. Let $M_k(\sigma)$ denote the expected payoff for player $k$ when $\sigma$ is played. Then, the probability of transitioning from $\sigma$ to $\tau$, which varies only in player $k$'s strategy, is
$$C_{\sigma,\tau} = \begin{cases} \eta \frac{1 - \exp(-\alpha(M_k(\tau) - M_k(\sigma)))}{1 - \exp(-\alpha m(M_k(\tau) - M_k(\sigma)))} & \text{if } M_k(\tau) \neq M_k(\sigma), \\ \frac{\eta}{m} & \text{otherwise.} \end{cases}$$
$C_{\sigma,\upsilon}=0$ for all $\upsilon$ that differ from $\sigma$ in more than a single player's strategy, $C_{\sigma,\sigma}=1-\sum_{\tau\neq\sigma}C_{\sigma,\tau}$ to ensure a valid transition distribution, and $\alpha\geq 0, m\in\mathbb{N}^{>0}$ are parameters of the algorithm. We define the $\alpha$ -rank $r\in\mathcal{R}^{|\mathcal{S}|}$ as the unique stationary distribution of the chain C (Omidshafiei et al. 2019; Rowland et al. 2019) as $\alpha\to\infty$ . In practice, a large finite value of $\alpha$ is used, or a perturbed version of the transition matrix C is used with an infinite $\alpha$ to ensure the resulting Markov Chain C is irreducible.
**Single Population $\alpha$-Rank** In this paper we focus on the infinite-$\alpha$ regime and restrict our attention to the 2-player, single-population case of $\alpha$-rank, which differs slightly from the above. Our method extends to multiple populations (as described above) in a straightforward way, but we focus on the single-population case for simplicity. Let $S=S_1$ and let $M(\sigma,\tau)$ denote the payoff when the first player plays $\sigma$ and the second player plays $\tau$. Note that $S_1=S_2$, since the single-population case considers a player playing a game against an identical player. In this setting, the $\alpha$-rank $r \in \mathcal{R}^{|S|}$ and the perturbed transition matrix $C \in \mathcal{R}^{S \times S}$ are calculated as follows:
$$C_{\sigma,\tau} = \begin{cases} (|S| - 1)^{-1} (1 - \epsilon) & \text{if } M(\tau, \sigma) > M(\sigma, \tau), \\ (|S| - 1)^{-1} \epsilon & \text{if } M(\tau, \sigma) < M(\sigma, \tau), \\ 0.5 (|S| - 1)^{-1} & \text{if } M(\tau, \sigma) = M(\sigma, \tau), \end{cases}$$
for $\sigma \neq \tau$ . $C_{\sigma,\sigma} = 1 - \sum_{\tau \neq \sigma} C_{\sigma,\tau}$ again to ensure a valid transition distribution and $\epsilon$ is a small perturbation to ensure irreducibility of the resulting chain. We abstract the above computation into the $\alpha$ -rank function $f: \mathcal{M} \to \mathcal{R}^{|S|}$ , where $\mathcal{M}$ is the space of 2-player payoff matrices with S strategies for each player.
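For illustration, the single-population map $f$ can be implemented directly from the definition of $C$ above. This is a sketch rather than the authors' code; the stationary distribution is extracted as the left eigenvector of $C$ associated with eigenvalue 1:

```python
import numpy as np

def alpha_rank_single_pop(M, eps=0.01):
    """Single-population, infinite-alpha alpha-rank of payoff matrix M,
    using the eps-perturbed transition matrix C defined above."""
    S = M.shape[0]
    C = np.zeros((S, S))
    for s in range(S):
        for t in range(S):
            if s == t:
                continue
            if M[t, s] > M[s, t]:          # tau beats sigma
                C[s, t] = (1.0 - eps) / (S - 1)
            elif M[t, s] < M[s, t]:        # tau loses to sigma
                C[s, t] = eps / (S - 1)
            else:                          # tie
                C[s, t] = 0.5 / (S - 1)
        C[s, s] = 1.0 - C[s].sum()         # valid transition distribution
    # stationary distribution: left eigenvector of C with eigenvalue 1
    vals, vecs = np.linalg.eig(C.T)
    r = np.abs(np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]))
    return r / r.sum()
```

On rock–paper–scissors this returns the uniform distribution, reflecting the cyclic structure, while a dominant strategy accumulates almost all of the stationary mass.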
**Wasserstein Divergence** Let $p$ and $q$ be probability distributions supported on $\mathcal{X}$, and let $c: \mathcal{X} \times \mathcal{X} \to [0, \infty)$ be a distance. Define $\Pi$ as the space of all joint probability distributions with marginals $p$ and $q$. The Wasserstein divergence (Villani 2008) with cost function $c$ is defined as:
$$\mathcal{W}_c(p,q) := \min_{\pi \in \Pi} \int_{\mathcal{X} \times \mathcal{X}} c(x,y) d\pi(x,y).$$
In this paper, we will utilize the Wasserstein distance between our belief distributions over $\alpha$-rank, and so we set $\mathcal{X}=\Delta^{S-1}$, the $(S-1)$-dimensional probability simplex, and use $c(x,y)=\frac{1}{2}\|x-y\|_1$, i.e. the total variation distance. We drop the subscript and denote this simply as $\mathcal{W}$.
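Between two finitely supported distributions, $\mathcal{W}$ with this cost reduces to a small linear program over transport plans. A sketch using `scipy.optimize.linprog` (an assumption made for illustration; the paper's own experiments use the POT library instead):

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_tv(atoms_p, w_p, atoms_q, w_q):
    """Wasserstein divergence between two finitely supported distributions,
    with ground cost c(x, y) = 0.5 * ||x - y||_1 (total variation)."""
    n, m = len(atoms_p), len(atoms_q)
    cost = np.array([[0.5 * np.abs(np.asarray(x) - np.asarray(y)).sum()
                      for y in atoms_q] for x in atoms_p])
    A_eq, b_eq = [], []
    for i in range(n):                       # row marginals must equal w_p
        row = np.zeros(n * m)
        row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row)
        b_eq.append(w_p[i])
    for j in range(m):                       # column marginals must equal w_q
        col = np.zeros(n * m)
        col[j::m] = 1.0
        A_eq.append(col)
        b_eq.append(w_q[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0.0, None), method="highs")
    return res.fun
```

For identical distributions the optimal plan is the identity coupling with cost 0; for point masses on opposite simplex vertices the full mass must be transported, giving the maximal cost 1.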
# Method
At a high level, our method works by maintaining an epistemic belief over $\alpha$-ranks and selecting payoffs that lead to the maximum reduction in the entropy of that belief. Figure 1 provides a pictorial overview. In the middle of the figure, we maintain an explicit distribution over the entries of the payoff matrix. This payoff distribution induces a belief over $\alpha$-ranks, shown on the left. When deciding which payoff to sample, we examine hypothetical belief states after sampling, striving to end up with a belief with the lowest entropy. One such hypothetical, or 'hallucinated', belief is shown on the right. We now describe our method formally, first describing the probabilistic model and then the implementation.



Figure 1: **Overview:** On the left, a belief over $\alpha$-ranks is induced by a belief over the payoff matrix shown in the middle. A hallucinated belief distribution is shown on the right.
**Payoffs: Ground Truth and Belief** We denote the unknown true payoff matrix as $M_{\star}$. To quantify our uncertainty about the true payoff, we employ a Gaussian Process $M$, which also allows us to encode prior knowledge about payoff dependencies. Our framework is sufficiently general to allow other approaches, such as Bayesian Matrix Factorization (Salakhutdinov and Mnih 2008) or probabilistic methods for Collaborative Filtering (Su and Khoshgoftaar 2009), to be used. We choose Gaussian Processes due to their flexibility in encoding prior knowledge and modelling assumptions, and their ubiquity throughout the literature.
The GP models noise in the payoffs as $\hat{M}=M+\epsilon$, where $\epsilon \sim \mathcal{N}(0,I\sigma_A^2)$. When interacting with the game sequentially, the received payoffs are assumed to be generated as $m_t=M_\star(a_t)+\epsilon_t'$. Here, the $\epsilon_t'$ are i.i.d. random variables with support on the interval $[-\sigma_A,\sigma_A]$. While it may at first seem surprising that we use Gaussian observation noise in the GP model while assuming truncated observation noise for the actual observations, this does not in fact affect our theoretical guarantees. We provide more details in Section 6. We denote the history of interactions at time $t$ by $H_t$. Because of randomness in the observations, $H_t$ is a random variable, and the sequence of random variables $H_1, H_2, \ldots$ forms a filtration. We use the symbol $h_t$ to denote a particular realization of the history, so that $h_t=a_1,m_1,\ldots,a_{t-1},m_{t-1}$.
**Belief over $\alpha$-Ranks** Our explicit distribution over the entries of the payoff matrix $M$ induces an implicit belief distribution over the $\alpha$-ranks. For all valid $\alpha$-ranks $r$, $P(r) = P(M \in f^{-1}(r))$, where $f^{-1}$ denotes the pre-image of $r$ under $f$. In other words, the probability assigned to an $\alpha$-rank $r$ is the probability our belief over the payoffs assigns to its pre-image. Since $r$ is represented implicitly, we cannot query its mass function directly. Instead, we access $r$ via sampling: we first draw a payoff matrix $m \sim M$ and then compute the resulting $\alpha$-rank $f(m)$.
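This sampling access to the induced belief can be sketched as follows. The sketch is illustrative only: `f` stands in for the $\alpha$-rank map, and a full-covariance Gaussian over all $S \times S$ entries stands in for the GP posterior:

```python
import numpy as np

def sample_rank_belief(post_mean, post_cov, f, n_samples, seed=0):
    """Sample from the induced belief over alpha-ranks: draw payoff
    matrices from a Gaussian belief over all S*S entries, then push
    each draw through the alpha-rank map f."""
    rng = np.random.default_rng(seed)
    S = post_mean.shape[0]
    draws = rng.multivariate_normal(post_mean.ravel(), post_cov, size=n_samples)
    return [f(m.reshape(S, S)) for m in draws]
```

As the posterior covariance shrinks, the sampled $\alpha$-ranks concentrate on $f$ applied to the posterior mean.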
**Picking Payoffs to Query** At time t, we query the payoff that provides us with the largest information gain about the $\alpha$ -rank. Formally,
$$\begin{aligned}
a_{t} &= \underset{a}{\operatorname{arg\,max}}\ \mathbb{I}\big(r; (\tilde{M}_{t}(a), a) \mid H_{t} = h_{t}\big) &&(1)\\
&= \underset{a}{\operatorname{arg\,max}}\ \mathbb{H}(r \mid H_{t} = h_{t}) - \underset{\tilde{m}_{t} \sim \tilde{M}_{t}(a)}{\mathbb{E}} \left[ \mathbb{H}\left(r \mid H_{t} = h_{t}, A_{t} = a, \tilde{M}_{t}(a) = \tilde{m}_{t}\right) \right] &&\\
&= \underset{a}{\operatorname{arg\,min}}\ \underset{\tilde{m}_{t} \sim \tilde{M}_{t}(a)}{\mathbb{E}} \left[ \mathbb{H}\left(r \mid H_{t} = h_{t}, A_{t} = a, \tilde{M}_{t}(a) = \tilde{m}_{t}\right) \right]. &&(2)
\end{aligned}$$
In Equation (1), $\mathbb{H}\left(r\mid H_t=h_t\right)$ is the entropy of our current belief distribution over $\alpha$ -ranks, which does not depend on a and can be dropped from the maximization, producing Equation (2). The expectation in (2) has an intuitive interpretation as the expected negative entropy of our *hallucinated* belief, i.e. belief obtained by conditioning on a sample $\tilde{m}_t$ from the current model. In essence, we are pretending to receive a sample for entry a, and then computing what our resulting belief over $\alpha$ -ranks will be. By picking the entry as in (2), we are picking the entry whose sample will lead to the largest reduction in the entropy of our belief over $\alpha$ -ranks in expectation.
**Algorithm 1** $\alpha$ IG algorithm. $\alpha$ IG(NSB) and $\alpha$ IG(Bin) variants differ in entropy estimator (Line 7).
```
 1: for t = 1, 2, ..., T do
 2:   for a = 1, 2, ..., |S| do
 3:     for i = 1, 2, ..., N_e do
 4:       \tilde{m}_t ~ \tilde{M}_t(a)                    ▷ 'Hallucinate' a payoff.
 5:       Obtain hallucinated posterior payoff:
            P(\hat{M}_t | H_t = h_t, A_t = a, \tilde{M}_t(a) = \tilde{m}_t)
 6:       D = {r_1, ..., r_{N_b}}, where r_i ~ f(\hat{M}_t) i.i.d.
 7:       \hat{h}_a^i = ESTIMATE-ENTROPY(D)
 8:     end for
 9:     \hat{h}_a = (1 / N_e) * sum_{i=1}^{N_e} \hat{h}_a^i
10:   end for
11:   Query payoff a_t = arg min_a \hat{h}_a              ⊳ Implements Eq. (2).
12: end for
```
**Implementation** Our algorithm, which we refer to as $\alpha$IG, is summarized in Algorithm 1. At a high level, $\alpha$IG selects an action/payoff to query at each timestep (Line 1). In order to select a payoff to query as in Equation (2), we must approximate the expectation for each payoff (Line 2). In Line 4, we use our epistemic model to obtain a 'hallucinated' outcome $\tilde{m}_t$, as if we had received a sample from selecting payoff $a$ at timestep $t$. In Line 5, we condition our epistemic model on this 'hallucinated' sample $\tilde{m}_t$ in order to obtain our 'hallucinated' posterior over payoffs $\hat{M}_t$. In Line 7, we empirically estimate the entropy of the resulting induced belief distribution over $\alpha$-ranks. To approximate the expectation in (2), we average the entropy estimates obtained from $N_e$ different possible hallucinated payoffs in Line 9. Finally, in Line 11, we use these estimates to perform query selection as in (2), selecting a payoff to query at timestep $t$.
Our algorithm depends on an entropy estimator, ESTIMATE-ENTROPY, used in Line 7. We present results for two different entropy estimators: simple binning and NSB. The simple binning estimator estimates the entropy using a histogram. For comparison, we also use NSB (Nemenman, Shafee, and Bialek 2002), an entropy estimator designed to produce better estimates in the small-data regime.
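The simple binning estimator amounts to a plug-in histogram estimate over discretized $\alpha$-rank samples. A minimal sketch (illustrative only; the bin count is an arbitrary choice here, and the NSB variant would replace the final plug-in formula with the NSB estimate over the same counts):

```python
import numpy as np

def binning_entropy(rank_samples, n_bins=10):
    """Plug-in (histogram) entropy estimate for sampled alpha-ranks.
    Each sample is a point on the simplex; every coordinate is binned
    into n_bins equal-width bins, and samples in the same cell are
    treated as the same outcome."""
    samples = np.asarray(rank_samples)
    cells = np.minimum((samples * n_bins).astype(int), n_bins - 1)
    _, counts = np.unique(cells, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())
```

Identical samples give zero entropy, while samples split evenly between two cells give $\log 2$, matching the plug-in formula.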
**Computational Requirements** The main computational bottleneck of our algorithm is the calculation of $\alpha$-rank in Line 6 of Algorithm 1. In order to perform query selection as in (2), we must compute the $\alpha$-rank $|S| \times N_e \times N_b$ times. For our experiments on the 4x4 Gaussian game, this results in $16 \times 10 \times 500 = 80{,}000$ computations of $\alpha$-rank (setting $N_e = 10, N_b = 500$) to select a payoff to query. Relative to ResponseGraphUCB, our method thus requires significantly more computation to select a payoff to query. However, in Empirical Game Theory, obtaining samples from the game is commonly assumed to be very computationally expensive (which is true in many potential practical applications (Berner et al. 2019; Silver et al. 2017; Vinyals et al. 2019)). The increased computation required by our method to select a payoff to sample should then have a negligible impact on the overall computation time, while the increased sample efficiency could potentially lead to large speed-ups.
We perform two simple optimizations when deploying the algorithm in practice. To save computational cost, we observe the same payoff $N_r$ times in Line 11 rather than once, similar to rarely-switching bandits (Abbasi-yadkori, Pál, and Szepesvári 2011). Moreover, the number of samples $N_b$ we can use to estimate the entropy is limited due to the computational cost of computing $\alpha$ -rank. In order to obtain better differentiation between the entropy of beliefs arising from sampling different payoffs, we heuristically perform conditioning in Line 5 $N_c$ times. See Appendix B for a more detailed discussion on this.
While the query objective proposed in (2) is backed both by an appealing intuition and a theoretical argument (see Section 6), it can be expensive to evaluate due to the cost of accurate entropy estimation. To address this difficulty, we also investigate an alternative involving the Wasserstein distance. The objective we consider is
$$\arg\max_{a} \mathbb{E}_{\tilde{m}_{t} \sim \tilde{M}_{t}} [\mathcal{W}(P(r|H_{t} = h_{t}), P(r|H_{t} = h_{t}, A_{t} = a, \tilde{M}_{t}(a) = \tilde{m}_{t}))]. \quad (3)$$
Since the computation of Wasserstein distance from empirical distributions can be achieved by solving a linear program (Bonneel et al. 2011), Equation (3) naturally lends itself to being approximated via samples. In our implementation, we use POT (Flamary and Courty 2017) to approximate this distance.

Figure 2: Diagram depicting the current belief (Blue) and 2 different hallucinated beliefs (Red). We are assuming a discrete distribution over $\alpha$ -ranks, where the belief is uniform across the relevant circles.
The Wasserstein distance is built on the notion of cost, which allows a practitioner the opportunity to supply additional prior knowledge. In our case, since $\alpha$ -ranks are probability distributions, a natural way to measure accuracy is to use the total variation distance, which corresponds to setting the cost to $c(x,y) = \frac{1}{2} \|x-y\|_1$ . On the other hand, in cases where we are interested in finding the relative ordering of agents under the $\alpha$ -rank, an alternative cost such as the Kendall Tau metric (Fagin et al. 2006) could be used. While we emphasize the ability of the Wasserstein divergence to work with any cost, we leave the empirical study of non-standard costs for future work.
It is important to note that the objective in (3) is qualitatively different to the information gain objective proposed in (2). Figure 2 provides a diagram illustrating a major difference between the two objectives. The entropy for both belief distributions shown in red is the same. In contrast, the Wasserstein distance in (3) between the current belief in blue and the hallucinated belief in red is much smaller for the distribution on the left compared to the distribution on the right.
**Notions of Regret** We quantify the performance of our method by measuring regret. Our main analysis relies on Bayesian regret (Russo and van Roy 2018), defined as
$$J_t^B = 1 - \mathbb{E}_{h_t} \left[ P(r = r_{\star} | H_t = h_t) \right], \tag{4}$$
where we use $r_\star$ to denote the $\alpha$-rank with the highest probability under $r$ at time $t$. In (4), the expectation is over realizations of the observation model. Since $J_t^B$, like all purely Bayesian notions, does not involve the ground-truth payoff, we need to justify its practical relevance. We do this by benchmarking it against two notions of frequentist regret. The first measures how accurate the probability we assign to the ground truth $r_{\rm GT} = f(M_\star)$ is
$$J_t^F = 1 - \mathbb{E}_{h_t} \left[ P(r = r_{\text{GT}} | H_t = h_t) \right]. \tag{5}$$
The second measures if the mean of our payoff belief, which we denote $M_\mu$ , evaluates to the correct $\alpha$ -rank
$$J_t^M = 1 - \mathbb{E}_{h_t} \left[ \delta \left[ f(M_\mu) = r_{\text{GT}} \right] \right], \tag{6}$$
where the symbol $\delta$ [predicate] evaluates to 1 or 0 depending on whether the predicate is true or false. In Section 7, we empirically conclude that these three notions of regret are closely coupled in practice, changing at a comparable rate.
**Regret Bounds** As an intermediate step before discussing information gain on the $\alpha$ -ranks, we first analyze the behavior of a query selection rule which maximizes information gain over the payoffs.
$$\pi_{\text{IGM}}(a|H_t = h_t) = \underset{a}{\arg\max} \mathbb{I}(\tilde{M}_t; (\tilde{M}_t(a), a) \mid H_t = h_t). \tag{7}$$
The following result shows that using sampling strategy $\pi_{\rm IGM}$ for T timesteps leads to a decay in regret of at least $Te^{\mathcal{O}(-\sqrt[3]{\Delta^2T})}$ , proving it will incur no regret as $T\to\infty$ .
**Proposition 1** (Regret Bound For Information Gain on Payoffs). *If we select actions using strategy* $\pi_{IGM}$ , the regret at timestep T is bounded as
$$J_T^B \le J_T^F = 1 - \mathbb{E}_{h_T} \left[ P(r = r_{GT} | H_T = h_T) \right] \le T e^{g(T)} \quad (8)$$
$$where \quad g(T) = \mathcal{O}(-\sqrt[3]{\Delta^2 T}).$$
The proof and an explicit form of $g$ can be found in the supplementary material. We now proceed to our second result, where we maximize information gain on the $\alpha$-ranks directly. Consider a querying strategy that extends (1) to $T$-step look-ahead, defined as
$$\pi_{\text{IGR}} = \underset{a_1, \dots, a_T}{\arg \max} \mathbb{I}(r; (\tilde{M}_1(a_1), a_1), \dots, (\tilde{M}_T(a_T), a_T)). \tag{9}$$
We quantify the regret achieved by $\pi_{IGR}$ in the proposition below.
**Proposition 2** (Regret Bound For Information Gain on Belief over $\alpha$ -Ranks). *If we select actions using strategy* $\pi_{IGR}$ , regret is bounded as
$$J_T^B = 1 - P(r = r_{\star}|H_T = h_T) \to 0 \text{ as } T \to \infty.$$
Proposition 2 provides a theoretical justification for querying the strategies that maximize information gain on the $\alpha$ -ranks. A more explicit regret bound (similar to Proposition 1) and the proof are provided in Appendix E. In practice, to avoid the combinatorial expense of selecting action sequences using $\pi_{\rm IGR}$ , we use the greedy query selection strategy in equation (1). While the regret result above does not carry over, this idealized setting at least provides some justification for information gain as a query selection criterion.
2104.04466/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="Electron" modified="2021-05-08T11:33:02.776Z" agent="5.0 (Macintosh; Intel Mac OS X 11_3_0) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.4.3 Chrome/87.0.4280.141 Electron/11.3.0 Safari/537.36" etag="-eZ39sSU2FRokuF2C16v" version="14.4.3" type="device"><diagram id="oaEWWxo9mnoRLH4iQtNR" name="Page-1">zVjfc6IwEP5rnLl7wEEQxEdra/vSm5vp3NzjTYQImQbChSB6f/1tIPwStFJqpz448dvNZvfbzcI6Mdfh4ZGjOHhmHqYTQ/cOE/N+YhgLy4RvCRwLwJrNC8DnxCugWQ28kH9YgbpCU+LhpKUoGKOCxG3QZVGEXdHCEOcsa6vtGG2fGiMfd4AXF9Eu+pt4IihQx1jU+BMmflCePLOXhSREpbKKJAmQx7IGZD5MzDVnTBSr8LDGVHJX8lLs25yRVo5xHIlrNojjzwV6+mVEm9CJ/xyC5+d4rVmFlT2iqQpYOSuOJQOcpZGHpZHZxLzLAiLwS4xcKc0g5YAFIqRKnAjOXvGaUcbz3aZnb23LBsmOUNrAN5vNaqMD3g1DRbbHXOBDA1JhPWIWYsGPoKKkhq4oLmvMUmFldcbmttIJGtkyHKWIVJX4le2aSFgoLgfwuujh1aZw7J1H9rD0RR66jUJJYC7gOBEo5QhoAD4wiyEBpUKhbq7g7ujtXZqmzaY6fJ/onuoFTOQX87zhwj2Itu3heKetWzlt3dBp+1ZO21c7Pc0/Q2I8B+1YHqpbXT/7b8oKBVOHz3LZhN6kDO4q6sSVsBTu2ljSLpmueCjCaYc4pgQ8dOycuYOnkYRHxnPJ9G0cT5BI+a1cr41fcv6rl2rEeH+p6tOlNbJWle131OoN+sHAnmf2c+IsRzc9s/Rvyz8h7CsKybblO8OAQoo5cTGs/W5wboDhNaifOmcAdW+ecLGoPoXZIY0ow16EkzOdSJ8uHHtcL2rY/wIM9DYDF95pOe6P3x7QaeDt+JVEfsf+ESaj9wf/+TcFH2IcJWR/jpLZh9yWxim3uTHDm9GI7MitjXGsTITK1gqE3N9+03OiYFbRG6vvcplzKMPXdigk9FjsqUiDHJvz3F7KCZZnRTjrSNuGknxKl2ZmRnw4kRWeSiE8CkNE2+JMTV9SPi98zYUUC4G5lsBgKQu9bz8Mg0JDlPhRIaZ4J9pCAnNqpIzrDcdyoYBiSXZgsjQe4UohY9xrn93cvkXuq59PwdoJ68bcqdg25st6bUnue+5Mb+pP5m3p7eTCUJ173p6nFaTIuZdtB1Jp3skRmriIrpQgJJ4nj+md4us5X1dNYaMK5n5dFccPKI7S54+d2R39ypm9BAfM7PCz/p8llzX+rDIf/gM=</diagram></mxfile>
2104.04466/main_diagram/main_diagram.pdf
ADDED
Binary file (24.4 kB).
2104.04466/paper_text/intro_method.md
ADDED
@@ -0,0 +1,24 @@
# Introduction
This paper investigates two aspects of dialogue state tracking (DST) for multi-domain task-oriented dialogue [\(Budzianowski et al.,](#page-8-0) [2018\)](#page-8-0). We present a novel hybrid architecture that augments GPT-2 [\(Radford et al.,](#page-9-0) [2019\)](#page-9-0) with dialogue act representations derived from Graph Attention Networks (GATs) [\(Veličković et al.,](#page-9-1) [2018\)](#page-9-1) in a way that allows causal, sequential prediction of slot values while explicitly modelling the relationships between slots and values across domains. Our approach uses GATs to improve predictions of values that are shared across domain-slots and that might otherwise be treated independently. As a related line of work, we investigate a form of sparsely supervised DST training and find that our hybrid architecture offers improved robustness under weak supervision.
DST can be improved by modelling the relationship between slots and values across domains. This has been explored recently by [Zhou and](#page-9-2) [Small](#page-9-2) [\(2019\)](#page-9-2), who suggest three types of relationships between domain-slot pairs that can be modelled explicitly: (1) pairs that share the same candidate set, such as <restaurant-bookday> and <hotel-bookday>; (2) pairs whose candidate values are subsets, as could happen with <restaurant-name> and <taxi-destination> if the candidate set of the first belongs to that of the second; and (3) correlated values between domain-slot pairs, such as when the 'star' level of a booked hotel correlates with the price range of a reserved restaurant.
|
| 6 |
+
|
| 7 |
+
Graph Neural Networks (GNNs) have been proposed to capture the interactions among slots and values and to improve DST performance [\(Zhou and](#page-9-2) [Small,](#page-9-2) [2019;](#page-9-2) [Chen et al.,](#page-8-1) [2020;](#page-8-1) [Wu et al.,](#page-9-3) [2020\)](#page-9-3). These relationships can be represented as edges in graph-based models, where domains, slots, and values are nodes. However, previous work has not explored quantitatively or in depth how graph models utilize the relationships they model. [Chen et al.](#page-8-1) [\(2020\)](#page-8-1) and [Wu et al.](#page-9-3) [\(2020\)](#page-9-3) provide example cases where the predictions of correlated values were potentially enhanced by their model, while [Zhou and Small](#page-9-2) [\(2019\)](#page-9-2) and [Zhu et al.](#page-9-4) [\(2020\)](#page-9-4) present ablation studies showing marginal improvements brought by their graph modules. [Zhu et al.](#page-9-4) [\(2020\)](#page-9-4) and [Wu et al.](#page-9-3) [\(2020\)](#page-9-3) further show joint accuracies over different dialogue turns, but there is more that can be said about how GATs can improve DST. One of the aims of this paper is to more deeply analyze how graph models can lead to improved DST on top of an already good GPT-2 baseline system.
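The graph attention mechanism these models build on can be sketched in a few lines. Below is a minimal, illustrative single-head GAT layer in NumPy; it is not the hybrid architecture proposed in this paper, and the node features, slot graph, and weights are arbitrary stand-ins:

```python
import numpy as np

def gat_layer(H, A, W, a, alpha=0.2):
    """One single-head GAT layer (after Veličković et al., 2018).

    H: (N, F) node features; A: (N, N) 0/1 adjacency with self-loops;
    W: (F, Fp) projection; a: (2*Fp,) attention vector.
    Returns updated features and the (N, N) attention matrix."""
    Z = H @ W                                   # projected node features
    N = Z.shape[0]
    # e[i, j] = LeakyReLU(a^T [z_i || z_j]) for every ordered node pair
    e = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            e[i, j] = a @ np.concatenate([Z[i], Z[j]])
    e = np.where(e > 0, e, alpha * e)           # LeakyReLU
    e = np.where(A > 0, e, -1e9)                # mask non-edges
    e = e - e.max(axis=1, keepdims=True)        # stable softmax over neighbours
    att = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)
    return att @ Z, att

rng = np.random.default_rng(0)
N, F, Fp = 5, 4, 3
A = np.eye(N)                                   # self-loops
A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = 1.0     # a tiny hypothetical slot graph
H = rng.normal(size=(N, F))
out, att = gat_layer(H, A, rng.normal(size=(F, Fp)), rng.normal(size=2 * Fp))
```

Each node's updated representation is a convex combination of its neighbours' projected features, which is how information about related domain-slots can be shared before prediction.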
|
| 8 |
+
|
| 9 |
+
Graph models may also compensate for some potential drawbacks associated with using generative models for DST. As a well-known generative model, GPT-2 offers powerful, left-to-right
|
| 10 |
+
|
| 11 |
+
generation incorporating a causal attention mechanism. We note that [Hosseini-Asl et al.](#page-8-2) [\(2020\)](#page-8-2) have demonstrated that GPT-2 can identify slot values as a prediction task, with variable-length token sequences produced sequentially with interspersed special tokens indicating slot boundaries. The ability to easily generate token sequences of arbitrary lengths is a valuable feature of the model, although it may come at the expense of modelling power relative to models with non-causal attention mechanisms, such as BERT [\(Devlin et al.,](#page-8-3) [2019;](#page-8-3) [Shan](#page-9-5) [et al.,](#page-9-5) [2020\)](#page-9-5). In particular, GPT-2's causality means that the prediction of later slot values can depend explicitly on previously predicted slot values, but not the reverse. This can lead to decreased performance in predicting slot values that occur early on. We find that augmenting GPT-2 prediction with representations derived from GATs allows some sharing of information between slots prior to prediction, mitigating this limitation of GPT-2.
|
| 12 |
+
|
| 13 |
+
Capturing the relationships of slot values across domains also offers the opportunity to make better use of limited training data, particularly in sparsely supervised and weakly supervised scenarios [\(Liang](#page-9-6) [et al.,](#page-9-6) [2021\)](#page-9-6). In a 'Last Turn' annotation scenario, annotations are available only for the final turn of a task-oriented dialogue. This is unlike the fully-annotated MultiWOZ setting, which offers turn-level annotations throughout the entire dialogue session. As an annotation option, generating summary annotations at the completion of a recorded session is an attractive alternative to creating a detailed, turn-by-turn annotation of the entire dialogue [\(Liang et al.,](#page-9-6) [2021\)](#page-9-6). If it is possible to use only these session-level annotations to train a DST system that still achieves acceptable tracking performance, the chore of creating new annotated DST datasets could be made much easier. The challenges in using this summary data are significant, however. Using only the final-turn annotations in MultiWOZ 2.0 reduces the training set to 14.3% of its original size (in annotated turns).
|
| 14 |
+
|
| 15 |
+
We summarize the contributions of our work as follows:
|
| 16 |
+
|
| 17 |
+
- (1) We propose a novel hybrid architecture that integrates GPT-2 with Graph Attention Networks (GATs) for dialogue state tracking. The model is shown to be robust when training samples are significantly reduced under sparse supervision.
|
| 18 |
+
- (2) We demonstrate that our architecture also mitigates a limitation of DSTs based on GPT-2 alone, associated with generating domain-slot values in a left-to-right manner.
|
| 21 |
+
|
| 22 |
+
- (3) We investigate how knowledge-aware models capture relationships between domain-slots and show how using graphs can improve prediction of inter-dependent slot values.
|
| 23 |
+
|
| 24 |
+
While we do show DST accuracy improvements over a strong GPT-2 baseline, we emphasise that our aim is mainly to investigate and improve prediction of domain-slot values using relationships that otherwise are left unmodelled by the baseline.
|
2105.01203/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2105.01203/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,21 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Event cameras are bio-inspired vision sensors designed to generate image frames asynchronously based on scenic events [\[1\]](#page-6-0). In contrast to conventional camera sensors, where raw frame pixels are streamed to a backend processor at a fixed rate, event-based cameras generate output only when there is a new event. Recently, researchers have been seeking novel methodologies to incorporate machine learning models (in particular CNNs) into the image sensor [\[2\]](#page-6-1) [\[3\]](#page-6-2). This has revived interest in event cameras to facilitate efficient dataflow between the sensor and the near-sensor processing system. However, novel algorithms and methods are required to process the unorthodox data streams from these vision sensors and unlock their full potential [\[4\]](#page-6-3). Researchers working in this domain face two major challenges. First, few event cameras are available on the market, limiting research to a handful of applications. Second, the commercially available event cameras suffer from various drawbacks, such as low resolution and a lack of reconfigurability.
|
| 4 |
+
|
| 5 |
+
Several camera simulators have been proposed in the literature to accommodate these research demands [\[5\]](#page-6-4) [\[6\]](#page-6-5). For instance, the authors of [\[5\]](#page-6-4) presented ESIM, a camera simulator
|
| 6 |
+
|
| 7 |
+

|
| 8 |
+
|
| 9 |
+
<span id="page-0-0"></span>Fig. 1. Region-based event camera simulator designed to accommodate inference processing near the sensor
|
| 10 |
+
|
| 11 |
+
that resembles an event camera's behavior. The simulator integrates an adaptive rendering scheme that only samples frames when necessary. In addition to generating events, the simulator can produce a depth map, motion field, and camera trajectory. However, the simulator was developed for robotics applications and was not specifically designed to explore inference architectures near the sensor. Therefore, any in-sensor high-level processing engine that aims to leverage the event sensor in its processing pipeline will fail to utilize the full potential of the events generated by this camera simulator. At best, the simulator would allow the inference engine to activate only whenever a new event is detected on the sensor interface. However, at each iteration the full image will be processed by the inference engine, regardless of the size of the ROI (Region-Of-Interest). The newest developments in imaging technology have brought forth parallel-processing image sensors that can be combined with an inference engine to provide high-performance computation models near the sensor [\[1\]](#page-6-0) [\[7\]](#page-6-6) [\[8\]](#page-6-7). By tightly coupling computation on the inference layer to specific image regions, it is possible to improve the computational capabilities of these systems and reduce data communication. Nevertheless, a suitable platform is required to explore the design space of such architectures.
|
| 12 |
+
|
| 13 |
+
In this paper, we present a novel event camera simulator that simulates a per-pixel image sensor's behavior, aiming to accommodate CNN inference at the sensor interface. The events captured in the simulator are identified at the region level. Therefore, only specific regions are forwarded to the following computation layer, activating the inference engine minimally (shown in figure [1\)](#page-0-0). Similar to the work mentioned above, our rendering module samples image frames whenever there is a new event. However, instead of sampling the complete image, only the respective event regions are sampled. The simulator can generate valid event data from a video stream, which can be used to model and train event-based learning models. We have prototyped the simulator's computation module on an FPGA to estimate the hardware cost. Our evaluation results suggest that our event-based camera approach can significantly reduce computation with minimal hardware overhead.
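The region-level event detection described above can be sketched as follows: compare consecutive frames, and emit only those tiles whose intensity change exceeds a threshold. The region size and threshold below are illustrative placeholders, not parameters of the actual simulator:

```python
import numpy as np

def region_events(prev, curr, region=8, threshold=15.0):
    """Return (row, col, pixels) for every region x region tile whose mean
    absolute intensity change between two grayscale frames exceeds
    `threshold`. Only these tiles would be forwarded to the inference
    engine; all other tiles produce no event."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    H, W = curr.shape
    events = []
    for y in range(0, H, region):
        for x in range(0, W, region):
            if diff[y:y + region, x:x + region].mean() > threshold:
                events.append((y, x, curr[y:y + region, x:x + region]))
    return events

# Synthetic example: activity confined to one 8x8 corner region
prev = np.zeros((32, 32), dtype=np.uint8)
curr = prev.copy()
curr[0:8, 0:8] = 200
evts = region_events(prev, curr)
```

Only one of the sixteen tiles fires here, so a downstream CNN would touch 1/16 of the frame rather than all of it.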
|
| 14 |
+
|
| 15 |
+
The main contributions of this paper are:
|
| 16 |
+
|
| 17 |
+
- A novel camera simulator design that identifies events on a region basis and facilitates a suitable interface for inference architectures.
|
| 18 |
+
- A thorough evaluation of our region-level relevance computation model to highlight its significance.
|
| 19 |
+
- An FPGA prototype of the relevance computation model to quantify the hardware overhead of our approach.
|
| 20 |
+
|
| 21 |
+
The remaining sections of this paper are organized as follows. Section [II](#page-1-0) discusses related work. Section [III](#page-1-1) provides a detailed explanation of our design. We evaluate our model in Section [IV.](#page-3-0)
|
2106.03357/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="Electron" modified="2021-04-07T11:11:34.365Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.5.1 Chrome/89.0.4389.82 Electron/12.0.1 Safari/537.36" etag="zHZNnzQRPCUoI1U6UrGQ" version="14.5.1" type="device"><diagram name="Page-1" id="74e2e168-ea6b-b213-b513-2b3c1d86103e">7Zttb5s6FMc/TaTdF5sAAyUv27TdJt1dbbebJu2dAw5YdTA1TvPw6XcwJkAgTdomoRupIhUf28f4/E4M/E0GaDRdfBQ4ib7wgLCBZQSLAboeWJZpGi78yyzL3DI0jNwQChroRqXhjq6INhbNZjQgaa2h5JxJmtSNPo9j4suaDQvB5/VmE87qoyY4JA3DnY9Z0/qTBjJaz8soKz4RGkZ6aM/RFWPs34eCz2I93sBCE/WXV09x4Uu3TyMc8HnFhG4GaCQ4l/nRdDEiLIttEba83+2W2vV5CxLLfTp8mpNf/vL+24/Vl4eH2+urW7xcvffs3M0jZjNSzEOdrVwWEYITT7JDiceZ6SqSUwZFEw7nEZXkLsF+Vj+HBAFbKrGQmrMBZQAnMY2J0H18zhhOUqqc5S0iyoJ/8ZLPZDFMUbqaUMZGnHGhzgUiTFzfV6MIfk8qNcHFcGxk3vSEiJBksTVU5hoAJDbhUyLFEproDghpZjqnHVuX52WGrG1RJTkKG9ZJGa5dl2DgQLN5BifTcrZygXlJitn/8O3AcbgPoiaCQPDkOxYhkdqQcBpLIm4eIWSptlVZxDzOBpI80ZWMTIq+Yy4ln+qC0NFZO1Whca7gA8EaGR+cgQOzGUHZLMvwyZoLOeIxoIb8yXwQnMo5SeW+lJ9I+Sb7Dtm6B2Wr1klcsn01Ng5hnjC1dkU0CEj8qvjrVH4SgHva+F+c418ufm5zrTsxDu+Mo8RhWp3zGJ55VHh4XfMobk7OPLJOVsu92Yl5mGcelXtno3Me2x9h+sij8+s5QmceJQ+78+s5ss88Kjy6v57rR/uaBuMypX5wmFuVlPsw40XF+1SJLJfQwELJoqyEozD7/x8XU8zoisYhtLnNQmoZSsFTlnyIsSiaf46z+CpRIJOvkiTvt4KBv79b/FN0gEnmp5X3auSSksRI8ETOVFJqQ94JHOIFdpu841lj5LqHkXfsQsLT8o41bKYAMltywHSPlwRuSxK8YSEuwMSbtApxru+R8eQ4Qpx50bkQhw4rFvRYiNM5/5aUONQv6SHP5TekxKF+SQ3b4v9WlDi7X0rDDhzdK3F2v5SGXTw6v3O3+6U07ODRvRJn90tp2MGjeyXO7pfSsItH99fzw75E8Ifz6F6Js/u18b+LR/fXc/1s/wolDhltStxiX/EMgirrqOv6isbYQhYzGgKfa59kcgEYlJjnY3apK6ZAUOVRW/KUop1xGOVm/X5f8QqV04Tb9mxvHQ+udyS4q97DbZPlTgu3yK6Dw/36brW38v3X8rU6//I65tH47r+z8dfydbvn+4z3kCeMLC6zN78hGCQO9OG1z3CaUr8OKHdCgsY74DtDVRWhW0JR2ARhWNLHuvu2+OgRvmbKe4XExgaHizZCnPKZ8InuVUZ5pyNnuOFIqk2FhiOFaz3t1xBs28XsA0Fr+MGpr5abN6h7M2y6Mk5N8RnbkDspqg3IPxquM2wQ8V4It8XV5nJ6dLhtjzdnuJVFc5PIy+Ham68KvBguFMsfCeXNy19ioZvf</diagram></mxfile>
|
2106.03357/main_diagram/main_diagram.pdf
ADDED
|
Binary file (11.3 kB). View file
|
|
|
2106.03357/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,169 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Benchmark datasets and leaderboards are prevalent in machine learning's common task framework [@Donoho2019]; however, this approach inherently relies on relative measures of improvement. It may therefore be insightful to be able to evaluate state-of-the-art (SOTA) performance against the optimal performance theoretically achievable by *any* model [@VarshneyKS2019]. For supervised classification tasks, this optimal performance is captured by the Bayes error rate which, were it tractable, would not only give absolute benchmarks, rather than just comparing to previous classifiers, but also insights into dataset hardness [@HoB2002; @ZhangWNXS2020] and which gaps between SOTA and optimal the community may fruitfully try to close.
|
| 4 |
+
|
| 5 |
+
Suppose we have data generated as $(X, Y) \sim p$, where $X\in \mathbb{R}^d$, $Y\in \mathcal{Y}=\{1,\dots, K\}$ is a label and $p$ is a distribution over $\mathbb{R}^d\times \mathcal{Y}$. The **Bayes classifier** is the rule which assigns a label to an observation $\mathbf{x}$ via $$\begin{align}
|
| 6 |
+
y = C_{\text{Bayes}}(\mathbf{x}) := \mathop{\mathrm{arg\,max}}_{j\in \mathcal{Y}} p(Y=j \mid X=\mathbf{x}).
|
| 7 |
+
\end{align}$$ The **Bayes error** is simply the probability that the Bayes classifier predicts incorrectly: $$\begin{align}
|
| 8 |
+
\mathcal{E}_\text{Bayes}(p) := p(C_\text{Bayes}(X) \neq Y).
|
| 9 |
+
\end{align}$$ The Bayes classifier is optimal, in the sense that it minimizes $p(C(X)\neq Y)$ over all possible classifiers $C:\mathbb{R}^d \rightarrow \mathcal{Y}$. Therefore, the Bayes error is a natural measure of the 'hardness' of a particular learning task. Knowing $\mathcal{E}_\text{Bayes}$ should interest practitioners: it gives a natural benchmark for the performance of any trained classifier. In particular, in the era of deep learning, where vast amounts of resources are expended to develop improved models and architectures, it is of great interest to know whether it is even theoretically possible to substantially lower the test errors of state-of-the-art models, cf. [@CostelloF2007].
|
| 10 |
+
|
| 11 |
+
Of course, obtaining the exact Bayes error will almost always be intractable for real-world classification tasks, as it requires full knowledge of the distribution $p$. A variety of works have developed estimators for the Bayes error, either based on upper and/or lower bounds [@berisha16] or exploiting exact representations of the Bayes error [@noshad2019learning; @NIELSEN201425]. Most of these bounds and/or representations are in terms of some type of *distance* or *divergence* between the class conditional distributions, $$\begin{align}
|
| 12 |
+
p_j(\mathbf{x}) := p(X=\mathbf{x}\mid Y=j),
|
| 13 |
+
\end{align}$$ and/or the marginal label distributions $\pi_j := p(Y=j)$. For example, there are exact representations of the Bayes error in terms of a particular $f$-divergence [@noshad2019learning], and in a special case in terms of the total variation distance [@NIELSEN201425]. More generally, there are lower and upper bounds known for the Bayes error in terms of the Bhattacharyya distance [@berisha16; @NIELSEN201425], various $f$-divergences [@moon14], the Henze-Penrose (HP) divergence [@Moon18; @moon15], as well as others. Once one has chosen a desired representation and/or bound in terms of some divergence, estimating the Bayes error reduces to the estimation of this divergence. Unfortunately, for high-dimensional datasets, this estimation is highly inefficient. For example, most estimators of $f$-divergences rely on some type of $\varepsilon$-ball approach, which requires a number of samples on the order of $(1/\varepsilon)^{d}$ in $d$ dimensions [@noshad2019learning; @poczos11]. In particular, for large benchmark image datasets used in deep learning, this approach is inadequate to obtain meaningful results.
|
| 14 |
+
|
| 15 |
+
Here, we take a different approach: rather than computing an approximate Bayes error of the exact distribution (which, as we argue above, is intractable in high dimensions), we propose to compute the *exact Bayes error of an approximate distribution*. The basics of our approach are as follows.
|
| 16 |
+
|
| 17 |
+
- We show that when the class-conditional distributions are Gaussian $q_j(\mathbf{z})= \mathcal{N}(\mathbf{z}; \boldsymbol{\mu}_j, \boldsymbol{\Sigma})$, we can efficiently compute the Bayes error using a variant of Holmes-Diaconis-Ross integration proposed in [@GaussianIntegralsLinear].
|
| 18 |
+
|
| 19 |
+
- We use normalizing flows [@NormalizingFlowsProbabilistic; @kingma2018glow; @FetayaJGZ2020] to fit approximate distributions $\hat{p}_j(\mathbf{x})$, by representing the original features as $\mathbf{x}= T(\mathbf{z})$ for a learned invertible transformation $T$, where $\mathbf{z}\sim q_{j}(\mathbf{z}) = \mathcal{N}(\mathbf{z};\boldsymbol{\mu}_j, \boldsymbol{\Sigma})$, for learned parameters $\boldsymbol{\mu}_j,\boldsymbol{\Sigma}$.
|
| 20 |
+
|
| 21 |
+
- Lastly, we prove in Proposition [1](#thm:invariance){reference-type="ref" reference="thm:invariance"} that the Bayes error is invariant under invertible transformation of the features, so computing the Bayes error of the approximants $\hat{p}_j(\mathbf{x})$ can be done *exactly* by computing it for the Gaussians $q_{j}(\mathbf{z})$.
|
| 22 |
+
|
| 23 |
+
Moreover, we show that by varying the *temperature* of a single flow model, we can obtain an entire class of distributions with varying Bayes errors. This recipe allows us to compute the Bayes error of a large variety of distributions, which we use to conduct a thorough empirical investigation of benchmark datasets and SOTA models, producing a library of trained flow models in the process. By generating synthetic versions of standard benchmark datasets with known Bayes errors, and training SOTA deep learning architectures on them, we are able to assess how well these models perform compared to the Bayes error, and find that in some cases they indeed achieve errors very near optimal. We then investigate our Bayes error estimates as a measure of objective difficulty of benchmark classification tasks, and produce a ranking of these datasets based on their approximate Bayes errors.
|
| 24 |
+
|
| 25 |
+
We should note one additional point before proceeding. In general the hardness of classification tasks can be decomposed into two relatively independent components: i) hardness caused by the lack of samples, and ii) hardness caused by the internal data distribution $p$. The focus of this work is the latter: the hardness caused by $p$. Indeed, even if the Bayes error of a particular task is known to be a particular value $\mathcal{E}_\text{Bayes}$, it may be highly unlikely that this error is achievable given a model trained on only $N$ samples from $p$. The problem of finding the minimal error achievable from a given dataset of size $N$ has been called the optimal experimental design problem [@ritter2000average]. While this is not the focus of the present work, an interesting direction for future work is to use our methodology to investigate the relationship between $N$ and the SOTA-Bayes error gap.
|
| 26 |
+
|
| 27 |
+
Throughout this section, we assume the class conditional distributions are Gaussian: $q_j(\mathbf{z}) = \mathcal{N}(\mathbf{z};\boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)$. In the simplest case of binary classification with $K=2$ classes, equal covariance $\boldsymbol{\Sigma}_1 = \boldsymbol{\Sigma}_2 = \boldsymbol{\Sigma}$, and equal marginals $\pi_1=\pi_2=\frac{1}{2}$, the Bayes error can be computed analytically in terms of the CDF of the standard Gaussian distribution, $\Phi(\cdot)$, as: $$\begin{align}
|
| 28 |
+
\label{eqn:gaussian-bayes-binary}
|
| 29 |
+
\mathcal{E}_\text{Bayes}= 1-\Phi\left(\tfrac{1}{2}\|\boldsymbol{\Sigma}^{-1/2}(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)\|_2\right).
|
| 30 |
+
\end{align}$$
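This closed-form expression is straightforward to evaluate numerically; a short sketch (the means and covariance below are arbitrary examples, and $\Phi$ is computed from the error function):

```python
import math
import numpy as np

def binary_bayes_error(mu1, mu2, cov):
    """Closed-form Bayes error for two equally likely Gaussian classes with
    shared covariance: 1 - Phi(0.5 * ||cov^{-1/2} (mu1 - mu2)||_2)."""
    delta = np.asarray(mu1, float) - np.asarray(mu2, float)
    # ||cov^{-1/2} delta||_2 = sqrt(delta^T cov^{-1} delta)
    maha = math.sqrt(delta @ np.linalg.solve(cov, delta))
    phi = 0.5 * (1.0 + math.erf(0.5 * maha / math.sqrt(2.0)))  # Phi(maha / 2)
    return 1.0 - phi

# Means two units apart under identity covariance: error = 1 - Phi(1)
err = binary_bayes_error([0.0, 0.0], [2.0, 0.0], np.eye(2))
```

With the means two standard deviations apart, this gives $1-\Phi(1)\approx 0.1587$.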
|
| 31 |
+
|
| 32 |
+
When $K>2$ and/or the covariances are different between classes, there is no closed-form expression for the Bayes error. Instead, we work from the following representation: $$\begin{align}
|
| 33 |
+
\label{eqn:gaussian-bayes-general}
|
| 34 |
+
\mathcal{E}_\text{Bayes}&= 1-\sum_{k=1}^K \pi_k \int \prod_{j\neq k} \mathbb{1}(q_j(\mathbf{z}) < q_k(\mathbf{z}))\mathcal{N}(d\mathbf{z};\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k).
|
| 35 |
+
\end{align}$$ In the general case, the constraints $q_j(\mathbf{z}) < q_k(\mathbf{z})$ are quadratic, with $q_j(\mathbf{z}) < q_k(\mathbf{z})$ occurring if and only if: $$\begin{align}
|
| 36 |
+
\label{eqn:quadratic-constraints}
|
| 37 |
+
-(\mathbf{z}-\boldsymbol{\mu}_j)^\top\boldsymbol{\Sigma}^{-1}_j(\mathbf{z}-\boldsymbol{\mu}_j) - \log\det\boldsymbol{\Sigma}_j < -(\mathbf{z}-\boldsymbol{\mu}_k)^\top \boldsymbol{\Sigma}_k^{-1}(\mathbf{z}-\boldsymbol{\mu}_k) - \log\det \boldsymbol{\Sigma}_k.
|
| 38 |
+
\end{align}$$ As far as we know, there is no efficient numerical integration scheme for computing Gaussian integrals under general quadratic constraints of this form. However, if we further assume the covariances are equal, $\boldsymbol{\Sigma}_j = \boldsymbol{\Sigma}$ for all $j=1,\dots, K$, then the constraint ([\[eqn:quadratic-constraints\]](#eqn:quadratic-constraints){reference-type="ref" reference="eqn:quadratic-constraints"}) becomes linear, of the form $$\begin{align}
|
| 39 |
+
\mathbf{a}_{jk}^\top \mathbf{z}+ b_{jk} >0,
|
| 40 |
+
\end{align}$$ where $\mathbf{a}_{jk} := 2\boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}_k-\boldsymbol{\mu}_j)$ and $b_{jk} := \boldsymbol{\mu}_j^\top \boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}_j -\boldsymbol{\mu}_k^\top \boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}_k$. Thus expression [\[eqn:gaussian-bayes-general\]](#eqn:gaussian-bayes-general){reference-type="eqref" reference="eqn:gaussian-bayes-general"} can be written as $$\begin{align}
|
| 41 |
+
\label{eqn:lin-con-gauss}
|
| 42 |
+
\mathcal{E}_\text{Bayes}&= 1-\sum_{k=1}^K \pi_k \int \prod_{j\neq k} \mathbb{1}(\mathbf{a}_{jk}^\top \mathbf{z}+ b_{jk} >0)\mathcal{N}(d\mathbf{z};\boldsymbol{\mu}_k, \boldsymbol{\Sigma}).
|
| 43 |
+
\end{align}$$ Computing integrals of this form is precisely the topic of the recent paper [@GaussianIntegralsLinear], which exploited the particular form of the linear constraints and the Gaussian distribution to develop an efficient integration scheme using a variant of the Holmes-Diaconis-Ross method [@hdr-ref1]. This method is highly efficient, even in high dimensions[^3]. In Figure [1](#fig:true-vs-estimated-bayes){reference-type="ref" reference="fig:true-vs-estimated-bayes"}, we show the estimated Bayes error using this method on a synthetic binary classification problem in $d=784$ dimensions, where we can use closed-form expression [\[eqn:gaussian-bayes-binary\]](#eqn:gaussian-bayes-binary){reference-type="eqref" reference="eqn:gaussian-bayes-binary"} to measure the accuracy of the integration. As we can see, it is highly accurate.
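For intuition, the linearly constrained integral above can also be checked with plain Monte Carlo sampling. This is far less efficient than HDR integration in high dimensions (it is a stand-in, not the method of [@GaussianIntegralsLinear]), but it is easy to verify on a small example; the means, covariance, and priors below are arbitrary:

```python
import numpy as np

def bayes_error_mc(mus, cov, priors, n=200_000, seed=0):
    """Monte Carlo estimate of the linearly constrained Gaussian integral:
    for each class k, sample z ~ N(mu_k, cov) and count how often every
    pairwise constraint a_jk^T z + b_jk > 0 holds, i.e. how often the
    Bayes classifier picks class k."""
    rng = np.random.default_rng(seed)
    prec = np.linalg.inv(cov)
    L = np.linalg.cholesky(cov)
    correct = 0.0
    for k in range(len(mus)):
        z = mus[k] + rng.standard_normal((n, len(mus[k]))) @ L.T
        ok = np.ones(n, dtype=bool)
        for j in range(len(mus)):
            if j == k:
                continue
            a = 2.0 * prec @ (mus[k] - mus[j])
            b = mus[j] @ prec @ mus[j] - mus[k] @ prec @ mus[k]
            ok &= z @ a + b > 0
        correct += priors[k] * ok.mean()
    return 1.0 - correct

mus = [np.zeros(3), np.array([2.0, 0.0, 0.0])]
est = bayes_error_mc(mus, np.eye(3), [0.5, 0.5])  # true value is 1 - Phi(1)
```

In this binary equal-covariance case the estimate can be compared against the closed-form value $1-\Phi(1)\approx 0.1587$; the HDR scheme achieves the same agreement with far fewer samples in high dimensions.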
|
| 44 |
+
|
| 45 |
+
This method immediately allows us to investigate the behavior of large neural network models on high-dimensional synthetic datasets with class conditional distributions $q_j(\mathbf{z}) = \mathcal{N}(\mathbf{z};\boldsymbol{\mu}_j,\boldsymbol{\Sigma})$. However, in the next section, we will see that we can use normalizing flows to estimate the Bayes error of real-world datasets as well.
|
| 46 |
+
|
| 47 |
+
<figure id="fig:true-vs-estimated-bayes">
|
| 48 |
+
|
| 49 |
+
<p><span><embed src="figs/true_vs_estimated_bayes_error.pdf" /></span></p>
|
| 50 |
+
<figcaption>We compare the Bayes error estimated using HDR integration <span class="citation" data-cites="GaussianIntegralsLinear"></span> with the exact error in the binary classification with equal covariance case given in (<a href="#eqn:gaussian-bayes-binary" data-reference-type="ref" data-reference="eqn:gaussian-bayes-binary">[eqn:gaussian-bayes-binary]</a>). We see the HDR integration routine gives highly accurate estimates. Here we use dimension <span class="math inline"><em>d</em> = 784</span>, and take <span class="math inline"><strong>μ</strong><sub>1</sub>, <strong>μ</strong><sub>2</sub></span> to be randomly drawn unit vectors, and <span class="math inline"><strong>Σ</strong> = <em>τ</em><sup>2</sup><strong>I</strong></span> where <span class="math inline"><em>τ</em></span> is the temperature.</figcaption>
|
| 51 |
+
</figure>
|
| 52 |
+
|
| 53 |
+
Normalizing flows are a powerful technique for modeling high-dimensional distributions [@NormalizingFlowsProbabilistic]. The main idea is to represent the random variable $\mathbf{x}$ as a transformation $T_\phi$ (parameterized by $\phi$) of a vector $\mathbf{z}$ sampled from some, usually simple, base distribution $q(\mathbf{z}; \psi)$ (parameterized by $\psi$), i.e. $$\begin{align}
|
| 54 |
+
\mathbf{x}= T_\phi(\mathbf{z}) \hspace{5mm} \text{ where } \hspace{5mm} \mathbf{z}\sim q(\mathbf{z}; \psi).
|
| 55 |
+
\end{align}$$ When the transformation $T_\phi$ is invertible, we can obtain the exact likelihood of $\mathbf{x}$ using a standard change of variable formula: $$\begin{align}
|
| 56 |
+
\hat{p}(\mathbf{x};\theta) = q(T^{-1}_\phi(\mathbf{x});\psi)\left|\det J_{T_\phi}(T^{-1}_\phi(\mathbf{x}))\right|^{-1},
|
| 57 |
+
\end{align}$$ where $\theta = (\phi,\psi)$ and $J_{T_\phi}$ is the Jacobian of the transformation $T_\phi$. The parameters $\theta$ can be optimized, for example, using the KL divergence: $$\begin{align}
|
| 58 |
+
\mathcal{L}(\theta)= D_{\text{KL}}(p(\mathbf{x}) \;\|\; \hat{p}(\mathbf{x};\theta)) \approx -\frac{1}{N}\sum_{i=1}^N \log q(T_\phi^{-1}(\mathbf{x}_i),\psi) + \log \left|\det J_{T^{-1}_\phi}(\mathbf{x}_i)\right| + \text{const}.
|
| 59 |
+
\end{align}$$ This approach is easily extended to the case of learning class-conditional distributions by parameterizing multiple base distributions $q_j(\mathbf{z}; \psi_j)$ and computing $$\begin{align}
|
| 60 |
+
\hat{p}_{j}(\mathbf{x};\theta) = q_j(T_\phi^{-1}(\mathbf{x});\psi_j)\left|\det J_{T_\phi}(T_\phi^{-1}(\mathbf{x}))\right|^{-1}.
|
| 61 |
+
\end{align}$$ For example, we can take $q_j(\mathbf{z};\boldsymbol{\mu}_j,\boldsymbol{\Sigma}) = \mathcal{N}(\mathbf{z};\boldsymbol{\mu}_j, \boldsymbol{\Sigma})$, where we fit the parameters $\boldsymbol{\mu}_j,\boldsymbol{\Sigma}$ during training. This is commonly done to learn class-conditional distributions, e.g. [@kingma2018glow]. This is the approach we take in the present work. In practice, the invertible transformation $T_\phi$ is parameterized as a neural network, though special care must be taken to ensure the neural network is invertible and has a tractable Jacobian determinant. Here, we use the Glow architecture [@kingma2018glow] throughout our experiments, as detailed in Section [4](#section:real-world-data){reference-type="ref" reference="section:real-world-data"}.
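A one-dimensional toy example of the change-of-variable formula, with the hand-picked invertible map $T(z)=e^z$ rather than a learned Glow transformation: the induced density is log-normal, and numerically integrating it confirms it is a valid density.

```python
import math
import numpy as np

def flow_density(x, mu=0.0, sigma=1.0):
    """Density of x = T(z) = exp(z) with z ~ N(mu, sigma^2), via
    p(x) = q(T^{-1}(x)) * |det J_T(T^{-1}(x))|^{-1}."""
    z = np.log(x)                                   # T^{-1}(x)
    q = np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return q / x                                    # dT/dz = exp(z) = x

# Riemann sum of the induced density over (0, 60]: should be close to 1
xs = np.linspace(1e-6, 60.0, 400_000)
dx = xs[1] - xs[0]
mass = flow_density(xs).sum() * dx
```

The Jacobian factor $1/x$ is exactly what turns the Gaussian base density into the log-normal density, mirroring the $\left|\det J_{T_\phi}\right|^{-1}$ term above.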
|
| 62 |
+
|
| 63 |
+
Normalizing flow models are particularly convenient for our purposes, since we can prove the Bayes error is invariant under invertible transformation. This is formalized as follows.
|
| 64 |
+
|
| 65 |
+
::: {#thm:invariance .proposition}
|
| 66 |
+
**Proposition 1**. *Let $(X,Y) \sim p$, $X\in \mathbb{R}^d, Y\in \mathcal{Y}=\{1,\dots, K\}$, and let $\mathcal{E}_\text{Bayes}(p)$ be the associated Bayes error of this distribution. Let $T:\mathbb{R}^d \rightarrow \mathbb{R}^d$ be an invertible map and denote $q$ the associated joint distribution of $Z=T(X)$ and $Y$. Then $\mathcal{E}_\text{Bayes}(p) = \mathcal{E}_\text{Bayes}(q)$.*
|
| 67 |
+
:::
|
| 68 |
+
|
| 69 |
+
::: proof
|
| 70 |
+
*Proof.* For convenience, denote by $|\mathbf{A}|$ the absolute value of the determinant of a matrix $\mathbf{A}$. Using the representation derived in [@noshad2019learning], we can write the Bayes error as $$\begin{align}
|
| 71 |
+
\mathcal{E}_\text{Bayes}(p) = 1 - \pi_1 - \sum_{k=2}^K \int \max\left(0,\pi_k - \max_{1\leq i\leq k-1}\pi_i \frac{p_{i}(\mathbf{x})}{p_{k}(\mathbf{x})}\right)p_{k}(\mathbf{x})d\mathbf{x}.
|
| 72 |
+
\end{align}$$ Then if $\mathbf{z}= T(\mathbf{x})$, we have that $p_{k}(\mathbf{x}) = q_{k}(\mathbf{z})|J_{T}(\mathbf{z})|$ (writing $|J_{T}(\mathbf{z})|$ for the Jacobian determinant of $T$ evaluated at $\mathbf{x}=T^{-1}(\mathbf{z})$), and $d\mathbf{x}= |J_{T^{-1}}(\mathbf{z})|d\mathbf{z}$. Hence $$\begin{align*}
|
| 73 |
+
\mathcal{E}_\text{Bayes}(p) &= 1 - \pi_1 - \sum_{k=2}^K \int \max\left(0,\pi_k - \max_{1\leq i\leq k-1}\pi_i \frac{p_{i}(\mathbf{x})}{p_{k}(\mathbf{x})}\right)p_{k}(\mathbf{x})d\mathbf{x}\\
|
| 74 |
+
&= 1 - \pi_1 - \sum_{k=2}^K \int \max\left(0,\pi_k - \max_{1\leq i\leq k-1}\pi_i \frac{q_i(\mathbf{z})|J_{T}(\mathbf{z})|}{q_{k}(\mathbf{z})|J_{T}(\mathbf{z})|}\right)q_{k}(\mathbf{z})|J_{T}(\mathbf{z})| |J_{T^{-1}}(\mathbf{z})|d\mathbf{z}.
|
| 75 |
+
\end{align*}$$ By the Inverse Function Theorem, $|J_{T^{-1}}(\mathbf{z})| = |J_{T}(\mathbf{z})|^{-1}$, and so we get $$\begin{align*}
|
| 76 |
+
\mathcal{E}_\text{Bayes}(p) &= 1 - \pi_1 - \sum_{k=2}^K \int \max\left(0,\pi_k - \max_{1\leq i\leq k-1}\pi_i \frac{q_{i}(\mathbf{z})|J_{T}(\mathbf{z})|}{q_{k}(\mathbf{z})| J_{T}(\mathbf{z})|}\right)q_{k}(\mathbf{z})|J_{T}(\mathbf{z})| |J_{T}(\mathbf{z})|^{-1}d\mathbf{z}\\
|
| 77 |
+
&=1 - \pi_1 - \sum_{k=2}^K \int \max\left(0,\pi_k - \max_{1\leq i\leq k-1}\pi_i \frac{q_{i}(\mathbf{z})}{q_{k}(\mathbf{z})}\right)q_{k}(\mathbf{z})d\mathbf{z}\\
|
| 78 |
+
&= \mathcal{E}_\text{Bayes}(q),
|
| 79 |
+
\end{align*}$$ which completes the proof. ◻
|
| 80 |
+
:::
|
| 81 |
+
|
| 82 |
+
This result means that we can compute the *exact* Bayes error of the approximate distributions $\hat{p}_j(\mathbf{x};\theta)$ using the methods introduced in Section [2](#section:gaussian-computation){reference-type="ref" reference="section:gaussian-computation"} with the Gaussian conditionals $q_j(\mathbf{z}; \boldsymbol{\mu}_j, \boldsymbol{\Sigma})$. If in addition the flow model $\hat{p}_j(\mathbf{x};\theta)$ is a good approximation for the true class-conditional distribution $p_j(\mathbf{x})$, then we expect to obtain a good estimate for the true Bayes error. In what follows, we will see examples both of when this is and is not the case.
|
| 83 |
+
|
| 84 |
+
An important aspect of the normalizing flow approach is that we can in fact generate a whole family of distributions from a single flow model. To do this, we can vary the *temperature* $\tau$ of the model by multiplying the covariance $\boldsymbol{\Sigma}$ of the base distribution by $\tau^2$ to get $q_{j,\tau} := \mathcal{N}(\mathbf{z};\boldsymbol{\mu}_j, \tau^2\boldsymbol{\Sigma})$. The same invertible map $T_\phi$ induces new conditional distributions, $$\begin{align}
|
| 85 |
+
\hat{p}_{j,\tau}(\mathbf{x};\theta) = q_{j,\tau}(T^{-1}_\phi(\mathbf{x});\psi_j)\left|\det J_{T_\phi}(T^{-1}_\theta(\mathbf{x}))\right|^{-1},
|
| 86 |
+
\end{align}$$ as well as the associated joint distribution $\hat{p}_{\tau}(\mathbf{x};\theta) = \sum_j \pi_j\hat{p}_{j,\tau}(\mathbf{x};\theta)$.
|
| 87 |
+
|
| 88 |
+
It can easily be seen that the Bayes error of $\hat{p}_\tau$ is increasing in $\tau$.

::: {#thm:monotone .proposition}
**Proposition 2**. *The Bayes error of flow models is monotonically increasing in $\tau$. That is, for $0<\tau\leq \tau'$, we have that $\mathcal{E}_\text{Bayes}(\hat{p}_{\tau}) \leq \mathcal{E}_\text{Bayes}(\hat{p}_{\tau'})$.*
:::

::: proof
*Proof.* Note that using the representation ([\[eqn:lin-con-gauss\]](#eqn:lin-con-gauss){reference-type="ref" reference="eqn:lin-con-gauss"}) and making the substitution $\mathbf{u}\sim \mathcal{N}(\boldsymbol{0},\mathbf{I})\mapsto \boldsymbol{\Sigma}^{1/2}\mathbf{u}+ \boldsymbol{\mu}_k \sim \mathcal{N}(\boldsymbol{\mu}_k,\boldsymbol{\Sigma})$, the Bayes error at temperature $\tau$ can be written as $$\begin{align}
\mathcal{E}_\text{Bayes}(\hat{p}_{\tau}) = 1-\sum_{k=1}^K \pi_k \int \prod_{j\neq k} \mathbb{1}\left(\tilde{\mathbf{a}}_{jk}^\top \mathbf{u}+ \frac{\tilde{b}_{jk}}{\tau} >0\right)\mathcal{N}(d\mathbf{u};\boldsymbol{0}, \mathbf{I}),
\end{align}$$ where $\tilde{\mathbf{a}}_{jk} = 2\boldsymbol{\Sigma}^{-1/2}(\boldsymbol{\mu}_k-\boldsymbol{\mu}_j)$ and $\tilde{b}_{jk} = (\boldsymbol{\mu}_k-\boldsymbol{\mu}_j)^\top \boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}_k-\boldsymbol{\mu}_j)\geq 0$. Then it is easy to see that for $0<\tau \leq \tau'$ and $\mathbf{u}\in \mathbb{R}^d$, we have that $$\begin{align}
\prod_{j\neq k} \mathbb{1}\left(\tilde{\mathbf{a}}_{jk}^\top \mathbf{u}+ \frac{\tilde{b}_{jk}}{\tau} >0\right) \geq \prod_{j\neq k} \mathbb{1}\left(\tilde{\mathbf{a}}_{jk}^\top \mathbf{u}+ \frac{\tilde{b}_{jk}}{\tau'} >0\right),
\end{align}$$ which implies that $\mathcal{E}_\text{Bayes}(\hat{p}_{\tau}) \leq \mathcal{E}_\text{Bayes}(\hat{p}_{\tau'})$. ◻
:::
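As a quick numerical sanity check of Proposition 2 (not part of the original derivation), one can estimate the Bayes error of a toy Gaussian mixture by Monte Carlo at several temperatures; the class means and sample sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three 2-D Gaussian classes with shared covariance I and equal priors
# (illustrative parameters). With a shared isotropic covariance, the
# Bayes rule assigns each point to the nearest class mean.
mus = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])

def mc_bayes_error(tau, n=200_000):
    k = rng.integers(0, 3, size=n)               # true class labels
    x = mus[k] + tau * rng.normal(size=(n, 2))   # samples at temperature tau
    d2 = ((x[:, None, :] - mus[None]) ** 2).sum(-1)
    return np.mean(d2.argmin(axis=1) != k)       # error of the Bayes classifier

errs = [mc_bayes_error(t) for t in (0.5, 1.0, 2.0)]
print(errs)  # monotonically increasing in tau
```

With 200,000 samples per temperature, the Monte Carlo noise is far smaller than the gaps between the three error levels, so the monotone ordering is clearly visible.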
This fact means that we can easily generate datasets of varying difficulty by changing the temperature $\tau$. For example, in Figure [2](#fig:temp_sample_fmnist){reference-type="ref" reference="fig:temp_sample_fmnist"} we show samples generated by a flow model (see Section [4](#section:real-world-data){reference-type="ref" reference="section:real-world-data"} for implementation details) trained on the Fashion-MNIST dataset at various values of the temperature, together with the associated Bayes error. As $\tau\to 0^{+}$, the distributions $\hat{p}_{j,\tau}$ concentrate on the modes of the distributions $\hat{p}_j$, making the classification task easy, whereas when $\tau$ gets large, the distributions $\hat{p}_{j,\tau}$ become more uniform, making classification more challenging. In practice, this can be used to generate datasets with almost arbitrary Bayes error: for any prescribed error $\varepsilon$ in the range of the map $\tau \mapsto \mathcal{E}_\text{Bayes}(\hat{p}_{\tau})$, we can numerically invert this map to find $\tau$ for which $\mathcal{E}_\text{Bayes}(\hat{p}_\tau) = \varepsilon$.
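The numerical inversion described above can be sketched in a one-dimensional toy setting where the Bayes error has a closed form; the two-class means, the bisection bracket, and the iteration count below are illustrative assumptions, not the paper's setup.

```python
import math

def bayes_error(tau, mu=1.0, sigma=1.0):
    # Two equal-prior 1-D classes N(+mu, (tau*sigma)^2) and N(-mu, (tau*sigma)^2):
    # the optimal boundary is x = 0, so the Bayes error is Phi(-mu / (tau*sigma)).
    z = -mu / (tau * sigma)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def invert_temperature(eps, lo=1e-3, hi=50.0, iters=200):
    # Bisection on the monotone map tau -> Bayes error (cf. Proposition 2).
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if bayes_error(mid) < eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tau = invert_temperature(0.1)
print(tau, bayes_error(tau))  # bayes_error(tau) is approximately 0.1
```

Because the map is monotone (Proposition 2), bisection converges to the unique temperature for any prescribed error in its range; for the flow models in the paper, `bayes_error` would be replaced by the HDR-based estimate.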
<figure id="fig:temp_sample_fmnist">
<p><br />
</p>
<figcaption>Generated Fashion-MNIST Samples with Different Temperatures</figcaption>
</figure>

**Datasets and data preparation.** We train flow models on a wide variety of standard benchmark datasets: MNIST [@LeCun98], Extended MNIST (EMNIST) [@Cohen17], Fashion-MNIST [@Xiao17], CIFAR-10 [@Krizhevsky09], CIFAR-100 [@Krizhevsky09], SVHN [@Netzer11], and Kuzushiji-MNIST [@clanuwat2018deep]. The EMNIST dataset has several different splits, including splits by digits, letters, merge, class, and balanced. The images in MNIST, Fashion-MNIST, EMNIST, and Kuzushiji-MNIST are padded to $32$-by-$32$ pixels.[^4]

We remark that our Bayes error estimator runs efficiently when the input is of dimension $32$-by-$32$-by-$3$. However, it is in general highly memory-intensive to run the HDR integration routine on significantly larger inputs, e.g., when the input size grows to $64$-by-$64$-by-$3$. As a consequence, our experiments only use datasets of dimension no larger than $32$-by-$32$-by-$3$.

**Modeling and training.** The normalizing flow model we use in our experiments is a PyTorch implementation [@GlowPytorch] of Glow [@kingma2018glow]. In all of our experiments, affine coupling layers are used, the number of steps of the flow in each level is $K=16$, the number of levels is $L=3$, and the number of channels in the hidden layers is $C=512$.

During training, we minimize the negative log-likelihood (NLL) loss $$\begin{align}
\mathrm{NLL}(\{\mathbf{x}_i,y_i\}) = -\frac{1}{N}\sum_{i=1}^N \left(\log p_{y_i} (\mathbf{x}_i;\theta) +\log \pi_{y_i}\right).
\end{align}$$
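The NLL objective above can be sketched in a few lines of numpy; the log-density values and priors here are placeholders (in the actual pipeline, $\log p_{y_i}(\mathbf{x}_i;\theta)$ comes from the flow model).

```python
import numpy as np

def nll(log_p, labels, priors):
    # log_p[i] = log p_{y_i}(x_i; theta) evaluated by the model;
    # priors[k] = pi_k. Returns the loss averaged over the N samples.
    return -np.mean(log_p + np.log(priors[labels]))

log_p = np.array([-2.0, -1.5, -3.0])   # placeholder per-sample log-densities
labels = np.array([0, 1, 0])
priors = np.array([0.5, 0.5])
print(nll(log_p, labels, priors))
```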
As suggested in [@kingma2018glow], we also add a classification loss that predicts the class labels from the second-to-last layer of the encoder, with a weight of $\lambda$. In our experiments we swept over configurations with $\lambda\in\{0.01, 0.1, 1.0, 10\}$, and report the numbers produced by the model with the smallest NLL loss on the test set. Note that even though we add the classification loss to the objective as a regularizer, the model is selected based on the smallest test NLL loss rather than the classification loss or the total loss. Training and evaluation are done on a workstation with 2 NVIDIA V100 GPUs.

![Test errors of synthetic versions of MNIST and Fashion-MNIST, generated at various temperatures, and their corresponding Bayes error. Here we used 60,000 training samples and 10,000 test samples, to mimic the original datasets. The model used for Fashion-MNIST was a Wide-ResNet-28-10, which attains nearly state-of-the-art accuracy on the original Fashion-MNIST dataset [@RandomErasing]. The model used for MNIST is a popular ConvNet [@ConvNetPytorch]. ](figs/synth-mnist-fmnist-ber.pdf){#fig:wrn-synthetic-data}

In this section, we use our trained flow models to generate synthetic versions of standard benchmark datasets, for which the Bayes error is known exactly. In particular, we generate synthetic versions of the MNIST and Fashion-MNIST datasets at varying temperatures. As we saw in Section [3.2](#sec:temp){reference-type="ref" reference="sec:temp"}, varying the temperature allows us to generate datasets of different difficulty. Here, we train a Wide-ResNet-28-10 model (i.e., a ResNet with depth 28 and width multiple 10) [@ZagoruykoK16; @WideResNetPytorch] on these datasets, and compare the test error to the exact Bayes error for these problems. This Wide-ResNet model (together with appropriate data augmentation) attains nearly state-of-the-art accuracy on the original Fashion-MNIST dataset [@RandomErasing], and so we expect that our results here reflect roughly the best accuracy presently attainable on these synthetic datasets as well. To make the comparison fair, we use a training set size of 60,000 to mimic the size of the original MNIST series of datasets.

The Bayes errors, as well as the test errors achieved by the Wide-ResNet and ConvNet models, are shown in Figure [3](#fig:wrn-synthetic-data){reference-type="ref" reference="fig:wrn-synthetic-data"}. As one would expect, the errors of the trained models increase with temperature. It can be observed that the Wide-ResNet and ConvNet are able to achieve close-to-optimal performance when the dataset is relatively easy, e.g., $\tau<1$ for MNIST and $\tau < 0.5$ for Fashion-MNIST. The gap becomes more significant when the dataset is harder, e.g., $\tau>1.5$ for MNIST and $\tau > 1$ for Fashion-MNIST.

For the synthetic Fashion-MNIST dataset at temperature $\tau=1$, in addition to the Wide-ResNet (WRN-28) considered above, we also trained three other architectures: a simple linear classifier (Linear), a one-hidden-layer ReLU network (MLP) with 500 hidden units, and a standard AlexNet convolutional architecture [@krizhevsky2012imagenet]. The resulting test errors, as well as the Bayes error, are shown in Figure [\[fig:fmnist-archs\]](#fig:fmnist-archs){reference-type="ref" reference="fig:fmnist-archs"}. We see that while the development of modern architectures has led to substantial improvements in test error, there is still a reasonably large gap between the performance of the SOTA Wide-ResNet and Bayes optimality. Nonetheless, it is valuable to know that, for this task, the state of the art still has substantial room for improvement.

::: wrapfigure
r0.5 
:::

An important application of our Bayes error estimator is to estimate the inherent *hardness* of a given dataset, regardless of model. We run our estimator on several popular image classification corpora and rank them by estimated Bayes error. The results are shown in Table [1](#tbl:main){reference-type="ref" reference="tbl:main"}. For comparison, we also include the SOTA numbers in the table.

Before proceeding, we make two remarks. First, all of the Bayes errors reported here were computed at temperature $\tau = 1$. This is for two main reasons: 1) $\tau=1$ is where the flow model attains the lowest test NLL, and hence is in some sense the "best" approximation of the true distribution; 2) the ordering of the hardness of classes is unchanged by varying the temperature, so $\tau=1$ is a reasonable default. Second, the reliability of the Bayes errors reported here as a measure of inherent difficulty is dependent on the quality of the approximate distribution $\hat{p}$; if this distribution is not an adequate estimate of the true distribution $p$, then the Bayes errors may not accurately reflect the true difficulty of the original dataset. Therefore, we also report the test NLL for each model as a metric for evaluating the quality of the approximant $\hat{p}$.

First, we observe that, by and large, the estimated Bayes errors align well with SOTA. In particular, if we constrain the NLL loss to be smaller than $1000$, then the ranking by our estimated Bayes error aligns exactly with SOTA.

Second, the NLL loss on MNIST, Fashion-MNIST, EMNIST, and Kuzushiji-MNIST is relatively low, suggesting a good approximation by the normalizing flow. However, corpora such as CIFAR-10, CIFAR-100, and SVHN may suffer from a lack of training samples. In general, a large NLL loss may be due to either insufficient model capacity or a lack of samples. In our experiments, we consistently observe that the Glow model attains essentially zero error on the training corpus, so the large NLL loss is likely caused by the lack of training samples.

Third, for datasets such as MNIST, EMNIST (digits, letters, balanced), SVHN, Fashion-MNIST, Kuzushiji-MNIST, CIFAR-10, and CIFAR-100, the SOTA numbers are roughly of the same order of magnitude as the Bayes error. On the other hand, for EMNIST (bymerge and byclass) there is still a substantial gap between the SOTA and the estimated Bayes errors. This is consistent with the fact that there is little published literature on these two datasets; as a result, models for them are not as well-developed.

::: {#tbl:main}
  Corpus              #classes   #samples   NLL      Bayes Error   SOTA Error [@PaperWithCode]
  ------------------- ---------- ---------- -------- ------------- ------------------------------------
  MNIST               10         60,000     8.00e2   1.07e-4       1.6e-3 [@Byerly2001]
  EMNIST (digits)     10         280,000    8.61e2   1.21e-3       5.7e-3 [@Pad2020]
  SVHN                10         73,257     4.65e3   7.58e-3       9.9e-3 [@Byerly2001]
  Kuzushiji-MNIST     10         60,000     1.37e3   8.03e-3       6.6e-3 [@Gastaldi17]
  CIFAR-10            10         50,000     7.43e3   2.46e-2       3e-3 [@foret2021sharpnessaware]
  Fashion-MNIST       10         60,000     1.75e3   3.36e-2       3.09e-2 [@Tanveer2006]
  EMNIST (letters)    26         145,600    9.15e2   4.37e-2       4.12e-2 [@kabir2007]
  CIFAR-100           100        50,000     7.48e3   4.59e-2       3.92e-2 [@foret2021sharpnessaware]
  EMNIST (balanced)   47         131,600    9.45e2   9.47e-2       8.95e-2 [@kabir2007]
  EMNIST (bymerge)    47         814,255    8.53e2   1.00e-1       1.90e-1 [@Cohen17]
  EMNIST (byclass)    62         814,255    8.76e2   1.64e-1       2.40e-1 [@Cohen17]

  : We evaluate the estimated Bayes error on image datasets and rank them by relative difficulty. Comparison with the prediction performance of state-of-the-art neural network models shows that our estimates are highly aligned with empirically observed performance.
:::

In addition to measuring the difficulty of classification tasks relative to one another, it may also be of interest to evaluate the relative difficulty of individual classes within a particular task. A natural way to do this is to look at the error of one-vs-all classification tasks. Specifically, for a given class $j \in \mathcal{K}$, we consider $(\mathbf{x},1)$ drawn from the distribution $p_{-j}(\mathbf{x}) = \frac{1}{1-\pi_j}\sum_{i\neq j}\pi_ip_i(\mathbf{x})$, and $(\mathbf{x},0)$ from $p_j(\mathbf{x})$. The optimal Bayes classifier for this task is $$C_\text{Bayes}(\mathbf{x}) = \begin{cases}0 & \text{if } -\log p_j(\mathbf{x}) \leq -\log p_{-j}(\mathbf{x}),\\ 1 & \text{otherwise} \end{cases}.$$ Unfortunately, in this case the Bayes error cannot be computed with HDR integration, since $p_{-j}$ is now a mixture of Gaussians. However, we can obtain a reasonable approximation of the error in this case (though less accurate than exact integration would be) using a simple Monte Carlo estimator: $\widehat{\mathcal{E}}_\text{Bayes}= \frac{1}{m}\sum_{l = 1}^m \mathbb{1}(C_\text{Bayes}(\mathbf{x}_l) \neq y_l)$, where $y_l \sim \text{Unif}\{0,1\}$ and $\mathbf{x}_l\mid y_l \sim y_lp_{-j} + (1-y_l)p_{j}$ as prescribed above.
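The Monte Carlo estimator above can be sketched with a one-dimensional stand-in, where $p_j$ is a single Gaussian and $p_{-j}$ a two-component mixture; all parameters here are illustrative toy choices, not the paper's flow-model densities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy densities: p_j = N(0, 1); p_{-j} = equal mixture of N(4, 1) and N(6, 1).
# The 1/sqrt(2*pi) normalizers cancel since all components have unit variance.
def log_p_j(x):
    return -0.5 * x ** 2

def log_p_rest(x):
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x - 6.0) ** 2) - np.log(2.0)

def mc_bayes_error(m=200_000):
    y = rng.integers(0, 2, size=m)      # y=1: draw from p_{-j}; y=0: from p_j
    comp = np.where(rng.integers(0, 2, size=m) == 0, 4.0, 6.0)
    x = np.where(y == 0, rng.normal(0.0, 1.0, size=m), rng.normal(comp, 1.0, size=m))
    pred = (log_p_rest(x) > log_p_j(x)).astype(int)   # the Bayes rule C_Bayes
    return np.mean(pred != y)                         # fraction misclassified

print(mc_bayes_error())
```

For the flow models in the paper, the two log-density functions would instead be evaluated through the learned Gaussian conditionals in latent space.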
<figure id="fig:cls_hardness">
<p> </p>
<figcaption>Classes Ranked by Hardness</figcaption>
</figure>

The one-vs-all errors by class on CIFAR are shown in Figure [4](#fig:cls_hardness){reference-type="ref" reference="fig:cls_hardness"}. The gap between the errors of the hardest and easiest classes is striking: on CIFAR-100, the error of the hardest class, squirrel, is almost $5$ times that of the easiest class, wardrobe.

2106.03632/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-04-22T14:45:02.819Z" agent="5.0 (Macintosh)" etag="oAsWF3-hshqWNv9wObjl" version="14.6.6" type="google"><diagram id="GnRU_M7bubpjtkMGvPQP" name="Page-1">7Vtbc6IwGP01zj51hoSrj63V9qUznXVn+pyFCMwCcSFWu79+g4RboLWlXETpQ5UTSOJ3DsnJR5jJC//wEKKt80Qs7M2gZB1m8v0Mwrmisf8x8JYAKtQTwA5dK4FADqzdf5iDEkd3roWj0omUEI+62zJokiDAJi1hKAzJvnzahnjlVrfIxhVgbSKvir64FnUS1FClHH/Eru2kLQOJl/goPZkDkYMssi9A8nImL0JCaPLNPyywF8cujUty3eqd0qxjIQ7oZy7Qf75Ye3LzvFj/2uPVy3JpAvtG5p17Rd6O/+IZ1DxW4d2GsHpZt+kbj4X2d0fSgpvoyNQtOwEq20NeGMcYmeULfiGH+Kh4jmbHn48kjHDaHOt40mJSxmOWNQ7Zj2Gcs4O7veNSvN4mreyZ6hjmUN9jRyDunut5C+KR8HidbKnYsBSGRzQkf3BaEpAAZ80Ug8jj+opDig8FiAf1ARMf0/CNnZKWShoPIpc4nHOJ73PBgFQFTkEsBscQ16id1Z3TyL5wJr/AKhyS1XtiXwCnSplTkI5Hw3EKBuR0gfx4WB85q7oh3KjG0KQOyykdPaNQYHQ+NKGwhlAxmIF1GxuTPBARRSEVsEJQj8UrN+7IvcSOWQWFIzHkCBsbsxJyVqKZBv69yUpSNyN/xAS2Su6oykMh0GpNnFMsxB6i7mvZU9UFn7fwTNyj/FOaZeHG1QT+IrILTcyvKnogoSJFFSqShIpYqG1MKxUdtZD97ObykCd5dCAPRZiss+OxyUOZ5NGBPLT5hYweav/yYK6gIAJJWrG/K5HNSbabykbtVzVajWpKHnM8K7u4SZ4igko7DlIVrEWWvxnMQuqt8XUGq7YOGIO6cF/qQzNmnGJsyqGdugsFBzf8ynzeZK61UORgi8ewEM4Yf0aU4jA4IlCSa6fEr0+r5z6JQmHuA429l6IO6r3S/N8kiDNxVbJQkSL3LIi63N0kiOEEUVnmzXsWBJwE0YYgVFDmUVwujWW1DuqSfW3bwgXxXZMVrFEQsY+nddUhOp91h8yrUTE5UGfpC4rjEPJcO5apySSFGX4XOz/XRN4tL/Bdy/Le850h2QXW8R6Q2nGTilT2CjULBKVGv7ArLwnq8npDKOHH9UlBWCwCdWgt1CXxvqeFy3mcpwtT+PDJGFCXPTs5qX8r51pyBG0kWqtW4sNMzJnYAP29HRhf9QFzcXuOWFHXPqAuoTdJqA8JCbsDFHFUH42E6jKMk4R6kBDIVg2cernpcmRwDTXKaE4a+r6GTjL/WQkBIDys6jnDARttX6ksIi47faELbAN9pAMGbLQb5crYbu3eNoSKKnuYuma70eaSie2GI3nFVvScnISN1rVXRjeQYJmlpg8vBx/KT+4pmVLR3eQfReZr9jX0mn6E7e9VGWduOe7iCvmuFxNTlW5LbxEJozzojn52mL9KmIwb+fuY8vI/</diagram></mxfile>

2106.03632/main_diagram/main_diagram.pdf
ADDED
Binary file (33.1 kB). View file

2106.03632/paper_text/intro_method.md
ADDED
@@ -0,0 +1,249 @@
# Introduction

One of the cornerstone assumptions underlying the recent success of deep learning models is that the test data share the same distribution as the training data. However, faced with ubiquitous distribution shifts in real-world applications, this assumption hardly holds in practice. For example, a self-driving recognition system trained on data collected in the daytime may see its performance continually degrade at nightfall. The system may also encounter weather or traffic conditions in a new city that never appear in the training set. In light of these potentially unseen scenarios, it is of paramount importance that the trained model generalizes out-of-distribution (OOD): even if the target domain is not exactly the same as the source domain(s), the learned model should still behave robustly under slight distribution shift.

To this end, one line of work focuses on learning so-called invariant representations . At a colloquial level, the goal here is to learn feature embeddings that yield indistinguishable feature distributions across domains. In practice, both the feature embedding and the domain discriminator are often parametrized by neural networks, leading to an adversarial game between the two. Furthermore, to avoid degenerate solutions, the learned features are also required to be informative about the output variable. This is enforced by placing a predictor over the features and simultaneously minimizing the corresponding supervised loss .

Another line of recent work aims to learn features that induce invariant predictors, first termed the invariant risk minimization (IRM) paradigm. Roughly speaking, the goal of IRM is to discover a feature embedding upon which the optimal predictors, i.e., the Bayes predictors, are invariant across the training domains. Again, the features should at the same time be informative about the output variable. However, the optimization problem of IRM is rather difficult, and several follow-up works have proposed different relaxations of the original formulation .

Despite being extensively studied, both theoretical and empirical works have shown the insufficiency of existing algorithms for domain generalization (DG). Methods based on invariant features ignore the potential shift in the marginal label distributions across domains, and methods based on invariant predictors are not robust to covariate shift . Perhaps surprisingly, empirical works have shown that with proper data augmentation and careful model tuning, the very basic algorithm of empirical risk minimization (ERM) demonstrates superior domain generalization performance over existing methods on benchmark image datasets . This sharp gap between theory and practice calls for a fundamental understanding of the following question:

\itshape
What kind of invariance should we look for, in order to ensure that a good model on source domains also achieves decent accuracy on a related target domain?

In this work we attempt to answer the above question by proposing a criterion, dubbed transferability, which asks for invariance of the excess risks of a predictor across domains. Different from existing proposals of invariant features and invariant predictors, which seek feature embeddings that induce invariant marginal and conditional distributions respectively, our notion of transferability depends on the excess risk, and hence directly takes into account the joint distribution over both the features and the labels. We show how it can be used to naturally derive a new upper bound for the target error, and then discuss how to estimate the transferability empirically given enough samples. Our definition also inspires a method that aims to find more transferable features via adversarial representation learning.

[12]{r}{0.27\textwidth}
\vspace*{-1.8em}
\includegraphics[width=\linewidth]{images/demo_intro_ERM_MNIST.png}
\vspace{-5mm}
\caption{The target and source (test) accuracies of ERM on MNIST.}

Empirically, we measure the transferability of several existing algorithms on both small- and large-scale datasets. We show that many algorithms, including ERM, are not quite transferable under our definition (Fig. , see more details in \S): when we move away from the optimal classifier (by distance $\delta$ in the parameter space), it can happen that the source accuracy remains high while the target accuracy drops significantly. This implies that during training an existing algorithm may find a good source classifier with low target accuracy, thereby violating the required invariance of excess risks. In contrast, our algorithm is more transferable and achieves consistent improvements over existing state-of-the-art algorithms, corroborating our findings.

In this section we present our definition of transferability in the classification setting. The setup of domain generalization is the following:

\paragraph{Settings and Notation} Given $n$ labeled source domains $\Sc_1, \dots, \Sc_n$, the problem of domain generalization is to learn a model from these source domains, in the hope that it performs well on an unseen target domain $\Tc$ that is ``similar'' to the source domains. Throughout the paper, we assume that the source domains and the unseen target domain share the same input and output spaces, denoted as $\Xc$ and $\Yc$, respectively. For multi-class classification, the output space is $\Yc = [K]$; for binary classification, we consider $\Yc = \{-1, +1\}$. Denote by $\Hc$ the hypothesis class. We define the classification error of a classifier $h \in \Hc$ on a domain $\Dc$ (or $\Sc$ for source domains, or $\Tc$ for target domains) as:\footnote{Throughout the paper, we will use the terms domain and distribution interchangeably.}

\epsilon_{\Dc}(h) = \Eb_{(x, y)\sim \Dc} [\ell(h(x), y)].

For $\ell(h(x), y) = \one(h(x) \neq y)$, where $\one(\cdot)$ is the usual indicator function, we write $\epsilon^{\rm 0-1}_{\Dc}(h)$ to denote the 0-1 error.

In domain generalization, we often have several source domains. For ease of presentation, we consider only a single source domain in this section and extend to the general case in Section . Given two domains, the source domain $\Sc$ and the target domain $\Tc$, the task of domain generalization is to transfer a classifier $h$ that performs well on $\Sc$ to $\Tc$. We ask: how much of the success of $h$ on $\Sc$ can be transferred to $\Tc$?

Note that in order to evaluate the transferability from $\Sc$ to $\Tc$, we need information from the target domain, similar to the test phase in traditional supervised learning. We believe a good criterion of transferability should satisfy the following properties:
[topsep=0pt, parsep=0pt]
- Quantifiable: the notion should be quantifiable and computable in practice;
- Any near-optimal source classifier should be near-optimal on the target domain;
- If the two domains are similar, as measured by, e.g., total variation, then they are transferable to each other, but the converse may not be true.

At first glance the second criterion above might seem too strong and restrictive. However, we argue that in the task of domain generalization we only have labeled source data, so there is no way to distinguish one classifier from another if both perform equally well on the source domain. Based on the second property, we first propose the following definition of transferability:
[transferability]

$\Sc$ is $(\d_\Sc, \d_\Tc)_{\Hc}$-transferable to $\Tc$ if for $\d_\Sc > 0$, there exists $\d_\Tc > 0$ such that ${\argmin} (\e_\Sc, \d_\Sc)_\Hc \subseteq {}{\argmin} (\e_\Tc, \d_\Tc)_\Hc$, where:

{\argmin} (\e_\Dc, \d_\Dc)_\Hc := \{h\in \Hc: \e_\Dc(h) \leq \inf_{h\in \Hc}\e_\Dc(h) + \d_\Dc\}. \nonumber

\vspace{-0.5em}
In the literature, the set ${\argmin}(\e_\Dc, \d_\Dc)_\Hc$ is also known as a $\d_\Dc$-minimal set of $\e_\Dc$, which represents the set of near-optimal classifiers. Note that the $\d$-minimal set depends on the hypothesis class $\Hc$. Throughout the paper, we omit the subscript $\Hc$ in the definition when there is no confusion. Def. says that near-optimal source classifiers are also near-optimal target classifiers. Furthermore, it is easy to verify that our transferability is transitive: if $\Sc$ is $(\d_\Sc, \d_\Pc)$-transferable to $\Pc$, and $\Pc$ is $(\d_\Pc, \d_\Tc)$-transferable to $\Tc$, then $\Sc$ is $(\d_\Sc, \d_\Tc)$-transferable to $\Tc$.

Next we define transfer measures, which we will show to be equivalent to Def. in Prop. .
[quantifiable transfer measures]
Given some $\Hf \subseteq \Hc$, $\e_\Sc^* := \inf_{h\in \Hf} \e_\Sc(h)$ and $\e_\Tc^* := \inf_{h\in \Hf} \e_\Tc(h)$, we define the one-sided transfer measure, the symmetric transfer measure and the realizable transfer measure respectively as:

&\mathtt{T}_{\Hf}(\Sc\|\Tc) := \sup_{h\in \Hf}\e_\Tc(h) - \e_\Tc^* - (\e_\Sc(h) - \e_\Sc^*), \\
&\mathtt{T}_{\Hf}(\Sc, \Tc) := \max\{\Tt_\Hf(\Sc\|\Tc), \Tt_\Hf(\Tc\|\Sc)\} = \sup_{h\in \Hf}|\e_\Sc(h) - \e_\Sc^* - (\e_\Tc(h) - \e_\Tc^*)|, \\
&\mathtt{T}^\rt_{\Hf}(\Sc, \Tc) := \sup_{h\in \Hf}|\e_\Sc(h) - \e_\Tc(h)|.

\vspace{-0.5em}
\noin
The distinction between $\Hf$ and $\Hc$ will become apparent in Prop. .

Note that the one-sided transfer measure is not symmetric. If we want the two domains $\Sc$ and $\Tc$ to be mutually transferable to each other, we can use the symmetric transfer measure. We call both quantities transfer measures. Furthermore, the symmetric transfer measure reduces to in the realizable case when $\e_\Sc^* = \e_\Tc^* = 0$. In statistical learning theory, $\e_\Dc(h) - \e_\Dc^*$ is often known as an excess risk , which is the error relative to the optimal classifier. The transfer measures can thus be represented as differences of excess risks. With Def. , we can immediately obtain the following result that upper bounds the target error:
[target error bound]
Given $\Hf \subseteq \Hc$, for any $h\in \Hf$, the target error is bounded by:
\be
\e_\Tc(h) \leq \e_\Sc(h) + \e_{\Tc}^* - \e_\Sc^* + \Tt_{\Hf}(\Sc \| \Tc) \leq \e_\Sc(h) + \e_{\Tc}^* - \e_\Sc^* + \Tt_{\Hf}(\Sc, \Tc).
\en

The first error bound of this type for a target domain uses the $\Hc$-divergence for binary classification (or, more rigorously, the $\Hc\Delta\Hc$-divergence). The main difference between ours and the $\Hc$-divergence is that the $\Hc$-divergence only concerns the marginal input distributions, whereas the transfer measures depend on the joint distributions over both the inputs and the labels. We note that Proposition is general and works in the multi-class case as well. Moreover, even in the binary classification case we can prove that our \Cref{thm:target_bound} is tighter than the $\Hc$-divergence (see \Cref{prop:compare_trans_measure_hdiv} in the appendix).

In practice we may not know the optimal errors. In this case, we can use the realizable transfer measure to upper bound the symmetric transfer measure (note that $\e_\Sc^*$ or $\e_\Tc^*$ may not be zero):
{prop}{UpperRealizable}
For $\Hf \subseteq \Hc$ and domains $\Sc$, $\Tc$ we have: $\Tt_\Hf(\Sc, \Tc) \leq 2\Tt_\Hf^\rt (\Sc, \Tc)$.
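As an illustration (our own toy construction, not from the paper), the three transfer measures and the bound $\Tt_\Hf(\Sc, \Tc) \leq 2\Tt_\Hf^\rt(\Sc, \Tc)$ can be checked directly on two discrete domains over $\Xc=\{0,1\}$, $\Yc=\{-1,+1\}$, with $\Hf$ taken to be the four deterministic classifiers; the joint distributions below are arbitrary choices.

```python
import itertools

# Joint distributions over (x, y) for a toy source and target domain.
P_S = {(0, -1): 0.4, (0, 1): 0.1, (1, -1): 0.1, (1, 1): 0.4}
P_T = {(0, -1): 0.3, (0, 1): 0.2, (1, -1): 0.2, (1, 1): 0.3}

def err(P, h):
    # 0-1 classification error of h: X -> Y under joint distribution P.
    return sum(p for (x, y), p in P.items() if h[x] != y)

# All four deterministic classifiers on X = {0, 1}.
H = [dict(zip((0, 1), ys)) for ys in itertools.product((-1, 1), repeat=2)]
eS = min(err(P_S, h) for h in H)   # optimal source error
eT = min(err(P_T, h) for h in H)   # optimal target error

T_one  = max(err(P_T, h) - eT - (err(P_S, h) - eS) for h in H)       # one-sided
T_sym  = max(abs(err(P_S, h) - eS - (err(P_T, h) - eT)) for h in H)  # symmetric
T_real = max(abs(err(P_S, h) - err(P_T, h)) for h in H)              # realizable

print(T_one, T_sym, T_real)
assert T_sym <= 2 * T_real + 1e-12  # the bound above; tight for this example
```

In this example the factor of 2 in the bound is attained, showing it cannot be improved in general.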
Since Def. essentially asks that the excess risks of approximately optimal source classifiers be comparable between the source and target domains, we can show that Def. and Def. are equivalent if $\Hf$ is a $\d$-minimal set:

[equivalence between transferability and transfer measures]{prop}{OneSide}

Let $\d_\Sc > 0$ and $\Hf = \argmin(\e_\Sc, \d_\Sc)$, and suppose $\inf_{h\in \Hf} \e_\Tc(h) = \inf_{h\in \Hc} \e_\Tc(h)$. If $\mathtt{T}_{\Hf}(\Sc\|\Tc) \leq \d$ or $\Tt_\Hf(\Sc, \Tc) \leq \d$, then $\Sc$ is $(\d_\Sc, \d + \d_\Sc)$-transferable to $\Tc$. Furthermore, if $\Sc$ is $(\d_\Sc, \d_\Tc)$-transferable to $\Tc$, then $\mathtt{T}_{\Hf}(\Sc\|\Tc)\leq \d_\Tc$ and $\Tt_\Hf(\Sc, \Tc) \leq \max\{\d_\Sc, \d_\Tc\}$.

In Prop. , we do not require $\Hf = \Hc$, since it is unnecessary to impose that all classifiers in $\Hc$ have similar excess risks on the source and target domains. Instead, we only constrain $\Hf$ to be a $\d$-minimal set, i.e., $\Hf$ includes the approximately optimal classifiers of $\Sc$. See also \Cref{eg:d_transfer_is_not_sim}. An additional assumption is that $\Hf$ also includes the optimal classifier of $\Tc$, which can be ensured by controlling $\d_\Sc$.

In this subsection, we compare the realizable transfer measure with other discrepancy measures between domains, focusing on the 0-1 loss $\e^{\rm 0-1}_{\Dc}$. We first note that $\Tt^{\mathtt{r}}_{\Hf}(\Sc, \Tc)$ can be written as an integral probability metric (IPM) . The l.h.s. of can be written as:

&\Tt^{\mathtt{r}}_\Hf(\Sc, \Tc) := d_{\Fc_\Hf }(\Sc, \Tc), \mbox{ where } d_{\Fc}(\Sc, \Tc) = \sup_{f\in \Fc} \left|\sum_y \int f(x, y) (p_\Sc(x, y) - p_\Tc(x, y)) dx \right|,

and $\Fc_{\rm \Hf} := \{(x, y)\mapsto \one(h(x)\neq y), h\in \Hf\}$. Typical IPMs include MMD, the Wasserstein distance, the Dudley metric and the Kolmogorov--Smirnov distance (see the Appendix for more details). However, $\Fc_\Hf$ is fundamentally different from these IPMs since it relies on an underlying function class $\Hf$. Our realizable transfer measure shares some similarity with , where a changeable function class is used, but the exact choices of the function class are different.

Even though the realizable transfer measure can be written as an IPM, it is in fact a pseudo-metric:
|
| 94 |
+
|
| 95 |
+
[pseudo-metric]{prop}{pmetric}
|
| 96 |
+
For a general loss $\e_\Dc$ as in , $\Tt^\rt_\Hf(\Sc, \Tc)$ is a pseudo-metric, i.e., for any distributions $\Sc, \Tc, \Pc$ on the same underlying space, we have $\Tt^\rt_\Hf(\Sc, \Sc) = 0$, $\Tt^\rt_\Hf(\Sc, \Tc) = \Tt^\rt_\Hf(\Tc, \Sc)$ (symmetry), and $\Tt^\rt_\Hf(\Sc, \Tc) \leq \Tt^\rt_\Hf(\Sc, \Pc) + \Tt^\rt_\Hf(\Pc, \Tc)$ (triangle inequality).
|
| 97 |
+
|
| 98 |
+
However, in general $\Tt^\rt_{\Hf}(\Sc, \Tc)$ is not a metric, since $\Tt^\rt_{\Hf}(\Sc, \Tc)$ may be $0$ even if $\Sc \neq \Tc$. For instance, taking $\Hf = \{h^*\}$ with $h^*$ the optimal classifier on both $\Sc$ and $\Tc$, we have $\Tt^\rt_\Hf (\Sc, \Tc) = 0$, but $\Sc$ and $\Tc$ could differ a lot (see Figure ). In the next result we discuss the connection between realizable transfer measures and total variation (c.f. \Cref{app:other_ipm}).
|
| 99 |
+
|
| 100 |
+
[equivalence with total variation]{prop}{upperTV}
|
| 101 |
+
For binary classification with labels $\{-1, 1\}$, given the 0-1 loss $\e_\Dc = \e_\Dc^{\rm 0-1}$, we have $\Tt^{\mathtt{r}}_{\Hf}(\Sc, \Tc) \leq d_{\rm TV}(\Sc, \Tc)$ for domains $\Sc, \Tc$ and any $\Hf \subseteq \Hc$. Denote $\Hc_t$ to be the set of all binary classifiers. Then we have $d_{\rm TV}(\Sc, \Tc) \leq 4 \Tt_{\Hc_t}^\rt(\Sc, \Tc)$.
|
| 102 |
+
|
| 103 |
+
Prop. tells us that transfer measures (see also Prop. ) are no stronger than total variation and, in the realizable case, are equivalent to the similarity of domains (as measured by total variation) if $\Hf$ is unconstrained. We can moreover show that transfer measures are strictly weaker if we choose $\Hf$ to be some $\d$-minimal set:
|
| 104 |
+
|
| 105 |
+
[20]{r}{0.5\textwidth}
|
| 106 |
+
\vspace{-10mm}
|
| 107 |
+
|
| 108 |
+
\includegraphics[width=0.33\textwidth]{images/Matching_joint_transfer.png}
|
| 109 |
+
\vspace{-2mm}
|
| 110 |
+
\caption{Visualization of Example . Source domain: $P_{\Sc}(Y=1, -1\leq X < 0) = 0.1$, $P_{\Sc}(Y=-1, 0\leq X < 1) = 0.9$. Target domain: $P_{\Tc}(Y=1, -1\leq X < 0) = 0.9$, $P_{\Tc}(Y=-1, 0\leq X < 1) = 0.1$. The dark and light colors show the intensity of the probability mass. The vertical axis denotes whether it is the target or source domain (above or below $x$-axis).}
|
| 111 |
+
|
| 112 |
+
[very dissimilar joint distributions but transferable]
|
| 113 |
+
We study the distributions described in Figure . The joint distributions are very dissimilar, i.e., for any $X, Y$ in the domain, $|p_\Sc(X, Y) - p_\Tc(X, Y)| = 0.8$. Define
|
| 114 |
+
\be
|
| 115 |
+
h_\rho(X) = \begin{cases}
|
| 116 |
+
1 & \textrm{ if } -1\leq X < \rho \\
|
| 117 |
+
-1 & \textrm{ if }\rho \leq X < 1
|
| 118 |
+
\end{cases}.
|
| 119 |
+
\en
|
| 120 |
+
We choose the hypothesis class $\Hc = \{h_\rho, \, \rho\in [-1, 1]\}$ and $\Hf = \{h_\rho, |\rho|\leq \d/0.8\}$ (for small $\d$, say $\d < 0.01$) to be some neighborhood of the optimal source classifier $h^* = h_0$.
|
| 121 |
+
Then $\Tt_\Hf(\Sc, \Tc) = \sup_{h\in \Hf} |\e_\Sc(h) - \e_\Tc(h)| = \d$, and $\Sc$ is $(\d_\Sc, \d + \d_\Sc)$-transferable to $\Tc$ on $\Hf$ for any $\d_\Sc > 0$ according to Prop. . Note that $\e_\Sc^* = \e_\Tc^* = 0$.
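The closed-form errors in this example can be checked numerically. Under the piecewise-uniform densities above, $\e_\Sc(h_\rho) = 0.9\rho$ and $\e_\Tc(h_\rho) = 0.1\rho$ for $\rho \geq 0$ (with the roles of $0.1$ and $0.9$ swapped for $\rho < 0$), so the gap is $0.8|\rho|$; a grid search over $|\rho| \leq \d/0.8$ (a sketch with an arbitrary small $\d$ of our choosing) recovers $\sup$ gap $= \d$:

```python
# Hedged numeric check of the example: densities are piecewise uniform,
# so risks of the threshold classifier h_rho have closed form.

def err(rho, p_pos, p_neg):
    """0-1 risk of h_rho when Y=1 has density p_pos on [-1, 0)
    and Y=-1 has density p_neg on [0, 1)."""
    if rho >= 0:
        return p_neg * rho   # [0, rho): true label -1, predicted 1
    return p_pos * (-rho)    # [rho, 0): true label 1, predicted -1

def e_S(rho): return err(rho, 0.1, 0.9)
def e_T(rho): return err(rho, 0.9, 0.1)

delta = 0.008                 # an arbitrary small delta < 0.01
radius = delta / 0.8          # the constraint |rho| <= delta / 0.8
grid = [-radius + 2 * radius * i / 1000 for i in range(1001)]
sup_gap = max(abs(e_S(r) - e_T(r)) for r in grid)  # should equal delta
```

The maximizing $\rho$ sits at the boundary $|\rho| = \d/0.8$, consistent with the gap $0.8|\rho|$ being monotone in $|\rho|$.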
|
| 122 |
+
|
| 123 |
+
In the last section we proposed a new concept called transferability. However, although Def. provides a theoretically sound notion of transferability, it is hard to verify in practice, since we cannot exhaust all approximately optimal classifiers, especially for rich models such as deep neural networks. Nevertheless, Prop. and Prop. provide a framework for computing transferability through transfer measures, which are comparatively simple quantities. In this section we discuss how to compute these quantities by making the necessary approximations based on transfer measures. There are two difficulties we need to overcome:
|
| 124 |
+
{\bf (1)} In practice we only have finite samples drawn from true distributions;
|
| 125 |
+
{\bf (2)} We need a surrogate loss such as cross entropy for training and the 0-1 loss for evaluation.
|
| 126 |
+
In \S we show that our transfer measures can be estimated with enough samples, and in \S we discuss transferability with a surrogate loss. These results will be used in our algorithms in the next section.
|
| 127 |
+
|
| 128 |
+
We show how to estimate the transfer measure $\Tt_\Hf(\Sc\|\Tc)$ from finite samples. Other versions of transfer measures in Def. follow analogously (see Appendix for more details).
|
| 129 |
+
|
| 130 |
+
[reduction of estimation error]{lem}{ReductionEst}
|
| 131 |
+
Given a general loss $\e_\Dc$ as in , suppose $\widehat{\Sc}$ and $\widehat{\Tc}$ are empirical distributions of i.i.d. samples drawn from $\Sc$ and $\Tc$, respectively. Then for any $\Hf\subseteq \Hc$ we have:
|
| 132 |
+
|
| 133 |
+
&\Tt_\Hf(\Sc\|\Tc) \leq \Tt_\Hf(\widehat{\Sc}\| \widehat{\Tc})+ 2{\rm est}_{\Hf}(\Sc) + 2{\rm est}_{\Hf}(\Tc), \nonumber
|
| 134 |
+
|
| 135 |
+
with the estimation errors ${\rm est}_{\Hf}(\Sc) = \sup_{h\in \Hf} |\e_\Sc(h) - \e_{\widehat{\Sc}}(h)|,\, {\rm est}_{\Hf}(\Tc) = \sup_{h\in \Hf} |\e_\Tc(h) - \e_{\widehat{\Tc}}(h)|.$
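A hedged numeric check of this reduction, in a toy setup of our own (the piecewise-uniform domains from the earlier example and a grid of threshold classifiers; none of this is from the paper's experiments). For the symmetric sup-gap version of the transfer measure, the triangle inequality gives the bound with single factors of ${\rm est}$, which the seeded simulation below verifies:

```python
import random

random.seed(0)

def true_risk(rho, p_pos, p_neg):
    """Population 0-1 risk of the threshold classifier h_rho."""
    return p_neg * rho if rho >= 0 else p_pos * (-rho)

def sample(p_pos, n):
    """n i.i.d. draws: with prob. p_pos, (x, 1) with x ~ U[-1, 0);
    otherwise (x, -1) with x ~ U[0, 1)."""
    out = []
    for _ in range(n):
        if random.random() < p_pos:
            out.append((random.uniform(-1.0, 0.0), 1))
        else:
            out.append((random.uniform(0.0, 1.0), -1))
    return out

def emp_risk(rho, samples):
    return sum(1 for x, y in samples
               if (1 if x < rho else -1) != y) / len(samples)

rhos = [i / 50 - 1 for i in range(101)]            # grid over [-1, 1]
S_hat, T_hat = sample(0.1, 2000), sample(0.9, 2000)

T_true = max(abs(true_risk(r, 0.1, 0.9) - true_risk(r, 0.9, 0.1)) for r in rhos)
T_emp = max(abs(emp_risk(r, S_hat) - emp_risk(r, T_hat)) for r in rhos)
est_S = max(abs(true_risk(r, 0.1, 0.9) - emp_risk(r, S_hat)) for r in rhos)
est_T = max(abs(true_risk(r, 0.9, 0.1) - emp_risk(r, T_hat)) for r in rhos)
# Triangle inequality: T_true <= T_emp + est_S + est_T,
# matching the spirit of the lemma (which carries factors of 2
# for the one-sided measure T(S || T)).
```

With 2000 samples per domain, the estimation errors are already small, so the empirical transfer measure is a faithful proxy for the population one.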
|
| 136 |
+
|
| 137 |
+
This lemma tells us that estimating transferability is no harder than bounding the estimation errors on both domains. If the function class $\Hf$ has the uniform convergence property , then we can guarantee efficient estimation of transferability. We first bound the sample complexity through Rademacher complexity, a standard tool for bounding estimation errors :
|
| 138 |
+
|
| 139 |
+
[estimation error with Rademacher complexity]{thm}{RadeEst}
|
| 140 |
+
Given the 0-1 loss $\e_\Dc = \e_\Dc^{\rm 0-1}$, suppose $\widehat{\Sc}$ and $\widehat{\Tc}$ are sample sets with $m$ and $k$ samples drawn i.i.d. from distributions $\Sc$ and $\Tc$, respectively. For any $\Hf\subseteq \Hc$ the following holds with probability $1 - \d$:
|
| 141 |
+
|
| 142 |
+
\Tt_\Hf(\Sc\|\Tc) \leq \Tt_\Hf(\widehat{\Sc}\| \widehat{\Tc})+ 4\Rf_m(\Fc_\Hf) + 4\Rf_k(\Fc_\Hf) + 2\sqrt{\frac{\log(4/\d)}{2m}} + 2\sqrt{\frac{\log(4/\d)}{2k}}, \nonumber
|
| 143 |
+
|
| 144 |
+
where $\Fc_{\Hf} := \{(x, y) \mapsto \one(h(x) \neq y),\, h\in \Hf\}$. If furthermore, $\Hf$ is a set of binary classifiers with labels $\{-1, 1\}$, then $2 \Rf_m(\Fc_{\Hf}) = \Rf_m(\Hf), \, 2 \Rf_k(\Fc_{\Hf}) = \Rf_k(\Hf)$.
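The empirical Rademacher complexity appearing in the bound can be estimated by Monte Carlo over random sign vectors. A sketch with made-up inputs and a small class of $\{-1,1\}$-valued threshold classifiers (the seed, grid, and sizes are all arbitrary choices of ours):

```python
import random

random.seed(1)

def emp_rademacher(xs, hypotheses, n_draws=200):
    """Monte Carlo estimate of the empirical Rademacher complexity
    R_m(H) = E_sigma [ sup_{h in H} (1/m) sum_i sigma_i h(x_i) ]."""
    m = len(xs)
    total = 0.0
    for _ in range(n_draws):
        sigma = [random.choice((-1, 1)) for _ in range(m)]
        total += max(sum(s * h(x) for s, x in zip(sigma, xs)) / m
                     for h in hypotheses)
    return total / n_draws

xs = [random.uniform(-1, 1) for _ in range(50)]
# threshold classifiers h_r(x) = 1 if x < r else -1, r on a coarse grid
H = [lambda x, r=r: 1 if x < r else -1 for r in [i / 10 - 1 for i in range(21)]]
R = emp_rademacher(xs, H)
```

Since each $h$ takes values in $\{-1,1\}$, the estimate always lies in $[0, 1]$, and for this small class it is far from the trivial upper bound.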
|
| 145 |
+
|
| 146 |
+
We also provide estimation error results using Vapnik–Chervonenkis (VC) dimension and Natarajan dimension in Appendix . It is worth mentioning that the VC dimension of piecewise-polynomial neural networks has been upper bounded in . Since transfer measures can be estimated, in later sections we do not distinguish the sample sets $\widehat{\Sc}, \widehat{\Tc}$ and the underlying distributions $\Sc, \Tc$.
|
| 147 |
+
|
| 148 |
+
Due to the intractability of minimizing the 0-1 loss, we need to use a surrogate loss for training in practice. In this section, we discuss this nuance w.r.t. transferability. We will focus on the most commonly used surrogate loss, cross entropy (CE), although some of the results can be easily adapted to other loss functions. To distinguish a surrogate loss from the 0-1 loss, we use $\e_\Dc$ from now on for a surrogate loss and $\e_\Dc^{\rm 0-1}$ for the 0-1 loss. One of the difficulties is the non-equivalence between $\d$-minimal sets w.r.t. the 0-1 loss and a surrogate loss, i.e. $\argmin(\e_\Dc, \d_\Dc)$ might be quite different from $\argmin(\e_\Dc^{\rm 0-1}, \d_\Dc)$.
|
| 149 |
+
Moreover, it is not practical to find all elements in $\argmin(\e_\Dc^{\rm 0-1}, \d_\Dc)$ since the loss is nonconvex and nonsmooth. In light of these difficulties, we propose a more practical notion of transferability based on surrogate loss $\e_\Dc$:
|
| 150 |
+
|
| 151 |
+
[transfer measure with a surrogate loss]{prop}{transferSurrogate}
|
| 152 |
+
Let $\e_\Dc \geq \e_\Dc^{\rm 0-1}$ be a surrogate loss on a general domain $\Dc$. Suppose $\Hf = \argmin(\e_\Sc, \d_\Sc)$ and denote $\e_\Tc^* = \inf_{h\in \Hf} \e_\Tc(h)$, $\e_\Sc^* = \inf_{h\in \Hf} \e_\Sc(h)$, $(\e_\Tc^{\rm 0-1})^* = \inf_{h\in \Hc} \e_\Tc^{\rm 0-1}(h)$. If the following holds:
|
| 153 |
+
\be
|
| 154 |
+
\Tt_\Hf(\Sc\| \Tc) = \sup_{h\in \Hf} \e_\Tc (h) - \e_\Tc^* - (\e_\Sc(h) - \e_\Sc^*) \leq \d,
|
| 155 |
+
\en
|
| 156 |
+
then we have $\argmin (\e_\Sc, \d_\Sc) \subseteq \argmin (\e^{\rm 0-1}_\Tc, \d + \d_\Sc + \e_\Tc^*- (\e_\Tc^{\rm 0-1})^*).$
|
| 157 |
+
|
| 158 |
+
This proposition implies that if the transfer measure is small, then a near-optimal classifier of the surrogate loss in the source domain is also near-optimal in the target domain for the 0-1 loss. It also gives us a practical framework to guarantee transferability, which we discuss in more depth in Section . Assume $\e_\Dc : \Hc \to \Rb$ is Lipschitz continuous and strongly convex, which is satisfied by the cross-entropy loss (see \Cref{app:functional_surrogate}). We are then able to relate the $\d$-minimal set to $L_p$ balls in the function space:
|
| 159 |
+
|
| 160 |
+
C_1 \| h - h^*\|_{2, \Dc} \leq \e_{\Dc}(h) - \e_{\Dc}(h^*) \leq C_2 \| h - h^*\|_{1, \Dc},
|
| 161 |
+
|
| 162 |
+
where $C_1$ and $C_2$ are absolute constants and $h^*$ is an optimal classifier. The function norms $\|\cdot\|_{1, \Dc}$ and $\|\cdot\|_{2, \Dc}$ are the usual $L_p$ norms over the distribution $\Dc$. Since the classifier $h = q(\theta, \cdot)$ is usually parameterized with, say, a neural network, we further upper bound the function norms by the distance between parameters: for $1\leq p < \infty$, $h = q(\theta, \cdot)$ and $h' = q(\theta', \cdot)$, we have $\| h - h'\|_{p, \Dc} \leq L \| \theta - \theta'\|_2,$
|
| 163 |
+
with $L$ some Lipschitz constant of $q$ (\Cref{app:functional_surrogate}). Combined with , we obtain:
|
| 164 |
+
|
| 165 |
+
\e_{\Dc}(h) - \e_{\Dc}(h') \leq LC_2 \| \theta - \theta'\|_2.
|
| 166 |
+
|
| 167 |
+
In other words, if the parameters are close enough, then the losses cannot differ too much. Here $\|\cdot\|_2$ denotes the Euclidean norm; for later convenience we will omit the subscript in $\|\cdot\|_2$.
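A hedged sanity check of this parameter-Lipschitz bound for the simplest parameterization we could pick, $q(\theta, x) = \langle\theta, x\rangle$ with inputs in the unit ball, where Cauchy--Schwarz gives $\|h - h'\|_{1,\Dc} \leq \|\theta - \theta'\|_2$, i.e. $L = 1$ (all numbers below are made up):

```python
import math
import random

random.seed(2)

def dot(a, b): return sum(u * v for u, v in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))

theta = [0.3, -0.5]
theta2 = [0.1, 0.2]

# sample inputs uniformly from the unit disc by rejection
xs = []
while len(xs) < 500:
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    if norm(x) <= 1:
        xs.append(x)

# empirical L1 distance between h = <theta, .> and h' = <theta2, .>
l1_diff = sum(abs(dot(theta, x) - dot(theta2, x)) for x in xs) / len(xs)
param_dist = norm([a - b for a, b in zip(theta, theta2)])
# Cauchy-Schwarz: |<theta - theta2, x>| <= ||theta - theta2|| * ||x||,
# so the empirical L1 norm never exceeds the parameter distance here.
```

For deep networks the constant $L$ depends on the architecture rather than being $1$, but the same shape of bound applies.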
|
| 168 |
+
|
| 169 |
+
# Method
|
| 170 |
+
|
| 171 |
+
The notion of transferability is defined w.r.t.\ domains, hence by learning feature embeddings that induce suitable feature distributions, one can aim to improve the transferability between two given domains. In this section we design algorithms to evaluate and improve transferability by learning such transformations. To start with, let $g:\Xc\to\Zc$ be a feature embedding (a.k.a. featurizer), where $\Zc$ is understood to be a feature space. By a joint distribution $\Dc^g$ (or $\Sc^g$, $\Tc^g$) we mean a distribution on $g(\Xc) \times \Yc$. Formally, we are dealing with push-forwards of distributions:
|
| 172 |
+
|
| 173 |
+
\Sc^g := (g, \id) \# \Sc, \, \Tc^g := (g, \id) \# \Tc,
|
| 174 |
+
|
| 175 |
+
where $(g, \id): (x, y) \mapsto (g(x), y)$ is a function on $\Xc \times \Yc$. $\Sc$ and $\Tc$ here are joint distributions on $\Xc \times \Yc$, and here we specify $\Xc$ to be the space of the original signal such as an image. Since $\Sc$ and $\Tc$ cannot be changed, what we are evaluating here is the feature embedding $g$.
|
| 176 |
+
|
| 177 |
+
The key quantity is the transfer measure as in :
|
| 178 |
+
|
| 179 |
+
\Tt_\Hf(\Sc^g \| \Tc^g) = \sup_{h\in \Hf} \e_{\Tc^g}(h) - \e_{\Tc^g}^* - (\e_{\Sc^g}(h) - \e_{\Sc^g}^*), \quad \Hf = \argmin(\e_{\Sc^g}, \d_{\Sc^g}).
|
| 180 |
+
|
| 181 |
+
Although $\Hf$ is hard to compute, we can use to obtain a lower bound of . That is, given a parametrization of the classifier $h = q(\theta, \cdot)$ and the optimal classifier $h^* = q(\theta^*, \cdot)$, we have:
|
| 182 |
+
|
| 183 |
+
\small
|
| 184 |
+
\Tt_\Hf(\Sc^g \| \Tc^g) &\geq \sup_{\|\theta - \theta^*\| \leq \d} \e_{\Tc^g}(h) - \e_{\Sc^g}(h) - \e_{\Tc^g}^* + \e_{\Sc^g}^* \tr
|
| 185 |
+
&\geq \sup_{\|\theta - {\theta^*}\| \leq \d} \e_{\Tc^g}(h) - \e_{\Sc^g}(h) - \e_{\Tc^g}(\widehat{h^*}) \tr
|
| 186 |
+
&\approx \sup_{\|\theta - \widehat{\theta^*}\| \leq \d} \e_{\Tc^g}(h) - \e_{\Sc^g}(h) - \e_{\Tc^g}(\widehat{h^*})
|
| 187 |
+
|
| 188 |
+
where $\d > 0$ depends on $\Hf$ and the constant in . In the second and third lines, we bound the optimal errors $\e_{\Tc^g}^*$ and $\e_{\Sc^g}^*$ using $0 \leq \e_{\Sc^g}^* \leq \e_{\Sc^g}(\widehat{h^*})$ and $0 \leq \e_{\Tc^g}^* \leq \e_{\Tc^g}(\widehat{h^*})$, and use the learned classifier $\widehat{h^*} = q(\widehat{\theta^*}, \cdot)$ as a surrogate for the optimal classifier. As a result, if the r.h.s. of is large, then $\Sc^g$ is not quite transferable to $\Tc^g$.
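A minimal sketch of this attack, assuming a one-parameter linear model with squared loss standing in for the classifier (the datasets and constants are made up, and closed-form gradients replace autodiff): projected gradient ascent on the source-target gap inside the $\d$-ball around $\theta^*$:

```python
# Hedged sketch of the lower-bound attack: maximize e_T(h) - e_S(h)
# over ||theta - theta*|| <= delta by projected gradient ascent.
# Model: h(x) = theta * x with squared loss; data are made up.

def risk(theta, data):
    return sum((theta * x - y) ** 2 for x, y in data) / len(data)

def grad(theta, data):
    return sum(2 * (theta * x - y) * x for x, y in data) / len(data)

src = [(1.0, 1.0), (2.0, 2.0)]          # source follows y = x
tgt = [(1.0, 2.0), (2.0, 4.0)]          # target follows y = 2x
theta_star, delta, lr = 1.0, 0.5, 0.05  # theta* = 1 fits the source

theta = theta_star
for _ in range(100):
    theta += lr * (grad(theta, tgt) - grad(theta, src))   # ascent on the gap
    theta = max(theta_star - delta, min(theta_star + delta, theta))  # project

gap = risk(theta, tgt) - risk(theta, src)  # large gap => not transferable
```

Here the ascent drives $\theta$ to the boundary of the ball, where the target risk is much larger than the source risk: exactly the certificate of non-transferability the attack seeks.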
|
| 189 |
+
|
| 190 |
+
\setlength{\textfloatsep}{1pt}
|
| 191 |
+
|
| 192 |
+
Input: learned feature embedding $g$, learned classifier $\widehat{h^*} = q(\widehat{\theta^*}, \cdot)$, target sample training set $\Tc = \Sc_0$, sample training sets $\Sc_1$, \dots, $\Sc_n$, ascent optimizer, minimal errors $\e_{\Sc_i}^* \approx \e_{\Sc_i}(\widehat{h^*})$, adversarial radius $\d$ \\
|
| 193 |
+
Initialize: a classifier $h = q(\theta, \cdot)$ and $\theta = \widehat{\theta^*}$, gap $ = -\infty$ \\
|
| 194 |
+
\For{$t$ in $1 \dots T$}{
|
| 195 |
+
Find $\max_i \e_{\Sc_i}(h\circ g)$ and $\min_i \e_{\Sc_i}(h\circ g)$
|
| 196 |
+
and corresponding indices $j$ and $k$
|
| 197 |
+
\\
|
| 198 |
+
Run an ascent optimizer on $h$ to maximize $ {\rm gap}_0 = \e_{\Sc_j}(h\circ g) - \e_{\Sc_k}(h\circ g)$ \\
|
| 199 |
+
Project $\theta$ onto the Euclidean ball $\|\theta - \widehat{\theta^*}\| \leq \d$\\
|
| 200 |
+
\If{${\rm gap}_0 > {\rm gap}$}{${\rm gap} = {\rm gap}_0$, save accuracies and losses of each domain}
|
| 201 |
+
}
|
| 202 |
+
Output: $j$, $k$, $h$, $\e_{\Sc_j}(h\circ g) -\e_{\Sc_k}(h\circ g)$, $\e_{\Sc_j}(\widehat{h^*})$, $\e_{\Sc_k}(\widehat{h^*})$
|
| 203 |
+
\caption{Algorithm for evaluating transferability among multiple domains}
|
| 204 |
+
|
| 205 |
+
We can thus design an algorithm to evaluate the transferability in Section . By computing the lower bound in , we can disprove transferability in the sense of Prop. and Prop. . Computing the lower bound in can be regarded as an attack method: an adversary tries to show that $\Sc^g$ is not transferable to $\Tc^g$. Against this attack, we can also design a defence method that minimizes the lower bound and thus learns more transferable features.
|
| 206 |
+
|
| 207 |
+
\vspace{-0.2em}
|
| 208 |
+
|
| 209 |
+
\vspace{-0.2em}
|
| 210 |
+
|
| 211 |
+
In domain generalization we have one target domain and more than one source domain. To ease the presentation, we denote $\Sc_0 = \Tc$ (and thus $ \Sc_0^g = \Tc^g$) and extend the index set to $\{0, 1, \cdots, n\}$. We need to evaluate the transferability between all pairs of $\Sc^g_i$ and $\Sc^g_j$. Algorithm gives an efficient method to compute the worst-case gap $\sup_{\|\theta - \widehat{\theta^*}\| \leq \d} \e_{\Sc^g_i}(h) - \e_{\Sc^g_j}(h)$ among all pairs $(i, j)$. Essentially, it finds the worst pair $(i,j)$ at each step such that
|
| 212 |
+
the gap $\e_{\Sc^g_i}(h) - \e_{\Sc^g_j}(h)$ takes the largest value, and then maximizes this gap over the parameter $\theta$ through gradient ascent.
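The pair-selection step can be sketched as follows (the risk values are made up, and `worst_pair` is our own helper name, not from the paper):

```python
# Hedged sketch of the worst-pair selection inside the evaluation loop:
# among domains S_0..S_n, pick j = argmax_i e_{S_i}(h o g) and
# k = argmin_i e_{S_i}(h o g); their gap is then maximized over theta.

def worst_pair(risks):
    j = max(range(len(risks)), key=lambda i: risks[i])
    k = min(range(len(risks)), key=lambda i: risks[i])
    return j, k, risks[j] - risks[k]

risks = [0.31, 0.12, 0.45, 0.20]   # made-up e_{S_i}(h o g) for i = 0..3
j, k, gap0 = worst_pair(risks)
```

Only the currently worst pair is attacked at each step, which keeps the cost linear in the number of domains rather than quadratic.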
|
| 213 |
+
|
| 214 |
+
Note that this computation also depends on information from the target domain. This is valid since we are only evaluating, not training, on these domains.
|
| 215 |
+
|
| 216 |
+
The evaluation sub-procedure provides us a way to pick a pair of non-transferable domains $(\Sc_i^g, \Sc_j^g)$, which in turn can be used to improve the transferability among all source domains by updating the feature embedding $g$ so that the gap $\sup_{\|\theta - \theta^*\| \leq \d} \e_{\Sc^g_i}(h) - \e_{\Sc^g_j}(h)$ is reduced for all pairs $(i, j) \in [n]\times[n]$. Simultaneously, we also require that the feature embedding $g$ preserves information for the target task of interest. With the parametrization $h = q(\theta, \cdot)$, $h' = q(\theta', \cdot)$, the overall optimization problem can be formulated as:
|
| 217 |
+
|
| 218 |
+
\min_{g, h} \max_{\|\theta' - \theta\| \leq \d} \frac{1}{n}\sum_{i=1}^n \e_{\Sc_i}(h \circ g) + \left( \max_i \e_{\Sc_i}(h'\circ g) - \min_i \e_{\Sc_i}(h'\circ g) \right).
|
| 219 |
+
|
| 220 |
+
Intuitively, we want to learn a common feature embedding and a classifier such that all source errors are small and the pairwise transferability between source domains is also small. If the optimization problem is properly solved, then we have the following guarantee:
|
| 221 |
+
[optimization guarantee]{thm}{OptGuarantee}
|
| 222 |
+
Assume that the function $q(\cdot, x)$ is $L_{\theta}$ Lipschitz continuous for any $x$. Suppose we have learned a feature embedding $g$ and a classifier $h$ such that the loss functional $\e_{\Sc_i^g}: \Hc \to \Rb$ is $L_{\ell}$ Lipschitz continuous w.r.t. distribution $\Sc_i^g$ for $i \in [n]$ and
|
| 223 |
+
\be
|
| 224 |
+
\small
|
| 225 |
+
\max_{\|\theta' - \theta\| \leq \d} \frac{1}{n}\sum_{i=1}^n \e_{\Sc_i}(h \circ g) + \left( \max_i \e_{\Sc_i}(h'\circ g) - \min_i \e_{\Sc_i}(h'\circ g) \right) \leq \eta,
|
| 226 |
+
\en
|
| 227 |
+
where $\theta, \theta'$ are parameters of $h$ and $h'$. Then for any $h' \in \Hf = \{q(\theta', \cdot) :\|\theta - \theta'\| \leq \d\}$, we have:
|
| 228 |
+
\be
|
| 229 |
+
\small
|
| 230 |
+
\Tt_\Hf^{\rt}(\Tc^g_1, \Tc^g_2) \leq \eta, \quad \e_{\Sc_i}(h'\circ g) \leq \eta + L_{\ell}L_{\theta} \d, \quad \e_{\Tc}(h'\circ g) \leq 2 \eta + L_{\ell}L_{\theta} \d,
|
| 231 |
+
\en
|
| 232 |
+
for any $\Tc_1^g, \Tc_2^g, \Tc^g \in \conv(\Sc_1^g, \dots, \Sc_n^g)$ and any $i \in [n]$.
|
| 233 |
+
|
| 234 |
+
The Lipschitzness assumption for $\e_{\Sc_i^g}$ is mild and is satisfied by the cross-entropy loss (c.f. \Cref{app:lipschitz_cont_loss}). Here $\conv(\cdot)$ denotes the convex hull in the same sense as , i.e., each element is a mixture of source distributions.
|
| 235 |
+
Thm. tells us that if we solve the optimization problem properly, we can guarantee transferability on a neighborhood of the classifier, which serves as an approximation of the $\delta$-minimal set. We thus propose Algorithm , which shares similarities with existing frameworks such as DANN and Distributional Robust Optimization , in the sense that they all involve adversarial training and minimax optimization. However, the objective in our case is different, and we provide a more detailed comparison with existing methods in Appendix .
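As a hedged caricature of the minimax dynamics (our own toy, not the paper's algorithm): take two 1-d "domains" with risks $r_1 = (\theta-1)^2$ and $r_2 = (\theta+1)^2$, so the average risk is $\theta^2 + 1$; for this toy, the inner maximization of the gap over $|\theta'-\theta|\le\d$ has the closed form $4(|\theta|+\d)$, and subgradient descent on the resulting objective pulls $\theta$ toward $0$, where the two domains' risks agree:

```python
# Hedged 1-parameter caricature of the minimax objective:
# J(theta) = theta^2 + 1 + 4 * (|theta| + delta), where the second term
# is the closed-form adversarial gap max_{|t'-t|<=delta} |r1(t') - r2(t')|.

def objective(theta, delta):
    return theta ** 2 + 1 + 4 * (abs(theta) + delta)

def subgrad(theta):
    # subgradient of J; the sign term comes from 4 * |theta|
    return 2 * theta + (4 if theta > 0 else -4)

theta, delta, lr = 0.8, 0.1, 0.05
start = objective(theta, delta)
for _ in range(100):
    theta -= lr * subgrad(theta)
end = objective(theta, delta)  # descent shrinks both risk and gap terms
```

The iterate settles into a small oscillation around $0$, mirroring how the full algorithm trades off average source risk against the adversarial pairwise gap.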
|
| 236 |
+
|
| 237 |
+
[H]
|
| 238 |
+
Input: samples sets of source domains $\Sc_1, \dots, \Sc_n$, feature embedding $g$, classifier $h = q(\theta, \cdot)$, adversarial classifier $h' = q(\theta', \cdot)$, surrogate loss $\e_{\Dc}$, adversarial radius $\d$, ascent optimizer, descent optimizer, weight parameter $\lambda$, number of epochs $T$ \\
|
| 239 |
+
\For{$t$ in $1 \dots T$}{
|
| 240 |
+
Compute $\max_i \e_{\Sc_i}(h\circ g)$ and $\min_i \e_{\Sc_i}(h\circ g)$ \\
|
| 241 |
+
Initialization $h' = h$ (or $\theta' = \theta$) \\
|
| 242 |
+
\For{$k$ in $1 \dots N$}{
|
| 243 |
+
Run the ascent optimizer on $h'$ to maximize $\max_i \e_{\Sc_i}(h'\circ g) - \min_i \e_{\Sc_i}(h'\circ g)$ fixing $g$ \\
|
| 244 |
+
Project $\theta'$ onto the Euclidean ball $\|\theta' - \theta\| \leq \d$}
|
| 245 |
+
Fixing $h'$, run the descent optimizer on $g, h$ to minimize ${\rm error} = \frac{1}{n}\sum_i \e_{\Sc_i}(h\circ g) + (\max_i \e_{\Sc_i}(h'\circ g) - \min_i \e_{\Sc_i}(h'\circ g))$
|
| 246 |
+
|
| 247 |
+
}
|
| 248 |
+
Output: feature embedding $g$, classifier $h$
|
| 249 |
+
\caption{Transfer algorithm for domain generalization}
|
2107.10140/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-06-04T13:53:05.056Z" agent="5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36" version="14.7.2" etag="HW2SBAhbD-1MOwIiOoKW" type="google"><diagram id="mq6HoX9RGSEmGGwDlS_u">7V1Zd6pKt/01e4x7H8igbx5BRBQbBAX1xUHfSCcgjb/+YtQkNjkn+9tJ7rcNyYO6qoCquVZNqpaT8hfSCateqiXuKDat4BcMmtUvhP0FwygGIc3LwVIfLTBOnCxO6plHG/RqkL29dTKCJ+vOM63somIex0HuJZdGI44iy8gvbFqaxuVlNTsOLq+aaM7piuCrQTa0wLqppnpm7h6tJPamNm95jnu+MgSeSkLtXPlkyFzNjMs3JqT7C+mkcZwf34VVxwoO6J1xOR7HvVP60rDUivKPHAAfDyi0YHfq2y8YD5pDmdDKtQOErpZm1vFc+HZ3aBezy22AfP3YvHNOr89H6od+5fUJrHMtO45yoDyhQjdVojgNteDtac6tOlvM2MgAL8qtNNIC4OB0AEZ0FIUNDCBs2wYQGIQB2IYtwLYhzbRMlERQ+P2WJYeg89KLiwR5etGG25YHXmQB7puWQ09I033wWF/XjI2TxrvIBIw4iNNjlV8wYj//vdQLtdTxIiCPk2MNMMmvy/Q4z+PwTfF7/cgSLXof5Ox5vDw3FHzCklO/notsLfSC+liopV4D/3PZm4anjv4/Tes7hyacXv73nzqap1qUnePt5Tpv/YyC4JuSU5vfBMBrWXFoUpTfluZWlQOmZcSplntxdK4QWcfiwkpzrxmdgBZ4zqlU1zLr4Lgjalc49sPDAG/wea57xLQZI0dYL6FuzEkbRm0Y3Qujf8VRj1PTSt9c5+wp08uSQDsh6EXPoaEHsbF5qRE3rbGDw63hUMX1TNOKXgrL403nUILBSFK9FLwNr+Z+dCh5r9Fe6BzanBoXLXbz/HAPpQ93DJgLXOTJiWMnsHaZlTZ307zxz5PRBBfMdQvFy9Z+No/q5WqOJgsspEmLmODbyiisYrBabmKtP/Zoos5zzjJGaRyxKymfbVjC9MT9ECYVYzgcIE4RJERsE1LPxkQ2HsAy2qnYSaJZtCrojj3c48MJL3cUjNG3PNbFaW9M5l6fGHbMeVe9GHXly934bGzwuajhvt6Xz9YGqH8ZuqeBFVh2fh5Wr5jfjsh/QP19lmnJpyWflnwuyAd7l3zgXSgRYxbuLWyyxxj2ZC6qxNCbOrRPzmJSobPhjsksy0qRELXUWdxN7CVSk8vc0kAT1XgfmgCct5gFSeQxVZYvs0g1zDpA9wsCLBwxRyXEYbdsJ9/xi4k2wIU0QmDampMOgA+7aDmNCbjkW/Jpyacln4ckH/xd8hlMlqNEwXtdaesv1QHKT4a1LxECl6KkMd5VHdpBF7jkmmBo1+Eqp/Y05fRUQRPsWt441j4eJYlXFES/RNdbMCxLd6bsDVra7GQh91cDPy96S6uuEgXrL9x9RxBzzu/zZdEx6c3AaCjLwFEnaMmnJZ+WfB6SfN6f+WhzXNNA1ZMBaRUtZCCrFlJZItNFwSOzAujKMMlOoA5ejtbkrI6HmeqWfcSVcqqTkxbHYomNxzt5yw8AdwlLpOqHk4486UTijmS2gD5vll0ez82mSAcV126kqCJUTQiMzn11ztTshNk2M6HhpiWflnxa8nlI8nk/56NW8MCKV+XCChlzmEZkSZdlHMWTrgmo5srosfx0xcBi6CZbcUJIsjRlbb2n8HO0F+PozqWVlZRiDDxJIZ6aFntSUOdyCIEmAUnaVLdpcKIIIsVyWyZbIK4jdb1tiadong8q0oIGWA/2FThvyacln5Z8HpJ83p/5k
INdh0N1JtoXgIfRzm40F10YHIBTk13WYuwSQY8Dht1+AY9iS/TnOiTRmzhOJUNBmT1rbtV6Omaomd5T97wiUArh9aKoA/dmBrtZDceDeurnBEohULHz87ECpfutQe1ms2JRriGQFgNnChBhSz4t+bTk85Dk837OhwqLTJ6LXceceFjZdzwtHHFGlhSQwouKzhjLrk/b9aLHwBxh9TU2d2PSR0DRAvt2B7eW2XoyE4hVsRA4orDXZALyJmEz6JSRBBafxHiFd5cF1R8t3FFPD0mtcIcjdKIuDAqrx4bkiooX19BPIx89/XdLc5z+aoMv+gQfRkFjL10vt+REMw7mMtWSZ++HzVBhoUNMn2U4h17aXhB0joO1OQPCPf+9nPswbqzqXbkN9CLiCaueFYdWntZNlfMBOHkS/pykTwiEPEGn07y6E8bOUqI3DjzIn05W7aRfcl4u8Crxad6cVD73FT/IHcXPFWbPfGWZJzg+A7osT+ONdS55IaE/x5O8xhOm7uEJIXfwRD4BTfSh0LyNTvIumtC96PwENLGHQvNFD3hCE38nNskvQhN/LDSpSzSpb45N4rHQxC/RhGDwfnASXwQn+VhwYvAlnBgIfutYpx4MTuwSToK8P9i/Cs4ziTwKnlejHQaJb8YTuoPn16vFCQoDIQKDjmpx3EQxwIIJDCBQgzRIWzdNyHx/9fIViY9vSWjcTzockhTwYWCdUhRX7//37oW08BDIkZ4l/75+u7sya9NJx3TS+TI3GaUXkJvmImjzekwxXVtfTnCbdnrNPL0mn/4Zmbf5p4uGXaegLgpvs1AXxfcSUecK7+eizjX+MR31E3NR6Lu5qLnpbVZbNHb6rt9xFvssGuQgXIuSJa79dGjZdHfdC8pFgKCW2kENVR1Na0HI5d2QjLajZanb1Wgaqhy3pIQCTbhNP+TwYbmp92MtDmhlYnDbfmcsVD7bTarOaFxJRree4imhIkW9Hi9IGOQy56flolrmapmrZa4/yaKPjWjW03KUZHJYNbG8TkJEqyuju16xiOtaPSAYIFAUr8eMjZchS7njtUNlOTDMA4MnVvzOV6S4h2a1TAv7uJAMICZn5SjJRGSmjsY+ibrYPO/svAqaKP52lhiz3s4XuBHgL1eRL/JuNVnIQMtcLXO1zNUy18fFBy6rqx11Ki1tjjUIZOsoxt5YWhuNyNP5OGXIQkro1XQP9SxjAqk0Xseun7vEBtNjOV4MOzGP5dCsK/SEjOBJUcm2gC9o5lCHCAtRZ/yAssfKyJzawxT1EKvQdqi8Nno9ej3TR13ZSoqKdaley1wtc7XM1TLXx5lr6ujarru1NnIKp7Y+pbNBqgIMZs8W2maE4fl+vBV5S1A4em3C+2y131Osvt+EfNpbLQZIJbJ7bYBsIwmsge182dVyzEc2faPCd11FYsLuBJMEeFpvISmYc4o396ZDUQIYheio+3XSjxw3pMV2ztUyV8tcLXP9xmpR2HtRpk8oYwHNFDmolkuOn4w5Lwj2tlKsB1rIklsc8epJoPEe6UUh2VkaQIYOlnKZwlFXnIVAuFjDS4WlpUnHj1Xdz8ygivGlt9kLhdBzkuWytHuLmaEt1xFnjUa63LUYdy5k4JqBEEHlvFbw2TJXy1wtc/3GczJ9QGE0YrHczzEUd/SIFTZAd1ZXIqHMJCll1r3hPE+XYjVZuTwTclMoGCzFJAYBtz9HEDNL0mk/jXsAok7qdDlieb/qlylt+SoC1yIhhDZY4ZloD9GVZS8VRUwrcGbRAD1Gg0DSkbg7SoLOj1OLtszVMlfLXH/y3WK+Mvbr3F1y0la30tlUycspy5pikkE7KDPZOOOzlYkryYziFHqG2eN0auiu3FWdaGPn5bbH+bPu2DEifzGQq/1uLpJin4amVp+VIyydezt7UHRrgQt9cQNUgC0l/ISG9CEbgnNkXQn7HNnQP25Xpy9RtX+CdAgHqQvlEAKRN6qhd8TqZw3VH+mG7u1P+dfqsEgIvwQTR
W4lWPBXSbAeSvhPXKmBceQ71WwPpfon4UsoqfPni+cnsC+C8qEk/8TV8xMQfE9kSX1VWD6U4P8GSwy8xbK5Lz99VWQ+lOD/Bs0XAv0mNB9K73+NJgyS3xub9+T+Xy+n1iACBGHdOMqpNUpDAROHDptvm2QTUQQCG9T7k+RWTt3KqduUR5vy+G/7glxQuA2fZwC24HJkAOryEOJTax1o2m7aRaBg7fa7jtrj17u5KK0RVkvyNYobqGjz48lCG4R7HsLR0OnapNyXWcgPI5MaynFsq0RJ7YiV64gRtEL2uw1vm+o4rNY4mKbWHqnrnuc4xiQD51Rv+dNSHi1ztczVMteffEE+BBl8VjU96QLqlJ0sqIqDyA0USOOujPKVuOXAQIWg3K8NLe8QAxzW0sV6jBtAyW5SyyyRPBU3lOZwSriKR90dgwvzZK506bJYVdvViuaHm2rMxsZ80BPxFN0tRDnnbYj0iEwpo3y3cow4GbXM1TJXy1wtc32cuRDcriVmbaArdFdndewK4ngAOjLqK/ygv6qTvUCD2YAzB/wcG9WLycieeF2omK8tAaeLZeVtQICowOHcJbbaIIYGw5IpEALzXbmOMnKTuIAwohSM9UtorRVyviFwVO/PtgtgQmvQRqK7BDQdt8zVMlfLXC1zfXy1qMgDdDfu7+EKiwZAdxXGSyxKJGaZaCMe7XVZTpzvl8ByiwclKK2BtVZy3BT0fThHVGXsMajZ3Wx1VyDHgRLJjF/wBTRPWB6zYl+V12toqKYRZqsIAtDEzNcpSerZhZYNI7WottFGoIm+IUstc7XM1TJXy1wfn3NtoyLt8f2Ep6NdWLFLSOA69XC0Wo37wx47s7cbe18xJTBfyQ4YrYJ53xvSm2Fh67JlzEQhFSDS4SWZkVTNZZPcl7x8AhlWMFyNh4rpUlbBDoebjNdEZgrhJA56o2ZKJ85wL2AndskONDwQR2jLXC1ztczVMtfHRYk2L4aARQo57vCQ1ZHgFctEQywL5mQ5nC6YvDRCnoMFddOzZQHPmKyfbQxbXJI404fkZVrjMbIzecEL99jaX7PeyO8s17mn9QrZxkQokxjVWWvdLRtNNyHHTeZUJKQThJLDznowmlV9buvau5a5WuZqmatlro/PuSYyOtI6OTKraJAn0hTnA2WU807GSNNRSjgobyf1WoUyRfFjlVrp+B7TswqIl9YAt0kUqou6W4+5nQ84M7AcTPszlatCX907XqDYI3pVO148d4RhxGzqkY9YI22jlzbmC3DKxmTY6QykHZL8NOb67xBPQ9DVDsDIWaL3r+rpZkj/uUwIBh9JdAUhV7tYImeMvkM+Dd/bwfIvxhK+xPJb9dNn6B4FS+QSy7sC6i/D8qFk/TdYQi9b8X4LmA8l7L8Z5Hcl1F8mR3+w3fwR6HpP6jsK6q8D86G0/TeRCYPkd4J5T9r/r/Lp13Uigjez1+fJ5tUEdpJ6zSz3eUn4IuH9g9P1Q82xPulcvzMzfhsm9yLgbbicTKelIGtYB8V4YzgvEulTQdgszg6XuRuXl5F76JL83CMWwT8n4NCrDeUP2/PjIPX6dzsv/7Lgu/ckxGcEXyeNk6QB8dDIw5IrtQ4HmW0s/rfFInTJfQTZxOIHZ9+fsir8jx4e+YBbe8/QNTVm6a7pxc928vUvWID3vfxVLHOeRn+6kxXPKpty6IBG+souP9TL1xNsBCOeyDsPdeN33IxgTxj+CZ7+j35a4+OehltPHzyNXnkax++z9t38E/6EE5/g6Xtpk8/wtGQFnqZ7gZcfejzSksSLnB/u72v+ptDv5e97aZ0vGdWdX+88lviHkzzaPvj18MVN85rEwfHbpDao3q56IQr7aFB9xo9VIr+V3oLegekjSYSTP0wtc1/OdenEX82Niz78fw60MEhcZRRgCr33E1df9f0A8lvZrr8M2+tsDXT/59jQr8L2t5Jf34etlhonhgA/CWno+ofaYPQu0l8Wxb+16cVfjDSF4FdA38nmo
ugtyp+xAkd+azOMvxhlCARv4vk20/ZlwfxbP4r5N8OM3tAGdhvNXwUzei/V8JAwYxj5/xfN6G8JEP5mmHHkGmbkG2H+jEX2zWKoWzVLrMA6Lrx+8hIIetaRvXEuAoJPxO2ekvC97DcOP52D/o9c/Blr639wMfLDXUyC8OX4pcgnkvpeF99b6X6ii6Gf7WICJZ6wb/Zx8zGND+54Kes1vXVHsWkdavwf</diagram></mxfile>
|
2107.10140/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,30 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
In this work, we focus on the challenging problem of *source-free* domain adaptation [\[1–](#page-8-0)[4\]](#page-8-1) for semantic segmentation [\[5,](#page-8-2) [6\]](#page-8-3). Consider a deep model trained to perform semantic segmentation deployed atop an autonomous vehicle. While unsupervised domain adaptation (DA) has been extensively studied [\[7](#page-8-4)[–11\]](#page-8-5), most prior DA methods assume continued access to labeled source data during adaptation. In our example, this may be impractical due to the limitations of on-board compute and memory, particularly so for a compute-heavy task such as segmentation. Further, such access to source data on-board may also be subject to and limited by privacy regulations.
|
| 4 |
+
|
| 5 |
+
Concretely, our goal is to adapt a trained semantic segmentation model to a new target domain given only its trained parameters and unlabeled target data. The absence of source data for regularization makes this source-free adaptation setting very challenging and highly susceptible to divergence from original task training, leading to catastrophic
|
| 6 |
+
|
| 7 |
+
<span id="page-0-0"></span>
|
| 8 |
+
|
| 9 |
+
Figure 1. We study source-free domain adaptive semantic segmentation, where the goal is to adapt a well-trained source model to a target domain given only unlabeled target data. Under a domain shift, many source model predictions on the target domain are initially incorrect; unconstrained self-training would reinforce such errors and degrade performance. We propose AUGCO, a selective self-training algorithm that identifies reliable predictions on which to self-train based on pixel-level predictive consistency across diverse target image views that vary in appearance, scale, and context, and leads to state-of-the-art performance in the source-free setting.
|
| 10 |
+
|
| 11 |
+
performance loss.
|
| 12 |
+
|
| 13 |
+
To address the challenging nature of this task, recent methods have introduced complex multi-part solutions: Liu *et al.* [\[6\]](#page-8-3) propose an approach combining attention, knowledge distillation, self-training, and patch-level self-supervised learning, whereas Fleuret *et al.* [\[12\]](#page-8-6) propose combining entropy regularization with feature corruption using multiple auxiliary decoders. These methods introduce several hyperparameters which make them challenging to tune, particularly so in the absence of any labeled data whatsoever.
|
| 14 |
+
|
| 15 |
+
In contrast, an alternative line of work has focused on a remarkably simple solution: parameter-constrained self-training. For example, test-time adaptation by entropy minimization, or TENT [\[5\]](#page-8-2), constrains optimization to update only the model's batch-norm parameters (both affine and normalization), and self-trains on unlabeled target data by minimizing a conditional entropy [\[13\]](#page-8-7) loss. By keeping all
|
| 16 |
+
|
| 17 |
+
<sup>\*</sup>Equal contribution
|
| 18 |
+
|
| 19 |
+
<span id="page-1-0"></span>other parameters frozen, TENT is able to prevent task drift in the source-free adaptation setting.
|
| 20 |
+
|
| 21 |
+
While TENT leads to modest performance improvements on standard domain shifts, it performs self-training on *all* model predictions. Under a domain shift, many of the model's predictions may initially be incorrect, and entropy minimization encourages the model to increase its confidence even on such incorrect predictions! As a result, unconstrained self-training leads to error accumulation [\[14–](#page-8-8)[16\]](#page-8-9), particularly on categories on which the source model does poorly to begin with.
|
| 22 |
+
|
| 23 |
+
To address this, prior work has proposed *selective* self-training on instances deemed *reliable* via model confidence [\[17\]](#page-8-10) or consistency under random image augmentations [\[16\]](#page-8-9). However, model confidence from deep networks is known to be miscalibrated under a domain shift [\[18\]](#page-8-11), and the suitability of augmentation consistency for semantic segmentation has not been previously studied. We propose a novel selection strategy that combines pixel-level predictive consistency across diverse, automatically generated target image views with per-class confidence.
|
| 24 |
+
|
| 25 |
+
Specifically, we generate two views of each target image that vary in scale, spatial context, and color statistics via a simple crop, resize, and color jitter strategy (see Fig. [1\)](#page-0-0). We then obtain aligned model predictions in both views and mark pixels for which the model makes identical predictions in both views as "reliable". Note that this deviates from several recent works that propose *encouraging* consistency between predictions across different views [\[19–](#page-8-12)[21\]](#page-8-13); we instead propose *measuring* this consistency to identify reliable pixels for self-training. Next, we also mark pixels in the top-K (K is a hyperparameter) percentile by confidence per-category as reliable.
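A minimal sketch of the reliability mask described above, assuming the two views' predictions have already been spatially aligned (the label maps below are made up, and the top-K per-class confidence criterion is omitted for brevity):

```python
# Hedged sketch: a pixel is "reliable" only where both aligned views
# predict the same class. Predicted label maps are made up (3x3 image,
# class ids 0/1/2).

view1 = [[0, 0, 1],
         [2, 1, 1],
         [2, 2, 1]]
view2 = [[0, 0, 1],
         [2, 0, 1],
         [1, 2, 1]]

reliable = [[a == b for a, b in zip(r1, r2)] for r1, r2 in zip(view1, view2)]
n_reliable = sum(sum(row) for row in reliable)  # pixels kept for self-training
```

Pixels where the views disagree (here, two of the nine) are simply excluded from the pseudo-label loss, which is what prevents entropy minimization from reinforcing likely-wrong predictions.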
|
| 26 |
+
|
| 27 |
+
The model is then selectively self-trained on predicted pseudolabels for reliable predictions. To prevent task drift in the source-free setting, we follow TENT [\[5\]](#page-8-2) and constrain weight updates to only the model's batch-norm parameters. We make the following contributions:
|
| 28 |
+
|
| 29 |
+
- 1. We propose Augmented Consistency-guided Selftraining (AUGCO), a simple source-free adaptation algorithm for semantic segmentation that identifies reliable pixel predictions by combining pixel-level predictive consistency across diverse, automatically generated target image views with model confidence, and then selectively self-trains on those.
|
| 30 |
+
- 2. AUGCO pushes the state-of-the-art on source-free adaptation from GTA5 [\[22\]](#page-8-14)→Cityscapes [\[23\]](#page-8-15) (+3.9 mIoU) and Cityscapes→Dark Zurich [\[24\]](#page-8-16) (+5.8 mIoU), and matches it on SYNTHIA [\[25\]](#page-8-17)→Cityscapes, with no extra parameters and within a single epoch of DA.
2108.09376/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
2108.09376/paper_text/intro_method.md
ADDED
@@ -0,0 +1,143 @@
# Introduction
Most contemporary convolutional neural networks (CNNs) are trained on images and process video frame-by-frame, for simplicity or due to the lack of large annotated video datasets. For instance, the popular COCO dataset [\[21\]](#page-8-0) for large-scale object detection does not include video sequences. However, video typically contains a considerable amount of redundancy in the temporal domain, with some image regions being almost static. Image-based convolutional neural networks do not take advantage of temporal and spatial redundancies to improve efficiency: they apply the same operations to every pixel and every frame. Representation warping has been proposed to save computations [\[8,](#page-8-1) [51,](#page-9-0) [17\]](#page-8-2), but optical flow is expensive and warping cannot cope with large changes such as newly appearing objects. Other video processing methods, *e.g*. those using 3D convolutions or recurrent neural networks [\[18,](#page-8-3) [28,](#page-9-1) [22\]](#page-8-4), focus on improving accuracy by using temporal information, instead of reducing computations by exploiting redundancies.

<span id="page-0-0"></span>

Figure 1: BlockCopy accelerates existing CNNs by sparsely executing convolutions, while copying features from previous executions in non-important regions. In this example on pedestrian detection, inference speed is more than doubled with a negligible increase in detection miss rate.
In this work, we propose a method to improve the efficiency and inference speed of convolutional neural networks for dense prediction tasks, by combining temporal feature propagation with sparse convolutions, as illustrated in Figure [1.](#page-0-0)<span id="page-1-1"></span> A lightweight, trainable policy network selects important image regions, and the expensive task network sparsely executes convolutions on the selected regions only. Features from non-important regions are simply copied from the previous execution, thereby saving computations.
The policy network is trained with reinforcement learning in an online fashion: the output of the large task network, for example Mask R-CNN [\[12\]](#page-8-5), serves as the supervisory signal to train the online policy. Using online reinforcement learning has several advantages. First, no labeled data is required and off-the-shelf networks can be optimized during deployment without designing a separate training pipeline. Second, online training allows the network to fine-tune the policy to the task and dataset at deployment time. Finally, models of different computational costs can be obtained by simply adjusting the policy's computational target parameter.
The main contributions of this work are as follows:
- We propose BlockCopy to adapt existing CNNs for more efficient video processing, using block-sparse convolutions and temporal feature propagation. Our framework is implemented in PyTorch, using custom CUDA operations.
- We utilize reinforcement learning to train a policy network in an online fashion without requiring ground-truth labels.
- We demonstrate our method on pedestrian detection, instance segmentation and semantic segmentation tasks and show that existing off-the-shelf CNNs can be significantly accelerated without major compromises in accuracy.
- We show that BlockCopy improves the accuracy-speed trade-off in comparison with existing methods and with lower-resolution and lower-frame-rate baselines.
The code is available online[1](#page-1-0).
# Method
BlockCopy optimizes a large task network for more efficient video processing, by combining block-sparse convolutions with feature transfers. Our method consists of two main components: a framework to efficiently execute a CNN architecture in a block-sparse fashion using temporal feature propagation, and a policy network determining whether blocks should be executed or transferred.
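The execute-or-copy step can be sketched in numpy as follows; the real framework uses custom CUDA operations, and the single-channel features, block size, and function names here are simplifying assumptions:

```python
import numpy as np

def blockcopy_forward(frame, prev_features, exec_grid, f_block, bs=32):
    """Run f_block only on selected blocks; copy cached features elsewhere.
    frame: (H, W) input, prev_features: (H, W) cached features,
    exec_grid: (H//bs, W//bs) binary execution decisions."""
    out = prev_features.copy()
    H, W = frame.shape
    for by in range(H // bs):
        for bx in range(W // bs):
            if exec_grid[by, bx]:                        # execute this block
                ys, xs = by * bs, bx * bs
                out[ys:ys+bs, xs:xs+bs] = f_block(frame[ys:ys+bs, xs:xs+bs])
    return out
```

Non-selected blocks never touch `f_block`, which is where the compute savings come from.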
The policy network is a lightweight, trainable convolutional network that selects the blocks to be executed. As the decision is binary for each region, *execute* or *transfer*, standard backpropagation cannot be used to train the policy network. Therefore, we use reinforcement learning, based on a reward per block, to train the policy network in an online self-distilled fashion based on the task network's output. The reward function is based on the *information gain*, representing the amount of task information gained by executing the region instead of just transferring features. Note that the task network's weights are not updated; only the policy network is trained. Figure 2 presents an overview of the components discussed in the next subsections.
Standard libraries for deep learning such as PyTorch [33] do not support efficient sparse convolutions. We build on the framework introduced by SegBlocks [41] to process images in blocks, by first splitting images into blocks and then applying their BlockPadding module to avoid discontinuities between blocks. At execution time, representations throughout the network are stored and copied with efficient, specialized CUDA modules.
The policy network is trained to select important regions that have high impact on the output. Using ground-truth annotations of video sequences, one could extract the regions where the output changes. However, many computer vision datasets do not contain video sequences and annotating ground-truth is expensive. Instead of using ground-truth annotations, we opt for a more flexible approach with self-distillation and online reinforcement learning.
When a block is executed, the importance of this execution is determined using the *information gain*. Blocks where large changes in the output occur have a large information gain. This way, the policy network can learn the relative importance of blocks at execution time, without requiring a separate training pipeline, an expensive teacher network, or annotated data.

<span id="page-3-1"></span>

Figure 3: Illustration of a video sequence, with execution grids, frame states and outputs. The frame state is only updated for selected regions (yellow), whereas features from other regions are re-used from the previous frame (purple). The output bounding boxes are visualized for detections with scores larger than 0.5, whereas the reward scales with the detection score. A video with visualizations can be found in the supplemental material.

<span id="page-3-0"></span>

Figure 4: Policy network architecture. Dimensions are given as W×H×C, with input images of 2048×1024×3 pixels.
The policy network uses a lightweight 8-layer ResNet backbone combined with a fully convolutional head, with the architecture depicted in Figure [4.](#page-3-0) The backbone operates on a feature representation $S_t$ consisting of four inputs:
- Current frame $I_t$: the RGB frame at time $t$.
- Previous frame state $\mathcal{H}_{t-1}$: the previous frame state is an RGB frame, where each block has the image content of the last executed block for that position. By using the previous state, instead of simply the previous frame, we ensure that the network can detect small accumulating changes.
- Previous output $\mathcal{O}_{t-1}$: we represent detections or instances using a probability mask for each individual class. For segmentation, the output probabilities per pixel are used.
- Previous execution grid $\mathcal{A}_{t-1}$: the previous execution grid is a binary mask indicating which blocks were executed for the previous frame. In combination with the previous output, this improves the exploration ability of the policy, as previously executed blocks with no information gain are less likely to contain new information.
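Assembled as a single tensor, these four inputs might look like the following; the channel layout and the nearest-neighbor upsampling of the grid are our assumptions:

```python
import numpy as np

def policy_state(frame, prev_state, prev_output, prev_grid):
    """Concatenate the four policy inputs along the channel axis.
    frame/prev_state: (3, H, W) RGB; prev_output: (C, H, W) class
    probabilities; prev_grid: (H//bs, W//bs) binary, upsampled to (1, H, W)."""
    H, W = frame.shape[1:]
    gh, gw = prev_grid.shape
    # nearest-neighbor upsample of the block grid to full resolution
    grid_up = np.kron(prev_grid, np.ones((H // gh, W // gw)))[None]
    return np.concatenate([frame, prev_state, prev_output, grid_up], axis=0)
```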
To determine the importance of each region, we define the information gain (IG) for each output pixel as a quantity representing the amount of additional information gained by executing the model for that pixel compared to using the previous output. The information gain $IG_t$ at time $t$ is a function of the output $\mathcal{O}_t$ and the previous output $\mathcal{O}_{t-1}$.
The formulation of the information gain is task-dependent, exploiting a task's characteristics to minimize the number of required computations while maximizing the output accuracy. The information gain is determined per pixel $p$, and combined afterwards per block $b$ using max-pooling:
$$IG_b = \max_{p \in b} IG_p . \tag{1}$$
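This block-wise max-pool can be written as a single reshape in numpy; the vectorization (assuming H and W are multiples of the block size) is ours:

```python
import numpy as np

def block_info_gain(ig_pixels, bs):
    """Eq. (1): max-pool the per-pixel information gain over each bs x bs block."""
    H, W = ig_pixels.shape
    return ig_pixels.reshape(H // bs, bs, W // bs, bs).max(axis=(1, 3))
```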
**Object detection** For object detection tasks, the information gain depends on the movement of objects and the score of the prediction. For every new frame, predicted bounding boxes are matched with the previous detections, by choosing the most overlapping detection using the intersection-over-union $IoU_{bb}$ of the bounding boxes. High overlap means low information gain, with static objects having no information gain. If an object does not overlap with any object detected in the previous frame, the object is a new detection and has an information gain equal to the detection score. Objects in the previous frame not matched with any object in the current frame also have an information gain equal to the score of the previous detection, as those detections should be removed.
Algorithm [1](#page-4-0) is used to determine the information gain. Note that it is important to also assign information gain to pixels of the detections in the previous frame, in order to update or remove those detections when needed. Figure [3](#page-3-1) contains visualizations for the information gain.
**Instance segmentation** The definition of information gain for instance segmentation is similar to that of object detection, but the intersection-over-union is determined by the instance masks instead of the bounding boxes.
**Semantic segmentation** Semantic segmentation is a dense pixelwise classification task, where the network outputs a probability distribution per pixel. The information gain of each output pixel is determined by the pixelwise KL-divergence between the current and previous output probability distributions.
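A per-pixel sketch of this quantity; the direction of the divergence, $KL(P_t \,\|\, P_{t-1})$, and the numerical epsilon are our assumptions:

```python
import numpy as np

def segmentation_info_gain(probs_t, probs_prev, eps=1e-8):
    """Per-pixel KL divergence between current and previous class-probability
    maps, both of shape (C, H, W); returns an (H, W) map."""
    return np.sum(probs_t * (np.log(probs_t + eps) - np.log(probs_prev + eps)),
                  axis=0)
```

Pixels whose predicted distribution did not change contribute zero information gain.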
require: outputs $\mathcal{O}_t$, previous outputs $\mathcal{O}_{t-1}$

initialize the information gain as a zero-filled matrix of size $H \times W$: $IG \leftarrow 0^{H \times W}$

for all detections $det$ in $\mathcal{O}_t$ do

$\quad prevDet_{best} \leftarrow$ the most overlapping detection in $\mathcal{O}_{t-1}$, chosen by $IoU_{bb}$

$\quad IG_p \leftarrow \max(IG_p, (1 - IoU_{best}) \cdot det_{score})$ for all pixels $p \in det$

$\quad IG_p \leftarrow \max(IG_p, (1 - IoU_{best}) \cdot prevDet_{score})$ for all pixels $p \in prevDet_{best}$

for all $prevDet$ in $\mathcal{O}_{t-1}$ do

$\quad$ if $prevDet$ not processed then $IG_p \leftarrow prevDet_{score}$ for all pixels $p \in prevDet$

return $IG$
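A numpy sketch of this procedure; boxes are assumed to be `(x1, y1, x2, y2)` tuples paired with scores, and the greedy best-IoU matching follows the description above:

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter) if inter else 0.0

def detection_info_gain(dets, prev_dets, H, W):
    """Sketch of Algorithm 1: dets/prev_dets are lists of (box, score)."""
    ig = np.zeros((H, W))
    matched = set()
    def paint(box, value):                        # IG_p <- max(IG_p, value)
        x1, y1, x2, y2 = map(int, box)
        ig[y1:y2, x1:x2] = np.maximum(ig[y1:y2, x1:x2], value)
    for box, score in dets:
        ious = [iou(box, pb) for pb, _ in prev_dets]
        best = int(np.argmax(ious)) if ious else -1
        best_iou = ious[best] if ious else 0.0
        paint(box, (1 - best_iou) * score)        # new or moved objects
        if best >= 0:
            matched.add(best)
            pb, ps = prev_dets[best]
            paint(pb, (1 - best_iou) * ps)        # update the old location too
    for i, (pb, ps) in enumerate(prev_dets):      # disappeared objects
        if i not in matched:
            paint(pb, ps)
    return ig
```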
The policy network $f_{pn}$ with parameters $\theta$ outputs a probability $p_b$ for each block $b$, indicating whether the features in that block should be calculated instead of just transferred. The network operates on the feature representation
$$S_t = \{I_t, \mathcal{H}_{t-1}, \mathcal{A}_{t-1}, \mathcal{O}_{t-1}\} \tag{2}$$
and outputs execution probabilities for each block $b$:
$$\mathcal{P}_t = f_{pn}(\mathcal{S}_t; \theta) \tag{3}$$
with
$$\mathcal{P}_t = [p_1, \dots, p_b, \dots, p_B] \in [0, 1]^B . \tag{4}$$
Probabilities $\mathcal{P}_t$ are sampled to execution decisions $\mathcal{A}_t = [a_1, \dots, a_b, \dots, a_B] \in \{0, 1\}^B$. The policy $\pi_{b,\theta}(a_b \mid S_t)$ gives the probability of action $a_b$. Execution decision $a_b = 1$ results in execution of block $b$, and $a_b = 0$ results in feature transfer from the previous execution.
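Sampling, together with the per-block log-probabilities needed later by the policy gradient, can be sketched as follows; the clipping to avoid `log(0)` is our addition:

```python
import numpy as np

def sample_actions(probs, rng):
    """Sample a_b ~ Bernoulli(p_b) and return log pi(a_b | S_t) per block."""
    p = np.clip(probs, 1e-6, 1 - 1e-6)            # avoid log(0)
    actions = (rng.random(p.shape) < p).astype(np.float64)
    logp = actions * np.log(p) + (1 - actions) * np.log(1 - p)
    return actions, logp
```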
Stochastic sampling according to the probabilities encourages search space exploration, in comparison to simple thresholding. As gradients cannot be backpropagated through the sampling operation, we adopt reinforcement learning in order to optimize the policy for the task at hand.
Actions should maximize the reward for each block, with the objective to be maximized at each time step given by
$$\max \mathcal{J}(\theta) = \max \sum_{b=1}^{B} \mathbb{E}_{a_b \sim \pi_{b,\theta}} \left[ \mathcal{R}_b(a_b) \right] \tag{5}$$
where $\mathcal{R}_b$ is the reward based on the information gain $IG$, as described later. <span id="page-5-0"></span>The reward, loss, objective and parameters are determined at every timestep $t$, which we omit for simplicity of notation. The policy network's parameters $\theta$ can then be updated using gradient ascent with learning rate $\alpha$:
$$\theta \leftarrow \theta + \alpha \nabla_{\theta} \left[ \mathcal{J}(\theta) \right] \tag{6}$$
Based on REINFORCE policy gradients [\[44\]](#page-9-17), we can derive the loss function as (see supplemental material)
$$\mathcal{L} = -\sum_{b=1}^{B} \left( \mathcal{R}_b(a_b) \log \pi_{b,\theta}(a_b \mid \mathcal{S}_t) \right). \tag{7}$$
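Eq. (7) is a direct transcription into code, given per-block rewards and the log-probabilities of the sampled actions:

```python
import numpy as np

def policy_loss(rewards, logp):
    """Eq. (7): L = -sum_b R_b(a_b) * log pi_b(a_b | S_t).
    Minimizing this loss performs the gradient ascent of eq. (6)."""
    return -np.sum(np.asarray(rewards) * np.asarray(logp))
```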
The reward $\mathcal{R}_b$ depends on the information gain of that block. A trivial solution would be to execute all blocks. Therefore, we introduce a reward $\mathcal{R}_{cost}$, weighted by hyperparameter $\gamma$, to balance the number of computations:
$$\mathcal{R}_b(a_b) = \mathcal{R}_{IG}(a_b) + \gamma \mathcal{R}_{cost}(a_b) . \tag{8}$$
Executed blocks have a positive reward for positive information gain. In contrast, non-executed blocks have a negative reward to increase the likelihood of being executed:
$$\mathcal{R}_{IG}(a_b) = \begin{cases} IG_b & \text{if } a_b = 1, \\ -IG_b & \text{if } a_b = 0, \end{cases} \tag{9}$$
with $IG_b$ the information gain in a block.
The cost of a frame is the fraction of executed blocks:
$$\mathcal{C}_t = \frac{\sum_{b=1}^{B} a_b}{B} \in [0, 1] . \tag{10}$$
As some frames might require more executed blocks than others, we define a moving average with momentum $\mu$:
$$\mathcal{M}_t = (1 - \mu) \cdot \mathcal{C}_t + \mu \cdot \mathcal{M}_{t-1} . \tag{11}$$
Instead of simply minimizing the cost, we use a target parameter $\tau \in [0, 1]$, which defines the desired average cost. This results in more stable training with less dependence on the exact value of $\gamma$. The cost reward is then given by
$$\mathcal{R}_{cost}(a_b) = \begin{cases} \tau - \mathcal{M}_t & \text{if } a_b = 1, \\ -(\tau - \mathcal{M}_t) & \text{if } a_b = 0. \end{cases} \tag{12}$$
Executed blocks, where $a_b = 1$, have a positive reward when the number of computations is lower than the target $\tau$. The target could be adjusted at execution time, changing the model complexity on-the-fly.
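Eqs. (8), (9) and (12) combine into one per-block reward; a direct sketch, where `running_cost` stands for the moving average $\mathcal{M}_t$:

```python
import numpy as np

def block_rewards(ig_blocks, actions, running_cost, tau, gamma):
    """Per-block reward: information-gain term (eq. 9) plus cost term
    (eq. 12), combined as in eq. (8); signs flip for non-executed blocks."""
    sign = np.where(actions == 1, 1.0, -1.0)
    r_ig = sign * ig_blocks                   # eq. (9)
    r_cost = sign * (tau - running_cost)      # eq. (12)
    return r_ig + gamma * r_cost              # eq. (8)
```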
|
2111.00162/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-06-03T17:52:26.186Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36" etag="AUW4lKXDnhPpCFDsKmu3" version="14.7.2" type="google"><diagram id="Dlrt1vgHWt7RnYhEk_ai" name="Page-1">5X1bs6LItvWv6ccTwR3yEURARUBFQF6+4Coq90sK/PqTiauqq7qr9q5ln/3tjuiKqrUoVBIyZ8455pgXf6NXxai2QZ3tqzjJf6OIePyNln+jKMBw6Cc+Mb1OsBz7OnFtb/HrFPn7idNtTj5OEh9nh1ucdN+9sa+qvL/V35+MqrJMov67c0HbVs/v35ZW+fej1sH1Y0Ti9xOnKMiTP73NvcV99jorUPzv57Xkds2+jExy4PVKEXx588eFuyyIq+c3p+j1b/Sqrar+dVSMqyTHc/dlXl6fU37y6tcba5Oy/5UPdHd1H+/0S5JHDH+S6ZZswf/Qr6vAIB8+HvjjZvvpywyg+67x4a1YpkqCSdvf0ATpQZjkVtXd+ltVotfDqu+rAr0hxy9IQfS4ttVQxqsqr9rlUnS6/PnmGmJ+u+LP9lWNzgZd/VrC9DYm6K6lZUjxy1niyxl0HAd98Bstvv5LKXV5/Y1a3RzJPD6JnXqtRPTHOJ2z9fmKjrwD+qEQK/GCz88XYWuj37vhka8PzpEpTTrq3Cpyj6HGxL2x+42SxiZfsQTzWPmXI7heu4dr3BV0/ny3ts/iYvPivjMV81qvtZow3Mw3zupYBYrmBm1deg16K/qnwJNwee5LZ3DQf1zrcj9Fsx9Bjwu1UUgoNP0SeiH0W6MWrfFSF3mTC0POH5lUA3BMlJHXrmjqpNpACy5BtDZS36e+jX7TMOz9wRmFkuRSfqZultBWgObRGxWgPqAJqdMwxEk9rEMnPrXDhepJtPukFL8BTAnVooPAAu1A0aCgQjfs+26D3lB0PugH8ohezmIljcGUFt4se7SZeV6j1yaZJjQH5/1eo8ERbQXJEV3RPRyiTXektSghfFG/rVo8kCydZBmUB661CVlUkqd28TP++BAHEc2zoojnzSa5ipvDId7sjwOHJ0rAd2mig3a+q3d0HEN8qbwdZdd1NR3m7JSTVNKRvHGm/WgHY57fPSC8zTFxlVYrOZNWhKigg1pcrdfaaqVVy8Fpta5EEx+I680GHegHdAMdPbhTphPHFmkrxUuPHOBLPOlFA2+6vsw81xIJ5HewzGTpec04UcwkibhmO1FeDn5h1J1Cm2MytK4in7lx9yBhHRtKkJF8UTFPIt5sTleRwRMhHq5XdNCLEjrgDnion44pr6IbW+i0E3TXVUsKzvEk8Cf8IDlfXjqO/qWr/OjOlV1OG75kHSnnRJlFOcRNEU9IFvwjnhKhOlh/nMmN+VhtePtmBoIgPiWnuwvpnbrvxUj0QgKLFgGuT9HbK5S6CUj/KV9XcDyhkb9Zl8GLyAdF7uJEIavcOoHtTQia+b71CGicpmP3xrOoaGye1tnYQges7PWAJ0plrp4N9Lmc5InuxPG2gGaNs70B6XDlBrWJKHgId2t5/MUV/n429A5rgExri3QQmlWT2QzQZzcIhLwkh+7UKIxnIl0AlUMve7C9CZdG12gKButflqnvR9yd9SFpzvqaadatwzSHHfR4FUa8s0dbSvEtydEJlitrEAbXZa+cPrtXlhVu63gGYV01rcfih3QFYncvqSRozDV0xPb28Ln5NpDzqJ7QfAM7c+Cf10oOo+PslVdV3IFrcheSlzy0QUlQZa8eVsx1IJ6yCi1xLyK9IO3O1bcrbwzpnNhVJA+LIqXjntLBQE89dSbtHmkMKWdLwWg5utTmfuaJoU/ym+edvJZ
KaF5N4EGwQielJvQICZqD06/PAVZY0Utzeb6OPk/t4ESbTV74ji41rVCH0I8EnTFDbAtOE0hzwEE/1pPz/nPXxwph0Qz2pdyjkUrOE5l+i+1Gxochrc/NmFwG2szZ6MwrrlfHZ8Oe1811h/ZO0AC5hcmMdyyvr2nd3OdsTGGrmV7lI1qP4Nc12Nf7VHWaP2BTwmP1TF+wer4DnvYkmwJzfs65dHRuwEe3unXqpJ056AkR3l6TJSOUdJtLsfqsFpXXbPlEw9lMKlcpHVmtHbMDmm8yGA1O89IDh8Z286BL9zSaKakjAUEjAVROnM4FZFpjpXgbQD7anjoRAw9DS0dbQP3zGmy7kVai5HkR20LiLYn/kNNbYjdf5DSU97snQGrOY7bJ7dt5onYEeG6EJghKm9b3nRsez73/rLmQ3zvhWQy7nNzV3FhrePeAabYI0mhnbzNcpdtbO5IXBtbqsekPO6zi4hQm2AbsoqZx1IgtW3A699ekJfcU1/ttsEiEoqjzGxK/2Gq7SGXWygQzQ9eJ8br0Bl34A4YYSntqJJucldkmzTxuBSpp3Y7NV/FFQDJpYGh0w4rBqiMnn++JVzeQ91Osa+vPyqKp83qdpnyPLkheYuDRNExsW/GIvHAbo9Xxvi9uXiK03HUruSlJZ2wQxrQM+IPo/Gy3/Ws7HPcYaSMjaGlDGvd9mPPkpSUGbpcHmuG2t5tzwKik3jiuhdCaosGIo1iLp5M6dH9mU24mwdyLl74LN/hTDKhmURMVCk1Vb132ojJLcXZTezcSGbEVztnh290T4i32ELQDp4EegmEOOxASpXNGaKv2RRBW6wYkbN5kgWF4bdU6CZwEZ+x6feBJXthczMe/1EE/m5Xb+IKygXanU4j8vBlJtnTpCK+OOMswxvOubTpnC722LckCIiFYIAozMIe3kA9vYVuNIepE9pKwjWDC+41x9bvB7pyw37M5V7K8RSQbKk8GUiFTKg2w8Y9mYNj4iJYW6X+8Jf031hl1oFdCY6TK4ZzP2xMtNfluDNRpW2Ptwzi0BTGefEkbxmGXt3GYhef31rgXnSa3juRgrNDmGqk6SQRBWnalvi3vP0G/+joeae3DotKLVyFdV8NF67gyDhLkFrWUyMvdIl8TUmfV6jt82XAmCJsIoT+ExHzOOsO8ax4J8LK0bJKS7BvZoE+i8m8t2B9WuLaHqCxbc5ILF25ZPX6E51BBXmPVKV4N7g0sOOiwja0j+65ITqmeBMinoYnXTX5Pa6F1m4D+YG+wavPDeUhyplG8OwF7s2YPsRMpbrx1ryIdc3zJBQUJBtFA11d+3U6/nnKgisAtht4K0Qw1gVM/H5SXs7sWGmQt7AUlNJzZRj5VgU0Acl1GshMmR3Bm9H9jem0nOBexXBt48Y0RXM8St4fFxEYQY2nrfgEQAoqaqXT24Dhr3ZGyRvqCVrV6yqvBGDYcnVqhW1pCX2LHqi95itTQkSUQ84nEo3jYQF73u/PTkxYkxDnYZxal7fHMrtvH9nq9Yvcd/5X+TDp88BDYo0/Gb059kBBqUhVJ36LrEs/f+ROS+yBFsm+5ky8ngw/O5vr1s7/TGujgg9n4BMvB/IDl4HI0rJRW6Dm+pTu4Zqi+vPA/3UJHiegNpFCPv7+IiaQg+v4D9q1IkMojjASZQuJYFUH57Qe4K/5ttrfrrQzyL8NjELbcwevlP3EvaErx+awv0PPKJDrs+rZ6JF84lbIqMSWT3vL8D6eCD2YlQguVtD+gXIpbHONhpGd265NT/XqeZxtgJmYhbjAFs5Auf2XRP2g/5GC9PvLvhID6T8kA93eRAasd0ApRBIJJ7D9HDr68Knys799GLsDfTS4w8/3PEwya+7sJBkn+3SSD+kdKBsv87STjR1GT/6pk0P9IyeCpv51ksH83yWD/kZIBiP+iZETgsQ99fq2mcPv/zP6xIm/1/1A/EIx/TKC1P38XaH04Y6MysaHgQOvzdMoiLgqk7iklz2toKnY0Rlw
o4zBryIvnzpYD6XEN8mPvZoZZGVmWXzpdbT2rbWcjtR7gft17boNJRefkmmGYpryAyWmdwX7z1cwIK+S9CzpCb9FZb1tjjtJk+aPwCrMaM2eJsnBKoczEUoDJIQryCehpiqGsBzfILL5gWFJrzETYL+e8p0KPsjARfxJYEsyto0x37l45M6+SA4Tlne5r5M1LSKgU3gcvylA5o3+YHJtDO4ZQsqj+vokzbrKEumpLONhL4GJs+fLctLlClkGT0lzL+0yo5zz6MOGOD9HPFBxCkY01GZZPVVQOeCR5TER5b9Wx0XKyuOou0l2Lkm4WdXEFx4PyLZmeDsWLw8RxWMwJcCxzxzNk4Wfkp4twIPP8PvHbNFriiWnn8LBOGtYx7yTlxD6+0UmoDp8jRjD9c4GqO41bagPYpKW75EVWO1uqZXJ2GatoiAECHA0O3yep13ZtSjFsqd21MaJjbYScdmxawhzYxtwbnwu//H73z/vV64gkzq/j1TmQGkDDOCWeQuhzutnzbxCoL0Lw1CWsLCpxS3JaxW7VJ8fFS8iZ+RFh+N3qawyttfvn6nkdFMIsSCBdJXe/me+DdfZF5fYHWk+6CfbN64I1WkT9+KgK2HU7jj92XJN1DpeehPPG6XW69jf05wLSL/IxNXrMLqVyfRmXiETP2beO2u9aOsHShINRw4B3Bn7xFOADS3CItCBpgIl5970ZBO1rlx3BZY7nqQr6VcmS6n3VUubBOnOcYbeAcxrDnIMMB1wtekiId0Nyx4u15x2OzszRkw550FqDueGcyDgqXgtgfmIBHFIcAdzhnUtv2bAhQLLeo2HeHY919vbd5ubMfk0aCc0rEU+zY1Oeh8Nxc3NVmdpEE4CpRD3CHLRgev0PEheQXo8/NIV8Te7kkCLV7ci71A4WrbFaXY6z1kaC+PxzsEHvyWfjmGOqWyS9g5co6bl9sYMeDjUje04WrrdLS+ifHsh2SCUbGu/vDFiRMwxxYB7HrQK+mGirq5rmEAy7q889ErK4kU+VWKj3lsv5whvSAPPv/V8Kub6kiUldPI3R06eSjDZPfOEOTaZWH/M9oKP8LOST57nWNkoygijoYeDSOeAHDvw4UeIX5qB8hks07cm5+1AG+OlpblAfOLRiAuvItTVPmvV4No6zwzyzc0CWJVc6VZhwACpT4lKdhQMP588GHnSFT8qGow0+6XHYNJgXBDTQpX0jvW4LTxwL5HYI6nMYzTgsHY0dthsTtrn3HIdpsXim7KVJJ6ZubTP14jKG0/N8zZo/jywbeyL0CKTKLk9tCX29pK5RP9JGjg/R3a+fSJ25/hPpNo+pV9+GzjQ6H/2d+tjxrFdf9rrjGvvpDJAgNNKwwpJ4M9mQ4M00mQSYD6FTlGL3w3v5hZBjaBqJZo/YPCXaPYQw4sp8Twb3kxfxFKill2wod7JoTRJaiyVT1c+Hdl9S0nk9l7RtRPccVYLWCkOsQam0ElKydQ7L/F8DP5mBh2ddN5y7t/efflOyQCVuNeb5vRb/VOKYGeyJTvkYZz28o4n2J1r349SC9GHs9ilPw5P/uCleGRV5zpZtU8cIiUKk80BYXZCh7OlWiZwSxC/t/vxL6Rapn75gy6IMQpy7RnaJHxqOSBBOkpHSrvWiIEgkjuTg3j8hm9wH1xW8/Sw8Jit7grdJ9SBdnp4oIMFDV8cKT11zJQH8UkRS+ZxvpssKL8E7Zd+G5yibAFYOYIh3SKhj/00JEkOmy/xkiHsi3in11Oas1m11tQjaoY6TrA/sct//J2fEr+CBCXovjK+krwO5IsjW3PHgerJSEOnD+i38g6P2L52Icw7Cg5GhJ6ONW005dRglACl/aUJYGGhPaDnsvXFY/pyqvuONqXfjrTY8a1h4r+LuFzX/H3SVPiLly2lnHHjNqt1x3l7IqxM8Lmre3TjbSKpBL6OBIWtotAMP6QxfF7yRBKesubnzwma8SwO8HJsdQkaXunU0WlWQe4VhajJFhW39DJ+qO7+
mYLUXd+PVlDjrJT/nfQ2Mmpv3YrfZZto+6eenLP4pDW4fsGYftqbiWY8qVJOW8qZWrTXoMZ7CDDNHkRIWuGqJYH9uJSva27HgSm0G3xe8R09tpobaNkgyzzuIc6nMZkENnLdr9zj3Utqx8y72AIyhjXfRe0kqyNGKyJk/PLjjzVLvW4TDbsQOloegT46C2Gw3FKvbGu3lf+357Is3cR6Z162OPTm2j6CKw6+lA/QWGjlBqcmtNZzShUMc8GA4TSOrIENR3inn1A5s4ACMXU1MG0g5YR45q5uEzUEBVDlj8Yd3nPCChBn7i9QA6NeWiIy66NsxUaeS4AqD4e/HdRiPc5omBRzQG9jXz7KHfDrgXA1kiYiOL/Fsd2kFUy5JGO98FJc/p7NjHnfs6rLZ/PZN+PVPPMgP2JJ/H4H5/xKO/SEX8o9OOo++4UIspnuMI60xfl0Y9TMx7s6+KlazyBOiJFR7uzuB86CtmT1OypAk6uCGhvps6lsaYa/JupobLTsYSU4PDcfPoxVdE/pEOuRMM1Vv7uiEJgG0GWD5nsw4SFJJQmKsfoIOr7PeBuMCLilresOVYZjIT0xrEJB9mtvj6QTzBTFAP+cHo2etNidexIeRlyUjmNp26DrHw3dStPvOC1JFbZHbY7Yu3iQLClS2WLRz/B73Zafulg77CcOhM3+ghZ5Rt02HL9zxrYw/MTuF2/YGMtc1mZRey74GPYF0n4anSn7qnY3txXOeLRenYC06FScxCN5VnBefmH/5xHdT5YD0lNw/JegZMc7QwanreClbeDNaBsNVvLf1+6i05C5jweyUXHqe46Evg0DzWODseK8fD8/P+S7YwBytc5ePbXHEG0hxSsPNhxq7Mo+hTcMzc+KERePSRAI5N/WUt9PMA7mJbvhZvMtDOraEeTh1j8DQ/TNRiyzX3rLPJ5m/RlT3vMI0imlczAIBrcoJLJyW66dNjH8TDHP5nH/zddz7MZZv3uEsgeC+xilqktdQagbPXLCathDfs/CDz26yrBySHmD7WYU2F9qkdkUozpYMhNv4r7jtQ0r+iNzUJqP3vtGfynqjuCdfT05Ml7k6NnhJUObZJ+dn8UMMmk11JrkL6URxUTqV9wMg4flS+Ju7M6QsrxFmGHNeQycWAANs8V6QmFh7K4XqNQtgEs77a1LbRLaqXJrzWp/1kNXB2yorW8eledp4twwAz5JJqlveFphVolSW7NbYAox2vb3yfiBf6JYMKASIHi5OMqKQEO9rNl6/lTaFn6wqK8ZhN+6TNjituBExzt7vsyLESczdOKRudmmXspAauSH+KR/Q04x/5MasPfjQAxNOKG+tw+p5LdSgbmhaRJgsK9dRDxgMvZ4nUv02cV+wOphjDddhzjZO0z0N7phYqtXQPO5pGrm+VhXHgE5b4T42tKWaJSsAfab7+R3tsNy3gR1GwY8FLxLkoPYb43DsGoenyUuVnWVhwHUoTZKB8phl7xRB4IWQ1sBbDepj0CT+HIj5abYbUAa2euXiQDvoZ4Sjgzue6NQ5NZAmoDHHj3f5k1OldbzDMaubd00JoDewbNGovX+7O7wVDZDnWufex/NbKacmJkcsfK/W6GpMY7Ep4Ac2VsiOQ5LjeNuNK4xIXiJ23X5YEX7bpRjdKRpWvjaHffbRjTvu2M5sgGzyPHHVwXz8CL9efL5s0+g7VoJZWAk7ttwEsxLRnrnnEGIW7c8FC+BGeKexWbOrWe+lisjhMcmLihR3bFEGXvqYT5e2C0/4sZwpPXG5umeU9dp6Y3ayo+VPLemsUiWvDMNpGpIm8lNaRYCe3IW38mDCtfnbkiSv+SUd38OGU8NEWYGZImWmBn7o8RSbZX9sqI4+VWfMSvC7wmPpsort3tnhaAWYmuA0g12rEPoJaQ2LxZxEob2VlvtCF1DXF6uqJ9adgnj3cvNWdH3qQfqUWSvtHkFlh9F5XEk3sXzZQZtzXS4bUzPQsVrDEKRbdNcPygL+/SowMM08SUi
t2gZ8wevQnQp2Uq44y/IsmOpQe1cEJI9tRSZ75J23sQt06zT9SIe9ePrCjFluJpC3qGexXPfwEInOXVo4i5e1W5nMh7VLb4/dtxJLp7kAMw5nDyt6uDATu4WZaA+B2B6Jg6JZTyYf3KCXzhOl1r1XAc/ymbI0cj6eXm6NXGVIrD5rCV88Io6KCHViycvYaZnQrMPdgN0emq2dHxgS2Q+tcYKeLlJ775azGjRHkPf0m0nomAfZQ0yGNInPmgRh7YWlQmEhcH13P5tum5vzcSnPhH0M8ngLdboG/gPSwNsiJXUWhuaNQoRX4cgZl2smDZF011JhFi0b6M65ZUwq6GO/6dMZi2XLctArh+3Zhc6NG8uRxWG4yAqHq7h/ixmRjIy2jlvCvObFgwx2s6I2VBJd4OKxNmwol8L80zJDBKaxlV3Q8n47mjX3YSWLIuh7ZgkRjKUWJVdBbPciPB6+mRF1zThCzGln3kGKVLrwVFIn7rM54mJUSWGTFgc883wd4VjS+bO409jSeTR082YbNInDkNFkBPO2rmL/4j469oRrVh+d4QcJJDya7fpw8MDz8Ezes8O7azIjv0Oka7QvYObgCIok7Qackz5iFNA3fpL7XsNbtX98D7cNEySprq79kyfk8y5OrFjadjAkaAxW7dlmPNopd06B1KLrBq0n0FDlsb8C1RttCvnmo5CXHIombDnvGMCVKKZxAWDLJiXLJdOgPHc6jHn42hURsl7RvYmn+9Hwj7N2DwPm7uCFSabSTAVY4m3jzovXBkqIy9VCnXUwhyKxrv2cS+lhFnChJta5Yj9Ow6FYrf5veImvCYD/PV7iR2ni/xhewl7/zkvsL11Ioc3OOPHKpZo1xYiBAjqfv8Yb5KgeRnizpXsixSt+Ek/cA+8PRe129LExZtK+1ndME8srcJ4IHzvZSlLzKi6Ily4R5RfpOA06JsIKDqAtK1gyfj+NiUdQd8bVE1vKDeq2jvTW4RPtVmGIoTKgrER8YbwFAeyB43igT9lk4OswsQsMm6BWWwS1UGt4s9i+1c6681hoDWoPIw7GHMKgDpfDSajppksBtgtLqApYvNVxDUupBLZ6u8jA7CH6d+CgzcvbtE/R9LNYpZVkCt2ODI1HNpShRhGNKKz0NdIEa6MfNSNOcFlgh91jZRClB7VlzZr3vFdhV5y5WmfgUhudOH9nv1xsGaR0poW0c/IlCrZMSppuGm4ct9sSUocXrDH7GNs14j64vpnP0XnzVjmMe2L1QoKjkLeG6xftQklAPEtbr46daQB8C3KPfQehv/RqceVLZnBJdSl7Xz2aZpQuE/GQg1Be4QLUdyJw++1TLy5WoFRTsKrrSnHbhX+N7OENjf/Vfu79s3EqlYf5lC2bCxMiH429t0Xrqv4oKvdqdkAuBXqLFzJYlwC3PuCTnHBLvMYf5XpxtNSJ/jGC/oV/UK5ICVV+0ce1TpRGvT+clpVI8pjg/2KpMLmNrZaCFxyv6Prgvp79TR7i/gbRCGaOHnCwti/veM8gh1kgOcjPy6ifjiu8Cp/Yi7d/nhqbyOVK5Vk3DDiP6F55UFLHk82Q856xepfforazhTnB/GkaNtKuCT/HdLBUE+a9dI3y++m6U8uD02+THMrDDKxb6hWBNvT31JNWeLh3xsWaR404YSjJ2etSnofZkApkT5B53fA9eqb+bJ8SYe739VPfbKbvM57uA0LpdSua4q7v0vtXuXkVdr6kxhb3ASdI1xV8/jHG+BFhwIgcsgZv+zQugKSl2nqUp5md8QuG5bYlaZ76LQ9uvSX0bvk266NQGF0w93Qaa2rNn7qdkpC8s74Ryt3C5CmsVYXqTXX9dkzbLlK0Nlm0u9EgYKmC7zwY3yPmvAbyU2k6P3KfXIGtRkasyKBPSmHoMQnilQIm1bqUYzaHtwtWX7rWqYwnO9TkYY3ZgcOud3FyUkuW8CwEKjV4Axgb2iOXXDc3erylZb+WStN9wFMFPXjweMNWMyrQo87+jg3
L2TpyyP7EDm+uW9r0g7IBdNzdjiGkjLG9b5HvVOLUGurwQM/m/vnZilmLnJGXD1L0TH3G2nHg9zJP/2WRJEq9fEgezL7PB2wHT0kyak2aA5cPGalzhHpP8XQoo8/REEb96WzUeX/jIckVGddOpfh2wfoYmWx+nS3OyxlCefbZNepxlxwJC3o+DsXir3mgOuxXbzV1WUaZkf9bXjntwFN9zHshvaNAmbuEqrbn4JodboySSPcWWLCbT2nakTBNKy6tO5t3pemdBhm6UJjexCKVq0LMieDHiRN/Nonibh0vkN22dO9XvYfbUURFzpfSa45JPmW4igLUPHhdqEc8xb1w+4Rd0bd2mODuGRL0nMNDg770SWtEknUu5LEOGJDSFplAcEHeQkKtGUVYGQt++VEzl5dOumgVRiQvW+Z0NpfaVNGJjNhQz3nVwcNFbLuVV12/0THWdSxNqNcgInnLBVvGF/fnpD04plp3tM20D2HJsSI7m2F+zLv9kjf1YvewbsDOrg8R7OZdbz0Q3jUK9FhpVsrppMCkY+lOiClyCyegzwZenUQrlvZMsfkTBuZfWyOLZjFvO40uNkokYVkR2quuAPgS1kkPen22XJiRZTeSYKZZN+faXWthnPJuBgRCDT2RDGKVM5o0kb7D9DyNdq2bUwiYMoY69PaN5BHc9ZkkYD2Arbn8WQzxuif7Es9EECcZIZNTtupyNiLEHZ8LslD+TLNqUTzy9nlp+vOB3LNKHDZ6j/3U5El8bQBEPmVRSbM/NADy5ziZ3AxpRB6siLwzi3V7ZHSM4u0L6YAYaeHPZy39WQunsJH7mbcuCStkeUg3IGV15CdgjK0nQdobla46rcBXWLBYuiZD9oIuP77FlobDQ5OP/U1G6JJtYX5wTdA2gzN4Pfu+NtW7uiga9KdMmilwctboLG2izQNHUxfOV7JpJtsOApfDDRFG23W9IkFDJuRxMwDLcOkSx2oju0nrRJ5NeMMsQo7BYIrjrxnTLAy7x0KDLT+YhOOsmRlnxXYTBwjd5n6AWe8kQaKGdxRmDEK6hE6IfTxMPsV108H8DAVFbMuVi6baZQ7/sRyHryVj/z0u4UeFQP8cLuHxPZdwOrYLl7DmGoV47nP5dsH0v2h0jDlf14NhJfcrO+xnKb2s5Gl8Muvz2JBiXVoaJQnRblxhwDkQpI0BEEJm04HvZIV0OJgf+dUM2X6gQ2DJT8JaIg9I/8sX5YlrM3RjTZKDMzb4vPw87zH2pknRwgaDGhbWgWuWhn04d81pkR6lH+yQ8otNkWxglNDCrvaDsjHFcGz4e4lDxOWj75JjHHcM5LhHwmGHjbyDFu+nvdANfNPS5/DgCc2Tnn12bvAbstJa+vOEVsA8cp7fzmmh2XjbSERzeIo6wX/TfghrSs197br16qEqvFnTrnfdiw4vxlmh9cZBekrpktr/bQz6hHn3uXwujOGwwQmGJf+6DuvS6z3dUKxE6ma7dyFlgrLGGdpQoYmIesffxHnpfb7y9ncv5/D2PyX9XMAtzo7e0iHOjJQeRPfgXXfrIYzV3cDwjg+23M2a0jA21Cb3+qy48rw5527Yh+fL1fB3RCFyONb9XquV9Z6ldOq6u+RXLjhJeUNS2Ljv2ctfy2wQz86S2TARK9hTHdz42MsY/mUboAEujVkkzhq/MEfYBy2Jr17k/tnqhIqssaj9gUdK0JiXs4HG1A67Iq+cZgBbjRj0i38FSz0C2DNv5X9idW6xlkQsfZ9ww0TJOxsJjfXl8UQW2W0cMXDdC2yNRB57pcislrMgfItpd+q7mLZzIbtfm9c9PAiS2bjYy2V57+HTVWwYQVsn5PsReCsjcV43ltqEpMDUD36Yclou2gyULjYNU7TRD04HZEFpqtg7QOnQ03VScqHaLFkq3HuZNeGeLwX6wGhm27IYK2sv5cPzhCkPtTMLZs02XM8KuA4twMHEZ8dyBQ541WcN0H9uZBWGd/N3D1BAsP3lAcpUbH+RrVfTn0D
Ur6v0D9VhfBd5+Cb4lFRZHLGS/CDtLAGLoL8zixSqmsm+U/v1Quzz6wEVhMPt57rdOAng9UfEFi3Jk968FH61GD9oP9slvxCL6+YhnWP5GnE5hfQyXfCPc2qUMYPzxST5ipBRHTnP5lbLFfJykE9EN32a5mOQ1F3KOdhExO83Brz0GvBvzZqCIt2cpHtDunkCfRC2LL9dt1SKLDXVJ726e6vCzQ6RzDw4TeTJtkfObWnR1RVDL3NOMpZ9ZBGguXKTJxbRz27MJukR79IapLiSz7Aw6T0K+QR5TDL/xPv6vAcopt/FBg2FMKUEYWv32gUHyQ6jLoily40+g5fn19jcTzIufsnzw9axkK0nbQSBrHQB8lDotC2jrhdAQROp6r9vabYQi6n9jOUqxp1pJYAmOg+TmWoDWTs3+/OzRT47WgBOe/DnZ0NYz3AhbtO0Crymw53fvONbbRZxU0kzguWHhsWGGwMbXCgpzcgrMQr3irc/Z3vwNB+xYoWnE1a48zWvgT5clNm2aLj/P9K7KRzHS8obmEXR6erJblRgnYS+zTn6POiVq+i6XEClSCN8i/GFTNPg8bNsw829Nzds2z5/b/65+ILuaNXACoKvDRanRYM9JaD8hDuEJZ5vrzNSusyyapRGPasffRIlp6wJSMLczolfIXRmgPjturtFUtP7DeMoKsS+I5mPeq8wmnqjj8E1Oz8yvyTh3p9JMiRwbEtfo/1okVzCARe81bDM81+c7wrvWh74pZ4eBCwIZg7CmR4SkDc8lvBLT2sVZOuWh/fRuJMeOS5N9X7AEf57e3QTvNWw9SrcRLRldmLOUfU4ttTQAGdOeiyEyCfn3WC7+HkX/G+JGvS88ZM4yb/PYLwVT76sCDBKhyXaXpqlCwBYzU/rkGTEkkzl4uop+Qf5iRQz34ZF93y06vzAx6A8f5Uw88nLlHpdMWIpYKj0+9ObSqTZ+zXXhqR5P/ZcsqpMlwc0rm7GiBmzgUHzgdY+nROibmcLZ1kvmVLKAcPxGnxh7ylZTy4sEo4B2T+XBV4rb544GoMd7ikn+bIUxCjCjstdSLcRVtO8EV3VN/MLrzhweFbku3+7j8Gqr6PFmFPlgdohY3ToegA3hOUEDuM8P5+FYmQGQR4dBxe4k67s9nBbX3E5oMtOnonVFm1vOtPeu5ivU8iwLawmxeAa3wbHvuILPCEgzQFpdjTxxOWMhXsXSxtRApQ386BAW0q7YTvksWkh0AWOU/A50iYkWvs2lltnLyJg7K3jnp/p3aviIqVToeeWFM4Z9/Wr9TrGGvaYXLUNZEasQ0Rd/DeN7/4SK/G1Xcl/j5X4URO0fworsW2/YSUsDkaURGuMbdzC+yOgxRDZsFt8wbhTVAZmFVzGNPBE+2RGN31aHebqHD5dBDL2LX+cir1CbFYDuSXu+BsuFGGeDRVegsXcq4CxLq7HaUuzfRxoTbSs8pE80xtGEyvRYmO27CCZ3GgtYxJtf51lZKhFQRO48oE82kdVQxZfLFdNlqKLBYorQGiEhUPIManK8bvYaUiwHXF+Tzi7rrXv8C5LPeAckgnXfu9ooR1HGhSmH/DR/bk094e1TVIOB2bO5+9EqeIrQ8O/CNOTZ8loifsBrYwqUTCrJWPYIGibvuN6Nc38ykXs1uFIa1A9yJcDsr0YKTO4z8Qf24feBuuC7xDSWTkOppBarMBiiuQWZuoELsKFyEtnMOjW8GaijlMHEuBVIfhpDafcwEQhb3QyEo+DvE21Rp/irx7g7GCy42lsehpzmMZfaePf6E10A6nHVbubW+5z/Vi3wGGr/L0vH1is5MTnF7hb1eTBufbBhI3sWOudyoNIWIfoHfvPZl0sNSeGl3vyGSEzJWqIISbvT9jGrN7Q+5w4xvDH8Y7VreD8mh8fIrUWaXlvJUtXCZeJRp7EtbCyh9N06QS3+Bd9YXeovsV1zsQ/OpWq7fvFcU/nYQ56UkeDny7P3G0EoHP89EYFinyL9DxiocV6Ogs
SB3IPnk+r3KezfuLqJI44Zmx5bZsVyB0/93yE5dh9G++evTPrMER+VvuLZpyLk7ByA2Rv3JwgyKbJWerMKoR4eWddRrzn9CHmPJwr6jCD0fNlDdR75GQ9z10b5dzrgtowcQgh4VgChZ5aZtnDM36X1+jNOKF1CuDaFqW9p2Hc+WzhKmPVlxFZyhQ14mr4+/+lJNzvMxCgBrx7lfCWm7LY4AYpaWgT8kk67ljcZUwnvYHHX889flQiP7xnrz+VexGcFOfRl8I43xef67P+3lLbv30OzlzgAEge0XWEpv7GJMDVtyfSDKCT6GrGIR+Hsd0lqXV2sKJeTSnbpdPgzdPbudKKQppOZLQnvuri1VYpH8OevJvSUVI/G/du2j7KB85o9PNcdeFynx59LjpoH6g+rf1tQuQB3cETN4BLwwYJSR/BDvvSA6Rr+cc5MnJsrEkeVytLbZ2p1tI62ojNPYMMCToJcZys5VPcNt8YJP18+UY282M8X73ncTr0aoNjoZbgypxn/4Um7PcCG5ZLB/WIqqvorFi7BENnvYCFe7Nq1RkaEn/rgGRHfcle3/6Kld91j/Hdns0pYuhPFZlPPXI6WyeA1mlCwAcKfVJX6dIAZ3p9YUn9SSZau8VtuXyvAlKY1vYy18U4GnrpL98nkcsp7fHU23yjpvZIjovMudC+m9MVaellN4T7PMcNq5SJSlW7+Elnp9UmrJsCuhFafCvei+MXiRg1K3lJhEan/LxIxB/ymNZmLvQ59ogUBIIGr+ews8B5va5rx9vNaXVarbqDMtxJnGPkqfidOu4SNF6sOn6vl8nLSzSxx4Uc1ZSNzSM/gBqbo06/x0HVH04738HCy3DuvE1xtIK0xgi36X+jNwh2eZZZhJPXu/ntzoaE33J0joUTPazuCzqRD8MxIuMRsG+z1QjtlOpgOFmU9WHFNKZ/KicnKANYxC7vtz4Y7wfneeKE0cO4dlfQ/U9s/i88m+J6EVALtRvUi/jQGSMcf1YfORRFAGoGITiSeM5yZx32Ii4uuKs4MQCdppCTCfDejdALs5hzm29rxp7lrs/yKx33eeDVDemXunducVs0+/Qlz0MJPKarvZl8R9vedg3vqW13Xd8mth0D+qIcOZIN6KV5xbEZ5t6uvUfjWYL3VSsG77LQu87r2PxwADqp3Ig+vXCn6dyrBSTvW+tsmib2EEL1LPRnjPR2n9aCcC/k5HaXnBqrjp68PRrd0LLnIeLp8sD4ZD3693SARpyeOHBixr5taX0TYHZpPEO8DALu8pQcb+p4jNbVivqx52Lxd4/gbWfjZzuk8/OWh/GIdj0fCzznYp+EHF7rs/wG5BL+g7jmNY9akE4hdRUvd36K+z35n6ki+Nr48T/hY2Pk9PXr+pbXvvnOQ3r9vw==</diagram></mxfile>
|
2111.00162/main_diagram/main_diagram.pdf
ADDED
|
Binary file (42.2 kB). View file
|
|
|
2111.00162/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,81 @@
# Introduction

Deep neural networks (DNNs) have dramatically raised the state-of-the-art performance in various fields. However, the over-parameterization of DNNs has become a non-negligible problem. The number of parameters is now often on the billion scale, which significantly increases the inference cost of these models. The emerging field of the *lottery ticket hypothesis* (LTH) explores a new scheme for pruning a model without sacrificing performance. The core idea is to identify the sparsity pattern ahead of training (or in its early stage) and train a sparse network from scratch. It has been hypothesized [@frankle2018lottery] that DNNs contain sparse subnetworks, named *winning tickets*, that can be trained to match the test accuracy of the full model. These winning tickets hence have comparable or even better inference performance while potentially reducing the computational footprint.

However, finding winning tickets is a non-trivial task: it involves repeating the train-prune-retrain cycle several times [@frankle2018lottery], which is burdensome and computationally expensive. Although other works [@lee2018snip; @wang2019picking; @You:2019tz] have shown that sparsity might emerge at initialization or in the early stage of training, iterative magnitude pruning (IMP) still outperforms these alternatives by clear margins [@frankle2020pruning]. Yet, the powerful IMP method requires multiple rounds of the train-prune-retrain process on the original training set, which is far more expensive than training a dense network. This makes a found winning ticket a valuable asset to its owner, highlighting the necessity of protecting the winning ticket's copyright.

::: wrapfigure
r0.4 {width="100%"}

[]{#fig:tesear label="fig:tesear"}
:::

Previous works [@uchida2017embedding; @adi2018turning; @zhang2018protecting; @ma2021undistillable] have shown that deep networks are vulnerable to intellectual property (IP) infringement. For example, one can use transfer learning to adapt a trained model to a new task, or use model compression techniques to create a new sparse model based on the target model. Fortunately, in recent years a number of solutions have been proposed for the ownership verification problem. The key idea is to embed verifiable information, called *signatures*, into models' weights [@uchida2017embedding; @darvish2019deepsigns; @zhang2020passport] or predictions [@adi2018turning] without visibly affecting the original performance. By extracting the embedded information from a model, one can verify its ownership and hence protect its IP. Methods that embed information in weights often use additional weight regularizers to enforce certain patterns, such as signs. Prediction-based methods use a special training set, often called a *trigger* set, as additional training data. A model trained on both the original data and the trigger set generates the desired prediction labels for the privately held trigger set while preserving performance on the original training set. However, these general methods do not take any structural property (e.g., sparsity) into account, leaving room for improvement in winning ticket mask protection.

To protect the IP of winning tickets, we investigate a novel way to leverage **sparse structural information** for ownership verification (Fig. [\[fig:tesear\]](#fig:tesear){reference-type="ref" reference="fig:tesear"}). The structural information embedded in winning tickets is a good "credential" for ownership verification, since a winning ticket at extreme sparsity is naturally robust to fine-tuning and (further) pruning attacks. The winning ticket at extreme sparsity cannot be pruned further; otherwise, the inference performance will drop (hence losing its "value"). Meanwhile, fine-tuning a winning ticket only tunes the weights; the sparsity pattern is not changed. However, some key questions remain: *How should the ownership verification process be formulated under the context of the lottery ticket hypothesis? What kind of structural information should be used? How can user-specific information be injected into the structure of winning tickets?* We present answers to these questions in this paper. We summarize our findings as follows:

- We formulate the lottery verification problem and define two different protection scenarios. We show that even without specific protection, an extremely sparse winning ticket can partly claim its ownership because of the critical role of its sparse structure in the final inference performance.

- We further propose a new mask embedding method that is capable of embedding ownership "signatures" in the subnetwork's sparse structural connectivity (Fig. [\[fig:tesear\]](#fig:tesear){reference-type="ref" reference="fig:tesear"}), without noticeably affecting its performance. The signature is robust: it can be extracted and decoded even after pruning or fine-tuning attacks. Combined further with the trigger set-based method, our mask embedding method works under both white-box and black-box verification frameworks.

- We investigate several verification schemes, *i.e.,* separate masks, embedding signatures, and embedding signatures with a trigger set. We show that these schemes are robust to the common removal and ambiguity attacks, as well as to a new type of "add-on" attack. Extensive experimental results demonstrate their competence at protecting winning tickets. For example, on ResNet-20, our verification framework intrinsically defends against fine-tuning attacks, as well as pruning and add-on attacks at all levels of pruning ratios.

# Method

An ownership verification framework for extremely sparse winning tickets can be formulated as a tuple $\mathcal{V} = (ME,WE,F,V,I)$, each element of which is a process:

- A *mask embedding* process $ME(\mathbf{M}_0, \mathbf{s})$ (optional) for sparse masks, where $\mathbf{s}$ is an optional string that can be encoded into masks. The output of this process is a new mask $\mathbf{M}$, which either has $\mathbf{s}$ embedded or contains other structural information that is useful for ownership verification. We call a verification method with $ME(\mathbf{M}_0, \mathbf{s})$ enabled a "mask-based" method.

- A *weight embedding* process $WE(\mathbf{D}_\mathrm{tr}, \mathbf{T}, \mathbf{s}, \mathbb{N}[\cdot], L, \mathbf{W}_0, \mathbf{M})$, which is a learning process for the lottery ticket model. $\mathbf{D}_\mathrm{tr}=\{\mathbf{x}, y\}$ is the training dataset, $\mathbf{T}=\{\mathbf{T}_x, \mathbf{T}_y \}$ is an optional trigger set provided to the training process, and $\mathbf{s}$ is an optional signature for the weight embedding process. $L$ is the loss function for model training (usually the cross-entropy loss), $\mathbb{N}[\cdot]$ defines the model structure, $\mathbf{W}_0$ is the initialization of the weights, and $\mathbf{M}$ is the sparse mask for the model. The output of $WE$ is a model $\mathbb{N}$ with sparse weights $\mathbf{W} \odot \mathbf{M}$, where $\mathbf{W}$ represents the trained weights. The trigger set $\mathbf{T}$ and/or the signature $\mathbf{s}$ are embedded in $\mathbf{W} \odot \mathbf{M}$ after this process and can be verified with the verification process introduced next.

- A *fidelity evaluation* process $F(\mathbb{N}[\cdot], \mathbf{W}, \mathbf{M}, \mathbf{D}_\mathrm{te},\mathcal{A}_f, \epsilon_f)$ evaluates whether the performance discrepancy of model $\mathbb{N}[\cdot]$ is less than a pre-defined threshold $\epsilon_f$, *i.e.,* $|\mathcal{A}(\mathbb{N}[\mathbf{W},\mathbf{M}], \mathbf{D}_\mathrm{te})-\mathcal{A}_f|<\epsilon_f$, in which $\mathcal{A}(\cdot, \cdot)$ is the inference performance on the test dataset $\mathbf{D}_\mathrm{te}$ and $\mathcal{A}_f$ is a target inference performance associated with the model.

- A *verification* process $V(\mathbb{N}[\cdot],\mathbf{W}, \mathbf{M},\mathbf{T},\mathbf{s},\epsilon_s)$ checks whether the sparse mask $\mathbf{M}$ or the trigger set $\mathbf{T}$ can be successfully verified for a given model $\mathbb{N}[\cdot]$. For mask-based methods, the process checks whether $\mathbf{M}$ and $\mathbf{s}$ match by evaluating $\mathbb{N}[\cdot, \mathbf{M}]$ on $\mathbf{D}_\mathrm{te}$ to see whether the performance gap is smaller than a pre-defined threshold $\epsilon_s$, and/or by extracting the information in $\mathbf{M}$ and comparing it with $\mathbf{s}$. For trigger set-based methods, inference is first executed on the trigger set images $\mathbf{T}_x$, and the predictions are then compared with the trigger set labels $\mathbf{T}_y$ to see whether the false detection rate is below a threshold $\epsilon_s$ [@fan2019rethinking].

- An *invert* process $I(\mathbb{N}[\mathbf{W}, \mathbf{M}], \mathbf{T}, \mathbf{s})$ exists and enables a successful ambiguity attack [@fan2019rethinking] if: a) a new trigger set $\mathbf{T}'$, a new signature $\mathbf{s}'$, or a new mask $\mathbf{M}'$ can be reverse-engineered for the given mask $\mathbf{M}$ and weights $\mathbf{W}$; b) the forged $\mathbf{T}'$, $\mathbf{s}'$, $\mathbf{M}'$ can be verified with respect to $\mathbf{M}$ and $\mathbf{W}$; and c) the fidelity evaluation outcome $F(\mathbb{N}[\cdot],\mathbf{W},\mathbf{M}', \mathbf{D}_\mathrm{te},\mathcal{A}_f,\epsilon_f)$ remains True.
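The two branches of the verification process $V$ can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: `verify_signature`, `verify_trigger_set`, and the threshold values are hypothetical names chosen here for illustration.

```python
import numpy as np

def verify_signature(mask_region: np.ndarray, signature: np.ndarray,
                     eps_s: float = 0.3) -> bool:
    """White-box check: the fraction of mismatched 0-1 entries between the
    extracted mask region and the signature must stay below eps_s."""
    mismatch = np.mean(mask_region != signature)
    return mismatch < eps_s

def verify_trigger_set(predict, trigger_x, trigger_y,
                       eps_s: float = 0.1) -> bool:
    """Black-box check: the false detection rate on the trigger set
    must stay below eps_s."""
    preds = [predict(x) for x in trigger_x]
    false_rate = np.mean([p != y for p, y in zip(preds, trigger_y)])
    return false_rate < eps_s

# Toy usage: an extracted region identical to the signature verifies.
sig = np.random.randint(0, 2, size=(21, 21))
assert verify_signature(sig.copy(), sig)
```

In practice the thresholds $\epsilon_s$ would be calibrated so that an unrelated model passes neither check.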

The high-level definitions above are general and can work with any concrete implementation. We introduce several methods that use sparse structural information in the following sections.

Our motivation originates from the nature of winning tickets: their sparse structure is critical to their performance. As the sparse masks found by IMP outperform other pruning methods by clear margins [@frankle2020pruning], incorrect masks will lead to degraded test accuracy. In the next few sections, we demonstrate how to use the sparse structure of extreme tickets, *i.e.,* both the sparse masks and the weights, to perform ownership verification under different verification schemes. The ownership verification can be performed in two different scenarios: (a) protecting the sparse masks of the extremely sparse winning tickets; and (b) protecting the trained extremely sparse winning tickets.

Such sparse masks play a crucial role in achieving outstanding generalization [@frankle2018lottery] and transferability [@chen2020lottery; @chen2020lottery2], which motivates us to prevent them from being illegally distributed or used. Given a fixed initialization, correct masks are essential for training the extremely sparse winning tickets to match the performance of the dense network. If we split the sparse masks into two parts, neither part is intact and correct, so neither can be trained to match the performance of the dense model with the given initialization. Recalling that, in general, one lock is opened by one key, we adopt the concept of keys and locks and propose a new ownership verification method for the masks of extremely sparse winning tickets.

Denote the sparse masks of an extremely sparse winning ticket by $\{\mathbf{M}_l\}_{l=1}^N$ and the weights by $\{\mathbf{W}_l\}_{l=1}^N$, where $N$ is the number of layers. To sparsify a model, $\{\mathbf{M}_l\}_{l=1}^N$ is applied to the model's weights $\{\mathbf{W}_l\}_{l=1}^N$ by an element-wise product ($\{\mathbf{W}_l\odot \mathbf{M}_l\}_{l=1}^N$). Our goal is to find *key masks*, *i.e.,* sub-masks $\{\mathbf{M}_l^s\}_{l=1}^N$, that contain as few elements as possible while the performance of the sparse network with only the *locked masks*, *i.e.,* the remaining masks, degrades as much as possible. Meanwhile, fewer elements in the key masks reduce the cost of storing, distributing, and using them.

We next describe the algorithm used to discover key masks. It splits the masks of extremely sparse winning tickets ($\{\mathbf{M}_l\}_{l=1}^N$) into key masks $\{\mathbf{M}_l^s\}_{l=1}^N$ and locked masks under the constraint $\mathrm{rspar}(\{\mathbf{M}_l^s\}_{l=1}^N, \{\mathbf{M}_l\}_{l=1}^N) < n_s$, where $n_s$ is a hyper-parameter controlling the relative sparsity of the key masks. Score functions decide which part is split into the key masks. The pipeline is described in Algorithm [\[alg:alg1\]](#alg:alg1){reference-type="ref" reference="alg:alg1"}.

::: algorithm
Derive the score matrices by applying score$(\cdot)$ over $\{\mathbf{W}_l\odot\mathbf{M}_l\}_{l=1}^N$ to get $\{\mathbf{S}_l\}_{l=1}^N$.

Set the entries in $\{\mathbf{S}_l\}_{l=1}^N$ to negative infinity if the corresponding entries at the same positions in $\{\mathbf{W}_l\odot\mathbf{M}_l\}_{l=1}^N$ are zero (*i.e.*, already pruned).

Calculate the $n$^th^ largest number across the score matrices $\{\mathbf{S}_l\}_{l=1}^N$ and record it as $T$.

Set $\mathbf{M}_l^s\gets \mathbb{I}_{\mathbf{S}_l > T}$ and let the key masks be $\{\mathbf{M}_l^s\}_{l=1}^N$. The comparison between $\mathbf{S}_l$ and $T$ $(\mathbf{S}_l > T)$ is performed element-wise.
:::
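As a concrete illustration, the splitting procedure above can be sketched in a few lines of NumPy using the one-shot magnitude score. The function and parameter names (`split_key_mask`, `n_keep`) are hypothetical, and this is a minimal sketch rather than the paper's code.

```python
import numpy as np

def split_key_mask(weights, masks, n_keep):
    """Split sparse masks into key masks (the n_keep top-scoring entries)
    and locked masks (everything else that survived pruning).

    weights, masks: lists of same-shaped arrays, one pair per layer.
    """
    # Score surviving weights by magnitude; already-pruned entries get -inf.
    scores = [np.where(m > 0, np.abs(w * m), -np.inf)
              for w, m in zip(weights, masks)]
    # Global threshold T: the n_keep-th largest score across all layers.
    flat = np.concatenate([s.ravel() for s in scores])
    T = np.sort(flat)[-n_keep]
    # Key masks take the top entries; locked masks keep the remainder.
    key_masks = [(s >= T).astype(int) for s in scores]
    locked_masks = [m - k for m, k in zip(masks, key_masks)]
    return key_masks, locked_masks

# Toy usage: one layer, three surviving weights, keep one in the key mask.
w = [np.array([[0.9, -0.1], [0.0, 0.5]])]
m = [np.array([[1, 1], [0, 1]])]
keys, locked = split_key_mask(w, m, n_keep=1)
# The single key entry covers the largest-magnitude surviving weight (0.9).
```

The relative sparsity $\mathrm{rspar}$ is then simply the ratio of non-zeros in the key masks to non-zeros in the full masks.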

We study several score functions in our experiments: 1) one-shot magnitude pruning (OMP): the absolute value of each weight; 2) edge-weight product (EWP) [@patil2020phew], which measures the importance of paths from the model's input to its output; the EWP score is the product of the weights along a path; 3) edge betweenness centrality (Betweenness), which measures the importance of each edge inside a graph; for convolutional layers, we define the weight of each "edge" as the sum of the absolute values of its elements; and 4) random scoring.

Another scenario is to protect the trained extremely sparse winning ticket, since superior performance on certain large-scale datasets usually comes at a huge economic and ecological cost. Although directly splitting the masks provides a solution to the ownership verification problem, it has some drawbacks. It imposes extra cost on users, since they need to recover the masks. Such a method is also intrusive and places additional responsibility on the users' side for storing the key masks safely. To render the extreme tickets capable of self-verification and free of key masks, we propose a novel pruning method that is able to "absorb" secret information (*e.g.*, signatures) into models' sparse masks. The core concept is to enforce the sparsity masks to follow certain "0-1" patterns, which can be extracted from the masks and decoded back to the original form of the information.

A function `encode`$(\cdot)$ transforms a string $\mathbf{s}$ into a matrix $\mathbf{M}_s \in \{0, 1\}^{d_1 \times d_2}$, which we call the *signature mask*. Our goal is to embed $\texttt{encode}(\mathbf{s})$ into the sparsity masks $\{\mathbf{M}_l\}_{l=1}^N$. One critical question is where to embed the signature mask $\mathbf{M}_s$. Empirically, low-level convolutional layers are denser, which means their weights are less likely to be pruned. Therefore, information embedded in the low-level convolutional layers is harder to remove by pruning. Based on this observation, we embed $\mathbf{M}_s$ in low-level convolutional layers. To minimize mask changes, we first find the region in $\{\mathbf{M}_l\}$ with the highest similarity to $\mathbf{M}_s$ and tune the sub-mask of that region. For masks of dimension two, we directly replace the region with $\mathbf{M}_s$; for masks of dimension greater than two, we raise the dimension of $\mathbf{M}_s$ by using random connections. The detailed workflow is shown in Algorithm [\[alg:alg2\]](#alg:alg2){reference-type="ref" reference="alg:alg2"}.

The choice of the function `encode`$(\cdot)$ is open, but one common option from daily life is the QR code [@soon2008qr]. QR codes have multiple advantages: 1) a QR code is naturally a pattern with only zeros and ones; 2) QR codes can correct errors when the code is dirty or damaged; for example, the H correction level can tolerate up to 30% error [@soon2008qr]; 3) QR codes can be small enough to fit easily into sparse masks: a generated QR code can be as small as $21\times21$, while the number of channels in the convolutional kernels of deep learning models is typically greater than $21$, leaving abundant space for fitting the QR code inside models' sparsity masks; and 4) a QR code **without** the finder, alignment, and version patterns is imperceptible when fitted into sparse masks, since no "regular" patterns remain. Based on these merits, we choose `encode`$(\cdot)$ to be a QR code generation function. Specifically, the `encode` function we use returns a QR code without finder, alignment, and version patterns; when extracting the code, these patterns are added back to decode the credential information behind the QR code.

::: algorithm
Calculate $\mathbf{M}_s\gets \texttt{encode}(\mathbf{s})$.

Squeeze each $\mathbf{M}_l$ into a two-dimensional matrix $\mathbf{M}_l^f$ by setting $(\mathbf{M}_l^f)_{ij} = \mathbb{I}_{\|(\mathbf{M}_l)_{ij}\|_0 > 0 }$.

Calculate the similarity (percentage of matched 0-1 patterns) between each $\mathbf{M}_l^f$ and $\mathbf{M}_s$, and name the one with the largest similarity $\mathbf{M}_\textit{max}^f$.

Change the dimension of $\mathbf{M}_s$ and fit it into $\mathbf{M}_\textit{max}^f$ at the region where the similarity is the largest.
:::
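The matching-and-replacement step of this workflow can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the mask has already been squeezed to 2-D, the dimension-raising step for higher-order masks is omitted, and `embed_signature` is a hypothetical name.

```python
import numpy as np

def embed_signature(mask2d, sig):
    """Embed a 0-1 signature matrix into a 2-D sparsity mask at the
    best-matching region, so that as few mask entries as possible change."""
    H, W = mask2d.shape
    h, w = sig.shape
    best, best_ij = -1.0, (0, 0)
    # Slide the signature over the mask; similarity = fraction of matching bits.
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            sim = np.mean(mask2d[i:i+h, j:j+w] == sig)
            if sim > best:
                best, best_ij = sim, (i, j)
    i, j = best_ij
    out = mask2d.copy()
    out[i:i+h, j:j+w] = sig  # overwrite the best-matching region
    return out, best_ij

# Toy usage: embed a 21x21 signature into a 64x64 layer mask.
mask = np.random.randint(0, 2, size=(64, 64))
sig = np.random.randint(0, 2, size=(21, 21))
embedded, (i, j) = embed_signature(mask, sig)
assert (embedded[i:i+21, j:j+21] == sig).all()
```

Extraction reverses the process: read the region back out of the mask and decode it.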

Next, we propose three different verification schemes based on sparse structural information, as summarized in Table [\[tab:my_label\]](#tab:my_label){reference-type="ref" reference="tab:my_label"}. Under our unique context, we further introduce a new type of *add-on attack*, which aims to create ambiguity against lottery verification by "recovering" several pruned weights and manipulating the sparsity patterns. More details can be found in Appendix [6](#sec:more_methods){reference-type="ref" reference="sec:more_methods"}.

Scheme $\mathcal{V}_1$ is designed to protect the sparsity masks. We separate the sparsity mask $\mathbf{M}$ into two parts, $\mathbf{M}_\mathrm{l}$ and $\mathbf{M}_\mathrm{s}$, where the subscripts $\mathrm{l}$/$\mathrm{s}$ denote "large"/"small", respectively. The small mask is sparser than the large one and is used as the key mask, while its large counterpart is the locked mask. We apply the two masks to the weights and obtain two separate parts $(\mathbf{W}\odot\mathbf{M}_\mathrm{l}, \mathbf{W}\odot\mathbf{M}_\mathrm{s})$. Before re-training, legitimate users should merge the two parts by adding them up to recover the original sparse weights $\mathbf{W}\odot\mathbf{M}$. Ownership can then be verified automatically through inference performance, since an incorrectly provided mask-weight pair will deteriorate accuracy after re-training.
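Because $\mathbf{M}_\mathrm{l}$ and $\mathbf{M}_\mathrm{s}$ are disjoint, merging reduces to an element-wise sum. A minimal sketch with NumPy stand-ins for the weight tensors (the variable names are illustrative, not the paper's code):

```python
import numpy as np

# Disjoint locked (large) and key (small) masks that partition the full mask M.
M_l = np.array([[1, 0], [0, 1]])   # locked mask, distributed with the model
M_s = np.array([[0, 1], [0, 0]])   # key mask, held by the legitimate user
W = np.array([[0.3, -0.8], [0.2, 0.5]])

# Legitimate users recover the original sparse weights by summation.
merged = W * M_l + W * M_s
# Equals W ⊙ M because M_l and M_s never overlap (M_l ⊙ M_s = 0).
assert (merged == W * (M_l + M_s)).all()
```

Without the key mask, the entry at the key position stays zero, and re-training from the locked mask alone cannot match the dense model's accuracy.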

We apply the signature mask embedding method to embed credentials into the extreme ticket. We then train the model and dispatch it to legitimate users; no further action is required on the users' side. For verification, one can extract the signature from the sparse model and validate the ownership of the extreme ticket. Compared with Scheme $\mathcal{V}_1$, Scheme $\mathcal{V}_2$ is more user-friendly since no extra action is performed on the users' side. The application scenarios of Schemes $\mathcal{V}_1$ and $\mathcal{V}_2$ are also different: the latter focuses on protecting the trained weights. It also shows strong defense against removal and ambiguity attacks. However, this scheme works under the white-box verification setting only, which means that access to the model's weights has to be assumed. To remove that assumption, we combine Scheme $\mathcal{V}_2$ with a trigger set-based method and propose Scheme $\mathcal{V}_3$ in the next section.

Scheme $\mathcal{V}_3$ is more sophisticated than Scheme $\mathcal{V}_2$, as a set of trigger images and labels is used during the (re-)training process. With the help of this trigger set, Scheme $\mathcal{V}_3$ is capable of black-box verification. Through remote calls to service APIs, the owner can first probe and claim ownership in a black-box regime, and then request a white-box verification if the black-box mode has raised a red flag. The white-box verification part of Scheme $\mathcal{V}_3$ remains the same as in Scheme $\mathcal{V}_2$.
|
2112.07194/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-11-07T11:46:44.453Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" version="15.7.0" etag="LyhrULr80x5F3sxvCaen" type="google"><diagram id="sWnHVcrG-C9iphnPhuBG">7V1Zc+O4Ef41ekmVUbiPx/Exm9R6Nq7MVmX3aYuSaIk7tKhQ9NrOr09DPMQD1EGRkmYij8cWIQgg0R/6RnvE7l7ef4q95fxLNPXDEcXT9xG7H1FKOKUj+42nH1mLZiptmcXBNGvbNHwN/utnjThrfQ2m/qrSMYmiMAmW1cZJtFj4k6TS5sVx9Fbt9hyF1VmX3iybEW8avk680G90+3cwTeZpqxal3n/3g9k8n5ng7J0XL++cDbGae9PorTQXexixuziKkvTVy/udH9rVy9clHehzy7vFjcX+ItnnAxkl/vLC1+zZsvtKPvKHncXR6zLr5seJ/+5aYm8c1ldscwukeDCAhB+9+En8AV2ygZhJP5GB4SYf4W2zslIRJGARidKEMybSDvPSEqv8U15G2lkxz+bp4UW2AO7FYLsXA9ZiMfVtfzJit2/zIPG/Lr2JffcNwA5t8+QlzN4uiIvhYhZ6q1X2OvTGfnjrTb7N1uPdRWEUr8dnz+sv6PIcLZIM+ETbsZI4+lagjdoeQRiWPvl5/VX0LL2jsf2XjfnZewlCu9S/Bi+whSj+xX+Dn/+KXrwFdGmlc5me1E3PjIDcOAgoFOJwI4wrzI1kuklBynqgIHdQUIZJ9uyWlDkvkf95tXvslmxewqtZ9nv9kXGF9HkvO9BNOswn6ED58t0xhMRf/hgpoMXr1C6buoeWfFx4iHF9LmhL7zBvPgZ3xxAx/wDViAngacwoJoFisrJJKW/SmGOCFGFUCKK5FoQ2aSx6ILG4HBKPxF0ShFMfCP0BJP4jGIkH2+a/A8Vv/3Y81dcjVVlKZXMvooVf4wRZkxcGM9jP9xOgtw/ttxYNAUiwT9kbL8F0aqdxYmmDNtwPnBhH1FBhuBFcY56L+1z+KwHsvQkojoyg0jD4gCSU5BicV0Qr0vR4UMndnN9fTD9Z1cEuqmXlwaRKmRo7nnq+frZdYIXij9/sUiKRX/6efWZ9cf+erXN69VFZdX+aayIHr/kNYQJJzrCkQnJDla6uutbISKxJ8d0kAPTQhtuV59gOc9iOzu7sKQrW+zLHjpIIyGmINARzO4mbs+QjrqLXeOJng5Q1mca4AmlQA4CYRioN6KmOiynS8BTFF6tOk3jxzE8a0wDJvY9St6XtsGpAq1j9vdCmWllYg0PkPK0zj5q4uBKFW8TxTn5V5U0uNraTX/XJh6beal6IwB6YEtUKAbSVgd2hQXJVAcOYQ43hCPgXl0RoA3aMZM0NwfphSLoVIqslKGrdEdEEGrff4s6aJkAXAMKjlWXw++fHNVbW728XYuk99aG61Ljos574k4lLqR1rwUVP0oljiayAAVUHFB9QX2riSTaRQLBCSklFFbASTZhy8EZuOc7xUDCXo/C0MxNom3u2KT6RInTJjIWDuQriFliIALEEenGNsYAA1A1EtahIFWXHICyPB1TuRTkrc/k1Suz1mfmLT6bCVy7+AjoE82RPggYUMQIaGKhLXMGGNk3qEykRaGmYUcmUBC7EHfykB+uJkOE03aM1Vo4RcViQub7qMDmHUFApKJKMYrsLpVBE1HYvr+3A/fXT7Xqvqd3owQppp2nzx9nAJ52gq2qb0+wYfG0sprK9VDKfWiymwXB5w7FGAAWlOFgZmtAqJG644Qjrkm3BT4NU6xWlXMB2MYRzU9svRBPLdTZfuiNwKSw9YUZJmAlmqxvvGiNDStPU9kcLjrtgaw//7P7YOs
wab9NLj8bWds/IqXgeB50EY6EFBkIDnVu8fYdCZ8ew3UzyLshx+YWPk3oFAvz3ICkBCq5+zxEErzdwshcfZWz9VobdfhxuX7TBg60pNCrbkulyZr3Edw9KY5DohkomFJLAPZUwQgH/rI5bxOp6FsRcAn3hcSTWoO2Juo2JK3G64+Vwu5v8dDr+l8cvZ9fwQb/XU+7S8DUdM9mXhr/2DFGtBJNCMqZYlbrn9SAQl3t7GIdjOxjuHlxYONjFeBQWco2wgQU50f74uR8sEEoRVwwUJW5D1kKSi8KCy/l8pAGYb7GzhTok4hSEjmEGS4NpniBw3kAHIxpxRoDRaCkxxz3pVIwoZDhhnEtQ/kGgbNWpSE1GnjLMQVxO7J6g1gNkwCQmwK5BvWGYmnpEUiKwewrMqKbHZjDMGEoI3JIyUmNZBQ2r50jsD5qt4x7vfDgGJi4H99Ws6wNOMDXCRjJqGAgXWXMQdDfrtg/bjQV1QA51ObK/e+QIRBUIDQYKKtZcnAk4BGElCVHKcAHSpnobnfnQjnGBD6GSPyAPnJyDK9H+HeUNlwEpOwwK94HbZbAvrqqmv9oKtQvxtxuQxYyDVFLW285rIk8IkNStkDjEE2U0mGbwXxomagbaUN73HdP27X3PIXDO6M5BjqadOUo1zZ2dBpI7giaFCXcoCinDCG4HthaYhtqKz+q4A7medj1Oz76nffKofzATs24tXISJSYRCRJQiM6obardbnDY8VJHZ5xTaPfj5Tyec08yeC5fOghskJQG7UQHTErVcFt6VE+5SMPVA8nj7tJz3zAldXvgaHldzb2lfRvFDGAbLFby+XfpxANPYvKZ7P2192jTt8nqOi4Ml/3xNwsDmg6c81XmkpMRl0w08ciaUz2JvGgCaa83Z8x3lOzcaaSa4woyCpayrCHPgnBkEskwwhQ0wH5ertIdEmb1SwmHDf80uoziZR7No4YUPm9Zantmmz2MULTOq/OknyUdGFu81iWpycG+adeNBuXF+gM5GMFIuZ/VRq/3/55W+UcCKmCE6+86f+Lw6gzLIlFNGOiY7bVcZHKGMk2oJffil996X7vwAMoA+sd3Yb9u2Z3Yl8fqI+0PMIE4IpgrsKJAcVcORk/0UiC7w2Z2QvTN4il3B01+iAKQ/xf9Y/OlPkiBaDHI48FKk/nZTuKBfOUpqgKkoaQ+QEmm9v44c2bo+2ukYcHt+9Mkz7t+vhwq7BtoL3eY8RwqZy4F8voOq6emM9+sx1W2IAvMWgUQxhlmxxRVhF4UoejmIuvKlLRE8pcFA1czGRzGhml4Wipre2afYnwaZxoHH9kla6hfgWfCXb9US78Wu3GK8Wo46nzKtjrEbDbEPuMzqjFhKWWM5A3z5QFicrlg7BjIbAmYRtyNx74aUE3pVpLRW96gcJ7MXT14CuFysW2BPjToVABm4eEf+AY3BOlQcbENmgAnWAlYSTLeyfWiaSFa4Cd0+Snuwpm93SNzud6DxiuFLxDADvZBvqgXharhLKoaw3oZhQYfCcNMffBdHK7sKDwug1tI+xeO6oS8ohf7zFUldkaQlAgOVSG2wpFpVcSSIQk1JXsSGe8dO0yH9xVt9s4DxFrNXW7GM4nWttWAxu8Lo+4GRIx+8iGD0DqKmn/3n1zC0EaPRAx1pPDLs0Q8ADjH0ugepGc/8BWjLVyBdBJAEIrbOlhaSYqNoLQkBN9lRgZrekdReoWRfLyjRYFyu7cNjTNT2US6nQNdgxSrqMOzDeqUKMSkNNsyWfxO1sjhgpSLhUPtPaMA2/e9PK/91Gq1F4Xhda/SnVN+vGgWJ780nfpxT/crRLoCjMV2t7ElqYANrdKu5WSRu9s3feHsQ4Ei21BuX/HGLew3B1ohERhIBkhM0d5toUkWa0IiUSjg0E/hOV/uL744cXIRo/UEKPg0BNkati4FrIzi1GXzV4LTEfIcX7XTFoHgzqHAHC2KXn248XOlPy/r91TJarMPUT14QX6Xo9yBFJUHYIT
rxUKKzGWGwqQ0f6VOuUndt9JxqZiePD1yBdhTQKCKkFB6oGgcSMySaZzQGc2fwZkygk6Q8pI6Buypmj/k5bbjZWQF+j2KZe6BkAHFIFJIgypQ2RkplaimHLrcFaGNKK4aFZPZcGDcOc7KPLF/uStAeGEAtVVEuAUF7FEu5RAQJcUYEtddWGQxB+5RSOROAWiusHPJHKE4PIOwK5ZwMQu31wAeDUGv51UtA0R5VWc+DIuuOEspoyiSR2ODamSzpCOWcDERNf3xK42sk8IL0562RQCmbPGgw9+bxSextPOdQB9NlpRd+V24mKpEShnMMks0Upb2KXC3r0mwg6oSRGtHuQ78EkF1GVvQPBDjNTgs4uNz8pbz0JM/mDw6yh/8B</diagram></mxfile>
|
2112.07194/main_diagram/main_diagram.pdf
ADDED
|
Binary file (51.6 kB). View file
|
|
|
2112.07194/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,80 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Introduction

Recent years have witnessed growing interest in open-domain dialogue systems [@adiwardana2020towards; @zhang-etal-2020-dialogpt; @roller-etal-2021-recipes]. With the increasing availability of high-quality dialogue corpora [@li-etal-2017-dailydialog; @zhang-etal-2018-personalizing] and the advancement of neural architectures [@devlin2019bert; @radford2019language], learning-based dialogue systems are becoming possible. These applications call for dialogue technology capable of generating appropriate responses to users' prompts in a diverse range of scenarios, such as general chit-chat [@li-etal-2017-dailydialog], knowledge exchange [@gopalakrishnan2019topical], persona-based chat [@zhang-etal-2018-personalizing], and emotion disclosure [@rashkin-etal-2019-towards].

However, dialogue research relies heavily on the ability to evaluate system performance with automatic dialogue evaluation metrics (ADMs). Common natural language generation (NLG) metrics used in the dialogue system literature, such as BLEU [@papineni2002bleu] and ROUGE [@lin-2004-rouge], are unsuitable for the multi-domain dialogue evaluation task, as they are shown to correlate poorly with human judgements [@liu-etal-2016-evaluate] due to the one-to-many context-response mapping in dialogues [@zhao-etal-2017-learning] as well as the multi-faceted nature of dialogue evaluation [@mehri-eskenazi-2020-usr].

An alternative solution is to design model-based ADMs that explicitly learn to discriminate dialogue responses of varying quality. Lately, many model-based ADMs leveraging self-supervised learning have been proposed to address the weaknesses of the standard NLG metrics [@sai-etal-2020-improving; @ghazarian-etal-2019-better; @mehri-eskenazi-2020-usr; @huang-etal-2020-grade; @zhang-etal-2021-dscore]. While these ADMs have demonstrated strong correlations with human judgements, they lack a generalized skill to evaluate dialogues across multiple domains. For example, in Table [\[tab:tab1\]](#tab:tab1){reference-type="ref" reference="tab:tab1"}, DEB [@sai-etal-2020-improving] and GRADE [@huang-etal-2020-grade] are pretrained on the DailyDialog dataset [@li-etal-2017-dailydialog]. They perform well on the DailyDialog-Eval [@zhao-etal-2020-designing] benchmark, which contains responses from dialogue systems trained on chit-chat content. However, their performance drops significantly when assessed on the Topical-Eval [@mehri-eskenazi-2020-usr] benchmark, which is close in domain to TopicalChat [@gopalakrishnan2019topical] and contains dialogue responses from knowledge-grounded conversations. The reverse is true for USR [@mehri-eskenazi-2020-usr], which is pretrained on the TopicalChat dataset.

To design robust ADMs for the multi-domain dialogue evaluation task, we consider two research questions. (1) How can an ADM be equipped with a rating skill to discriminate responses of varying quality, i.e., the ability to assign a high score to relevant responses and a low score otherwise? (2) How can an ADM learn general knowledge across dialogue domains so as to generalize the evaluation skill? For the first question, the most direct and effective way is to learn from humans, i.e., the ADM can be trained on human-annotated dialogue data. As for the second question, the general knowledge can be learned from a large-scale multi-domain dialogue dataset. Ideally, if human annotations were available, an oracle multi-domain dialogue evaluator could be learned. However, performing large-scale human annotation is extremely expensive. Thus, we are motivated to explore semi-supervised learning for our task.

More specifically, we propose a multi-domain dialogue evaluation (MDD-Eval) framework under the self-training paradigm [@scudder1965probability; @yarowsky1995unsupervised], where a teacher model, trained on human-annotated dialogue evaluation data, creates pseudo labels for unlabeled dialogue data. The synthetically labeled data are then used to train a student model. To obtain the large-scale multi-domain unlabeled dialogue data, we leverage dialogue data augmentation techniques that have been successfully applied in the self-supervised learning of ADMs, such as random utterance selection [@tao2018ruber; @zhang-etal-2021-dscore], mask-and-fill [@donahue-etal-2020-enabling; @gupta-etal-2021-synthesizing], and back-translation [@sennrich-etal-2016-improving; @edunov-etal-2018-understanding; @sinha-etal-2020-learning]. In this way, we expect that the student model carries the rating skill of the teacher model and can generalize across domains after being adapted on a large-scale multi-domain dataset with pseudo labels.
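The teacher-student loop described above can be sketched minimally as follows. The names (`self_train`, `teacher`, `train_student`) and the binarization threshold are hypothetical stand-ins, not the actual MDD-Eval implementation.

```python
def self_train(teacher, train_student, unlabeled_pairs, threshold=0.5):
    """Teacher assigns pseudo labels to augmented (context, response) pairs;
    the student is then trained on the machine-annotated data."""
    pseudo_labeled = []
    for context, response in unlabeled_pairs:
        score = teacher(context, response)      # teacher's quality score in [0, 1]
        label = 1 if score >= threshold else 0  # binarize: relevant / irrelevant
        pseudo_labeled.append((context, response, label))
    return train_student(pseudo_labeled)

# Toy usage with trivial stand-ins for the teacher and the training step:
teacher = lambda c, r: 0.9 if r.startswith("yes") else 0.1
train_student = lambda data: data  # placeholder: just return the training set
data = self_train(teacher, train_student,
                  [("hi there", "yes, hello"), ("hi there", "banana")])
```

In the full framework, `unlabeled_pairs` would come from the augmentation techniques above (random selection, mask-and-fill, back-translation) applied across multiple dialogue domains.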
|
| 12 |
+
|
| 13 |
+
Overall, we make the following contributions:

- A model-based framework, named MDD-Eval, is proposed with a self-training scheme on augmented data. Its rating skill is learned from human-annotated data, and its cross-domain general knowledge is learned from machine-annotated data.

- We release a large-scale multi-domain dialogue dataset with machine annotations that facilitates ADM training. We name this dataset MDD-Data.

- MDD-Eval attains an absolute improvement of 7% over the state-of-the-art ADMs in terms of mean Spearman correlation over six dialogue evaluation benchmarks.

- MDD-Data, the MDD-Eval implementation, and pretrained checkpoints will be released to the public[^1]. This allows practitioners and researchers to use and adapt MDD-Eval for automatic evaluation of their dialogue systems.

# Method

In this section, we first define the multi-domain dialogue evaluation task (Section [3.1](#subsec:problem-formulation){reference-type="ref" reference="subsec:problem-formulation"}), then formulate the MDD-Eval framework in three steps: (a) We pretrain a teacher model (Section [3.2](#subsec:teacher-model){reference-type="ref" reference="subsec:teacher-model"}) on a human-annotated dataset, to learn the rating skill of distinguishing relevant responses from irrelevant ones. (b) We augment a large-scale multi-domain dataset for MDD-Eval self-training (Section [3.3](#subsec:dialog-augmentation){reference-type="ref" reference="subsec:dialog-augmentation"}). (c) We generalize the pretrained teacher model with the augmented data to derive a student model, which carries a rating skill that generalizes across domains (Section [3.4](#subsec:student-model){reference-type="ref" reference="subsec:student-model"}).

Formally, a dialogue context and the corresponding dialogue response are denoted as $c_i^j$ and $r_i^j$ respectively. $(c_i^j, r_i^j)$ is the $i^{th}$ data pair drawn from the $j^{th}$ dialogue evaluation benchmark $D^j$, where $j\in\{1,...,J\}$ and $i\in\{1,...,I\}$. There are $J$ domains, each of which has $I$ data pairs.

Our goal is to learn a metric, $M: (c_i^j, r_i^j) \rightarrow s_i^j$, where $s_i^j$ is the metric score that indicates the quality of $(c_i^j, r_i^j)$ as perceived by $M$. In addition, each $(c_i^j, r_i^j)$ is annotated by several human judges, each of whom provides a quality score on the Likert scale[^2] to indicate his or her perception of the quality of $(c_i^j, r_i^j)$. We denote the mean human score given to $(c_i^j, r_i^j)$ as $q_i^j$. Due to the multi-faceted nature of dialogue evaluation, quality can refer to language fluency, coherence, topic relevance, logical consistency, etc. Since the focus of our work is multi-domain rather than multi-dimensional evaluation, we fix the quality dimension to *response appropriateness* here.

To assess the performance of $M$ on $D^j$, the correlation between $S = \{s_1^j,\ldots,s_I^j\}$ and $Q = \{q_1^j,\ldots,q_I^j\}$ is calculated. We use $\rho_j$ to represent the correlation score on $D^j$; higher $\rho_j$ indicates better performance of the metric on $D^j$. In the multi-domain dialogue evaluation task, an effective $M$ should achieve good correlation scores across all ${J}$ domains. In other words, the desired $M$ should obtain a good average correlation $\tilde{\rho} = \frac{1}{J}\sum^{J}_{j=1}{\rho_j}$.

We first pretrain a model on human-annotated data in one particular domain, i.e., the teacher model $M_{teacher}$, defined by the parameters $\theta_{teacher}$. Given a dialogue context-response pair, $M_{teacher}$ should accurately determine the degree of relevance between the context and the corresponding response. To equip the teacher model with a solid rating skill, we rely on a high-quality human-annotated base dataset $D^b\in{D^J}$. Note that $D^b$ is from a single domain and of much smaller size than the data we would like to augment.

In dataset $D^b$, there are three categories of responses for a given context: random, adversarial and relevant. The relevant and adversarial responses are generated by human annotators. $M_{teacher}$ is trained on $D^b$ to classify a context-response pair into one of the three categories: $$\begin{equation}
\tilde{y}_i^b = f_{\theta_{teacher}}([c_i^b \circ r_i^b])
\label{eq:teacher_prediction}
\end{equation}$$ with the objective function: $$\begin{equation}
\underset{\theta_{teacher}}{\min}\frac{1}{|D^b|}\sum_{(c_i^b, r_i^b, y_i^b)\in{D^b}}{\mathcal{L}_{CE}(\tilde{y}_i^b, y_i^b)}
\end{equation}$$ where $\circ$ denotes the concatenation operation, $\tilde{y}_i^b$ is the predicted class, $y_i^b$ is the gold label for $(c_i^b, r_i^b)$ and $\mathcal{L}_{CE}$ is the cross-entropy loss.

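As a self-contained illustration of the teacher's three-way objective, the sketch below trains a single softmax layer with the cross-entropy loss on toy feature vectors; the feature construction, sizes and learning rate are invented stand-ins for the paper's transformer encoding of $[c \circ r]$.

```python
import numpy as np

# Hypothetical stand-in for the teacher classifier: the paper trains a
# transformer encoder on [context ∘ response]; here a single softmax layer
# over toy feature vectors illustrates the 3-way objective
# (random / adversarial / relevant) optimized with L_CE.
rng = np.random.default_rng(0)
NUM_CLASSES, DIM, N = 3, 16, 90

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    return float(-np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean())

# Toy "encoded [context ∘ response]" features with gold labels y ∈ {0, 1, 2};
# a bump on coordinate y makes the classes separable.
X = rng.normal(size=(N, DIM))
y = rng.integers(0, NUM_CLASSES, size=N)
X[np.arange(N), y] += 2.0

W = np.zeros((DIM, NUM_CLASSES))
for _ in range(200):                      # plain gradient descent on L_CE
    P = softmax(X @ W)
    G = P.copy()
    G[np.arange(N), y] -= 1.0             # dL/dlogits for cross-entropy
    W -= 0.1 * (X.T @ G) / N

loss = cross_entropy(softmax(X @ W), y)   # well below the initial ln(3)
```

The same objective transfers unchanged to a transformer encoder; only the feature extractor changes.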
<figure id="context-response-example" data-latex-placement="t!">
<embed src="example_1.pdf" style="width:40.0%" />
<figcaption>An example of a dialogue context with three candidate responses. <span class="math inline"><em>M</em><sub><em>t</em><em>e</em><em>a</em><em>c</em><em>h</em><em>e</em><em>r</em></sub></span> is expected to annotate the context-response pairs as either relevant, adversarial or random.</figcaption>
</figure>

$M_{teacher}$ plays three key roles: (1) providing pseudo labels for unlabeled context-response pairs, $(c_i^{\text{*}}, r_i^{\text{*}})$[^3], which are obtained with different dialogue data augmentation techniques; (2) facilitating the data selection process, whereby false negatives, as well as adversarial or random samples that $M_{teacher}$ classifies with low confidence, are removed; (3) serving as a baseline in the evaluation task.

To generalize the teacher model across domains, we collect a multi-domain dataset, denoted as $D^{\text{*}}$, that contains a large amount of unlabeled context-response pairs. The unlabeled pairs are automatically annotated by $M_{teacher}$ in the same way as $D^b$. An example of a dialogue context with three candidate responses for annotation is presented in Figure [1](#context-response-example){reference-type="ref" reference="context-response-example"}. To construct such a dataset, we leverage the following dialogue data augmentation techniques:

**Syntactic Perturbation** Motivated by [@sinha-etal-2020-learning], we consider three variants of perturbation at the syntax level: (1) word-drop (a random portion of tokens in the response is dropped); (2) word-shuffle (the ordering of tokens in the response is randomly shuffled); (3) word-repeat (a random portion of tokens in the response is repeated multiple times). The syntactic perturbations are intended to simulate the erroneous behaviours of some generative models in producing unnatural dialogue responses.

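The three perturbations can be sketched as simple token-level operations; the drop/repeat rates and the seeding below are illustrative choices, not the paper's exact settings.

```python
import random

# Illustrative sketches of word-drop, word-shuffle and word-repeat.
def word_drop(tokens, rate=0.3, seed=0):
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > rate]
    return kept or tokens[:1]          # never return an empty response

def word_shuffle(tokens, seed=0):
    rng = random.Random(seed)
    out = list(tokens)
    rng.shuffle(out)                   # permute token order in place
    return out

def word_repeat(tokens, rate=0.3, times=2, seed=0):
    rng = random.Random(seed)
    out = []
    for t in tokens:                   # duplicate a random subset of tokens
        out.extend([t] * (times if rng.random() < rate else 1))
    return out

resp = "i had a great time at the beach".split()
```

Each function returns an unnatural variant of `resp` that can be pseudo-labeled by the teacher.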
**Back-Translation** Back-translation [@sennrich-etal-2016-improving; @edunov-etal-2018-understanding] augments a response by generating its syntactic variants. In practice, we adopt the pretrained WMT'19 English-German and German-English ensemble models to perform back-translation.

**Generative Model Output** State-of-the-art dialogue generators, such as DialoGPT [@zhang-etal-2020-dialogpt] and BlenderBot [@roller-etal-2021-recipes], have been pretrained on large amounts of conversational data and demonstrate a strong capability of generating fluent and on-topic responses. They help generate semantic variants of a response conditioned on the respective dialogue contexts.

**Random Utterance Selection** Random utterance selection is a simple and effective strategy that has been widely adopted in the self-supervised learning of dialogue evaluation metrics [@mehri-eskenazi-2020-usr; @huang-etal-2020-grade; @sai-etal-2020-improving] to introduce irrelevant responses w.r.t. a dialogue context. Given a dialogue context, three variants are adopted: (1) randomly sample a response from a different dialogue; (2) randomly sample a response from the entire pool of responses produced by the generative models; (3) randomly sample a response from the entire pool of responses obtained via back-translation.

**Mask-and-fill** The above-mentioned techniques tend to produce response candidates for the relevant and random classes. The mask-and-fill strategy is adopted to automatically construct candidates for the adversarial class. Specifically, we adopt the Infilling by Language Modeling (ILM) framework [@donahue-etal-2020-enabling] to perform mask-and-fill response augmentation. The process is as follows: given a context-response pair extracted from a natural human-human dialogue, one or a few contiguous tokens in the response are randomly replaced by the $[blank]$ placeholder. The modified response is input to the pretrained ILM model, which generates tokens in an autoregressive manner. Subsequently, the $[blank]$ placeholder is substituted with the generated tokens to obtain a reconstructed view of the original response. The reconstructed response serves as an adversarial sample w.r.t. the dialogue context.

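The masking step of the mask-and-fill strategy can be sketched as follows; the ILM generation itself is not reproduced here, and the span-length choices are illustrative.

```python
import random

# Sketch of the masking step only: a contiguous span of the response is
# replaced by a [blank] placeholder, which the pretrained ILM model would
# then fill in autoregressively. Span lengths (1-3 tokens) are illustrative.
def mask_span(tokens, seed=0):
    rng = random.Random(seed)
    start = rng.randrange(len(tokens))
    length = rng.randint(1, max(1, min(3, len(tokens) - start)))
    return tokens[:start] + ["[blank]"] + tokens[start + length:]

masked = mask_span("my cat loves to chase the laser pointer".split())
```

Feeding `masked` to the infilling model and substituting its output back yields the adversarial candidate.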
After obtaining the large number of context-response pairs, we apply the pretrained $M_{teacher}$ to provide soft pseudo labels for all the pairs. A soft pseudo label is a probability distribution over the three classes (random, adversarial and relevant). Then, a filtering process is applied to improve the quality of the pseudo-labeled $D^{\text{*}}$: a confidence threshold of 70% excludes pairs classified by $M_{teacher}$ with low confidence. Empirical evidence suggests that the 70% threshold provides a good balance between the quality and quantity of the augmented data. Within $D^{\text{*}}$, the relevant set consists of filtered pairs obtained with back-translation and generative models, in addition to the original context-response pairs extracted from dialogues of different dialogue corpora. The adversarial set mainly includes filtered pairs constructed via the syntactic perturbation and mask-and-fill strategies. For the random set, the context-response pairs are mainly obtained with random utterance selection.

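The confidence-based filtering step can be sketched as below; the example pairs and their probability values are invented for illustration.

```python
# Keep a machine-labelled pair only if the teacher's maximum class
# probability reaches the 70% threshold used in the paper.
THRESHOLD = 0.7

def filter_pairs(pairs):
    kept = []
    for context, response, probs in pairs:
        label = max(probs, key=probs.get)          # argmax class
        if probs[label] >= THRESHOLD:
            kept.append((context, response, label))
    return kept

pairs = [
    ("how was your trip?", "it was wonderful",
     {"relevant": 0.92, "adversarial": 0.05, "random": 0.03}),
    ("how was your trip?", "the stock market fell",
     {"relevant": 0.30, "adversarial": 0.35, "random": 0.35}),  # ambiguous
]
kept = filter_pairs(pairs)   # only the confidently labelled pair survives
```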
<figure id="student-model-objective" data-latex-placement="t">
<embed src="figure2.pdf" />
<figcaption>The training process of <span class="math inline"><em>M</em><sub><em>s</em><em>t</em><em>u</em><em>d</em><em>e</em><em>n</em><em>t</em></sub></span>. <span class="math inline">ℒ<sub><em>T</em><em>o</em><em>t</em><em>a</em><em>l</em></sub></span> is the sum of three components: (1) The cross entropy loss <span class="math inline">ℒ<sub><em>C</em><em>E</em></sub></span>, which is computed between <span class="math inline"><em>ỹ</em><sub><em>i</em></sub><sup>*</sup></span> generated by <span class="math inline"><em>M</em><sub><em>t</em><em>e</em><em>a</em><em>c</em><em>h</em><em>e</em><em>r</em></sub></span> and the prediction by <span class="math inline"><em>M</em><sub><em>s</em><em>t</em><em>u</em><em>d</em><em>e</em><em>n</em><em>t</em></sub></span> for an input pair <span class="math inline">(<em>c</em><sub><em>i</em></sub><sup>*</sup>, <em>r</em><sub><em>i</em></sub><sup>*</sup>)</span>. (2) The self-supervised MLM loss <span class="math inline">ℒ<sub><em>M</em><em>L</em><em>M</em></sub></span>, for domain adaptation. (3) The KL Loss <span class="math inline">ℒ<sub><em>K</em><em>L</em></sub></span>, for consistency regularization.</figcaption>
</figure>

Once $D^{\text{*}}$ is ready, we can learn a student model, $M_{student}$ parameterized by $\theta_{student}$, on $D^{\text{*}}$ by performing the following classification task: $$\begin{equation}
x_i^{\text{*}} = f_{\theta_{student}}([c_i^{\text{*}} \circ r_i^{\text{*}}])
\label{eq:student_prediction}
\end{equation}$$ Figure [2](#student-model-objective){reference-type="ref" reference="student-model-objective"} is a graphical illustration of the training objective of $M_{student}$ and the equation is as follows: $$\begin{equation}
\begin{aligned}
& \underset{\theta_{student}}{\min}\frac{1}{|D^{\text{*}}|} \sum_{(c_i^{\text{*}}, r_i^{\text{*}}, \tilde{y}_i^{\text{*}})\in{D^{\text{*}}}} \big[ \mathcal{L}_{CE}(x_i^{\text{*}}, \tilde{y}_i^{\text{*}}) + \\
& \mathcal{L}_{KL}(x_i^{\text{*}}, \hat{x}_i^{\text{*}}) + \mathcal{L}_{MLM}([c_i^{\text{*}} \circ r_i^{\text{*}}]) \big] \\
\end{aligned}
\label{eq:4}
\end{equation}$$ where $\mathcal{L}_{CE}$ is the cross-entropy loss, $\mathcal{L}_{KL}$ is the KL divergence and $\mathcal{L}_{MLM}$ is the self-supervised masked language modeling (MLM) loss. $x_i^{\text{*}}$ and $\tilde{y}_i^{\text{*}}$ are the logits output by $M_{student}$ and the pseudo label generated by the pretrained $M_{teacher}$, respectively, given the input pair $(c_i^{\text{*}}, r_i^{\text{*}})$.

$\mathcal{L}_{KL}$ is introduced to enforce consistency regularization, with which $M_{student}$ becomes less sensitive to noise and hence smoother w.r.t. perturbations in the input space [@xie2020unsupervised]. We denote the noisy version of $r_i^{\text{*}}$ after noise injection as $\hat{r}_i^{\text{*}}$. In the practical implementation, we follow [@he2019revisiting] to generate $\hat{r}_i^{\text{*}}$ from $r_i^{\text{*}}$. $\hat{x}_i^{\text{*}}$ is the corresponding logits output by $M_{student}$ given $(c_i^{\text{*}}, \hat{r}_i^{\text{*}})$. The KL divergence between the respective post-softmax probability distributions of $x_i^{\text{*}}$ and $\hat{x}_i^{\text{*}}$ is minimized during training.

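A minimal sketch of the consistency term, assuming invented logits for a clean response and its noised version:

```python
import numpy as np

# KL divergence between the student's post-softmax distributions for a
# response and its noised variant; the logit values are illustrative.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl_divergence(p, q):
    return float(np.sum(p * np.log(p / q)))

x_clean = np.array([2.0, 0.5, -1.0])   # logits for (c, r)
x_noisy = np.array([1.8, 0.7, -0.9])   # logits for (c, r̂)
l_kl = kl_divergence(softmax(x_clean), softmax(x_noisy))
```

Minimizing `l_kl` pulls the two distributions together, which is what makes the student smooth under input perturbations.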
The last term, $\mathcal{L}_{MLM}$, is intended to help $M_{student}$ extract additional domain-specific knowledge so as to better adapt to the multi-domain synthetic dataset. The MLM implementation follows the standard BERT [@devlin2019bert] practice, whereby a random portion of tokens in the concatenated sequence $[c_i^{\text{*}} \circ r_i^{\text{*}}]$ is masked and $M_{student}$ is expected to predict the masked tokens.

The learned student model serves as the backbone of MDD-Eval for performing the multi-domain dialogue evaluation task; it derives the metric score $s_i^j$ for a given context-response pair $(c_i^j, r_i^j)\in{D^j}$ as mentioned in Section [3.1](#subsec:problem-formulation){reference-type="ref" reference="subsec:problem-formulation"}. We formulate the scoring process of $M_{student}$ as follows: $$\begin{equation}
s_i^j = P(\tilde{y}_i^j=\textrm{relevant}|(c_i^j,r_i^j))
\end{equation}$$ which is the post-softmax probability w.r.t. the relevant class output by $M_{student}$ given the input $(c_i^j,r_i^j)$.

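A minimal sketch of this scoring rule, with invented logits and the class ordering as an assumption:

```python
import numpy as np

# The metric score is the post-softmax probability of the "relevant" class.
# The class ordering and logit values are illustrative.
CLASSES = ["random", "adversarial", "relevant"]

def score(logits):
    e = np.exp(logits - logits.max())
    probs = e / e.sum()
    return float(probs[CLASSES.index("relevant")])

s = score(np.array([-1.0, 0.2, 2.3]))   # high logit on "relevant"
```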
2201.02767/main_diagram/main_diagram.drawio
ADDED

2201.02767/paper_text/intro_method.md
ADDED

# Method

We first briefly review the attention mechanism in transformers in Section [3.1](#page-2-0) and then formulate our quadtree attention in Section [3.2](#page-3-0).

<span id="page-3-1"></span>

Figure 2: Illustration of quadtree message aggregation for a query token $q_i$. (a) shows the token pyramids and the involved key/value tokens at each level. Attention scores are marked in the first two levels for clarity, and the top K scores are highlighted in red. (b) shows message aggregation for the QuadTree-A architecture: the message is assembled from different levels along a quadtree. (c) shows message aggregation for the QuadTree-B architecture: the message is collected from overlapping regions at different levels.

Vision transformers have shown great success in many tasks. At the heart of a transformer is the attention module, which can capture long-range information between feature embeddings. Given two image embeddings $\mathbf{X}_1$ and $\mathbf{X}_2$, the attention module passes information between them. Self-attention is the case where $\mathbf{X}_1$ and $\mathbf{X}_2$ are the same, while cross attention covers the more general situation where $\mathbf{X}_1$ and $\mathbf{X}_2$ are different. The module first generates the query $\mathbf{Q}$, key $\mathbf{K}$, and value $\mathbf{V}$ by the following equations,

$$\mathbf{Q} = \mathbf{W}_q \mathbf{X}_1,\qquad \mathbf{K} = \mathbf{W}_k \mathbf{X}_2,\qquad \mathbf{V} = \mathbf{W}_v \mathbf{X}_2,$$

where $\mathbf{W}_q$, $\mathbf{W}_k$ and $\mathbf{W}_v$ are learnable parameters. Then, it performs message aggregation by computing the attention scores between query and key as follows,

<span id="page-3-2"></span>

$$\mathbf{Y} = \operatorname{softmax}(\frac{\mathbf{Q}\mathbf{K}^T}{\sqrt{C}})\mathbf{V},\tag{1}$$

where $C$ is the embedding channel dimension. The above process has $O(N^2)$ computational complexity, where $N$ is the number of image patches in a vision transformer. This quadratic complexity hinders transformers from being applied to tasks requiring high-resolution output. To address this problem, PVT (Wang et al. (2021c)) downsamples $\mathbf{K}$ and $\mathbf{V}$, while Swin Transformer (Liu et al. (2021)) limits the attention computation to local windows.

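A minimal NumPy rendering of Equation 1 makes the quadratic cost concrete: the score matrix alone is $N \times N$. Shapes below are illustrative.

```python
import numpy as np

# Dense attention of Equation 1; the (N, N) score matrix is what makes
# the cost quadratic in the number of tokens.
def attention(Q, K, V):
    C = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(C)                        # (N, N)
    scores = scores - scores.max(axis=-1, keepdims=True) # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
N, C = 64, 32
Y = attention(rng.normal(size=(N, C)), rng.normal(size=(N, C)),
              rng.normal(size=(N, C)))
```

Doubling the side of a feature map quadruples $N$ and hence multiplies this score-matrix cost by sixteen, which is the bottleneck quadtree attention removes.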
In order to reduce the computational cost of vision transformers, we present QuadTree Attention. As the name implies, we borrow the idea from quadtrees, which are often used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions. Quadtree attention computes attention in a coarse-to-fine manner: according to the results at the coarse level, irrelevant image regions are skipped quickly at the fine level. This design achieves high efficiency with little information loss.

As in regular transformers, we first linearly project $X_1$ and $X_2$ to the query, key, and value tokens. To facilitate fast attention computation, we construct $L$-level pyramids for query $\mathbf{Q}$, key $\mathbf{K}$, and value $\mathbf{V}$ tokens by downsampling feature maps. For query and key tokens, we use average pooling layers. For value tokens, average pooling is used for cross attention tasks, and convolution-normalization-activation layers with stride 2 are used for self attention tasks unless otherwise stated. As shown in Figure 1, after computing attention scores at the coarse level, for each query token we select the top K key tokens with the highest attention scores. At the fine level, query sub-tokens only need to be evaluated against those key sub-tokens that correspond to one of the selected K key tokens at the coarse level. This process is repeated until the finest level. After computing the attention scores, we aggregate messages at all levels, for which we design two architectures, named $\mathbf{QuadTree-A}$ and $\mathbf{QuadTree-B}$.

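The coarse-level selection step can be sketched as below; the token counts, K, and the four-children indexing convention are illustrative assumptions about the quadtree layout, not the paper's implementation.

```python
import numpy as np

# For each query token, compute coarse-level attention scores and keep the
# top-K key tokens; only their four quadtree children are evaluated at the
# next finer level. Sizes and K are illustrative.
rng = np.random.default_rng(0)
N_coarse, C, K = 16, 8, 4

q = rng.normal(size=(N_coarse, C))
k = rng.normal(size=(N_coarse, C))
scores = q @ k.T / np.sqrt(C)                    # coarse-level scores
topk = np.argsort(-scores, axis=-1)[:, :K]       # top-K key indices per query

# Assumed layout: coarse token j covers the four fine tokens 4j .. 4j+3
# (quadtree subdivision), so each query's fine-level candidate set is:
fine_candidates = (4 * topk[:, :, None] + np.arange(4)).reshape(N_coarse, -1)
```

Each query thus attends to `4 * K` fine candidates instead of all 64 fine tokens, which is the source of the linear overall complexity.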
**QuadTree-A.** Considering the *i*-th query token $\mathbf{q}_i$ at the finest level, we need to compute its received message $\mathbf{m}_i$ from all key tokens. This design assembles the full message by collecting partial messages from different pyramid levels. Specifically,

$$\mathbf{m}_i = \sum_{1 \le l \le L} \mathbf{m}_i^l,\tag{2}$$

where $\mathbf{m}_i^l$ indicates the partial message evaluated at level $l$. The partial message $\mathbf{m}_i^l$ assembles messages at the $l$-th level from tokens within the region $\Omega_i^l$, which will be defined later. In this way, messages from less related regions are computed at coarse levels, while messages from highly related regions are computed at fine levels. This scheme is illustrated in Figure 2 (b): the message $\mathbf{m}_i$ is generated by assembling three partial messages computed from the differently colored image regions, which collectively cover the entire image space. The green region is the most relevant and is evaluated at the finest level, while the red region is the most irrelevant and is evaluated at the coarsest level. The region $\Omega_i^l$ can be defined as $\Gamma_i^l - \Gamma_i^{l+1}$, where the image region $\Gamma_i^l$ corresponds to the top K tokens at level $l-1$. The regions $\Gamma_i^l$ are illustrated in Figure 2 (c); the region $\Gamma_i^1$ covers the entire image.

The partial messages are computed as,

$$\mathbf{m}_{i}^{l} = \sum_{j \in \Omega_{i}^{l}} s_{ij}^{l} \mathbf{v}_{j}^{l},\tag{3}$$

where $s_{ij}^l$ is the attention score between the query and key tokens at level $l$. Figure 2 (a) highlights the query and key tokens involved in computing $\mathbf{m}_i^l$ with the same color as $\Omega_i^l$. Attention scores are computed recursively,

$$s_{ij}^{l} = s_{ij}^{l-1} t_{ij}^{l},\tag{4}$$

where $s_{ij}^{l-1}$ is the score of the corresponding parent query and key tokens and $s_{ij}^1=1$. The tentative attention score $t_{ij}^l$ is evaluated according to Equation 1 among the $2\times 2$ tokens sharing the same parent query token. For QuadTree-A, we use average pooling layers to downsample all query, key and value tokens.

**QuadTree-B.** The attention scores $s_{ij}^l$ in **QuadTree-A** are computed recursively across all levels, which makes the scores smaller at finer levels and reduces the contribution of fine image features. Furthermore, fine-level scores are strongly affected by inaccuracies at coarse levels. We therefore design a different scheme, referred to as **QuadTree-B** in this paper, to address this problem. Specifically, we compute $\mathbf{m}_i$ as a weighted average of the partial messages from different levels,

$$\mathbf{m}_i = \sum_{1 \le l \le L} w_i^l \mathbf{m}_i^l, \tag{5}$$

where $w_i^l$ is a learned weight. As shown in Figure 2 (c), the partial messages here overlap with each other and are computed as,

$$\mathbf{m}_{i}^{l} = \operatorname{Attention}(\mathbf{q}_{i}^{l}, \mathbf{K}_{\Gamma_{i}^{l}}^{l}, \mathbf{V}_{\Gamma_{i}^{l}}^{l}), \tag{6}$$

where Attention is the attention message computation of Equation 1. Here, $\mathbf{K}_{\Gamma_i^l}^l$ and $\mathbf{V}_{\Gamma_i^l}^l$ are matrices formed by stacking all keys and values within the region $\Gamma_i^l$.

Both **QuadTree-A** and **QuadTree-B** involve only sparse attention evaluation. Thus, our method largely reduces the computational complexity. As analyzed in Appendix A.1, the computational complexity of our quadtree attention is linear in the number of tokens.

<span id="page-5-0"></span>

| | Method | AUC@5° | AUC@10° | AUC@20° |
|------------|----------------------------------------------------|--------|---------|---------|
| Others | ContextDesc + SGMNet (Chen et al. (2021)) | 15.4 | 32.3 | 48.8 |
| | SuperPoint + OANet (Zhang et al. (2019b)) | 11.8 | 26.9 | 43.9 |
| | SuperPoint + SuperGlue (Sarlin et al. (2020)) | 16.2 | 33.8 | 51.9 |
| | DRC-Net (Li et al. (2020)) | 7.7 | 17.9 | 30.5 |
| LoFTR-lite | Linear Att. (LoFTR) (Katharopoulos et al. (2020)) | 16.1 | 32.6 | 49.0 |
| | PVT (Wang et al., 2021c) | 16.2 | 32.7 | 49.2 |
| | QuadTree-A (ours, K = 8) | 16.8 | 33.4 | 50.5 |
| | QuadTree-B (ours, K = 8) | 17.4 | 34.4 | 51.6 |
| LoFTR | Linear Att. (LoFTR)† (Sun et al. (2021), 64 GPUs) | 22.1 | 40.8 | 57.6 |
| | Linear Att. (LoFTR) (Katharopoulos et al. (2020)) | 21.1 | 39.5 | 56.6 |
| | QuadTree-B (ours, K = 8) | 23.0 | 41.7 | 58.5 |
| | QuadTree-B∗ (ours, K = 16) | 24.9 | 44.7 | 61.6 |

Table 1: Results on feature matching. The symbol † indicates results cited from [\(Sun et al.,](#page-10-4) [2021\)](#page-10-4), where the model is trained with a batch size of 64 on 64 GPUs (a more favourable setting than ours). The symbol ∗ indicates that we use the ViT [\(Dosovitskiy et al.,](#page-9-0) [2020\)](#page-9-0)-like architecture for the transformer blocks. For PVT and our method, we replace the original linear attention in LoFTR with the corresponding attention.

**Multiscale position encoding.** The computation of attention is permutation-invariant to tokens, and thus positional information is missing. To address this problem, we adopt the locally-enhanced positional encoding (LePE) [\(Dong et al.,](#page-9-10) [2021\)](#page-9-10) at each level to design a multiscale position encoding. Specifically, for level $l$, we apply unshared depth-wise convolution layers to the value tokens $\mathbf{V}^l$ to encode the positional information.

2202.09265/main_diagram/main_diagram.drawio
ADDED

2202.09265/main_diagram/main_diagram.pdf
ADDED

2202.09265/paper_text/intro_method.md
ADDED

# Method

Consider a set of $N_{\rm tr}$ demonstrations, defined as $\mathcal{T}:=\{\{\mathbf{Q}^1,\mathbf{I}^1\},\dots,\{\mathbf{Q}^{N_{\rm tr}},\mathbf{I}^{N_{\rm tr}}\}\}$, where $\mathbf{Q}^n,\ n=1,\dots,N_{\rm tr}$, are the joint-space trajectories and $\mathbf{I}^n$ are RGB-D images taken from the corresponding robot's workspace. For a single joint, we define a trajectory as the ordered set $\mathbf{q}:=\{q_t\}_{t=1,\dots,T}$, where $q_t\in\mathbb{R}$ is the joint position at sample $t$, and $\mathbf{Q}:=\{\mathbf{q}_1,\dots,\mathbf{q}_{N_{\rm joint}}\}$, where $N_{\rm joint}$ is the number of joints of the manipulator.

**Probabilistic Movement Primitives (ProMP)** In order to define a distribution over trajectories, we first model a trajectory by adding an observation uncertainty to the following deterministic model (Paraschos et al. 2013):

<span id="page-2-0"></span>

$$q = \sum_{i=1}^{N_{\text{bas}}} \theta_i \psi_i(z(t)) + \epsilon_q \tag{1}$$

where $\psi_i$ are basis functions (usually Gaussian (Bishop 2006)) evaluated at $z(t)$. Here, $z$ is a phase function that allows time modulation; if no modulation is required, then $z(t)=t/f$, where $f$ is the sampling frequency. $\theta_i \in \mathbb{R}$ are weights, and $\epsilon_q$ adds zero-mean Gaussian observation noise with variance $\Sigma_q$.

For stroke-like movements, the following normalised Gaussian basis functions are used:

$$\psi_i(t) := \frac{b_i(z(t))}{\sum_{j=1}^{N_{\text{bas}}} b_j(z(t))} \tag{2}$$

where

$$b_i(z(t)) := \exp\left(-\frac{(z(t) - c_i)^2}{2h}\right) \tag{3}$$

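A minimal sketch of eqs. (1)-(3) in NumPy; the choices of $N_{\rm bas}$, $h$, the basis centres and the weight vector are illustrative.

```python
import numpy as np

# Normalised Gaussian basis functions on a unit phase, and a noise-free
# trajectory synthesised from a weight vector Theta. Sizes are illustrative.
T, N_bas, h = 100, 10, 0.01
z = np.linspace(0, 1, T)                 # phase z(t) = t / f
centers = np.linspace(0, 1, N_bas)       # basis centres c_i (assumed layout)

b = np.exp(-(z[:, None] - centers[None, :]) ** 2 / (2 * h))   # eq. (3)
Psi = b / b.sum(axis=1, keepdims=True)                        # eq. (2)

Theta = np.sin(2 * np.pi * centers)      # toy weight vector
q = Psi @ Theta                          # eq. (1) without the noise term
```

Because the basis is normalised, each row of `Psi` sums to one, so `q` is a convex combination of the weights at every sample.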
We can also write eq. (1) in matrix form, as follows:

<span id="page-2-3"></span>

$$\mathbf{q}_t = \mathbf{\Psi}_t^T \mathbf{\Theta} + \epsilon_q \tag{4}$$

where $\Psi_t := (\psi_1(z(t)), \dots, \psi_{N_{\mathrm{bas}}}(z(t))) \in \mathbb{R}^{N_{\mathrm{bas}} \times 1}$ and $\Theta := (\theta_1, \dots, \theta_{N_{\mathrm{bas}}}) \in \mathbb{R}^{N_{\mathrm{bas}} \times 1}$; we also define $\Omega := (\Theta_1, \dots, \Theta_{N_{\mathrm{joint}}}) \in \mathbb{R}^{N_{\mathrm{bas}}N_{\mathrm{joint}} \times 1}$ and $\Phi := [\Psi_1, \dots, \Psi_T]^T \in \mathbb{R}^{T \times N_{\mathrm{bas}}}$.

From eq. (1), it follows that the probability of observing $q_t$ is given by:

$$p(q_t|\mathbf{\Theta}) = \mathcal{N}\left(q_t \mid \mathbf{\Psi}_t^T \mathbf{\Theta}, \mathbf{\Sigma}_q\right) \tag{5}$$

Since $\Sigma_q$ is the same for every time step, the values $q_t$ are independent and identically distributed (i.i.d.). Hence, the probability of observing a trajectory $\mathbf{q}$ is given by:

$$p(\mathbf{q}|\mathbf{\Theta}) := \prod_{t=1}^{T} p(q_t|\mathbf{\Theta}) \tag{6}$$

However, since the parameters $\Theta$ are to be learnt from data, we also assume that they are drawn from a distribution $\Theta \sim p(\Theta|\rho) = \mathcal{N}(\Theta|\mu_{\Theta}, \Sigma_{\Theta})$. We would therefore like a predictive distribution of $q_t$ which does not depend on $\Theta$, but on $\rho:=(\mu_{\Theta},\Sigma_{\Theta})$. This is done by marginalising $\Theta$ out of the distribution as follows:

$$\begin{aligned}
p(q_t|\rho) &= \int \mathcal{N}\left(q_t \mid \boldsymbol{\Psi}_t^T \boldsymbol{\Theta}, \, \boldsymbol{\Sigma}_q\right) \mathcal{N}\left(\boldsymbol{\Theta} \mid \boldsymbol{\mu}_{\boldsymbol{\Theta}}, \, \boldsymbol{\Sigma}_{\boldsymbol{\Theta}}\right) d\boldsymbol{\Theta} \\
&= \mathcal{N}\left(q_t \mid \boldsymbol{\Psi}_t^T \boldsymbol{\mu}_{\boldsymbol{\Theta}}, \, \boldsymbol{\Sigma}_q + \boldsymbol{\Psi}_t^T \boldsymbol{\Sigma}_{\boldsymbol{\Theta}} \boldsymbol{\Psi}_t\right)
\end{aligned} \tag{7}$$

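The marginal in eq. (7) can be evaluated numerically; all quantities below are invented for illustration.

```python
import numpy as np

# With Theta ~ N(mu, Sigma_Theta), the marginal of q_t is Gaussian with
# mean Psi_t^T mu and variance Sigma_q + Psi_t^T Sigma_Theta Psi_t.
rng = np.random.default_rng(0)
N_bas = 5
Psi_t = rng.random(N_bas)              # basis activations at one time step
mu = rng.normal(size=N_bas)            # mean of the weight distribution
A = rng.normal(size=(N_bas, N_bas))
Sigma_Theta = A @ A.T                  # an arbitrary SPD covariance
Sigma_q = 0.01                         # observation-noise variance

mean_qt = float(Psi_t @ mu)
var_qt = float(Sigma_q + Psi_t @ Sigma_Theta @ Psi_t)
```

The marginal variance is always at least the observation noise `Sigma_q`, with the extra term reflecting uncertainty in the learned weights.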
**deep-MP weights learning:** The weights of ProMP models are conventionally learned from demonstrations and can later be adapted to different trajectory reproduction needs, e.g. (1) the initial/goal points of the desired trajectory are set, or (2) some via points are determined based on the problem constraints. Computing the via/start/goal points from visual sensory information of the robot's workspace requires hand-designed, task- or workspace-specific features. This is effective and handy in some robotic tasks, like simple pick-and-place, but it is too complex for tasks such as breast palpation, in which the task trajectory and the geometry of the breast are related. The two following deep-MP models learn the relation between visual sensory information and joint trajectories.

(deep-MP): Instead of learning the weights of the ProMP, a deep model— that can be a CNN, FC or PointNet model—captures the correlation between the visual sensory information and ProMP weights as per eq. (8). The algorithm is described in Alg. 1. Moreover, a schematic of the algorithm is shown in Fig. 2 in the green block at top right).
For a neural network similar to that of Fig. 2, the network's behaviour can be described by the following:
<span id="page-2-1"></span>
$$\mathbf{\Theta}_k = h_k(\mathbf{W}_k, \mathbf{I}_k, \sigma_k) + v_k \tag{8}$$
Equation (8), known as the observation equation, states that the network's target vector $\Theta_k$ is equivalent to a nonlinear
**Input**: NN architecture h, ProMP basis functions $\Phi$, image I, training-set trajectories $\mathbf{q}$, activation function $\sigma$ **Output**: NN weights $\mathbf{W}$, predicted trajectory $\hat{\mathbf{q}}$ **Note**: This pseudocode is for a single joint trajectory $\hat{\mathbf{q}}$. Generalising it to all joints $\hat{\mathbf{Q}}$ is straightforward.
```
1:  Dataset: \mathcal{T} \leftarrow \{\mathbf{q}, \mathbf{I}\}_{1,...,N_{\mathrm{tr}}}
2:  InitialiseDeepModel: \hat{\mathbf{\Theta}} \leftarrow h(\mathbf{W}, \mathbf{I}, \sigma) (eq. (8));
      either CNN (2-D) or FCN (1-D), as per Fig. 4
3:  InitialiseProMP: \hat{\mathbf{q}} \leftarrow \mathbf{\Phi}^T \hat{\mathbf{\Theta}} (eq. (4))
4:  RMSE: e \leftarrow \|\hat{\mathbf{q}} - \mathbf{q}\|
5:  while (e > \epsilon) do
6:      for all \{\mathbf{q}, \mathbf{I}\} \in \mathcal{T} do
7:          ForwardPropagation: \hat{\mathbf{\Theta}}_k = h(\mathbf{W}_k, \mathbf{I}_k, \sigma)
8:          ForwardProMP: \hat{\mathbf{q}}_k = \mathbf{\Phi}^T \hat{\mathbf{\Theta}}_k (eq. (4))
9:          RMSEJointLoss: e_k = e_{k-1} + \|\hat{\mathbf{q}}_k - \mathbf{q}_k\|
10:         Backpropagation: \mathbf{W}_{k+1} \leftarrow \{\mathbf{W}_k, \partial e_k / \partial \mathbf{W}_k\}
11:     end for
12: end while
13: deep-MP: \hat{\mathbf{q}} = h(\mathbf{\Phi}, \mathbf{W}, \mathbf{I}, \sigma)
14: end
```
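The forward path of the algorithm above (steps 7–8) is simply a learned feature-to-weights map followed by a linear basis-function decoder. A minimal numpy sketch under assumed shapes, where the deep model h is replaced by a random linear map purely for illustration (a trained CNN/FC/PointNet would go in its place):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_basis = 100, 8            # trajectory length, number of ProMP basis functions

# Gaussian basis matrix Phi (T x n_basis), evaluated on a normalised phase
phase = np.linspace(0.0, 1.0, T)
centres = np.linspace(0.0, 1.0, n_basis)
Phi = np.exp(-0.5 * ((phase[:, None] - centres[None, :]) / 0.1) ** 2)
Phi /= Phi.sum(axis=1, keepdims=True)   # normalise activations per time step

# Stand-in for h(W, I, sigma): a linear map from an (assumed) bottleneck
# feature vector of the image to the ProMP weights.
z = rng.normal(size=32)                 # hypothetical image feature vector
W = 0.1 * rng.normal(size=(n_basis, 32))
theta_hat = W @ z                       # step 7: predicted ProMP weights

q_hat = Phi @ theta_hat                 # step 8: predicted joint trajectory
assert q_hat.shape == (T,)
```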
<span id="page-3-0"></span>
Figure 2: The blue box contains three NN models learning the weights of the ProMP. RGB data is passed through an autoencoder, producing a bottleneck representation Z of the input, which is then fed into a CNN (a) or flattened and fed into a fully connected network (b). Alternatively, the depth data, in the form of a vector of (x, y, z) point-cloud coordinates, is fed to a PointNet network (c) and the resulting feature vector is passed through a dense network. All models are used to predict either the full ProMP weights (green box - Alg. 1) or just the residual with respect to the mean (yellow box - Alg. 2). Fig. A.4 in the Appendix is a higher-resolution version of this image.
function $h_k$ of the input image $I_k$, the weight parameters $\mathbf{W}_k$, the node activation $\sigma_k$, and the observation/measurement noise $v_k$. We consider the measurement noise to be zero-mean white noise with covariance given by $E[v_k v_l^T] = R_k$.
**Input**: NN architecture h, ProMP basis functions $\Phi$, image I, training-set trajectories $\mathbf{q}$, activation function $\sigma$ **Output**: NN weights $\mathbf{W}$, predicted trajectory $\hat{\mathbf{q}}$ **Note**: This pseudocode is for a single joint trajectory $\hat{\mathbf{q}}$. Generalising it to all joints $\hat{\mathbf{Q}}$ is straightforward.
```
1:  Dataset: \mathcal{T} \leftarrow \{\mathbf{q}, \mathbf{I}\}_{1,...,N_{\mathrm{tr}}}
2:  InitWeights: \{\mathbf{\Theta}_{1},...,\mathbf{\Theta}_{N_{\mathrm{tr}}}\} \leftarrow \mathbf{\Phi} \cdot \{\mathbf{q}_{1},...,\mathbf{q}_{N_{\mathrm{tr}}}\}
3:  InitAverages: \bar{\mathbf{\Theta}} \leftarrow \mathrm{mean}(\mathbf{\Theta}_{1},...,\mathbf{\Theta}_{N_{\mathrm{tr}}}) (eq. (9))
4:  InitDeepModel: \hat{\mathbf{\Theta}}^{\mathrm{res}} \leftarrow h(\mathbf{W}, \mathbf{I}, \sigma) (eq. (8));
      either CNN (2-D) or FCN (1-D), as per Fig. 4
5:  InitFullWeights: \hat{\mathbf{\Theta}} = \hat{\mathbf{\Theta}}^{\mathrm{res}} + \bar{\mathbf{\Theta}} (eq. (10))
6:  InitProMP: \hat{\mathbf{q}} \leftarrow \mathbf{\Phi}^T \hat{\mathbf{\Theta}} (eq. (4))
7:  RMSE: e \leftarrow \|\hat{\mathbf{q}} - \mathbf{q}\|
8:  while (e > \epsilon) do
9:      for all \{\mathbf{q}, \mathbf{I}\} \in \mathcal{T} do
10:         ForwardPropagation: \hat{\mathbf{\Theta}}_k^{\mathrm{res}} = h(\mathbf{W}_k, \mathbf{I}_k, \sigma)
11:         FullWeights: \hat{\mathbf{\Theta}}_k = \hat{\mathbf{\Theta}}_k^{\mathrm{res}} + \bar{\mathbf{\Theta}} (eq. (10))
12:         ForwardProMP: \hat{\mathbf{q}}_k = \mathbf{\Phi}^T \hat{\mathbf{\Theta}}_k (eq. (4))
13:         RMSEJointLoss: e_k = e_{k-1} + \|\hat{\mathbf{q}}_k - \mathbf{q}_k\|
14:         Backpropagation: \mathbf{W}_{k+1} \leftarrow \{\mathbf{W}_k, \partial e_k / \partial \mathbf{W}_k\}
15:     end for
16: end while
17: deep-MP residual: \hat{\mathbf{q}} = h(\mathbf{\Phi}, \mathbf{W}, \mathbf{I}, \sigma)
18: end
```
where $h_k$ is a nonlinear deep model mapping the image $I_k$, taken by the robot's camera at the home position (see Fig. 1b), to the ProMP weights. The weights then generate the corresponding trajectories using eq. (4).
(Residual deep-MP): Usually, a set of demonstrated trajectories conveys information about the demonstrated behaviour regardless of the scene. GMMs, GPs and ProMPs have been used to encode such information in a probabilistic model that can be expressed as a mean and a distribution (see Fig. 3). To improve the performance of the deep-MP, we propose a deep model which learns the correlation between the input image and the residual trajectories, i.e. the difference between the mean and the demonstrated trajectories. The deep model thus has to learn a much simpler mapping, since part of the complexity is captured by the mean trajectory (see the yellow block at bottom right in Fig. 2). First, we fit a ProMP trajectory $\hat{\bf Q}$ to the demonstrated trajectories using least-squares optimisation.
Hence, we can compute the mean weights $\bar{\Theta}$ and the mean joint-space trajectory $\bar{\mathbf{q}}$ (see Fig. 3) by maximising the likelihood as per eq. (9).
<span id="page-3-2"></span>
$$\mathbf{\Theta}^{n} = (\lambda \mathbf{I} + \mathbf{\Phi}^{T} \mathbf{\Phi})^{-1} \mathbf{\Phi}^{T} \mathbf{q}^{n}, \;\; \forall n = 1, \dots, N_{\text{tr}}, \qquad \bar{\mathbf{\Theta}} = \mathbb{E}\left(\left[\mathbf{\Theta}^{1}, \dots, \mathbf{\Theta}^{N_{\text{tr}}}\right]\right) \tag{9}$$
where $\mathbf{q}^n := (q_1, \dots, q_T) \in \mathbb{R}^{T \times 1}$ is the vectorised form of the single-joint values in trajectory $n$, and $\lambda$ is a regularisation term used to avoid over-fitting in the original optimisation objective. The deep-MP model then learns the correlation between the residuals of the trajectories and the visual sensory information.
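The regularised least-squares fit of eq. (9) is a standard ridge regression per demonstration, followed by averaging. A minimal numpy sketch with synthetic, made-up demonstrations (shapes and basis width are assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_basis, n_tr, lam = 100, 8, 20, 1e-6

# Normalised Gaussian basis matrix Phi (T x n_basis)
phase = np.linspace(0.0, 1.0, T)
centres = np.linspace(0.0, 1.0, n_basis)
Phi = np.exp(-0.5 * ((phase[:, None] - centres[None, :]) / 0.1) ** 2)
Phi /= Phi.sum(axis=1, keepdims=True)

# Synthetic demonstrations: noisy sine trajectories, one row per demo
Q = np.sin(2 * np.pi * phase)[None, :] + 0.01 * rng.normal(size=(n_tr, T))

# Eq. (9): Theta^n = (lambda I + Phi^T Phi)^{-1} Phi^T q^n, then average
G = np.linalg.inv(lam * np.eye(n_basis) + Phi.T @ Phi) @ Phi.T   # (n_basis, T)
Thetas = Q @ G.T                                                 # (n_tr, n_basis)
theta_bar = Thetas.mean(axis=0)

# Reconstruction with the mean weights should track the mean demonstration
q_bar_hat = Phi @ theta_bar
assert np.max(np.abs(q_bar_hat - Q.mean(axis=0))) < 0.1
```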
<span id="page-3-3"></span>
$$\mathbf{\Theta}^n = \mathbf{\Theta}^{\mathrm{res},n} + \bar{\mathbf{\Theta}} \tag{10}$$
<span id="page-4-1"></span>
Figure 3: (a) Samples of RTP-RGBD demonstrated trajectories from left/joint 1 to right/joint 4 – trajectories of joints 5, 6 and 7 are not shown here; (b) the computed mean (solid blue lines)—used for the residual deep-MP implementation—and representative variations of the distribution (shaded grey areas).
where $\Theta^n$ and $\Theta^{\mathrm{res},n}$ are, respectively, the full and residual weights of the ProMP model for the $N_{\mathrm{tr}}$ demonstrated trajectories. We use the deep-MP model of eq. (8) to learn $\Theta^{\mathrm{res},n}$, which is added to $\bar{\Theta}$ to form the ProMP corresponding to $\{\mathbf{Q}^n,\mathbf{I}^n\}$, as per eq. (4).
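The residual parameterisation of eq. (10) only shifts what the network has to predict; the full weights are recovered exactly by adding the dataset mean back, and the residual targets are zero-mean by construction. A minimal sketch with made-up weight vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
n_basis, n_tr = 8, 20

Thetas = rng.normal(size=(n_tr, n_basis))   # fitted full weights, one row per demo
theta_bar = Thetas.mean(axis=0)             # mean weights (eq. (9))

# Eq. (10): the network's targets become the residuals Theta^res = Theta - mean
residuals = Thetas - theta_bar

# Recovering the full weights is exact, and the residual targets are zero-mean,
# so the deep model only has to explain scene-dependent deviations.
assert np.allclose(residuals + theta_bar, Thetas)
assert np.allclose(residuals.mean(axis=0), 0.0, atol=1e-12)
```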
Our experimental setup consists of a 7-DoF Panda robotic arm manufactured by Franka Emika. An Intel RealSense D435i RGB-D camera is mounted on the wrist of the arm. We also use a tactile finger consisting of a 6x4 uSkin Xela magnetic-based tactile sensor for RTP-RGBD and WPP data collection. This sensor is firmly connected to the left finger link of the gripper using a 3D-printed mount. Although the tactile readings are not used in this study, we will use them in future work on palpation motion control.
We have obtained three datasets: (1) a reach-to-palpate (RTP) dataset collected in a mock study, called RTP-RGB; (2) a second reach-to-palpate dataset, called RTP-RGBD; and (3) a wedged-palpation-path dataset, called WPP.
**Reach-to-palpate** (**RTP**) (i) RTP-RGB: Our mock study includes the RTP-RGB data collection. For each sample in this dataset, the robotic arm starts from a fixed home pose as shown in Fig. 1b. At this home pose, the camera takes an RGB image of the breast phantom, and the robot is then manually moved to the corner of the breast phantom in kinesthetic teaching mode as shown in Fig. 4. This dataset contains 500 samples, i.e. the robot at the home configuration takes RGB images of the breast phantom placed at random positions on the table. We trained the CNN and FC deep-MP models, which yield 0.0108 and 0.0118 [Radian<sup>2</sup>] errors
<span id="page-4-0"></span>


Figure 4: RTP-RGB dataset: CNN-deep-MP model tested on unseen breast-phantom configurations. (a) shows precise task execution, whereas (b) and (c) show larger errors, as they belong to regions with different sample densities (a, b and c belong to regions 1, 2 and 4, respectively, in Fig. A.3 of the Appendix). The distance between the robot EE and the corner of the breast phantom—the demonstrated desired touching point—is due to the difference in sample density of the corresponding regions.
<span id="page-4-2"></span>

Figure 5: Samples of the WPP dataset: the brown disks on the images show the desired end-points of palpation paths 5, 1, 4 and 3 at configurations I, II, III and IV, respectively. The starting point for each demonstrated palpation is the nipple.
in joint space and 39.7 mm and 46.8 mm errors in task space.
Full details of the RTP-RGB dataset and the obtained results are given in the Appendix. The results obtained on this dataset suggest that (1) we need a more structured dataset to better understand the impact of sample density on the results; (2) the depth data may be relevant; and (3) a more challenging task is needed to showcase the effectiveness of the approach.
(ii) RTP-RGBD: the setup for this data collection is the same as for the RTP-RGB dataset. However, we collected both RGB and depth data for each sample, and the robot is moved by joint-space motion planning to the nipple of the breast phantom. We consider four regions for data collection, as shown in Fig. 6: regions A, B, C and D. After each sample collection, the breast phantom was moved to a new location within the region boundary to create a uniform distribution within each region, with different densities across regions as shown in Fig. 6. A total of 545 samples were collected, with 292, 128, 73 and 52 samples in regions A, B, C and D, respectively.
Wedged Palpation Path (WPP): After moving the robot's tactile finger to the nipple of the breast phantom, the robot needs to follow the palpation path. The robot is moved in kinesthetic teaching mode from the nipple along to the edge of the phantom, similar to the WPPs shown in Fig. 1c. The synchronised robot full state, tactile sensor readings, and joint trajectory are recorded. 31 palpation trials for each of the 7 WPPs (Fig. 1c) and four different phantom configurations (Fig. 5) are recorded, giving a total of 868 palpation samples in the WPP dataset.
<span id="page-5-0"></span>
Figure 6: XY coordinates of the end-effector when the robot reaches the start point of the palpation in RTP-RGBD dataset.
2203.01228/main_diagram/main_diagram.drawio
ADDED

@@ -0,0 +1 @@
2203.01228/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,40 @@
# Introduction
Causal effects of treatments on patient outcomes are hugely important for decision-making in medical practice [@Yazdani.2015]. These estimates inform medical practitioners of the expected effectiveness of treatments and thus guide treatment selection. Information on causal effects is also relevant for decision-making in other domains such as public health [@Glass.2013] and marketing [@Varian.2016].
Causal effects can be estimated from either randomized controlled trials (RCTs) or observational studies [@Robins.2000]. Even though RCTs are the gold standard for estimating causal effects, they are often highly costly, unfeasible in practice, or even unethical (e.g., medical professionals cannot withhold effective treatment from patients in need) [@Robins.2000]. Therefore, medical practice increasingly relies on observational data to study causal relationships between treatments and patient outcomes. Nowadays, electronic health records are readily available, capture patient trajectories with high granularity, and thus provide rich observational data in medical practice [@Allam.2021; @ozyurt2021attdmm].
In this paper, we aim at estimating causal effects from observational data in the form of patient trajectories. Patient trajectories encode medical histories in a time-resolved manner and are thus of longitudinal form. However, estimating causal effects from observational data is subject to challenges [@Robins.2009; @hatt2021generalizing]. One reason is that the underlying treatment assignment mechanism is usually confounded with the patient outcomes. Another reason is that confounders may vary over time, which introduces additional dependencies and treatment-confounder feedback. As an example, consider a physician who assigns medications to patients over time. Here, the patient outcome is a measurement of some vital signs indicating the patient's health status. Typically, the physician selects treatments based on a patient's time-varying covariates such as blood pressure or heart rate, which, however, also affect the patient outcome and are influenced by previous treatments. Hence, methods for estimating causal effects must adjust for time-varying confounders in order to produce unbiased estimates.
While there are works on estimating individualized causal effects [@Lim.2018; @Bica.2020; @Li.2021], we are interested in **average causal effects (ACEs)**. As such, ACEs give the expected difference in health outcomes when applying different treatment interventions at the level of patient subgroups. Estimating ACEs is especially important for medical practice for a number of reasons [@Robins.2009; @hatt2021generalizing]. (i) There are many settings in practice where interventions affect whole populations, such as in public health. For example, a government might be interested in the different effects of a stay-home order on COVID-19 spread for vaccinated vs. non-vaccinated people, or in the effect of a sugar tax on diabetes onset in society. Here, treatments affect subgroups of people, thus necessitating *average* causal effects. (ii) In medical practice, personalization is typically done by varying treatment recommendations across patient subgroups. To this end, medical research seeks to find increasingly more granular subgroups [@Nielsen.2017]. These should identify differences in disease dynamics and thus capture different phenotypes [@Allam.2021], so that different treatment guidelines are tailored to each subgroup [@Hamburg.2010]. To do so, one must again compare causal effects across patient cohorts, that is, *average* causal effects across patient cohorts. Motivated by such considerations from medical practice, our work aims at time-varying ACE estimation.
**Proposed method**: We propose an end-to-end deep learning model to estimate time-varying ACEs, called DeepACE. DeepACE combines a recurrent neural network and feed-forward neural networks to learn conditional expectations of factual and counterfactual outcomes under complex non-linear dependencies, based on which we then estimate time-varying ACEs. In DeepACE, we address time-varying confounding by leveraging the G-formula, which expresses the ACE as a sequence of nested conditional expectations based on observational data. Existing methods are limited in that they learn these nested conditional expectations *separately* by performing an iterative procedure [@vanderLaan.2018]. In contrast, our end-to-end model DeepACE makes it possible to learn them *jointly*, leading to a more efficient use of information across time.
We further develop a *sequential targeting procedure* by leveraging results from semi-parametric estimation theory in order to improve the estimation quality of DeepACE. The sequential targeting procedure perturbs ("targets") the outputs of DeepACE so that our estimator satisfies a semi-parametric efficient estimating equation. To achieve this, we propose a targeting layer and a targeted regularization loss for training. We then derive that DeepACE provides a doubly robust and asymptotically efficient estimator.
Our main **contributions**:[^1]
1. We propose DeepACE: the first end-to-end neural network for estimating time-varying average causal effects using observational data. DeepACE builds upon the iterative G-computation formula to address time-varying confounding.
2. We develop a novel sequential targeting procedure which ensures that DeepACE provides a doubly robust and asymptotically efficient estimator.
3. We perform an extensive series of computational experiments using state-of-the-art models for time-varying ACE estimation, establishing that DeepACE achieves a superior performance. We further demonstrate that DeepACE generates important findings based on a medical case study for patients suffering from low back pain.
# Method
The architecture of the **[G-computation layer]{style="color: blue"}** is shown in Fig. [2](#fig:deepace){reference-type="ref" reference="fig:deepace"} (bottom). In the G-computation layer, we use a long short-term memory (LSTM) layer [@Hochreiter.1997] to process the input data. We choose an LSTM due to its ability to learn complex non-linear dynamics from patient trajectories while addressing the vanishing gradient problem which frequently occurs when using recurrent neural networks.
We feed the data twice into the LSTM: (i) with the observed treatments $\bar{A}_T$ (factual forward pass), and (ii) with the treatment intervention $\bar{a}_T$ (counterfactual forward pass). Based on this, we compute the hidden LSTM states as follows. At each time step $t$, the factual forward pass leads to a factual hidden LSTM state $h_t^A$ depending on the factual trajectory $\mathcal{H}_t = (\bar{X}_t, \bar{A}_{t-1})$, and the counterfactual forward pass leads to a counterfactual hidden LSTM state $h_t^{a}$ depending on the past covariates $\bar{X}_t$ and interventions $\bar{a}_{t-1}$.
Both hidden states $h_t^A$ and $h_t^a$ are processed further. In the factual forward pass, we feed the factual hidden state $h_t^{A}$ together with the current observed treatment $A_t$ into a fully-connected feed-forward network $\mathrm{FF}_t^Q$. The network $\mathrm{FF}_t^Q$ generates a factual output $\hat{Q}_{t+1}^A$ for $Q_{t+1}(\bar{X}_{t}, \bar{A}_{t})$ according to Eq. [\[eq:iterative_process\]](#eq:iterative_process){reference-type="eqref" reference="eq:iterative_process"}. In the counterfactual forward pass, we feed the counterfactual hidden state $h_t^{a}$ into $\mathrm{FF}_t^Q$ as well, replacing the treatment input $A_t$ with the current intervention $a_t$. As a result, the network $\mathrm{FF}_t^Q$ generates a counterfactual output $\hat{Q}_{t+1}^a$ for $Q_{t+1}^a = Q_{t+1}(\bar{X}_{t}, \bar{a}_{t})$.
We design a tailored loss function, such that we mimic Algorithm [\[alg:iter_gcomp\]](#alg:iter_gcomp){reference-type="ref" reference="alg:iter_gcomp"}. For this, we denote the outputs of the G-computation layer for a patient $i$ at time $t$ by $\hat{Q}_{t+1}^{A^{(i)}}(\eta)$ and $\hat{Q}_{t+1}^{a^{(i)}}(\eta)$. Here, we explicitly state the dependence on the model parameters (i. e., of the LSTM and feed-forward layers), which we denote by $\eta$. We define the *G-computation loss* as $$\begin{equation}
\begin{split}
\mathcal{L}_Q(\eta) = \frac{1}{N}\frac{1}{T} & \sum_{i=1}^N \left( \left(\hat{Q}_{T+1}^{A^{(i)}}(\eta) - y_{T+1}^{(i)}\right)^2 \right. \\
& \qquad \left. + \sum_{t = 2}^T \left(\hat{Q}_{t}^{A^{(i)}}(\eta) - \hat{Q}_{t+1}^{a^{(i)}}(\eta)\right)^2 \right) .
\end{split}
\end{equation}$$ By way of how $\mathcal{L}_Q$ is constructed, each counterfactual output $\hat{Q}_{t+1}^{a}$ serves as a prediction target for the previous factual output $\hat{Q}_{t}^{A}$. Recall that, in Algorithm [\[alg:iter_gcomp\]](#alg:iter_gcomp){reference-type="ref" reference="alg:iter_gcomp"}, the counterfactual estimates $\hat{Q}_{t+1}^{a}$ are obtained only by evaluating the learned conditional expectation at $\bar{A}_t = \bar{a}_t$. Therefore, we only want the factual outputs $\hat{Q}_{t}^{A}$ to learn from the counterfactual outputs $\hat{Q}_{t+1}^{a}$ and not the other way around. Hence, when training the model with gradient descent-based optimization, we block gradient backpropagation through $\mathrm{FF}_t^Q$ during the counterfactual forward pass.
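The loss can be evaluated with a few lines of NumPy (a sketch under the convention that column $t-1$ of the arrays below stores $\hat{Q}_{t+1}$ for $t = 1, \dots, T$; array names are ours). In a real implementation, the counterfactual array would additionally be detached from the computation graph, mirroring the blocked backpropagation described above.

```python
import numpy as np

def g_computation_loss(Q_fact, Q_cf, y):
    """Q_fact[:, t-1] holds Q-hat^A_{t+1} and Q_cf[:, t-1] holds Q-hat^a_{t+1}
    for t = 1..T. In training, Q_cf must be treated as a constant (stop-gradient)."""
    N, T = Q_fact.shape
    terminal = (Q_fact[:, -1] - y) ** 2                    # (Q^A_{T+1} - y_{T+1})^2
    nested = ((Q_fact[:, :-1] - Q_cf[:, 1:]) ** 2).sum(1)  # sum_t (Q^A_t - Q^a_{t+1})^2
    return (terminal + nested).sum() / (N * T)

# Tiny worked example with N = 1 patient and T = 2 time steps:
loss = g_computation_loss(np.array([[1.0, 2.0]]),
                          np.array([[0.5, 1.5]]),
                          np.array([3.0]))   # -> ((2-3)^2 + (1-1.5)^2) / 2 = 0.625
```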
For our sequential targeting procedure, we now introduce a [**targeting layer**]{style="color: red"}. The motivation is as follows: in principle, we could estimate the expected potential outcome $\theta^a$ by first training the G-computation layer and then, following Eq. [\[eq:theta_iterativ\]](#eq:theta_iterativ){reference-type="eqref" reference="eq:theta_iterativ"}, taking the empirical mean over the first counterfactual outputs $\hat{Q}_{2}^a$. Instead, we propose to leverage results from semi-parametric estimation theory, as this allows us to construct an estimator with better theoretical properties, namely double robustness and asymptotic efficiency.[^2] For this purpose, we design our targeting layer so that it estimates the treatment assignments, i. e., the so-called propensity scores $$\begin{equation}
g_t(\mathcal{H}_t) = P(A_t = 1 \,\mid\, \mathcal{H}_t) = \mathrm{E}(A_t \,\mid\, \mathcal{H}_t).
\end{equation}$$ We then use the propensity scores to perturb the counterfactual outputs $\hat{Q}_{t}^a$ so that they satisfy an efficient estimating equation. To formalize this, we first provide the mathematical background and then describe how we implement the targeting layer.
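As a toy illustration of the quantity being estimated, a propensity head can be as simple as a sigmoid over a linear map of the history representation (the weights below are illustrative placeholders, not DeepACE's actual parametrization):

```python
import numpy as np

def propensity(h_t, w, b):
    """g_t(H_t) = P(A_t = 1 | H_t), here a sigmoid head on a history encoding h_t."""
    return 1.0 / (1.0 + np.exp(-(h_t @ w + b)))

# With zero weights the head is uninformative and returns probability 0.5.
g = propensity(np.zeros(4), np.zeros(4), 0.0)   # -> 0.5
```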

2203.05238/main_diagram/main_diagram.drawio
ADDED
2203.05238/paper_text/intro_method.md
ADDED
@@ -0,0 +1,113 @@
# Introduction
3D object detection is a fundamental scene understanding problem, which aims to detect 3D bounding boxes and semantic labels from the point cloud of a 3D scene. Due to the irregular form of point clouds and the complex contexts in 3D scenes, most existing 2D methods [\[33,](#page-9-0) [34,](#page-9-1) [52\]](#page-9-2) cannot be directly applied to 3D object detection. Fortunately, with the development of deep learning techniques for point cloud understanding [\[29,](#page-8-0) [30\]](#page-9-3), recent works [\[13,](#page-8-1) [22,](#page-8-2) [27,](#page-8-3) [37,](#page-9-4) [53\]](#page-9-5) have

<span id="page-0-0"></span>
Figure 1. Demonstration of *BR*. We consider position-level annotations as the coarse layout of the scenes, which is utilized to generate virtual scenes from a 3D shape repository. Physical constraints are applied to the virtual scenes to remedy the information loss from box annotations to centers. Then a virtual-to-real domain adaptation method is presented to additionally supervise real-scene 3D object detection with the virtual scenes. Dashed arrows indicate supervision for training.

employed deep neural networks to directly detect objects from point clouds and achieved favorable performance.
Despite the successes of deep learning based object detection on point clouds, massive amounts of labeled bounding boxes are required for training the detector. This issue significantly limits the applications of these methods, as labeling a precise 3D box takes more than 100s even for an experienced annotator [\[38\]](#page-9-6). Therefore, 3D object detection methods using cheap labels are desirable for practical applications. Motivated by this, increasing attention has been paid to weakly-supervised 3D object detection methods, which can be divided into two categories according to the form of annotation: scene-level [\[35\]](#page-9-7), where only the class tag is annotated for each object, and position-level [\[23,](#page-8-4) [24\]](#page-8-5), where both the object center and class are annotated. The two types of annotation require less than 1% and 5% of the time per instance compared to labeling a bounding box, as shown in Table [1.](#page-1-0) While scene-level annotation is more time-saving, it is hard for the detector to learn how to precisely locate each object in a

<sup>\*</sup>Corresponding author.
<span id="page-1-3"></span><span id="page-1-0"></span>Table 1. Annotating time and detection results of different methods based on various types of annotation. The benchmark is detailed in Section [4.](#page-4-0) (BBox refers to box annotation. S-L and P-L mean scene-level and position-level annotations respectively.)
| Annotation          | BBox [22] | S-L [35] | P-L [23] | P-L (BR) |
|---------------------|-----------|----------|----------|----------|
| Time (s per object) | 110       | 1        | 5        | 5        |
| mAP@0.25 (%)        | 54.2      | <20      | 32.4     | 47.0     |
scene due to the lack of position information, and thus the performance is far from satisfactory [\[35\]](#page-9-7). Considering the time-accuracy tradeoff, position-level annotation is a more practical solution. However, previous position-level weakly-supervised 3D detection methods still require a number of precisely labeled boxes and can only cope with sparse outdoor scenes [\[23,](#page-8-4) [24\]](#page-8-5). A purely position-level weakly-supervised method for the complicated indoor detection task remains unexplored.

In this paper, we propose a shape-guided label enhancement approach called *Back to Reality* (BR) for weakly-supervised 3D object detection[1](#page-1-1). To reduce the labor cost, we only label the center of each object in 3D space, and labeling error in the centers is allowed[2](#page-1-2). While this largely reduces the workload of labeling, the information loss from box annotations to centers is non-negligible. To address this, BR converts the weak labels into virtual scenes which contain much of the lost information, and in turn utilizes them to additionally supervise real-scene training, as shown in Figure [1.](#page-0-0) Our approach is based on two motivations: 1) in 3D vision, large-scale datasets of synthetic shapes are available; they contain rich geometry information, which can serve as a strong prior to assist 3D object detection; 2) the position-level annotations are not only supervision for training but also provide the coarse layout of the scene. Therefore, we assemble the 3D shapes into fully-annotated virtual scenes according to the coarse layout and apply physical constraints to them to remedy the information loss. Then a virtual-to-real domain adaptation method is presented to align the global features and object proposal features extracted by the detector between the real and virtual scenes. Moreover, our method can take advantage of the precise center labels in virtual scenes to correct the center errors of position-level annotations. In this way, the useful knowledge contained in the virtual scenes is transferred back to reality. Experimental results on ScanNet [\[9\]](#page-8-6) show the effectiveness of the proposed BR method.

# Method
Figure [2](#page-2-1) illustrates the framework of our approach. Given real scenes with position-level annotations, we utilize 3D shapes to convert the weak labels into virtual scenes, which are utilized to provide additional supervision for the training of the detector. In this section, we first discuss our weakly-supervised setting and then demonstrate the key steps of BR.
<span id="page-1-1"></span><sup>1</sup>Label enhancement (LE) is a technique to recover label distributions from logical labels, as defined in [\[48\]](#page-9-8). Here we extend the concept of LE to denote the process of recovering the lost information for weak labels.
<span id="page-1-2"></span><sup>2</sup>We show the detailed labeling strategy in Section [3.1.](#page-2-0)
<span id="page-2-3"></span><span id="page-2-1"></span>
Figure 2. The framework of our *BR* approach. Given real scenes with position-level annotations, we first enhance the weak labels to get fully-annotated virtual scenes. Then the real scenes and virtual scenes are fed into the detector, trained with weakly-supervised and fully-supervised detection loss respectively. During training we use the precise object centers in virtual scenes to refine the imprecise centers in real scenes. A strong-weak adversarial domain adaptation method is utilized to align the distributions of features from both domains. The global discriminator outputs judgments for each scene, and the proposal discriminator outputs judgments for each object proposal. (Here GRL refers to the gradient reversal layer; D<sup>g</sup> and D<sup>p</sup> stand for the global and proposal discriminators respectively.)

As choosing a point directly in 3D space is hard, we divide the labeling process into two steps: first, we label the center of an object in a proper 2D view of the scene and compute the line through this center and the focal point of the camera according to the camera parameters of the 2D view. Second, we choose a point on the line to determine the object's center in 3D space. This strategy requires less than 5s to label an instance, and the labeling error can be controlled within 10% of the instance size.
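The back-projection in the first step can be sketched as follows (a minimal NumPy illustration assuming a pinhole model with intrinsics `K` and a camera-to-world rotation `R` and center `t`; the function names are ours, not from the paper's code):

```python
import numpy as np

def click_ray(u, v, K, R, t):
    """Ray through pixel (u, v): origin is the camera centre t,
    direction is the back-projected pixel rotated into world coordinates."""
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    d_world = R @ d_cam
    return t, d_world / np.linalg.norm(d_world)

def point_on_ray(origin, direction, depth):
    """Second labeling step: pick a point along the ray."""
    return origin + depth * direction

# Trivial check: identity camera at the origin, click on the principal point.
origin, direction = click_ray(0.0, 0.0, np.eye(3), np.eye(3), np.zeros(3))
center_3d = point_on_ray(origin, direction, 2.0)   # -> [0, 0, 2]
```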
When a 3D scene is scanned, in many cases we can acquire mesh data, so we assume meshes are available in our input. Nevertheless, the case where only point cloud data is available is also considered in our approach and experiments.

While position-level annotation requires far less labeling time, its information loss is severe, which manifests in two aspects: 1) the information about objects' sizes is lost; 2) the object centers are imprecise. In spite of this, position-level annotations provide a coarse layout of the scenes. By assembling synthetic 3D shapes according to this layout, we are able to enhance the weak labels and generate accurately-annotated virtual scenes where sizes are available and centers are precise. Our label enhancement method has two steps: 1) we first calculate some basic properties of the 3D shapes; 2) we then place these shapes to generate physically reasonable virtual scenes from the labels. We provide implementation details in the supplementary[3](#page-2-2).

Definition of Shape Properties: Given a synthetic 3D shape, represented as O ∈ R<sup>N</sup>×<sup>3</sup>, we assume it is axis-aligned and normalized into a unit sphere. The length, width and height of O are denoted by l, w and h. We then divide the categories of shapes into three classes: supporter, stander and supportee. Supporters and standers are objects that can only be supported by the ground, with the difference that standers are unlikely to support other things. All other categories are supportees.

If a shape is a supporter, three properties are calculated: the minimum-area enclosing rectangle (MER<sup>∗</sup>), the supporting surface height (SSH<sup>∗</sup>) and the compactness of the supporting surface (CSS<sup>∗</sup>). The MER is computed in the XY plane and is the minimum rectangle enclosing all the points of the shape. The SSH is the height of the highest surface on which other objects can stand. The CSS is a boolean value indicating whether the supporting surface can be approximated by the MER.
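The MER can be computed with the classic rotating-edge technique: the optimal rectangle has a side collinear with a convex-hull edge, so it suffices to try each hull edge's orientation. A self-contained NumPy sketch (helper names are ours; the paper's exact routine is in its supplementary):

```python
import numpy as np

def convex_hull(pts):
    """Andrew's monotone chain; pts is an iterable of (x, y)."""
    pts = sorted(map(tuple, pts))
    def build(seq):
        h = []
        for p in seq:
            while len(h) >= 2:
                (ax, ay), (bx, by) = h[-2], h[-1]
                if (bx - ax) * (p[1] - by) - (by - ay) * (p[0] - bx) > 0:
                    break   # strict left turn: keep
                h.pop()
            h.append(p)
        return h
    lower, upper = build(pts), build(list(reversed(pts)))
    return np.array(lower[:-1] + upper[:-1])

def min_area_rect(xy):
    """Return (area, angle) of the minimum-area enclosing rectangle in XY."""
    hull = convex_hull(xy)
    best_area, best_angle = np.inf, 0.0
    for i in range(len(hull)):
        ex, ey = hull[(i + 1) % len(hull)] - hull[i]
        ang = np.arctan2(ey, ex)
        c, s = np.cos(-ang), np.sin(-ang)
        rot = xy @ np.array([[c, -s], [s, c]]).T   # rotate the edge onto the x-axis
        extent = rot.max(axis=0) - rot.min(axis=0)
        if extent[0] * extent[1] < best_area:
            best_area, best_angle = extent[0] * extent[1], ang
    return best_area, best_angle

# Unit square plus an interior point: MER area is 1.
mer_area, mer_angle = min_area_rect(
    np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]]))
```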
Virtual Scene Generation: We utilize a three-stage approach to construct the virtual scenes, which amounts to generating the position of each shape stage by stage: 1) we first refine the coarse layout provided by the position-level annotations and generate the initial positions; 2) we then generate gravity-aware positions by restoring the supporting relationships between objects; 3) lastly, we generate collision-aware positions to make the virtual scenes physically reasonable. The pipeline is shown in Figure [3.](#page-3-0)

To generate *initial positions*, we need to recover a more precise layout from the geometric information of the scenes. Given a scene in mesh format, we first oversegment the meshes using a normal-based graph cut method [\[11,](#page-8-21) [15\]](#page-8-22). The result is a segment graph, where the nodes indicate segments and the edges denote adjacency relations. Then, for horizontal<sup>∗</sup> segments whose area<sup>∗</sup> is larger than Amin and whose height<sup>∗</sup> is larger than Hmin, we iteratively merge

<span id="page-2-2"></span><sup>3</sup>We use <sup>∗</sup> to indicate that the exact definition is in the supplementary.

<span id="page-3-0"></span>
Figure 3. The pipeline of our three-stage virtual scene generation method. We first extract horizontal segments from the mesh data and use them to refine the coarse layout provided by position-level annotations. Then synthetic 3D shapes are placed in virtual scenes according to the new layout to construct initial virtual scenes. After that we apply gravity and collision constraints on the virtual scenes to restore the lost physical relationships between objects and make the scenes more realistic.
their neighbors into them if the height difference between the horizontal segment and the neighbor segment is smaller than $\Delta_h$. Once merged, the segments are considered as a whole, and the height of the new merged segment is set to be the same as that of the original horizontal segment. After merging, each horizontal segment is represented by its MER. If only one supporter's center falls in an MER, we assign this MER to the supporter. When the centers of multiple supporters fall in the same MER, we perform K-means clustering of the horizontal segment according to these centers and calculate an MER for each supporter respectively.

Then we place the 3D shapes of corresponding categories on the centers given by position-level annotations and utilize the horizontal segments to refine the layout. The initial positions of the shapes are represented by a dictionary, whose key is the instance index and value is a list:
$$[(x, y, z), (s_x, s_y, s_z), O, \theta, S, M, H] \tag{1}$$
where the instance index is an integer ranging from 1 to the number of objects in the scene. (x,y,z) denotes the center coordinates. $(s_x,s_y,s_z)$ indicates the scales in the three dimensions. $\theta$ is the rotation angle of the shape. S tells whether the shape is a supporter. M and H indicate the MER and SSH of the supporter; they are set to None when S is false. If the shape has been assigned a horizontal segment, we use the MER of that segment to initialize the above parameters. That is, we choose a supporter whose CSS is True and make the MER of this supporter overlap with the horizontal segment. Otherwise, we conduct random initialization. If only point cloud data is available, we simply perform random initialization, and the following stages are the same.
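For concreteness, a hypothetical entry of this dictionary might look as follows (all values and the shape identifier are made up for illustration; the paper stores the shape O itself rather than an id):

```python
# Hypothetical position dictionary with two instances; fields follow Eq. (1).
positions = {
    3: [(1.2, 0.8, 0.45),            # (x, y, z): center coordinates
        (1.0, 0.6, 0.9),             # (s_x, s_y, s_z): scales
        "chair_0042",                # O: shape (an id here, the point set in the paper)
        1.57,                        # theta: rotation angle
        True,                        # S: the shape is a supporter
        ((0.9, 0.5), (1.5, 1.1)),    # M: MER (assumed corner encoding)
        0.45],                       # H: SSH, supporting-surface height
    7: [(2.0, 2.0, 0.3),
        (0.8, 0.8, 0.6),
        "lamp_0007",
        0.0,
        False,                       # not a supporter ...
        None, None],                 # ... so M and H are None
}
```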
Next, we traverse the initial positions to generate *gravity-aware* positions. In this process we only need to change z and SSH in the position dictionary. For supporters and standers, we directly align their bottoms with the ground (i.e. the XY plane). For a supportee, if its (x,y) falls within any supporter's MER, we assign it to the nearest such supporter and align its bottom with the supporting surface. Otherwise, it is aligned to the ground.
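This z-update can be sketched as follows (assuming, for simplicity, axis-aligned MERs and taking the first matching supporter rather than the nearest; function names are ours):

```python
def in_mer(x, y, mer):
    """Axis-aligned MER given as ((x_min, y_min), (x_max, y_max))."""
    (x0, y0), (x1, y1) = mer
    return x0 <= x <= x1 and y0 <= y <= y1

def gravity_z(x, y, half_height, is_supportee, supporters):
    """New z (center height) after the gravity stage.
    supporters: list of (mer, ssh) pairs."""
    if is_supportee:
        for mer, ssh in supporters:          # the paper picks the *nearest* match
            if in_mer(x, y, mer):
                return ssh + half_height     # bottom rests on the supporting surface
    return half_height                       # supporters/standers/unmatched: on the ground

table = (((0.0, 0.0), (2.0, 2.0)), 0.7)                 # one supporter, SSH = 0.7
z_on_table = gravity_z(1.0, 1.0, 0.1, True, [table])    # -> 0.8
z_on_floor = gravity_z(5.0, 5.0, 0.1, True, [table])    # -> 0.1
```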
After that, we move the shapes to acquire *collision-aware* positions. In this stage, only x and y in the position dictionary are changed. First we move the objects on the ground; any objects supported on them move together. Then, for each supporter, we move its supportees until there is no overlap. Note that the three generation stages not only make the virtual scenes more realistic but also weaken the impact of imprecise center labels. Thus the virtual scene generation method is robust to labeling errors.

Finally, we convert the collision-aware positions to point clouds of proper density. As larger surfaces are more likely to be captured by the sensor, we use the maximum of $(ls_x)(ws_y)$, $(ws_y)(hs_z)$ and $(ls_x)(hs_z)$ to approximate the surface area of each shape. The number of points for each object is then set proportional to its surface area using uniform sampling, with the largest one retaining N points.
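The per-object point budget can be computed as follows (a NumPy sketch; `N` and the dimensions below are placeholders):

```python
import numpy as np

def point_budget(dims, N=2048):
    """dims: (n_obj, 3) array of scaled extents (l*s_x, w*s_y, h*s_z).
    Surface-area proxy = largest face area; the largest object keeps N points."""
    l, w, h = dims[:, 0], dims[:, 1], dims[:, 2]
    area = np.maximum.reduce([l * w, w * h, l * h])
    return np.round(N * area / area.max()).astype(int)

budget = point_budget(np.array([[1.0, 1.0, 1.0],
                                [2.0, 1.0, 1.0]]))   # -> [1024, 2048]
```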
While the label enhancement approach is able to generate physically reasonable, fully-annotated virtual scenes, there is still a huge domain gap between them and the real scenes (e.g. backgrounds like walls are missing in the virtual scenes). Therefore, rather than just relying on the virtual scenes, we need to mine the useful knowledge in the perfect virtual labels to make up for the information loss of position-level annotations.

We refer to the virtual scenes and real scenes as source domain and target domain respectively. A virtual-to-real adversarial domain adaptation method is utilized to solve the above problem, whose overall objective is:
$$\max_{D} \min_{O} J = L_{sup}(O) - L_{adv}(O, D) = (L_1 + L_2 + L_3) - (L_4 + L_5) \tag{2}$$
where O refers to the object detection network (detector) and D indicates the discriminators used for adversarial feature alignment. $L_{sup}$ aims to minimize the differences between the predicted bounding boxes and the annotations,
<span id="page-4-2"></span><span id="page-4-1"></span>
Figure 4. Demonstration of our center refinement method. We first jitter the center labels in the source domain, and utilize a PointNet-like module to predict the center offset from the local graph of the jittered centers. This module can be directly utilized to predict the center error in the target domain, as the global semantic features from the two domains have been aligned.

which can be further divided into the loss for the center refinement module ($L_1$), the fully-supervised detection loss on the source domain ($L_2$) and the weakly-supervised detection loss on the target domain ($L_3$). The objective of $L_{adv}$ is to align the features from the source and target domains, which aims to utilize the knowledge learned from the source domain to assist object detection in the target domain. $L_{adv}$ can be divided into a global feature alignment loss ($L_4$) and a proposal feature alignment loss ($L_5$). Below we explain these loss functions and our network in detail.

Firstly, we elaborate on $L_{sup}(O)$. As shown in Figure [2,](#page-2-1) we divide the detector into three blocks: a backbone which extracts global semantic features from the scene, a detection module which generates object proposals from the semantic features, and a prediction head which predicts the semantic label and bounding box from each object proposal feature.

During training, we jointly refine the imprecise center labels in the target domain and supervise the predictions of the detector. As shown in Figure [4,](#page-4-1) we jitter the center labels in the source domain by adding noise within 10% of the objects' sizes to imitate the labeling error in the target domain. Then, for each jittered center, we query its k nearest neighbors in 3D Euclidean space from the global semantic features to construct a local graph, and predict the center offset through a PointNet-like module:

$$p(c) = \text{MLP}_2 \left\{ \max_{i \in N(c)} \left\{ \text{MLP}_1[f_i; c_i - c] \right\} \right\} \tag{3}$$
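Eq. (3) can be sketched in a few lines of NumPy (randomly initialized single-layer MLPs stand in for the actual modules; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_offset(c, neighbor_xyz, neighbor_feat, W1, W2):
    """Per-neighbor MLP1 on [f_i ; c_i - c], channel-wise max over the k
    neighbors, then MLP2 mapping the pooled vector to a 3-D center offset."""
    x = np.concatenate([neighbor_feat, neighbor_xyz - c], axis=1)  # (k, F + 3)
    hidden = np.maximum(x @ W1, 0.0)        # shared MLP1 + ReLU, (k, H)
    pooled = hidden.max(axis=0)             # channel-wise max-pooling, (H,)
    return pooled @ W2                      # MLP2, (3,)

k, F, H = 8, 16, 32
W1 = rng.normal(size=(F + 3, H))
W2 = rng.normal(size=(H, 3))
offset = predict_offset(np.zeros(3),
                        rng.normal(size=(k, 3)),    # neighbor coordinates c_i
                        rng.normal(size=(k, F)),    # neighbor features f_i
                        W1, W2)
```

The max-pooling makes the prediction invariant to the ordering of the k neighbors, as in PointNet.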
where p denotes the PointNet-like module, c indicates the jittered center label, N(c) is the index set of the k nearest neighbors of c, $f_i$ is the global semantic feature whose coordinate is $c_i$, and *max* refers to channel-wise max-pooling. We set $L_1$ as the mean squared error between the ground-truth center offset and p(c). For fully-supervised training, the detection loss $L_2$ is the same as the loss utilized in the original method. For weakly-supervised training, we utilize p to predict the center error in the target domain and acquire refined center labels. We set $L_3$ as a simpler version of $L_2$ which ignores the supervision for box sizes. More details about $L_3$ can be found in the supplementary.

Secondly we analyze Ladv(O, D). We conduct feature alignment in an adversarial manner: the discriminator predicts which domain the features belong to, and the detector aims to generate features that are hard to discriminate. The sign of gradients is flipped by a gradient reversal layer [\[12\]](#page-8-23).
As the virtual scenes and real scenes are processed by the same network, we hope $L_3$ helps the network learn how to locate each object in real scenes, and $L_2$ compensates for the information loss of centers and sizes. However, due to the domain gap, $L_2$ will introduce domain-specific knowledge of the virtual scenes, which impairs the influence of $L_3$. Besides, the center refinement module is trained only on the source domain, so it may not perform well on the target domain. Therefore, we align the global semantic features and the object proposal features with $L_4$ and $L_5$ respectively. Inspired by [\[36\]](#page-9-20), the features are aligned with different intensities at different stages. For the global semantic features, we use a PointNet to predict the domain label. Focal loss [\[19,](#page-8-24) [36\]](#page-9-20) is utilized to apply weak alignment:

$$L_4 = -\sum_{i=1}^{B} (1 - p_i)^{\gamma} \log(p_i), \quad \gamma > 1 \tag{4}$$
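A NumPy sketch of this weak-alignment term (the helper name is ours; `p` collects the discriminator's probabilities for the true domain of each scene in the batch):

```python
import numpy as np

def global_alignment_loss(p, gamma=2.0):
    """Focal-style loss of Eq. (4): confidently classified (domain-specific)
    scenes are down-weighted by (1 - p)^gamma, so the alignment stays weak."""
    p = np.clip(p, 1e-7, 1.0 - 1e-7)
    return -np.sum((1.0 - p) ** gamma * np.log(p))

hard = global_alignment_loss(np.array([0.5]))    # ambiguous scene: large weight
easy = global_alignment_loss(np.array([0.99]))   # domain-specific scene: tiny weight
```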
where B is the batch size, and $p_i$ refers to the probability the global discriminator assigns to the correct domain. Features with high p are easy to judge, which means they are domain-specific, and forcing invariance on them can hurt performance; hence a small weight reduces their impact on training. The object proposal features, in contrast, are directly taken to predict the properties of the bounding boxes. As these properties are domain-invariant and have real physical meaning, we strongly align this stage of features using an objectness-weighted L2 loss:

$$L_5 = \sum_{i=1}^{B} \sum_{j=1}^{N} s_{ij} (1 - p_{ij})^2 \tag{5}$$
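And a matching sketch for the strong proposal-level term (again with our helper name; `s` holds the objectness labels, `p` the proposal discriminator's probabilities for the true domain):

```python
import numpy as np

def proposal_alignment_loss(s, p):
    """Objectness-weighted L2 loss of Eq. (5): background proposals
    (objectness 0) contribute nothing to the alignment."""
    return np.sum(s * (1.0 - p) ** 2)

# One scene, two proposals: only the first (objectness 1) contributes.
loss = proposal_alignment_loss(np.array([[1.0, 0.0]]),
                               np.array([[0.5, 0.1]]))   # -> 0.25
```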
where B is the batch size, N is the number of proposals, $s_{ij}$ refers to the objectness label, and $p_{ij}$ is the probability the proposal discriminator assigns to the correct domain. We detail the architectures of the center refinement module and the discriminators in the supplementary.
|