Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes. See raw diff.
- 2002.09437/main_diagram/main_diagram.drawio +1 -0
- 2002.09437/main_diagram/main_diagram.pdf +0 -0
- 2002.09437/paper_text/intro_method.md +75 -0
- 2008.12855/main_diagram/main_diagram.drawio +0 -0
- 2008.12855/paper_text/intro_method.md +158 -0
- 2102.09701/main_diagram/main_diagram.drawio +1 -0
- 2102.09701/main_diagram/main_diagram.pdf +0 -0
- 2102.09701/paper_text/intro_method.md +105 -0
- 2106.13265/main_diagram/main_diagram.drawio +1 -0
- 2106.13265/main_diagram/main_diagram.pdf +0 -0
- 2106.13265/paper_text/intro_method.md +36 -0
- 2202.14026/main_diagram/main_diagram.drawio +0 -0
- 2202.14026/paper_text/intro_method.md +51 -0
- 2203.05272/main_diagram/main_diagram.drawio +1 -0
- 2203.05272/main_diagram/main_diagram.pdf +0 -0
- 2203.05272/paper_text/intro_method.md +59 -0
- 2205.12105/main_diagram/main_diagram.drawio +0 -0
- 2205.12105/paper_text/intro_method.md +251 -0
- 2206.05696/main_diagram/main_diagram.drawio +1 -0
- 2206.05696/main_diagram/main_diagram.pdf +0 -0
- 2206.05696/paper_text/intro_method.md +41 -0
- 2207.01377/main_diagram/main_diagram.drawio +0 -0
- 2207.01377/paper_text/intro_method.md +68 -0
- 2209.10448/main_diagram/main_diagram.drawio +0 -0
- 2209.10448/paper_text/intro_method.md +92 -0
- 2212.04092/main_diagram/main_diagram.drawio +1 -0
- 2212.04092/main_diagram/main_diagram.pdf +0 -0
- 2212.04092/paper_text/intro_method.md +105 -0
- 2301.09249/main_diagram/main_diagram.drawio +0 -0
- 2301.09249/paper_text/intro_method.md +78 -0
- 2305.03088/main_diagram/main_diagram.drawio +1 -0
- 2305.03088/main_diagram/main_diagram.pdf +0 -0
- 2305.19693/main_diagram/main_diagram.drawio +0 -0
- 2305.19693/paper_text/intro_method.md +80 -0
- 2306.10563/main_diagram/main_diagram.drawio +0 -0
- 2306.10563/paper_text/intro_method.md +146 -0
- 2307.07942/main_diagram/main_diagram.drawio +0 -0
- 2307.07942/paper_text/intro_method.md +71 -0
- 2307.07988/main_diagram/main_diagram.drawio +0 -0
- 2307.07988/paper_text/intro_method.md +82 -0
- 2310.04742/main_diagram/main_diagram.drawio +545 -0
- 2310.04742/main_diagram/main_diagram.pdf +0 -0
- 2310.04742/paper_text/intro_method.md +113 -0
- 2310.06148/main_diagram/main_diagram.drawio +0 -0
- 2310.06148/paper_text/intro_method.md +48 -0
- 2310.09130/main_diagram/main_diagram.drawio +0 -0
- 2310.09130/paper_text/intro_method.md +176 -0
- 2310.14017/main_diagram/main_diagram.drawio +0 -0
- 2310.14017/paper_text/intro_method.md +110 -0
- 2312.00388/main_diagram/main_diagram.drawio +0 -0
2002.09437/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="www.draw.io" modified="2019-11-13T01:47:11.875Z" agent="Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0" etag="-KLHpG6VwUpJGofIe51Y" version="12.2.4" type="device" pages="1"><diagram id="1BuaQ7CHyXzjpNB8chRU" name="Page-1">5LzHju3KkiX4NW/YALUYUutNLWfUWmt+fZPn3JdZiUIB3YPqLiADgR0Mp3K6m9lay8y5/wUz/SUs8VRpY5Z3/4KA7PoXzP4LgkAcwN4/X8v9twXHoL8N5VJn/xz0nw12/eT/NAL/tO51lq//5cBtHLutnv5rYzoOQ55u/6UtXpbx/K+HFWP3X+86xWX+PzXYadz9z61+nW3V31YCwv+zXczrsvr3nUGM/Lunj/998D9PslZxNp7/QxPM/QtmlnHc/m71F5N33+D9e1z+nsf/L/b+R8eWfNj+n5yQJFsbCBVp4qKkBKeW1Qv8f/1zlSPu9n8e+J/Obve/R+Dt9/Rt1v2foaKPfNnqd4DUOMk7Y1zrrR6Hd38ybtvYvwd03w46TttyGfchY8ZuXP5cCi7+/PwP16C6uvzO3cbpbY3X6e8UFvWVv72m/9yS+ncr8O+WdzuLt/hfMPX3X4ifhvJfEFN7tG6dgCKUI/X+/Gy34tzy3WK+f6mSoaT3D0sIOtS8G3LFdZzpWchABbTv/wuiUxBuKUQ08hV+8McofqyQVnPMjXekzso4NBCHLpDQMrLENvBh1zwOr857YnMwFtV0iOD8qMqkAd8SuMa6JRTeE9fvK4+v36NMTS+OO/Ez1EEx8Th+B4y5+pRTrqkMsc9A3SaVumi6asOWmesr/EI3FS9f/HgCtdsq6JpYgl12dIVa9pTY4MCOXFh5ddLWlcqn3MCs8timtSRdDwMwkgRa1BJZ9FUBp9WUZj3NoTBWfCQtWDe0Sp/IFOBJwFHSOpXUj1/9ZlEVdFeLKf1xODBF4rjq5stUSppum5oPlTC6qMdiNi0KMfpExhbB0akIBA0Flo0EZ7R/ag3oL86M6fdQl9PGkEOsMZOfpG8ocWbBnlFSjqJL0zpPjOocaRTfSSTJYUSHFFFO2zWdq8v3nbWwzELVFKj54kjA4iHe3uCwhAUniq4yiht64Alur2RoaJl391PmgJNC2bLaYfD9MfddZuQL4726+H0Mr4vwZRrcYLfvqvhefWhR40nts8AP1xjkcMKlDbnAIIIyS+iXxNiiMAhtwBz7n8vrmrzYfMTHHJRdcUKrYqo3saLrrEONTOnjn/u998JvLCBz13dWIpgwY3gkYosB+9eUVYOXaU5brznUs/09N03o88ORurrnS72JyNvSIqvKy0vIK3caDW8D2r4f3wPAJzCPlHrG3dNIgltWShu+5zoELmh3+vA7SW/Vqi1rB/kgv0b81x35ivrARnZu7CaZ97x6NvE7F1E0jmcm1tORMnKDucZFD/SDKxXUO4aF1SV6ZZ2tEfW3S7z+mwJuwUDQgHBQn9CDiyMF8BRowZ1ZfQ/gq3sGpgEUxrerEtSohpDeBz8nQDzfA8UNqV4ky3sgV9bRin9bMOlDwj2fFy93tMAsSvv6Mk0S2HiM0IbTh9PkXKqK/lQsMD8wyjcC8xnlzYprZz1L57DC/edZvuKutX3qP2LN3AFBvd6YfLsiYOgM4WSyE/YE1wyPwmEixygm1KA4aOO89kMU8QWK88IgmvdC/Bp4HXtMjMulUJ6gcSMa0s5Ja15i+rODv83g60aWkLF8lnmpsUJ/BsdCQoeg0dUNOlQPjiObQeF8VNwRtJwZnie99dupikoGCwsRtcPuLJLdcROb54m2lyZPojeaQfQbG3lrNiK8cCeNGtfHm1M3DUuG12afKIwflAW6+XXT9fx5WXwztEpQasiRodHnvQCyF+/O4l4qIHcwr8IogdKTd8fJc17lkMuaPTPvY2u8+U92wH7EdLwPPuXWrPa9/AQQOzZSghBovRF1TViTv1v7euK95jU1QeX4NGkqnYQLK4bryobvr3WTo6nscGHCFwxNgr39WCBkdw0R8mdXs2xJCv7W2p+o5D/ej7zONnsZmRsv9oMNZJeWpsFH12vAJMnPgYjj/XjO++03QCvvZ3T7h9wVxsojWnIhN3yuiR/nsIFuh/fuPxh/UZ6V0Hnv7puN6Reoq5ZFAefIvAEdBJUqtt84D4KSlae/XCsdU1GiHaqxUIa4CpVDl3vvucONf6OjuIHY+jOGEQCK98O1I0BFguuFSt48y7L05ijxsH6JoisTLZS3vd/TnWHVyZilkeuxXyAHMS75RkOnZJZ8oT//4cWq3dHdT4oodtzEPBqZ9NbjOGQ4QbEiuDw+DTrw9xoAbRekwzsPAJ9o0sSfgQfWrVvYBt574+Sl+Ah9hc43gnE5JnrUPmWF9B61RncD7zCL5EUhz+tlQ21PYnluNONJfqaBeMT3nAZZDfN7PNh32G+4NMY/oRSjTj8neOxXDPGqPMETn+A7WV4UcepnXLT/jXWWuTkuhD6oiKtngXsb3vVuFtslvaGVDuGHyI7DmGHQStiIlTDjWd92vZF9yjbQfPIBGts8IHxIHS+OonhKws+NAYevEyUUwUFIo0M3prbMLtvZoJJfGNk9XcC0dGT6WI1qdDwYvZTOQ40KT5i8UaoFEMK+AKmOoREswP777V8gUROIhPBiaBzXE31F+fnh52VxBpJk+CgRnPwN16aHuTcZ+C2IO42J11AE0NGBxTvYcQrZaNYomgMfugoy9wA0OiFfPF+Ut9PyJXvf7OZ//ySbk8Tt7mO2auQMksIsHIor0QG/DpBUAsHebvG/3y/Ctu1yb+MaVpHE12yM5PYN6sLSWjFVfdc7rm84NfWdNFqh37F44CIjjbap9p+O50ehi3/7PpGrNYtLjl9Q5n8mmovVEKXk7RNaRRwosgvNQooRgXlxh8c3PExRZhw4TNwKuAT9Y2jSPPtXc6ygcuA8kQUn3G9S2o+amRR5Iicv0TtAYNs+t8zrDicv8v15/7HvbvktOgkXENI6irN9/SFSpQrKz9jQ8swbNfBrIL9h/cmTxjh8wE53vUKT268iL3QrunqhjfdLz+mfyiq9EY0oes3IFwa+9qDwOYP9hvYDyujvM6PZx/kKn8CCEfR9PhDbqULjEDUuwJpAvHcyV2ubTqGLsXLjffO9o2LeC0xPv/1oRv85REz1fy5Go3+vWeR9CaCZpRh1CV8kIDTZgdP6YE1GpOMkgePaIW6x95IZkyaQGXvcleEQxkR7+tRzTufH6eU0KHn19YyclUJTIR27U8L9gfhEbwIcfu8DZLmDZzHWYlkRvHu27qjo5Mo+/vMg3zMXcGUJ2rtxvwRpIixdentKU3SYxL3Ld8sSI785vB5KumhBGwFOekksF73UbbL62CbgrI9U8hrX5DNSRBU9GnGkofiIRZSh4vBF1+IP0n5IIvyknUqpiqYTBUl1w
FmGRdGZlBjDMJKeq6qMdirp8mf4lCXTqpHgT+xYqHxeknM5t58H2DN93Zf0IDmLKeSD6rsHftu5r/225Sk4c61HyTzreuhoWjS0cEopJc17zqTl61lSoH9uCIN66sE+fxSGrBlQP2EBr6kpuT8Y9YjBiSD3oMZTeod1742zbAnR8THr5cmqQ+WBlxm6TIhkBJVRL7vxIci7Q61MNdyjj59y+e0A5+0yoghR1I9JtbwoslLPieeC9FnQw8Ufs4IagNidGujy+Lb0d5hkWRbs6JIgwLYXF8piKJWQ81pMsOloOeC3tYZBtQbmx8PO9/h+/ZjmRRTiBFi/Wv2URRd73kZ0ssxXNSt98fQzur1RW2RXvRgfIFIauda/zMIkGuCYaDSkEUCOqTLMrPwVQN64LKZa4LiT+P4LAT/b7lCUUw9lRkXPy1t6nk/fqSpOpDieyBuOfsYLunra84I+pxB2yCPONXOKHQ2blKTb0zYDFaWhohZZAIX9FYt8gC599hIfX8H9Od7AwbzRwIvy8BYGhdoV+ydxNFCHbohGWPc8TS/WFHP6zNPv/cO5zm/5WQ1yqVmYQnN0vY8+z2TTdKA81VE4YTt6V96CDMr6eTTT3nxrp0fdPtYfjcZ1vNPau9kzzKv2/hGnr1jMr/+l6gX/Q0v3l5CPfb4tL3UA/n0C9o/8/if/QPzz7/mfYh78t0Kv/kch/+/G+J8EQvkfl/5Pjf1u/COz/19Ibui/seSWNPcfyS2Cvo2QqXW5XwwmnQYcYAjP8Wl4Bd71khM+rCfwUtneZry2lGyebgWanRi242/qY2s05dvGSfNjWZSUqfIUt/gSj1hSSKz09SpD43pExyANByiypFlVyqaWBPHafMDm4/g8O3c9KRBfT9V11+bemaM/ryTOhzopWjs1lqOpm6Ic7pTeaEtcufwwN88xxAf7u8u8EpagTrqiKNM/pYZjXr6NxC7zGBdKth6Rmq1VWi1CUcpDKayuz+33uAz7hYkkOaUOqJH2wAlY9ugv5GuUB0jg8XmiEdzI0XC/wiEi1oXas1xdS3E+HyU4lahr15rdRIBiFczFV+h1XusIXMWxiJ6Xn5KxLYY9SI+yNarAYuOdkFyTzJI2fNKIkKMDgyRTsVlHZnt+fbeiIZY+6rolyO2Op11SKbcsNVUJTbfWTomnSoqhKdMDYfVRBXcJXJH4w0wxR6L8tQC/AZ3GbFHgD/3ZuPH3I/i1Ni63UU9RNCe/5+t+jSVSK8cdkUocxVEmo9mqRYBciSz6h/QZP/L0kW2Ba2MhLAvlz8+ibYaT/Deq7KyPuaTIA7oHK0dRVt+qUpjfskptFJtXBKHZ2kM95MHfAnK0BsTxdzSR4c3I78x7PqwwkitorsJYgE2fFPeOSEkXLDGs8x/xdAYCX8ldHnyou3nqN9UYIZctorHaI9kMhkhx04aIRkFj1oQ5847PtkV26ehHGg49prMrboBHuJdiSyQ1kRZQDExS1PIUlGUrvETqG0V46rSVAtr6G4Z5trE154dEbZg2TMwxnEUynLyY1OtVRWMhSIFbQDoQ8OD8VPQpsh8cJBB4BOiU34fHjVTkqjg+BDm2t0MjSHyjOiKVLq321NMUCVdYrwkkMqalxH3Lc2PFNXod+RjrhNvqy5/83ICE+8TlEMEH1mBgfjCokvs2dZfnCTGvtbd6+1JZ0VH7YxEfkJhEj8DypXNQ4g9z8Y6iz+TeLQe5nRrtrpV0Yosj2Ia1iYgof1mARUtFtsntCgKZIDyHSF8AvBXgE//C2zb/SL9PZxB8XDa4Y3uI60Ho6qRJlBuGHQvg7C4GQhgiyaIE8PDK66mok37pBBtN/tAaLUQeT6uEWDyG4pcVSXzCaywr6QEQv6FKxND+CIuIXY0lS8cwHCQC20eRua8eIogDFvfvcXrrmyU8AjIx2jEM9WV8Qa8JArAlt38xmMHJAmJhDKxtU3+JpejVUkA+TGf+INDkqVaswkOk2BcQ4WQVd6iDJlkZsaJC5F8ITb5UiXgiWwJszgp7jPlqWrKEoybRn0POZMl/uY3kSCYgUI9/7brzkeXBkb+MjMGvRwcVA7p2qAumw/3w/dL9FEtqBTbsGD+KksGN5PRY1O2IAvwjZdfj+84nKo4Fhgccjz/B/fHA4nC0iXj542OFfFnwRl9qUned7Hg/YzaIfwXOIVYw1M2eTMAn8AuHTX1NdAnNNz43BoGpUpW+dMMi8meboTc+QfgCg2j7VyXRu/F78uF78gax0Huc184aJTjQLkrEEPKHatYpjsqyv6rzPShdFu0mX1FdR7w0Oua2Q7M/iGeWAFgQ3YDjwws47MVEC3+vf4zwcXzZWBH/iEdG5GDhUoDlxi/7ATiCRi1zZPdCbd27OUl9H7HXnAK5H7LfZxA7PsvuiHsXDHtDBBUPFL73PydJCuEGmB3Bkmj6Vbd5Tui/JfnD1T+/wSGkKM3gDZBhQ8FlKsilRG01rqFjof8su1S1T23jAN7zZnn1uJ8l3vPljRfklUeRYCEKx8HgQgV3KhxPn8HE66C2VHm0bblSTDE+18qzprnjJLp+GxKcEnDkuBSsr5NFcvTgBmfeBuCrvrzhaMFcysJ4dZNPbZ1PII748hj6JagxwN6llkYZRjZ1i+uUpg7Pq3Vi2dEigX3MyQ4pgctew8py4A1iLwDSavj5vjW38cQqoYUFs/vyNtt79eonT8sPzD5lDbiX0HavlB6mr0m5j2/XjlwfsWumXJvnlKlkd1abH4cRmUWFUqv8rPUILUVb4XS5lxQCL44NhkqH3Z6OLlneDLcB82yLj3e60RI9mhEdCMxAEzU4Mw/EcURe3KdH8jz3JuJ4PqHzoky894lplZNov/Y6CC2uprtKX5+IVbEfrSzx0N1pfQJy3v5JNhlXzjTDh7A5kVh/+r4Ou62UYIvNe1O3QDm1WUkV7T51JJrh+jPDQQanxXPlQ5BlbMi1tdwhjxv1vXjcD5Tj4npqwD7VOnLLBjHYwG9FBArvLz1bEjjQMwq2i+JQr95FpJGaobSJEJqgmNU/jL8qQRW/tBgdR0su9y2XD1wxQAtCEiRsFUaCjMQvWqHTuzbDShVdq1uH06vWdSOlbkUVv4749rsI1056PZztMmNjhYIBesGPamnOEofnnO5iyFrp0uC95kNvgFl8xBo4AMNB254iWEAEikyuNb03rrj9y2nG0LZegeZbZL5QsImwFCAJMCt/9a9PV5/BlTJUKO99X4GOJ817YS6fWZAU2Sqi4qkjOPA3osnYakGDTT26aFNjVVk083Ix5v7VbRv2rtwTrPRU7UsqtZNqNOrlIELAeWebkUbGXl8mHa4M8ALd2m7weE+8G3N7TBsfuy5Rm2n5erb9Wq/IC7EYt4txbOkNhvz58EfGaOntbyuk1JMfzQq7ob/WKvwlYfA1zUGdm9QFutZAfkPsh5IG3f+Uleoxa7LeiMBE8uSEvVPLTce/LFGBj0pPKln1o6xJQuq1CbI96ZdIWoIltG0lb4t60NhvKvXq9HxbeCzOErYVhxG+nUbbLKlZtf22rjraYrSXhTHrZnV1
0OZwCOk0EbZcWb8UwWYiqeXTzEbFHhtK6FY4FFwYw1PA6tVHz2Nz4azGjt2PFiX1yf5yAxB0F0JAKD5JpCuzyuB/j+iC/48TXfB/Z9Gltv+ILhbybYjMPDjmZjL5rBm9ngC+JZ4nN2gyVzcSY7c6Jnw801ESiWu87dQPZa1HygXAqKzNIZbZLlyNf10JynLdK9za4XkwHFCwb10oZOAHxaMyU1/q3ij/iIEjQAiYLZ3wQwyeB2IvfxFc3WvGvG0BGhhLKjmWgunf8+Pst7u2K5fUrezXSrU9nPw2qfabEPsi4nrngeWBwJr3IdJBZACq+jlH98PWR1nb/mJ1iD8GX9r58JxggI2MnyQHHjcJcGLAZzvZsfdj+TLGYClV0n46SZLIZzuPbgQnzfgSgubwITRn9/zL1Cw8sMbZgqE/oZ1cLu+DI+ngIN0oS6ZaxVZAT8ypmJdyloo4TvMqthTe4Bsz1+YHozVi9g4rmrE7H/rAC5QjIgk3seq0sYIVgXdBdWOffzC0V5kpYdUpMM+9H5Ob2IEDYz82M88g4so6RIRSFr9xQpVmWn1Z8upZHXq2A8w9MYxun38Yx4ykuZQviUgf9vblo7zLHApW3DNdmC5YGoLrL6PrC6r3nuJ3uFU7hvniy7IU7UlPhbwokr69AKgK/848UcEAOeVfvUIZdn0Jv5cZzB4JlFuT3zchbz96+BZt8J3dm/EvqGH7quLHx03H3AWvjyTasV8F5QCvqPxpf4lndIjTQWaFevPbpTJRVwY9gv2exFsLIdYvtw4/M+WHTvDj52BhmsB1QOuw1NiCCqKIQpUnAePAbq4EvWNdFdTATGpFIfkSiXN9kX7avPQ+aQ6v1LLmE6k/sk1IuzT9kuxvPSJ2V1+WEUoskNhVcgWV3Q+1ZPokQE4i2jUZGYlbbM0AVhd3sAPBTuUnCNe+zrOoWnH4alc46h8tHP9+ZU/KoQhHAWhWsaOKS1ZTahAdRpSxzsiPEPCxd+F3J3Nn0FuL0qU7x6Yj2y8VGo34l/q7GR97QsUsngXqlwsZpiu8v43kBBwDRg8php0EBzBRYWQxS54pO4+k/soLlsqi8g8mN5dmXOklopIlTqjdrXyry4fhVopji7YMb7wTX7I3Gs5jpqjstlhsxbBfLMJKsHcpoOsK2V6AhzqXXfHJrPILvHU/bUJvX/gmibmXBUvUQzOc9z/8x3EIZ5reOJm9bFLUuTyMYxuMnNx0uSiu3fL376rkvJMZz5PsF3r5OEOPX5CQuNa0mBHMl/FrUIRgabi8/dqnXaMKK9PiKqH6vYyA+mxOqrq2YlJjfEPJIgA9R9Med8e2nXn8tBkTlg9/uP8eBM+fnDpG/DX09xfK9JPQ8e0qYPIWAPWsK4lRqPaS7WTG+1N0J4rnLM4hQIZ4J036qkjr8ieAxYU1bPAIxtN4Lx91LxykvjPDOI4WIXwCJp/UwP9k3DMQJ8nCB9G5dz3/VQXaC+Q92L1ReaLczCzOLyvkQUcyo9LR4rUUI7XsPv6flOj1xy/2Qx+8TwwT/t++f0T6iW7sMLopD9CT+Hrly58Gechp7Zce7xC2NMvb5hmO+8ZKsjk71G+JQVpOlcdRaTGrAxtqRiIl2ZHcOLTFArK/w1MUMPt77b9Fv1LEnxUT+0DCf+8c6xueQe6r/GJ33nk68l6kKQeGoamaGi2xpvm0km3bd4FKOhjppsmAxCPjKRFdrIi0s5ZBRTADhskTzQpjAUGMwHH8o4oZfOBRpEoK2noH2GZv1GsQLUjM4jrj8jRdh5sCvR955n0StqKsl2j/5Hp1y/GUrAn6UzcPoi+aa4cxkEQI/cgI+p7iKxflJQaG8G+IjmQnTcwXvSWME/2ssUrsERkJFy6wBc2U2vt3AiBuTEQhvqMiXqjrO56dN0jcLNXcXiuavWQaR/FP9KiXQwYtAv+OMDVE8ibXb6gedEsUrA9xn+qAVCEEVxzPMnNfpRaCMBW48nZ+yvboHQfKDgciteqr7upOl/kerM4Hjx2N0kX942i423TYCqPXRyVeNfwB51diXYcBs3usSL7y5Td127Hs+PZJhK8POToeGka7v8nFXqbJQKvtlSnjhUOUqDMSbAj+R1h8kp7HZwAr/hpYmBnil3fU0C7BlwwimwfHFDHYMpL8twm+kHD8U4L7yjTpMmPrcBHpIN95R7LvEOPZLkYNChZmzbH9jx9A+XUoiWYqt0x0RqBu8btfnxy31MbOQhDMX/+kdf+QbbZE10TAs+OYMATdjW3ESRAvCb7MAheUNozQHeNZohVviX6eanN/w1SdUZ8WJ6y/xposD6hIfo8x++o16ZMhJLW5swWvsIbElmPOS7UPufLj4AZfAr/j5V0b/5S4C692s7vp4mg6RT4+RCgrjhwv8H50gGyKtQ1N7c/vetSXFNUPe+wlFPIUaaw9kGsSo0NNeO1JEAbpmuEktaDU8LxUm1Q2idK4Op0eVIAvYnO3eegjw5wbMkam2KOF8pJkyT/3VzboFNhYwFO8FaOW7eWeyFaaRRngKszuzitQ/uXreBXKgmRMIlpQHBeaMnQxN6mx647kUWdL6HkSX53AKP5N+GAfdNwXE7788eoypqxTlt+OW5YgEJS1xQfohfHNbgPtf4iB3LogpzU9mMCt9aLz5o+8xPs8SwuMx8E7fORkD2qjIzgfOYe1xuXprUgamDDwmeH6+jF57CX3tqRho+350VkLVlmON8zQZwaAc0er3i/MOJpTJI2Rpdo2wdu3vhQOCokV8oqNN5J/9tL5y7YkLxfakvhEtTaXK45ncwPbkx3YbZ8aSUUyPZ9Zz6j1LNqtLkubPawThKf4UBF6Thzi73T4rra8Vo9rXGXHitPd3FKWlsJx5RqVPeyRkQdSKzS98NZDTgzSwA/+Jc6cVakGBqTqoMeMV0jx0sH1yzbxC4T/tuH296BPTCUf0pV6VTEf9W1++yyrkJ0xkTZtUBIJKer4MjeiMpwRYat50rc31Ivb+QHKRXwxy3/jpPtoHhE2X43D51djoFKU4Kb/PVIN/T9OqiH/jaWa2AD/SDXO86EQTXcT7iR7yKYk3D282MTquBwmDnGqNhtiG2itGVBblC+61/AMnbFkXmRWMsCAk8yMUdnVe2xZRDDwb3DctiMHF26KnGX9XOGlkF/YFM8LeFZzpOQztUzqpEIh6ulGp02brk25jF5C3C3aTPthe32x30GRF1584QsnC5hs+2TJRY7pJNQSGnAz2qI75XVS3xKHF5+pkqZk7VkfSl1PVeNkkRL5xjUz2NTA5mevrZP7McJd3itNlHdwzLN0qZJYvwjg26w10+OORCLL2MyrV1xKXkP+4jiMVL6Mv4FVpvOSv/zjQw4i4G0kYih9hyfE8Tot0t7v408L7M+PhOVpurbok2pE47QBdXTqzl3ctwKieo8SkA9FX5cV2vjCxeOhJvIXpENaSiHKIPE3isfXrX4JNm5HyPQ3YRI5iYOeP96NzLVN0QIlwAbEjHHRvEyMl8pzR3WnjwRaoimLoouayRTF9M4vcF5I0SC26QUC4TwAjaka+xNDT15
N4R2ENzrS77l7v4fXZZYKJTUfFUgD2MpKlUj/rOiCTaUMaTehJ0cfgkczzM4p4BD0wX/QNcl28hDsNqE+BcRkg9Z4j05x4cula+U1SFaQkFTl8Mb8EyJzVTwd8qVCRQanRvB28eUMNGT0KcuTkG5IWnpl/m/vkT/kpGAGK9FOdo/apGaDQf5F8jiV0ydZDplgPDf32o+X/+Ry+ZQf3k9/iQRsfcMNf7UJUqDRas9donOQTlq6R8rI2w4G8YaHj7E8/UMcHVm7elpDvtsOFoPqSIt22T3O1GBAp/Fdi3g+FZrA+bLfCGlk322NCf4an7kLQ+5Vkt8MjxafeQQWbQHu/bT6aPbcju5vmZBHIdIdMl7i1KZCLS3LUqGK0PeSL78ctNw/5ZIh+FZo6cNXyh1cEtIqeX0lfxv2/ICRcHav3j1XTQrj7EX4N+ZL/e/36jfoikyl670qBX8gd/YzSQubvLoouD6PEDdjJn6ABTtI/qrRzr203kHBGJlT2XNfJJ/vcfVe3IPQ1TDt465J65Qv12NSMUsB5KVvzH/QNTpr/sjHr+DTYKk7dj8NSjK8MOBcO6dCQ3g7BaM7H4DSyrz5j10fCYSHxQbmOwRnDsh1VL2ra/KqZJU3240jzgncYRCD8yPBCfSjkVnf8/IN6PjvKfzkDyUfABJu3NfJUqCtpdIc6kH8yvr0f8gZHYIPEFu3kbo7hdXs+tr5qwvtu9XDEQRX8srhW3WLDxXhIEsN9yH0i35Ya6n+fR1agInnVCDCYKP74O2220EyhxOSJA9yJo6FpaV6UcaRsh3WH+nSt4QrXvAi8bCbrW40L3rIT+oTbIRjruJelzj3FGDyU418in5PgkO47f2KdOPDo4D7Djm8a62ohZ7qmdPHUz85euFGm6TMnvTmjYTzqmJh5ztZv4g/BbtX9OsPPOT+6GtuCKpg1O+/zDO/qLR1qKdwCrQvG8IlYc/MkmXGWMVRJ69zdPpzXcPd+5eTtwJT7A/0KcTjo6sO1nWiFjqEm7leyyFkBb52HtO3sIfQtl/0xXhZqoLuxIq+5oWX6WtSHQoiG/hCL/XrkrJAZmlHZqODOFcYqkpEMrSNVkGvTH4ChSz9QF3BtlqlEnGRyrIERfMSSFr9V1zRtgudq1SjK3Bxy9D+swpSb1CXA5W1ANHqkyip+nfOY2cyrKkRNI4goXBOGSHOYbdLIZNnEZ7hX5HHhXrEyZqslayO+jaCyjbM7zNSVB0amiknS4lF+2Duc3KyAQvcaqVRPzvq2i0ydZE66AIB7fdoODtaci+S8lTD2+j09Gu4T54o63meojPe9k6LgKI0oTQWlk2JoK68R/MbVDrtqq5gufAfqgeZgSpW/ErF2VNeUMN7VAL76NpTgr2mBveD+YsqF5FPz/DBjQkxfpRT5iQhFXjooT11ip2elc18sheoplhcmt1MQ0yzq1TvPvHHZ1BaD2u6oUCroHM+58hT6RVpvAdgnHnpyasHQdivvNMDAKh0LEaUEKblzGkQC/KagXBmW0exKU3WWfpFsvRa+sEImgv/Ox9wAljy5oeqqQOxPHVHeFetR9F6Cz32twqSv1l40FNgegI9rnoOOS9VIyApmfvA9WrTJAoC76viGV99B2ffM/cku2/bvuBZcMGcP2VgjqOF5OnsKCGCuA6sRJVT25GEWaN8EKCRA8+I/QrGWXmY82isS7Mk6l5N2QfRRcC/1We0zzrL1yEPhO17iTcOsE/op38eCpCtJTjaRHHr6kF71hQFpwoKKHoOeO5e3bUfF79WNmqQgBu7eQpq6XKPwPgCtGjCbvFdOejR7Q1rCWi+MGA9iicpe1ZrRB1SWRSqVqE6HrCVYDyXRdeVL1ZzTHrIYVbPr55uvXbqxhBwDuhVkqEH7mHX4V1YN6Lgy76NosMw8VfNSpW3k3iq/mYvXt20lgKqUt7buz2+XbajGuO8OD8qRs6npjwFzQ9v3Loe8wHUeSI+4f1fBu5gtacTlqpC5YE+JG1hyZirwzk6c2B60keyWQld5Q527SVaQInKOk+OBRx+SVYWUlssM3C/j0X0v1ViOkFOAQnK0obBNURX2Vp+4YTl77IvIVI2uk76fLqf49UvId11g2fSM9ZL/CvuVe9aKrMxr4ovKa64hq9A1qT9cT6yJSayC1LTbw6m46rzXs/EySb8/28W9KHQ/++KBf1vrFikkvvP4hIUkt51CWIBBlmEr+rneg+rl6em/cpwpbo8otJhtV28rYNBK8hmYp9WCvqb/0b+ppqQy887fEI6inpxh/Ug/YoXzrSsCyvGH5WLD88OvhVCk2eR3yLxP8uYZ0l5u0UzjvaiscNdvdwV6sj/IgFq50oizNff3UgohVECpFXmmM6zXM83qfVajw+GHQJwuKosTdqzZFMxa6o9mVmWRUIyQ0niPyGjUKXW8t4t6aXGmStDMRJjTt5mmqPJWRytlHNPVVzP3WVLNoKgsXjVRo33w8IfhzZE55l1ePL9Pf/ictRL1fOtyBxryTRVerNPflQre7RKsOMo1Oakrlhivmpjf9bmauJ/DNYSvbwQY78pIL/1GifYrsmV86zIGDMfKkfo5H4lYm1QGr2+U1VTnFICcXU6ZQ/BEeCnLtwUdA3ruGMILzhSXi/WQs202RubS4mRaklY7YlOqOYb2VtSc5d2T/nW6MWXnvaCP0uBs698xBfGEt+FQF/D92pGPM/pilS6pMxYaR/cV16siHdoFLtWvfhboyNux7VTLPUok3IpLx0TS1jkHQe44QKnpSBywK9sY1gR6NNQMuNenwljO6vnk9STfVGWe5Z16ZaNCUjxxe7nYq2/YD4Nypy/rLd0UX2Us+xsJXyV9FOg5IM9usb2aodvoQDHZ800YhVPe57XgicoeDzhjTq97Owxs1wo9Ds5XBKyqhoIq6BOBkuJ+Q3D0SVQeyzd4gHejoBYYzFSc4667f3A5+lX2Hu1tbcMH8i0uYe4Vv4jgzGifeePGhHva08MFCL1rImhkquLC8oyPYBxGBn7DvIxQkW6RgD3BagY9+ZNwPPCFOsFPLIlFcVM+B60INqLXmzQb4mLeC8Z5Hywqaur+uKBx5qSeOBQpBufpvc1ET3xFUcfgtR/eoYtZzEUpo+TJIjAWILjkDwNUjVHVq4xpm0PuQjMg7ZeNfrAgRmoA7Eq2fDdIR3QbtssrAiaPYu8LuqWHd+JH1HoTwkMDt5AskbBJr4EJwEPD6JLMHr/1BTaJ0UG9nlcJJPx2P75uRtp4aCm6p69d+w5iAMi85P2zvKWGL4CbYvxsoOQoylhzQkDwkByCOAnMitcd9jhL6X5U6PA587DDpKcpy+zGezg0XtWTgpHPa1YGOuM+jo2bzMPf/JBMlU+EFPNnam/v4rzAAx1ruOfqDrWyDDUeLyqpR/EjliW6Uu+6cPz5EeAf6KIILKlJHhKCXAoVnfPkoJ/Eioqqn21h0NceF3KSuPVGbWXDcvEMALDatWaNVFxJBEPErmOLwR4/mXDhVhdits+HOHqT0xDR/7n1c88+V4XWD/ZEBRfJhTbfoWIgqGfHCd8bOAqug
0D/x5wYX78Lq9ANiLfOpYOcTA8rl0ok9+g4gZyfgZPC+iqQI5HGeQwfyntiBcW9DcjYNTXn9dVwR3AvT/FWPxAjg1d709d8QpsGDm/8Nkgr/j65wWPQuyr/uJNP7Ti846WOfP0GVrbTpFz6lCZ5SEJpBW+1UzT+6yL1EjrDfLWn7c+R7gYOnxyFw/G0Nfwf+kdNSPqOLmgv4oKBruTWBcaiwzcc7e8m2amDm8maV/3I+BvAWERsznjXe9sbZeO4irsdYgnexG6JqDzZTUC80YBkcX+gEIW/IRPI+nEzV2Cv71IAA4iOWcvhH+jmj8vepBoBolTzMw8X/juyggLz+dgFsq+DPutvHbrQIXOz1Z2QQ/QnL9FYN0bGEXzACcvDEuLl+YngEDXIaa4AB5Nmw+u3R5/6Ywj2aHvjahCz4EscZoDcmVSQjE+0QcHXhjq1dR091NCf/NHTiZYSBvUYiGLF+iYr494evx5I0XEMGxv+GnsGMWRLVDXQSH2jw3TZDHB4T/Wp2rHWrtO3P0w94wh1mSkRWXD/Jci6kv/IdQlZw15zUMugYQF3M/UknGhC2nv9c74XhWiDfeqxpVD1gsWpqt7Yz2wFPqifsZxbQM/u2j2R9TWmJBoXg5z5ov9VBI0hEPrUh8WXw1ZjZ7HWKz6C+TqJiQNepKGsf1TrKK/mpzxzdZn4UsFh/JY2u1tQrVJHM4AI+jWw84rZAUMSfBLBIVG3EDOnjcvPS2qa7v4/iFl8zJ3nXFymRjUHTNY02nS4oa89I+dGiQ8blVsS5AfT1fDNu9TdGd69M72OnLwemll168clnDHXouLxSTJ5C6b9k9gKmf+zKrfe8+o1gct/MjFl7hak983EEg2QIuHVXL7avjFnf05FtHugEvSS4/gaZzHUg/DEi4ttue6Ytp5gudJAhsWFAbooam5tMHWhpJVxOE/2bkDXho80laQixUtfCMnZDEv2b7Dv6GusfdCW+hbRYCfqQzdOUi1TNP+qHEJpJ3ute4oII1T2KjwMtzpN8ALCRGETzcelTu4ttrWS91DP87u6av/DOwnDLun0MXpmmuMIC5UkLldlXumlLjwVX/4CnBoO65TLFeB1fjiC9B/ax1fWgSm8XZ+Oat5drOqMEYfwsbW3Fi2Bw2WM7viUUq9lawgCNLLQOz5LlknaFXtx3JV5U0enDXb3/B5qDj0DcIbE5SI0UZPNkfKM4K72hea/GwoN5IvYqo2n/OZIUoZR00OQOK5Q5VKtImpxYXcxTCqn3/GCUHILrIbiLleWRzCTK1meJ277P4dTYX/u0GvYoQqi6UdGq17kR6/JGq0qYUR6eBIyed08htrf6NjakdyV8FrUYUaH2bqRkAIMFkQhedaBOwMH6x1jELQCLZSn1bVJGZFj7b6Dg5lmlPFbf7ZO6VHWXzg+cImWrFbn2uQLPd1sEj+esaS2V/WcVe9P+bwJE69b8RI31bZ2NJcsppDTawlRTxapK3cW24n15oWzyvV65rsxD+FoaNG1lT3hb3A+WI1ujvV5cROdmGCn2qcFumXpVilVIe25wlZarD5ZwXtenLSF0HMAiPaMrWA3k/6axf9OiLQ9HjSS/+WbNUSwz2cQfON53u8+RLKGRtN8DeXuDz2rY5vudedszIK5xl/sHvqHtFnMGCFN/ZHONCy5aLc0splWX5C5vv937HW8P8APYj9N9aD8vM/fKkKFP1SMAAo5PdeLnpAEE9a0nAIcrR6BWAQq65efUNHa2ndCY2G0nzcppGn1/ce0Oi0C9Dl7ZiWHuHxTHW0PCe4lE1wIwQADKmusAM7EAgbh1/IJPYVo/bDbyyE+BieVMPhxQ3Mn2VAiP7Pm9o6bpj3x6sIZTX0dMr3ABgMUb9+/QRnODESE5SJLi7d8+Cxf8/Ji8I5XXubrkDQFE3ngEYlAaiA10WBarZp616t7QtFyxETZaKkJ/y6LC6RIXMLjgNvNry8rTMTQyXBzkhvlosQjvGuFQlNIoLMg5dHfeH12JEigcjJAEnvnflcRtS+U5VYfL1d1MGwKpMQ0CQiPhtKUUTrHX2pYYkVxUXsOoyvEtMSGHkiV9iNcbkOPUIWQyINgq9JoV1r7IVntauVizE8H80rGi36Xu0gDVpigRwden3Jw5C66GYwNa6Tox/u7GU5cNzFNwBWb8ei/Fkh9Mv+0liceVbK+cRdeVYeOOxbHuMvLjSsq0r676tV0QeuDx/851HRq9YrYNpLX+p6UQW2KT71rhObwPCk9kXBI2bKOd5Tqd0d+ffFHmfpteJY8uf5z5VcQdYr4ai5sqRD2TKNl82UdPO9iEfPtRIPxxfcAGyoH+4Fjb9Y8/KCjICvVYAZXeNjj3czKWfxAZLcai3hXWWQ2OFRkgDAT1BoP9HBTJNl9BNqqoViW/WmF2/mJG/zYsoU9mdNuMyyv3wo8xUYP7iKflCC98+mW2JoMfxi2zJ6yYzvoS5Y0lAl8Lr4gjPp7Vv2z7oaxDDwALAsm+p6t7KjrI1ZCrI1tdzV0+lFxrPBJQnk76Ux+dQMmjzIYwmCg4x1g37lE1U2iY/MNisgL9r9GIpjFBfpbg//88p5MS8f6B2Ls8+zkinJ3UhK9zTdNgcGJ+ypdpZoNSsL4wt6muPzBbizjTgWTmZRUU50Pl9ul+Q86h5zLsLU7ijwMs/4XddJNgXZ/t2lP5qHxAi9prUraqTRAXQhZ77pb7dN12YHalNKkMvTK9XM+zVtS/ZQwUmcErg7/HT+dgyJjXxOnIrCM6I2XV9MxXo/K2VrH/+MG2wc4FznqHvBxjhHbiK8VZ/2z25Ggz1SPbOnXKzgHVrGk3ZLYWbtyUPAO+v+EY0fwq6o3Pn9/obR5SOi+J5ECDp7agf9k0zKjQ6eW8nOrPApvaox8Hd8VlH+OLLj3R8N/N4X+VZyiRXwq8qCfJ3bSn9NkS9ox+WjoDn83s+xtj8dy20SkSCTlXSbi/d/voAjWPDEdtpgEAl0n3/8S/CdcLumVy3pZFfA5IhA22dXRsVoLmNJh/y022FKRvKD8E3lb4wLHlWZ4nlSZoeSlg1Ciq7ScuVb9kh/z4VBvgPCsr+CN9CvhNhfy2/w/7KfPy+wIkbwKTM7sdMQQfxOZx0wHv58XQdM9PBQhcUBvj1ArcBzS+keScsCTLfUZ/vth5lT3dDEz08G7e9uenHIRnvluO8Cs4Ia3rcmS/9WL9o/zR2Qx9iCfXGDb00l9Ig6x1HaN3N/vqGoZ4bPuqHEA0nrW1/XwwEMftX8bBoVilMt8zwTmXUAmlbibr1Lqkia70mJHM9hUABFY1q/u9HtyMyubauq/zjHwQmMA1vGPAuBTmCqKaFqxVGBqAO5+k0iT+C/P0uRP/NKJvEoCmoleJ6XIET8iizb1dSew1N2KMbbVrD1V4lGnQjHWU+L01c8qoB2M1OjfJKKHDsH6MeGSVxtKkPD3VUurxHYxMJe6KWXwAVLBw3FBnCWJoyWrF1n7CRASBSGbNPXw0csv138j+XJpcllrzbuTfWdV4oYMTROllJZq7LaUs9PC/c7S2bVxKnfaEIyj177i
PLjfaTFuf5EF4BqiF6WRkzBtgXdmTHI11uAFK5ajGOC3cmpiZ8JCCj20PIu/42T+FHMBOcoNM+v3JcIKR9tR1SkdcqOlhVWkxKlXZXNiAcPy6bzTwh35yBjxyQ0RftLlY5fDftgfj1WxDUA+bMpxJMLDorfoUDMyihBFuuxMSDhtQVSVNxLUsXpHv1sLCkpQkRerfQb++Ekd9I7SkvNL3P1TV18DhHF69nZ7SG7GxJq+Hc/wBvoVkx1X2kVoy9FOiTkLPCP1jIlGG1PmDSm7oPuV7GHkj8rbJ+F6GyTQhCSXFVdHhM+9bQ57Plr/qFeJr44/I4ia/7Uoya+PDYehYYTTR+ukNqWYvSWjUegkpYrmz+32nS1iIN5Wj3Gv1PV7ZPXM4P0JazNfExAiUPenHO/tWDJOWdE84/wszFQZJCWFLoue7Wa/IZorB34KQ2+8gqUxZ7+R3ahGGpsUZu7r10Bz+KpIP4bJm4bKwzRHti3G3g/QKcFDKvz+kYJGeAU/36/CS9uH/kC7Sx4jgRD8xxrHxGnSxTqi6Gb8myE3un5MiafZeffN87wDf7xv/RvrMgxeDih7u3E59BaFqqvgmzdQBAg75/vO4uLeKDObyWxLs4jf7cNaRwokeFGZ6FQeaPSU0Dka88Tusuo/i1J+AUYuiNABtZAe4XktzimWaOUkNZjSFP7gJ8GjP+jzK/PRg4fGCktx8TqoMQJLE63EzViq5oqYHDsHYPRVuhd3xJG+hYoLL5P1KDRDTJmoZpotDbNaO7jBzVDS+iubmIYin1tOs+bhC5VAd8cNwt2ZQPZKeCzUOBF3Yq8FaC6tIgkliJRpLREqtPSSObuf6+M7krh2Fhoy7L9kf5kxT9S1CNnx8jo7woKc0KfZ3zhwfuIqAIkLuz4mjPe6WA9W9xURYeWbqVBGEJLCUhLhq5Zkvx7B60MOtPTVgpQSoDIl6j4wvx2HNAQ7fNCYxt5Yr1GbOPlUOBVVVmI3FqKJ9be0FIPXwixt0dxDcLqZh8FpxEuRbZL4u9Y6Y2A3vQyCErGBiS5ZIDfz2OP5oK/vIO7D/PCymaxLoN/wf93e9/V5CiWdftrvoh7Xybw5hGEFwgkPG94b4QT4tdfjqp6vpnu6rmTNZ2T5SLqISulPEjsdfZZe20DpQdRhMEtoBH3GNSWp+Gtf9U3ENNY2bjqbjc7tEcZl06mW8VIZrsZrGELIfvWmnUKKC/dDW5HczS2U+zGEqNVLY52fFW0WGJT8G6uPdW9kkPLDJvE1DNsUYmiMRL4rJZE9sRQ0RpmeRvkzshck23v/o272srgzoyT3LVPWFmej2AmF8wfZUXOwQFRnZpdGQIlLw5OqnoWAWjA8jk17vf+uSicpRVNJ0OL4ROeWYQ1nXung5KnIt8eZsOKjCahbSGdbiW/8OElkC2bK3bbPOknovZvzIT7l3i8eszZCZMzpaoqtybIAyLU/CI6vjzjUchQ4u3hR6oqCcc5orapG/OTUgrtWbNc5prXwehjcHE7qNQR58bhhXNzVWg/C0E4KqmeHoGPv/JUwOcPKzIrsWW5k2holDQlhX27hSExJdNU7Ybm+/L1WlUs+8TOT3iv2mq2AjGkUdtpl6oRlHMTeaGRgGkRGroXlME+tKHmu5phGk4961Ym+vIzvuKbJIpawTLBKZc4XYt9n0d0aL4FVX8ESaeLEOqPzBXv8MVLF8Jp3OY2G0p/+bwbwFgj1u5sGsaGKDYL2ntUn0sTnmaPpaegmZ3/TnnrN6AOkF9QB4jmuCx7MCYiBz+cxn46eBjEd/PYD8/fXv+f1xisT2/5g6Jw3CPw+2Juj0/DwceP0/HHdfqbUtD1HRAasrJpfver8LNeEB/3NB2/ICS0ZZKAy7CPopxTcwhjcM3HGAJ94SVHAGHhJSX8FQl+9G/4P9nssNofbQZ9wWbvZjLqZxZ00H8UdDy1hluesS8P56F55lU/lp3wCQvq8XIftE1Woto83ntWqXYHKbJXaXvkUos2nHWTDM6g32UTylWPuUivxQfyTMhbeH54CSTpjAMCE/oKergEoYXcO+oap45bwWETqfYpz0NW3vRSAv5etQ4mHV3yIORPDouoIsI1vewTUK105s6vhmKJFeBuYdpdguRVqYqSWEZelSWa6asVjEuCieriomDCXpPcYP1V3RzqcxLWF2bLffwgR7NYYZVNeo/dSzb+XsEitPe369qwZ4e4ctxVllVLh90HiO0s/fKSYeilscz2Xqea5IR8Hq0VP5zJVbeiObKSeX/wV57vZeXe9xp3jeFNMcP+uG/aYlVBAItBenDz5t7UheM/BsxpeV4NdF9UXPxuqCGNj4B2W6g6DUpQsUsSjUrqmuR+rTI/vEexrLA+zAbhkzzXDcMGN9yw8zww+IfWF+VuxJELrM1PgX+Sa0a+3rhgmL1MdPeVFKHLgj9PVGpj6lOZ/Pz6IFZfEkmMYQ3dt5Hc5A5XPx4kV4RCqhR36WxCzsyo3hFeh3ik2DtjOqp+ju28kHdna68Hc5nmg7c+AnF2YI5VmouxduXZJkvx2HFGNDxLpV4v0OALijKtfqiFZZ0zl2s9Jihzuo1a/IDkJiwOhslfoLm+dyWVrVUx0LZeXeqtZCOyuhjw6eRKPcSgJ8m6KDa+1YvPEVQLbhhQu7CLocWKJAS7jGH79TEkZpJsXEWGUUKPx6HJ5hcdzCowRmfRW6HasE+8ICUb2Lvf9Abj+mHYt6tqaA0qTekesuGwsDjwM68RNsWeNFGnJMJguutgXXokGWzBj5kl8GEPcpqHT70EePCZDP0Gs+HphmrB5/YqQP/D4EEo0DVAdRqx3NKvmdGi5AipzhQFo5epqGx4EPxT1hWkAYqRhQcZqdDklga9p8bea6THgV8vKvKA3R3bRA5IFqhIt3XG54XKniyMMiDMRezOAlk6dH5cyBGxvIrAiorYGDwsNPHp1rTHsbl9DZncHJTSAMqRrZOEI/BnLkieZEIna5WzorJrDUzs0ojQmgCG0bLcKMsWOLkuDmUMI8pV967b0afbklfq6htgylKgQI/HwHgFsY5X+yyarO/CuVyyyX7ObnHO53nhtzf5Ep7PEJRrLDKeL5KCO/ZxncUH+60YzniF2vd5LXhKf5WY00ZtYUMRdJ2HMAuVq3JRIG0FBSto1SU14nTSBBPGwtPYsGYJP8bBXYQUKxL87pj2yp7Nq3OZJYTK0AS/OE0fI8yKh350hij4Oi+gL44FAsclzy6LXFiTfr73anIO9jscP9Tr6tPGqIaox4f49YQLyiC7WskwYTDmF/7+vDCGXJ4vl9G5ocODnkGAJXXSDg3VxpibqS52oA0+bw24puHXWsu0+3VSK/Oia84U39Yk3uGDuzqmXz8QJFC9vqAe6uF20gs3+hoH5JUuNChYVtzAptoJaFfsPhcg3msnTf0U4UmOkj3O8smvuEeRcGVa+d5sR3IM71LUsmc5BArP6Zapi3rXCCISzryNXJiqWObXbGhr9yVXHmO9PCEjLhwEUb6RqUU2R8hiMsf3t3BL8HitQ72pr1gDFARrCsjVqu69hixH
ieX6oW1iPqJiDc0hqaPYznlDbDWs72gjO2P3++IrUTMVZwprFHVwR42UnlZ4RjYSRbd9pzs/BVlFbySi2902FaFtd840uJwQgZXg443xfNMgoizTKNNBswDb8pjO5FKseqdcMJtV2tHXjplD/MJ6rCs8jsDqJMIDgqYBBAvn8O4s2OLQeIcSuGKs2V6pS2TNRW3fCp1DkvYeBiLY1SsdMrxqWkGXvpoPLl5ApYZRHe67GekgIs5xK+a4GSyFjnL903jcP+vBgxXmKHrG8QVusx1O9BRpsXMjlBb1akaObiXiUlJDJ2luom4NceJZNs18BTW8s3p1kZNrQ14b8WFIVpaVvXShO64KoLyKTaWSWnZwDvtz115axDkdXLHkvcA54ospKCK7vB2fPxGcz30Eevfq2lw8j741mgcSjNHcLyMxsCuChKmC1ELlsvx86Xt763U4vEVX69VN2G2o66YLIwyzvs9A2oOREcsOCNJ39iTia3TvpTOztTIpAu+50PeyjD4JLq8pokDGMDBD+iT+s97mD4FLkCTs4J7gUqi7DcyxjELcT7fcrhm+HxHC8qatmWYnVc0z6b0mnlqvSQ6zNAZJM1h3bIJxoYRiuJDkp9xrfLpyuFvuZh8Y5IoesTXVZoYx1M+UEuQtaHQWW0PiDLX+oqjBEG5rYZbTwgrJE1AGpt9MU0ySahTPPq4G2XUTfcKDCTobdfjyyOfyWc2XYmjsyA33REraw0s+LIoyVCpS8TMGBhJWvVuXDdDO4KJRQrTel/up1tWQ9ZTy4CRwo9xOiBXE6/waPSztGzR2cdDlqQorGOmppwt3uIe5DY/zsby/rAyl5MFewEadURIioaoUGsdBW1Dt6z/q6b7xz2zLpwBz7PRGcrafs6cBInmzsQZfGuauPwg+1U0jfz912vSA9qf5yRqvDP6nygCgiF/wB75ftbT2hwmy5GKeSnEEWXw2Gp3wuPMGXC+cg7FPK1HkwRWRjokruEHUyFbO12t+2WON0PR5vJ29Ei1ZVdWQi0giLbyiFRCUMG8uZqpORCEXns/OXuY6kWxvYqKBFhNhWjTMvIQH1NXyOTL9o7CY63Sfo45CNFqe3FuiuP6lVNNMWEQSvZ4cbcPc9sE7aTXrkVnv6OEvENDJgFiPIw5Fzv64F47xrKGpK7DMxZ5TQE36PEX7FoN2VaObwAm0twx0Pfz/+Yrf2/JmljgaGVMXr+oT0+lqE0uzOhPEdPNqzLXv9x0mdI6IIa8j6R7cSDw9Ijg2UZGUtPZ+nyJQsJg1WU5yK4w6pngGZX2shjpJlDmRZYQnZyiwVy1Z5W0tS0H5ky6oCR6O2GJTOIY5300/21s4wchnn8kLfXA6Q71Hg9Kt2xZfWtGIogUjE5F3vXSeldF9Wp69qmZ17fWCFEGlkdle/SRCFPgOpyNUVWcZK4YqUzeQ5u8ALYK7YrMYKlm257Oc2SgMgjg/n6tTor6azDvgDgnOA5WQTwsBKR4XWX1IpYGIJLBNLeBedjm7Z9Uju6hxxihU1O1JPM3VbDdr281qGTE/Qws6dbHHsq/g7JjpV85U3YGYDLyU8Wpdq9ZxF7ZeuYqnI2qFPsFUaLud9HZqQ5FQx5Pygex5PUou/Imt9+PNEUBIYYUNqR8eJjpYtNfcw9aBbcJsYjSMHuWcnadsWlIP3gsTbNI03oSUbTEO3odMJfWXs2o54GnCcRSwFUy+E2i8mhTHsycWOgMXdwmA0A6L9qZeqPy6C7KuQamxVkgCHzG7Q8/G+FjLXvWZEwkyq8AbGkyDJqT2LA7fP493WE0SZZ4xO+WQy8vBgRsyPbDpvrtWwwCip0cJBqqnRYu6ti1FHuGsy5GtlM9my3IXRg/RDEEc8lOWm40MFIbrGnKOQHskHjRCkykyd7Gk+0Xb3Nran9NU8MRU9Ge4oD+NDWDt80kGR1D6rkUe5DfXpUz/zJKA/luX8gmxn4EcEZ4rfMo114bOswK714kj3Pt1Os5NTo1XZC9EndIGdcxu9yxP1zURSxnZAjnukmm+ci2FDEpxPRfXq3nlAX3HGY9mT4me6TEgrae/5/57KJqberKAENp7nk5zmQedqEsljyxia4OY3c2ZHhnyGbsLnMHMlaDXC7IuhKpnhn5ERtvnOfJGUtIH1ecGkecvvi8d0XHRIGsRZ9ik3pPK0CCqedhei3Z7SUHm9SbtnUki1SD68dW3qTv8HFSohSE4kY3p1TMo3cyuaCHSAcnF+uGiPsePQ9lIeXrDDBVkP+8SuVN1j3HVxCIdi/A1udHhjSSr0pJzyJduu9cP+eFStVyVJnYScbkwW3TcXzMBcJi8KA11PqEgPxIuENrpuqaTulJsQel0szlzRQnRfijVm9ENxHF2DFNtlb3jnvVbvfXEgE4nMEHCzkVOYQfgtchgYfiNKz2c6az8QWp9n8dNeWlnKfIB1YDbicbBx++1Z4dEbBR5BZRVN7guIZ1eAZhS0HnL8g5FzbM3kXZ6HMjbGBT7lit7pTxhRryR0ukBqAhCOiM30TdNCJRLB82c0xcIUl0l2ZTkJE0jzGNl5fAYB0s7XB73aVwVW5JjUpKgmX3WwSQOAjB+mIbTWs80Y7yW9s1iY52ISWvhrlfLx/WVJa2O9S7xhJQhKOcUcOuJj6g7LJQ0xFve6G3oJVqlFaT/qpOIQHzyGiC2fFawQfrCtoGbJYiKzukl43zv0imvWgAzuqG60mqzFcHBmG79Mkm8b3c4wMOTmJIiodISlQPJ6+oF6vYd3ubRbS/w6xK8gssTyJm2ksqjgEuChCobhquAxG1HpFVtgMsQZLoXLn5+3ucHUvGfJAkhVV/58ngTySGZqLWkoyAbx+Dm3f0CoxPY9n3CCnSI0s49ScoHk5SOoEcQuRQvXKvmbkGzrTSVkjiSdCSCdSuEdWDH3bKOtZDAgzFyo6ipqyLFBKJIRT5ut1MTXxV8aHt41yOvJ/uUXkNJV9ICHH5XBinMyki08ogRwINJUm/2IjSx4tSSpxlrfSLtHNwfiNFxny04gJwtMPadVMRpvne93tOycva1nNI13wCdc7d4zILUqmygmLAkx19x8WJdgsaqC/ySra/4hKa9kuh8pKXgVqEy6TVszo/1aypQ+qNN3P3eIitZxfW0ohwDHQwsKtPVRg5qFdRQTzc836+mpLgjdhbAAFA3XQP95DxXZ/fN+6OpLrCkbCu+Jehozuk61qHV6aJu6HpEjg6cjCmJ3NEJ9bwqD6sw1J6K6KwXOs0cW4g9HMKsrThLsW4kqwOn6syB75MTdoOSmcItaPuIMm+eJIWg1DbCNQmviaQQ2LLqJMN1K/CIAWGCy2zU/FW8hRU/nbDhsZNVCmqscA6ts+7qwR4XxRai5k8h8hwUQDECpMAY6MdlW9akcmNiCvXdTDEtkW7PjMSQNsBbZ+ksO8Rcg8Ez+PCfZwhRjr1WtvKJEB2vhEbjUSEcrCz3JSqodaSd0K7LZ5LAr/htxPa7oOkF8C9mXlab6Stz8VTk44tfjLSWnziVl6K4HEQRtFYI6tUXIE+2me55cpAO9srY4AAvJLJhuat
psBFGFR0RuTpn2e4EyUQ8rrJdVGbQqymKPS9xlO5j5oZn2JaUVOt3Vxf5Cy/rZyvfUfZm5vSt3gksNkqjJ6oAp5NCkq4RF55PREggUgPVKkF2gAl40tPkqCiPhaAOi8kFmsxKSiDykJZHZqAD5S6hYjcw01/tk8JZojstgbJxiq4fwUUI3cEOSFMASGI6MwNPSbJuXgeRSOD7JaKnPPN1jlQ3R1xniRmH4VnL2A3KJOCcnnXGYnzu6LIz5pUw81fmVqXsSchTKgqCyOELwKnxl5jpDL7Ap3p83A3xfL4aMc/eXHFBaBD3CWv0JNahGUChnzGgSwq0T5S3eZvzZsGDa6YAodliHUbsO7WHYZx9bNsddZcm010tGnCbzoLaa09s1VVMJzIsMzHlJXgNtVjJIcpTqXkQsnms8GqTqQIFG1UZ5Jonp9rRobqqGYOBSLNj2CptGL27vFrB4Oc9W0qSXFsVx3X6dFM2+VYcGJdEnsnl+qSFcnLEsFyWAKfTE1KP+AoaRWM6RMXpoWW2NUJsEtNNv+1oli+SxlzlbQvs3ZoDsKmSTnjEfS53kq5XnCBKciszFwZnuRnygojfoIzcTi0g4+bjknTu3DYRrD8prQ7ya1eczPKgWXDZX+BmYBQVDyKUJKjJUxCzZZrgsVVTfj1Z+EluTY0/Q5ccZp9zg3vr8ip8C/DavJnggSEXMzjzBW8dTCpnbmcTZmJ/6hNR5zFuKNAhqMzUOjOboomqXFqXujjeh7GPJ18/GJdn8+f5+awKYTdOtMowt5IrGp/hmfJa36pSLjk2xTajrWhitkFN1973SrccEczTDoKwiW9H+CmWs2MHkU/sLsufYj7kbk1fxoo/ybTKT9cnK3G5qJhMUYxW1PZIqgn0ZXuwTaVquMROr14zuo19rh/63TxjZSnLiA7ZVO+T1i0M8aIqNexu2TdmZ8QTAyn8TY9jhqu6kssZ2RpGKU1jYVoF7Lj1CpEaIEVElg213LPhODDu60FocXhPSdDB1tbbqdsNCpIzm62cO0zAnSf1i3LV9ZNUbPTMy+ItHxPxLHIDz3IHd0YWiOe2CzSsKvc8uF5NGiomP7BBmZP12Ll92qYWApR+uCvTgdycDdec4imbXeOylMbjY/6odN2FZl3ldN07aZrJlpwaRQVgzCy4veNCgbD+cFNosHqb5vYgHyW0yUGH9dfUPtvO7ke0nadeEqkp3nEUMsJwMrTLZjzh01M29AoG+4AYi8VOYub24LpXo80Fwo44CiGiBh7raUewFacJIK4sc7YWoSkgDitw7OsyK7FirmdYM6130DP0CcTTGRTwcFJIwvu93VO1dVkolR6g4EghrnbH6se/W1L4TKd6bngCLySMvt7LXjPOCAyqOyZ17u4+KCKcHUkOshUl4U/Pyjr2LSiDBGxvBPVClDwrmCUGbLY302MaeoJc0ZGm9RePGaMI6PtLQMwo1lPPR1ZRGlYmZ950CSeZgvAhRRF1xq0rkQr4jkznZ3RFexUjajpbje0khPqIESDah86XjL0vI5KttK45QC3DD7saLqLGQdMusWwZ3gUGXx5DkyVCsoiEYVWm3JtZ8YHQLEFTTmw4eDUaqhxz4ivzgYxCke2fn8bzuC3eTlGAXNIPQiJHoMcmQzlzbWwVcYejzes5CTiB3JNAIFmN+VSawVqBBpjvS8DtwN8n3tzSyXEnUJzi2Ox8R1oWN+inR2Z0RklhcwXMkoJIFqZeIZZpO/rtjJ98WX63oPkbqH347am+/6r4gR3L9NhkkBn3Y/qTlz6gEPS3jy5+gL/0jODf2UzTTvzPbioK/WhDIf9/Qwn9cceO96ivAqPPL47/a66f2oQYgX/8bvvSpPpPVknK9TejMPG8fLIjkAz/Bz0ObegUzv9gv3948xf+3hjTpIzn425+9RKnvsvKJO0OEyHQ/zGAq4as//tpHehvn54Q+S/w9abFezAH8B9Xp4AA/O+u/sPiF6H+uU4O+sL5TuF/BC/5buD90uzOfwu8ZlEO/wFe/gTP/+GqHwpxGn7D6j8qxFGE/H0x6MeD/Evjnv4tkDPlODRhl/71QP8LVv5Yfw4mzf/sYMe/RbB/qZf93wK7MPb5Xw/0/3DVjwU58QvkCPktgvzPWzK+L85Nku9JSPBf8D049x/gS1IfDd8vtaf8uKybelfWDbqGf3aQf4F1fzzIv1Rw9eOzbvDchvcDO/LLo3+JdX842JE/T5P8iKwbx98T5PAbttCPCvIvsO6PB/mf55W+L9b9rkEjSf8iJF9g3Sj+4fBFfirW/c7yH/YL5H9k3d8AyL86G/lds+53BTvxC+xfYt3fANi/Onv5XbLu96UtKPIL5H9k3d8AyL86e/mNsW7qDV707UEj8QYl/UeF7+/rSzD0C8VR/13wfnWi5rvk3O+czvkVWB6c+5uD+Fcnc75rxo2+J9Spt5wVPyrU8W8P6l+d0vku+Tb+roQF/sW3EfKbgzj6b/S7/NTDPn/LdH22GEF8dP09+m90u/zEHUq/sxeJfXx/EvrnQvzfDfLz9Sf9zlAU8tHdSeifS8l/N8ev7qQ/NSANfXxvEvrVAin3v8zqa9kZW47J+7EzinwDf3orOyORX+XtoN0O+50aSmAfXfmLfrUa+n7R9nuLSm+A4luBTpHQL6Aj6G9+8tuB+Vd3cTDL3Ld9VDbvAHRrXOL6HZH+vlncX2VlCIJ/e0j/K1o5vivllHiDjP9mfw79akI9wPrtofyrkwTcewin/9nW+dcAJ0mQMn83Zo7+Kq98MfN/AjgJf3SVAvoNdnu8Ly8n8TdMtPgKtvIrA/YHXv7xMMf+PD3w4/Jy/C2c4s1IR9E3HBc/KtJ/z8u/AaT/Fc0e3xEvR8m3dEu/FeUo/ha58kdF+e95+TeAcuSn4eU0Br2jGz9CrF+8/I+8nP5wgH+D/SDfc+MT+asI8wu8/ONh/tWJzu+Yl9P0e7Zhw8ivCPSPvPzjkf5XtIN8R7ycptF31MtJiPyF8j/y8o9H+VfnP9+Fl79rKQtBvkXrezsxJ37pK18oZaHIjy41xr468/mdUnOSJt+RsJAk9mvENKDm3x7Qv75D6rsl5zj5nlPvcOItq/+oWMe/Rax/dR70+6TnGEy+Y7YffVOb4Y+Kc/K/ivPjv2Pfz//wmnh8r0LrEwAq/v8B</diagram></mxfile>
2002.09437/main_diagram/main_diagram.pdf
ADDED
Binary file (44.8 kB).

2002.09437/paper_text/intro_method.md
ADDED
@@ -0,0 +1,75 @@

# Method
Let $D = \langle(\bm{\mathrm{x}}_i, y_i)\rangle_{i=1}^N$ denote a dataset consisting of $N$ samples from a joint distribution $\mathcal{D}(\mathcal{X}, \mathcal{Y})$, where for each sample $i$, $\mathbf{x}_i \in \mathcal{X}$ is the input and $y_i \in \mathcal{Y} = \{1, 2, ..., K\}$ is the ground-truth class label. Let $\hat{p}_{i,y} = f_\theta(y|\bm{\mathrm{x}}_i)$ be the probability that a neural network $f$ with model parameters $\theta$ predicts for a class $y$ on a given input $\bm{\mathrm{x}}_i$. The class that $f$ predicts for $\mathbf{x}_i$ is computed as $\hat{y}_i = \mathrm{argmax}_{y \in \mathcal{Y}} \; \hat{p}_{i,y}$, and the predicted confidence as $\hat{p}_i = \mathrm{max}_{y \in \mathcal{Y}} \; \hat{p}_{i,y}$. The network is said to be *perfectly calibrated* when, for each sample $(\bm{\mathrm{x}}, y) \in D$, the confidence $\hat{p}$ is equal to the model accuracy $\mathbb{P}(\hat{y} = y | \hat{p})$, i.e. the probability that the predicted class is correct. For instance, of all the samples to which a perfectly calibrated neural network assigns a confidence of $0.8$, $80\%$ should be correctly predicted.
A popular metric used to measure model calibration is the *expected calibration error* (ECE) [@Naeini2015], defined as the expected absolute difference between the model's confidence and its accuracy, i.e. $\mathbb{E}_{\hat{p}} \big[ \left| \mathbb{P}(\hat{y} = y | \hat{p}) - \hat{p} \right| \big]$. Since we only have finite samples, the ECE cannot in practice be computed using this definition. Instead, we divide the interval $[0,1]$ into $M$ equispaced bins, where the $i^{\mathrm{th}}$ bin is the interval $\left(\frac{i-1}{M}, \frac{i}{M} \right]$. Let $B_i$ denote the set of samples with confidences belonging to the $i^{\mathrm{th}}$ bin. The accuracy $A_i$ of this bin is computed as $A_i = \frac{1}{|B_i|} \sum_{j \in B_i} \mathbbm{1} \left(\hat{y}_j = y_j\right)$, where $\mathbbm{1}$ is the indicator function, and $\hat{y}_j$ and $y_j$ are the predicted and ground-truth labels for the $j^{\mathrm{th}}$ sample. Similarly, the confidence $C_i$ of the $i^{\mathrm{th}}$ bin is computed as $C_i = \frac{1}{|B_i|} \sum_{j \in B_i} \hat{p}_j$, i.e. $C_i$ is the average confidence of all samples in the bin. The ECE can be approximated as a weighted average of the absolute difference between the accuracy and confidence of each bin: $\mathrm{ECE} = \sum_{i=1}^{M} \frac{|B_i|}{N} \left| A_i - C_i \right|$.
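
As an illustration of this binning procedure, the following is a minimal NumPy sketch of the equal-width-bin ECE (not the authors' released implementation); `confidences`, `predictions` and `labels` are assumed to be 1-D arrays of $\hat{p}_i$, $\hat{y}_i$ and $y_i$, and the number of bins $M$ is left as a parameter.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """Equal-width-bin ECE: sum_i |B_i|/N * |A_i - C_i| (illustrative sketch)."""
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)  # samples in the bin (lo, hi]
        if in_bin.any():
            acc = (predictions[in_bin] == labels[in_bin]).mean()  # A_i
            conf = confidences[in_bin].mean()                     # C_i
            ece += in_bin.mean() * abs(acc - conf)                # weight |B_i|/N
    return ece
```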
A similar metric, the *maximum calibration error* (MCE) [@Naeini2015], is defined as the maximum absolute difference between the accuracy and confidence of each bin: $\mathrm{MCE} = \mathrm{max}_{i \in \{1, ..., M\}}\left|A_i - C_i\right|$.
**AdaECE:** One disadvantage of ECE is the uniform bin width. For a trained model, most of the samples lie within the highest confidence bins, and hence these bins dominate the value of the ECE. We thus also consider another metric, AdaECE (Adaptive ECE), for which bin sizes are calculated so as to evenly distribute samples between bins (similar to the adaptive binning procedure in [@Nguyen2015posterior]): $\mathrm{AdaECE} = \sum_{i=1}^{M} \frac{|B_i|}{N} \left| A_i - C_i \right| \text{ s.t.\ } \forall i, j \cdot |B_i| = |B_j|$.
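
A sketch of this adaptive variant, under the same assumptions as the ECE snippet above: samples are sorted by confidence and split into near-equal-sized bins (an illustration of the idea, not necessarily the exact binning code behind the reported numbers).

```python
import numpy as np

def adaptive_ece(confidences, predictions, labels, n_bins=15):
    """AdaECE: bins chosen so that each holds (approximately) the same number of samples."""
    order = np.argsort(confidences)
    ada_ece = 0.0
    for bin_idx in np.array_split(order, n_bins):  # near-equal-sized groups of sample indices
        if len(bin_idx) == 0:
            continue
        acc = (predictions[bin_idx] == labels[bin_idx]).mean()
        conf = confidences[bin_idx].mean()
        ada_ece += (len(bin_idx) / len(confidences)) * abs(acc - conf)
    return ada_ece
```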
**Classwise-ECE:** The ECE metric only considers the probability of the predicted class, without considering the other scores in the softmax distribution. A stronger definition of calibration would require the probabilities of all the classes in the softmax distribution to be calibrated [@Kull2019beyond; @Vaicenavicius2019; @Widmann2019calibration; @Kumar2019verified]. This can be achieved with a simple classwise extension of the ECE metric: $\mathrm{Classwise\text{-}ECE} = \frac{1}{K} \sum_{i=1}^{M}\sum_{j=1}^{K} \frac{|B_{i,j}|}{N} \left| A_{i,j} - C_{i,j} \right|$, where $K$ is the number of classes, $B_{i,j}$ denotes the set of samples from the $j^{\mathrm{th}}$ class in the $i^{\mathrm{th}}$ bin, $A_{i,j} = \frac{1}{|B_{i,j}|} \sum_{k \in B_{i,j}} \mathbbm{1} \left(j = y_k\right)$ and $C_{i,j} = \frac{1}{|B_{i,j}|} \sum_{k \in B_{i,j}} \hat{p}_{k,j}$.
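
For completeness, a corresponding sketch of the classwise extension, assuming `probs` is the full $N \times K$ matrix of softmax outputs (again illustrative rather than the authors' code):

```python
import numpy as np

def classwise_ece(probs, labels, n_bins=15):
    """Classwise-ECE over an (N, K) array of softmax probabilities."""
    n, k = probs.shape
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for j in range(k):
        p_j = probs[:, j]
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (p_j > lo) & (p_j <= hi)
            if in_bin.any():
                acc = (labels[in_bin] == j).mean()          # A_{i,j}
                conf = p_j[in_bin].mean()                    # C_{i,j}
                total += (in_bin.sum() / n) * abs(acc - conf)
    return total / k
```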
A common way of visualising calibration is to use a *reliability plot* [@Niculescu2005], which plots the accuracies of the confidence bins as a bar chart (see Appendix Figure [6](#fig:rel_conf_bin_plot){reference-type="ref" reference="fig:rel_conf_bin_plot"}). For a perfectly calibrated model, the accuracy for each bin matches the confidence, and hence all of the bars lie on the diagonal. By contrast, if most of the bars lie above the diagonal, the model is more accurate than it expects, and is under-confident, and if most of the bars lie below the diagonal, then it is over-confident.
We now discuss why high-capacity neural networks, despite achieving low classification errors on well-known datasets, tend to be miscalibrated. A key empirical observation made by [@Guo2017] was that poor calibration of such networks appears to be linked to overfitting on the negative log-likelihood (NLL) during training. In this section, we further inspect this observation to provide new insights.
For the analysis, we train a ResNet-50 network on CIFAR-10 with state-of-the-art performance settings [@PyTorchCIFAR]. We use Stochastic Gradient Descent (SGD) with a mini-batch of size 128, momentum of 0.9, and learning rate schedule of $\{0.1, 0.01, 0.001\}$ for the first 150, next 100, and last 100 epochs, respectively. We minimise cross-entropy loss (a.k.a. NLL) $\mathcal{L}_c$, which, in a standard classification context, is $-\log \hat{p}_{i,y_i}$, where $\hat{p}_{i,y_i}$ is the probability assigned by the network to the correct class $y_i$ for the i$^{th}$ sample. Note that the NLL is minimised when for each training sample $i$, $\hat{p}_{i,y_i} = 1$, whereas the classification error is minimised when $\hat{p}_{i,y_i} > \hat{p}_{i,y}$ for all $y \neq y_i$. This indicates that even when the classification error is $0$, the NLL can be positive, and the optimisation algorithm can still try to reduce it to $0$ by further increasing the value of $\hat{p}_{i,y_i}$ for each sample (see Appendix [8](#rel_plots_appendix){reference-type="ref" reference="rel_plots_appendix"}).
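
A minimal PyTorch sketch of this training configuration (the exact CIFAR-style ResNet-50, data augmentation and training loop follow [@PyTorchCIFAR] and are not reproduced here; the `torchvision` model below is only a stand-in):

```python
import torch.nn as nn
import torch.optim as optim
import torchvision

model = torchvision.models.resnet50(num_classes=10)   # stand-in for the CIFAR-10 ResNet-50
criterion = nn.CrossEntropyLoss()                      # cross-entropy / NLL loss L_c
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# lr = 0.1 for epochs 1-150, 0.01 for 151-250, 0.001 for 251-350
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150, 250], gamma=0.1)
```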
To study how miscalibration occurs during training, we plot the average NLL for the train and test sets at each training epoch in Figures [1](#fig:nll_entropy_ece){reference-type="ref" reference="fig:nll_entropy_ece"}(a) and [1](#fig:nll_entropy_ece){reference-type="ref" reference="fig:nll_entropy_ece"}(b). We also plot the average NLL and the entropy of the softmax distribution produced by the network for the correctly and incorrectly classified samples. In Figure [1](#fig:nll_entropy_ece){reference-type="ref" reference="fig:nll_entropy_ece"}(c), we plot the classification errors on the train and test sets, along with the test set ECE.
<figure id="fig:nll_entropy_ece" data-latex-placement="!t">

<figcaption>Metrics related to calibration plotted whilst training a ResNet-50 network on CIFAR-10.</figcaption>
</figure>

**Curse of misclassified samples:** Figures [1](#fig:nll_entropy_ece){reference-type="ref" reference="fig:nll_entropy_ece"}(a) and [1](#fig:nll_entropy_ece){reference-type="ref" reference="fig:nll_entropy_ece"}(b) show that although the average train NLL (for both correctly and incorrectly classified training samples) broadly decreases throughout training, after the $150^{th}$ epoch (where the learning rate drops by a factor of $10$), there is a marked rise in the average test NLL, indicating that the network starts to overfit on average NLL. This increase in average test NLL is caused only by the incorrectly classified samples, as the average NLL for the correctly classified samples continues to decrease even after the $150^{th}$ epoch. We also observe that after epoch $150$, the test set ECE rises, indicating that the network is becoming miscalibrated. This corroborates the observation in [@Guo2017] that miscalibration and NLL overfitting are linked.
**Peak at the wrong place:** We further observe that the entropies of the softmax distributions for both the correctly and incorrectly classified *test* samples decrease throughout training (in other words, the distributions get peakier). This observation, coupled with the one we made above, indicates that *for the wrongly classified test samples, the network gradually becomes more and more confident about its incorrect predictions*.
**Weight magnification:** The increase in confidence of the network's predictions can happen if the network increases the norm of its weights $W$ to increase the magnitudes of the logits. In fact, cross-entropy loss is minimised when for each training sample $i$, $\hat{p}_{i,y_i} = 1$, which is possible only when $||W|| \to \infty$. Cross-entropy loss thus inherently induces this tendency of weight magnification in neural network optimisation. The promising performance of weight decay [@Guo2017] (regulating the norm of weights) on the calibration of neural networks can perhaps be explained using this. This increase in the network's confidence during training is one of the key causes of miscalibration.
As discussed in §[3](#sec:cause_cali){reference-type="ref" reference="sec:cause_cali"}, overfitting on NLL, which is observed as the network grows more confident on all of its predictions irrespective of their correctness, is strongly related to poor calibration. One cause of this is that the cross-entropy objective minimises the difference between the softmax distribution and the ground-truth one-hot encoding over an entire mini-batch, irrespective of how well a network classifies individual samples in the mini-batch. In this work, we study an alternative loss function, popularly known as *focal loss* [@Lin2017], that tackles this by weighting loss components generated from individual samples in a mini-batch by how well the model classifies them. For classification tasks where the target distribution is a one-hot encoding, it is defined as $\mathcal{L}_f = -(1 - \hat{p}_{i,y_i})^\gamma \log \hat{p}_{i,y_i}$, where $\gamma$ is a user-defined hyperparameter[^2].
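
A minimal PyTorch sketch of this loss for integer class labels (the released implementation may differ in details such as the reduction or numerical safeguards); with `gamma=0` it reduces to standard cross-entropy.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=3.0):
    """L_f = -(1 - p_{i,y_i})^gamma * log p_{i,y_i}, averaged over the mini-batch."""
    log_probs = F.log_softmax(logits, dim=-1)
    log_p_y = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_{i,y_i}
    p_y = log_p_y.exp()
    return (-((1.0 - p_y) ** gamma) * log_p_y).mean()
```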
**Why might focal loss improve calibration?** We know that cross-entropy forms an upper bound on the KL-divergence between the target distribution $q$ and the predicted distribution $\hat{p}$, i.e. $\mathcal{L}_c \geq \mathrm{KL}(q||\hat{p})$, so minimising cross-entropy results in minimising $\mathrm{KL}(q||\hat{p})$. Interestingly, a general form of focal loss can be shown to be an upper bound on the regularised KL-divergence, where the regulariser is the negative entropy of the predicted distribution $\hat{p}$, and the regularisation parameter is $\gamma$, the hyperparameter of focal loss (a proof of this can be found in Appendix [9](#reg_bregman){reference-type="ref" reference="reg_bregman"}): $$\begin{equation}
\label{eq:reg_bregman}
\mathcal{L}_f \geq \mathrm{KL}(q||\hat{p}) - \gamma\mathbb{H}[\hat{p}].
\end{equation}$$ The most interesting property of this upper bound is that it shows that replacing cross-entropy with focal loss has the effect of adding a maximum-entropy regulariser [@Pereyra2017] to the implicit minimisation that was previously being performed. In other words, trying to minimise focal loss minimises the KL divergence between $\hat{p}$ and $q$, whilst simultaneously increasing the entropy of the predicted distribution $\hat{p}$. Note, in the case of ground truth with one-hot encoding, only the component of the entropy of $\hat{p}$ corresponding to the ground-truth index, $\gamma (-\hat{p}_{i,y_i} \log \hat{p}_{i,y_i})$, will be maximised (refer Appendix [9](#reg_bregman){reference-type="ref" reference="reg_bregman"}). Encouraging the predicted distribution to have higher entropy can help avoid the overconfident predictions produced by DNNs (see the 'Peak at the wrong place' paragraph of §[3](#sec:cause_cali){reference-type="ref" reference="sec:cause_cali"}), and thereby improve calibration.
<figure id="fig:nll_corr_incorr_entropy" data-latex-placement="!t">

<figcaption>How metrics related to model calibration change whilst training several ResNet-50 networks on CIFAR-10, using either cross-entropy loss, or focal loss with <span class="math inline"><em>γ</em></span> set to 1, 2 or 3.</figcaption>
</figure>
**Empirical observations:** To analyse the behaviour of neural networks trained on focal loss, we use the same framework as mentioned above, and train four ResNet-50 networks on CIFAR-10, one using cross-entropy loss, and three using focal loss with $\gamma = 1, 2$ and $3$. Figure [2](#fig:nll_corr_incorr_entropy){reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(a) shows that the test NLL for the cross-entropy model significantly increases towards the end of training (before saturating), whereas the NLLs for the focal loss models remain low. To better understand this, we analyse the behaviour of these models for correctly and incorrectly classified samples. Figure [2](#fig:nll_corr_incorr_entropy){reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(b) shows that even though the NLLs for the correctly classified samples broadly-speaking decrease over the course of training for all the models, the NLLs for the focal loss models remain consistently higher than that for the cross-entropy model throughout training, implying that the focal loss models are relatively less confident than the cross-entropy model for samples that they predict correctly. This is important, as we have already discussed that it is overconfidence that normally makes deep neural networks miscalibrated. Figure [2](#fig:nll_corr_incorr_entropy){reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(c) shows that in contrast to the cross-entropy model, for which the NLL for misclassified test samples increases significantly after epoch $150$, the rise in this value for the focal loss models is much less severe. Additionally, in Figure [2](#fig:nll_corr_incorr_entropy){reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(d), we notice that the entropy of the softmax distribution for misclassified test samples is consistently (if marginally) higher for focal loss than for cross-entropy (consistent with Equation [\[eq:reg_bregman\]](#eq:reg_bregman){reference-type="ref" reference="eq:reg_bregman"}).
Note that from Figure [2](#fig:nll_corr_incorr_entropy){reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(a), one may think that applying early stopping when training a model on cross-entropy can provide better calibration scores. However, there is no ideal way of doing early stopping that provides the best calibration error and the best test set accuracy. For fair comparison, we chose $3$ intermediate models for each loss function with the best val set ECE, NLL and accuracy, and observed that: a) for every stopping criterion, focal loss outperforms cross-entropy in both test set accuracy and ECE, b) when using val set ECE as a stopping criterion, the intermediate model for cross-entropy indeed improves its test set ECE, but at the cost of a significantly higher test error. Please refer to Appendix [17](#sec:early_stopping){reference-type="ref" reference="sec:early_stopping"} for more details.
As per §[3](#sec:cause_cali){reference-type="ref" reference="sec:cause_cali"}, an increase in the test NLL and a decrease in the test entropy for misclassified samples, along with no corresponding increase in the test NLL for the correctly classified samples, can be interpreted as the network starting to predict softmax distributions for the misclassified samples that are ever more peaky in the wrong place. Notably, our results in Figures [2](#fig:nll_corr_incorr_entropy){reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(b), [2](#fig:nll_corr_incorr_entropy){reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(c) and [2](#fig:nll_corr_incorr_entropy){reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(d) clearly show that this effect is significantly reduced when training with focal loss rather than cross-entropy, leading to a better-calibrated network whose predictions are less peaky in the wrong place.
**Theoretical justification:** As mentioned previously, once a model trained using cross-entropy reaches high training accuracy, the optimiser may try to further reduce the training NLL by increasing the confidences for the correctly classified samples. It may achieve this by magnifying the network weights to increase the magnitudes of the logits. To verify this hypothesis, we plot the $L_2$ norm of the weights of the last linear layer for all four networks as a function of the training epoch (see Figure [2](#fig:nll_corr_incorr_entropy){reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(e)). Notably, although the norms of the weights for the models trained on focal loss are initially higher than that for the cross-entropy model, *a complete reversal* in the ordering of the weight norms occurs between epochs $150$ and $250$. In other words, as the networks start to become miscalibrated, the weight norm for the cross-entropy model also starts to become greater than those for the focal loss models. In practice, this is because focal loss, by design, starts to act as a regulariser on the network's weights once the model has gained a certain amount of confidence in its predictions. This behaviour of focal loss can be observed even on a much simpler setup like a linear model (see Appendix [10](#linear_model){reference-type="ref" reference="linear_model"}). To better understand this, we start by considering the following proposition (proof in Appendix [11](#sec:proof){reference-type="ref" reference="sec:proof"}):
::: {#pro1 .pro}
**Proposition 1**. *For focal loss $\mathcal{L}_f$ and cross-entropy $\mathcal{L}_c$, the gradients $\frac{\partial \mathcal{L}_f}{\partial \mathbf{w}} = \frac{\partial \mathcal{L}_c}{\partial \mathbf{w}} g(\hat{p}_{i,y_i}, \gamma)$, where $g(p, \gamma) = (1-p)^\gamma - \gamma p (1-p)^{\gamma - 1} \log(p)$, $\gamma \in \mathbb{R}^+$ is the focal loss hyperparameter, and $\mathbf{w}$ denotes the parameters of the last linear layer. Thus $\left\lVert\frac{\partial \mathcal{L}_f}{\partial \mathbf{w}}\right\rVert \leq \left\lVert\frac{\partial \mathcal{L}_c}{\partial \mathbf{w}}\right\rVert$ if $g(\hat{p}_{i,y_i}, \gamma) \in [0, 1]$.*
:::
Proposition [1](#pro1){reference-type="ref" reference="pro1"} shows the relationship between the norms of the gradients of the last linear layer for focal loss and cross-entropy loss, for the same network architecture. Note that this relation depends on a function $g(p, \gamma)$, which we plot in Figure [3](#fig:g_pt_grad_norms){reference-type="ref" reference="fig:g_pt_grad_norms"}(a) to understand its behaviour. It is clear that for every $\gamma$, there exists a (different) threshold $p_0$ such that for all $p \in [0,p_0]$, $g(p,\gamma) \ge 1$, and for all $p \in (p_0, 1]$, $g(p,\gamma) < 1$. (For example, for $\gamma = 1$, $p_0 \approx 0.4$.) We use this insight to further explain why focal loss provides implicit weight regularisation.
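
The function $g$ is simple to evaluate directly; the following sketch reproduces the crossing point mentioned above (the printed values are approximate).

```python
import numpy as np

def g(p, gamma):
    """g(p, gamma) = (1 - p)^gamma - gamma * p * (1 - p)^(gamma - 1) * log(p)."""
    return (1 - p) ** gamma - gamma * p * (1 - p) ** (gamma - 1) * np.log(p)

# For gamma = 1 the curve crosses 1 between p = 0.3 and p = 0.4 (p_0 ≈ 0.4 in the text):
print(g(0.3, 1.0), g(0.4, 1.0))  # ≈ 1.06 (> 1) and ≈ 0.97 (< 1)
```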
<figure id="fig:g_pt_grad_norms" data-latex-placement="!t">

<figcaption>(a): <span class="math inline"><em>g</em>(<em>p</em>, <em>γ</em>)</span> vs. <span class="math inline"><em>p</em></span> and (b-d): histograms of the gradient norms of the last linear layer for both cross-entropy and focal loss.</figcaption>
</figure>
**Implicit weight regularisation:** For a network trained using focal loss with a fixed $\gamma$, during the initial stages of the training, when $\hat{p}_{i,y_i} \in (0,p_0)$, $g(\hat{p}_{i,y_i}, \gamma) > 1$. This implies that the confidences of the focal loss model's predictions will initially increase faster than they would for cross-entropy. However, as soon as $\hat{p}_{i,y_i}$ crosses the threshold $p_0$, $g(\hat{p}_{i,y_i}, \gamma)$ falls below $1$ and reduces the size of the gradient updates made to the network weights, thereby having a regularising effect on the weights. This is why, in Figure [2](#fig:nll_corr_incorr_entropy){reference-type="ref" reference="fig:nll_corr_incorr_entropy"}(e), we find that the weight norms of the models trained with focal loss are initially higher than that for the model trained using cross-entropy. However, as training progresses, we find that the ordering of the weight norms reverses, as focal loss starts regularising the network weights. Moreover, we can draw similar insights from Figures [3](#fig:g_pt_grad_norms){reference-type="ref" reference="fig:g_pt_grad_norms"}(b), [3](#fig:g_pt_grad_norms){reference-type="ref" reference="fig:g_pt_grad_norms"}(c) and [3](#fig:g_pt_grad_norms){reference-type="ref" reference="fig:g_pt_grad_norms"}(d), in which we plot histograms of the gradient norms of the last linear layer (over all samples in the training set) at epochs $10$, $100$ and $200$, respectively. At epoch $10$, the gradient norms for cross-entropy and focal loss are similar, but as training progresses, those for cross-entropy decrease less rapidly than those for focal loss, indicating that the gradient norms for focal loss are consistently lower than those for cross-entropy throughout training.
Finally, observe in Figure [3](#fig:g_pt_grad_norms){reference-type="ref" reference="fig:g_pt_grad_norms"}(a) that for higher $\gamma$ values, the fall in $g(p,\gamma)$ is steeper. We would thus expect a greater weight regularisation effect for models that use higher values of $\gamma$. This explains why, of the three models that we trained using focal loss, the one with $\gamma = 3$ outperforms (in terms of calibration) the one with $\gamma = 2$, which in turn outperforms the model with $\gamma = 1$. Based on this observation, one might think that, in general, a higher value of gamma would lead to a more calibrated model. However, this is not the case, as we notice from Figure [3](#fig:g_pt_grad_norms){reference-type="ref" reference="fig:g_pt_grad_norms"}(a) that for $\gamma \ge 7$, $g(p,\gamma)$ reduces to nearly $0$ for a relatively low value of $p$ (around $0.5$). As a result, using values of $\gamma$ that are too high will cause the gradients to die (i.e. reduce to nearly $0$) early, at a point at which the network's predictions remain ambiguous, thereby causing the training process to fail.
**How to choose $\gamma$:** As discussed, focal loss provides implicit entropy and weight regularisation, which heavily depend on the value of $\gamma$. Finding an appropriate $\gamma$ is normally done using cross-validation. Also, traditionally, $\gamma$ is fixed for all samples in the dataset. However, as shown, the regularisation effect for a sample $i$ depends on $\hat{p}_{i,y_i}$, i.e. the predicted probability for the ground truth label for the sample. It thus makes sense to choose $\gamma$ for each sample based on the value of $\hat{p}_{i,y_i}$. To this end, we provide Proposition [2](#pro:gamma){reference-type="ref" reference="pro:gamma"} (proof in Appendix [11](#sec:proof){reference-type="ref" reference="sec:proof"}), which we use to find a solution to this problem:
::: {#pro:gamma .pro}
**Proposition 2**. *Given a $p_0$, for $1 \geq p \geq p_0 > 0$, $g(p, \gamma) \leq 1$ for all $\gamma \geq \gamma^* = \frac{a}{b} + \frac{1}{\log a}W_{-1} \big(-\frac{a^{(1-a/b)}}{b} \log a \big)$, where $a = 1-p_0$, $b = p_0 \log p_0$, and $W_{-1}$ is the Lambert-W function [@corless1996lambertw]. Moreover, for $p \geq p_0 > 0$ and $\gamma \geq \gamma^*$, the equality $g(p, \gamma) = 1$ holds only for $p = p_0$ and $\gamma = \gamma^*$.*
:::
It is worth noting that there exist multiple values of $\gamma$ where $g(p, \gamma) \leq 1$ for all $p \geq p_0$. For a given $p_0$, Proposition [2](#pro:gamma){reference-type="ref" reference="pro:gamma"} allows us to compute $\gamma$ s.t. (i) $g(p_0,\gamma) = 1$; (ii) $g(p, \gamma) \ge 1$ for $p \in [0,p_0)$; and (iii) $g(p, \gamma) < 1$ for $p \in (p_0,1]$. This allows us to control the magnitude of the gradients for a particular sample $i$ based on the current value of $\hat{p}_{i,y_i}$, and gives us a way of obtaining an informed value of $\gamma$ for each sample. For instance, a reasonable policy might be to choose $\gamma$ s.t. $g(\hat{p}_{i,y_i}, \gamma) > 1$ if $\hat{p}_{i,y_i}$ is small (say less than $0.25$), and $g(\hat{p}_{i,y_i}, \gamma) < 1$ otherwise. Such a policy will have the effect of making the weight updates larger for samples having a low predicted probability for the correct class and smaller for samples with a relatively higher predicted probability for the correct class.
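
Proposition 2 can also be evaluated numerically; a small sketch using SciPy's Lambert-W function is given below, and the values it returns for the two thresholds used in the next paragraph are approximately 5 and 3.

```python
import numpy as np
from scipy.special import lambertw

def gamma_star(p0):
    """gamma* from Proposition 2: the gamma at which g(p0, gamma) = 1."""
    a = 1.0 - p0
    b = p0 * np.log(p0)
    w = lambertw(-(a ** (1.0 - a / b) / b) * np.log(a), k=-1).real  # W_{-1} branch
    return a / b + w / np.log(a)

print(gamma_star(0.2), gamma_star(0.25))  # ≈ 5 and ≈ 3
```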
Following the aforementioned arguments, we choose a threshold $p_0$ of $0.25$, and use Proposition [2](#pro:gamma){reference-type="ref" reference="pro:gamma"} to obtain a $\gamma$ policy such that $g(p, \gamma)$ is observably greater than $1$ for $p \in [0, 0.25)$ and $g(p, \gamma) < 1$ for $p \in (0.25, 1]$. In particular, we use the following schedule: if $\hat{p}_{i,y_i} \in [0,0.2)$, then $\gamma = 5$, otherwise $\gamma = 3$ (note that $g(0.2, 5) \approx 1$ and $g(0.25, 3) \approx 1$: see Figure [3](#fig:g_pt_grad_norms){reference-type="ref" reference="fig:g_pt_grad_norms"}(a)). We find this $\gamma$ policy to perform consistently well across multiple classification datasets and network architectures. Having said that, one can calculate multiple such schedules for $\gamma$ following Proposition [2](#pro:gamma){reference-type="ref" reference="pro:gamma"}, using the intuition of having a relatively high $\gamma$ for low values of $\hat{p}_{i, y_i}$ and a relatively low $\gamma$ for high values of $\hat{p}_{i, y_i}$.
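
A sketch of this sample-dependent schedule applied to the focal loss snippet above (a hedged illustration of the policy described in the text, not necessarily the exact released code):

```python
import torch
import torch.nn.functional as F

def sample_adaptive_focal_loss(logits, targets):
    """Focal loss with gamma = 5 where p_{i,y_i} < 0.2 and gamma = 3 elsewhere."""
    log_probs = F.log_softmax(logits, dim=-1)
    log_p_y = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    p_y = log_p_y.exp()
    gamma = torch.where(p_y < 0.2, torch.full_like(p_y, 5.0), torch.full_like(p_y, 3.0))
    return (-((1.0 - p_y) ** gamma) * log_p_y).mean()
```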
|
2008.12855/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2008.12855/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,158 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
::: center
|
| 4 |
+
\"One cannot think well, love well, sleep well, if one has not dined well.\" - Virginia Woolf
|
| 5 |
+
:::
|
| 6 |
+
|
| 7 |
+
Food is a significant determinant of human quality of life. Food provides the energy and nutrients essential for health and is a significant source of personal enjoyment and social fabric. In many instances, the pleasures of eating conflict with the optimal nutritional needs of the person's physiological well-being, and this conflict is a leading cause of the substantial increase in diet-related diseases such as obesity, diabetes, and hypertension [@FoodHealth], [@Schulze2018FoodPrevention]. An important question is: why do people enjoy food [@NewtonAndersonEveryoneCulture]? People working on improving the enjoyment aspect of food, particularly chefs and the food industry, have largely ignored health, and those focused on health (doctors and nutritionists) usually consider the enjoyment aspect secondary [@McclementsFUTUREEAT], [@Kale2020TracingEntries]. This disconnect between the two approaches has led to the current situation, with a widespread increase in food-related illnesses. An important fact is: what I like to eat is not necessarily what my body likes [@Mai2011ImplicitResearch], [@Kale2020TracingEntries]. Can we satisfy both me and my body?
|
| 8 |
+
|
| 9 |
+
Food and nutrition have their roots in multimedia and multimodal elements [@Spence2015MultisensoryPerception]. Food experience requires the participation of audio, visual, tactile, gustatory, and olfactory senses, and prior experiences play a crucial role [@Sarabian2017AvoidanceChimpanzees]. The food we perceptually enjoy is a complete multimedia experience [@McclementsFUTUREEAT], [@Spence2015MultisensoryPerception], which extends further to an extensive multimodal effect in the body, impacting the physiology and biochemistry of the individual. A multitude of sensors can measure the relationship between foods and the individual through the dynamic health state variables [@Nag2018Cross-modalEstimation]. These include readily available sensors that provide continuous data collection for blood glucose, heart rate, perspiration rate, and body temperature [@KasaeyanNaeini2019AnMonitoring].
|
| 10 |
+
|
| 11 |
+
Food is a multimodal experience that enriches personal life and enhances social rituals important to humans. However, we have not studied all aspects of food in a unified computational framework like many other aspects of life, such as social networks, sports, and entertainment. Recently Min et al. [@Min2019AComputing] put together a computational framework around different silos of food. They adopt a diverse data-centric perspective and define food computing as, *computational approaches for acquiring and analyzing heterogeneous food data from disparate sources for perception, recognition, retrieval, recommendation, and monitoring of food to address food-related issues in health, biology, gastronomy, and agronomy.* This exhaustive and inclusive approach to food computing will help understand different aspects of the food ecosystem and how they impact each other.
|
| 12 |
+
|
| 13 |
+
This paper looks at the food ecosystem from a person-centered perspective. Our goal is to study how food affects a person's life and how the food ecosystem may be affected by choices made by people, as shown in Figure [1](#fig:personcentric){reference-type="ref" reference="fig:personcentric"}.
|
| 14 |
+
|
| 15 |
+
Food serves two crucial but closely related functions of maintaining biological health state and personal enjoyment in life. Food items, listed in the dish-centric layer, meet the personal food needs of individuals. A group of food producers and distributors are part of the next layer that we show as the food chain. Finally, each item produced, distributed, served, and consumed has a specific effect on the environment.
|
| 16 |
+
|
| 17 |
+
In this paper, we present a computational framework for building a Personal Food Model (PFM) that is essential to help people identify the right food, at the right place, in the right situation, at the right price. PFM is an essential component of emerging food recommendation systems to address challenges in different aspects of businesses as well as individuals' health [@Min2019FoodChallenges]. We consider the aspects of food that satisfy the two crucial needs of a person: enjoyment and sustenance. Different groups of people have studied these two aspects. We believe that there is an excellent opportunity to bring these disjoint areas together using a computational framework centered around multimedia.
|
| 18 |
+
|
| 19 |
+
The most important contribution of this paper is the unified personal model of culinary multimedia experience and biological health aspects. We use this model in a complex recommendation system that considers food items as a combination of features contributing to both enjoyment and sustenance and optimizes specific health outcomes such as sleep quality. We model a person by analyzing their multimodal food experiences as well as complex contextual factors related to different food items and dishes. The recommendation system then tries to optimize factors related to both enjoyment and sustenance by selecting correct food dishes in a given context.
|
| 20 |
+
|
| 21 |
+
<figure id="fig:personcentric" data-latex-placement="!ht">
|
| 22 |
+
<img src="FoodComputing.png" style="width:80.0%" />
|
| 23 |
+
<figcaption>Personal Food Computing Overview</figcaption>
|
| 24 |
+
</figure>
|
| 25 |
+
|
| 26 |
+
We present the personal food model (PFM), as a critical, relevant, and timely challenge for multimedia and multimodal research. We present these ideas by
|
| 27 |
+
|
| 28 |
+
<figure id="fig:PFMSystemsOverview" data-latex-placement="h">
|
| 29 |
+
<img src="Main/FoodRecommendation.png" style="width:80.0%" />
|
| 30 |
+
<figcaption>Food Recommendation Architecture: Data from the 3 digestion phases are being collected alongside other data-streams to create the PFM: the heart of Food Recommendation.</figcaption>
|
| 31 |
+
</figure>
|
| 32 |
+
|
| 33 |
+
1. Reviewing existing work in multimedia that peripherally touched food computing but did not address real challenges. We believe this was due to the absence of a clear challenge and application. We show that characteristics of food items and the food preferences of a person can be understood by combining visual, olfactory, culinary, and tactile (texture) aspects of food and eating environment.
|
| 34 |
+
|
| 35 |
+
2. Discussing essential aspects of personal food computing that will benefit significantly from multimedia technology and offer new challenges for the multimedia community. Notably, we discuss a multimodal food logging platform for building PFM and using it in a novel food recommendation platform. This may open a prominent application area for multimedia computing.
|
| 36 |
+
|
| 37 |
+
3. Presenting early components of the personal food model based on multimedia computing; these components require significant new research to create applications that may rival any past multimedia application.
|
| 38 |
+
|
| 39 |
+
As discussed in subsequent sections, these are primarily multimedia challenges that will open new paradigms in multimedia computing and communications and will help people enjoy good food and be healthy.
|
| 40 |
+
|
| 41 |
+
PFM is the digitized representation of the food-related characteristics of an individual. It can be used in food recommendation systems to provide eating-related recommendations that improve the user's quality of life. Many factors affect and limit a simple eating decision. However, this problem has not been modeled in a comprehensive framework to study food as a multimedia experience, including taste, visual, social, and experiential factors. We show how PFM can predict the user's multimodal food preferences in different contexts. We accomplish this using different data streams captured from the user, such as location history [@Nag2019SynchronizingMonitoring], vital sign streams, and food intake logged using text, voice, and photos. In future work, we plan to expand the sources of information we use to create the personal model and focus on using many other data streams such as the user's calendar, social media, and transaction history.
|
| 42 |
+
|
| 43 |
+
PFM is complex in nature, as it contains many dimensions. The biological part captures how food items can satisfy nutritional needs for certain goals such as weight loss or improved performance in athletics [@Nag2019ALife]. Furthermore, a contextual understanding of the user's needs must be layered on top to compute real-time needs [@Nag2017LiveEngine]. Other biological and life events may also impact food events indirectly and need to be added to the model [@Pandey2020ContinuousRetrieval].
|
| 44 |
+
|
| 45 |
+
Figure [2](#fig:PFMSystemsOverview){reference-type="ref" reference="fig:PFMSystemsOverview"} shows how the personicle collects different data streams over a long period [@Jal2014Personicle:Events]. Events from the personicle are fed to the PFM which consists of two parts. We define the Biological PFM of the user to capture the body's reactions to different food items including allergic reactions and nutritional needs. The biological model is an important factor in each food decision we make, but it is not the only factor. We also create the user's taste profile, which constitutes the Preferential Personal Food model for the user. User's taste profile contains the information about the food items which the user has experienced in the past, and it may also reveal dishes that the user has never tried.
|
| 46 |
+
|
| 47 |
+
<figure id="fig:biological-model" data-latex-placement="!ht">
|
| 48 |
+
<img src="Main/BiologicalModel.png" style="width:70.0%" />
|
| 49 |
+
<figcaption>The interactions in the biological food model</figcaption>
|
| 50 |
+
</figure>
|
| 51 |
+
|
| 52 |
+
The Biological Personal Food Model (B-PFM) must consider how food is related to the health state of the individual [@Nag2020HealthEstimation]. This model should also extend to how the user may want to change their health state towards a specific goal [@Nag2019ALife]. The B-PFM focuses on the user's dynamic health and nutritional needs [@Nag2017LiveEngine], [@Nag2017PocketLocation].
|
| 53 |
+
|
| 54 |
+
Building the Biological PFM in a purely data-driven manner is a daunting task. Even though some apps like MyFitnessPal collect food intake and activity data from the user, they only focus on a limited fitness aspect and cannot be extended to a general biological model. However, instead of finding the patterns solely based on user data, we propose a hybrid approach using patterns obtained from domain knowledge to form a rule-based population model. We personalize this model as we collect more data. These rules capture the impact of food on biological parameters. For example, research shows that eating heavy meals before bedtime could lower the quality of sleep. We collect a selection of such sequences from expert domain knowledge and calculate the probability of validity for each of these patterns in different contexts. This set of context-driven rules form the B-PFM.
|
| 55 |
+
|
| 56 |
+
We also need to understand how food items and food events impact different aspects of the health state of the individual [@Nag2020HealthEstimation], [@Nag2017PocketLocation]. The user could have multiple health goals that might lead to conflicting recommendations (eg. diet for weight loss and sleep improvement). Therefore, it is important to keep the balance between different biological goals while also including static personal factors such as allergies, intolerances, and genetic factors in this computation. Figure [3](#fig:biological-model){reference-type="ref" reference="fig:biological-model"} shows how the current nutritional state is impacted by the food intake based on the particular needs of the user at the current biological state. In the event mining section, we describe how we turn the expert knowledge into active rules and validate them based on the user's data to predict the future biological health state.
|
| 57 |
+
|
| 58 |
+
<figure id="fig:hd-taste-space" data-latex-placement="!ht">
|
| 59 |
+
<img src="Main/TasteSpace.png" style="width:103.0%" />
|
| 60 |
+
<figcaption>Visualization of the US4B Taste Space. Part A: The collection of all taste samples from all users determine the hypervolume of the taste range of food items. Part B: Past taste sample values and ratings from the user determine user’s preferred taste region within the food item taste range hypervolume.</figcaption>
|
| 61 |
+
</figure>
|
| 62 |
+
|
| 63 |
+
We propose a novel approach to quantify and describe human taste perception to create the Preferential Personal Food Model (P-PFM). Current state-of-the-art food recommendation methods either ignore the preference model completely and just focus on healthy recommendations [@Rehman2017Diet-right:System], or try to find the user's preferred ingredients by asking the user to rate a long list of ingredients and dishes without really understanding why the user likes an ingredient [@ElahiInteractionSystem]. We introduce a taste space using six taste primaries, called the US4B taste model. The US4B taste model is a multidimensional additive taste model, in which umami, salty, sweet, spicy, sour, and bitter tastes (USSSSB) are added together in various ways to reproduce a broad array of tastes. The RGB color space has been the foundation of many advances in multimedia technology such as digital displays, virtual reality, and 3D printing. Similarly, the US4B taste space can be the key to future food-related technologies that were not possible without this foundation.
|
| 64 |
+
|
| 65 |
+
Each food item will have an exact value in the US4B channels which determine its taste. As Figure [4](#fig:hd-taste-space){reference-type="ref" reference="fig:hd-taste-space"} shows, we create a Hyperdimensional Taste Space (HD-Taste-Space) and calculate a region for each food item. An unripe mango from Brazil is going to have a different vector value in the HD-Taste-Space compared to a ripe mango from India whereas zucchini and cucumber samples share the same taste region. Therefore by sampling different instances we associate a hypervolume in the US4B Taste Space to each food item which is shown in Figure [4](#fig:hd-taste-space){reference-type="ref" reference="fig:hd-taste-space"} part A. Then we use the recipe databases to estimate the hypervolume containing the possible tastes for the dish in the hyperdimensional space. The state of the art finds correlations among recipes based on their ingredients [@Kuo2012IntelligentIngredients] but there has been no research to really understand the taste of the dish based on the recipe as a multidimensional media. To create the P-PFM we map the food log to the HD-Taste-Space to compute a hypervolume representing the user's preferred taste regions. The user's preferred taste regions in the US4B taste space is the most important part of the P-PFM. It contains the information about the food that the user likes and has experienced before, and can also predict the food that the user has never tried but might like because it lies within the user's region of interest. Knowing the preferred regions, we can search for healthier food items within the user's preferred range of taste. Diet soda is a classic example of this concept. It has similar taste, texture, smell and visual cues compared to a normal soda but it has different effects on the biology. Having the food items in the US4B taste space and finding the user's preferred taste regions in this space enables us to come up with better food options tailored for each individual's specific needs and taste preference.
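As a rough illustration of how such a taste representation could be handled computationally, the sketch below stores taste samples as 6-dimensional US4B vectors and approximates hypervolumes by axis-aligned bounding boxes; the function names and the box approximation are illustrative assumptions, not our actual taste-space implementation.

```python
import numpy as np

# US4B axes: umami, salty, sweet, spicy, sour, bitter (values assumed scaled to [0, 1])
AXES = ["umami", "salty", "sweet", "spicy", "sour", "bitter"]

def taste_region(samples):
    """Approximate a taste hypervolume by an axis-aligned bounding box over samples."""
    s = np.asarray(samples, dtype=float)
    return s.min(axis=0), s.max(axis=0)

def preferred_region(samples, ratings, like_threshold=4):
    """User's preferred region: bounding box of the samples the user rated highly."""
    liked = np.asarray(samples, dtype=float)[np.asarray(ratings) >= like_threshold]
    return taste_region(liked)

def in_region(item, region):
    lo, hi = region
    return bool(np.all(item >= lo) and np.all(item <= hi))

# Toy usage: two rated taste samples and a candidate "healthier" alternative.
logged = [[0.2, 0.3, 0.8, 0.0, 0.1, 0.0],   # e.g. a sweet drink
          [0.1, 0.2, 0.7, 0.0, 0.2, 0.1]]
ratings = [5, 4]
region = preferred_region(logged, ratings)
candidate = np.array([0.15, 0.25, 0.75, 0.0, 0.15, 0.05])
print(in_region(candidate, region))  # True -> within the user's preferred taste region
```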
|
| 66 |
+
|
| 67 |
+
Models are built using data. Most successful search engines, social media, and e-commerce systems utilize personal models to provide people with the right information, at the right time, in the right context, usually even before a user articulates his need [@AdomaviciusTowardExtensions]. Personal food model plays the same role in food recommendation systems [@Min2019FoodChallenges]. We need to log food consumed by a person and all the relevant metadata over a long period for the user. While initial food logging efforts required cumbersome manual food diaries, smartphones and cameras can drastically improve the quality and ease of logging. Aizawa [@FoodLog:Applications] was the first multimedia researcher to champion the idea of logging food using a smartphone camera and remains a very active researcher. Applications for camera-based food-logging have been developed in many other countries [@Chen2016Deep-basedRetrieval], [@Bossard2014Food-101Forests]. Multimedia and computer vision research communities have been actively exploring food-logging systems. These systems use computer vision techniques to recognize items, their ingredients, and even the volume consumed by the user [@Chen2016Deep-basedRetrieval], [@Oh2018MultimodalJournaling]. Unfortunately, there is no generalized logger for international food, and identifying ingredients and volume remains a challenge. A useful review of many visual approaches and descriptions of databases used for training is included in [@Min2019AComputing].
|
| 68 |
+
|
| 69 |
+
Conversational voice interfaces are becoming quite popular, making rapid progress. Systems like Alexa and Siri are available at home, in phones, and watches. People can report what they eat, volume, and reaction to food using a simple sentence. Many packaged food and processed ready to prepare food items have barcodes. Since barcode readers are now omnipresent even in smartphones, one can get all food information from these. Some sensors measure muscle activity and try to infer food items from that. These are placed on the chest, near the ear, or neck [@Chu2019RespirationSensors]. These have shown some progress in recognizing eating events but have not gone much beyond that yet.
|
| 70 |
+
|
| 71 |
+
We propose a multimedia food logging platform, shown in Figure [5](#fig:foodlogger){reference-type="ref" reference="fig:foodlogger"}, that could use many relevant sources to log food items and find all information that may be needed to build a PFM. Multimedia uses complementary and correlated information and provides more comprehensive and precise information than any one medium. Moreover, we will keep adding new sensors and technologies to keep the logger useful.
|
| 72 |
+
|
| 73 |
+
<figure id="fig:foodlogger" data-latex-placement="!ht">
|
| 74 |
+
<img src="Main/FoodLogger.png" />
|
| 75 |
+
<figcaption>Food logging will use multimedia input sources and complement information from online databases to log each meal and all metadata related to the meal. It captures information about the food (dish name, ingredients, quantity), location (place of eating), time (eating and logging), social context (companions), causal aspects (nutritional and flavor information), and multimedia and experiential information about the food.</figcaption>
|
| 76 |
+
</figure>
|
| 77 |
+
|
| 78 |
+
This is the beginning of building towards a robust multimedia solution to the problem of logging. There are three important aspects to this platform:
|
| 79 |
+
|
| 80 |
+
- We can design a multimedia platform that uses visual food recognition, speech-based systems, payment based options, barcodes, sensors to determine food chewing and content of the food, and several similar emerging approaches.
|
| 81 |
+
|
| 82 |
+
- Once a dish or food item is recognized and the amount consumed is known, systems must find the nutritional data using governmental or commercial databases. Similarly, weather information, social context, and other metadata related to food required by the PFM may come from other sources.
|
| 83 |
+
|
| 84 |
+
- The log must contain the user's reaction both in terms of enjoyment and bodily reaction. The enjoyment information may come from asking the user, and the bodily reactions may come from sensors such as heart rate, glucose measurement, and respiration rate.
|
| 85 |
+
|
| 86 |
+
We enrich the food events with associated nutritional, culinary, and contextual information using databases from different public and private organizations. These include nutrition (NutritionIX, USDA food database), weather, air-quality (airnow provided by EPA), and place (Google Places, Yelp).\
|
| 87 |
+
We may also want to capture some biomarkers characterizing the health of the person. These parameters may be continuously recorded and could be used to identify physiological responses to food items [@Oh2018MultimodalJournaling]. A personicle like system [@Oh2017FromChronicles] can capture this information, and the time-indexed nature of the data and events makes it readily available for associating and analyzing with the food events.
|
| 88 |
+
|
| 89 |
+
Food logs are collected for
|
| 90 |
+
|
| 91 |
+
- Building PFM to understand the nutritional requirements and taste preferences of the user.
|
| 92 |
+
|
| 93 |
+
- Understanding the health state of the user.
|
| 94 |
+
|
| 95 |
+
These two goals may require different information from the food log. Building PFM requires as much longitudinal data as is available, while health state estimation requires PFM and recent lifestyle and biological data. We need to keep these goals in mind while designing the food log. We have followed the HW5 (how, what, when, why, where, who) model as described in [@XieEventStreams], [@WestermannTowardApplications] to identify what information can fully describe a food event and maximize its utility for a variety of applications. The different aspects and associated information are detailed in figure [5](#fig:foodlogger){reference-type="ref" reference="fig:foodlogger"}. There can be three types of data collected:
|
| 96 |
+
|
| 97 |
+
1. Observed data: Directly captured using a sensor.
|
| 98 |
+
|
| 99 |
+
2. Derived data: We can derive some data and information using sensors and knowledge sources. This information will depend on the algorithms and data sources used.
|
| 100 |
+
|
| 101 |
+
3. Subjective data: The system may prompt the user or some other human source to get specific information. This data is prone to errors as it depends on human perception.
|
| 102 |
+
|
| 103 |
+
We should utilize the different types of measurements in different manners to minimize the error in our analyses and predictions.
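For concreteness, a single food-event record along these lines might be sketched as follows; the field names and grouping are illustrative assumptions, not the schema of our logger.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FoodEvent:
    # Observed data: captured directly by sensors or user input.
    timestamp: str
    location: str                              # where (place of eating)
    photo_uri: Optional[str] = None
    # Derived data: obtained from algorithms and knowledge sources.
    dish_name: Optional[str] = None            # what
    ingredients: List[str] = field(default_factory=list)
    calories: Optional[float] = None           # from a nutrition database
    weather: Optional[str] = None
    # Subjective data: prompted from the user, prone to perception error.
    enjoyment_rating: Optional[int] = None     # e.g. 1-5
    companions: List[str] = field(default_factory=list)   # social context

event = FoodEvent(timestamp="2020-08-01T19:30", location="home",
                  dish_name="lentil soup", calories=230, enjoyment_rating=4)
```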
|
| 104 |
+
|
| 105 |
+
In this paper, we describe the data and knowledge needed to build a PFM directed at improving sleep quality. We considered that the sleep quality is affected by stress, activity, and food [@Azimi2019PersonalizedStudy]. We are implementing a food logging platform. We decided to focus on data collection using voice, text, and barcode for the current version. We will include visual recognition approaches soon.
|
| 106 |
+
|
| 107 |
+
We add food metadata in the log using weather and reverse-geo databases. The food logger asks the user about their reaction to each item entered. We use the NutritionIX platform to get information about calories and nutrients in each food item. The current food logger has information about how much a user likes a dish to build the Preferential Personal Food model. However, the information about the taste and flavor of a food item is not readily available from any source. We are working towards deriving such information about food items from different sources. This is an excellent open opportunity for the multimedia community to take the lead in solving this critical problem.
|
| 108 |
+
|
| 109 |
+
<figure id="fig:relatedresearch" data-latex-placement="!ht">
|
| 110 |
+
<img src="Main/KnowledgeTable.png" style="width:90.0%" />
|
| 111 |
+
<figcaption>Attributes of food events that impact sleep quality. These relationships form the basis for the Biological Personal Food Model.</figcaption>
|
| 112 |
+
</figure>
|
| 113 |
+
|
| 114 |
+
As stated in the previous sections, the personal model should be able to incorporate existing knowledge sources. We have surveyed papers that explore the relationship between dietary inputs and sleep outcomes. We summarize our findings in figure [6](#fig:relatedresearch){reference-type="ref" reference="fig:relatedresearch"}. We found that macro nutrients have a great impact on sleep outcomes [@Tanaka2013AssociationsWorkers], [@Peuhkuri2012DietaryMelatonin], [@Peuhkuri2012DietQuality], [@Yamaguchi2013RelationshipRegularity], [@Afaghi2008AcuteIndices], [@St-Onge2016FiberSleep]. Some micronutrients contribute to melatonin secretion, and hence can have significant impact on sleep quality [@Hashimoto1996VitaminHumans], [@Valtonen2005EffectsSubjects]. Additionally, there are some studies that explore the effect of specific food items such as kiwi fruit [@Lin2011EffectProblems] and cherries [@Pigeon2010EffectsStudy], [@Losso2018PilotMechanisms], [@{Garrido2013AAging.}] on sleep. Some chemicals responsible for specific taste such as capsaicin [@Edwards1992SpicyThermoregulation] and sugar [@Sampasa-Kanyinga2018SleepAdolescents], [@ThayerEnergyExercise] can also impact sleep. Fasting contributes to the change of bedtime [@BaHammam2013TheAssessment] as well.\
|
| 115 |
+
We have also included some studies about the impact of exercise and physical activity on sleep [@Loprinzi2011Association2005-2006], [@Loprinzi2012TheWomen], [@Lambiase2013TemporalWomen], [@Park2014AssociationsAdolescents] as it is an important confounding variable that impacts both nutritional needs and sleep quality.
|
| 116 |
+
|
| 117 |
+
<figure id="fig:SleepModel" data-latex-placement="!ht">
|
| 118 |
+
<img src="Main/EventminingSystem-V2.png" />
|
| 119 |
+
<figcaption>Event Mining workflow: Hypothesis generation operators allows us to find frequently occurring sequences of events. These can be converted to hypotheses by including confounding variables and can then be tested in presence of these confounding factors using hypothesis verification operator. These verified hypotheses serve as a rule-based model for the user’s behavior.</figcaption>
|
| 120 |
+
</figure>
|
| 121 |
+
|
| 122 |
+
Multimedia research in event mining has focused on event recognition and situation understanding (eg., sports and surveillance videos). There has not been much research on how we can utilize event mining to run n-of-1 experiments using a person's events and data streams and derive rules that describe their behavior in different situations. Event mining allows us to find patterns and relationships between different events in our daily lives. We can find relationships between different events in a person's lifelog data and derive an explainable personal model [@PandeyUbiquitousHealth].
|
| 123 |
+
|
| 124 |
+
Event mining results in rules of the form $Event_i \xrightarrow{C}Event_o$, where $Event_i$ is the input event, and we want to find out its effect in the occurrences of the outcome, $Event_o$. $C$ defines the set of confounding variables and temporal conditions that might affect this relationship. For Biological-PFM, the input events are lifestyle events that have a causal impact on some observable biological outcome [@Pandey2020ContinuousRetrieval]. While, in the Preferential-PFM, the input events capture the contextual situations that affect the user's culinary preferences. This view of events and their impacts is in line with the potential outcomes framework for causal inference (provided the required assumptions, eg., SUTVA are valid) [@Rubin2005CausalDecisions] and are explored in detail later in this section.\
|
| 125 |
+
We perform a two-step analysis with a human expert acting as an intermediary to select non-spurious relationships. The event patterns language described in [@Jalali2016InteractiveStreams] allows us to describe the relationships as temporal patterns of events. **Hypothesis generation** is used as a preliminary investigation tool that allows a human expert to identify any behavioral patterns in the form of events co-occurrences and **Hypothesis verification** tells us whether the relationship is causally significant in the presence of the confounding variables.
|
| 126 |
+
|
| 127 |
+
Users' event logs contain all of their daily habits and biological responses to different events. Hypothesis generation operators allow us to discover these habits and patterns that can be tested and used for prediction. This step of the analysis is mostly data-driven and starts with a human expert specifying the event streams that they believe to be correlated. The output is a heat map with different combinations of events occupying different positions (Figure [7](#fig:SleepModel){reference-type="ref" reference="fig:SleepModel"}). The patterns with relatively higher frequency may represent significant relationships and are selected for hypothesis verification.\
|
| 128 |
+
The frequent patterns would then need to be converted to candidate hypotheses. The user would need to specify the cause and effect events along with any confounding factors. This hypothesis can then be verified using the hypothesis verification operator.
|
| 129 |
+
|
| 130 |
+
Users can also verify their beliefs by encoding those as patterns of events and specifying the variables that define the contextual situation. We defined these patterns using scientific literature, as described in the previous section. Each occurrence of the pattern represents an instance of the input event (treatment), and we want to measure its impact on the outcome. Thus, each occurrence of the pattern becomes a single unit in the potential outcomes framework [@Rubin2005CausalDecisions], and we can compare different units while matching them by the confounding factors, to estimate the causal effect of the treatment.\
|
| 131 |
+
Once we have found all the pattern occurrences and the confounding variables, we follow a two-step process to find the validity of the rule.
|
| 132 |
+
|
| 133 |
+
1. **Find similar situations based on confounding variables (Contextual Matching)**. The confounding variables define the situation in which the input event (treatment) occurs, and can affect the event relationship we want to analyze. Therefore, we want to consider events that occur in similar contexts and compare the impact of the input event on the outcome in an unbiased manner. We can do this either by clustering the values of the confounding variables, or by converting the confounding variables to events and finding matching confounding event patterns.
|
| 134 |
+
|
| 135 |
+
2. **Find the validity of the relationship for each situation.** Once we have performed the contextual matching, for each contextual group, we can find the effect of the treatment on the outcome using an appropriate statistical test. We can compare the difference in the outcome for different input events, and this would tell us the relative causal effect of the different input events.
|
| 136 |
+
|
| 137 |
+
This two-step hypothesis verification allows us to simulate an N-of-1 experiment on the user's event log while also incorporating the existing scientific knowledge in the form of candidate hypotheses and identifying confounding variables.
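A compact sketch of this two-step verification over an event log is shown below; the discretised confounder contexts and the plain difference-in-means comparison are simplifying assumptions of the sketch, not our exact operators.

```python
import numpy as np
from collections import defaultdict

def verify_hypothesis(units):
    """units: list of dicts with keys 'treatment' (0/1), 'outcome' (float),
    and 'context' (hashable tuple of discretised confounding variables)."""
    # Step 1: contextual matching -- group units by confounder context.
    groups = defaultdict(list)
    for u in units:
        groups[u["context"]].append(u)
    # Step 2: within each context, compare outcomes for treated vs. untreated units.
    effects = {}
    for ctx, us in groups.items():
        treated = [u["outcome"] for u in us if u["treatment"] == 1]
        control = [u["outcome"] for u in us if u["treatment"] == 0]
        if treated and control:
            effects[ctx] = np.mean(treated) - np.mean(control)
    return effects  # estimated effect of the input event per contextual situation

# Toy example: effect of a heavy late meal on sleep latency (minutes), by activity level.
log = [
    {"treatment": 1, "outcome": 35, "context": ("high_activity",)},
    {"treatment": 0, "outcome": 20, "context": ("high_activity",)},
    {"treatment": 1, "outcome": 50, "context": ("low_activity",)},
    {"treatment": 0, "outcome": 25, "context": ("low_activity",)},
]
print(verify_hypothesis(log))
```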
|
| 138 |
+
|
| 139 |
+
We need to analyze the food log in conjunction with other events from the personicle to create an explainable and personalized food model for every individual. The model would predict the impact of food events on other aspects of a person's life; and how different lifestyle and biological factors impact our food choices. In this paper, we are exploring the relationship between food events and sleep outcomes; therefore, we will include behavioral factors that would impact these two events, such as physical activity (exercise, step count).
|
| 140 |
+
|
| 141 |
+
We can identify different behavioral habits of the user using hypothesis generation operators. We can also visualize the relationship between various nutritional factors and different sleep outcomes to find if these are worth exploring further. Once we have identified such relationships, we can start verifying these hypotheses. We derive the hypotheses from data or existing biomedical literature. These relationships have been detailed in the previous section and are also depicted in figure [6](#fig:relatedresearch){reference-type="ref" reference="fig:relatedresearch"}.\
|
| 142 |
+
Figure [7](#fig:SleepModel){reference-type="ref" reference="fig:SleepModel"} shows the complete event mining process for the personal food model. The verified hypotheses contain event relationships that hold true in the specified contextual situation. These relationships form a set of rules with varying degrees of accuracy in different contextual situations. For example, if we have verified that cow's milk has a positive impact on sleep latency, then the relationship would be quantified in the form of minutes reduced in latency. It will have a different value for different contextual situations described by physical activity, the day's meals, and the previous night's sleep. These rules could thus be used to identify the potential outcome of different foods and recommend items with the desired sleep outcome.
|
| 143 |
+
|
| 144 |
+
Though this paper focuses primarily on personal food models, it is really about building personal health models using disparate data and information sources. A personal health model's importance is apparent in these days of a pandemic that has disrupted lives globally. In this section, we discuss interesting challenges that we need to address. We believe that multimedia computing offers concepts, techniques, and practical experiences related to key areas mentioned in the paper.
|
| 145 |
+
|
| 146 |
+
1. User Privacy: User privacy and data protection are integral to developing a multimedia personal model. Without adequate security measures the model is unlikely to be widely adopted, regardless of the performance or utility. This is an important challenge for multimedia, artificial intelligence, and privacy and security research groups and we are actively looking for collaborations in this area. There are learning techniques such as federated learning [@Yang2019FederatedApplications] that allow us to build models and share insights without taking users' data from their device. We need to incorporate such methods in our platforms so that the users have complete ownership of their data.
|
| 147 |
+
|
| 148 |
+
2. Taste Space: Taste and flavor of food are very complex. Food taste space depends on the ingredients and recipe as well as visual presentations. On the other hand, each person has their own preferred taste space that must be determined by observations over a long time. We are exploring 6-dimensional taste space. This is less than the tip of the proverbial iceberg. Such representations will result in labeling food items better so that people can select what they will enjoy eating and will be healthy.
|
| 149 |
+
|
| 150 |
+
3. Multimedia Logging Platform: The multimedia community has focused on food logging using only visual recognition approaches, which have been limited to dish and ingredient recognition. Food logging is not just recognizing dishes from pictures, but identifying all characteristics of an eating event. We need to build a multimedia logging platform to collect all food-related information relevant to building PFM. Such logs could be used for studying populations for health as well as for business purposes.
|
| 151 |
+
|
| 152 |
+
4. Multimodal event detection: The health state of a person is usually estimated by combining multimedia (audio-visual) and multimodal (heart rate, EEG, respiration rate, Glucose content) signals. Estimation of health state is a great challenge for researchers that will also help ALL humans.
|
| 153 |
+
|
| 154 |
+
5. Multimodal Knowledge Collection: Much of the diagnosis and prescription related to health is multimodal and will require extending traditional knowledge graph [@Zulaika2018EnhancingGraphs][@HaussmannFoodKG:Recommendation] techniques.
|
| 155 |
+
|
| 156 |
+
6. Event Mining: Mining multiple sequences of events detected from disparate data streams is essential both for building models such as PFM and for health state estimation. Using novel forms of machine learning, event mining may offer more challenging problems in predictive and preventive approaches across several application areas, including health, than object recognition offered in computer vision. We have already started building a platform for this.
|
| 157 |
+
|
| 158 |
+
7. Recommendation System to motivate behavioral change: In the context of eating habits, a recommendation system which always promotes the healthiest option is not necessarily the best one. A good recommendation must consider personal food preferences and healthiness together to suggest not just healthy food, but the correct amount of 'healthy and tasty' food. The correct recommendation should be given at the correct place and the appropriate time to motivate behavioral change [@Patel2015WearableChange], [@Motivate:Publication]. PFM is the first step towards context-aware recommendation in the food domain, but this is just the beginning of a long journey.
|
2102.09701/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-02-19T03:46:44.321Z" agent="5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36" version="14.4.2" etag="whOorfo6k1D4vnmR4joY" type="device"><diagram id="4KYq00pTiZc08yqkCG8u" name="Page-1">7Vhrb9owFP010R5SpzycED4ChU7TNk3rh0r7MhlsiFUTM2NK2l+/m9hJyIOOtQRWraUqybFzfXPO8bVryxstkyuJV9EXQSi3XJsklndpuW4fBfA3Be414Id9DSwkIxpySuCaPVAD2gbdMELXlY5KCK7YqgrORBzTmapgWEqxrXabC14ddYUXtAFczzBvojeMqEijoW+X+EfKFlE+smObliXOO5sQ6wgTsdVQ1scbW95ICqH01TIZUZ5yl/OiA032tBaJSRqrQx64/frj58On6VX0OZmMB8kwmt5uL0yUO8w35oVNsuo+ZwCiANlwM9xGTNHrFZ6lLVuQG7BILTncOXCJ1yutwJwlFAYdzhnnI8GFzAJ5xKchQYCvlRS3NG+JRZwGFxCXqdQloQ23Ji8qFU32vrBT0Aj2o2JJlbyHLuYBNzRS5NYzBNnbUkg3lyvaEbEAsTHPoohd8gsXhuK/oNvtku5WWmsaDN0gmEwqdKMj0Y3cKt0oaNLtuC10F+DR6fYOoDsmg7RMlIztUHyYgaEldKdeEBQteaVwm74meB2lamXxdTaUNEpQjXPIWGzkjD42kfeIs0O+38J9jknKsWJ31Tza9DAjfBMMMiy092pTLahLqvM3T+0WqVqguol8rxZIYbmgqhEos0fx2k93DHqmY5r6N7xSTMEnWeFcCv9RmIMV3leVT6Swf3SFazO8ITi6TD8vQOPjzeIzaxy0aBxwlVZ0keVZih382oi84WKdbT8H0MFBq6RshKuF/kbprz+aYmn1hnOrd/k2eWdQMwIkrAfJH6mZCxZY1eaoR9ZtA2HOFjHczsAtFPBhulwz2KoOTMOSEcL37Ryk2MQkKzdHWu8936+6xQka632bWb1eR6t975Sqv/lfZUehW53btvPBP6/w4TNL+r+zzdN18XWb1/UC0e+0VEjLH79/qSUiTdEcwzgtxj5GBfGcivio32vUD9T6b3lH9SM/a+rIDt9fbdC6fwiq+we/5bzgtDY45WGYnf2chuheWNuotcy3Js+oK5o7PQQ7H82oRrPfFc1wW54a6xWxPHr3xr8B</diagram></mxfile>
|
2102.09701/main_diagram/main_diagram.pdf
ADDED
|
Binary file (9.7 kB). View file
|
|
|
2102.09701/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,105 @@
|
| 1 |
+
# Method
|
| 2 |
+
|
| 3 |
+
Given a function $f: \mathbb{R}^k \rightarrow (M, d)$ and a distribution $\mathcal{D}$ over the input space $\mathbb{R}^k$, let $f(\mathcal{D})$ denote the probability distribution of the output of $f$ in $M$ when the input is drawn from $\mathcal{D}$. For a point $x \in \mathbb{R}^k$, let $x + \mathcal{P}$ denote the probability distribution of the points $x + \delta$ where $\delta$ is a smoothing noise drawn from a distribution $\mathcal{P}$ over $\mathbb{R}^k$ and let $X$ be the random variable for $x + \mathcal{P}$. For elements in $M$, define $\mathcal{B}(z, r) = \{z' \mid d(z, z') \leq r\}$ as a ball of radius $r$ centered at $z$. Define a smoothed version of $f$ under $\mathcal{P}$ as the center of the ball with the smallest radius in $M$ that encloses at least half of the probability mass of $f(x + \mathcal{P})$, i.e., $$\bar{f}_{\mathcal{P}}(x) = \underset{z}{\arg\!\min} \; r \; \text{s.t.} \; \mathbb{P} [f(X) \in \mathcal{B}(z, r)] \geq \frac{1}{2}.$$ If there are multiple balls with the smallest radius satisfying the above condition, return one of the centers arbitrarily. Let $r^*_\mathcal{P}(x)$ be the value of the minimum radius. Hereafter, we ignore the subscripts and superscripts in the above definitions whenever they are obvious from context. In this work, we sample the noise vector $\delta$ from an i.i.d Gaussian distribution of variance $\sigma^2$ in each dimension, i.e., $\delta \sim \mathcal{N}(0, \sigma^2 I)$.
|
| 4 |
+
|
| 5 |
+
@cohen19 in [-@cohen19] showed that a classifier $h: \mathbb{R}^k \rightarrow \mathcal{Y}$ smoothed with a Gaussian noise $\mathcal{N}(0, \sigma^2 I)$ as, $$\bar{h}(x) = \underset{c \in \mathcal{Y}}{\text{argmax}} \; \mathbb{P}\left[ h(x + \delta) = c \right],$$ where $\mathcal{Y}$ is a set of classes, is certifiably robust to small perturbations in the input. Their certificate relied on the fact that, if the probability of sampling from the top class at $x$ under the smoothing distribution is $p$, then for an $\ell_2$ perturbation of size at most $\epsilon$, the probability of the top class is guaranteed to be at least $$\begin{align}
|
| 6 |
+
\label{eq:cohen_bnd}
|
| 7 |
+
p_\epsilon = \Phi ( \Phi^{-1} (p) - \epsilon / \sigma),
|
| 8 |
+
\end{align}$$ where $\Phi$ is the CDF of the standard normal distribution $\mathcal{N}(0, 1)$. This bound applies to any $\{0, 1\}$-function over the input space $\mathbb{R}^k$, i.e., if $\mathbb{P}[h(x) = 1] = p$, then for any $\epsilon$-size perturbation $x', \mathbb{P}[h(x') = 1] \geq p_\epsilon$.
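For intuition, the bound in ([\[eq:cohen_bnd\]](#eq:cohen_bnd){reference-type="ref" reference="eq:cohen_bnd"}) is straightforward to evaluate numerically; the values of $p$, $\epsilon$ and $\sigma$ below are illustrative.

```python
from scipy.stats import norm

def lower_bound(p, eps, sigma):
    # p_eps = Phi(Phi^{-1}(p) - eps / sigma): worst-case probability that h(x') = 1
    # after an l2 perturbation of size at most eps.
    return norm.cdf(norm.ppf(p) - eps / sigma)

print(lower_bound(p=0.9, eps=0.25, sigma=0.5))   # ~ 0.78
print(lower_bound(p=0.9, eps=0.5, sigma=0.5))    # ~ 0.61
```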
|
| 9 |
+
|
| 10 |
+
We use this bound to generate robustness certificates for center smoothing. We identify a ball $\mathcal{B}(\bar{f}(x), R)$ of radius $R$ enclosing a very high probability mass of the output distribution. One can define a function that outputs one if $f$ maps a point to inside $\mathcal{B}(\bar{f}(x), R)$ and zero otherwise. The bound in ([\[eq:cohen_bnd\]](#eq:cohen_bnd){reference-type="ref" reference="eq:cohen_bnd"}) gives us a region in the input space such that for any point inside it, at least half of the mass of the output distribution is enclosed in $\mathcal{B}(\bar{f}(x), R)$. We show in section [3](#sec:center-smoothing){reference-type="ref" reference="sec:center-smoothing"} that the output of the smoothed function for a perturbed input is guaranteed to be within a constant factor of $R$ from the output of the original input.
|
| 11 |
+
|
| 12 |
+
As defined in section [2](#sec:notations){reference-type="ref" reference="sec:notations"}, the output of $\bar{f}$ is the center of the smallest ball in the output space that encloses at least half the probability mass of the $f(x + \mathcal{P})$. Thus, in order to significantly change the output, an adversary has to find a perturbation such that a majority of the neighboring points map far away from $\bar{f}(x)$. However, for a function that is roughly accurate on most points around $x$, a small perturbation in the input cannot change the output of the smoothed function by much, thereby making it robust.
|
| 13 |
+
|
| 14 |
+
For an $\ell_2$ perturbation size of $\epsilon_1$ of an input point $x$, let $R$ be the radius of a ball around $\bar{f}(x)$ that encloses more than half the probability mass of $f(x' + \mathcal{P})$ for all $x'$ satisfying $\|x-x'\|_2 \leq \epsilon_1$, i.e., $$\begin{align}
|
| 15 |
+
\label{eq:big_R}
|
| 16 |
+
\forall x' \text{ s.t. } \|x-x'\|_2 \leq \epsilon_1, \; \mathbb{P} [f(X') \in \mathcal{B}(\bar{f}(x), R)] > \frac{1}{2},
|
| 17 |
+
\end{align}$$ where $X' \sim x' + \mathcal{P}$. Basically, $R$ is the radius of a ball around $\bar{f}(x)$ that contains at least half the probability mass of $f(x' + \mathcal{P})$ for any $\epsilon_1$-size perturbation $x'$ of $x$. Then, we have the following robustness guarantee on $\bar{f}$:
|
| 18 |
+
|
| 19 |
+
::: {#thm:main-exact .theorem}
|
| 20 |
+
**Theorem 1**. *For all $x'$ such that $\|x - x'\|_2 \leq \epsilon_1$, $$d(\bar{f}(x), \bar{f}(x')) \leq 2R.$$*
|
| 21 |
+
:::
|
| 22 |
+
|
| 23 |
+
::: proof
|
| 24 |
+
*Proof.* Consider the balls $\mathcal{B}(\bar{f}(x'), r^*(x'))$ and $\mathcal{B}(\bar{f}(x), R)$ (see figure [\[fig:cen_cert\]](#fig:cen_cert){reference-type="ref" reference="fig:cen_cert"}). From the definition of $r^*(x')$ and $R$, we know that the sum of the probability masses of $f(x' + \mathcal{P})$ enclosed by the two balls must be strictly greater than one. Thus, they must have an element $y$ in common. Since $d$ satisfies the triangle inequality, we have: $$\begin{align*}
|
| 25 |
+
d(\bar{f}(x), \bar{f}(x')) & \leq d(\bar{f}(x), y) + d(y, \bar{f}(x'))\\
|
| 26 |
+
& \leq R + r^*(x').
|
| 27 |
+
\end{align*}$$ Since the ball $\mathcal{B}(\bar{f}(x), R)$ encloses more than half of the probability mass of $f(x+\mathcal{P})$, the minimum ball with at least half the probability mass cannot have a radius greater than $R$, i.e., $r^*(x') \leq R$. Therefore, $d(\bar{f}(x), \bar{f}(x')) \leq 2R$. ◻
|
| 28 |
+
:::
|
| 29 |
+
|
| 30 |
+
::: wrapfigure
|
| 31 |
+
r0.45 [Figure: geometric illustration of the balls $\mathcal{B}(\bar{f}(x), R)$ and $\mathcal{B}(\bar{f}(x'), r^*(x'))$ used in the certificate; image not recovered]{width="45%"}
|
| 32 |
+
:::
|
| 33 |
+
|
| 34 |
+
The above result, in theory, gives us a smoothed version of $f$ with a provable guarantee of robustness. However, in practice, it may not be feasible to obtain $\bar{f}$ just from samples of $f(x + \mathcal{P})$. Instead, we will use some procedure that approximates the smoothed output with high probability. For some $\Delta \in [0, 1/2]$, let $\hat{r}(x, \Delta)$ be the radius of the smallest ball that encloses at least $1/2+\Delta$ probability mass of $f(x+\mathcal{P})$, i.e., $$\hat{r}(x, \Delta) = \underset{z'}{\min} \; r \; \text{s.t.} \; \mathbb{P} [f(X) \in \mathcal{B}(z', r)] \geq \frac{1}{2} + \Delta.$$ Now define a probabilistic approximation $\hat{f}(x)$ of the smoothed function $\bar{f}$ to be a point $z \in M$, which with probability at least $1-\alpha_1$ (for $\alpha_1 \in [0, 1]$), encloses at least $1/2-\Delta$ probability mass of $f(x + \mathcal{P})$ within a ball of radius $\hat{r}(x, \Delta)$. Formally, $\hat{f}(x)$ is a point $z \in M$, such that, with at least $1-\alpha_1$ probability, $$\mathbb{P}\left[ f(X) \in \mathcal{B}(z, \hat{r}(x, \Delta))\right] \geq \frac{1}{2} - \Delta.$$
|
| 35 |
+
|
| 36 |
+
Defining $\hat{R}$ to be the radius of a ball centered at $\hat{f}(x)$ that satisfies: $$\begin{align}
|
| 37 |
+
\label{eq:R_hat}
|
| 38 |
+
\forall x' \text{ s.t. } \|x - x'\|_2 \leq \epsilon_1, \; \mathbb{P} [f(X') \in \mathcal{B}(\hat{f}(x), \hat{R})] > \frac{1}{2} + \Delta,
|
| 39 |
+
\end{align}$$ we can write a probabilistic version of theorem [1](#thm:main-exact){reference-type="ref" reference="thm:main-exact"},
|
| 40 |
+
|
| 41 |
+
::: {#thm:main-prob .theorem}
|
| 42 |
+
**Theorem 2**. *With probability at least $1-\alpha_1$, $$\forall x' \text{ s.t. } \|x - x'\|_2 \leq \epsilon_1, \; d(\hat{f}(x), \hat{f}(x')) \leq 2\hat{R},$$*
|
| 43 |
+
:::
|
| 44 |
+
|
| 45 |
+
The proof of this theorem is in the appendix, and logically parallels the proof of theorem [1](#thm:main-exact){reference-type="ref" reference="thm:main-exact"}.
|
| 46 |
+
|
| 47 |
+
For an input $x$ and a given value of $\Delta$, sample $n$ points independently from a Gaussian distribution $x + \mathcal{N}(0, \sigma^2 I)$ around the point $x$ and compute the function $f$ on each of these points. Let $Z = \{z_1, z_2, \ldots, z_n\}$ be the set of $n$ samples of $f(x + \mathcal{N}(0, \sigma^2 I))$ produced in the output space. Compute the minimum enclosing ball $\mathcal{B}(z, r)$ that contains at least half of the points in $Z$. The following lemma bounds the radius $r$ of this ball by the radius of the smallest ball enclosing at least $1/2 + \Delta_1$ probability mass of the output distribution (proof in appendix).
|
| 48 |
+
|
| 49 |
+
::: {#lem:radius-bnd .lemma}
|
| 50 |
+
**Lemma 1**. *With probability at least $1 - e^{-2n\Delta_1^2}$, $$r \leq \hat{r}(x, \Delta_1).$$*
|
| 51 |
+
:::
|
| 52 |
+
|
| 53 |
+
Now, sample a fresh batch of $n$ random points. Let $p_{\Delta_1} = \rho - \Delta_1$, where $\rho$ is the fraction of points that fall inside $\mathcal{B}(z, r)$. Then, by Hoeffding's inequality, with probability at least $1 - e^{-2n\Delta_1^2}$, $$\mathbb{P}\left[ f(X) \in \mathcal{B}(z, r) \right] \geq p_{\Delta_1}.$$ Let $\Delta_2 = 1/2 - p_{\Delta_1}$. If $\max (\Delta_1, \Delta_2) \leq \Delta$, the point $z$ satisfies the conditions in the definition of $\hat{f}$, with at least $1 - 2e^{-2n\Delta_1^2}$ probability. If $\max (\Delta_1, \Delta_2) > \Delta$, discard the computed center $z$ and abstain. In our experiments, we select $\Delta_1, n$ and $\alpha_1$ appropriately so that the above process succeeds easily.
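A minimal sketch of this estimation-and-abstain procedure is given below; it assumes an $\ell_2$ output metric and a routine `meb_fn` that returns the center and radius of a $\beta$-approximate minimum enclosing ball of half the samples (such as the factor-2 approximation sketched later). The names and structure are illustrative, not an exact implementation.

```python
import numpy as np

def smooth(f, x, sigma, n, alpha1, delta, meb_fn):
    """Sketch of the smoothing step: estimate the center-smoothed output of f at x,
    abstaining when the sample-based guarantees are too weak."""
    dist = lambda a, b: np.linalg.norm(a - b)               # output-space metric (l2 here)
    sample = lambda: [f(x + sigma * np.random.randn(*x.shape)) for _ in range(n)]
    delta1 = np.sqrt(np.log(2 / alpha1) / (2 * n))
    z, r = meb_fn(sample())                  # candidate center and radius from the first batch
    fresh = sample()                         # fresh batch for an unbiased coverage estimate
    rho = np.mean([dist(z, zi) <= r for zi in fresh])
    p_delta1 = rho - delta1                  # Hoeffding lower bound on the true coverage
    delta2 = 0.5 - p_delta1
    return None if max(delta1, delta2) > delta else z       # None = abstain
```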
|
| 54 |
+
|
| 55 |
+
Computing the minimum enclosing ball $\mathcal{B}(z, r)$ exactly can be computationally challenging, as for certain metrics, it is known to be NP-complete [@SHENMAIER201581]. Instead, we approximate it by computing a ball $\beta\text{-MEB}(Z, 1/2)$ that contains at least half the points in $Z$, but has a radius that is within a $\beta$ factor of the optimal radius $r$. We modify theorem [1](#thm:main-exact){reference-type="ref" reference="thm:main-exact"} to account for this approximation (see appendix for proof).
|
| 56 |
+
|
| 57 |
+
<figure id="alg:certify" data-latex-placement="t">
|
| 58 |
+
<div class="minipage">
|
| 59 |
+
<div class="algorithm">
|
| 60 |
+
<div class="algorithmic">
|
| 61 |
+
<p><span class="math inline"><em>x</em> ∈ ℝ<sup><em>k</em></sup>, <em>σ</em>, <em>Δ</em>, <em>α</em><sub>1</sub></span>.<br />
|
| 62 |
+
<span class="math inline"><em>z</em> ∈ <em>M</em></span>.<br />
|
| 63 |
+
Set <span class="math inline"><em>Z</em> = {<em>z</em><sub><em>i</em></sub>}<sub><em>i</em> = 1</sub><sup><em>n</em></sup> s.t. <em>z</em><sub><em>i</em></sub> ∼ <em>f</em>(<em>x</em> + 𝒩(0, <em>σ</em><sup>2</sup><em>I</em>))</span>. Set <span class="math inline">$\Delta_1 = \sqrt{ \ln \left( 2 / \alpha_1 \right) / 2n}$</span>. Compute <span class="math inline"><em>z</em> = <em>β</em></span>-MEB<span class="math inline">(<em>Z</em>, 1/2)</span>. Re-sample <span class="math inline"><em>Z</em></span>. Compute <span class="math inline"><em>p</em><sub><em>Δ</em><sub>1</sub></sub></span>. Set <span class="math inline"><em>Δ</em><sub>2</sub> = 1/2 − <em>p</em><sub><em>Δ</em><sub>1</sub></sub></span>. If <span class="math inline"><em>Δ</em> < max (<em>Δ</em><sub>1</sub>, <em>Δ</em><sub>2</sub>)</span>, discard <span class="math inline"><em>z</em></span> and abstain.</p>
|
| 64 |
+
</div>
|
| 65 |
+
</div>
|
| 66 |
+
</div>
|
| 67 |
+
<div class="minipage">
|
| 68 |
+
<div class="algorithm">
|
| 69 |
+
<div class="algorithmic">
|
| 70 |
+
<p><span class="math inline"><em>x</em> ∈ ℝ<sup><em>k</em></sup>, <em>ϵ</em><sub>1</sub>, <em>σ</em>, <em>Δ</em>, <em>α</em><sub>1</sub>, <em>α</em><sub>2</sub></span>.<br />
|
| 71 |
+
<span class="math inline"><em>ϵ</em><sub>2</sub> ∈ ℝ</span>.<br />
|
| 72 |
+
Compute <span class="math inline"><em>f̂</em>(<em>x</em>)</span> using algorithm <a href="#alg:smooth" data-reference-type="ref" data-reference="alg:smooth">[alg:smooth]</a>. Set <span class="math inline"><em>Z</em> = {<em>z</em><sub><em>i</em></sub>}<sub><em>i</em> = 1</sub><sup><em>m</em></sup> s.t. <em>z</em><sub><em>i</em></sub> ∼ <em>f</em>(<em>x</em> + 𝒩(0, <em>σ</em><sup>2</sup><em>I</em>))</span>. Compute <span class="math inline">$\Tilde{\mathcal{R}} = \{d(\hat{f}(x), f(z_i)) \mid z_i \in Z\}$</span>. Set <span class="math inline"><em>p</em> = <em>Φ</em>(<em>Φ</em><sup>−1</sup>(1/2 + <em>Δ</em>) + <em>ϵ</em><sub>1</sub>/<em>σ</em>)</span>. Set <span class="math inline">$q = p + \sqrt{ \ln ( 1 / \alpha_2 ) / 2m}$</span>. Set <span class="math inline"><em>R̂</em> = <em>q</em></span>th-quantile of <span class="math inline">$\Tilde{\mathcal{R}}$</span>. Set <span class="math inline"><em>ϵ</em><sub>2</sub> = (1 + <em>β</em>)<em>R̂</em></span>.</p>
|
| 73 |
+
</div>
|
| 74 |
+
</div>
|
| 75 |
+
</div>
|
| 76 |
+
<figcaption>Certify</figcaption>
|
| 77 |
+
</figure>
|
| 78 |
+
|
| 79 |
+
::: {#thm:main-approx-prob .theorem}
|
| 80 |
+
**Theorem 3**. *With probability at least $1-\alpha_1$, $$\forall x' \text{ s.t. } \|x - x'\|_2 \leq \epsilon_1, \; d(\hat{f}(x), \hat{f}(x')) \leq (1 + \beta)\hat{R}$$ where $\alpha_1 = 2e^{-2n\Delta_1^2}$.*
|
| 81 |
+
:::
|
| 82 |
+
|
| 83 |
+
We use a simple approximation that works for all metrics and achieves an approximation factor of two, producing a certified radius of $3 \hat{R}$. It computes a point from the set $Z$, instead of a general point in $M$, that has the minimum median distance from all the points in the set (including itself). This can be achieved using $O(n^2)$ pair-wise distance computations. To see how the factor 2-approximation is achieved, consider the optimal ball with radius $r$. By triangle inequality of $d$, each pair of points is at most $2r$ distance from each other. Thus, a ball with radius $2r$, centered at any one of these points will cover every other point in the optimal ball. Better approximations can be obtained for specific norms, e.g., there exists a $(1 + \epsilon)$-approximation algorithm for the $\ell_2$ norm [@approx-core-set-2002]. For graph distances or when the support of the output distribution is a small discrete set of points, the optimal radius can be computed exactly using the above algorithm. The smoothing procedure is outlined in algorithm [\[alg:smooth\]](#alg:smooth){reference-type="ref" reference="alg:smooth"}.
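The factor-2 approximation itself can be sketched as follows; it can be plugged into the smoothing sketch above as `meb_fn`, and an $\ell_2$ output metric is assumed for illustration.

```python
import numpy as np

def approx_meb(Z):
    """Factor-2 approximation of the minimum ball enclosing at least half of Z:
    return the sample point whose median distance to all points (itself included)
    is smallest, together with that median distance as the radius."""
    Z = [np.asarray(z, dtype=float) for z in Z]
    D = np.array([[np.linalg.norm(a - b) for b in Z] for a in Z])  # O(n^2) pairwise distances
    med = np.median(D, axis=1)        # median distance of each candidate center
    i = int(np.argmin(med))
    return Z[i], float(med[i])        # (center, radius covering at least half the points)
```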
|
| 84 |
+
|
| 85 |
+
Given an input $x$, compute $\hat{f}(x)$ as described above. Now, we need to compute a radius $\hat{R}$ that satisfies condition [\[eq:R_hat\]](#eq:R_hat){reference-type="ref" reference="eq:R_hat"}. As per bound [\[eq:cohen_bnd\]](#eq:cohen_bnd){reference-type="ref" reference="eq:cohen_bnd"}, in order to maintain a probability mass of at least $1/2 + \Delta$ for any $\epsilon_1$-size perturbation of $x$, the ball $\mathcal{B}(\hat{f}(x), \hat{R})$ must enclose at least $$\begin{align}
|
| 86 |
+
\label{bnd:prob}
|
| 87 |
+
p = \Phi \left( \Phi^{-1} \left( \frac{1}{2} + \Delta \right) + \frac{\epsilon_1}{\sigma} \right)
|
| 88 |
+
\end{align}$$ probability mass of $f(x+\mathcal{P})$. Again, just as in the case of estimating $\bar{f}$, we may only compute $\hat{R}$ from a finite number of samples $m$ of the distribution $f(x + \mathcal{P})$. For each sample $z_i \sim x + \mathcal{P}$, we compute the distance $d(\hat{f}(x), f(z_i))$ and set $\hat{R}$ to be the $q$th-quantile $\Tilde{R}_q$ of these distances for a $q$ that is slightly greater than $p$ (see equation [\[bnd:quant\]](#bnd:quant){reference-type="ref" reference="bnd:quant"} below). The $q$th-quantile $\tilde{R}_q$ is a value larger than at least $q$ fraction of the samples. We set $q$ as, $$\begin{align}
|
| 89 |
+
\label{bnd:quant}
|
| 90 |
+
q = p + \sqrt{\frac{ \ln \left( 1 / \alpha_2 \right)}{2m}},
|
| 91 |
+
\end{align}$$ for some small $\alpha_2 \in [0, 1]$. This guarantees that, with high probability, the ball $\mathcal{B}(\hat{f}(x), \Tilde{R}_q)$ encloses at least $p$ fraction of the probability mass of $f(x + \mathcal{P})$. We prove the following lemma by bounding the cumulative distribution function of the distances of $f(z_i)$s from $\hat{f}(x)$ using the Dvoretzky--Kiefer--Wolfowitz inequality.
|
| 92 |
+
|
| 93 |
+
::: {#lem:compute-bigR .lemma}
|
| 94 |
+
**Lemma 2**. *With probability $1-\alpha_2$, $$\mathbb{P}\left[ f(X) \in \mathcal{B}(\hat{f}(x), \Tilde{R}_q) \right] > p.$$*
|
| 95 |
+
:::
|
| 96 |
+
|
| 97 |
+
Combining with theorem [3](#thm:main-approx-prob){reference-type="ref" reference="thm:main-approx-prob"}, we have the final certificate: $$\forall x' \text{ s.t. } \|x - x'\|_2 \leq \epsilon_1, \; d(\hat{f}(x), \hat{f}(x')) \leq (1 + \beta)\hat{R},$$ with probability at least $1-\alpha$, for $\alpha = \alpha_1 + \alpha_2$. In our experiments, we set $\alpha_1 = \alpha_2 = 0.005$ to achieve an overall success probability of $1 - \alpha = 0.99$, and calculate the required $\Delta_1, \Delta_2$ and $q$ values accordingly. We set $\Delta$ to be as small as possible without violating $\max (\Delta_1, \Delta_2) \leq \Delta$ too often. We use a $\beta=2$-approximation for computing the minimum enclosing ball in the smoothing step. Algorithm [1](#alg:certify){reference-type="ref" reference="alg:certify"} provides the pseudocode for the certification procedure.
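A small numpy/scipy sketch of the radius-estimation step of this certification procedure follows (illustrative only; the function name, the explicit abstention when $q \geq 1$, and the conservative index-based quantile are assumptions consistent with the text, not the authors' code).

```python
import numpy as np
from scipy.stats import norm

def estimate_radius(dists, eps1, sigma, delta, alpha2):
    """Estimate R_hat from m sampled distances d(f_hat(x), f(z_i)).

    p follows eq. (bnd:prob); q adds the Dvoretzky-Kiefer-Wolfowitz slack of
    eq. (bnd:quant); R_hat is an empirical quantile that exceeds at least a
    q fraction of the samples.
    """
    m = len(dists)
    p = norm.cdf(norm.ppf(0.5 + delta) + eps1 / sigma)
    q = p + np.sqrt(np.log(1.0 / alpha2) / (2.0 * m))
    if q >= 1.0:
        return None                      # abstain: not enough samples for this q
    k = int(np.ceil(q * m))              # smallest index covering a q fraction
    return np.sort(dists)[k - 1]
```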
|
| 98 |
+
|
| 99 |
+
Although we defined our procedure for metric outputs, our analysis does not critically use all the properties of a metric. For instance, we do not require $d(z_1, z_2)$ to be strictly greater than zero for $z_1 \neq z_2$. An example of such a distance measure is the total variation distance, which returns zero for two vectors that differ by a constant amount on each coordinate. Our proofs do implicitly use the symmetry property, but asymmetric distances can be converted to symmetric ones by taking the sum or the max of the distances in either direction. Perhaps the most important property of metrics that we use is the triangle inequality, as it is critical for the robustness guarantee of the smoothed function. However, even this constraint may be partially relaxed. It is sufficient for the distance function $d$ to satisfy the triangle inequality approximately, i.e., $d(a,c) \leq \gamma (d(a, b) + d(b, c))$, for some constant $\gamma$. The theorems and lemmas can be adjusted to account for this approximation, e.g., the bound in theorem [1](#thm:main-exact){reference-type="ref" reference="thm:main-exact"} will become $2 \gamma R$. A commonly used distance measure for comparing images and documents is the cosine distance, defined as one minus the inner product of the two vectors after normalization. This distance can be shown to be proportional to the squared Euclidean distance between the normalized vectors, which satisfies the relaxed version of the triangle inequality with $\gamma = 2$.
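For concreteness, the $\gamma = 2$ claim for the squared Euclidean distance (and hence for the cosine distance on normalized vectors) follows from the ordinary triangle inequality together with the bound $2uv \leq u^2 + v^2$: $$\|a - c\|_2^2 \leq \left( \|a - b\|_2 + \|b - c\|_2 \right)^2 = \|a - b\|_2^2 + 2\|a - b\|_2 \|b - c\|_2 + \|b - c\|_2^2 \leq 2 \left( \|a - b\|_2^2 + \|b - c\|_2^2 \right).$$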
|
| 100 |
+
|
| 101 |
+
These relaxations extend the scope of center smoothing to many commonly used distance measures that need not necessarily satisfy all the metric properties. For instance, perceptual distance metrics measure the distance between two images in some feature space rather than image space. Such distances align well with human judgements when the features are extracted from a deep neural network [@ZhangIESW18] and are considered more natural measures for image similarity. For two images $I_1$ and $I_2$, let $\phi(I_1)$ and $\phi(I_2)$ be their feature representations. Then, for a distance function $d$ in the feature space that satisfies the relaxed triangle inequality, we can define a distance function $d_{\phi}(I_1, I_2) = d(\phi(I_1), \phi(I_2))$ in the image space, which also satisfies the relaxed triangle inequality. For any image $I_3$, $$\begin{align*}
|
| 102 |
+
d_{\phi}(I_1, I_2) &= d(\phi(I_1), \phi(I_2)) \\
|
| 103 |
+
&\leq \gamma \left( d(\phi(I_1), \phi(I_3)) + d(\phi(I_3), \phi(I_2)) \right)\\
|
| 104 |
+
&= \gamma \left( d_{\phi}(I_1, I_3) + d_{\phi}(I_3, I_2) \right).
|
| 105 |
+
\end{align*}$$
|
2106.13265/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="Electron" modified="2021-03-29T17:15:06.854Z" agent="5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.5.1 Chrome/89.0.4389.82 Electron/12.0.1 Safari/537.36" etag="7ZGiIenQ5ZtP4DOG08Ex" version="14.5.1" type="device"><diagram id="7gwCu6ZgaY8Ys19Y_M54" name="Page-1">7V3bkqM4Ev0aR8w+lAJJSMBjXbrmst0xtdMTOzNPG9hgm2lsPIDr0l+/EpYwEsJgF7ioS3d0dVlgAdLJzJOpVDLB16vHH1N/s/ySBGE8QVbwOME3E4QgtCj7j7c87VpcgnYNizQKxEn7hq/R91A0WqJ1GwVhppyYJ0mcRxu1cZas1+EsV9r8NE0e1NPmSaxedeMvwlrD15kf11v/iIJ8KZ/C2rf/FEaLZV4+sDiy8uXJoiFb+kHyUGnCnyb4Ok2SfPfb6vE6jPngyXHZfe+24Wh5Y2m4zrt84Tf862qF8m84//0BZ9Hi8y8b90JMxr0fb8UDTxCNWX9XG37L+ZMYB/rPlt/n1cpPF9F6gi/ZUXvzyH5a+5/yJPbbQvxfdDVP2B3yKYqTVOltgrAd8L/N353Khl834TpbRnPeE39IZP17Ow3TtZ8zeCDrOt5meZjKr7EzpnpXrG13J7Xmzb4NKY+N0mS7DkI+gpAdflhGefh148/40QcGeNa2zFexOBz70zC+S7IojxI2RjczNjPslvDVfZjmEcPUZ+2EPOE9+HG0MJ5+KQ5MkzxPVnwgozi+LkcRz725w+/tKsvT5FtYOYIp9nBQPlAVJAI3/BrhY6VJgObHMFmFefrEThFHEXV3XxESjCxv9/lhLw+2LUC+rMiC7YhGX8jgoux7D1P2i0DqEaiFrwa1v6d+tI7WC9bRp/V9lCbrVVhA8B2hdB7S2cyE0sDxpkxd9oJSiNpRCjE1oJQMhVLciNJeoPV1ycYrKICQpNyKIeuHu/9e/+socDUCyWoHkjbPAQndwDbNs4ummNJToFdiqhV9O5D2ASTbVoBEYR1IhBjUnecNhCNqwJE2b2HASIv4GMbT5OHTvuGqaGAH5JCVUyQJDTJOmsX/8q+v2UP8Kea8+PAXxwcg8uPNo8DL7tOT+NQ4F1myTWfhgQcmQnJyprbD/NCJYqr44x+c26oSsAxzVzamYezn0b1K/kwzKq5xl0SFKAvwXDgIQK/6R8GSTTHQ+twNhuimyt60nokLXKvyByodE+rqHe8Gr9bxZZr6T5XTNvyErPmJKFT1KnE8DdC7HvfwLof9dMQ750B8+BjlfxZIxh4UnzmyLyxgYfF5D23+4any4S5MI/aYXBcVbR3kpyIyfykSc6T8tIoFJiMTC+hQYLu2ZwmhoI5qqz0EsF3BNjlNRphbpvaLNNPek0xgqJoIyZA73xdWPLVhZMityVAaLqKdj2SQpcLyqva91eauoiAo5C0Ns+i7Py3649AVA8g6J1cTcmME80HB16116d+Li0yqLrTJijMZdjyqjno/Gp4Cz60qeFftIZnPszDXpreXCfUO6sAkzZfJIln7cVURqo7B/pzPCadLRePfYZ4/ibiLv80TFQNNygu2KK+MiVp+yeMvnMXFyeybbLyNYtl5s840aOt1ILtbJ+tw1yL6eibbcDqqVRt2VKudIfY8J9iqSXi0vmfDlo1YwL3+BBzbSCVBFz1JOFJ6dZgmOZeIS7syvsDG1y+fnhOgyJb+plAws5xfs9WzZP0LlQSNzgmhBDqhPFNcB9YUh8lLtWZhiM4RBmnUSN19URtYCueHKo/wHKBYorqjim0AXc92MHWZtFgEmQIgNmAExYG2+DmUGwtf2o8t+T6psH3J/BuYft/cvdUaSe7eao0ktx0NySdYZcXY1ZDUlcbret1GmgLui8Z7qjjZQtEPSsth3bfdJFl+sSfn1jJJvo3YhJdy/GwbzjxtG2LVF+zJiNNzmWxUZ2EDmOxs46+NvQkjzjtLF9MfHDaY7IbYo1jKr//iv/PuLG4wL7LCtvIvQVRcvHLsQagGftThqqFGBcL0PmI6DFF/xS3iepptVB6wu9tD9OBND9DVNooDZSmyfUA0aW+U4pq8J+zUeVzYxXnhDSnkyRafb/1VFHMJ+ymM70Pe9eRQKH8Kg2BujGpBy8GeJF6V9h1BUTzOuh8H+2FFfC1B0duEMZga9XGLMJ6J7bjNyuRZBAeRcxCcU2hMMyk6nagguyNR6ew2n42oQAoclatAfGLIkdpaaA9qlqeBq/RmfjqQ6gWTyU13uTvajBOLMDtecURU8cSWAxyntnJRzR6wVD/HuEo7lNB2WGvws80utWgePRb5FptK8J9dI9pkYWU94CA/K7OAuPQFfrYsdaWmURnrda6upKN65c++LQrdqqys8z8mf5l51tF68XsR5ONrGdGqSHOS/99EqwUbrTiasp/RdMV++ptNzHQO93UZZb5Nt+ucPc//sp2xz0B2v2jQ7ooV8DzLrPc1C3Nb/OlOOlET6TyoRrBBi2BrKCTVI+41JFWQYQqByDBJlCXX2ymfsGk5779u8zhayziHYtR/vvrCLnMXh5ypfPXZBCqQoM0gEqHU3dEkZaxFO7JM0ug768qXt21ed+9xYb8W0dlhmX26IHUczS0fW74Ji1c3zo6RaHJVCk3/yCuTHpm+g9h2POhCTD09f8oGZTQGQ+JKB6iCW+RKQlNF7r61f+x6H9h93dg9PrboOAB6FZiqRhtRwHh1iVIPvzxIcbOrLULadffP6M01plalw+ZL6Y6ZBgc/8AJiNJ41gPihDTHqCQjUA5ZdoW8ao/UsIE1bZfY9F8jklur0OwigwebflPb5QdVekKodjzUXAruKNaxqHegCF9cTQV6K0mH0YRbfm1n0MODxAGkWHc2ZxRAQWNpFOgKraEoz/tCKr0krEgQB1vgX9FqCJudVhKbEhA9F+JYVIcEYQLpXhKoTizEan3/QIf7+Fgg7Jg6wOpL0MzP0EUSjMTPg2uhghJl+PahMDUiVhrX/UeoQcv5QpmNWpofB3z1QSIFn2w50XdejDragyjUhAVy1OvsErhpu2SkWIciFEFoYUc81LJ1AB2Al4DgUrDvEvxuE/7jx7K4SqtaIaQU+Bo7DFAQh2HHOKfEf4dU3KfFuA0LrSHydIm2bIq4fuH1HuBVHHcBw6CJKGdFELt8vN2rYNtcHMGWu97Mbe+Wn+UWyvrj96effWG+Xm80Ju/yPX4F4ngvTIdGrhuYzZHgxtxNANTTMHFLg1ldKKS1PrCKN7L/fP7o6BIU7pXlVFdFJmezVXav7dLC2TPZJNY8dOMNt45apse3buMW8jidDjKjRDts9cdc2tbXwsdstlb2v9DC7Hhye8uxUfgdr/rOueIJwEydPRkSPI/+8lL8+8s+hR9VMQFlb5pkIsoGNqiltWI1IsCsrVQLKuxg+Yd2u
+yKyatdvxR6EtD75bJzziRaVL6hLK9XR1wRu+N++wmBePQzGxt2ucxOTrtD3hvdmH0idNMsBLlLneTEgWljYnZnf//oaRp1YGBC7FktTJqCC+24LuMPNhclWj2JDZevGheMyV8oqaV9+vWNn31y1kM+GHQknb+ycPTFPjvlauMENrJiM6Y4/fp6WDQ0OYSZ4K6mRTp3VNrPX1nW8kgH3IBvIRUBLfqYys67KWT3Twog7mAw016F6mzJw4+d+YTzPIgFN4YCS+FSg30iO6jCVOj4O5/yyyU4yrssql1ZTo5TI1eOCV+QE/kOGA8BQXzhzeiJAs+QclrgmKdKjJukO4P05hU2Wh2IM6u4htoBpCcwCdChhG+0O/oGE7ef1IsyKyUbWXbQJC/19XtuzSZNZmGUdLE+DoTnSuJTFB7oXG+gD/hZk3kold06BP/NYQZV31QN0nqlC54CC8MK7/TsVoHkdJQGgaafdL39/+cd6vFx6m+9Te333RxBtZcmn0YRRkFqzDrpIwSymZmfh2CALdtR+Hd2Z7qleANQ2DdpIKfv1/HoBpjmFdXf9ahfB0QM3s2SZpPmYi/+QPisHEM9Tp/25kZvhQy/ElPjxlsnBl2iWJllZL+AcpKDPBC9tdeRgglfJm3sw9cRzgKPVQDFUofUo8AxZDM5gJr1DgsewCx/djHpLpbjnGX3F5A9YAxe/3sUTrXxPmTp6dMlbAoFdqbWoFoqxPReQaoR9EJtPoMqzZXmsxnu23UPnD1NTiNQ5wvgrAZKmJIRTan0iZ5BVHEYzXKiUJtCWcTACOmsdjj7Q9xbIE6kUfNWk9c0eH/yh4hBRClwtPgCJsiJZjw84HoCwbhWwB6Tb1n9h+74K5rwwoTixRm7jTL8+m2/bNsC4qew9wYypnsgBMASUNgV9edTLUG2/7xL4WhIHwc7Be7YdArQRcCEG9vBEwOmwF/Asr4o4UhL2lBzaVVJe8vAaKWdwkyfqlfgbxaqVc1NToO2QJR6P/FkEUKdSClZ7h4le1LVzKU6L51jsxU+NuBBCAHpurK1vliSZoFL43g9GzIRLqR1zQc0LCKqLD7Wt+xd64bMBefB7W2MTPPgjqeMlkjqK/YR2jWOUmtUF9RynATM8TGsEpqW23uVh2h3Qolpq99Kr+jU/80HLmHLmXl9TlVmTBAxZYvacAyASBt/ls9+E92GcbMR7Et/jCNym/ip8SNJvh57XuFVktBWFy0yI5myKM+wrKV8AKdNVDa9MgMjA0wdT3YaXuJzVYxtdmkTx6TSX7kBKRHtEBY3Mo8N67oQCXOz0lDqBHECI3eQ3Et1v7HG7ilEazhLh7q9WPM/QvxAKlH9L6NC6Of884bsrstBPZ8vWyvBHavaPd+ucZAu0fT/jfZeOUVA61GN65WZDMRo9LbQf0jmt2XX0VVkI6FiHQuNnNhDHhtxdzdz1vYpuml/Dm1yEK8IaCw1eV9B+vEjSKF+uxrLSflBZ9LJh0pFvMBpHnp3xgZsDIj2/J6b1XTB3aRJsZ7kRPF/8tb8IhZ9rJgQapsSOwGPgVJrMdjjxt3/K8NzpRrwH00w8TZdB1GacTdntvVRDNOLrLO+nfhfWl3Y0vmNLbW8xvvTguvTIjS/UryPyQobNcze9r9oPssJIicR2kwEuqhjMQz/fpqNJdzuoM3pZ5HPkO3JOhXGZnqh+YzibLFXkCFITXNI5IiXTDExBqVLBVtRrWxUW3aaK3Z2te0glYOV2yka8dtPEh3Yuvr5QGUWq9qV2uRX06HQjl5exqmhcVRG6rt5xTyrX1vJXiXMOh6eefvfAnJlwxFq01CI9JA1Dou2x6Clp2IFAhY0kL2dQsiPN/zop6P9e9asMTI5Gv9qephQ1XVXXip0JLkEAon16J6rpcVzPvOh7h4eM+5xV9Z5lJ+CRCwlvIfXd0V5CTmWty4qwEIOs9FKByDjTdb/mLglqY39CZOcI66pFdvoYZz3D1DDO9lnH2VRi+S1JlJZBcVCiyiW8ISSKvrBEmV8Dp+UJWb01/ZJMi9ZbY/xhyCt3broTBVBevVLRXoJDaf2tIhAasDZYpFdmt7/kCy2YrlVGxZZuS/vmLbePreAHwrHvJgP7Mo/ZGL6THYjPzN7QfK4OuRuHRe+oNz43CcJQCqLDSlBZAV+8wqy1fnyH96WZS9bXqrsVl7yUrdZk//60ZZ5vskIEmF27fXh4AJm/8C+iHMwS/jY1Xq/jdr2KPn9a/2fLfr1g//wsC7kndbt7+VqyDLIIbHgd+wHmVBz1gFcp+WZZ6t4WaQuqVMQDMq5VBQAd7P3xZ9mM8vpTwnqgB5Shobn0LIQU7BOvXEINb3k6SwYX+5gmfBb3AQNeiHGX0o8//R8=</diagram></mxfile>
|
2106.13265/main_diagram/main_diagram.pdf
ADDED
|
Binary file (29.7 kB). View file
|
|
|
2106.13265/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,36 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Disease Progression Modeling (DPM) [@wang2019survey] aims to characterize the progression of a disease and its comorbidities over time using a wide range of analytics models including disease staging [@sun2019probabilistic] , patient trajectory analytics [@dey2021impact], prediction [@prithwish2021amia], and time-to-event estimations [@liu2018early] for key disease-related events. DPM has applications throughout the healthcare ecosystem, from providers (e.g., decision support for patient staging), to payers (e.g., care management), and pharmaceutical companies (e.g., clinical trial enrichment). But the complexity of building effective DPM models can be a road-block for their rapid experimentation and adoption. Some of this is addressed by standardization of data model and tooling for data analysis and cohort selection [@hripcsak2016characterizing]. However, there are still unmet needs to facilitate the development of advanced machine learning techniques such as deep learning with additional requirements such as experiment tracking and reproducibility [@mcdermott2019reproducibility]. Furthermore, to accelerate DPM research, a data scientist's available tools should include a framework for deploying models as cloud-ready microservices for rapid prototyping and dissemination [@makinen2021needs].
|
| 4 |
+
|
| 5 |
+
::: wrapfigure
|
| 6 |
+
r0.3 {width="\\linewidth"}
|
| 7 |
+
:::
|
| 8 |
+
|
| 9 |
+
In this demonstration, we introduce the Disease Progression Modeling Workbench 360 (DPM360) open source project (<https://ibm.github.io/DPM360/>). DPM360 is an easy-to-install system that supports the research and development of DPM models (Figure [\[fig:dpm360\]](#fig:dpm360){reference-type="ref" reference="fig:dpm360"}). It manages the entire modeling life cycle, from data analysis (e.g., cohort identification) to machine learning algorithm development and prototyping. DPM360 augments the advantages of data model standardization and tooling (OMOP-CDM, Athena, ATLAS) provided by the widely adopted OHDSI [@hripcsak2015observational] initiative with a powerful machine learning training framework and a mechanism for rapid prototyping through automatic deployment of models as containerized services into a cloud environment. This enables quicker and more flexible implementation and validation of the models.
|
| 10 |
+
|
| 11 |
+
The architecture, shown in Figure [1](#fig:dpm360-full-arch-public){reference-type="ref" reference="fig:dpm360-full-arch-public"} has four main components.
|
| 12 |
+
|
| 13 |
+
<figure id="fig:dpm360-full-arch-public" data-latex-placement="h">
|
| 14 |
+
<img src="dpm360-full-arch-public" style="width:90.0%" />
|
| 15 |
+
<figcaption>DPM360 Architecture</figcaption>
|
| 16 |
+
</figure>
|
| 17 |
+
|
| 18 |
+
\(1\) **Lightsaber**: an extensible training framework which provides blueprints for the development of disease progression models (DPM). It is designed ground up using state-of-the art open source tools [@scikit-learn; @falcon2019pytorch] to provide a simple modular and unified model training framework to support some of the common use cases for DPM. Lightsaber contains four key modules:
|
| 19 |
+
|
| 20 |
+
- *data ingestion* modules to support standardized methods of ingesting data
|
| 21 |
+
|
| 22 |
+
- *model trainers* to support standardized model training incorporating best practices
|
| 23 |
+
|
| 24 |
+
- *metrics* to support pre-built DPM problem specific model evaluation
|
| 25 |
+
|
| 26 |
+
- in-built model tracking and support for post-hoc model evaluation by integrating with a *Model Registry*.
|
| 27 |
+
|
| 28 |
+
Users can select specific modules and integrate them into their modeling workflow. Lightsaber also comes with a reusable library of state-of-the-art machine and deep learning algorithms for DPM (e.g. LSTM [@gers2000learning] for in-hospital mortality predictions).
|
| 29 |
+
|
| 30 |
+
Lightsaber integrates naturally with ATLAS using a client called Lightsaber Client for ATLAS (*LCA*), enabling automated extraction of features from the [OMOP CDM](https://www.ohdsi.org/data-standardization/the-common-data-model/) model, thus complementing the ease and flexibility of defining standardized cohorts using the ATLAS graphical user interface with the ability to quickly develop deep learning algorithms for DPM in Lightsaber using Python. *LCA* can be configured with the cohort details, covariate settings, and model training settings for Lightsaber to extract the right set of features in formats currently supported in the OHDSI stack (see the [FeatureExtraction](https://github.com/OHDSI/FeatureExtraction) and [PatientLevelPrediction](https://github.com/OHDSI/PatientLevelPrediction) R packages, accessed via the [Rpy2](https://pypi.org/project/rpy2/) interface). Additionally, *LCA* uses custom queries and algorithms to extract and transform complex time series features into the formats required for DPM in Lightsaber. For each feature extraction process, a YAML configuration file is automatically generated. This file specifies the outcomes, covariate types, and file locations of the extracted feature files. Thus, Lightsaber allows a user to concentrate just on the logic of their model as it takes care of the rest.
|
| 31 |
+
|
| 32 |
+
\(2\) Tracking the provenance of all aspects of model building is essential for trust and reproducibility; thus, experiments run using Lightsaber are automatically tracked in a **Model Registry**, including model parameters, problem-specific metrics, and model binaries, allowing the identification of the algorithms and parameters that result in the best model performance.
|
| 33 |
+
|
| 34 |
+
\(3\) The **Service Builder** component automatically converts registered models in the *Model Registry* into microservices [@dragoni2017microservices] through hooks that listen for production-ready models in the registry and thereafter start the model packaging execution pipeline. The pipeline includes extraction of the model and its dependencies from the registry, containerization, and deployment in the target cluster ([Kubernetes](https://kubernetes.io) or [OpenShift](https://www.openshift.com)). Upon successful model deployment, a callback function updates the model metadata in the registry with the deployment status and the model access endpoint. Using this endpoint, potential users (data scientists or product managers) can interact with the model, now deployed as a microservice, through a [Swagger](https://swagger.io/)-based interface.
|
| 35 |
+
|
| 36 |
+
\(4\) The **Installer** component installs the fully functional DPM360, including the OHDSI tools, the *Model Registry*, and the *Service Builder*, into a Kubernetes or OpenShift cluster using [Helm charts](https://helm.sh/). Each of these components runs as a service within the cluster. The implementation also uses [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) to persist the data (e.g., model artifacts, ATLAS database files, etc.).
|
2202.14026/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2202.14026/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,51 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
One of the most important factors for the success of deep models is their large model size and high expressive power, which enable them to learn complicated input-output relations. As such, over-parametrized deep networks or large models, with more parameters than the size of training data, have dominated the performance in computer vision, natural language processing, and so on. The adoption of large models is justified by the recent discovery that deep models exhibit a "double descent" [1] and "uni-modal variance" [2] generalization behavior, where their performance continues to improve beyond the interpolation point, extending the classical learning theory of bias-variance trade-off. While there are infinitely many global solutions that *overfit* to training data, the choice of optimization algorithm imposes certain *implicit* regularization [3] so that over-parameterized models converge to those that are generalizable.
|
| 4 |
+
|
| 5 |
+
Nonetheless, the success of over-parameterization of deep networks critically depends on the availability of *clean* training data, while overfitting inevitably occurs when training data is corrupted. Consider the task of image classification with a training dataset $\{(\boldsymbol{x}_i, \boldsymbol{y}_i)\}_{i=1}^N$ , with $\boldsymbol{x}_i$ being an input image and $\boldsymbol{y}_i$ being the corresponding one-hot label. With an over-parameterized deep network $f(\cdot; \boldsymbol{\theta})$ , model training is achieved by solving an optimization problem with respect to (w.r.t.) the network parameter $\boldsymbol{\theta}$ as follows:
|
| 6 |
+
|
| 7 |
+
$$\min_{\boldsymbol{\theta}} L(\boldsymbol{\theta}) = \frac{1}{N} \sum_{i=1}^{N} \ell(f(\boldsymbol{x}_i; \boldsymbol{\theta}), \boldsymbol{y}_i),$$
|
| 8 |
+
(1)
|
| 9 |
+
|
| 10 |
+
where $\ell(\cdot, \cdot)$ is a loss function that measures the distance between the network prediction $f(x_i; \theta)$ and the label $y_i$. If a proportion of the images in the training set is *mislabelled* [4], it is well-known that the network will
|
| 11 |
+
|
| 12 |
+
<sup>1</sup> Code is available at: https://github.com/shengliu66/SOP
|
| 13 |
+
|
| 14 |
+
be optimized to zero training error and hence produce $f(x_i; \theta) \approx y_i$ for all $i \in \{1, \dots, N\}$, even for $y_i$'s that are incorrect [5]. Overfitting to wrong labels inevitably leads to poor generalization performance (see Fig. 1).
|
| 15 |
+
|
| 16 |
+
In this paper, we introduce a principled method to address the challenges of overfitting over-parameterized deep networks in the presence of training data corruptions. We focus on the task of classification with noisy labels, a ubiquitous problem in practice due to the extreme complexity of data annotation even for experienced domain experts [6]. Our idea leverages the property that the label noise is sparse, namely only a fraction of the labels are corrupted and the rest are intact. Principled methods for dealing with sparse corruption have a rich history, which can be traced back to compressed sensing [7], robust subspace recovery [8,9], and even earlier [10]. Such methods are based on using a robust loss function, such as the $\ell_1$ norm, which is less sensitive to large outlying entries. While it is tempting to use sparse modeling for the label noise problem by setting the loss $\ell()$ in (1) as the $\ell_1$ loss, such an approach cannot solve the overfitting issue since all global solutions are still given by those that satisfy $f(x_i; \theta) \approx y_i$ for all $i \in \{1, \cdots, N\}$. Hence, handling sparse corruptions with over-parameterized models
|
| 17 |
+
|
| 18 |
+

|
| 19 |
+
|
| 20 |
+
Figure 1: Sparse over-parameterization prevents overfitting to label noise. Training and test accuracy of a PreActResNet18 network trained with a standard cross entropy (CE) loss (dashed lines) and our Sparse Over-parameterization (SOP) (solid lines) for image classification on the CIFAR-10 dataset with 0%, 20%, and 40% of the labels flipped at random. SOP prevents overfitting to the wrong training labels, obtaining near 100%, 80%, 60% training accuracy respectively, therefore achieves better generalization on the test set without an accuracy drop at the end of training.
|
| 21 |
+
|
| 22 |
+
requires the development of techniques beyond the classical $\ell_1$ loss for sparse modeling.
|
| 23 |
+
|
| 24 |
+
Overview of our method and contribution. To handle sparse corruption with over-parameterized models, our idea is simply to use an extra variable $s_i$ to model the unknown label noise $s_{\star i}$ , which is the difference between the observed label $y_i$ and the corresponding clean label. Hence, the goal is to minimize the discrepancy between $f(x_i;\theta) + s_i$ and $y_i$ . Inspired by a line of recent work [11–13], we enforce sparsity of $s_i$ by the overparameterization $s_i = u_i \odot u_i - v_i \odot v_i$ and optimize the following training loss
|
| 25 |
+
|
| 26 |
+
$$\min_{\boldsymbol{\theta}, \{\boldsymbol{u}_i, \boldsymbol{v}_i\}_{i=1}^N} L\left(\boldsymbol{\theta}, \{\boldsymbol{u}_i, \boldsymbol{v}_i\}_{i=1}^N\right), \text{ where } L\left(\boldsymbol{\theta}, \{\boldsymbol{u}_i, \boldsymbol{v}_i\}_{i=1}^N\right) \doteq \frac{1}{N} \sum_{i=1}^N \ell\left(f(\boldsymbol{x}_i; \boldsymbol{\theta}) + \boldsymbol{u}_i \odot \boldsymbol{u}_i - \boldsymbol{v}_i \odot \boldsymbol{v}_i, \boldsymbol{y}_i\right),$$
|
| 27 |
+
(2)
|
| 28 |
+
|
| 29 |
+
with $\odot$ denoting an entry-wise Hadamard product. We term our method "Sparse Over-Parameterization" (SOP).
|
| 30 |
+
|
| 31 |
+
At first glance, our SOP approach is seemingly problematic, because adding more learnable parameters $\{u_i, v_i\}_{i=1}^N$ to an over-parameterized network $f(\cdot, \theta)$ would aggravate rather than alleviate the overfitting issue. Indeed, a global solution to (2) is given by $u_i \equiv v_i \equiv 0$ and $f(x_i, \theta) \equiv y_i$ for all $i \in \{1, \dots, N\}$, where the network overfits to noisy labels. Here, we leverage the choice of a particular training algorithm to enforce an *implicit bias* towards producing the desired solutions. Technically, we run gradient descent on the objective in (2) starting from a small initialization for $\{u_i, v_i\}_{i=1}^N$:
|
| 32 |
+
|
| 33 |
+
$$\boldsymbol{\theta} \leftarrow \boldsymbol{\theta} - \tau \cdot \frac{\partial L(\boldsymbol{\theta}, \{\boldsymbol{u}_i, \boldsymbol{v}_i\})}{\partial \boldsymbol{\theta}},$$
|
| 34 |
+
|
| 35 |
+
$$\boldsymbol{u}_i \leftarrow \boldsymbol{u}_i - \alpha \tau \cdot \frac{\partial L(\boldsymbol{\theta}, \{\boldsymbol{u}_i, \boldsymbol{v}_i\})}{\partial \boldsymbol{u}_i}, \quad i = 1, \dots, N,$$
|
| 36 |
+
|
| 37 |
+
$$\boldsymbol{v}_i \leftarrow \boldsymbol{v}_i - \alpha \tau \cdot \frac{\partial L(\boldsymbol{\theta}, \{\boldsymbol{u}_i, \boldsymbol{v}_i\})}{\partial \boldsymbol{v}_i}, \quad i = 1, \dots, N,$$
|
| 38 |
+
(3)
|
| 39 |
+
|
| 40 |
+
where $\alpha>0$ is the ratio of learning rates for the different training variables. Such a simple algorithm enables our SOP method to train deep image classification networks without overfitting to wrong labels and to obtain better generalization performance (see Fig. 1). A more comprehensive empirical study with a variety of datasets is presented in Section 2.
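Below is a minimal PyTorch-style sketch of the update rule (3) under illustrative assumptions (a toy linear classifier, a squared loss on softmax outputs, and plain SGD); it is not the authors' released implementation (see their repository for that), but it shows the two ingredients SOP relies on: per-example over-parameterized noise variables $u_i, v_i$ with a small initialization, and a learning rate $\alpha\tau$ for them versus $\tau$ for the network.

```python
import torch
import torch.nn.functional as F

N, K, tau, alpha = 50_000, 10, 0.02, 100.0            # illustrative sizes and rates
model = torch.nn.Sequential(torch.nn.Flatten(),        # toy stand-in for the network
                            torch.nn.Linear(3 * 32 * 32, K))

# One over-parameterized noise pair (u_i, v_i) per training example, small init.
u = (1e-8 * torch.randn(N, K)).requires_grad_()
v = (1e-8 * torch.randn(N, K)).requires_grad_()

opt_net   = torch.optim.SGD(model.parameters(), lr=tau)    # theta update in (3)
opt_noise = torch.optim.SGD([u, v], lr=alpha * tau)        # u_i, v_i updates in (3)

def sop_step(x, y_onehot, idx):
    """One gradient step on the SOP objective (2) for a mini-batch with indices idx."""
    opt_net.zero_grad(); opt_noise.zero_grad()
    pred = torch.softmax(model(x), dim=1)                   # f(x_i; theta)
    s = u[idx] * u[idx] - v[idx] * v[idx]                   # sparse noise estimate s_i
    loss = F.mse_loss(pred + s, y_onehot)                   # l(f(x_i; theta) + s_i, y_i)
    loss.backward()
    opt_net.step(); opt_noise.step()
    return loss.item()
```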
|
| 41 |
+
|
| 42 |
+
To rigorously justify our method, we theoretically investigate our method based upon a simplified over-parameterized linear model with sparse corruptions. As justified by a line of recent work [14,15], over-parameterized linear models capture similar phenomena because they well approximate over-parameterized deep networks in a linearized regime around the initial points. Under sparse corruption and certain low-rank assumptions on the data, we show that the gradient descent (3) with an $\alpha$ below a certain threshold recovers the underlying model parameters with sparse corruptions. Our result is obtained by explicitly characterizing the implicit regularization for the term $u_i \odot u_i - v_i \odot v_i$ . In particular, we explicitly show that it leads to an $\ell_1$ -norm regularization on the sparse corruption, hence connecting our method to classical $\ell_1$ loss approaches for model robustness. For more details, we refer readers to Section 3.
|
| 43 |
+
|
| 44 |
+
In summary, our contributions are twofold:
|
| 45 |
+
|
| 46 |
+
- *Method.* We propose a simple yet practical SOP method that effectively prevents overfitting when learning over-parameterized deep networks from corrupted training data, as demonstrated on a variety of datasets.
|
| 47 |
+
- *Theory.* Under a simplified over-parameterized linear model, we rigorously justify our approach for exactly separating sparse corruption from the data.
|
| 48 |
+
|
| 49 |
+
Moreover, we believe the methodology developed here could reach far beyond the label noise setting, with the potential to prevent overfitting in more challenging scenarios when learning modern over-parameterized models of ever-increasing size.
|
| 50 |
+
|
| 51 |
+
In this section, we show how our SOP method performs on image classification problems with noisy labels. In particular, we discuss additional implementation details of our method, followed by experimental demonstrations on a variety of datasets with synthetic and real label noise.
|
2203.05272/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-11-19T09:39:46.251Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36" version="15.7.3" etag="fYD96XBDNbU6j4UqqEBf"><diagram id="1KQ5yAwUueF941y7zDZu">7V1tk5s4Ev41rt18GJfeePsYT5LbrdrspW7u6rKfUoyNZ0gY48NMZmZ//QmQMEiyDUYCxsOkKjZCkqGfVner1WrN8PXD8z8Sf3v/OV4F0QyB1fMMf5ghdEUsQD+ykpeiBEHHLUruknBVlMF9wU34d8AKWcO7x3AV7GoV0ziO0nBbL1zGm02wTGtlfpLET/Vq6ziq/+rWv2O/CPYFN0s/CqRq/w1X6X1R6iJnX/5bEN7d81+GtlfcefB5ZdbF7t5fxU+V38IfZ/g6ieO0+PbwfB1EGfU4XYoH+nTgbvlgSbBJmzRARYOffvTI3m2G7Ig2XdxmT5e+sFe2//eYPdJiHW/Sq10OyHtaAbpbiupif59+u8s+b36EW3r/ugAgjDe8W/oot7wSI0H5IyiJHzerIHs0SG8/3YdpcLP1l9ndJ8pKtOw+fYjY7ZJ0gF7cRf5ux77vfgTp8p7Vyh74k/8QRhmv/Tt8oGyDwJ/BE/3/X/GDv2FVGI9BN2ufJvGPElic9Z74q5BS9DqO4oSWbeINrb3wo/COdvBhSW8FtHzxM0jSkLLJe3bjIVytsndbxPQtwvSFPSCjOa0cPB/EDZbcQMdRED8EafJCq7AGGAKraMPGkOPMMevlac+SELNa9xV2tBlT+2wU3JW97xmFfmG8ouYbrOCbg2CC02Cuwyji1J0hvLayfyUWlTt2/sdQq5QXfy0AZw8KBQawe0GLSoTGcGENcBHdcFUoRiSY2PCoYsqKdNASAzSvUxM5KlICmZTQ605KSyLl75vtY1rIw5l1/euvz9/C4mu4KT4zqU+lAr34OrM+0sLrTOl8C9/Re+8kIChp0jq1m8uZJKCy2b/Nu8pg3MbhJs1f1lrMrA9ZX49pvGPItQBOHCINRpkOrAlXwAxp2+LQV6BGlgJqpGHU2BI2wYraAewyTtL7+C7e+NHHfemiPoz2df6I4y2j+fcgTV8YMTM46mBT0iQvX7P2c4tf/sW6yy8+PNeuuJhqqehwiVD2TnWbJH5MlqyIWTSpn9wFjLKe2xjGJIj8NPxZ774LIo40+D7/8cWkKKOaBSyyf8KwWAVr/zFKW9C91Dc6hoVTHxYQNRSAXHp1wcCVMPjnYypIwF+oCMzlnyj8Zs6Cyr8FnDkfmBj8ZZKDDQAHdcBVGg95hsSg1wDwQ3hnym6CuAnEtkUamDXGdB2fWjczERtMz0SLfh3Yy6VKvq4c7xZkCPrJktMdiPO7vUlONNnkBNr1MXWFsMK2UEtRHQSHYyO4Di72cJ2omMwJkqhKbGNUlYnY1mb7/viwZWSxzjThnsP0a+X7X3trjl7t7bfsgptve7OvZvTtbcAjZh8HEeVo+wkvsPPa3H+2N264awM2swEhlo1Ae0AjEGp3PlRomA+MxkOptA1ln5EOEQWgJegERzmeoKua7Do6xpNJz0ELWuvzHxCARKJaaI6JV/kjMoWVEguSOdRgUUPZp9CRyNq8aTp0AgKiogXuHLsSjZWqFuhgYtssfVdW4K6Uc0cX3eKcvjroiKFIR17QCxGdMzRrB+W5V5il96OB8mznMzmq+A9r0vOGWF1FoKLH99lqFL28jeLlj6LoUxhxWjRRzp6snElz3mqsnPOm9Gn9l0oFNufa9/wlK6gOfWIJLGsJ61FCCwK84y3ol+Ip9kxbvk4zPnbHYCFyhr0Cc4DrPDu3gHeCb/OrL0FCJ6P5LLk3S1DBbFy4DmIJyq4JvYpzvUYH5lD2rW3pEuwID6wguQNhku2jku2cm6vDDTuN2cuwbMeWKKnJcdmOPfd4i86yvSTiKGT7+GbyKobiS/lDyG/+PK9cfmMypGFuOhajp9mNuNSbEXHOl2B7oaN2V8cgdLQkY6JXZrQmS2KMloRiHR8bMNzPsySILdoF+IQl4TrHW3S3JLrHfly2JaFgKC5AB7Ek5MiQ12hJlKHRgynBc7wjk/w2Lb853jX5jcYivy1JGotR52ILR5L4SLP89sYkv1VePuK5J/h2KC+fktkMuJQbx5q3Ck0ZrWy3pHiTPg1zDuArn93Yg061ucg1RkQ/cNdKTrSXbnC71kRER2Fl9EhEPDrRDGet7OvBBDPpxwpojKRhj0lPgtl2Bja68TlOE6MjgpDXMSJc1YgYcEESm4406UdBuMOaKobn8j0R0VOIlR6JOIowB+Usn9/RO8uXAgsNiBuimBlZBiJhz5qGZ9GidYY7sR5LgGhMt20AT9Qv9yUp63ee5ONRTfLH56RVsauJ2LDGu5BNT+QHjXMlQIpz7dmOJKbn+L0oTgLcIRUnkWnWv1DprMzO0bxNJIqBJZ7zlB1ELZUdFJ0ebRtY3mHl1bh1+YKFTGatBO5spwXJqPwpI9SCijmibSCKrrGAMe01GVYLQsUuhX61oGI3TbZH+c9iw7mDXtle5BZJHXR5xKAnYJiFI4PKH5bgVO1A07EvmSg8KgWa1kdYIAoJmSA9GWMuQqoelRCYwlHh1OE4ogJH5LoTjm0zQfSOo5z+o8QRFzhajj3heHI1WFSTijmOORAV+2Y4iIQJVWhNivL00pHoclOhCA2haCncKdNQPGM5HIO5BGOvQtVSOG4m5XjGuhUZHEk0mas6kPREb02f6tFS7KiZZpDtfLji4qcCP1NTRkvlYGmR1Ji+6PNMldSY95LVb9yR1Mt1vFn6aSUfctFd/SeOpkm+AEZDQA+jYQ9DQeRT+Tq3SMVZgRpxno40u5bhRC59LcA4UoYcb27JSXGMufCsiwijIYAo6XiUNc3R9CKiagjECpoqUmKZo6PCAfIa6QjGxJujCOS4sGAlVWykNZo9n46Y5+dU8ghX3B/etsGpYCVgGw1W4iGVr47Hu8cinDlIGvC4jRQ8bo2Fxz3QksfF7KmtG5zkcc8sj0/5V1qzq4mEPo0POFB4q3TaMgMnnnTFNc9eA8lsVV6Wdg6Eg8crXcebn3H0WBysBBb5tuJDDgWdeJ7ekNPx+B1XE/TIkw6JseZAkdfVMeQKsA87oTqj//mPL7te4B7sFCZdbAAECYDUxy4ZYwKVP0gTE9w83l7t/IdtFG7u+pQBDTb6jgR8Szw0o2cJIDux/rNVQNav72Ak2DhAXKnrGx3TCWOGjcKUU4r2avoY9pGNjLiZBxJW88f3SGk5kkd1HF2Ts+iQTUUTpdDmdpd9vK4lzur6ZU8H9lA+EERYv6fTOd0dSl1m1y1zJhg6no6V1RJSAwMZTc9zANlCFAPPmHDQn2Ojow06O2g4vaoH6PnPV9s4jjoK6Fd4ip4thGBCxXlbyhgFDUd+OLLnpfmhavkhetMB
eu1SrMuHxgKVtFbZm1qkdfddWuM9TNRuKK0VK1LOgNk6HNlH83s2MLZx9kOZa+2tiUQHiocQ9nq0qDOKRDgDrQMbWnPgA6w26IY8wld2i/CIuGm0qUwQc6PtnBOdLk0lufLogGDAHBmO7Dp5k0dcy2MD9aqJukcknZn2oSWPn3PWaINxwec92ufQikggx5tXT18U7Q/B+D6QOEGVY9gTpSv3DPC+ijfWkYSB08vIOZmoxfxKy9hDIuVceQuNyq3Fz+PqdLy87KJQeTTzr2qn5n56vM4qssj/pAbGodU2+8Bq2zTDPulNEVxXlnKptcwBVWUbomF+7coOFcYr3w87Ua6gJvbYe2O+fZ9Y5aR8cUT5omYW5bHGWpile2iOem/PZ2owhVc7CmJQ5azi5m+Bv2q6Qn+YXaJgnQ7NLKiNQa7RKiRISA0GMVFtTFFuudZxvLyriurppObHtLCJnUEzr7ndvT89h4u2sNWthna3wlsDDQQ0N8bE8E6sgRlejA/r/eRV92BanNycWH2jV7kucBZhZtS+MuNigJ3jAAtB96dzV0GVUapj0cc9mCwnR3ePLeXrZeTvdu8yQ3JC+LgRAOW95XNkg6Nb16CpKAz3YCqdCWJ9EJe7p/qA1JN9ObkLFhQG/GSpn2OpX0GEVZpVaarrCPDkO50ucnWjYXiUh2Rr0hvwlEb+PG99cQMDMRql12V275JjUZoODUUsijdgLIonexZqm7ymIdLr2rg3sC9iFENE4YsYUnnIrohJeRQM36vyuOSokaYjQxk1YiDsvDEosqPhJk3CjOQIvGk9QoSZZL9aZJAokp62aFSjTzQfM1JmdhxLpHD5QBcR3+GSuv7ggYA9RHeUsXVGCCkLJcOBMpYUpKYQL+ZoKZNufEFqDVWquV1bivPq8QHQ9B/OU77WW7daRT3cq80K+c7yt2y0Qp6NqqZUh9Sp2rOlDqhTxfMcHEcV0WROFdgTg0PgKBh8wFj+8oEugsOJW9eaXn8xwRB0P834Atjbk9mbr6gMw96Kte9Xy95YSufQswiHqilmES26Cn/WqNo6PJn1s9v6Gy0dsSjZtb+s95MyZt/kzJ7kzF7ppMMvF3sATibNL95QKs4JeDzUtgVrNgyVbh6+od3aBo4QBEUn/arVd9vQ4juEqmn+xMzsl1mqh4mbG8aSiKdsIVzfMubJDixznI0mzp442xBnIyEHv5ygxBxfnxMU0uPqxNE0D4fzJpwXc17PtICKHt8nSfxEL2/zvI150acwktJGzLRuxMWyzT+a9FZilCbEkFvMB3flnm4zO36yOh0Bbq0HYqHSTj+xsVd+HGI5gtVPLDj3nIpusetdn+0bppdJnAnLfXUqve4/x6sgq/F/</diagram></mxfile>
|
2203.05272/main_diagram/main_diagram.pdf
ADDED
|
Binary file (49.6 kB). View file
|
|
|
2203.05272/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,59 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
<figure id="fig:abstract">
|
| 4 |
+
<div class="center">
|
| 5 |
+
<p><img src="plot/abstract_cbl_3d.png" alt="image" /> <embed src="plot/abstract_demo_cbl.pdf" /></p>
|
| 6 |
+
</div>
|
| 7 |
+
<figcaption>Contrastive Boundary Learning (top) discovers boundaries from the ground truth in each sub-sampled point cloud, i.e., sub-scene, through the sub-sampling procedure. By imposing contrastive optimization on boundary areas at multiple scales, CBL enhances the feature discrimination across boundaries (middle). Without an explicit boundary prediction, CBL improves boundary segmentation and achieves better scene segmentation results (bottom). The visualization is conducted on S3DIS test set Area 5. </figcaption>
|
| 8 |
+
</figure>
|
| 9 |
+
|
| 10 |
+
3D point cloud semantic segmentation aims to assign semantic categories to each 3D data point, while robust 3D segmentation is very important for various applications [@ptsurvey; @ptSreview], including autonomous driving, unmanned aerial vehicles, and augmented reality.
|
| 11 |
+
|
| 12 |
+
However, although various point cloud segmentation methods have been developed, little attention has been paid to boundaries in 3D point clouds. Accurate segmentation of scene boundaries can be of great importance. Firstly, a clean boundary estimation can be beneficial for overall segmentation performance. For example, in 2D image segmentation, accurate segmentation on boundaries is the key to generating high-fidelity masks [@bound_bio; @bound_iou; @bound_loss]. Secondly, compared to object categories that usually cover a large portion of 3D points, such as buildings and trees, erroneous boundary segmentation affects the recognition of object categories with much fewer points (e.g., pedestrians and pillars) to a greater extent. This can be particularly hazardous for applications like autonomous driving, e.g., a self-driving car crashing into curbs if boundaries are recognized inaccurately.
|
| 13 |
+
|
| 14 |
+
Unfortunately, most previous 3D segmentation methods generally overlook the segmentation on scene boundaries. Though a few methods have considered boundaries, they still lack an explicit and comprehensive investigation to analyze the segmentation performance on boundary areas. They also perform unsatisfactorily on the overall segmentation performance.
|
| 15 |
+
|
| 16 |
+
Therefore, to deliver a more thorough study of the segmentation on boundaries, we first explore metrics to quantify the segmentation performance on scene boundaries. After revealing the unsatisfactory performance, we propose a novel Contrastive Boundary Learning (CBL) framework to help optimize the segmentation performance on boundaries particularly, which also consistently improves the overall performance for different baseline methods.
|
| 17 |
+
|
| 18 |
+
In particular, current popular segmentation metrics lack specific measurements on boundaries, making it difficult to reveal the boundary segmentation quality of existing methods. To get a clearer view of the performance on boundaries, we calculate the popular mean intersection-over-union (mIoU) for boundary areas and inner (non-boundary) areas separately. By comparing the performance on the two types of areas as well as the overall performance, the unsatisfactory performance on boundary areas can be directly revealed. Moreover, to describe the performance on boundaries more comprehensively, we consider the alignment between the boundary in the ground truth and the boundary in the model segmentation results. Therefore, we introduce the popular boundary IoU [@bound_iou] score (B-IoU) used in 2D instance segmentation for evaluation, which also gives a much lower score compared with the overall performance in mIoU.
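A small numpy/scipy sketch of this boundary-vs-inner evaluation is given below (illustrative only; the radius-based boundary test is one reading of the paper's boundary definition in eq. [eq:bound], and the radius value and helper names are assumptions).

```python
import numpy as np
from scipy.spatial import cKDTree

def boundary_mask(points, labels, radius=0.1):
    """Mark points that have a differently-labeled neighbor within `radius`
    (a radius-based reading of the boundary definition in eq. (bound))."""
    tree = cKDTree(points)
    mask = np.zeros(len(points), dtype=bool)
    for i, nbrs in enumerate(tree.query_ball_point(points, r=radius)):
        mask[i] = np.any(labels[nbrs] != labels[i])
    return mask

def miou(pred, gt, num_classes):
    """Mean IoU over the classes present in the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Boundary vs. inner mIoU, restricted to the areas defined by the ground truth:
# b = boundary_mask(points, gt_labels)
# miou_boundary = miou(pred_labels[b],  gt_labels[b],  num_classes)
# miou_inner    = miou(pred_labels[~b], gt_labels[~b], num_classes)
```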
|
| 19 |
+
|
| 20 |
+
After identifying the boundary segmentation difficulties, we further propose a novel contrastive boundary learning (CBL) framework to better align the boundaries of model predictions with the boundaries of the ground-truth data. As shown in [1](#fig:abstract){reference-type="ref+label" reference="fig:abstract"}, CBL optimizes a model on the feature representation of points in boundary areas, enhancing the feature discrimination across the scene boundaries. Furthermore, to make the model better aware of the boundary areas at multiple semantic scales, we also develop a sub-scene boundary mining strategy, which leverages the sub-sampling procedure to discover boundary points in each sub-sampled point cloud, i.e., sub-scene. Specifically, CBL operates across different sub-sampling stages and helps 3D segmentation methods learn better feature representations around boundary areas.
|
| 21 |
+
|
| 22 |
+
Empirically, we experiment with three baselines across four datasets. We first present the unsatisfactory performance on boundary areas when using current point cloud segmentation methods and then show that CBL helps the baselines achieve promising boundary and overall performance. For example, the proposed CBL helps RandLA-Net surpass current state-of-the-art methods on the Semantic3D dataset and enables a basic ConvNet to achieve leading performance on the S3DIS dataset.
|
| 23 |
+
|
| 24 |
+
Our contributions are as follows:
|
| 25 |
+
|
| 26 |
+
- We explore the boundary problem in current 3D point cloud segmentation and quantify it with metrics that consider boundary areas, e.g., boundary IoU. The results reveal that current methods deliver much worse accuracy in boundary areas than their overall performance.
|
| 27 |
+
|
| 28 |
+
- We propose a novel Contrastive Boundary Learning (CBL) framework, which improves the feature representation by contrasting the point features across the scene boundaries. It thus improves the segmentation performance around boundary areas and subsequently the overall performance.
|
| 29 |
+
|
| 30 |
+
- We conduct extensive experiments and show that CBL can bring significant and consistent improvements on boundary area as well as overall performance across all baselines. These empirical results further demonstrate that CBL is effective for improving boundary segmentation performance, and accurate boundary segmentation is important for robust 3D segmentation.
|
| 31 |
+
|
| 32 |
+
<figure id="fig:contrast">
|
| 33 |
+
<div class="center">
|
| 34 |
+
|
| 35 |
+
</div>
|
| 36 |
+
<figcaption>The detailed illustration of the Contrastive Boundary Learning.</figcaption>
|
| 37 |
+
</figure>
|
| 38 |
+
|
| 39 |
+
# Method
|
| 40 |
+
|
| 41 |
+
In this section, we present our contrastive boundary learning (CBL) framework, shown in [2](#fig:contrast){reference-type="ref+label" reference="fig:contrast"}. It imposes contrastive learning to enhance the feature discrimination across boundaries. Then, to further strengthen the model performance on boundaries, we enable CBL in sub-sampled point clouds, i.e., sub-scenes, through sub-scene boundary mining.
|
| 42 |
+
|
| 43 |
+
**Contrastive Boundary Learning.** We follow the widely used InfoNCE loss [@infonce] and its generalizations [@nce; @softnn] to define the contrastive optimization goal on boundary points. In particular, for a boundary point $x_i\in \mathcal B_l$, we encourage its learned representation to be more similar to neighboring points from the same category and more distinguishable from neighboring points from different categories, i.e., $$\begin{equation}
|
| 44 |
+
L_{CBL} = \frac {-1}{|B_l|} \sum_{x_i \in B_l} \log \frac
|
| 45 |
+
{ \displaystyle \sum_{ x_j\in \mathcal N_i \land l_j = l_i } \exp(-d(f_i, f_j) / \tau) }
|
| 46 |
+
{ \displaystyle \sum_{ x_k\in\mathcal N_i} \exp(-d(f_i, f_k) / \tau) },
|
| 47 |
+
\label{eq:cbl}
|
| 48 |
+
\end{equation}$$ where $f_i$ is the feature of $x_i$, $d(\cdot, \cdot)$ is a distance measure, and $\tau$ is the temperature in contrastive learning. The contrastive learning described by [\[eq:cbl\]](#eq:cbl){reference-type="ref+label" reference="eq:cbl"} focuses on boundary points only (the dashed circles in red in [2](#fig:contrast){reference-type="ref+label" reference="fig:contrast"}). First, we consider all the boundary points $\mathcal B_l$ from ground-truth data as defined in [\[eq:bound\]](#eq:bound){reference-type="ref+label" reference="eq:bound"}. Then, for each point $x_i\in \mathcal B_l$, we restrict the sampling of its positive and negative points to be within its local neighborhood $\mathcal N_i$. With such a strong spatial restriction, we obtain positive pairs for $x_i$ as $\{x_j\in \mathcal N_i \land l_j=l_i\}$, while the other neighboring points, $\{x_j\in \mathcal N_i \land l_j\neq l_i\}$, are negative pairs. Therefore, the contrastive learning enhances the feature discrimination across scene boundaries, which is important for improving segmentation on boundary areas.
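The following is a minimal PyTorch-style sketch of eq. [eq:cbl] (not the authors' implementation); it assumes squared Euclidean feature distance for $d(\cdot,\cdot)$ and precomputed neighbor indices, and boundary points without any same-label neighbor would need to be filtered out in practice.

```python
import torch

def cbl_loss(feat, labels, boundary_idx, neighbor_idx, tau=1.0):
    """Contrastive boundary loss of eq. (cbl).

    feat         : (P, C) per-point features
    labels       : (P,)   per-point labels (ground truth, or sub-scene argmax labels)
    boundary_idx : (B,)   indices of the boundary points x_i in B_l
    neighbor_idx : (B, K) indices of the K spatial neighbors N_i of each boundary point
    """
    fi = feat[boundary_idx].unsqueeze(1)                     # (B, 1, C)
    fk = feat[neighbor_idx]                                  # (B, K, C)
    logits = -((fi - fk) ** 2).sum(-1) / tau                 # -d(f_i, f_k) / tau
    same = labels[neighbor_idx] == labels[boundary_idx].unsqueeze(1)   # positive pairs
    log_num = torch.logsumexp(logits.masked_fill(~same, float("-inf")), dim=1)
    log_den = torch.logsumexp(logits, dim=1)                 # all neighbors of x_i
    return -(log_num - log_den).mean()                       # average over B_l
```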
|
| 49 |
+
|
| 50 |
+
**Sub-scene Boundary Mining.** To better explore scene boundaries, we examine the boundaries in sub-sampled point clouds at multiple scales, which enables contrastive boundary learning at different sub-sampling stages of a backbone model. Collecting boundary points from the input point cloud is straightforward with the ground-truth labels. However, after sub-sampling, it is difficult to obtain a proper definition of the boundary point set following [\[eq:bound\]](#eq:bound){reference-type="ref+label" reference="eq:bound"}, due to the undefined labels of sub-sampled points [@omni]. Therefore, to enable CBL in sub-sampled point clouds, we propose sub-scene boundary mining, which determines the set of ground-truth boundary points in each sub-sampling stage. Specifically, we use superscripts to denote the stage. At the sub-sampling stage $n$, we represent its sub-sampled point cloud as $\mathcal X^n$. For the input point cloud, we have $\mathcal X^0 = \mathcal X$. When collecting a set of boundary points $\mathcal B^n_l \subset \mathcal X^n$ in stage $n$, it is required to determine the label $l^n_i$ of a sub-sampled point $x^n_i\in\mathcal X^n$, i.e., the sub-scene annotation. As each sub-sampled point $x_i^n\in \mathcal X^n$ is aggregated from a group of points in its previous point cloud $\mathcal X^{n-1}$, we utilize the sub-sampling procedure to determine the label iteratively. We take $l^0_i$ to be the one-hot encoding of the ground-truth label $l_i$ for point $x^0_i = x_i$, and have the following: $$\begin{equation}
|
| 51 |
+
\begin{array}{ll}
|
| 52 |
+
l^n_i &= \text{AVG}(\{ l^{n-1}_j | x^{n-1}_j \in \mathcal N^{n-1} (x^n_i) \}),
|
| 53 |
+
\end{array}
|
| 54 |
+
\label{eq:subscene:label}
|
| 55 |
+
\end{equation}$$ where $\mathcal N^{n-1} (x^n_i)$ denotes the local neighbors of $x^n_i$ in the previous stage (the dashed circles in grey in [2](#fig:contrast){reference-type="ref+label" reference="fig:contrast"}), i.e., the group of points aggregated from $\mathcal X^{n-1}$ to be represented by the single point $x^n_i\in\mathcal X^n$ after the sub-sampling procedure, and $\text{AVG}$ denotes average-pooling.
|
| 56 |
+
|
| 57 |
+
With [\[eq:subscene:label\]](#eq:subscene:label){reference-type="ref+label" reference="eq:subscene:label"} and ground-truth labels, we can iteratively obtain the sub-scene annotation $l^n_i$ as a distribution, whose $k$-th entry describes the proportion of the $k$-th class in its corresponding group of points in the input point cloud. To determine the set of boundary points in the sub-sampled point cloud $\mathcal X^n$, we simply take $\arg\max l^n_i$ to allow the evaluation of boundary points via [\[eq:bound\]](#eq:bound){reference-type="ref+label" reference="eq:bound"}, and use the features of the sub-sampled points for the contrastive boundary optimization in [\[eq:cbl\]](#eq:cbl){reference-type="ref+label" reference="eq:cbl"}. Finally, with sub-scene boundary mining, we have CBL applied at all stages and the final loss is $$\begin{equation}
|
| 58 |
+
L = L_\text{cross entropy} + \lambda \sum_{n} L^n_{CBL},
|
| 59 |
+
\end{equation}$$ where $L^n_{CBL}$ is the CBL loss at stage $n$ and $\lambda$ is the loss weight.
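As a rough Python sketch of how eq. [eq:subscene:label] and the final loss fit together (illustrative assumptions: a precomputed grouping index per sub-sampling stage and the `cbl_loss` helper sketched above):

```python
import torch

def propagate_subscene_labels(label_dist_prev, group_idx):
    """Eq. (subscene:label): average-pool the previous stage's label distributions.

    label_dist_prev : (P_prev, K) label distribution at stage n-1 (one-hot at stage 0)
    group_idx       : (P_n, G)    indices of the X^{n-1} points aggregated into each
                                  sub-sampled point x_i^n (illustrative layout)
    """
    return label_dist_prev[group_idx].mean(dim=1)       # (P_n, K) soft sub-scene labels

# Per-stage usage: hard labels for boundary mining come from the argmax, and the
# total objective is L = L_ce + lambda * sum_n L_cbl^n.
# l_n    = propagate_subscene_labels(l_prev, group_idx)   # soft distribution l_i^n
# hard_n = l_n.argmax(dim=1)                              # labels used in eq. (bound)
# loss   = ce_loss + lam * sum(cbl_loss(f, h, b, nb) for f, h, b, nb in stages)
```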
|
2205.12105/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2205.12105/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,251 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
This document serves as an example submission. It illustrates the format we expect authors to follow when submitting a paper to ECCV. At the same time, it gives details on various aspects of paper submission, including preservation of anonymity and how to deal with dual submissions, so we advise authors to read this document carefully.
|
| 4 |
+
|
| 5 |
+
All manuscripts must be in English.
|
| 6 |
+
|
| 7 |
+
Papers submitted for review should be complete. The length should match that intended for final publication. Papers accepted for the conference will be allocated 14 pages (plus additional pages for references) in the proceedings. Note that the allocated 14 pages do not include the references. The reason for this policy is that we do not want authors to omit references for sake of space limitations.
|
| 8 |
+
|
| 9 |
+
Papers with more than 14 pages (excluding references) will be rejected without review. This includes papers where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide. Do not use the TIMES, or any other font than the default. The reason such papers will not be reviewed is that there is no provision for supervised revisions of manuscripts. The reviewing process cannot determine the suitability of the paper for presentation in 14 pages if it is reviewed in 16.
|
| 10 |
+
|
| 11 |
+
It is imperative that the paper ID is mentioned on each page of the manuscript. The paper ID is a number automatically assigned to your submission when registering your paper submission on the submission site.
|
| 12 |
+
|
| 13 |
+
All lines should be numbered in the initial submission, as in this example document. This makes reviewing more efficient, because reviewers can refer to a line on a page. Line numbering is removed in the camera-ready.
|
| 14 |
+
|
| 15 |
+
Please number all of your sections and displayed equations. Again, this makes reviewing more efficient, because reviewers can refer to a line on a page. Also, it is important for readers to be able to refer to any particular equation. Just because you didn't refer to it in the text doesn't mean some future reader might not need to refer to it. It is cumbersome to have to use circumlocutions like "the equation second from the top of page 3 column 1". (Note that the line numbering will not be present in the final copy, so is not an alternative to equation numbers). Some authors might benefit from reading Mermin's description of how to write mathematics: [www.pamitc.org/documents/mermin.pdf](www.pamitc.org/documents/mermin.pdf){.uri}.
|
| 16 |
+
|
| 17 |
+
To avoid confusion, in case of discrepancies between policies mentioned here and those in the ECCV 2022 webpage, the web page is the one that is updated regularly and its policies shall overrule those appearing here.
|
| 18 |
+
|
| 19 |
+
By submitting a paper to ECCV, the authors agree to the review process and understand that papers are processed by the Toronto system to match each manuscript to the best possible chairs and reviewers.
|
| 20 |
+
|
| 21 |
+
The review process of ECCV is confidential. Reviewers are volunteers not part of the ECCV organisation and their efforts are greatly appreciated. The standard practice of keeping all information confidential during the review is part of the standard communication to all reviewers. Misuse of confidential information is a severe professional failure and appropriate measures will be taken when brought to the attention of ECCV organizers. It should be noted, however, that the organisation of ECCV is not and cannot be held responsible for the consequences when reviewers break confidentiality.
|
| 22 |
+
|
| 23 |
+
Accepted papers will be published by Springer (with appropriate copyrights) electronically up to three weeks prior to the main conference. Please make sure to discuss this issue with your legal advisors as it pertains to public disclosure of the contents of the papers submitted.
|
| 24 |
+
|
| 25 |
+
By submitting a manuscript to ECCV 2022, authors acknowledge that it has not been previously published or accepted for publication in substantially similar form in any peer-reviewed venue including journal, conference, or workshop. Furthermore, no paper substantially similar in content has been or will be submitted to a journal, another conference or workshop during the review period (March 07, 2022 -- July 3, 2022). The authors also attest that they did not submit substantially similar submissions to ECCV 2022. Violation of any of these conditions will lead to rejection and the violation will be reported to the other venue or journal, which will typically lead to rejection there as well.
|
| 26 |
+
|
| 27 |
+
The goals of the dual submission policy are (i) to have exciting new work be published for the first time at ECCV 2022, and (ii) to avoid duplicating the efforts of the reviewers. Therefore, all papers under review are checked for dual submissions and this is not allowed, independent of the page size of submissions.
|
| 28 |
+
|
| 29 |
+
For already published papers, our policy is based upon the following particular definition of "publication". A publication, for the purposes of the dual submission policy, is defined to be a written work longer than four pages that was submitted for review by peers for either acceptance or rejection, and, after review, was accepted. In particular, this definition of publication does not depend upon whether such an accepted written work appears in a formal proceedings or whether the organizers declare that such work "counts as a publication".
|
| 30 |
+
|
| 31 |
+
An arXiv.org paper does not count as a publication because it was not peer-reviewed for acceptance. The same is true for university technical reports. However, this definition of publication does include peer-reviewed workshop papers, even if they do not appear in a proceedings, if their length is more than 4 pages including citations. Given this definition, any submission to ECCV 2022 should not have substantial overlap with prior publications or other concurrent submissions. As a rule of thumb, the ECCV 2022 submission should contain no more than 20 percent of material from previous publications.
|
| 32 |
+
|
| 33 |
+
Publication of the paper in the ECCV 2022 proceedings of Springer requires that at least one of the authors registers for the conference and present the paper there. It also requires that a camera-ready version that satisfies all formatting requirements is submitted before the camera-ready deadline.
|
| 34 |
+
|
| 35 |
+
ECCV reviewing is double blind, in that authors do not know the names of the area chair/reviewers of their papers, and the area chairs/reviewers cannot, beyond reasonable doubt, infer the names of the authors from the submission and the additional material. Avoid providing links to websites that identify the authors. Violation of any of these guidelines may lead to rejection without review. If you need to cite a different paper of yours that is being submitted concurrently to ECCV, the authors should (1) cite these papers, (2) argue in the body of your paper why your ECCV paper is non trivially different from these concurrent submissions, and (3) include anonymized versions of those papers in the supplemental material.
|
| 36 |
+
|
| 37 |
+
Many authors misunderstand the concept of anonymizing for blind review. Blind review does not mean that one must remove citations to one's own work. In fact it is often impossible to review a paper unless the previous citations are known and available.
|
| 38 |
+
|
| 39 |
+
Blind review means that you do not use the words "my" or "our" when citing previous work. That is all. (But see below for technical reports).
|
| 40 |
+
|
| 41 |
+
Saying "this builds on the work of Lucy Smith \[1\]" does not say that you are Lucy Smith, it says that you are building on her work. If you are Smith and Jones, do not say "as we show in \[7\]", say "as Smith and Jones show in \[7\]" and at the end of the paper, include reference 7 as you would any other cited work.
|
| 42 |
+
|
| 43 |
+
An example of a bad paper:
|
| 44 |
+
|
| 45 |
+
> ::: center
|
| 46 |
+
> An analysis of the frobnicatable foo filter.
|
| 47 |
+
> :::
|
| 48 |
+
>
|
| 49 |
+
> In this paper we present a performance analysis of our previous paper \[1\], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me.
|
| 50 |
+
>
|
| 51 |
+
> \[1\] Removed for blind review
|
| 52 |
+
|
| 53 |
+
An example of an excellent paper:
|
| 54 |
+
|
| 55 |
+
> ::: center
|
| 56 |
+
> An analysis of the frobnicatable foo filter.
|
| 57 |
+
> :::
|
| 58 |
+
>
|
| 59 |
+
> In this paper we present a performance analysis of the paper of Smith \[1\], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me.
|
| 60 |
+
>
|
| 61 |
+
> \[1\] Smith, L. and Jones, C. "The frobnicatable foo filter, a fundamental contribution to human knowledge". Nature 381(12), 1-213.
|
| 62 |
+
|
| 63 |
+
If you are making a submission to another conference at the same time, which covers similar or overlapping material, you may need to refer to that submission in order to explain the differences, just as you would if you had previously published related work. In such cases, include the anonymized parallel submission [@Authors14] as additional material and cite it as
|
| 64 |
+
|
| 65 |
+
> 1\. Authors. "The frobnicatable foo filter", BMVC 2014 Submission ID 324, Supplied as additional material `bmvc14.pdf`.
|
| 66 |
+
|
| 67 |
+
Finally, you may feel you need to tell the reader that more details can be found elsewhere, and refer them to a technical report. For conference submissions, the paper must stand on its own, and not *require* the reviewer to go to a techreport for further details. Thus, you may say in the body of the paper "further details may be found in [@Authors14b]". Then submit the techreport as additional material. Again, you may not assume the reviewers will read this material.
|
| 68 |
+
|
| 69 |
+
Sometimes your paper is about a problem which you tested using a tool which is widely known to be restricted to a single institution. For example, let's say it's 1969, you have solved a key problem on the Apollo lander, and you believe that the ECCV audience would like to hear about your solution. The work is a development of your celebrated 1968 paper entitled "Zero-g frobnication: How being the only people in the world with access to the Apollo lander source code makes us a wow at parties", by Zeus.
|
| 70 |
+
|
| 71 |
+
You can handle this paper like any other. Don't write "We show how to improve our previous work \[Anonymous, 1968\]. This time we tested the algorithm on a lunar lander \[name of lander removed for blind review\]". That would be silly, and would immediately identify the authors. Instead write the following:
|
| 72 |
+
|
| 73 |
+
> We describe a system for zero-g frobnication. This system is new because it handles the following cases: A, B. Previous systems \[Zeus et al. 1968\] didn't handle case B properly. Ours handles it by including a foo term in the bar integral.
|
| 74 |
+
>
|
| 75 |
+
> \...
|
| 76 |
+
>
|
| 77 |
+
> The proposed system was integrated with the Apollo lunar lander, and went all the way to the moon, don't you know. It displayed the following behaviours which show how well we solved cases A and B: \...
|
| 78 |
+
|
| 79 |
+
As you can see, the above text follows standard scientific convention, reads better than the first version, and does not explicitly name you as the authors. A reviewer might think it likely that the new paper was written by Zeus, but cannot make any decision based on that guess. He or she would have to be sure that no other authors could have been contracted to solve problem B.\
|
| 80 |
+
For sake of anonymity, it's recommended to omit acknowledgements in your review copy. They can be added later when you prepare the final copy.
|
| 81 |
+
|
| 82 |
+
This is an edited version of the Springer LNCS instructions adapted for the ECCV 2022 first paper submission. You are strongly encouraged to use LaTeX2$\varepsilon$ for the preparation of your camera-ready manuscript together with the corresponding Springer class file `llncs.cls`.
|
| 83 |
+
|
| 84 |
+
We would like to stress that the class/style files and the template should not be manipulated and that the guidelines regarding font sizes and format should be adhered to. This is to ensure that the end product is as homogeneous as possible.
|
| 85 |
+
|
| 86 |
+
The printing area is $122 \; \mbox{mm} \times 193 \; \mbox{mm}$. The text should be justified to occupy the full line width, so that the right margin is not ragged, with words hyphenated as appropriate. Please fill pages so that the length of the text is no less than 180 mm.
|
| 88 |
+
|
| 89 |
+
Use 10-point type for the name(s) of the author(s) and 9-point type for the address(es) and the abstract. For the main text, please use 10-point type and single-line spacing. We recommend using Computer Modern Roman (CM) fonts, which is the default font in this template. Italic type may be used to emphasize words in running text. Bold type and underlining should be avoided. With these sizes, the interline distance should be set so that some 45 lines occur on a full-text page.
|
| 90 |
+
|
| 91 |
+
Headings should be capitalized (i.e., nouns, verbs, and all other words except articles, prepositions, and conjunctions should be set with an initial capital) and should, with the exception of the title, be aligned to the left. Words joined by a hyphen are subject to a special rule. If the first word can stand alone, the second word should be capitalized. The font sizes are given in Table [1](#table:headings){reference-type="ref" reference="table:headings"}.
|
| 92 |
+
|
| 93 |
+
:::: center
|
| 94 |
+
::: {#table:headings}
|
| 95 |
+
| Heading level     | Example                          | Font size and style |
|-------------------|----------------------------------|---------------------|
| Title (centered)  | **Lecture Notes ...**            | 14 point, bold      |
| 1st-level heading | **1 Introduction**               | 12 point, bold      |
| 2nd-level heading | **2.1 Printing Area**            | 10 point, bold      |
| 3rd-level heading | **Headings.** Text follows ...   | 10 point, bold      |
| 4th-level heading | *Remark.* Text follows ...       | 10 point, italic    |
|
| 103 |
+
|
| 104 |
+
: Font sizes of headings. Table captions should always be positioned *above* the tables. The final sentence of a table caption should end without a full stop
|
| 105 |
+
:::
|
| 106 |
+
::::
|
| 107 |
+
|
| 108 |
+
Here are some examples of headings: "Criteria to Disprove Context-Freeness of Collage Languages", "On Correcting the Intrusion of Tracing Non-deterministic Programs by Software", "A User-Friendly and Extendable Data Distribution System", "Multi-flip Networks: Parallelizing GenSAT", "Self-determinations of Man".
|
| 109 |
+
|
| 110 |
+
The numbers accorded to lemmas, propositions, and theorems etc. should appear in consecutive order, starting with the number 1, and not, for example, with the number 11.
|
| 111 |
+
|
| 112 |
+
Please produce your figures electronically and integrate them into your text file. For LaTeX users we recommend using package `graphicx` or the style files `psfig` or `epsf`.
|
| 113 |
+
|
| 114 |
+
Check that in line drawings, lines are not interrupted and have constant width. Grids and details within the figures must be clearly readable and may not be written one on top of the other. Line drawings should have a resolution of at least 800 dpi (preferably 1200 dpi). For digital halftones 300 dpi is usually sufficient. The lettering in figures should have a height of 2 mm (10-point type). Figures should be scaled up or down accordingly. Please do not use any absolute coordinates in figures.
|
| 115 |
+
|
| 116 |
+
Figures should be numbered and should have a caption which should always be positioned *under* the figures, in contrast to the caption belonging to a table, which should always appear *above* the table. Please center the captions between the margins and set them in 9-point type (Fig. [1](#fig:example){reference-type="ref" reference="fig:example"} shows an example). The distance between text and figure should be about 8 mm, the distance between figure and caption about 5 mm.
|
| 117 |
+
|
| 118 |
+
{#fig:example height="6.5cm"}
|
| 119 |
+
|
| 120 |
+
If possible (e.g. if you use LaTeX) please define figures as floating objects. LaTeX users, please avoid using the location parameter "h" for "here". If you have to insert a pagebreak before a figure, please ensure that the previous page is completely filled.
|
| 121 |
+
|
| 122 |
+
Displayed equations or formulas are centered and set on a separate line (with an extra line or halfline space above and below). Displayed expressions should be numbered for reference. The numbers should be consecutive within the contribution, with numbers enclosed in parentheses and set on the right margin. For example, $$\begin{align}
\psi (u) & = \int_{0}^{T} \left[\frac{1}{2} \left(\Lambda_{0}^{-1} u,u\right) + N^{\ast} (-u)\right] dt \; \\
& = 0 ?
\end{align}$$
|
| 127 |
+
|
| 128 |
+
Please punctuate a displayed equation in the same way as ordinary text but with a small space before the end punctuation.
|
| 129 |
+
|
| 130 |
+
The superscript numeral used to refer to a footnote appears in the text either directly after the word to be discussed or, in relation to a phrase or a sentence, following the punctuation sign (comma, semicolon, or full stop). Footnotes should appear at the bottom of the normal text area, with a line of about 2 cm in TeX and about 5 cm in Word set immediately above them.[^1]
|
| 131 |
+
|
| 132 |
+
Program listings or program commands in the text are normally set in typewriter font, e.g., CMTT10 or Courier.
|
| 133 |
+
|
| 134 |
+
*Example of a Computer Program*
|
| 135 |
+
|
| 136 |
+
    program Inflation (Output);
      {Assuming annual inflation rates of 7%, 8%, and 10%,... years};
      const
        MaxYears = 10;
      var
        Year: 0..MaxYears;
        Factor1, Factor2, Factor3: Real;
      begin
        Year := 0;
        Factor1 := 1.0; Factor2 := 1.0; Factor3 := 1.0;
        WriteLn('Year 7% 8% 10%'); WriteLn;
        repeat
          Year := Year + 1;
          Factor1 := Factor1 * 1.07;
          Factor2 := Factor2 * 1.08;
          Factor3 := Factor3 * 1.10;
          WriteLn(Year:5, Factor1:7:3, Factor2:7:3, Factor3:7:3)
        until Year = MaxYears
      end.
|
| 156 |
+
|
| 157 |
+
(Example from Jensen K., Wirth N. (1991) Pascal user manual and report. Springer, New York)
|
| 158 |
+
|
| 159 |
+
The list of references is headed "References\" and is not assigned a number in the decimal system of headings. The list should be set in small print and placed at the end of your contribution, in front of the appendix, if one exists. Please do not insert a pagebreak before the list of references if the page is not completely filled. An example is given at the end of this information sheet. For citations in the text please use square brackets and consecutive numbers: [@Alpher02], [@Alpher03], [@Alpher04] ...
|
| 160 |
+
|
| 161 |
+
To convert a submission file into a camera-ready for an accepted paper:
|
| 162 |
+
|
| 163 |
+
1. First comment out
|
| 164 |
+
|
| 165 |
+
\usepackage{ruler}
|
| 166 |
+
|
| 167 |
+
and the line that follows it.
|
| 168 |
+
|
| 169 |
+
2. The anonymous title part should be removed or commented out, and a proper author block should be inserted, for which a skeleton is provided in a commented-out version. These are marked in the source file as
|
| 170 |
+
|
| 171 |
+
% INITIAL SUBMISSION
|
| 172 |
+
|
| 173 |
+
and
|
| 174 |
+
|
| 175 |
+
% CAMERA READY SUBMISSION
|
| 176 |
+
|
| 177 |
+
3. Please write out author names in full in the paper, i.e. full given and family names. If any authors have names that can be parsed into FirstName LastName in multiple ways, please include the correct parsing in a comment to the editors, below the
|
| 178 |
+
|
| 179 |
+
\author{}
|
| 180 |
+
|
| 181 |
+
field.
|
| 182 |
+
|
| 183 |
+
4. Make sure you have inserted the proper Acknowledgments.
|
| 184 |
+
|
| 185 |
+
We need all the source files (LaTeX files, style files, special fonts, figures, bib-files) that are required to compile papers, as well as the camera ready PDF. For each paper, one ZIP-file called XXXX.ZIP (where XXXX is the zero-padded, four-digit paper ID) has to be prepared and submitted via the ECCV 2022 Submission Website, using the password you received with your initial registration on that site. The size of the ZIP-file may not exceed the limit of 60 MByte. The ZIP-file has to contain the following:
|
| 186 |
+
|
| 187 |
+
1. All source files, e.g. LaTeX2e files for the text, PS/EPS or PDF/JPG files for all figures.
|
| 188 |
+
|
| 189 |
+
2. PDF file named "XXXX.pdf\" that has been produced by the submitted source, where XXXX is the four-digit paper ID (zero-padded if necessary). For example, if your paper ID is 24, the filename must be 0024.pdf. This PDF will be used as a reference and has to exactly match the output of the compilation.
|
| 190 |
+
|
| 191 |
+
3. PDF file named "XXXX-copyright.PDF\": a scanned version of the signed copyright form (see ECCV 2022 Website, Camera Ready Guidelines for the correct form to use).
|
| 192 |
+
|
| 193 |
+
4. If you wish to provide supplementary material, the file name must be in the form XXXX-supp.pdf or XXXX-supp.zip, where XXXX is the zero-padded, four-digit paper ID as used in the previous step. Upload your supplemental file on the "File Upload\" page as a single PDF or ZIP file of 100 MB in size or less. Only PDF and ZIP files are allowed for supplementary material. You can put anything in this file -- movies, code, additional results, accompanying technical reports--anything that may make your paper more useful to readers. If your supplementary material includes video or image data, you are advised to use common codecs and file formats. This will make the material viewable by the largest number of readers (a desirable outcome). ECCV encourages authors to submit videos using an MP4 codec such as DivX contained in an AVI. Also, please submit a README text file with each video specifying the exact codec used and a URL where the codec can be downloaded. Authors should refer to the contents of the supplementary material appropriately in the paper.
|
| 194 |
+
|
| 195 |
+
Check that the upload of your file (or files) was successful either by matching the file length to that on your computer, or by using the download options that will appear after you have uploaded. Please ensure that you upload the correct camera-ready PDF--renamed to XXXX.pdf as described in the previous step as your camera-ready submission. Every year there is at least one author who accidentally submits the wrong PDF as their camera-ready submission.
|
| 196 |
+
|
| 197 |
+
Further considerations for preparing the camera-ready package:
|
| 198 |
+
|
| 199 |
+
1. Make sure to include any further style files and fonts you may have used.
|
| 200 |
+
|
| 201 |
+
2. References are to be supplied as BBL files to avoid omission of data while conversion from BIB to BBL.
|
| 202 |
+
|
| 203 |
+
3. Please do not send any older versions of papers. There should be one set of source files and one XXXX.pdf file per paper. Our typesetters require the author-created pdfs in order to check the proper representation of symbols, figures, etc.
|
| 204 |
+
|
| 205 |
+
4. Please remove unnecessary files (such as eijkel2.pdf and eijkel2.eps) from the source folder.
|
| 206 |
+
|
| 207 |
+
5. You may use sub-directories.
|
| 208 |
+
|
| 209 |
+
6. Make sure to use relative paths for referencing files.
|
| 210 |
+
|
| 211 |
+
7. Make sure the source you submit compiles.
|
| 212 |
+
|
| 213 |
+
Springer is the first publisher to implement the ORCID identifier for proceedings, ultimately providing authors with a digital identifier that distinguishes them from every other researcher. ORCID (Open Researcher and Contributor ID) hosts a registry of unique researcher identifiers and a transparent method of linking research activities to these identifiers. This is achieved through embedding ORCID identifiers in key workflows, such as research profile maintenance, manuscript submissions, grant applications and patent applications.
|
| 214 |
+
|
| 215 |
+
Please kindly use the checklist below to deal with some of the most frequently encountered issues in ECCV submissions.
|
| 216 |
+
|
| 217 |
+
**FILES:**
|
| 218 |
+
|
| 219 |
+
- My submission package contains ONE compiled pdf file for the camera-ready version to go on Springerlink.
|
| 220 |
+
|
| 221 |
+
- I have ensured that the submission package has all the additional files necessary for compiling the pdf on a standard LaTeX distribution.
|
| 222 |
+
|
| 223 |
+
- I have used the correct copyright form (with editor names pre-printed), and a signed pdf is included in the zip file with the correct file name.
|
| 224 |
+
|
| 225 |
+
**CONTENT:**
|
| 226 |
+
|
| 227 |
+
- I have removed all `\vspace` and `\hspace` commands from my paper.
|
| 228 |
+
|
| 229 |
+
- I have not used `\thanks` or `\footnote` commands and symbols for corresponding authors in the title (which is processed with scripts) and (optionally) used an Acknowledgement section for all the acknowledgments, at the end of the paper.
|
| 230 |
+
|
| 231 |
+
- I have not used `\cite` command in the abstract.
|
| 232 |
+
|
| 233 |
+
- I have read the Springer author guidelines, and complied with them, including the point on providing full information on editors and publishers for each reference in the paper (Author Guidelines -- Section 2.8).
|
| 234 |
+
|
| 235 |
+
- I have entered a correct `\titlerunning{}` command and selected a meaningful short name for the paper.
|
| 236 |
+
|
| 237 |
+
- I have entered `\index{Lastname,Firstname}` commands for names that are longer than two words.
|
| 238 |
+
|
| 239 |
+
- I have used the same name spelling in all my papers accepted to ECCV and ECCV Workshops.
|
| 240 |
+
|
| 241 |
+
- I have inserted the ORCID identifiers of the authors in the paper header (see http://bit.ly/2H5xBpN for more information).
|
| 242 |
+
|
| 243 |
+
- I have not decreased the font size of any part of the paper (except tables) to fit into 14 pages; I understand that Springer editors will remove such commands.
|
| 244 |
+
|
| 245 |
+
**SUBMISSION:**
|
| 246 |
+
|
| 247 |
+
- All author names, titles, and contact author information are correctly entered in the submission site.
|
| 248 |
+
|
| 249 |
+
- The corresponding author e-mail is given.
|
| 250 |
+
|
| 251 |
+
- At least one author has registered by the camera ready deadline.
|
2206.05696/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-03-23T10:30:11.957Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36" etag="nPUqv9KLFC3eLi1vIcRM" version="16.6.7" type="device"><diagram id="m1vNAn4XMyH9f9RnBu2x" name="Page-1">7Vtbc5s4FP41nk0fmuFu/JjESZvZ7Xan6e42jzLINg1GFERi99fvERI3WdjYBreZrds46EgIcb7vXHTJyLxZrd8lKF5+ID4OR4bmr0fmdGQYumFp8ItJNlziaDoXLJLAF40qwUPwHQuhuG+RBT5OGw0pISEN4qbQI1GEPdqQoSQhL81mcxI2nxqjBd4SPHgo3Jb+G/h0WbyXplUV73GwWBaPnhiiZoXK1lyQLpFPXrgob2PejsybhBDKr1brGxwy7RWK4R3dtdSWI0twRLvc8Akn+P6zmaDV17snb+J+z/x/3openlGYiTceGU4I/V3PCXQLo6YboQvnW0aKirdpjtQVNNDteF1VwtWC/f6bUpygyMPQ4mJk31xk8PXmTdE5jJL3z1sLBZWPMhKSRT5mA9eh+mUZUPwQI4/VvgDRQLakq1BUpzQhTyU8RtlfXTXFe+KE4nVNJFT1DpMVpskGmhS1js1vEcTVCya/NGjAZcsaA4p2SDBvUXZdYQMXAp4DoDIUUJ2iNgbAHVoFIXvD9zh8xjTwkKgQlqhbrByE4Q0JSZI/xPQRdudeqfdajeO5eDbvR/+mK+lf29Z/qeu6/vXxUACYe2yl4PKB9mOq7Oc28sCXJlDPjOeCvmHm02o9XDxLZEmvZtaZL4sE+QEgXzAjIhHuhxSWJZHC7miU5lCcsE73n0r8pV680siqhqYGn/l8+95P8GYBfgZcDW1KvLSVDdwzww/vfnw9Y28xnsLVE/s1mMfuRCVjWCqZTSqVGUedSir/Mph/H59OJUtNJYv9H19nI4PjygU/FFgppsxt9k+0q8n5Rxlr8k9PXDCaXDANBRecc7qVMmescMA+JKaiSBK6JAsSofC2kl57WfJcAlU1+YOQWAi/Yko3AgaUUdIEDxSYbL5AQSsKj6xwaRfF6bpeOd2IUufAkFKU0CuWlVemnMvuAqYe0bVftPBClKaBx4Wiib4L8ZRkiYd3qNXl7eCRC0x3qV/gynS+k0AJDhENnpuThd7Z4LZ6htnRWYcya0/xNwN+oP4djiB/pyTZk3OAeHbm9EJOR23s+pbKRbjGzOzLRcjpqKVwEcp0dLBwMTkXKYp04iQqgJqpatYmBfY6tkKEwmARMW8A4MEYzGsGGjAjvBIVq8D3c/+nIlhFwYM8VR9BZSzlqooE46wxRVfNX+Qgo/K+NdCaFt1ZnXgd0C9VLIHSYxFz4LoKK6xQRJVjo9Hx0aFYX9oXHcYdg0MNaFsBdCHrHEPEE/4iQW7OgmcliYrkRfY4/L3FXfX1IakjU292ZEykjrhitjoCwqBNrVnMGqQ7BixnWyITr7jNe6yYXur0BPKrJmpnIv+RRC6MRq+ZTGVAaqM5mfw/C6knkvOUF3U6k1r2wk43UvfGO/uVOt2Kno8Ndqq56qN0WY5QleYfT0zrVXplW+KdaR9JYEf2ltqZCez8Lwh8PD/HHfnp/uLnIPxUzYPa+Snc0THzzmPicY2ORoOP+h4+NnxoZVT11RK9sTDCbppBEc3yt879cITiz4RrnjdorBAN75Ltn4vyk0mDqcaxOYWrWc2ObLsT5Q9NlB1LSsjtxgbvMIlysa7R+47wJ5zGJErLDeFkuO2Fll2DQfaJx9L0S7UwdOZ9YuP0iP1jnOK+IN2T+zJfpftypA0r90jvNW5bl+rbe5nqDfy+vJd2c0ei1e9Xrvflr7vbj5PP9x+Xh52R0PZ7kq015dsr57q2przlJxSMbHcdunHZMs3dtwmp9+E9lBrssEj4E2nQ2NKgq9BgEZ/Po8H2IwFpjKJOAVRrORKAVky70SyN87J2yT+1IMofse9cyNADUZ5EUQ2tc0rRdSDbov3nKDByZsDI1jMY5a7GewiYhuYD+hAVgLgbllcYGosi8E0IkyPaaVRdRF6yiSlhkYrlM9FiZN6p91oOPBV0AB9y436bcutmQMQJVioK1DjZD3fbztRBx1vu6W8pG2ywikOcx1UUhhvgnvZIMvj+mqX9YaAWLdEze3IOzoxd4bWHYxqw7faQ+acw8542e4DZ9e6mOZmo3v1yT58qXjTgPsRwW9rutvKOWXyHyFFs/IV4TgcPJOZWIJkoAolz1kCiWnnjeg5qihcMmTIubnIDmOP8EDZdMl8kZChR0bTmszQUbeiSeRkYVcSVBzl/FqKE+52KiQXUQQecO278F1OKrbXkHrDVpWm0KscaK3IseaegN1zbD3opcN1ynX8C0PBcLSItcDFPeJ/jH0RP4hqtSseo9lWXLR7k1aLuSjtIlup831Cwq+flHVbS+z8xY49UCx/y8bv53PCUR7p9SI1s5mQJDCOg7EnGzvXyA47UjNUT2vqJGtUBCUuesPaGkGqvro+lr4cP909gVNrFlHjZCudd3QPOa+z/4PO1Mg9cD6t5MHNty+4JeEdaMtNNBfKqlZThTLOHY9y/kO+A/FhCXnWK7rzI/7L58yDvSsgrwnFfyEOx+qs6vrJY/XGiefsf</diagram></mxfile>
|
2206.05696/main_diagram/main_diagram.pdf
ADDED
|
Binary file (68.1 kB). View file
|
|
|
2206.05696/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,41 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Humans have long wanted to talk with the machine and have them comprehend and generate natural language. The task of chit-chat dialogue response generation can be described as one of the major goals in natural language processing. As such, there has been considerable interest in the sub-field of open-domain dialogue models.
|
| 4 |
+
|
| 5 |
+
Nevertheless, the existing dialogue response generation models still suffer from some very fundamental problems: lack of interesting ("Ok", "I see", etc.) or uninformative responses ("I don't know") [\(Li et al.,](#page-6-0) [2016a,](#page-6-0) [Shao et al.,](#page-6-1) [2017,](#page-6-1) [Ghazvininejad](#page-5-0) [et al.,](#page-5-0) [2017\)](#page-5-0). The primary cause for this is that, unlike humans, the models do not have access to knowledge, experience about out-of-domain topics or human conversational habits and hence can only produce limited unengaging generic responses.
|
| 6 |
+
|
| 7 |
+
Recent work has proposed considering additional context information such as multi-turn conversational history [\(Zhang et al.,](#page-6-2) [2018\)](#page-6-2), persona [\(Li](#page-6-3) [et al.,](#page-6-3) [2016b\)](#page-6-3) or a fact-based knowledge base [\(Di](#page-5-1)[nan et al.,](#page-5-1) [2019\)](#page-5-1). Among these, our work approaches this problem from a more general standpoint of improving the raw conversational ability of generative models. We attempt this by taking inspiration from how humans learn to converse, i.e., through mimicking social interactions. Applying this in the context of dialogue models, we use a human-readable external knowledge base consisting solely of unstructured social media interactions (hereinafter referred to as SMIkb), which tends to include a more diverse language structure and hence improve generated responses.
|
| 8 |
+
|
| 9 |
+
For our approach, we jointly train a generator-retriever model where the retriever searches through the pre-indexed SMIkb and feeds the related information together with the input utterance to the generative seq2seq model, allowing for additional context at the time of generation.
|
| 10 |
+
|
| 11 |
+
In particular, we utilize the Dense Passage Retriever proposed by [Karpukhin et al.](#page-5-2) [\(2020\)](#page-5-2) on top of BART [\(Lewis et al.,](#page-5-3) [2020a\)](#page-5-3) as our generational model trained on a mix of open-domain dialogue datasets, together with a collection of Reddit submissions and comments as our main source of social interactions. Experiments showed that our approach outperformed the existing vanilla seq2seq baseline (BART) across all of the automatic and human evaluation metrics. By making use of interactions grounded in social media, the generated responses were not only more engaging but were also shown to be much more relevant and natural, thus establishing the effectiveness of our approach.
|
| 12 |
+
|
| 13 |
+
# Method
|
| 14 |
+
|
| 15 |
+
In this section, we discuss our approach to introducing social media interactions as an external knowledge base (SMIkb) to ground responses in for more natural and human-like response generation. We begin by formulating the task of dialogue generation and then proceed to explain our joint retriever-generator model as the proposed setup for utilizing the aforementioned unstructured data source. Note that in this work, we primarily focus on response generation for single-turn dialogues; we consider other settings, such as the multi-turn case, best addressed in future work.
|
| 16 |
+
|
| 17 |
+
Our task of response generation grounded in external knowledge can be formulated as training a model to predict a response $\mathbf{r} = (r_1, r_2, \ldots, r_m)$ of
|
| 18 |
+
|
| 19 |
+
m words when given an input utterance $\mathbf{u}$ and a set of documents $\mathcal{D}$ that might contain relevant knowledge. We define our goal as to allow the model to learn the parameters such that when given an input utterance $\mathbf{u}$ and a knowledge base $\mathcal{D}$ , the model can generate a response $\mathbf{r}$ following the probability $p(r_i|\mathbf{u},\mathbf{r}_{< i},\mathcal{D};\theta)$ , where $\theta$ refers to the parameters of the model.
|
| 20 |
+
|
| 21 |
+
Inspired by recent advances in retrieval-assisted QA (Guu et al., 2020; Lewis et al., 2020b), we adopt a simple joint retriever-generator setup for the task of dialogue generation. Concretely, we utilize BART, a seq2seq model pre-trained on a denoising objective, as our generative model, along with the pre-trained neural Dense Passage Retriever (DPR) (Karpukhin et al., 2020) as the retriever of choice. DPR is a highly efficient neural retriever pre-trained to retrieve the top-k documents most similar to an input query $\mathbf{u}$. It does so by encoding both the query and the entire knowledge base through independent BERT-based encoders. Furthermore, we follow Karpukhin et al. (2020) and build an offline searchable dense vector index of these embeddings for our SMIkb using the FAISS (Johnson et al., 2017) library for faster lookup. An overview of our architecture is shown in Figure 1. Application of our model to dialogue response generation can be formulated as a two-step process: (1) the retriever searches the top-k documents relevant to the input utterance from the pre-indexed interaction knowledge base, and (2) the generator predicts the response given the previous utterance along with the retrieved context.
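To make the two-step process concrete, the following is a minimal sketch of the offline indexing and online lookup, assuming the publicly available Hugging Face DPR checkpoints and a flat inner-product FAISS index; the SMIkb entries, checkpoint choice, and value of k are illustrative rather than the paper's actual configuration.

```python
import faiss
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

# Publicly available DPR checkpoints (illustrative; not necessarily the ones used here).
ctx_name = "facebook/dpr-ctx_encoder-single-nq-base"
q_name = "facebook/dpr-question_encoder-single-nq-base"
ctx_enc, ctx_tok = DPRContextEncoder.from_pretrained(ctx_name), DPRContextEncoderTokenizer.from_pretrained(ctx_name)
q_enc, q_tok = DPRQuestionEncoder.from_pretrained(q_name), DPRQuestionEncoderTokenizer.from_pretrained(q_name)

# Made-up SMIkb entries standing in for indexed social media interactions.
smikb = [
    "Post: just adopted a rescue puppy! Reply: congrats, what breed is it?",
    "Post: failed my driving test again. Reply: don't worry, third time's the charm.",
]

# (1) Offline: embed every SMIkb document and build a dense FAISS index.
with torch.no_grad():
    doc_emb = ctx_enc(**ctx_tok(smikb, padding=True, truncation=True, return_tensors="pt")).pooler_output
index = faiss.IndexFlatIP(doc_emb.shape[1])  # inner-product (dot product) search
index.add(doc_emb.numpy())

# (2) Online: embed the input utterance, fetch the top-k interactions, and hand them
# to the BART generator together with the utterance (generation itself not shown).
with torch.no_grad():
    q_emb = q_enc(**q_tok("I just adopted a puppy!", return_tensors="pt")).pooler_output
scores, top_ids = index.search(q_emb.numpy(), 2)
retrieved = [smikb[i] for i in top_ids[0]]
```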
|
| 22 |
+
|
| 23 |
+
Following the notion set in Section 3.1, the probability of generating the response $\mathbf{r}$ given the utterance $\mathbf{u}$ and each of the top-k documents $d_j$ from the knowledge base $\mathcal{D}$ can be defined as
|
| 24 |
+
|
| 25 |
+
$$p(\mathbf{r}|\mathbf{u};\theta,\lambda) = \sum_{j}^{k} p_{\lambda}(d_{j}|\mathbf{u};\lambda) \prod_{i} p_{\theta}(r_{i}|\mathbf{u},\mathbf{r}_{< i},d_{j};\theta),$$
|
| 26 |
+
(1)
|
| 27 |
+
|
| 28 |
+
where $\theta$ and $\lambda$ are parameters for the generator and retriever, respectively. They are both fine-tuned jointly in an end-to-end fashion, with the retriever providing additional context that is concatenated together with the input at the time of generation. As there is no "correct" document source in the knowledge base, we consider it to be a latent variable. Therefore, during decoding we marginalize these probabilities over all the retrieved documents to return the most probable (best) response using
|
| 29 |
+
|
| 30 |
+
<span id="page-2-0"></span>
|
| 31 |
+
|
| 32 |
+
| Dataset | Total (turns) | Train | Valid | Test |
|
| 33 |
+
|---------------------------|---------------|---------|--------|--------|
|
| 34 |
+
| DailyDialog | 76,743 | 53,721 | 11,511 | 11,511 |
|
| 35 |
+
| DailyDialog++ | 39,913 | 27,939 | 5,987 | 5,987 |
|
| 36 |
+
| Cornell Movie-Dialogs | 221,088 | 154,762 | 33,163 | 33,163 |
|
| 37 |
+
| Reddit (pseudo extracted) | 200,000 | 140,000 | 30,000 | 30,000 |
|
| 38 |
+
|
| 39 |
+
Table 1: Overview of datasets in use.
|
| 40 |
+
|
| 41 |
+
beam search.
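As a toy numerical illustration of the marginalisation in Eq. (1), assuming the retriever's document posteriors and the generator's per-token probabilities for one candidate response are already available (all numbers below are made up):

```python
import numpy as np

# Hypothetical values for k = 2 retrieved documents and a 3-token candidate response.
p_doc_given_u = np.array([0.7, 0.3])        # p_lambda(d_j | u), j = 1..k
p_tok_given_doc = np.array([                # p_theta(r_i | u, r_<i, d_j)
    [0.50, 0.60, 0.40],                     # token probabilities conditioned on d_1
    [0.20, 0.30, 0.10],                     # token probabilities conditioned on d_2
])

# Eq. (1): sum over documents of (document posterior) * (product of token probabilities).
p_response = np.sum(p_doc_given_u * np.prod(p_tok_given_doc, axis=1))
print(p_response)  # 0.7 * 0.12 + 0.3 * 0.006 = 0.0858
```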
|
2207.01377/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2207.01377/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,68 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Attention-deficit/hyperactivity disorder (ADHD) is one of the most common neurodevelopmental disorders of childhood affecting approximately 5 to 13 percent of the children of an age cohort, depending on the diagnostic procedure used [@Thomas2015; @Willcutt2012; @polanczyk2007worldwide]. ADHD is characterized by persistent inattention, high levels of hyperactivity, and impulsivity [@AmericanPsychiatricAssociation2013].
|
| 4 |
+
|
| 5 |
+
The diagnosis of ADHD requires clinical assessment by specialists and typically involves self- and informant reports through clinical interviews and the use of rating scales. Informant reports can be obtained from close family members, teachers, or partners, depending on the age of the candidate. Since the clinical assessment is heavily influenced by subjective reports and ratings, it also incurs the risk to reflect social or cognitive biases. The *Strengths and Weaknesses of ADHD-Symptoms and Normal-Behavior (SWAN) rating scale* [@swanson2012categorical] is a well-established screening tool based on a questionnaire that has to be filled out by parents or teachers. The SWAN scale registers symptoms of inattention, hyperactivity, and impulsivity yielding the so-called SWAN score. Specifically, the SWAN rating scale probes behaviors according to the full spectrum of symptom severity, which ranges from functionality to dysfunctionality [@swanson2012categorical; @brites2015development].
|
| 6 |
+
|
| 7 |
+
The lack of comprehensive, objective assessment tools, developmental changes in the presentation of symptoms [@Biederman2000], and the high rates of co-morbidities [@AmericanPsychiatricAssociation2013] present a major challenge to ADHD assessment and ultimately increases the risk of under- or overdiagnosis. While a false negative can lead to the denial of treatment, a false positive can lead to inappropriate treatment, both of which may have detrimental effects on an individual's ability to function at school, professionally and socially as well as on their overall well-being. This motivates the development of fully automatic screening tools that can be applied at large to people at-risk or with a suspicion of having ADHD, thereby increasing the accessibility of ADHD screening opportunities as well as the objectivity of the screening method prior to specialist assessment.
|
| 8 |
+
|
| 9 |
+
Eye movements can be classified into so-called oculomotor events. These include fixations ($\approx$ 200--300 ms), during which the eye is relatively still and visual information is obtained, and saccades, which are fast relocation movements of the eye gaze between any two fixations ($\approx$ 30--80 ms) [@holmqvist2011eye]. A sequence of fixations is referred to as a *scanpath*. As eye movements are known to reflect cognitive processes including attentional mechanisms [@justcarpenter1976; @henderson2003human], they are considered a *window on mind and brain* [@vanGompel2007eye]. For several decades, they have been used as a gold-standard measure in cognitive psychology [@Rayner1998]. Researchers from the field of cognitive psychology typically treat eye movements as the dependent variable to investigate the effect of experimental manipulation of the stimulus and hence model it as the target variable. By contrast, more recent research has demonstrated the potential of treating eye movements as the independent variable (i.e., the model input) to infer the properties of the viewer. For example, it has been shown that eye-tracking data can be used to discriminate between different cognitive states [@henderson2013predicting], personal traits [@hoppe2018eye], or cognitive load [@Shojaeizadeh2019DetectingSystem]. A major challenge in using eye movements to make inferences about a viewer is the high degree of individual variability in the eye-tracking signal. The dominance of individual characteristics in the eye-tracking data explains why machine-learning methods for viewer identification perform very well [@lohr2020ijcb; @makowski2021deepeyedentificationlive], whereas models for other inference tasks typically perform at best at a proof-of-concept level or slightly above chance level. Another major challenge for the development of machine learning methods for the analysis of eye-tracking data is data scarcity. Since the collection of high-quality eye-tracking data is resource-intensive, only very few large data sets exist.
|
| 10 |
+
|
| 11 |
+
Differences in viewing behavior between individuals with and without ADHD have been found using eye-tracking tasks in which participants were required to make voluntary eye movements towards or away from a stimulus (so-called pro- or anti-saccade tasks) [@munoz2004look; @Klein2003]. These findings motivate our approach of developing a screening tool that processes each individual's eye movements and simultaneously takes into account information about the visual stimulus.
|
| 12 |
+
|
| 13 |
+
The contribution of this paper is fourfold. First, we provide a new state-of-the-art model to detect ADHD from eye movements in a natural free-viewing task and evaluate the performance of this model and relevant reference methods on a real-world data set. Second, we provide an extensive investigation of the relevance of the different input features in i) an ablation study and ii) by computing feature importances. Third, we demonstrate that transfer learning bears the potential to overcome the problem of data scarcity in eye-tracking research. Last but not least, we release a preprocessed free-viewing eye-tracking data set for the detection of ADHD.
|
| 14 |
+
|
| 15 |
+
The remainder of this paper is structured as follows. Section [2](#sec:related-work){reference-type="ref" reference="sec:related-work"} discusses related work and Section [3](#sec:problem_setting){reference-type="ref" reference="sec:problem_setting"} lays out the problem setting. We develop a model architecture for the detection of ADHD in Section [4](#sec:method){reference-type="ref" reference="sec:method"} and introduce the dataset in Section [5](#sec:datasets){reference-type="ref" reference="sec:datasets"}. In Section [6](#sec:experiments){reference-type="ref" reference="sec:experiments"} we present the experimental findings while in Section [7](#sec:discussion){reference-type="ref" reference="sec:discussion"}, we discuss the results. Section [8](#sec:conclusion){reference-type="ref" reference="sec:conclusion"} concludes.
|
| 16 |
+
|
| 17 |
+
# Method
|
| 18 |
+
|
| 19 |
+
We study the problem of ADHD detection. While watching a video, the eye gaze of the $j$-th individual is recorded as a sequence of fixations, denoted as $P_{j} = \{(x_{1}, y_{1}, t_{1}), \ldots, (x_{M}, y_{M}, t_{M})\}$, where $x_{m}$, $y_{m}$ are the $m$-th fixation location, $t_{m}$ is the fixation duration, and $M$ is the total number of recorded fixations. Provided a fixed video frame rate, we can use the temporal information to map the fixations to the corresponding video frames $V$, such that semantic information can be associated with eye-gaze. The training set consists of $\mathcal{D} = \{(P_1, V, c_1), \ldots, (P_J, V, c_J) \}$, where $P_j$ and $V$ represent the $j$-th individual's aligned fixation sequences and video frames, and $c_j$ is the label for whether an individual has ADHD. The objective is to train a classifier that identifies individuals with ADHD, which is a binary classification problem.
|
| 20 |
+
|
| 21 |
+
By varying the decision threshold of a learned model, we can plot the receiver operating characteristic (ROC) curve of the true-positive rate versus the false-positive rate and compute the area under this curve (AUC) as a quantitative indicator of classification performance. We use the AUC as the evaluation metric because it is insensitive to the uneven distribution of classes.
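For reference, the AUC can be computed directly from per-individual scores with scikit-learn; the labels and scores below are made up:

```python
from sklearn.metrics import roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                      # 1 = ADHD, 0 = control (toy labels)
y_score = [0.9, 0.2, 0.6, 0.7, 0.4, 0.1, 0.3, 0.5]     # classifier outputs in [0, 1]
print(roc_auc_score(y_true, y_score))
```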
|
| 22 |
+
|
| 23 |
+
In this section we introduce our model and the pre-training task used to initialize the weights for the final task of ADHD classification.
|
| 24 |
+
|
| 25 |
+
We propose an end-to-end trained neural sequence model to classify gaze sequences as belonging to an individual with or without ADHD. Figure [1](#fig:method){reference-type="ref" reference="fig:method"} shows an overview of our proposed method. We preprocess the raw eye-tracking data, which consist of horizontal and vertical screen coordinates recorded at a sampling rate of 60 or 120 Hz, into sequences of fixations using the Dispersion-Threshold Identification algorithm [@salvucci2000identifying]. The model takes as input the eye gaze sequence (scanpath) and the video clip on which this scanpath has been generated.
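The following is a minimal sketch of dispersion-threshold identification (I-DT) in the spirit of Salvucci and Goldberg; the dispersion and duration thresholds (and their units) are placeholders, not the values used for this dataset:

```python
import numpy as np

def idt_fixations(x, y, t, max_dispersion=1.0, min_duration=0.1):
    """Group raw gaze samples (x, y, timestamps t; NumPy arrays) into fixations with I-DT.

    A window is grown while its dispersion (max(x)-min(x)) + (max(y)-min(y)) stays below
    `max_dispersion`; windows shorter than `min_duration` are skipped.
    Returns a list of (centroid_x, centroid_y, duration) tuples.
    """
    fixations, start, n = [], 0, len(x)
    while start < n:
        end = start + 1
        while end <= n:
            wx, wy = x[start:end], y[start:end]
            if (wx.max() - wx.min()) + (wy.max() - wy.min()) > max_dispersion:
                break
            end += 1
        end -= 1  # last index for which the window dispersion was still acceptable
        duration = t[end - 1] - t[start]
        if end - start > 1 and duration >= min_duration:
            fixations.append((x[start:end].mean(), y[start:end].mean(), duration))
            start = end
        else:
            start += 1
    return fixations
```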
|
| 26 |
+
|
| 27 |
+
Based on our review of the literature, we hypothesized that the eye gaze of individuals with ADHD interacts differently with the visual stimulus in comparison to typically developing controls. We therefore use saliency maps to highlight possible regions of interest in a scene. We use a state-of-the-art saliency model, DeepGaze II [@kummerer2017understanding], to compute saliency maps for our video stimuli. DeepGaze II uses VGG-19 features that were trained on an object recognition task [@simonyan2014very] and feeds them into a second network that is trained to predict a probability distribution of fixation locations on a given image.
|
| 28 |
+
|
| 29 |
+
For each video frame $i$ of size $(W,H)$, the pre-trained DeepGaze II model generates a saliency map $S^{(i)} \in \mathbb{R}^{H\times W}$. We then apply min-max normalization to transform $S^{(i)}$ to the range of $[0,1]$. To extract the normalized saliency value of each fixation location, we create an extraction mask, $E_{m}^{(i)} \in \mathbb{R}^{H\times W}$, for the $m$-th fixation on the $i$-th video frame. More specifically, $E_{m}^{(i)}$ is generated by setting the fixation location to one and all other cells to zero. We then smooth the extraction mask with a Gaussian kernel (standard deviation $\sigma$ = 1.5$^{\circ}$) and normalize it. The Gaussian kernel is applied to account for the parafoveal information intake around the center of the fixation [@holmqvist2011eye]. Eventually, the saliency value for the $m$-th fixation is given by: $$\begin{equation}
|
| 30 |
+
s_{m} = \textbf{1}_{H}\left(E_m^{(i)}\odot S^{(i)}\right)\textbf{1}_{W}^T,
|
| 31 |
+
\end{equation}$$ where $\odot$ is the Hadamard product and $\textbf{1}_d$ is an all-ones row vector of dimension $d$. In case a fixation spans multiple frames, we use the central frame for the saliency computation. Finally, we apply z-score normalization to each of these feature channels.
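A small sketch of this extraction step using NumPy/SciPy; the pixel-space standard deviation is a placeholder, since converting 1.5$^{\circ}$ of visual angle into pixels depends on the viewing geometry:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_saliency(saliency_map, fx, fy, sigma_px=30.0):
    """Return the smoothed saliency value s_m for a fixation at pixel (fx, fy)."""
    # Min-max normalise the frame's saliency map to [0, 1].
    s = saliency_map.astype(float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)

    # Extraction mask: 1 at the fixation location, 0 elsewhere,
    # smoothed with a Gaussian kernel and re-normalised to sum to 1.
    mask = np.zeros_like(s)
    mask[int(fy), int(fx)] = 1.0
    mask = gaussian_filter(mask, sigma=sigma_px)
    mask /= mask.sum()

    # s_m = 1_H (E * S) 1_W^T, i.e. the sum of the element-wise product.
    return float((mask * s).sum())
```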
|
| 32 |
+
|
| 33 |
+
The resulting sequences of fixation features are fed into a 1D-convolutional neural network (CNN) to perform the ADHD classification. Panel (b) of Figure [1](#fig:method){reference-type="ref" reference="fig:method"} depicts the details of the CNN architecture. The CNN consists of four one-dimensional convolutional layers with rectified linear unit (ReLU) activation functions, followed by two fully-connected layers; a ReLU is applied after the first of these and a sigmoid after the last. Each convolutional layer is followed by a batch normalization layer and an average pooling layer with a pooling size of 2. The parameters $k$ and $s$ in the figure specify the kernel size and the stride size of the convolutions, and the number of filters is given per layer. A dropout layer with a rate of 0.4 is added before the first dense layer to prevent over-fitting. Finally, the neural network is optimized using the binary cross-entropy loss.
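A sketch of such a network in PyTorch; the number of input feature channels, filter counts, kernel sizes, and strides are placeholders (the exact values are specified in the figure), and the adaptive pooling before the dense layers is an added assumption to handle variable-length scanpaths:

```python
import torch
import torch.nn as nn

class ADHDNet(nn.Module):
    """Four Conv1d blocks (conv -> ReLU -> BatchNorm -> AvgPool) followed by two dense layers."""

    def __init__(self, in_channels=4, n_filters=(32, 32, 64, 64), k=5, s=1):
        super().__init__()
        blocks, prev = [], in_channels
        for f in n_filters:
            blocks += [
                nn.Conv1d(prev, f, kernel_size=k, stride=s, padding=k // 2),
                nn.ReLU(),
                nn.BatchNorm1d(f),
                nn.AvgPool1d(kernel_size=2),
            ]
            prev = f
        self.features = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool1d(1)          # assumption: length-independent head
        self.head = nn.Sequential(
            nn.Dropout(0.4),
            nn.Linear(prev, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),          # probability of ADHD
        )

    def forward(self, x):                            # x: (batch, channels, sequence length)
        z = self.pool(self.features(x)).squeeze(-1)
        return self.head(z)

model = ADHDNet()
loss_fn = nn.BCELoss()                               # binary cross-entropy
pred = model(torch.randn(8, 4, 120))                 # 8 scanpaths, 4 feature channels, 120 fixations
loss = loss_fn(pred, torch.randint(0, 2, (8, 1)).float())
```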
|
| 34 |
+
|
| 35 |
+
<figure id="fig:method" data-latex-placement="!t">
|
| 36 |
+
<figure>
|
| 37 |
+
<embed src="figures/method.pdf" />
|
| 38 |
+
</figure>
|
| 39 |
+
<p><br />
|
| 40 |
+
</p>
|
| 41 |
+
<figure>
|
| 42 |
+
<embed src="figures/CNN4.pdf" />
|
| 43 |
+
</figure>
|
| 44 |
+
<figcaption>Proposed network architecture. Panel (a) shows the complete architecture and Panel (b) shows the 1D-CNN denoted as “CNN" in Panel (a). The model is pre-trained to predict the viewer’s SWAN score (regression task) and fine-tuned for ADHD classification.</figcaption>
|
| 45 |
+
</figure>
|
| 46 |
+
|
| 47 |
+
The number of data points from individuals with diagnosed ADHD and negatively-diagnosed controls in the dataset is limited. We therefore pre-train our model on a relevant task for which more data is available. Specifically, we pre-train our neural network on a regression task predicting an individual's SWAN score. An individual's SWAN score is highly relevant to the diagnosis of ADHD; using the SWAN score to classify individuals with and without ADHD yields an AUC of 0.878 (standard error $= 0.007$). We therefore capitalize on the SWAN score to enable the model to detect ADHD-related patterns in the eye movements and perform pre-training on the *SWAN prediction dataset* (see Section [5](#sec:datasets){reference-type="ref" reference="sec:datasets"} for details on the datasets).
|
| 48 |
+
|
| 49 |
+
For pre-training, we replace the sigmoid output unit with a linear output unit for the regression setting. We apply the mean squared error as the loss function. The pre-trained weights are then used to initialize the ADHD classification model.
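Continuing the hypothetical `ADHDNet` sketch above, the head swap and weight transfer could look as follows (a sketch under those assumptions, not the authors' code):

```python
import torch.nn as nn

# Pre-training: same backbone, but a linear output unit and MSE loss for SWAN regression.
swan_model = ADHDNet()
swan_model.head[-1] = nn.Identity()          # remove the sigmoid -> unbounded regression output
swan_loss = nn.MSELoss()
# ... optimise swan_model on the SWAN prediction dataset ...

# Fine-tuning: initialise the ADHD classifier (sigmoid output) from the pre-trained weights.
adhd_model = ADHDNet()
adhd_model.load_state_dict(swan_model.state_dict())
```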
|
| 50 |
+
|
| 51 |
+
The data for this study is part of the ongoing Healthy Brain Network (HBN)[^3] initiative by the Child Mind Institute [@alexander2017open], establishing a biobank of multi-modal data of children and adolescents. The data analyzed here includes all participants of the HBN up to the 6th release. Participants from the 7th release were included if their data acquisition took place until the end of the season "Spring 2019".
|
| 52 |
+
|
| 53 |
+
The tasks analyzed in this study include all free-viewing naturalistic stimuli paradigms of the test battery. Participants were shown four different age-appropriate videos with audio track: (1) an educational video clip (*Fun with Fractals*, 2:43 min), (2) a short animated film (*The Present*, 3:23 min), (3) a short clip of an animated film (*Despicable Me*, 2:50 min), and (4) a trailer for a feature-length film (*Diary of a Wimpy Kid*, 1:57 min). There were no instructions given for watching the videos. The order of the videos within the test battery was randomized for each participant except for *The Present* always being shown last.
|
| 54 |
+
|
| 55 |
+
Monocular eye gaze data of the right eye was recorded with an infrared video-based eye tracker (iView-X Red-m, SensoMotoric Instruments \[SMI\] GmbH, spatial resolution: 0.1$^\circ$, accuracy: 0.5$^\circ$). The eye gaze was recorded at a sampling rate of 60 Hz or 120 Hz, depending on the testing site. In between each task, the eye tracker was calibrated using a 5-point grid.
|
| 56 |
+
|
| 57 |
+
The recruited participants were initially screened for having symptoms of any mental disorder. Clinical diagnoses were provided in accordance with the current edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) [@AmericanPsychiatricAssociation2013], and based on a consensus by multiple licensed clinicians. A total of 1,246 participants were included in the study, whose tracker loss was less than 10%. 232 participants (178 were male and 54 were female) with an age range of 6--21 years (mean age 9.97 years ± 3 years) were selected on the basis of having received an ADHD diagnosis (including the predominantly inattentive presentation, predominantly hyperactive-impulsive presentation, and combined presentation of ADHD) and having no past or current co-morbidity according to the DSM-V. These participants were assigned to the ADHD group. A group of 152 participants (71 were male and 81 were female) with an age range of 6--21 years (10.42 years ± 3.31 years) were assigned to the control group whose psychological assessment indicated no past or current presence of any mental disorder according to the DSM-V. All remaining 862 participants are included for hyperparameter tuning and pre-training the models. Hereafter, we refer to the subset of the data that contains recordings from the ADHD and control groups as *ADHD classification dataset* and the subset used for hyperparameter tuning and pre-training as *SWAN prediction dataset*. Note that for some participants recordings are available only from a subset of the four videos, as detailed in Table [\[tab:demographic\]](#tab:demographic){reference-type="ref" reference="tab:demographic"}. In addition to the diagnostic assessment, SWAN scores for participants were obtained through the SWAN scale as a measure of ADHD-related symptom severity [@swanson2012categorical].
|
| 58 |
+
|
| 59 |
+
:::: table*
|
| 60 |
+
::: center
|
| 61 |
+
| Video                | ADHD classification dataset | SWAN prediction dataset |
|----------------------|-----------------------------|-------------------------|
| Fun with Fractals    | 67 (48 A, 19 C)             | 276                     |
| The Present          | 159 (111 A, 48 C)           | 444                     |
| Despicable Me        | 315 (187 A, 128 C)          | 656                     |
| Diary of a Wimpy Kid | 340 (202 A, 138 C)          | 736                     |
|
| 67 |
+
:::
|
| 68 |
+
::::
|
2209.10448/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2209.10448/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,92 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Facial expression recognition (FER) plays an important role in understanding people's feelings and interactions between humans. Recently, automatic emotion recognition has gained a lot of attention from the research community [\[43\]](#page-9-0) due to its applications in healthcare [\[35\]](#page-8-0), surveillance [\[7\]](#page-8-1), or human-robot interaction [\[8\]](#page-8-2). Most recent FER methods utilize deep learning [\[28\]](#page-8-3) and achieve better results than handcrafted features approaches [\[9,](#page-8-4) [44\]](#page-9-1). The success of deep networks can be attributed to large-scale FER datasets such as AffectNet [\[37\]](#page-9-2), EmotioNet [\[3\]](#page-8-5), and RAF-DB [\[33\]](#page-8-6). Some datasets describe emotion in terms of Action Units (AUs) following the Facial Action Coding System [\[6\]](#page-8-7) or quantify affection over continuous scales, such as valence and arousal [\[41\]](#page-9-3), while most of them classify facial expressions into basic universal emotions [\[12,](#page-8-8) [36\]](#page-9-4) and the neutral state.
|
| 4 |
+
|
| 5 |
+
Unfortunately, large-scale FER datasets often suffer from the problem of label uncertainty and annotation ambiguity [\[58,](#page-9-5) [5,](#page-8-9) [45\]](#page-9-6). People with different backgrounds might perceive and interpret facial expressions differently, which can lead to inconsistent and uncertain labels [\[58,](#page-9-5) [45\]](#page-9-6). In addition, real-life facial expressions usually manifest a mixture of feelings [\[67,](#page-9-7) [5\]](#page-8-9) rather than a single exaggerated emotion often found in the lab-controlled setting. For example, Figure [1](#page-1-0) shows that people may have different opinions about the expressed emotion, particularly in ambiguous images. Consequently, a distribution over emotion categories is better than a single label because it takes all sentiment classes into account and can cover various interpretations, thus mitigating the effect of ambiguity [\[16\]](#page-8-10). However, most current large-scale FER datasets only provide a single label for each sample instead of a label distribution, which means we do not have a comprehensive description for each facial expression. This can lead to insufficient supervision during training and pose a big challenge for many FER systems.
|
| 6 |
+
|
| 7 |
+
To overcome annotation ambiguity in FER, this paper proposes a new uncertainty-aware label distribution learning method that constructs emotion distributions for training samples. Specifically, for each instance, we leverage valence-arousal information to identify a set of neighbors and calculate their corresponding contributions using our adaptive similarity mechanism. We then aggregate neighborhood information with the provided single label,
|
| 8 |
+
|
| 9 |
+
<sup>\*</sup>Equal contribution
|
| 10 |
+
|
| 11 |
+
<span id="page-1-0"></span>
|
| 12 |
+
|
| 13 |
+
Figure 1: User study results by 50 volunteers on three random images from the RAF-DB dataset. The expression in the right image is more ambiguous, which leads to high uncertainty in the emotion label. Labels at the bottom denote the provided annotation from the dataset. (Su=Surprise, Fe=Fear, Di=Disgust, Ha=Happy, Sa=Sad, An=Angry, Ne=Neutral)
|
| 14 |
+
|
| 15 |
+
adjusted by its learnable uncertainty factor, to generate the target label distribution. Finally, we use the constructed distribution as the supervision signal to optimize the model via label distribution learning. We also introduce a discriminative loss that reduces intra-class variations and encourages inter-class differences to improve the model's robustness against ambiguous features. Note that the distribution construction only occurs during training, while the inference process remains intact. In summary, our contributions are as follows:
|
| 16 |
+
|
| 17 |
+
- 1. We propose a new method, namely Label Distribution Learning with Valence-Arousal (LDLVA), for FER with ambiguous annotation by exploiting neighborhood information in the valence-arousal space.
|
| 18 |
+
- 2. Our uncertainty-aware label distribution construction provides more accurate and richer supervision for training deep FER networks, allowing them to learn from ambiguous data effectively in an end-to-end manner.
|
| 19 |
+
- 3. We perform extensive experiments under various synthetic and real-world ambiguity settings and achieve state-of-the-art results on RAF-DB, AffectNet, and SFEW datasets.
|
| 20 |
+
|
| 21 |
+
# Method
|
| 22 |
+
|
| 23 |
+
We first introduce a list of notations that will be used throughout this paper. Let $x \in \mathcal{X}$ be the instance variable in the input space $\mathcal{X}$ and $x^i$ be the particular $i$-th instance. The label set is denoted as $\mathcal{Y} = \{y_1, y_2, ..., y_m\}$ where $m$ is the number of classes and $y_j$ is the label value of the $j$-th class. The logical label vector of $x^i$ is indicated by $l^i = (l^i_{y_1}, l^i_{y_2}, ..., l^i_{y_m})$ with $l^i_{y_j} \in \{0, 1\}$ and $||l||_1 = 1$. We define the label distribution of $x^i$ as $d^i = (d^i_{y_1}, d^i_{y_2}, ..., d^i_{y_m})$ with $||d||_1 = 1$ and $d^i_{y_j} \in [0, 1]$ representing the relative degree that $x^i$ belongs to the class $y_j$. A neural network with parameters $\theta$ followed by a softmax layer is denoted as $f(x;\theta)$. The corresponding feature vector of $x^i$ extracted by a CNN backbone model is indicated by $v^i \in \mathbb{R}^V$.
|
| 24 |
+
|
| 25 |
+
Most existing FER datasets assign only a single class or, equivalently, a logical label $l^i$ for each training sample $x^i$. In particular, the given training dataset is a collection of $n$ samples with logical labels $D_l = \{(\boldsymbol{x}^i, \boldsymbol{l}^i) | 1 \leq i \leq n\}.$ However, as depicted in Figure 1, a label distribution $d^i$ is a more comprehensive and suitable annotation for the image than a single label. Inspired by the recent success of label distribution learning (LDL) in addressing label ambiguity [16], we aim to construct an emotion distribution $d^i$ for each training sample $x^i$, thus transforming the training set $D_l$ into $D_d = \{(\boldsymbol{x}^i, \boldsymbol{d}^i) | 1 \leq i \leq n\}$, which can provide richer supervision information and help mitigate the ambiguity issue. Consequently, our goal is to optimize the parameters $\theta$ of the neural network $f(x;\theta)$ such that it can learn an appropriate mapping function for the instance $x^i$ from the input space to the target label distribution $d^i$. Mathematically, we use cross-entropy to measure the discrepancy between the model's prediction and the constructed target distribution [16]. Hence, the solution can be obtained by minimizing the following classification loss:
|
| 26 |
+
|
| 27 |
+
<span id="page-2-0"></span>
|
| 28 |
+
$$\mathcal{L}_{cls} = \sum_{i=1}^{n} CE\left(\boldsymbol{d}^{i}, f(\boldsymbol{x}^{i}; \boldsymbol{\theta})\right) = -\sum_{i=1}^{n} \sum_{j=1}^{m} \boldsymbol{d}_{j}^{i} \log f_{j}(\boldsymbol{x}^{i}; \boldsymbol{\theta}).$$
|
| 29 |
+
(1)
|
| 30 |
+
|
| 31 |
+
An overview of our method is presented in Figure 2. To construct the *label distribution* for each training instance $x^i$ , we leverage its neighborhood information in the valence-arousal space. Particularly, we identify K neighbor instances for each training sample $x^i$ and utilize our *adaptive similarity mechanism* to determine their contribution degrees to the target distribution $d^i$ . Then, we combine the neighbors' predictions and their corresponding contribution degrees with the provided label $l^i$ and $l^i$ 's uncertainty factor to obtain the label distribution $d^i$ . The constructed distribution $d^i$ will be used as supervision information to train the model via label distribution learning. It is worth noting that these steps occur only during training, thus no extra costs are introduced at inference time.
|
| 32 |
+
|
| 33 |
+
As in previous works [68, 56, 5], we assume that facial images should have similar emotions to their neighbors in an auxiliary or supporting space. Therefore, the label distribution of an instance can be constructed using the information of its neighbors. Since our goal is to reconstruct the target label distribution with high fidelity, the chosen supporting space should highly correlate with the emotion space to transfer as much information as possible. Although information such as facial landmarks and action units can be utilized as the supporting space, we find that valence-arousal values are more closely associated with discrete emotions and thus particularly suitable to be the auxiliary space. In practice, the valence-arousal has been widely used to represent the human emotional spectrum, with valence describing how positive or negative an expression is and arousal indicating the intensity or activation degree of the expression [42].
|
| 34 |
+
|
| 35 |
+
Similar to the smoothness assumption [68], we assume that the label distribution of the main instance $\boldsymbol{x}^i$ can be computed as a linear combination of its neighbors' distributions. To determine the contribution of each neighbor, we propose an adaptive similarity mechanism that not only leverages the relationships between $\boldsymbol{x}^i$ and its neighbors in the auxiliary space but also utilizes their feature vectors extracted from the backbone. In particular, we first use the K-Nearest Neighbor algorithm to identify K closest points for each training sample $\boldsymbol{x}^i$ , denoted as N(i), based on the distance between training instances in the valence-arousal space. We then compute a *local similarity score* between $\boldsymbol{x}^i$ and each of its K neighbors using the following formula:
|
| 36 |
+
|
| 37 |
+
<span id="page-2-1"></span>
|
| 38 |
+
$$s_k^i = \exp\left(-\frac{\|\boldsymbol{a}^i - \boldsymbol{a}^k\|_2^2}{\delta^2}\right), \quad \forall \boldsymbol{x}^k \in N(i), \quad (2)$$
|
| 39 |
+
|
| 40 |
+
where $\boldsymbol{a}$ is the corresponding auxiliary valence-arousal vector of $\boldsymbol{x}$, and $\delta$ is a hyperparameter controlling the similarity measurement. Intuitively, the higher $s_k^i$ is, the more $x^k$ contributes to the label distribution of $x^i$.
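A small sketch of the neighbour search and the local similarity of Eq. (2) using scikit-learn and NumPy; the values of $K$ and $\delta$ are illustrative, not the paper's tuned hyperparameters:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_similarities(va, K=8, delta=0.4):
    """va: (n, 2) array of (pseudo-)valence-arousal vectors, one per training sample.

    Returns (neighbor_ids, sims), where neighbor_ids[i] holds the K nearest neighbours
    of sample i in VA space and sims[i, k] = exp(-||a_i - a_k||^2 / delta^2).
    """
    nn_search = NearestNeighbors(n_neighbors=K + 1).fit(va)  # +1: each point is its own nearest neighbour
    dist, idx = nn_search.kneighbors(va)
    dist, idx = dist[:, 1:], idx[:, 1:]                      # drop the self-match
    sims = np.exp(-(dist ** 2) / delta ** 2)
    return idx, sims
```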
|
| 41 |
+
|
| 42 |
+
<span id="page-3-0"></span>
|
| 43 |
+
|
| 44 |
+
Figure 2: An overview of our Label Distribution Learning with Valence-Arousal (LDLVA) for facial expression recognition under ambiguity. Dotted lines denote components used in training only while solid lines denote components used in both training and testing.
|
| 45 |
+
|
| 46 |
+
However, since valence-arousal values are not always available in practice, we leverage an existing method [49] to generate pseudo-valence-arousal. Consequently, these values can be inaccurate and lead to incorrect calculation of $s_k^i$ . Therefore, we proposed to correct these potential errors with our adaptive similarity mechanism. Specifically, we calculate a *calibration score* for each $(\boldsymbol{x}^i, \boldsymbol{x}^k)$ pair using the feature vectors $(\boldsymbol{v}^i, \boldsymbol{v}^k)$ extracted by the CNN backbone of $\boldsymbol{x}^i$ and its neighbor instance $\boldsymbol{x}^k \in N(i)$ as follows:
|
| 47 |
+
|
| 48 |
+
$$\zeta_k^i = \operatorname{Sigmoid}\left(g([\boldsymbol{v}^i, \boldsymbol{v}^k]; \phi)\right), \tag{3}$$
|
| 49 |
+
|
| 50 |
+
where $[\cdot, \cdot]$ is the concatenation operator and $g$ is a three-layer multi-layer perceptron (MLP) with parameters $\phi$. The dimensionality of each layer is 512, 256, and 1, respectively. We also apply layer normalization and a ReLU non-linearity in the first two layers.
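A sketch of the calibration network $g$ in PyTorch, following the layer sizes stated above; the backbone feature dimension $V$ is a placeholder:

```python
import torch
import torch.nn as nn

class CalibrationMLP(nn.Module):
    """g([v_i, v_k]; phi): three layers (512, 256, 1), LayerNorm + ReLU on the first two."""

    def __init__(self, feat_dim=512):                 # feat_dim = V, the backbone feature size (placeholder)
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, 512), nn.LayerNorm(512), nn.ReLU(),
            nn.Linear(512, 256), nn.LayerNorm(256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, v_i, v_k):
        # Calibration score zeta in (0, 1) for each (main instance, neighbour) pair.
        return torch.sigmoid(self.net(torch.cat([v_i, v_k], dim=-1)))

g = CalibrationMLP(feat_dim=512)
zeta = g(torch.randn(8, 512), torch.randn(8, 512))    # shape (8, 1)
```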
|
| 51 |
+
|
| 52 |
+
The final *contribution degrees* of neighbor instances are calculated as the product of the local similarity and the calibration score:
|
| 53 |
+
|
| 54 |
+
$$c_k^i = \begin{cases} \zeta_k^i s_k^i, & \text{for } \boldsymbol{x}^k \in N(i), \\ 0, & \text{otherwise,} \end{cases} \tag{4}$$
|
| 56 |
+
|
| 57 |
+
After obtaining the contribution degree of each neighbor $x^k \in N(i)$, we can now generate the target label distribution $d^i$ for the main instance $x^i$. The target label distribution is calculated using the logical label $l^i$ and the aggregated distribution $\tilde{d}^i$, defined as follows:
|
| 60 |
+
|
| 61 |
+
$$\tilde{d}^{i} = \frac{\sum_{k} c_{k}^{i} f(\boldsymbol{x}^{k}; \boldsymbol{\theta})}{\sum_{k} c_{k}^{i}}, \tag{5}$$
|
| 63 |
+
|
| 64 |
+
<span id="page-3-1"></span>
|
| 65 |
+
$$\mathbf{d}^{i} = (1 - \lambda^{i})\mathbf{l}^{i} + \lambda^{i}\tilde{\mathbf{d}}^{i}, \tag{6}$$
|
| 66 |
+
|
| 67 |
+
where $\lambda^i \in [0,1]$ is the *uncertainty factor* for the logical label. It controls the balance between the provided label $l^i$ and the aggregated distribution $\tilde{d}^i$ from the local neighborhood. Intuitively, a high value of $\lambda^i$ indicates that the logical label is highly uncertain, which can be caused by an ambiguous expression or a low-quality input image as illustrated in Figure 6; in that case we should put more weight on the neighborhood information $\tilde{d}^i$. Conversely, when $\lambda^i$ is small, the label distribution $d^i$ should stay close to $l^i$ since we are certain about the provided manual label. In our implementation, $\lambda^i$ is a trainable parameter for each instance and is optimized jointly with the model's parameters using gradient descent.
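
The construction of the target distribution in Eqs. 4-6 can be summarized in a short sketch; the function below is illustrative only, and it assumes the per-instance $\lambda^i$ is stored as a tensor that is kept (or re-parameterized) within $[0, 1]$.

```python
import torch

def target_distribution(logical_label, neighbor_probs, contrib, lam):
    """Blend the logical label with the aggregated neighbor distribution (Eqs. 4-6).

    logical_label:  (C,)   one-hot vector l^i
    neighbor_probs: (K, C) predictions f(x^k; theta) of the K neighbors
    contrib:        (K,)   contribution degrees c_k^i = zeta_k^i * s_k^i
    lam:            scalar uncertainty factor, assumed to lie in [0, 1]
    """
    agg = (contrib.unsqueeze(1) * neighbor_probs).sum(dim=0) / contrib.sum()  # Eq. 5
    return (1.0 - lam) * logical_label + lam * agg                            # Eq. 6
```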
|
| 68 |
+
|
| 69 |
+
Mathematically, considering Equations 1 and 6, the derivative of $\mathcal{L}_{cls}$ with respect to $\lambda^i$ can be computed as:
|
| 70 |
+
|
| 71 |
+
$$\frac{\partial \mathcal{L}_{cls}}{\partial \lambda^{i}} = \frac{\partial \text{CE}\left(\boldsymbol{d}^{i}, f(\boldsymbol{x}^{i}; \boldsymbol{\theta})\right)}{\partial \lambda^{i}} = -\sum_{j} \tilde{\boldsymbol{d}}_{j}^{i} \log f_{j}(\boldsymbol{x}^{i}; \boldsymbol{\theta}) + \sum_{j} \boldsymbol{l}_{j}^{i} \log f_{j}(\boldsymbol{x}^{i}; \boldsymbol{\theta}) \tag{8}$$
|
| 74 |
+
|
| 75 |
+
$$= CE(\tilde{\boldsymbol{d}}^i, f(\boldsymbol{x}^i; \theta)) - CE(\boldsymbol{l}^i, f(\boldsymbol{x}^i; \theta)). \tag{9}$$
|
| 76 |
+
|
| 77 |
+
If $CE(\boldsymbol{l}^i, f(\boldsymbol{x}^i; \theta))$ is smaller than $CE(\tilde{\boldsymbol{d}}^i, f(\boldsymbol{x}^i; \theta))$, the derivative of $\mathcal{L}_{cls}$ with respect to $\lambda^i$ is positive, which leads to a negative update for $\lambda^i$ under the gradient descent optimization scheme. This is desirable because, in this case, the network output agrees more with the logical label than with the aggregated neighborhood distribution. In other words, the model is more confident about the provided label and thus we should decrease the value of the uncertainty factor $\lambda^i$. The same reasoning applies in the opposite situation.
|
| 80 |
+
|
| 81 |
+
Recent literature has shown the benefits of learning discriminative features in FER [4, 27, 14, 15]. Inspired by this, we believe it is beneficial to encourage the network to learn good facial descriptions because this helps improve the model's ability to discriminate between ambiguous emotions. We find that the center loss [55] is suitable for our purpose because of its simplicity and efficacy in reducing the intra-class variations of the learned representations. Nevertheless, in the traditional formulation of the center loss [55], the features of a sample are "blindly" pulled towards its corresponding class center given its label. This means that when the provided label is incorrect, it can cause the network to learn imprecise features. We propose to overcome this problem by incorporating the label uncertainty factor $\lambda^i$ to adaptively penalize the distance between a sample and its corresponding center. For instances with high uncertainty, the network can tolerate larger deviations of their features from the class center during optimization. Furthermore, we also add pairwise distances between class centers to encourage large margins between different classes, thus enhancing the discriminative power. Our discriminative loss is calculated as follows:
|
| 82 |
+
|
| 83 |
+
$$\mathcal{L}_{D} = \frac{1}{2} \sum_{i=1}^{n} (1 - \lambda^{i}) \| \boldsymbol{v}^{i} - \boldsymbol{\mu}_{y^{i}} \|_{2}^{2} + \sum_{j=1}^{m} \sum_{\substack{k=1\\k \neq j}}^{m} \exp\left(-\frac{\|\boldsymbol{\mu}_{j} - \boldsymbol{\mu}_{k}\|_{2}^{2}}{\sqrt{V}}\right), \quad (10)$$
|
| 84 |
+
|
| 85 |
+
where $y^i$ is the class index of the i-th sample while $\mu_j$ , $\mu_k$ , and $\mu_{y^i} \in \mathbb{R}^V$ are the center vectors of the j-th, k-th, and $y^i$ -th classes, respectively. During the training phase, all center vectors are zero-initialized and optimized using Equation 10. Intuitively, the first term of $\mathcal{L}_D$ encourages the feature vectors of one class to be close to their corresponding center [55] while the second term improves the inter-class discrimination by pushing the cluster centers far away from each other.
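
A compact sketch of the discriminative loss in Eq. 10 is given below; it treats the class centers as a trainable tensor and is only meant to mirror the formula, not to reproduce the authors' implementation.

```python
import torch

def discriminative_loss(feats, labels, centers, lam):
    """Discriminative loss of Eq. 10.

    feats:   (n, V) feature vectors v^i      labels: (n,) class indices y^i
    centers: (m, V) trainable class centers  lam:    (n,) uncertainty factors
    """
    V = centers.shape[1]
    # Uncertainty-weighted pull of each sample towards its class center.
    pull = 0.5 * ((1.0 - lam) * ((feats - centers[labels]) ** 2).sum(dim=1)).sum()
    # Repulsion between every pair of distinct class centers.
    d2 = ((centers.unsqueeze(0) - centers.unsqueeze(1)) ** 2).sum(dim=-1)   # (m, m)
    off_diag = ~torch.eye(centers.shape[0], dtype=torch.bool, device=feats.device)
    push = torch.exp(-d2[off_diag] / (V ** 0.5)).sum()
    return pull + push
```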
|
| 86 |
+
|
| 87 |
+
Combining Equation 1 and Equation 10, we obtain the total loss for training:
|
| 88 |
+
|
| 89 |
+
<span id="page-4-1"></span>
|
| 90 |
+
$$\mathcal{L} = \mathcal{L}_{cls} + \gamma \mathcal{L}_D, \tag{11}$$
|
| 91 |
+
|
| 92 |
+
where $\gamma$ is a hyperparameter balancing the two losses.
|
2212.04092/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-10-21T22:19:15.370Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36 Edg/106.0.1370.47" etag="3o2rklKkuDK_3akTNFRr" version="20.2.5"><diagram id="bYW_jAO9u7Pn4qdCXIkS" name="Page-1">7R3Zlto49lvmgdPdcw4c78sjSy3p6dSkOpmu7rzkGBDglLGJbWrJ14+ELWPL12BAoqBwHlIgC9m+uvumltqfv9yEzmL2MRgjr6VI45eWOmgpiqxLCv5DRl6TEcM0k4Fp6I7TSeuBz+5PlA5K6ejSHaOoMDEOAi92F8XBUeD7aBQXxpwwDJ6L0yaBV7zrwpmmd5TWA59HjodK0x7ccTxLRi09N/sWudMZvbMspVfmDp2cLhHNnHHwnLuXetVS+2EQxMmn+UsfeQR4FC7JQtcVV7MHC5Ef1/lBEHRflktpcOMMbuJv9v1P7ylsp7vz5HjL9IXTh41fKQTCYOmPEVlEbqm955kbo88LZ0SuPuM9x2OzeO6llyeu5/UDLwhXv1UnaKQPh3g8isPgEdErfuAjMjnw4/zk1T88Xn6z9GWfUBijl9xQ+qY3KJijOHzFU+hVy+jYavKrFPPoBj+vt1GnY7PcFqp2ij0p5kyzxdfAxR9S+O4AaxWAteHFKSAKQDd+LAN6oR2tiKKLJ8jW4mUFH3odf5qSv7cYsQjK+QQEr044jvBfL/CnKCRv7JCvPSeMZ7+QTxHC1DIm1OAij/ydBg55+uBpNX3ihhFe9Jo+HX7Z5AGTe5UQBG9JXMQCeLdzqJEOOZ479fHXEd5ofGu1RzbYxcTXTS/M3fGY3AZEuzViSuKQRquHNAoHpGk/mf+7vb39jm4Xuo3+im8MewAS6I5Io1UgDV0oWjh+9UITZ+56r8lSn68/Bn7Q/hNNl56DUQW/kvQR+V6Qfgx8Z5R+7gd+FHgE8cg3vKYzJ3uWrvyHO0ShE7uBn/wsYCfAP+sHy9Bd4egdeoZ/MseLRQme9IiswLjZTtGMvIGHJslcaeiMHqcrFGqPEqwk18Pp8FdF15O18h9+qwAhxg/tpaVf+cnHHM0kYC2C+jxIiTxiKolllQ9ptY0CWdlGiaxkkBdLgsgK4sUHyT0GZKwcdJA8Hpe2E1+5kvrdvsEHyJZUALJZFniKBPEuXRSUNdFQLioQ0upfGfrX11dGvw9Bv690r+2eEOirstnR622AptCZ3HUOXbz4GIZbuVwyPHafwBuyPPr7MordyWvCpnd9moNf6zog8sXxo2cUuv6UyJAZwv//WKIoFVjrH9XRuz66j+T3d8spxq1fsglRXFbA4tlKEs/clZY2CzDqUU1s/ZSbQIyHV1A+R8DfEdlXgHMcZDtB8JwA5R4v8jBz4g3QzeBWAm+yQRsU2/MH4s5E+YYvCz5j4WFOUjvynCHyepn2CNi0VERRkcVM4SFrzKI6JQGiBlszZVEjy4ogSWOKlzT7YOVB7J3eeCnEzk7YfGJt40GMG2syyO7Icv/E5PGH0UIIEVfS6s5cOvdsw210LuB5t/KWYxlkYUJ874zjKLIKMBytzHBsUbYFxG+YzYtmzoJ8HC1D77UXYgCiuALguQ0uWiR7mxzdgWTxAb0hF0Bvlc06GTAq6K+4A946nNGrjUdKrEfqxze58Unt6JOSdfmNnVL25bA0pQh6DfAHQo4STRDk6c1ExGYugalVafLH8r83fG4Dsak6o7qVY1oqYCoKY3OyDBDbRfA5TBkl0NuA6sbBRochbzRs7nzZHFHq3i+jq6JODlRIaYxGZpQyFerKMRngBdmuKgN6wHiFQG+JAj1kvTKgR/64SzLH1ng/dqJZBtUcuMn4JyfGtOCvRhRJzaiJZosp/DEfjWnOWgXw84gNqNF0LEQe5ppPqLA4BPL0Dp8CdyUkKFkxAU+DCWJGmL+OUPqj9b6V1tENi1mI2f3YCacoLi20QoDsrQ/ACcju2lEqmpVSkTDsxwQ7GN7dMOlC8pVWNMd1o+xhNI5pjSuQTcjsEs8ckevulTwYlDaac46IaXZkhiebZXGIMbpDPb6F8JHEAdpg+GiDBV4VF9jVq8jNPXnwQvc0erKaVohn90kes+dtiB+tAkJVYSAoxFupkcO6qeu7sUvuAzx4NwnzJG+ntEkEbFdldJ8I0Lvc6STuWBkjnGRTps4cncyuKxbHXecu81K769xjbWpHZkwWSwMj/HqH+gryXNoU5T+gEhBg0o3Znzf7YTWwpfdbZu/Ht3+39Ks5vpOTfiLDA/Kkej9y8QXpw7fur9ja/22LuX8WKmMYxMkeqYO2vYHCxmjiLL2YDwVlnq6UfjRFhuiHmpnH0Sdr5By/F6u/6PZUzbLHWdY7il6GPi2/4A/9Mwmu5clFtng5wKyiyq8rZdsK8EOrolII1Gplv5EjLUHuY2er+/gs5AkPcjBZcihbwAbAnMRVoxy5HOU4rgY1x+M3lKQoWscEgC2uKkW9qLIUVZFK1gTghVdku2PJ5W3QhO1CDU/8KQhkHlsgFT3lMjXQCrroMeXvmShDAmCvyFDpIcB+RBX0ajXcyu8G9kXuD0b/IJe+qOifViPz5BzlLODTL1tcChYEwnz6MLir3UW8PL1V62zzxIKFHIlfNldKRmsztvvfQc+sKBesWtMFu4OH/QB4JU7qz6OZMxyeMNRUaX+oNX7rmqqm1lGYWL5lAaqm3DEBkSvMba3VUPhPQeQK8f3o2nbfDyiEOSifcKS3uia9rlSQO/qplQpuZZBjdzJBeB9H5MsQxc8I+SnfXA6ZsnB8z1Z1ITNhpRJ90bqFbjwA8kAe3fFC5IzJWz76q3fOSv9WhYAbgEPeLwNK7r1rFWxnj5JUGJaDz4ILCmtHgqtxc4fSyU3yFsSYsqzdPfreVF/XlnNFlkpb2hVkHMBSZVWYhIN46glKOAEGvawC8gxyphiiYH+Eyvf3HBU5SvDjZ1MQuZ20FEthSAvwIgA13sLiInoTJ2y9QZmJUoNULocmGHFjAMIeqp7j0NELpgnIh8k9W11pstU3ayG21bGLUTWdZr28VcK6Drk5Gm75trXHZ0EoPPikoRWoITOA8rVIUE6FMD55OUV2pRinXtMkMkXBvrpHTMOJRJo4DS9a4T+T35Uxnhw90FDwcSRzdYVhQw8i7ZiGHhJ6KMpmFehqT2P1R6EHo9qGaehBFD04DT3QkKjKWG5UNczrqgaYDimOJGp48I+SlHqUlCVDLmqsYF4wZDsLS0c1Lsdc0O2iOJCNco4GaC6I6r5G1bUdMgLERv53zUeQzrWXPIduwnVh0SH/xN5iqw6xewbZ3ikPFUI2rQB3t+aFZAkO+Gbkf9lOvkPJEGbdXIg9UxoUDgkNX2b4JcnF1f9Ex8LaUZgnhogqPnSZ6LGT/HCF3P4Kq4trm1zyHJrshx1lt9mh1jM
V33ZZfEt6VnZSzIEQVdRgVru/D8b0YwqXHYrKubM7XilvW3rgk79BVRP8miIKitzzToarBP3eYqHQAWTbSQEQr1ftGsy+EiPEvOBqqa2ij9/D1UK8yiz0BI71+pgQhBJKojuvtOSxQ7W1kAzbUi2EQg1MyZRZiC5zD78rGfOA1XAPRaX60IrmSAnuioSV+Viop16xs2LggjJhwMqEtq4w5q9PXE5OpU7PY6QeAaWmR0DYGRNN4lfrDQImUZP4VdmVVAb6UaoQTXDgRw/t8e/XfzzfPf71+hD/5+vfaPyP3VYg+4bZDdIZ9nP6NcCaUDDFSO5drUcZuK3n/BEEi3QDv6M4fk2h6SyxpNzLr5w8G9OqFtyTpE/sJm6QTkz6wG4CUcWW1m50exjT4pCZV91dnWbmRU1m3matwtZYrUIzy6RrAJXQPII72IRd9Kf+S/BPfB1/v/vz44d7acMprI00EynNGkm2kmR6MdypAdqdqERVkBygUyjeq2JtMaAHuo9ASgSH7iMg6Dkc3lZdq5QrtOl0fn5rmb2oLZPWlpBWeWZSi6nP5lWcrVgdhjqhejZLE5aMAGLJhmbcjcASJrAacZUSANuxCWjdSrPNj0MOl3NgVdaajHKjsqoA+OE0Djbv3L5TpQ/T+OtfvYc7tTsaKfZjW9kO+PfTqc9kdAWgSxxmIgD0OeA9CH0OAdUqZWFDX/w6UTEp19lng3desI7xPrr2MEW/mgn4fi1B3VBBtKvhd88d0TTCekPkjlbbhzGnPAyyW1Z2aZtgeSLnLTFZq6rBbEDd85a0ony1mGX4nbYE7m4N4+9tWfrAtHvcykSZM7I0KLgFJboK4+kCm1Xcs9HmTalZB5+scmASAK/HqwJGSUCJgkMj7nYlSUbeUcu6oGYJMi+mD7e/9/47t75+ub2P0d3tj2/fP4EckVPY4H5wXh6X0mEhHDacsrzsDF4LsicF+T/BDa9RatAoOHhVc08Fp6QoZTvOX8cBN5iDl7WSoqsFyuVQNBPe044Xz4hm5uunAF39+Xc30L4Mug83E9SG3ENNA7DTagD20rT/2lomrBYlpVXO6pfBojxRdFXD+/dO3K5sp3wI9BDkOfhdQchXF+TV5mhGw9EajvbWmQ8Gw9HKoVVRfQ9AsjqTPuQ8IG9vhfxRGVqTgdVqWqOdCBfKMn7ymcSC0hFBYrjc/CtFqqlYcTiBCAT95Rx6pttFlbZuUzpZ4dCVDoR9dRMufrlvxby3JucNzHmz2QMJoQJygTlvD7EZSW5k31qfbu2oj25G0RceJk918n6jarx1r7dzpUFuOeBKgeI06MwKcS2vQIo7E2voKKcyaTQLI88BwRPSeYhHcDsEhiEbBiieAW5r/nrhDJDt46+Z5UoMTTkuAzyTlnNCGGCdtMMKBsihNgasHt23afVz+nCEeM00c+zgY8SqLI3dG8hsaZB1Zj4ZbQMCV+AggKnV+e9SMbasQqclaoKCYCBWcrBaj9nV6pjnERbWEtRxcRuJHd7Iql6zvv36/ghuPbWst9fiMWBIB5pOP4JyLg/iqqrC+AJpRVWxx0+Zq1ocWs6CXBUs6To5YX9YjvM7k/ykDFCo7FeZwggVKtY2jyn75RouEiHFEqWiiO6gp/cAI2FgXSmDPh/4MxaarpRdIooE2WccTvaEwV8dNS6LiEq2wZj2UmLP0z+JUS+VHCC3yHtChGxy1yHvQnIhuSm54gfhPG3umVx7ckLXwX8x8TnxMkTRlnkjZ1E1BeJ9kodiTOZt4gkhClfpl0G4mJHzrFcXlBboEslYBb3mYpT20ztJBUdKHOLFJnh9eqdURhEsCZ6Lt3kOwnHxwbK18LsMH128HFkzwfZ2imfle46xipT4kNrxzB09+iiKyr1US3Nzu7JxXkVvVmniBU7MvubYjRae80qne65P9MF/ufNFEMaOH4NyZK25MUqotO9Q7TasJy9WtrcjOsyg1JWOzvC18ukrotKVYbYGmZTvVqqwJXi6AuieYFqrKLFCb9aIlUasnLVYWfdQFixWLl2IaDLLw95YhChQtsQxRMi+jTkOAr9B7bCsixdgmciA+0IXBv+Tcl+oHb3CgSHk1A/iu3kHvo1TcMwZcjHcoakAZit8YqL4axgQ7Miu3WCwzD4GY0Rm/B8=</diagram></mxfile>
|
2212.04092/main_diagram/main_diagram.pdf
ADDED
|
Binary file (56 kB). View file
|
|
|
2212.04092/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,105 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Compositional reading comprehension datasets like HotpotQA [\(Yang et al.,](#page-11-0) [2018\)](#page-11-0) and DROP [\(Dua](#page-10-0) [et al.,](#page-10-0) [2019\)](#page-10-0) have inspired a range of model architectures that learn to answer complex questions with weak supervision from the final answer. One recent direction is to leverage large language models (LMs) to solve compositional tasks with very few examples by generating latent reasoning steps
|
| 4 |
+
|
| 5 |
+
<span id="page-0-0"></span>Q: What are all the field goals in first half?
|
| 6 |
+
|
| 7 |
+
> A: 12-yard, 42-yard and 33-yard
|
| 8 |
+
|
| 9 |
+
Q: What is the largest value in: 12-yard, 42-yard and 33-yard?
|
| 10 |
+
|
| 11 |
+
A: 42-yard
|
| 12 |
+
|
| 13 |
+
Q: Who kicked the 42-yard field goal?
|
| 14 |
+
|
| 15 |
+
A: Matt Bryant
|
| 16 |
+
|
| 17 |
+
There are no more questions left to ask. The final answer is Matt Bryant
|
| 18 |
+
|
| 19 |
+
Figure 1: Example decomposition used by Successive Prompting's question decomposition and question answering stage on a DROP example. The model iterates between predicting a simple question to ask and answering the simple question.
|
| 20 |
+
|
| 21 |
+
before answering the question [\(Wei et al.,](#page-11-1) [2022;](#page-11-1) [Nye et al.,](#page-10-1) [2021;](#page-10-1) [Karpas et al.,](#page-10-2) [2022\)](#page-10-2). Given a complex question, this approach first finds nearestneighbor training examples from a dataset of (question, reasoning, answer) triples and then concatenates them to create an input for the LM. A large LM is then prompted with this input to generate the intermediate reasoning steps needed, while answering the complex question in a single pass.
|
| 22 |
+
|
| 23 |
+
While promising, this approach discards many of the benefits of prior approaches to this task [\(Khot](#page-10-3) [et al.,](#page-10-3) [2021;](#page-10-3) [Karpas et al.,](#page-10-2) [2022\)](#page-10-2) by coupling the supervision for question decomposition to the supervision for performing the intermediate steps. Moreover, its non-modular nature does not allow using alternate symbolic reasoning engines in cases where they perform better than LMs. Additionally, the model gets exposed to only a single set of incontext examples, selected based on their proximity
|
| 24 |
+
|
| 25 |
+
to the complex question, which may not contain optimal supervision for the intermediate steps that need to be taken.
|
| 26 |
+
|
| 27 |
+
We propose "Successive Prompting", where we iteratively decompose the complex question into the next simple question to answer, answer it, and then repeat until the complex question is answered (Figure 1). Each of these steps is performed with separate a query to the LM. Since the decomposition and answering steps are performed separately, we can decouple the supervision of each step, providing two primary benefits. First, when performing in-context learning, we get multiple opportunities to select different in-context examples, which can be tailored to the particular decomposition or answering step being performed, instead of selecting a single set of examples based only on the complex question. Second, when fine-tuning (with or without in-context examples (Chen et al., 2022)), we can provide training examples for each step independently, so the model only has to learn to perform one step at a time.
|
| 28 |
+
|
| 29 |
+
This decoupling additionally allows us to judiciously inject synthetic data into the learning process, e.g., to help the model answer a particular kind of simple question that it could not previously answer, or a new reasoning composition it did not know how to decompose. Because the steps are separate, we can isolate model failures and develop synthetic approaches to fill in the gaps. It also allows us to replace the LM with other, purpose-built components to perform symbolic reasoning when appropriate (Khot et al., 2021; Segal et al., 2020; Jin et al., 2021).
|
| 30 |
+
|
| 31 |
+
We demonstrate the utility of successive prompting using a few-shot variant of the DROP dataset (Dua et al., 2019), selecting 300 examples for training (either fine-tuning or in-context example selection). These 300 examples are manually annotated with simple QA pairs as decompositions. We find that performance of all models is quite low in this few-shot setting, so we develop a synthetic data generator that produces complex questions with their decompositions from semi-structured Wikipedia tables (Yoran et al., 2021). This synthetic data provides not just complex question supervision, but also supervision for the intermediate steps. We augment this data with the 300 (complex) training examples and their decompositions from DROP. In this few-shot setting, our best performing successive prompting model shows a ~5% improvement in F1 when compared to state-of-the-art model on DROP. The code and data are available at https://github.com/dDua/succesive\_prompting
|
| 32 |
+
|
| 33 |
+
The goal of compositional question answering is to answer a complex question q in the context of a passage p (together denoted as x) by reasoning through latent sequential decisions $\mathbf{z} = z_1, z_2, ..., z_s$ to reach the final answer, y. Many models have been proposed to accomplish this with varying amounts of supervision and interpretability. In prompting methods like Chain-of-Thought (CoT, Wei et al., 2022) the latent steps are supervised, interpretable sentences; in other models these latent steps might be a program (Gupta et al., 2020; Chen et al., 2020) or even just the (unsupervised) hidden states in the model (Segal et al., 2020; Andor et al., 2019)
|
| 34 |
+
|
| 35 |
+
We focus on models that take in-context examples and produce a discrete, language-encoded $\mathbf{z}$ , with CoT being the primary exemplar. We write the general form for CoT, given an input x, a language model encoder $\mathbb{L}$ and N in-context examples obtained from querying an index I—each containing a triplet of passage with complex question $(x^n)$ , latent steps $(\mathbf{z}^n)$ and final answer $(y^n)$ —as follows:
|
| 36 |
+
|
| 37 |
+
$$y, z \leftarrow \mathbb{L}\left(x, \left\{\left(x^{n}, y^{n}, \mathbf{z}^{n}\right) \mid n \in [1, N]\right\}\right).$$
|
| 38 |
+
|
| 39 |
+
In successive prompting, we represent each latent step as a pair of simple question and answer, $z_k = (q_k, a_k)$ (see Figure 1 for example QA pairs) unlike CoT which represents each latent step as a declarative sentence. Moreover, CoT queries the index I for in-context examples and prompts the language model $\mathbb{L}$ for generating output only once. However, in successive prompting, we separate z into multiple question and answering steps, which gives us many opportunities to prompt L, with potentially different in-context examples that are more tailored to the simple question at each step. It also enables us to re-encode the context given the intermediate state $z_k$ , which can be useful in certain questions that need long chain referencing (e.g., the sort-count example in Figure 3). We can write a general form for successive prompting as follows:
|
| 40 |
+
|
| 41 |
+
$$\begin{aligned} q_1 &\leftarrow \mathbb{L}\left(x, \left\{ (x^n, q_1^n) \mid n \in [1, N] \right\} \right) \\ a_1 &\leftarrow \mathbb{L}\left(p, q_1, \left\{ (p_*^m, q_*^m, a_*^m) \mid m \in [1, M] \right\} \right) \end{aligned}$$
|
| 42 |
+
|
| 43 |
+
<span id="page-2-0"></span>
|
| 44 |
+
|
| 45 |
+
Figure 2: A demonstration of successive prompting with in-context learning. The selected examples for supervision and complex question to be answered pre-pended with the context paragraph (omitted to simplify illustration) are encoded by the model to generate question and answer at QD and QA stage respectively. During fine-tuning, only training supervision is used in an i.i.d manner for learning QD and QA models.
|
| 46 |
+
|
| 47 |
+
$$q_{2} \leftarrow \mathbb{L}\left(x, q_{1}, a_{1}, \left\{\left(x^{n}, q_{1}^{n}, a_{1}^{n}, q_{2}^{n}\right) \mid n \in [1, N]\right\}\right)$$
|
| 48 |
+
|
| 49 |
+
$$a_{2} \leftarrow \mathbb{L}\left(p, q_{2}, \left\{\left(p_{*}^{m}, q_{*}^{m}, a_{*}^{m}\right) \mid m \in [1, M]\right\}\right)$$
|
| 50 |
+
|
| 51 |
+
$$\dots$$
|
| 52 |
+
|
| 53 |
+
$$y \leftarrow \mathbb{L}\left(x, \mathbf{z}, \left\{\left(x^{n}, y^{n}, \mathbf{z}^{n}\right) \mid n \in [1, N]\right\}\right)$$
|
| 54 |
+
|
| 55 |
+
There are three kinds of model outputs in this general form: intermediate questions $q_k$ , intermediate answers $a_k$ , and the final answer y. We refer to the first kind of output as *question decomposition* (QD) and the second kind as *question answering* (QA). We treat final answer prediction as a special case of question decomposition, where the model decides that no more decomposition is necessary and outputs a final answer, so we iteratively alternate between question decomposition and question answering until the model terminates.
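The alternation between QD and QA can be captured in a few lines of Python; `lm`, `build_qd_prompt`, and `build_qa_prompt` are hypothetical callables standing in for the language model and the prompt construction described above, and the stop phrase follows Figure 1. This is a sketch of the control flow only, not the paper's code.

```python
STOP = "There are no more questions left to ask"

def successive_prompting(lm, passage, complex_q,
                         build_qd_prompt, build_qa_prompt, max_steps=8):
    """Alternate between question decomposition (QD) and question answering (QA)
    until the QD step emits the stop phrase with a final answer."""
    chain = []  # accumulated simple (question, answer) pairs z_1 ... z_k
    for _ in range(max_steps):
        # QD: predict the next simple question given the decomposition so far.
        q_k = lm(build_qd_prompt(passage, complex_q, chain))
        if STOP in q_k:
            # e.g. "There are no more questions left to ask. The final answer is X"
            return q_k.split("The final answer is")[-1].strip(), chain
        # QA: answer the simple question, possibly with different demonstrations.
        a_k = lm(build_qa_prompt(passage, q_k))
        chain.append((q_k, a_k))
    return None, chain  # step budget exhausted without termination
```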
|
| 56 |
+
|
| 57 |
+
We have so far described successive prompting in a setting where only in-context examples are given, so no model training is performed. However, successive prompting can also be used in conjunction with model fine-tuning, where each intermediate output is treated as a training example for $\mathbb{L}$. In this section, we first describe how in-context examples are selected at every step, followed by detailing how these examples are used for model fine-tuning.
|
| 60 |
+
|
| 61 |
+
**In-context Learning** During in-context learning, a small number of training examples are provided directly in the prompt that is given to a large LM, before the test input. These examples are selected from an index based on their similarity with the test input. For successive prompting, we create two indices: $I_D$ , for looking-up relevant demonstrations for QD, and $I_A$ , for looking-up relevant demonstrations for QA. The index $I_D$ contains partially decomposed chains at each step k, demonstrating the next question $q_k$ to be produced for every complex question in the training data. The index $I_A$ contains all the simple QA pairs in the training data from all the complex questions.
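
A toy sketch of such an index is shown below; it simply ranks stored demonstrations by cosine similarity of their key embeddings, with `embed` being any off-the-shelf sentence-embedding function (an assumption, since the paper does not prescribe a specific retriever).

```python
import numpy as np

class ExampleIndex:
    """Toy demonstration index: stores (key, example) pairs and returns the
    examples whose key embeddings are most similar to the query."""
    def __init__(self, embed, keys, examples):
        self.embed = embed                                   # text -> 1-D np.ndarray
        self.examples = examples
        self.mat = np.stack([embed(k) for k in keys])        # (num_examples, dim)

    def query(self, text, top_n=4):
        q = self.embed(text)
        sims = self.mat @ q / (np.linalg.norm(self.mat, axis=1)
                               * np.linalg.norm(q) + 1e-9)   # cosine similarity
        return [self.examples[i] for i in np.argsort(-sims)[:top_n]]
```

For $I_D$, the keys would be the training complex questions at each decomposition step, with partially decomposed chains as the stored examples; for $I_A$, the keys and examples are the individual simple QA pairs.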
|
| 62 |
+
|
| 63 |
+
In the QD stage, the index $I_D$ is queried with the complex test question $q$ and the current step number $k$ to select demonstrations regarding how to generate the next question for the held-out example. In the QA stage, the index $I_A$ is queried with the simple question $q_k$ generated during QD to select relevant simple QA pairs. Figure 2 shows a demonstration of how in-context learning is executed step-by-step in each stage until QD outputs the special phrase "There are no more questions left to ask", along with a final answer.
|
| 66 |
+
|
| 67 |
+
Successive prompting allows the QA stage to access simple questions derived from complex questions that would not have been retrieved by Chain-of-Thought prompting because, on the surface, they are not similar to the held-out complex question, even though they share similar sub-questions.
|
| 68 |
+
|
| 69 |
+
Model Fine-tuning For model fine-tuning, we use T5 (Raff[el et al.,](#page-10-6) [2020\)](#page-10-6) based sequence-tosequence models. Such models are typically trained with control codes in a multi-task setting [\(Ma et al.,](#page-10-7) [2021;](#page-10-7) [Rajagopal et al.,](#page-10-8) [2022\)](#page-10-8) to switch between QD and QA tasks with shared model parameters. We adapt and extend the control codes introduced by text modular networks (TMNs, [Khot et al.,](#page-10-3) [2021\)](#page-10-3) for training with our synthetic data. TMNs are limited in terms of the operations they can handle as they do not go beyond first order reasoning. We use synthetically generated data, which allows us to deal with higher-order reasoning questions in DROP. Because we are fine-tuning the model, we can use special tokens to denote question decomposition and other separators, instead of the natural language prompts shown in Figure [2,](#page-2-0) though the content is the same. The specific tokens used for each step are listed in Appendix A.
|
| 70 |
+
|
| 71 |
+
Specialized Modules Successive prompting also allows us to use specialized sub-modules for solving different QA tasks because we no longer perform QD and QA in an end-to-end manner. Solving arithmetic operations like counting, difference, sorting, etc., can be challenging for language models. As a result, we follow [Khot et al.](#page-10-3) [\(2021\)](#page-10-3) and construct a simple mathematical sub-module for QA which parses the generated simple question for symbolic operation type and its arguments and then executes them in a deterministic way. If the generated simple question cannot be parsed as a mathematical operation, we apply the language model to solve it.
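
The deterministic sub-module can be as simple as pattern-matching the symbolic operations shown in Figure 3; the sketch below covers only `count` and `top` as examples and is an assumption about one possible implementation, not the authors' code.

```python
import re

def try_symbolic(question):
    """If the generated simple question is a symbolic operation, execute it
    deterministically; otherwise return None so the language model handles it."""
    q = question.strip()
    m = re.match(r"count\((.+)\)$", q)
    if m:
        items = [x.strip() for x in m.group(1).split(";") if x.strip()]
        return str(len(items))
    m = re.match(r"top\((\d+),\s*(.+)\)$", q)
    if m:
        k = int(m.group(1))
        values = sorted((float(v) for v in m.group(2).split(";")), reverse=True)
        v = values[k - 1]
        return str(int(v)) if v.is_integer() else str(v)
    return None  # not a symbolic question -> fall back to the LM
```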
|
| 72 |
+
|
| 73 |
+
Any method that prompts LMs to produce intermediate reasoning steps to answer complex questions needs some amount of supervision for those reasoning steps. This kind of annotation can be expensive to collect and often requires expert knowledge.
|
| 74 |
+
|
| 75 |
+
<span id="page-3-0"></span>
|
| 76 |
+
|
| 77 |
+
| Round | Date | Opponent | Venue | Attendance |
|-------|------|----------|-------|------------|
| R2 1st Leg | 26 Sep 1990 | Walsall | A | 5,666 |
| QFR | 23 Oct 1990 | Liverpool | H | 18,246 |
| SF 1st Leg | 24 Feb 1991 | Sheffield Wed. | H | 14,074 |
| SF 2nd Leg | 27 Feb 1991 | Oxford United | A | 34,669 |
| QFR | 23 Jan 1991 | Portsmouth | A | 33,861 |
|
| 84 |
+
|
| 85 |
+
Table 1: Example table from Wikipedia where rows become sentences and columns are used for question generation (used as context for Figure [3\)](#page-4-0).
|
| 86 |
+
|
| 87 |
+
Prior work has typically relied on a small handful of manually-written example decompositions. We find that such small collections lead to very poor performance on a dataset as varied as DROP, even for large models.
|
| 88 |
+
|
| 89 |
+
To mitigate these data issues, we propose a way to synthetically generate complex questions and their decompositions using semi-structured data which is easy to parse. We show that we can bootstrap model learning with this out-of-domain, synthetically generated data so it can adapt better when fine-tuned with limited in-domain supervision.
|
| 90 |
+
|
| 91 |
+
Generation Process: Inspired by [Yoran et al.](#page-11-3) [\(2021\)](#page-11-3), we use semi-structured data from tables in English Wikipedia which are available in plenty. We employ curated templates to convert the rows in the tables into paragraphs. We use single column headers to create first order simple questions and a combination of columns for higher order complex questions. We synthesize data for 10 simple operations: COUNT, TOP(k), BOTTOM(k), FILTER, SUM, COMPARISON, DIFFERENCE, NEGATION, GATHER, and INTERSECTION.
|
| 92 |
+
|
| 93 |
+
We generate higher-order combinations of first-order operations, wherever possible. Figure [3](#page-4-0) shows examples of higher-order combinations of the atomic operation COUNT with a few other simple operations using Table [1](#page-3-0) as context. The complete list of all decompositions is provided in Appendix A. Depending on the model, we use either the symbolic or the natural language version of the arithmetic operations. If we are using an LM to perform arithmetic operations, we output natural language; if we are using a separate symbolic reasoning engine, we output symbolic operations. We generate approximately 141K total complex questions which result in 525K examples for QD and 257K examples for QA. See Appendix A for more dataset statistics.
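
As an illustration of the template-based generation, a toy COUNT instance over a parsed table might look like the following sketch; the function and its output format are assumptions for exposition, not the generator used in the paper.

```python
def count_question(table, column):
    """Instantiate the COUNT template over a parsed table, given as a dict
    mapping column names to lists of cell values."""
    values = list(dict.fromkeys(table[column]))      # unique values, order kept
    listed = "; ".join(values)
    return {
        "complex": f"How many {column.lower()}s were there?",
        "decomposition": [
            (f"What are all the {column.lower()}s?", listed),
            (f"count({listed})", str(len(values))),
        ],
        "answer": str(len(values)),
    }

# count_question({"Opponent": ["Walsall", "Liverpool", "Sheffield Wed.",
#                              "Oxford United", "Portsmouth"]}, "Opponent")
# -> "How many opponents were there?" with two decomposition steps and answer "5"
```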
|
| 94 |
+
|
| 95 |
+
<span id="page-4-0"></span>
|
| 96 |
+
|
| 97 |
+
| Reasoning | Complex Question and Decomposition (Question [Natural Language or Symbolic], Answer) |
|
| 98 |
+
|-----------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
| 99 |
+
| Count | How many opponents were there?<br>• Q: What are all the opponents?<br>Ans: Walsall; Liverpool; Sheffield Wed.; Oxford United;<br>Portsmouth |
|
| 100 |
+
| | • Q: count(Walsall; Portsmouth; Sheffield Wed.; Oxford United; Portsmouth)<br>Ans: 5<br>– Q: How many items are in the list: Walsall, Liverpool, Sheffield Wed. and<br>Oxford United, Portsmouth? |
|
| 101 |
+
| Higher order decompositions | |
|
| 102 |
+
| Sort-Count | Which venue had the most number of opponents?<br>• Q: What are all the venues?<br>Ans: A; H<br>• Q: What are opponents when venue was A?<br>Ans: Walsall; Oxford United; Portsmouth<br>• Q: count(Walsall; Oxford United; Portsmouth)<br>Ans: 3<br>• Q: What are opponents when venue was H?<br>Ans: Liverpool; Sheffield Wed.<br>• Q: count(Liverpool; Sheffield Wed.)<br>Ans: 2<br>• Q: top(1, 2;3) Ans: 3<br>– Q: What is the largest value in: 2 and 3?<br>• Q: Which venue has 3 opponents? Ans: A |
|
| 103 |
+
| Comparison-Count | Which round had more venues: SF 1st Leg or QFR??<br>• Q: What are the rounds when venue was A?<br>Ans: R2 1st Left; SF 2nd Leg; QFR<br>• count(R2 1st Left; SF 2nd Leg; QFR)<br>Ans: 3<br>• Q: What are the rounds when venue was H?<br>Ans: QFR; SF 1st Leg<br>• count(QFR; SF 1st Leg)<br>Ans: 2<br>• if_then(1 > 2; SF 1st Leg; QFR)<br>Ans: QFR<br>– Q: If 1 > 2 then answer is SF 1st Leg else it is QFR |
|
| 104 |
+
|
| 105 |
+
Figure 3: Examples of COUNT operation and some of its higher order combinations, with natural language and symbolic decompositions of the complex question. Underneath the first instance of a symbolic operation we show its corresponding natural language version. See Table [1](#page-3-0) for the original table used to generate context and questions.
|
2301.09249/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.09249/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,78 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
LiDAR-based 3D object detection plays an indispensable role in 3D scene understanding with a wide range of applications such as autonomous driving [@DBLP:conf/nips/DengQNFZA21; @DBLP:conf/eccv/WangLGD20] and robotics [@DBLP:conf/iros/AhmedTCMW18; @DBLP:conf/iros/MontesLCD20; @DBLP:journals/sensors/WangLSLZSQT19]. The emerging stream of 3D detection models enables accurate recognition at the cost of large-scale labeled point clouds, where 7-degree of freedom (DOF) 3D bounding boxes - consisting of a position, size, and orientation information- for each object are annotated. In the benchmark datasets like Waymo [@DBLP:conf/cvpr/SunKDCPTGZCCVHN20], there are over 12 million LiDAR boxes, for which, labeling a precise 3D box takes more than 100 seconds for an annotator [@DBLP:conf/cvpr/SongLX15]. This prerequisite for the performance boost greatly hinders the feasibility of applying models to the wild, especially when the annotation budget is limited.
|
| 4 |
+
|
| 5 |
+
To alleviate this limitation, active learning (AL) aims to reduce labeling costs by querying labels for only a small portion of unlabeled data. The criterion-based query selection process iteratively selects the most beneficial samples for the subsequent model training until the labeling budget is run out. The criterion is expected to quantify the sample informativeness using the heuristics derived from *sample uncertainty* [@DBLP:conf/icml/GalIG17; @DBLP:conf/iccv/DuZCC0021; @DBLP:conf/cvpr/CaramalauBK21; @DBLP:conf/cvpr/YuanWFLXJY21; @DBLP:conf/iccv/ChoiELFA21; @DBLP:conf/cvpr/Zhang0YWZH20; @DBLP:conf/ijcai/ShiL19] and *sample diversity* [@DBLP:conf/iclr/MaZMS21; @DBLP:conf/cvpr/GudovskiyHYT20; @DBLP:conf/eccv/GaoZYADP20; @DBLP:conf/iccv/SinhaED19; @DBLP:conf/nips/Pinsler0NH19]. In particular, uncertainty-driven approaches focus on the samples that the model is the least confident of their labels, thus searching for the candidates with: maximum entropy [@DBLP:journals/neco/MacKay92b; @DBLP:journals/bstj/Shannon48; @DBLP:conf/nips/KimSJM21; @DBLP:conf/cvpr/SiddiquiVN20; @DBLP:conf/nips/Shi019], disagreement among different experts [@DBLP:conf/nips/FreundSST92; @DBLP:conf/icml/TranDRC19], minimum posterior probability of a predicted class [@DBLP:journals/tcsv/WangZLZL17], or the samples with reducible yet maximum estimated error [@DBLP:conf/icml/RoyM01; @DBLP:conf/cvpr/YooK19; @DBLP:conf/cvpr/KimPKC21]. On the other hand, diversity-based methods try to find the most representative samples to avoid sample redundancy. To this end, they form subsets that are sufficiently diverse to describe the entire data pool by making use of the greedy coreset algorithms [@DBLP:conf/iclr/SenerS18], or the clustering algorithms [@DBLP:conf/icml/NguyenS04]. Recent works [@DBLP:conf/iccv/LiuDZLDH21; @DBLP:conf/nips/CitovskyDGKRRK21; @DBLP:conf/nips/KirschAG19; @DBLP:journals/corr/abs-1112-5745] combine the aforementioned heuristics: they measure uncertainty as the gradient magnitude of samples [@DBLP:conf/iclr/AshZK0A20] or its second-order metrics [@DBLP:conf/iccv/LiuDZLDH21] at the final layer of neural networks, and then select samples with gradients spanning a diverse set of directions. While effective, the hybrid approaches commonly cause heavy computational overhead, since gradient computation is required for each sample in the unlabeled pool. Another stream of works apply active learning to 2D/3D object detection tasks[ [@DBLP:conf/ivs/FengWRMD19; @DBLP:conf/ivs/SchmidtRTK20; @wang2022weaklySupervisedObject; @DBLP:conf/cvpr/WuC022; @9548667]]{style="color: black"}, by leveraging ensemble [@DBLP:conf/cvpr/BeluchGNK18] or Monte Carlo (MC) dropout [@DBLP:conf/icml/GalG16] algorithms to estimate the classification and localization uncertainty of bounding boxes for images/point clouds acquisition (more details in [Appendix I]{style="color: black"}). Nevertheless, those AL methods generally favor the point clouds with more objects, which have a higher chance of containing uncertain and diverse objects. With a fixed annotation budget, it is far from optimal to select such point clouds, since more clicks are required to form 3D box annotations.
|
| 6 |
+
|
| 7 |
+
To overcome the above limitations, we propose to learn AL criteria for cost-efficient sample acquisition at the 3D box level by empirically studying its relationship with optimizing the generalization upper bound. Specifically, we propose three selection criteria for cost-effective point cloud acquisition, termed as [Crb]{.smallcaps}, *i.e.,* *label [c]{.underline}onciseness*, *feature [r]{.underline}epresentativeness* and *geometric [b]{.underline}alance*. Specifically, we divide the sample selection process into three stages: (1) To alleviate the issues of label redundancy and class imbalance, and to ensure *label conciseness*, we firstly calculate the entropy of bounding box label predictions and only pick top $\mathcal{K}_1$ point clouds for Stage 2; (2) We then examine the *feature representativeness* of candidates by formulating the task as the $\mathcal{K}_2$-medoids problem on the gradient space. To jointly consider the impact of classification and regression objectives on gradients, we enable the Monte Carlo dropout ([Mc-dropout]{.smallcaps}) and construct the hypothetical labels by averaging predictions from multiple stochastic forward passes. (3) Finally, to maintain the *geometric balance* property, we minimize the KL divergence between the marginal distributions of point cloud density of each predicted bounding box. This makes the trained detector predict more accurate localization and size of objects, and recognize both close (*i.e.*, dense) and distant (*i.e.*, sparse) objects at the test time, using minimum number of annotations. We base our criterion design on our theoretical analysis of optimizing the upper bound of the generalization risk, which can be reformulated as distribution alignment of the selected subset and the test set. Note that since the empirical distribution of the test set is not observable during training, WLOG, we make an appropriate assumption of its prior distribution.
|
| 8 |
+
|
| 9 |
+
**Contributions**. Our work is a pioneering study in active learning for 3D object detection, aiming to boost the detection performance at the **lowest cost of bounding box-level annotations**. To this end, we propose a hierarchical active learning scheme for 3D object detection, which progressively filters candidates according to the derived selection criteria without triggering heavy computation. Extensive experiments conducted demonstrate that the proposed [Crb]{.smallcaps} strategy can consistently outperform all the state-of-the-art AL baselines on two large-scale 3D detection datasets irrespective of the detector architecture. To enhance the reproducibility of our work and accelerate future work in this new research direction, we develop a `active-3D-det` toolbox, which accommodates various AL approaches and 3D detectors. The source code is available in the supplementary material, and will be publicly shared upon acceptance of the paper.
|
| 10 |
+
|
| 11 |
+
# Method
|
| 12 |
+
|
| 13 |
+
In this section, we mathematically formulate the problem of active learning for 3D object detection and set up the notations. Given an orderless LiDAR point cloud $\mathcal{P} = \{x, y, z, e\}$ with 3D location $(x, y, z)$ and reflectance $e$, the goal of 3D object detection is to localize the objects of interest as a set of 3D bounding boxes $\mathcal{B} = \{b_k\}_{k\in[N_B]}$ with $N_B$ indicating the number of detected bounding boxes, and predict the associated box labels $Y = \{y_k\}_{k\in[N_B]} \in\mathcal{Y} = \{1,\ldots,C\}$, with $C$ being the number of classes to predict. Each bounding box $b$ represents the relative center position $(p_x, p_y, p_z)$ to the object ground planes, the box size $(l, w, h)$, and the heading angle $\theta$. Mainstream 3D object detectors [use point clouds $\mathcal{P}$ to extract point-level features $\bm{x}\in\mathbb{R}^{W\cdot L\cdot F}$ ]{style="color: black"}[@DBLP:conf/cvpr/ShiWL19; @DBLP:conf/iccv/YangS0SJ19; @DBLP:conf/cvpr/YangS0J20] or by voxelization [@DBLP:conf/cvpr/ShiGJ0SWL20], with $W$, $L$, $F$ representing width, length, and channels of the feature map. The feature map $\bm{x}$ is passed to a classifier $f(\cdot; \bm{w}_{f})$ parameterized by $\bm{w}_{f}$ and regression heads $g(\cdot; \bm{w}_{g})$ (*e.g.,* box refinement and ROI regression) parameterized by $\bm{w}_{g}$. The output of the model is the detected bounding boxes $\widehat{\mathcal{B}} = \{\hat{b}_k\}$ with the associated box labels $\widehat{Y}=\{\hat{y}_k\}$ from anchored areas. [The loss functions $\ell^{cls}$ and $\ell^{reg}$ for classification (*e.g.*, regularized cross entropy loss [@DBLP:journals/corr/abs-1808-09540]) and regression (*e.g.*, mean absolute error/$L_1$ regularization [@DBLP:journals/spl/QiDSML20]) are assumed to be Lipschitz continuous.]{style="color: black"} As shown in the left half of Figure [1](#fig:flowchart){reference-type="ref" reference="fig:flowchart"}, in an active learning pipeline, a small set of labeled point clouds $\mathcal{D}_L=\{(\mathcal{P}, \mathcal{B}, Y)_i\}_{i\in[m]}$ and a large pool of raw point clouds $\mathcal{D}_U=\{(\mathcal{P})_j\}_{j\in[n]}$ are provided at training time, with $n$ and $m$ being a total number of point clouds and $m\ll n$. For each active learning round $r\in[R]$, and based on the criterion defined by an active learning policy, we select a subset of raw data $\{\mathcal{P}_j\}_{j\in[N_r]}$ from $\mathcal{D}_U$ and query the labels of 3D bounding boxes from an oracle $\bm{\Omega}: \mathcal{P}\rightarrow \mathcal{B}\times\mathcal{Y}$ to construct $\mathcal{D}_S=\{(\mathcal{P}, \mathcal{B}, Y)_j\}_{j\in[N_r]}$. The 3D detection model is pre-trained with $\mathcal{D}_L$ for active selection, and then retrained with $\mathcal{D}_{S}\cup \mathcal{D}_L$ until the selected samples reach the final budget $B$, *i.e.,* $\sum_{r=1}^{R}N_{r} = B$.
|
| 14 |
+
|
| 15 |
+
<figure id="fig:flowchart" data-latex-placement="!t">
|
| 16 |
+
<embed src="Figures/flowchart.pdf" style="width:100.0%" />
|
| 17 |
+
<figcaption>An illustrative flowchart of the proposed <span class="smallcaps">Crb</span> framework for active selection of point clouds. Motivated by optimizing the generalization risk, the derived strategy hierarchically selects point clouds that have non-redundant bounding box labels, latent gradients and geometric characteristics to mitigate the gap with the test set and minimize annotation costs.</figcaption>
|
| 18 |
+
</figure>
|
| 19 |
+
|
| 20 |
+
The core question of active 3D detection is how to design a proper criterion, based on which a fixed number of unlabeled point clouds can be selected to achieve minimum empirical risk $\mathfrak{R}_T[\ell(f, g;\bm{w})]$ on the test set $\mathcal{D}_T$ and minimum annotation time. Below, inspired by [@DBLP:conf/colt/MansourMR09; @DBLP:journals/ml/Ben-DavidBCKPV10], we derive the following **generalization bound** for active 3D detection so that the desired acquisition criteria can be obtained by optimizing the generalization risk.
|
| 21 |
+
|
| 22 |
+
::: theorem
|
| 23 |
+
[]{#theo:crb label="theo:crb"} Let $\mathcal{H}$ be a hypothesis space of Vapnik-Chervonenkis (VC) dimension $d$, with $f$ and $g$ being the classification and regression branches, respectively. The $\widehat{\mathcal{D}}_{S}$ and $\widehat{\mathcal{D}}_T$ represent the empirical distribution induced by samples drawn from the acquired subset $\mathcal{D}_S$ and the test set $\mathcal{D}_T$, and $\ell$ the loss function bounded by $\mathcal{J}$. It is proven that $\forall$ $\delta \in (0, 1)$, and $\forall f, g\in \mathcal{H}$, with probability at least $1-\delta$ the following inequality holds, $$\begin{align*}
|
| 24 |
+
\mathfrak{R}_T[\ell(f, g;\bm{w})]\leq \mathfrak{R}_S[\ell(f, g;\bm{w})] + \frac{1}{2}disc(\widehat{\mathcal{D}}_{S}, \widehat{\mathcal{D}}_T) +\lambda^* + \text{const},
|
| 25 |
+
\end{align*}$$ where $\text{const} = 3 \mathcal{J} (\sqrt{\frac{\log \frac{4}{\delta}}{2 N_r}} + \sqrt{\frac{\log \frac{4}{\delta}}{2 N_t}}) + \sqrt{\frac{2d \log(e N_r/d)}{N_r}} + \sqrt{\frac{2d \log(e N_t/d)}{N_t}}$.
|
| 26 |
+
|
| 27 |
+
Notably, $\lambda^* = \mathfrak{R}_T[\ell(f^*, g^*; \bm{w}^*)] + \mathfrak{R}_S[\ell(f^*, g^*; \bm{w}^*)]$ denotes the joint risk of the optimal hypothesis $f^*$ and $g^*$, with $\bm{w}^*$ being the model weights. $N_r$ and $N_t$ indicate the number of samples in the $\mathcal{D}_S$ and $\mathcal{D}_T$. The proof can be found in the supplementary material.
|
| 28 |
+
:::
|
| 29 |
+
|
| 30 |
+
::: remark
|
| 31 |
+
The first term indicates the training error on the selected subsets, which is assumed to be trivial based on the zero training assumption [@DBLP:conf/iclr/SenerS18]. To obtain a tight upper bound of the generalization risk, the **optimal subset** $\mathcal{D}_S^*$ can be determined via minimizing the discrepancy distance of empirical distribution of two sets, *i.e.,* $$\begin{equation*}
|
| 32 |
+
\mathcal{D}^*_S = \argmin_{\mathcal{D}_S\subset\mathcal{D}_U}disc(\widehat{\mathcal{D}}_{S}, \widehat{\mathcal{D}}_T).
|
| 33 |
+
\end{equation*}$$ Below, we define the discrepancy distance for the 3D object detection task.
|
| 34 |
+
:::
|
| 35 |
+
|
| 36 |
+
::: definition
|
| 37 |
+
For any $f, g, f', g'\in \mathcal{H}$, the discrepancy between the distribution of the selected sets $\mathcal{D}_S$ and unlabeled pool $\mathcal{D}_T$ can be formulated as, $$\begin{align*}
|
| 38 |
+
disc(\widehat{\mathcal{D}}_{S}, \widehat{\mathcal{D}}_T) &= \sup_{f, f'\in\mathcal{H}}|\mathbb{E}_{\widehat{\mathcal{D}}_{S}}\ell(f, f') - \mathbb{E}_{\widehat{\mathcal{D}}_T}\ell(f, f')| + \sup_{g, g'\in\mathcal{H}}|\mathbb{E}_{\widehat{\mathcal{D}}_{S}}\ell(g, g') - \mathbb{E}_{\widehat{\mathcal{D}}_T}\ell(g, g')|,
|
| 39 |
+
\end{align*}$$ where the bounded expected loss $\ell$ for any classification and regression functions are symmetric and satisfy the triangle inequality.
|
| 40 |
+
:::
|
| 41 |
+
|
| 42 |
+
::: remark
|
| 43 |
+
As 3D object detection is naturally an integration of classification and regression tasks, mitigating the set discrepancy is basically aligning the inputs and outputs of each branch. Therefore, with the detector freezed during the active selection, finding an optimal $\mathcal{D}^*_S$ can be interpreted as enhancing the acquired set's (1) **Label Conciseness**: aligning marginal label distribution of bounding boxes, (2) **Feature Representativeness**: aligning marginal distribution of the latent representations of point clouds, and (3) **Geometric Balance**: aligning marginal distribution of geometric characteristics of point clouds and predicted bounding boxes, and can be written as:
|
| 44 |
+
|
| 45 |
+
$$\begin{equation}
|
| 46 |
+
\label{eq:optimal}
|
| 47 |
+
\mathcal{D}^*_S \approx \argmin_{\mathcal{D}_S\subset\mathcal{D}_U} \underbrace{
|
| 48 |
+
d_{\mathcal{A}}(P_{\widehat{Y}_S}, P_{Y_T})}_{\text{Conciseness}} +\underbrace{d_{\mathcal{A}}(P_{X_S}, P_{X_T})}_{\text{Representativeness}} + \underbrace{d_{\mathcal{A}}(P_{\phi(\mathcal{P}_S, \strut\widehat{\mathcal{B}}_S)}, P_{\phi(\mathcal{P}_T, \mathcal{B}_T)})}_{\text{Balance}}.
|
| 49 |
+
\end{equation}$$ Here, [$\mathcal{P}_S$ and $\mathcal{P}_T$ represent the point clouds in the selected set and the ones in the test set.]{style="color: black"} $\phi(\cdot)$ indicates the geometric descriptor of point clouds and $d_{\mathcal{A}}$ distance [@DBLP:conf/vldb/KiferBG04] which can be estimated by a finite set of samples. For latent features $X_S$ and $X_T$, we only focus on the features that differ from the training sets, since $\mathbb{E}_{\widehat{D}_L}\ell^{cls}=0$ and $\mathbb{E}_{\widehat{D}_L}\ell^{reg} = 0$ based on the zero training error assumption. Considering that test samples and their associated labels are not observable during training, we make an assumption on the prior distributions of test data. WLOG, we assume that the prior distribution of bounding box labels and geometric features are uniform. Note that we can adopt the KL-divergence for the implementation of $d_{\mathcal{A}}$ assuming that latent representations follow the univariate Gaussian distribution.
|
| 50 |
+
:::
|
| 51 |
+
|
| 52 |
+
**Connections with existing AL approaches.** The proposed criteria jointly optimize the discrepancy distance for both tasks with three objectives, which shows the connections with existing AL strategies. The uncertainty-based methods focus strongly on the first term, based on the assumption that learning more difficult samples will help to improve the suprema of the loss. This rigorous assumption can result in a bias towards hard samples, which will be accumulated and amplified across iterations. Diversity-based methods put more effort into minimizing the second term, aiming to align the distributions in the latent subspace. However, the diversity-based approaches are unable to discover the latent features specified for regression, which can be critical when dealing with a detection problem. We introduce the third term for the 3D detection task, motivated by the fact that aligning the geometric characteristics of point clouds helps to preserve the fine-grained details of objects, leading to more accurate regression. Our empirical study provided in Sec. [3.3](#sec:ablaton){reference-type="ref" reference="sec:ablaton"} suggests jointly optimizing three terms can lead to the best performance.
|
| 53 |
+
|
| 54 |
+
To optimize the three criteria outlined in Eq. [\[eq:optimal\]](#eq:optimal){reference-type="ref" reference="eq:optimal"}, we derive an AL scheme consisting of three components. In particular, to reduce the computational overhead, we hierarchically filter the samples that meet the selection criteria (illustrated in Fig. [1](#fig:flowchart){reference-type="ref" reference="fig:flowchart"}): we first pick $\mathcal{K}_1$ candidates by concise label sampling (**Stage 1**), from which we select $\mathcal{K}_2$ representative prototypes (**Stage 2**), with $\mathcal{K}_1, \mathcal{K}_2 << n$. Finally, we leverage greedy search (**Stage 3**) to find the $N_r$ prototypes that match with the prior marginal distribution of test data. The hierarchical sampling scheme can save $\mathcal{O}((n-\mathcal{K}_1)T_2 + (n-\mathcal{K}_2)T_3)$ cost, with $T_2$ and $T_3$ indicating the runtime of criterion evaluation. The algorithm is summarized in the supplemental material. In the following, we describe the details of the three stages.
|
| 55 |
+
|
| 56 |
+
**Stage 1: Concise Label Sampling ([Cls]{.smallcaps}).**[]{#sec:cls label="sec:cls"} By using *label conciseness* as a sampling criterion, we aim to alleviate label redundancy and align the source label distribution with the target prior label distribution. Particularly, we find a subset $\mathcal{D}^*_{S_1}$ of size $\mathcal{K}_1$ that minimizes Kullback-Leibler (KL) divergence between the probability distribution $P_{Y_S}$ and the uniform distribution $P_{Y_T}$. To this end, we formulate the KL-divergence with Shannon entropy $H(\cdot)$ and define an optimization problem of maximizing the entropy of the label distributions: $$\begin{align}
|
| 57 |
+
&D_{KL}\infdivx{P_{\widehat{Y}_{S_1}}}{P_{Y_T}} = -H(\widehat{Y}_{S_1}) + \log |\widehat{Y}_{S_1}|,\\
|
| 58 |
+
&\mathcal{D}^{*}_{S_1} = \argmin_{\mathcal{D}_{S_1}\subset\mathcal{D}_U} D_{KL}\infdivx{P_{\widehat{Y}_{S_1}}}{P_{Y_T}} =\argmax_{\mathcal{D}_{S_1}\subset\mathcal{D}_U} H(\widehat{Y}_{S_1}),
|
| 59 |
+
%
|
| 60 |
+
\end{align}$$ where $\log |\widehat{Y}_{S_1}| = log \mathcal{K}_1$ indicates the number of values $Y_{S_1}$ can take on, which is a constant. Note that $P_{Y_T}$ is a uniform distribution, and we removed the constant values from the formulations. We pass all point clouds $\{(\mathcal{P})_j\}_{i\in[n]}$ from the unlabeled pool to the detector and extract the predictive labels $\{\hat{y}_i\}_{i=1}^{N_B}$ for $N_B$ bounding boxes, with $\hat{y}_i = \argmax_{y\in[C]} f(x_i; \bm{w}_f)$. The label entropy of the $j$-th point cloud $H(\widehat{Y}_{j, S})$ can be calculated as, $$\begin{align}
|
| 61 |
+
\quad H(\widehat{Y}_{j, S}) = -\sum_{c=1}^C\bm{p}_{i,c}\log \bm{p}_{i,c},\quad \bm{p}_{i,c} = \frac{e^{|{\hat{y}_i=c}|/ N_B}}{\sum^C_{c=1}e^{|{\hat{y}_i=c}|/ N_B}}.
|
| 62 |
+
\end{align}$$ Based on the calculated entropy scores, we filter out the top-$\mathcal{K}_1$ candidates and validate them through the **Stage 2** representative prototype selection.
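
A minimal sketch of this entropy score, following the softmax-over-normalized-class-counts formulation above, could look as follows (NumPy-based, illustrative only); the score is computed once per unlabeled point cloud and the top-$\mathcal{K}_1$ candidates are kept.

```python
import numpy as np

def label_entropy_score(pred_labels, num_classes):
    """Stage-1 CLS score for one point cloud: softmax over normalized per-class
    box counts, followed by Shannon entropy (higher = more balanced labels)."""
    counts = np.bincount(np.asarray(pred_labels), minlength=num_classes)  # |{y_hat = c}|
    logits = counts / max(len(pred_labels), 1)        # |{y_hat = c}| / N_B
    p = np.exp(logits) / np.exp(logits).sum()         # softmax as in the paper
    return float(-(p * np.log(p + 1e-12)).sum())
```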
|
| 63 |
+
|
| 64 |
+
**Stage 2: Representative Prototype Selection ([Rps]{.smallcaps}).**[]{#sec:rps label="sec:rps"} In this stage, we aim to to identify whether the subsets cover the *unique* knowledge encoded only in $\mathcal{D}_U$ and not in $\mathcal{D}_L$ by measuring the *feature representativeness* with gradient vectors of point clouds. Motivated by this, we find the representative prototypes on the gradient space $\mathcal{G}$ to form the subset $\mathcal{D}_{S_2}$, where magnitude and orientation represent the uncertainty and diversity of the new knowledge. For a classification problem, gradients can be retrieved by feeding the hypothetical label $\hat{y} = \argmax_{y\in[C]} \bm{p}(y|x)$ to the networks. However, the gradient extraction for regression problem is not explored yet in the literature, due to the fact that the hypothetical labels for regression heads cannot be directly obtained. To mitigate this, we propose to enable Monte Carlo dropout [(Mc-dropout)]{.smallcaps} at the **Stage 1**, and get the averaging predictions $\bar{B}$ of $M$ stochastic forward passes through the model as the hypothetical labels for regression loss: $$\begin{align}
|
| 65 |
+
&\bar{B} \approx \frac{1}{M}\sum_{i=1}^M g(\bm{x};\bm{w}_{d}, \bm{w}_{g}), \bm{w}_{d}\sim \texttt{Bernoulli}(1-p),\\
|
| 66 |
+
&G_{S_2} = \{\nabla_{\Theta} \ell^{reg}(g(\bm{x}), \bar{B};\bm{w}_g), \bm{x}\sim\mathcal{D}_{S_2}\},
|
| 67 |
+
\end{align}$$ with $p$ indicating the dropout rate, $\bm{w}_{d}$ the random variable of the dropout layer, and $\Theta$ the parameters of the convolutional layer of the shared block. The gradient maps $G_{S_2}\in\mathcal{G}$ can be extracted from shared layers and calculated by the chain rule. Since the gradients for test samples are not observable, we make an assumption that its prior distribution follows a Gaussian distribution, which allows us to rewrite the optimization function as, $$\begin{equation}
|
| 68 |
+
\label{eq:rps}
|
| 69 |
+
\begin{split}
|
| 70 |
+
\mathcal{D}_{S_2}^* &= \argmin_{\mathcal{D}_{S_2}\subset\mathcal{D}_{S_1}}D_{KL}\infdivx{P_{X_{S_2}}}{P_{X_T}} \approx \argmin_{\mathcal{D}_{S_2}\subset\mathcal{D}_{S_1}}D_{KL}\infdivx{P_{G_{S_2}}}{P_{G_T}}\\&= \argmin_{\mathcal{D}_{S_2}\subset\mathcal{D}_{S_1}} \log\frac{\sigma_T}{\sigma_{S_2}} +\frac{\sigma_{S_2}^2 + (\mu_{S_2} -\mu_T)}{2\delta^2_{T}} - \frac{1}{2} \approx \mathcal{K}_2\texttt{-medoids}(G_{S_1}),
|
| 71 |
+
\end{split}
|
| 72 |
+
\end{equation}$$ with $\mu_{S_2}$, $\sigma_{S_2}$ ($\mu_T$, and $\sigma_T$) being the mean and a standard deviation of the univariate Gaussian distribution of the selected set (test set), respectively. Based on Eq. [\[eq:rps\]](#eq:rps){reference-type="ref" reference="eq:rps"}, the task of finding a representative set can be viewed as picking $\mathcal{K}_2$ prototypes (*i.e.,* ${\mathcal{K}_2}$-medoids) from the clustered data, so that the centroids (mean value) of the selected subset and the test set can be naturally matched. The variance $\sigma_{S_2}$ and $\sigma_{T}$, basically, the distance of each point to its prototypes will be minimized simultaneously. We test different approaches for selecting prototypes in Sec. [3.3](#sec:ablaton){reference-type="ref" reference="sec:ablaton"}.
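
In place of an exact $\mathcal{K}_2$-medoids solver, a simple greedy approximation over the per-sample gradient vectors conveys the idea; the sketch below is an assumption about one possible implementation, not the authors' code.

```python
import numpy as np

def select_prototypes(grads, k2):
    """Greedy medoid-style selection of k2 prototypes from per-sample gradient
    vectors `grads` of shape (K1, d): at each step, add the candidate that most
    reduces the total distance of all samples to their nearest prototype."""
    dists = np.linalg.norm(grads[:, None, :] - grads[None, :, :], axis=-1)  # (K1, K1)
    chosen = []
    for _ in range(k2):
        best, best_cost = None, np.inf
        for i in range(len(grads)):
            if i in chosen:
                continue
            cost = np.minimum.reduce([dists[j] for j in chosen + [i]]).sum()
            if cost < best_cost:
                best, best_cost = i, cost
        chosen.append(best)
    return chosen  # indices of the K2 representative candidates
```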
|
| 73 |
+
|
| 74 |
+
**Stage 3: Greedy Point Density Balancing ([Gpdb]{.smallcaps}).**[]{#sec:gpd label="sec:gpd"} The third criterion adopted is *geometric balance*, which targets at aligning the distribution of selected prototypes with the marginal distribution of testing point clouds. As point clouds typically consist of thousands (if not millions) of points, it is computationally expensive to directly align the meta features (*e.g.,* coordinates) of points. Furthermore, in representation learning for point clouds, the common practice of using voxel-based architecture typically relies on quantized representations of point clouds and loses the object details due to the limited perception range of voxels. Therefore, we utilize the point density $\phi(\cdot, \cdot)$ within each bounding box to preserve the geometric characteristics of an object in 3D point clouds. By aligning the geometric characteristic of the selected set and unlabeled pool, the fine-tuned detector is expected to predict more accurate localization and size of bounding boxes and recognize both close (*i.e.,* dense) and distant (*i.e.,* sparse) objects at the test time. The probability density function (PDF) of the point density is not given and has to be estimated from the bounding box predictions. To this end, we adopt Kernel Density Estimation (KDE) using a finite set of samples from each class which can be computed as: $$\begin{align}
|
| 75 |
+
\bm{p}(\phi(\mathcal{P},\widehat{\mathcal{B}})) = \frac{1}{N_B h}\sum_{j=1}^{N_B} \mathcal{K}er (\frac{\phi(\mathcal{P},\widehat{\mathcal{B}}) - \phi(\mathcal{P},\widehat{\mathcal{B}}_j)}{h}),
|
| 76 |
+
\end{align}$$ with $h>0$ being the pre-defined bandwidth that can determine the smoothing of the resulting density function. We use Gaussian kernel for the kernel function $\mathcal{K}er(\cdot)$. With the PDF defined, the optimization problem of selecting the final candidate sets $\mathcal{D}_{S}$ of size $N_r$ for the label query is: $$\begin{equation}
|
| 77 |
+
\mathcal{D}_{S}^* = \argmin_{\mathcal{D}_{S}\subset\mathcal{D}_{S_2}}D_{KL}\infdivx{\phi(\mathcal{P}_S,\widehat{\mathcal{B}}_S)}{\phi(\mathcal{P}_T, \mathcal{B}_T)},
|
| 78 |
+
\end{equation}$$ where $\phi(\cdot, \cdot)$ measures the point density of each bounding box. We use greedy search to find the optimal combination from the subset $\mathcal{D}_{S_2}$ that minimizes the KL divergence to the uniform distribution $\bm{p}(\phi(\mathcal{P}_T,\mathcal{B}_T)) \sim \texttt{uniform}(\alpha_{lo}, \alpha_{hi})$. The upper bound $\alpha_{hi}$ and lower bound $\alpha_{lo}$ of the uniform distribution are set to the 95% density interval, *i.e.,* $\bm{p}(\alpha_{lo} <\phi(\mathcal{P},\widehat{\mathcal{B}}_j) < \alpha_{hi}) = 95\%$ for every predicted bounding box $j$. Notably, the density of each bounding box is already recorded during **Stage 1**, so this step does not incur any additional computation overhead. The analysis of time complexity against other active learning methods is presented in Sec. [3.4](#sec:complexity){reference-type="ref" reference="sec:complexity"}.
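A minimal sketch of how this greedy balancing could be realized is shown below, assuming the per-box point densities from **Stage 1** are available as one array per candidate point cloud. The helper names (`kl_to_uniform`, `greedy_density_balance`), the use of `scipy.stats.gaussian_kde`, and the fixed evaluation grid are illustrative choices only.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kl_to_uniform(densities, lo, hi, grid_size=128, eps=1e-12):
    """KL( KDE(densities) || Uniform(lo, hi) ), evaluated on a fixed grid."""
    if densities.size < 2 or np.std(densities) == 0:
        return np.inf                       # KDE needs a few distinct samples
    grid = np.linspace(lo, hi, grid_size)
    p = gaussian_kde(densities)(grid)
    p = p / (p.sum() + eps)
    q = np.full_like(p, 1.0 / grid_size)    # discretized uniform target
    return float(np.sum(p * np.log((p + eps) / q)))

def greedy_density_balance(per_cloud_densities, n_r, lo, hi):
    """Greedily grow D_S from D_{S_2} so the pooled box densities stay close to uniform."""
    selected, pooled = [], np.empty(0)
    remaining = list(range(len(per_cloud_densities)))
    while len(selected) < n_r and remaining:
        scores = [kl_to_uniform(np.concatenate([pooled, per_cloud_densities[j]]), lo, hi)
                  for j in remaining]
        best = remaining[int(np.argmin(scores))]
        selected.append(best)
        pooled = np.concatenate([pooled, per_cloud_densities[best]])
        remaining.remove(best)
    return selected

# toy usage: 100 candidate clouds, each with a handful of predicted box densities
rng = np.random.default_rng(0)
densities = [rng.uniform(5, 500, size=rng.integers(3, 12)) for _ in range(100)]
print(greedy_density_balance(densities, n_r=10, lo=5, hi=500))
```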
|
2305.03088/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-11-23T14:57:51.955Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" etag="WHhLtOwfM-QSHaIpBI6F" version="20.6.0" type="google"><diagram id="XiB2cVh5nBnGSZkr4HAv" name="Page-1">7V1bc5s4FP41foxHgLg9JmmavU+7aafNvuzIIGxajFzAtdNfvxIgrsJgGzDpOp1pzJGQhc7tO0dHZKbcr/ePAdqs/iQ29mYysPcz5c1Mlk2o0f8Z4SUhqKqcEJaBayckKSc8uT9wSgQpdevaOCx1jAjxIndTJlrE97EVlWgoCMiu3M0hXvlbN2iZfiPICU8W8nCt2yfXjlYJ1ZD1nP4Ldpcr/s2SZiYta8Q7pwOHK2STXYGkPMyU+4CQKPm03t9jj60dX5fkvrcNrdnEAuxHXW7Y7S344c9/v+CHt++fF89fbta/7W9gyo3vyNumT5zONnrhSxCQrW9jNgqYKXe7lRvhpw2yWOuO8pzSVtHao1cS/eihBfbukPV1Gd92TzwS0Caf+LT/nUP8KOWxmV7yHjNZAfEPpYdRQL5i3mJjB229iPV3Pa8yoo3CVTw39uX1JUlX6TsOIrwvkNIlesRkjaPghXZJW2HKrVRcFT293uXMV7hMrwqMl3lHlArcMhs65wn9kLLlGBYpAhZpyXrQ9SvxSvu2JbzhJowX+pZ2MDf7eHV4M/20ZL8/YSqkOJgxyUlGXPAmiVPonDNigZZ8NSdXJIY+FdVP3C4tRYZSAXBU9q8uGMlPTTBoixb/9MN6VZ2rJearssEpBfarUMD9wZgPB2P+r5GI7/JF+d7M3yaJ6IHvkixXGA9NKGC8IlL7wRivthtm7Nu3zMXl1rCwxGWrjfdu9DltYZ+fC/Q3++LFC7/w6WN8Ll488/vZRX5TfMXvamRHSLaBhTuYOWxzh9zAtaIqgjpHOC3AHorc72U3LmJT+g3viBvrUyoUsmlWrYGucAofJnmo9M6i660MJsn63DCYj5ChDiUA5PLIEphriqRrgOIlE8pAL39NhIIljmpfE8tVtjBniJrWZmOsTPFyI8K9dc2u3JMAO5gKAGV3V7tBef4HAw5lEUaeu/TpZ4sKE/VSyh3TaZdis9u0Ye3aNhvjLsDU3qFFPB4Tww1bq3j11LuZ+qaMPAwR8jBNIJbfg9qZos70m3Os19343IA5FQe9JA/SeRLMuxDHCfEwEqMP5pVut8ttKPRMyv/QMwFYNULw8pDEGIz5FIzS1o8+5ZBNPzxFKKKx31UW4lbdqMoCUASyYI4qC+aAsuAy1q9p75VIBOD/TwRkoFVFQBGZA21MoMrx1nFAtRS2t6JWMFcLuJVj2DbcmtzEkWuOVY9Ergn66uANJ4JcJQiqUiKdjFwVWIPBcm2wgfGpKtUk7GEfBWiKMDJNYHWEkZzj/cBIyZSnDhx58rcfY8Ho71BEmerHFBnkOSKerJVfYyisd7U8xqQsT68x8yQsTz31+oTWDKM+MVsSR7iv3gbJfdogXS+HsmdapAz7lga9McoDDGivROnXnrNwGbY5xvgUwM0FzY95NT+Dmp96DriUXHv1pgf2ZXoY+jHMspE4M4u2Lw8zgq1pTcO2xtISaAimC9GzYJTm/tuO/bONxCz83gpC8kKYDlaIRfcoHj8g/pKZEcdhO/gyID6THkw2Hp6XQ/9BJs82wjrP2/Udyh6qfmFsxJDHshP3cUPEsgrLRDUB8lkGy8HYc/1lyB7jV3+EZ+Hp067PE8+crXmcbKEKx5RqRaLsCXY4bmVzT7Jz1IAyVvuJgsfscmaC/Exh5geTe91myhbwXSwRyUQC9huj0PXiCaPvOGFHPCEqUNuFl3OBpJvcX13fDvmEOe/AJiC085o12NvAZaLY9ig96GXTnoplsexNXQzATJzaKqfJurJ9oOlrmmWJtoTmDZOfNzXUfBv1A1HZrZUzcLy+pF4g0t39iVKBZfzWiKmOKDIo15eopqC+RBbk75TB8ne6EEg8pZckiFZkSXzkPeTUyrLkff4gZJPy5wuOopcUK6BtRMrcGzOz17Ho6HTArBodAbPWNV7vDFrOYr2hXZL1fQRBclsUNB3eG7Bv3se30tATvRQ6pJi9Ec9CUDFAoFI12NZfUipyl8ygXyjcvMXo9ujEkoH+Yq6FDpBldACVWLagX+n/37Y4jFwGSBv8VEZ2T/ZdB+sb+3RfJ0dvR3s5pSI10KjvUhmCTMBwXs58faZuYl5OA129nHYpL7f45ASLNfx29+3z379/1D8sP6rw5qJOroxvum4zFPzaWVuX/fF0It4rG5HblTbvBfVD/c/2XkKBk5WrxB0lccJVlHrHyqNInKqq5/VPn3tQfKWJajaGwlf3xKfAIUQMRiE2tV/cMCKMJVdIdUz9l1Et64DAuCyo0qQaY8Y0dOOCKgmcwt92eyh3RVW9e+CzeG9OKG3U2cnpE0PUAt5P2RdKsnGcbzPBef179oXCtf355XgE4zSR8OBYAYXGsOHBwUUdC3zFWOmKtY7BWrVCEhHUUsfcpeHev1iJhtfIp2tJqfHRe/qbcpvyZmslGcsKH/PFlBqWu1j3XgE+tXp3W1toqlZhOquQdxzZsvphQ2WvDMJL4131kr7i50giah09ijqtrTKtXpzDle5DgOL4lo1x6y1J4Ear9cDaZ2q6ggTahyVbxXo/2ld9FcLF1c94hepXUr4CbruY/r327UpYCgna+yvyCIiuvl1564c7HNyEG8QKpeLjKlZE+TiwU9bNRSwgNaeMtb6ccnVrj66w4AjaqIZBF6Uzq5Yir9K2PBSGrjU7UKjdUQ9PqWI5Q3e7quQ45da6XntrRmWMrrXWlbBflSqCMnCZtV4/XzaS+DSgtJNPFvW3NTtSxb5WtSXgNAlS5ApWUbuJ0NG7QYZ4woO6F+OIhMHpNZ6wIYOwW6G4pjariWnLJIgOY5+eXRinsHPwt4hJdQ+ZneUuHdMeLF8vsGnXMs+j/C+HxK1WlvvBicSuRj139D4rawMfXjas5u0+9mqOi4fGqIa8UDRB6EoDV8OGPalf5YU6KoBzpWP0qrFSiLMZIdxqMH52FTxdtzgEbNetrmcOxykuE0HFqW+Bl3MSw50jNQWodMrbm1CqgDu9JddQ6Q+VEUqjTVGwMuTuUUDYnJ4opGIZzus20jFuKE0+ZQKlKTUfpA+UKLm1wsU/wQ8Y6OT593//eYTeLeZnWX9eB1TBgGfYLrOjP5L4uxwnAvb4xAv24W+8C9woPtiYHoh863pUjWLKoFgPYSgpch3rOcg2bbUfJVOrStY9H9kP1DssFqPY6ffXUyoniU6lOlw16/ZZlP3qwz4LgZDoTd+NCgo6KOjw70tXzLL6KZopUL8sIdL7K9OF69j80uwREmYrsrvmy8404qKEmQREMjSUC
EkXPTA2mWD9YLzWHqyDS2EjIfyd0Lmcriw9LlY/Bf6eyeHe0zEnhe6qXrYfUGupNObqfWJ/NX0n5aCVydKYp6DfP87YK2o222ut6LGuq1JDLNUTzUMF+QeNrjjd/4h9HKARCkQBNoBh1GO/BcAK7usvtlRgA9QNVmM1XPBHL/M/2JToef5Xr5SH/wA=</diagram></mxfile>
|
2305.03088/main_diagram/main_diagram.pdf
ADDED
|
Binary file (62.5 kB). View file
|
|
|
2305.19693/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2305.19693/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,80 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
In recent years, generative diffusion models [@sohl2015deep], also known as score-based diffusion models, have demonstrated significant progress in image [@ho2020denoising; @song2021scorebased], sound [@chen2020wavegrad; @kong2020diffwave; @liu2023audioldm] and video generation [@ho2022video; @singer2022make]. These models have not only produced samples of exceptional quality, but also demonstrated a comprehensive coverage of the data distribution. The generated samples exhibit impressive diversity and minimal mode collapse, which are crucial characteristics of high-performing generative models [@salimans2016improved; @lucic2018gans; @thanh2020catastrophic]. Diffusion models are defined in terms of a stochastic dynamic that maps a simple, usually Gaussian, distribution into the distribution of the data. In an intuitive sense, the dynamics of a generated sample passes from a phase of equal potentiality, where any (synthetic) datum could be generated, to a denoising phase where the (randomly) "selected" datum is fully denoised. As we shall see in the rest of this paper, this can be interpreted as a form of spontaneous symmetry breaking. As stated by @gross1996role, "The secret of nature is symmetry, but much of the texture of the world is due to mechanisms of symmetry breaking." Surprisingly, the concept of spontaneous symmetry breaking has not yet been examined in the context of generative modeling. This is particularly noteworthy given the importance of spontaneous symmetry breaking in nature, which accounts for the emergence of various phenomena such as crystals (breaking translation invariance), magnetism (breaking rotation invariance), and broken gauge symmetries, a common theme in contemporary theoretical physics.
|
| 4 |
+
|
| 5 |
+
{#fig:main_image width="\\textwidth"}
|
| 7 |
+
|
| 8 |
+
The concept of spontaneous symmetry breaking is strongly connected with the theory of phase transitions [@stanley1971phase; @donoghue2014dynamics]. In high energy physics, experimental evidence of particle collisions and decay appears to respect only a subset of the symmetry group that characterizes the theory of weak and electromagnetic interactions. This mismatch, which is responsible for the non-zero mass of several particles, is thought to be due to the fact that the potential energy of the Higgs field has infinitely many minima, each of which corresponds to a subgroup of the symmetries of the standard model [@PhysRevLett.13.321; @PhysRevLett.13.508; @anderson1963plasmons; @nambu1961dynamical]. Therefore, while the overall symmetry is preserved in the potential, this is not reflected in our physical experiments, as we experience a world where only one of these equivalent Higgs states has been arbitrarily "selected". The Higgs potential is often described as a "Mexican hat", since it has an unstable critical point at the origin and a circle of equivalent minima around it (see Figure [\[fig:mexican_hat\]](#fig:mexican_hat){reference-type="ref" reference="fig:mexican_hat"}). Spontaneous symmetry breaking phenomena are also central to modern statistical physics, as they describe the thermodynamics of phase transitions [@stanley1971phase]. For example, a ferromagnetic material at low temperature generates a coherent magnetic field in one particular direction, since all the atomic dipoles tend to align to the field. On the other hand, at higher temperature the kinetic energy prevents this global alignment and the material does not generate a macroscopic magnetic field. In both cases, the laws of physics are spherically invariant and therefore do not favor any particular direction. However, at low temperature a direction is selected among all the equally likely possibilities, leading to an apparent breaking of the physical symmetries of the system. The global symmetry can then only be recovered by considering an ideal ensemble of many of these magnets, each aligning along one of the equally possible directions.
|
| 9 |
+
|
| 10 |
+
|
| 13 |
+
|
| 14 |
+
In this paper, using both theoretical and experimental evidence, we show that the generative dynamics of diffusion models is characterized by a similar spontaneous symmetry breaking phenomenon. In this case, the symmetry group does not come from the laws of physics but it is instead implicit in the dataset. For example, translational invariance can be implicit in the fact that translated versions of similar images are equally represented in a naturalistic dataset. During the early stage of the generative dynamics, each particle reflects all the symmetries since its dynamics fluctuates around a highly symmetric central fixed-point. However, after a critical time, this central fixed-point becomes unstable, and each particle tends towards a different (synthetic) datum with arbitrarily "selected" features, with the global symmetry being apparent only when considering the ensemble of all generated data. An overview of spontaneous symmetry breaking in diffusion models is summarized in Figure [1](#fig:main_image){reference-type="ref" reference="fig:main_image"}. Our code can be found at <https://github.com/gabrielraya/symmetry_breaking_diffusion_models>
|
| 15 |
+
|
| 16 |
+
# Method
|
| 17 |
+
|
| 18 |
+
**Notation**. We denote random variables using upper-case letters and their values using lower-case letters. Additionally, vector-valued variables are denoted using boldface letters. The forward process is denoted as $(\mathbf{Y}_s, s)$, where $\mathbf{Y}_0$ represents the data and $\mathbf{Y}_s$ denotes a noisy state. The generative process is denoted as $(\mathbf{X}_t, t)$, with $s$ and $t = T - s$ representing forward and generative time, respectively, and $T$ the time horizon. We will always use the standard integration from $0$ to $T>0$ (unlike @song2021scorebased) in the shorthand notation of the Itô integral equation. Since we focus on the generative part, we use $\mathbf{\hat{W}}_s$ and $\mathbf{W}_t$ to denote the Brownian motions associated with the inference and generative SDEs, respectively. For ease of notation, we assume an additive SDE, so $g$ only depends on time.
|
| 19 |
+
|
| 20 |
+
**Continuous diffusion models**. The stochastic dynamics of a particle $\mathbf{Y}_0 \sim p(\mathbf{y}, 0)$, starting at time $s=0$, are described as the solution to the Itô SDE: $d \mathbf{Y}_s = f(\mathbf{Y}_s, s) \text{d}s +g(s)\text{d}\mathbf{\hat{W}}_s
|
| 21 |
+
\label{eq:forward sde}$, where $f$ and $g$ are the drift and diffusion coefficient chosen properly such that the marginal density will (approximately) converge to a spherical Gaussian steady-state distribution as $s\to T$. We can express the marginal density at time $s$ as $$\begin{equation}
|
| 22 |
+
\label{eq: marginals}
|
| 23 |
+
p(\boldsymbol{y}, s) = \int_{\mathbb{R}^D} k(\boldsymbol{y}, s; \boldsymbol{y}_0, 0) p(\boldsymbol{y}_0, 0) \text{d} \boldsymbol{y}_0~,
|
| 24 |
+
\end{equation}$$ where $k(\boldsymbol{y}, s; \boldsymbol{y}', s')$ is the transition kernel that 'solves' Eq. [\[eq:forward sde\]](#eq:forward sde){reference-type="ref" reference="eq:forward sde"} below. To generate samples from $p(\boldsymbol{y}, 0)$ by starting from the tractable $p(\boldsymbol{y}, T)$, we can employ a "backward" SDE that reverses this process [@anderson1982reverse], whose marginal density evolves according to $p(\boldsymbol{y},s)$, reverse in time, $$\begin{equation}
|
| 25 |
+
\label{eq: generative sde}
|
| 26 |
+
d \mathbf{X}_t = \Big[ g^2(T - t) \nabla_{\boldsymbol{x}} \log p(\mathbf{X}_t, T-t) - f(\mathbf{X}_t, T-t)\Big]dt +g(T-t)d\mathbf{W}_t
|
| 27 |
+
% \label{eq:sde}
|
| 28 |
+
\end{equation}$$ The score function $\nabla_{\boldsymbol{x}} \log p(\boldsymbol{x}, T-t)$ directs the dynamics towards the target distribution $p(\boldsymbol{y}, 0)$ and can be reliably estimated using a denoising autoencoder loss [@vincent2011connection; @song2021scorebased].\
|
| 29 |
+
**Ornstein--Uhlenbeck process**. In the rest of the paper, we will assume that the forward process follows a (non-stationary) Ornstein--Uhlenbeck dynamics: $\text{d} \mathbf{Y}_s = - \frac{1}{2} \beta(s) \mathbf{Y}_s ds + \sqrt{\beta(s)} \text{d}\mathbf{\hat{W}}_s
|
| 30 |
+
\label{eq:ou}$. This is an instance of Variance Preserving (VP) diffusion [@song2021scorebased] wherein the transition kernel can be written in closed form: $k(\boldsymbol{y}, s; \boldsymbol{y}_0, 0) = \mathcal{N}\left(\boldsymbol{y}; \theta_{s} \boldsymbol{y}_0, (1 - \theta_{s}^2) I \right)$, with $\theta_s = e^{-\frac{1}{2} \int_0^s \beta(\tau) \text{d} \tau}~.$ It is easy to see that this kernel reduces to an unconditional standard spherical normal for $s \rightarrow \infty$ while it tends to a delta function for $s \rightarrow 0$.
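As a small illustration of the closed-form VP kernel above, the following sketch computes $\theta_s$ for a linear schedule $\beta(\tau)$ and draws $\mathbf{Y}_s \sim \mathcal{N}(\theta_s \boldsymbol{y}_0, (1-\theta_s^2)I)$. The linear schedule and its endpoint values are assumptions made only for this example, not prescribed by the text.

```python
import numpy as np

def theta(s, beta0=0.1, beta1=20.0, T=1.0):
    """theta_s = exp(-0.5 * int_0^s beta(tau) dtau) for the linear schedule
    beta(tau) = beta0 + (beta1 - beta0) * tau / T (an illustrative choice)."""
    integral = beta0 * s + 0.5 * (beta1 - beta0) * s**2 / T
    return np.exp(-0.5 * integral)

def forward_sample(y0, s, rng):
    """Draw Y_s ~ N(theta_s * y0, (1 - theta_s^2) I), the closed-form VP kernel."""
    th = theta(s)
    return th * y0 + np.sqrt(1.0 - th**2) * rng.normal(size=np.shape(y0))

rng = np.random.default_rng(0)
y0 = np.array([1.0, -1.0])
print(theta(0.01), theta(1.0))        # close to 1 near s = 0, close to 0 near s = T
print(forward_sample(y0, 0.5, rng))
```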
|
| 31 |
+
|
| 32 |
+
For the purpose of our analysis, it is convenient to re-express the generative SDE in Eq. [\[eq: generative sde\]](#eq: generative sde){reference-type="ref" reference="eq: generative sde"} in terms of a potential energy function $$\begin{equation}
|
| 33 |
+
d \mathbf{X}_t = -\nabla_{\boldsymbol{x}} u(\mathbf{X}_t, T- t)\text{d}t + g(T-t)\text{d}\mathbf{W}_t
|
| 34 |
+
% \label{eq:sde}
|
| 35 |
+
\end{equation}$$ where $$\begin{equation}
|
| 36 |
+
u(\boldsymbol{x}, s) = -g^2(s) \log p(\boldsymbol{x}, s) + \int_{\boldsymbol{0}}^{\boldsymbol{x}} f(\boldsymbol{z}, s) \cdot \text{d}\boldsymbol{z}~.
|
| 37 |
+
\end{equation}$$ where the line integral can go along any path connecting $\boldsymbol{0}$ and $\boldsymbol{x}$. Given a sequence of potential functions $u(\boldsymbol{x}, s)$, we can define an associated symmetry group of transformations $$\begin{equation}
|
| 38 |
+
G = \{g: \mathbb{R}^D \leftrightarrow \mathbb{R}^D \mid u(g(\boldsymbol{x}), s) = u(\boldsymbol{x}, s) , \forall ~s \in \mathbb{R}^+, \boldsymbol{x} \in \mathbb{R}^D \}~.
|
| 39 |
+
\end{equation}$$ In words, $G$ is the group of all transformations of the ambient space $\mathbb{R}^D$ that preserve the probability measure of the training set at all stages of denoising.
|
| 40 |
+
|
| 41 |
+
We define a path of fixed points as $\tilde{\boldsymbol{x}}(t): \mathbb{R} \rightarrow \mathbb{R}^D$ such that $\nabla u(\tilde{\boldsymbol{x}}(t), T - t) = 0, \forall t \in \mathbb{R}^+$. These are points of vanishing drift for the stochastic dynamics. The stability of the path can be quantified using the second partial derivatives, which can be organized in the path of Hessian matrices $H(\tilde{\boldsymbol{x}}, T - t)$. A fixed-point is stable when all the eigenvalues of the Hessian matrix of the potential at that point are positive, while it is a saddle or unstable when at least one eigenvalue is negative. Around a stable path of fixed-point, the drift term can be well approximated by a linear function: $\nabla_{\boldsymbol{x}} u(\boldsymbol{x}, T- t) \approx H(\tilde{\boldsymbol{x}}, T - t) (\boldsymbol{x} - \tilde{\boldsymbol{x}})~.~$ From this we can conclude that, along a stable path, the dynamics is locally characterized by the quadratic potential $$\begin{equation}
|
| 42 |
+
\tilde{u}(\boldsymbol{x}, T - t) = \frac{1}{2} (\boldsymbol{x} - \tilde{\boldsymbol{x}}(t))^T H(\tilde{\boldsymbol{x}}(t), T - t) (\boldsymbol{x} - \tilde{\boldsymbol{x}}(t))~.
|
| 43 |
+
\end{equation}$$ The associated symmetry group $\tilde{G}$ is generally only a subgroup of the global symmetry group $G$. We say that the dynamics exhibit a *bifurcation* when there are at least two fixed-point paths that overlap for some values of $t$. In this case, usually a stable fixed point loses its stability after a critical time $t_c$ and 'splits' into two or more stable paths. As we will see, this is at the core of the spontaneous symmetry breaking phenomenon. Each of the branched stable paths only preserves a sub-group of the overall symmetry, while the full symmetry is still present when taking all stable paths into account.
|
| 44 |
+
|
| 45 |
+
We start by considering a very simple one-dimensional example with a dataset consisting of two points $y_{-1} = -1$ and $y_1 = -y_{-1} = 1$ sampled with equal probability. In this case, the symmetry group that preserves the potential comprises the identity and the transformation $g(x) = -x$. Up to terms that are constant in $x$, the potential is given by the following expression: $$\begin{equation}
|
| 46 |
+
u(x, t) = \beta(T - t) \left( -\frac{1}{4} x^2 - \log{\left(e^{-\frac{(x - \theta_{T-t})^2}{2 (1 - \theta_{T-t}^2)}} + e^{-\frac{(x + \theta_{T-t})^2}{2 (1 - \theta_{T-t}^2)}} \right)} \right)
|
| 47 |
+
\end{equation}$$ which can be obtained from Eq. [\[eq: marginals\]](#eq: marginals){reference-type="ref" reference="eq: marginals"}. Figure [1](#fig:main_image){reference-type="ref" reference="fig:main_image"}a illustrates the evolution of the potential (top) and the corresponding one-dimensional generative process (bottom). For all values of $t$, the gradient vanishes at $x = 0$ since the potential is symmetric under the transformation $g(x) = - x$. The stability of this fixed-point can be established by analyzing the second derivative: $$\begin{equation}
|
| 48 |
+
\frac{\partial^2 u}{\partial x^2}\bigg|_{x=0} = - \beta(T -t) \left( \frac{1}{2} + \frac{2 \theta_{T-t}^2 - 1}{(\theta_{T-t}^2 - 1)^2} \right)
|
| 49 |
+
\end{equation}$$
|
| 50 |
+
|
| 51 |
+
<figure id="fig:three_graphs">
|
| 52 |
+
|
| 53 |
+
<figcaption>Bifurcation analysis of the generative dynamics of a one-dimensional diffusion model.<br />
|
| 54 |
+
(a) Geometric visualization of bifurcation of fixed points through the intersection of a straight line and a hyperbolic tangent at a value <span class="math inline"><em>θ</em> > <em>θ</em><sub><em>c</em></sub></span>. (b) Bifurcation diagram obtained by numerically solving the self-consistency equation Eq. <a href="#eq:all-fixed-points" data-reference-type="ref" data-reference="eq:all-fixed-points">[eq:all-fixed-points]</a>, demonstrating the bifurcation at the critical value <span class="math inline"><em>θ</em><sub><em>c</em></sub></span>. The blue, orange and green lines denote the three paths of fixed-points. The vector field is given by the drift term (i.e. the gradient of the potential) in the generative SDE.</figcaption>
|
| 55 |
+
</figure>
|
| 56 |
+
|
| 57 |
+
where $\theta_{T-t}$ is a monotonic function of $t$ ranging from $0$ to $1$. The second derivative is positive up to a critical value $\theta_c$ and negative afterwards. This implies that the fixed-point at the origin loses its stability when $\theta_{T-t} > \theta_c$. We can find this critical value by setting the second derivative equal to zero and solving for $\theta_c$, which gives $$\begin{equation}
|
| 58 |
+
\label{eq: critical theta 1d}
|
| 59 |
+
\theta_c = \sqrt{\sqrt{2} - 1} \approx 0.6436
|
| 60 |
+
\end{equation}$$ When $\theta_{T-t} > \theta_c$, the origin is not the only fixed-point of the system. All fixed-point can be found by solving the self-consistency equation: $$\begin{equation}
|
| 61 |
+
(\theta_{T-t}^2 + 1) x^* = -2 \theta_{T-t} \tanh{\left(\frac{ \theta_{T-t} x^*}{\theta_{T-t}^2 - 1} \right)}
|
| 62 |
+
\label{eq:all-fixed-points}
|
| 63 |
+
\end{equation}$$
|
| 64 |
+
|
| 65 |
+
This equation is strikingly similar to the *Curie-Weiss equation of state*, which describes magnetization under the mean-field approximation [@tauber2014critical]. Solving this equation corresponds to finding the intersections between a straight line and a hyperbolic tangent. From Figure [\[fig:critical_points\]](#fig:critical_points){reference-type="ref" reference="fig:critical_points"}, it is clear that there are three solutions for $\theta_{T-t} > \theta_c$ and only the zero solution otherwise. This corresponds to a bifurcation into two paths of fixed points that converge to the values of the data-points for $\theta_{T-t} \rightarrow 1$.
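The bifurcation can be reproduced numerically with a few lines: for each value of $\theta$ we look for a non-zero root of Eq. [\[eq:all-fixed-points\]](#eq:all-fixed-points){reference-type="ref" reference="eq:all-fixed-points"} on a bracket away from the origin. This is only a sketch of such a check; the bracketing interval and the sampled values of $\theta$ are arbitrary choices.

```python
import numpy as np
from scipy.optimize import brentq

def F(x, th):
    """Residual of the self-consistency equation (eq:all-fixed-points)."""
    return (th**2 + 1.0) * x + 2.0 * th * np.tanh(th * x / (th**2 - 1.0))

theta_c = np.sqrt(np.sqrt(2.0) - 1.0)     # ~0.6436, Eq. (critical theta 1d)

for th in [0.3, 0.6, theta_c + 1e-3, 0.8, 0.95]:
    # look for a non-zero fixed point on (0, 2]; x* = 0 is always a solution
    a, b = 1e-6, 2.0
    if F(a, th) * F(b, th) < 0:
        x_star = brentq(F, a, b, args=(th,))
        print(f"theta={th:.4f}: fixed points at 0 and +/- {x_star:.4f}")
    else:
        print(f"theta={th:.4f}: only the symmetric fixed point x* = 0")
```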
|
| 66 |
+
|
| 67 |
+
We can now describe the spontaneous symmetry breaking phenomenon. The potential $u(x, t)$ is invariant under the transformation $g(x) = -x$ for all values of $\theta_{T-t}$. However, for $\theta_{T-t} > \theta_c$, the individual particles will be 'trapped' in one of those new stable paths of fixed-points, locally breaking the symmetry of the system. From the point of view of generative modeling, this spontaneous symmetry breaking corresponds to the selection of a particular sample among all possible ones. This selection almost exclusively depends on the noise fluctuations around the critical time. In fact, fluctuations for $t \ll t_c$ are irrelevant since the process is mean reverting towards the origin. Similarly, when $t \gg t_c$, fluctuations will be reverted towards the closest fixed-point. However, fluctuations are instead amplified when $t \approx t_c$ since the origin becomes unstable, as illustrated by the red arrows in Figure [1](#fig:main_image){reference-type="ref" reference="fig:main_image"}a.
|
| 68 |
+
|
| 69 |
+
In supplemental section [\[supp:1d_calculation\]](#supp:1d_calculation){reference-type="ref" reference="supp:1d_calculation"}, we provide detailed calculations pertaining to this one-dimensional example. Additionally, in section [\[supp:hyperspherical\]](#supp:hyperspherical){reference-type="ref" reference="supp:hyperspherical"}, we generalize our investigation to models of arbitrarily high dimensionality by considering a hyper-spherical data distribution.
|
| 70 |
+
|
| 71 |
+
We can now move to a more realistic scenario where the data consists of a finite number $N$ of data-points $\{\boldsymbol{y}_1, \dots, \boldsymbol{y}_N\}$ embedded in $\mathbb{R}^D$. Assuming iid sampling, the most general symmetry group in this case is given by all norm-preserving transformations of $\mathbb{R}^D$ that map data-points into data-points. Up to constant terms, the potential is given by $$\begin{equation}
|
| 72 |
+
u(\boldsymbol{x}, t) = - \beta(t) \left(\frac{1}{4} {\left\lVert\boldsymbol{x}\right\rVert}_{2}^2 + \log{\sum_j e^{-\frac{{\left\lVert\boldsymbol{x} - \theta_{T - t} \boldsymbol{y}_j\right\rVert}_{2}^2}{2 ( 1 - \theta_{T-t}^2)}} } \right)
|
| 73 |
+
\label{eq:potential_normalized_data}
|
| 74 |
+
\end{equation}$$ where the sum runs over the whole dataset. The fixed-points of this model can be found by solving the following self-consistency equation: $$\begin{equation}
|
| 75 |
+
\label{eq: generalized self-consistency}
|
| 76 |
+
\frac{1 + \theta_{T - t}^2}{2 \theta_{T - t}} \boldsymbol{x}^* = \frac{1}{\sum_j w_j(\boldsymbol{x}^*; \theta_{T - t})} \sum_j w_j(\boldsymbol{x}^*; \theta_{T - t}) \boldsymbol{y}_j
|
| 77 |
+
\end{equation}$$ where $w_j(\boldsymbol{x}^*; \theta_{T - t}) = e^{-{\left\lVert\boldsymbol{x}^* - \theta_{T - t} \boldsymbol{y}_j\right\rVert}_{2}^2/(2(1 - \theta_{T - t}^2))}$. While this is a very general case, we can still prove the existence of a spontaneous symmetry breaking at the origin under two mild conditions. First of all, we assume the data-points to be centered: $\sum_j \boldsymbol{y}_j = 0~.$ Furthermore, we assume that the data-points are normalized so as to have norm $r$: ${\left\lVert\boldsymbol{y}_j\right\rVert}_{2} = r~, \forall j~.$ Under these conditions, which can be easily enforced on real data through normalization, it is straightforward to see from Eq. [\[eq: generalized self-consistency\]](#eq: generalized self-consistency){reference-type="ref" reference="eq: generalized self-consistency"} that the origin is a fixed-point of the dynamics for all values of $t$. While we cannot evaluate all the eigenvalues of the Hessian matrix at the origin in closed form, we can obtain a simple expression for the trace of the Hessian (i.e. the Laplacian of the potential): $$\begin{equation}
|
| 78 |
+
\label{eq: hyper-spherical laplacian}
|
| 79 |
+
\nabla^2 u|_{x=0} = -\beta(T - t) \left(\frac{D}{2} + \frac{(D + r^2) \theta_{T-t}^2 - D}{( \theta_{T-t}^2 - 1)^2} \right)~,
|
| 80 |
+
\end{equation}$$ which switches sign when $\theta_{T-t}$ is equal to $\theta^* = \sqrt{(\sqrt{D^2 + r^4} - r^2)/D}$. However, in this case we cannot conclude that this is the point of the first spontaneous symmetry breaking since the Laplacian is the sum of all second derivatives, which are not necessarily all equal. Nevertheless, we do know that all second derivatives are positive at the origin for $t \rightarrow 0$, since the forward dynamics has a Gaussian steady-state distribution. Therefore, from the change of sign of the Laplacian we can conclude that at least one second derivative at the origin changes sign, corresponding to a change in stability and the onset of a spontaneous symmetry breaking with $\theta^* > \theta_c$. In supplemental section [\[supp:normalized_datasets_calculation\]](#supp:normalized_datasets_calculation){reference-type="ref" reference="supp:normalized_datasets_calculation"}, we provide detailed calculations pertaining to this model.
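As a quick numerical sanity check (not part of the original derivation), the sign-change point $\theta^*$ of the Laplacian can be evaluated directly; for $D = 1$ and $r = 1$ the expression reduces to $\sqrt{\sqrt{2}-1}$, the one-dimensional critical value of Eq. [\[eq: critical theta 1d\]](#eq: critical theta 1d){reference-type="ref" reference="eq: critical theta 1d"}. The values of $D$ and $r$ below are arbitrary illustrations.

```python
import numpy as np

def theta_star(D, r):
    """Sign-change point of the Laplacian at the origin, Eq. (hyper-spherical laplacian)."""
    return np.sqrt((np.sqrt(D**2 + r**4) - r**2) / D)

# For D = 1, r = 1 the formula gives sqrt(sqrt(2) - 1) ~ 0.6436,
# the same value as the one-dimensional critical point theta_c.
print(theta_star(1.0, 1.0), np.sqrt(np.sqrt(2.0) - 1.0))

# A couple of higher-dimensional examples (D and r chosen arbitrarily here)
for D, r in [(100.0, 1.0), (100.0, 10.0), (3072.0, 30.0)]:
    print(f"D={D:.0f}, r={r:.0f}: theta* = {theta_star(D, r):.4f}")
```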
|
2306.10563/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2306.10563/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,146 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
The world surrounding us involves multiple modalities, including vision, audio, text, etc., which complement each other and jointly comprise human perception [@baltruvsaitis2018multimodal; @zhu2021deep]. Audio-visual speech recognition (AVSR) leverages both audio and visual modalities to understand human speech, which provides a promising solution to ameliorate the noise-robustness of audio-only speech recognition with noise-invariant lip movement information [@sumby1954visual].
|
| 4 |
+
|
| 5 |
+
<figure id="fig1" data-latex-placement="t">
|
| 6 |
+
<embed src="figs_cr/fig1.pdf" />
|
| 7 |
+
<figcaption>Illustration of noisy audio-visual speech recognition. (a) Mainstream AVSR approaches with noise adaptation. (b) Our framework constructs viseme-phoneme mapping for modality transfer, which restores clean audio from visual signals to enable speech recognition under any noisy conditions.</figcaption>
|
| 8 |
+
</figure>
|
| 9 |
+
|
| 10 |
+
However, most existing efforts still focus on the audio modality to improve noise-robustness, considering its dominance in AVSR, where the audio modality contains much richer information to represent speech content than the visual modality [@sataloff1992human; @ren2021learning]. Current mainstream approaches introduce noise adaptation techniques to improve robustness[^2], inspired by robust speech recognition [@wang2020complex]. Most of them leverage noise-corrupted training data to strengthen robustness [@afouras2018deep; @ma2021end; @song2022multimodal], and recent works extend this to the self-supervised learning scheme [@shi2022robust; @hsu2022u]. Based on that, the latest works introduce speech enhancement as a front-end to denoise before recognition [@xu2020discriminative; @hong2022visual]. Despite their effectiveness, these methods usually face two practical challenges. First, they require abundant labeled noisy audio-visual data for network training, which is not always available in some real-world scenarios [@lin2021unsupervised; @chen2022noise]. Second, the well-trained model may not adapt to new-coming noise scenes in practical applications^[2](#fn2){reference-type="ref" reference="fn2"}^, resulting in less optimal model generality [@meng2017unsupervised]. Therefore, our research idea in this paper is to leverage the visual modality to develop a general noise-robust AVSR system without depending on noisy training data.
|
| 11 |
+
|
| 12 |
+
We may gain some inspiration from the human perception mechanism of noisy audio-visual speech. Neuroscience studies [@nath2011dynamic] find that the human brain will unconsciously rely more on the lip movement to understand speech under noisy conditions (*a.k.a.*, McGurk Effect, [@mcgurk1976hearing]). During this process, instead of directly recognizing lip movement, the human brain will first transfer it to a speech signal in the auditory cortex for further understanding [@bourguignon2020lip; @megevand2020crossmodal]. With prior knowledge of lip-audio mapping, the human brain can restore informative clean audio from lip movement under any noisy conditions to aid in speech understanding [@bernstein2004auditory; @aller2022differential].
|
| 13 |
+
|
| 14 |
+
Motivated by the above observations, we propose a universal viseme-phoneme[^3] mapping approach (UniVPM) to implement modality transfer, which can restore clean audio from lip movement to enable speech recognition under any noisy conditions. We first build two universal memory banks to model all the visemes and phonemes via online balanced clustering. Based on that, an adversarial mutual information estimator is proposed to construct a strong viseme-phoneme mapping, which enables the final lip-to-audio modality transfer via retrieval. As a result, our system can adapt well to any testing noise without requiring noisy training data. Empirical results show the effectiveness of our approach. Our contributions are summarized as:
|
| 15 |
+
|
| 16 |
+
- We present UniVPM, a general noise-robust AVSR approach that leverages the visual modality, which can adapt to any testing noise without depending on noisy training data, *a.k.a.*, unsupervised noise adaptation.
|
| 17 |
+
|
| 18 |
+
- We build two universal banks to model all the visemes and phonemes via online balanced clustering, followed by an adversarial mutual information estimator to construct strong mapping between them, which enables modality transfer to restore clean audio from lip movement for speech recognition under any noises.
|
| 19 |
+
|
| 20 |
+
- Our UniVPM outperforms previous state-of-the-arts on LRS3 and LRS2 benchmarks. Extensive experiments also show its superiority on visual speech recognition (VSR) task.
|
| 21 |
+
|
| 22 |
+
# Method
|
| 23 |
+
|
| 24 |
+
<figure id="fig2" data-latex-placement="t">
|
| 25 |
+
<embed src="figs_cr/fig2.pdf" style="width:88.0%" />
|
| 26 |
+
<figcaption>Illustration of our proposed UniVPM. (a) Training on clean audio-visual data to construct universal viseme-phoneme mapping. (b) Inference on any noisy data with restored clean audio from modality transfer.</figcaption>
|
| 27 |
+
</figure>
|
| 28 |
+
|
| 29 |
+
The overall framework of proposed UniVPM is illustrated in Fig. [2](#fig2){reference-type="ref" reference="fig2"}. During training, we first send the input video and clean audio streams into two front-ends for processing, which generates modality sequences $f_v, f_a \in \mathbb{R}^{T\times D}$, where $T$ is number of frames and $D$ is embedding dimension. These frames are sent into two memory banks to model all the visemes and phonemes, using an online balanced clustering algorithm where each cluster center represents a specific viseme or phoneme. Then, we propose an adversarial mutual information estimator to construct strong mapping between corresponding visemes and phonemes. Based on that, we finally implement modality transfer via retrieval to restore clean audio from visual signals, which enables speech recognition under any testing noises.
|
| 30 |
+
|
| 31 |
+
Clustering is a widely used knowledge discovery technique to partition a set of data points into homogeneous groups, which has a variety of applications such as data mining [@fayyad1996advances]. Among them, the $K$-Means algorithm [@macqueen1967classification] is the most well-known and popular one. However, it cannot be directly applied to our viseme and phoneme clustering due to the imbalanced data distribution (see §[6.4](#assec:analysis_phoneme_distribution){reference-type="ref" reference="assec:analysis_phoneme_distribution"}). This imbalance may challenge $K$-Means clustering due to the uniform effect [@xiong2006k]. As shown in Fig. [3](#fig3){reference-type="ref" reference="fig3"} (a), most cluster centers gather in the majority data class (*i.e.*, over-fitting), leaving the minority class not well modeled.
|
| 32 |
+
|
| 33 |
+
:::: algorithm
|
| 34 |
+
::: algorithmic
|
| 35 |
+
Streaming data $D$, number of clusters $N$, maximum cluster size $S_{max}$. Initialize an empty memory bank $\mathcal{B}$ and a list of empty cluster banks $\{\mathcal{B}_1, \mathcal{B}_2, ..., \mathcal{B}_N\}$. Receive new batch data $d$ from $D$ Append all frame samples in $d$ to bank $\mathcal{B}$ Initialize a list of cluster centers $\{c_1, c_2, ..., c_N\}$ from $\mathcal{B}$ using [K-Means++]{.smallcaps} Algorithm [-@arthur2006k] Append all frame samples in $d$ to bank $\mathcal{B}$ $\{\mathcal{B}_1, ..., \mathcal{B}_N\} = \textsc{Re-allocate}(\mathcal{B}, \{c_1, ..., c_N\})$ $\{c_1, ..., c_N\} = \textsc{Renew-centers}(\{\mathcal{B}_1, ..., \mathcal{B}_N\})$ Calculate average cluster size $S_{avg} = len(\mathcal{B})/N$ Threshold cluster size $S_{thr} = \min(S_{avg}, S_{max})$ Maintain the $S_{thr}$-nearest samples to $c_i$ in $\mathcal{B}_i$ Update $\mathcal{B}$ accordingly Set a random weight $\alpha \in (0, 1)$ Find the nearest sample $d_{near}$ to $c_i$ in $\mathcal{B}_i$ $d_{new} = d_{near} \cdot \alpha + c_i \cdot (1-\alpha)$ $\mathcal{B}_i.append(d_{new})$ Update $\mathcal{B}$ accordingly
|
| 36 |
+
:::
|
| 37 |
+
::::
|
| 38 |
+
|
| 39 |
+
To this end, we propose an Online Balanced Clustering algorithm in Alg. [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} to model all the visemes and phonemes equally from input frames. First, we set the number of clusters $N$ to $40$, following the amount of English phonemes [@phy22phoneme]. Then, we set a maximum cluster size $S_{max}$ (*i.e.*, number of samples in each cluster) to control the total memory. We also initialize an empty bank $\mathcal{B}$ as an overall cache, as well as a list of empty banks $\{\mathcal{B}_1, \mathcal{B}_2, ..., \mathcal{B}_N\}$ to cache each cluster.
|
| 40 |
+
|
| 41 |
+
The proposed algorithm is executed in three steps: center initialization, $K$-Means clustering, and re-sampling. First, we collect the first few batches of data frames into $\mathcal{B}$ to initialize $N$ dispersed cluster centers $\{c_1, c_2, ..., c_N\}$, using the $K$-Means++ algorithm [@arthur2006k]. Second, we add the current batch data to bank $\mathcal{B}$ and employ the vanilla $K$-Means algorithm to re-allocate each sample in the bank to the nearest cluster center, after which the new cluster centers are updated. Finally, we propose a re-sampling strategy to balance the size of different clusters as well as control the total memory of bank $\mathcal{B}$, by setting a threshold cluster size $S_{thr}$ (line 12 in Alg. [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}). For those clusters with more than $S_{thr}$ samples (*i.e.*, majority clusters), we perform undersampling by only maintaining the $S_{thr}$ nearest samples to the cluster center. In contrast, for the minority clusters with fewer samples than the threshold, we propose oversampling to interpolate a new sample between the center and the nearest sample with a random weight, inspired by the SMOTE algorithm [@chawla2002smote]. In this way, as illustrated in Fig. [3](#fig3){reference-type="ref" reference="fig3"} (b), the resulting clusters are balanced in size and well separated to better represent each of the visemes and phonemes.
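A minimal sketch of the re-sampling step described above is given below, assuming each cluster bank is an `[n, D]` array of frame features; the function name and the toy dimensions are illustrative only.

```python
import numpy as np

def resample_cluster(samples: np.ndarray, center: np.ndarray, s_thr: int,
                     rng: np.random.Generator) -> np.ndarray:
    """Balance one cluster bank B_i to exactly s_thr samples (re-sampling step of Alg. 1).

    samples: [n, D] frames currently assigned to this cluster; center: [D].
    """
    dist = np.linalg.norm(samples - center, axis=1)
    if len(samples) >= s_thr:
        # undersampling: keep only the s_thr samples nearest to the center
        return samples[np.argsort(dist)[:s_thr]]
    out = list(samples)
    while len(out) < s_thr:
        # oversampling: interpolate between the nearest sample and the center
        # with a random weight alpha in (0, 1), SMOTE-style
        alpha = rng.uniform(0.0, 1.0)
        nearest = samples[np.argmin(dist)]
        out.append(alpha * nearest + (1.0 - alpha) * center)
    return np.stack(out)

rng = np.random.default_rng(0)
big = rng.normal(size=(50, 8))     # majority cluster, pruned down to 20
small = rng.normal(size=(5, 8))    # minority cluster, grown up to 20
c = np.zeros(8)
print(resample_cluster(big, c, 20, rng).shape, resample_cluster(small, c, 20, rng).shape)
```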
|
| 42 |
+
|
| 43 |
+
<figure id="fig3" data-latex-placement="t">
|
| 44 |
+
<embed src="figs_cr/fig3.pdf" />
|
| 45 |
+
<figcaption>t-SNE visualization of clustered phonemes from (a) online clustering (with random pruning to keep fixed cluster size, details are in §<a href="#assec:detail_online_cluster" data-reference-type="ref" data-reference="assec:detail_online_cluster">8.3</a>), and (b) our proposed online balanced clustering. We randomly select six clusters for visualization, and black triangle denotes the cluster center. Dashed ellipses highlight the real phoneme classes, which are confirmed by pre-trained phoneme recognition model <span class="citation" data-cites="phy22phoneme"></span>.</figcaption>
|
| 46 |
+
</figure>
|
| 47 |
+
|
| 48 |
+
After clustering visemes and phonemes in banks, we propose an Adversarial Mutual Information Estimator (AMIE) to construct strong mapping between them. Mutual Information (MI) is a commonly used measure to explore the coherence between two distributions, which is, however, historically difficult to estimate. Recently, @belghazi2018mutual propose a Mutual Information Neural Estimation (MINE) approach to approximate MI lower bound with neural network. Based on that, we propose an adversarial learning approach to maximize the MI between visemes and phonemes, in order to construct strict mapping between them and thus alleviate the ambiguity of homophenes.
|
| 49 |
+
|
| 50 |
+
Mutual information measures the mutual dependency between two probability distributions, $$\begin{equation}
|
| 51 |
+
\label{eq1}
|
| 52 |
+
\begin{aligned}
|
| 53 |
+
I(X, Y) &= \sum_{x\in X} \sum_{y\in Y} p(x, y)\log\frac{p(x, y)}{p(x)p(y)},
|
| 54 |
+
\end{aligned}
|
| 55 |
+
\end{equation}$$ where $p(x, y)$ is the joint probability distribution of $X$ and $Y$, and $p(x)$ and $p(y)$ are the marginals.
|
| 56 |
+
|
| 57 |
+
Therefore, the mutual information can be written in terms of Kullback-Leibler (KL-) divergence: $$\begin{equation}
|
| 58 |
+
\label{eq2}
|
| 59 |
+
\begin{aligned}
|
| 60 |
+
I(X, Y) &= D_{K\hspace{-0.02cm}L}(p(x, y) \hspace{0.1cm}\Vert\hspace{0.1cm} p(x)p(y)),
|
| 61 |
+
\end{aligned}
|
| 62 |
+
\end{equation}$$ where $D_{K\hspace{-0.02cm}L}$ is defined as: $$\begin{equation}
|
| 63 |
+
\label{eq3}
|
| 64 |
+
\begin{aligned}
|
| 65 |
+
D_{K\hspace{-0.02cm}L}(p\hspace{0.1cm}\Vert\hspace{0.1cm} q) &= \sum_{x\in X} p(x)\log\frac{p(x)}{q(x)},
|
| 66 |
+
\end{aligned}
|
| 67 |
+
\end{equation}$$
|
| 68 |
+
|
| 69 |
+
Furthermore, the $KL$-divergence admits the Donsker-Varadhan (DV) representation [@donsker1983asymptotic; @belghazi2018mutual]: $$\begin{equation}
|
| 70 |
+
\label{eq4}
|
| 71 |
+
\begin{aligned}
|
| 72 |
+
D_{K\hspace{-0.02cm}L}(p\hspace{0.1cm}\Vert\hspace{0.1cm} q) &= \sup_{T:\Omega\rightarrow\mathbb{R}} \mathbb{E}_p[T] - \log(\mathbb{E}_q[e^T]),
|
| 73 |
+
\end{aligned}
|
| 74 |
+
\end{equation}$$ where the supremum is taken over all functions $T$ on $\Omega\subset\mathbb{R}^d$ to guarantee two finite expectations. Therefore, we have the MI lower bound: $$\begin{equation}
|
| 75 |
+
\label{eq5}
|
| 76 |
+
\begin{aligned}
|
| 77 |
+
I(X, Y) &\geq I_\Theta(X, Y),
|
| 78 |
+
\end{aligned}
|
| 79 |
+
\end{equation}$$ where $I_\Theta$ is the neural information measure, $$\begin{equation}
|
| 80 |
+
\label{eq6}
|
| 81 |
+
\begin{aligned}
|
| 82 |
+
I_\Theta(X, Y) &= \sup_{\theta\in \Theta} \mathbb{E}_{p(x, y)}[T_\theta(x, y)] \\
|
| 83 |
+
&- \log(\mathbb{E}_{p(x)p(y)}[e^{T_\theta(x, y)}]),
|
| 84 |
+
\end{aligned}
|
| 85 |
+
\end{equation}$$ and $T_\theta$ denotes a trainable neural network.
|
| 86 |
+
|
| 87 |
+
Based on MINE, we propose an Adversarial Mutual Information Estimator to explore and maximize the mutual information between clustered visemes and phonemes. As illustrated in Fig. [2](#fig2){reference-type="ref" reference="fig2"} and [4](#fig4){reference-type="ref" reference="fig4"}, given a visual sequence $f_v$, we send each frame of it into the viseme bank to find the nearest cluster center $c_v$, which forms the viseme sequence $s_v \in \mathbb{R}^{T\times D}$. Similarly, we obtain a phoneme sequence $s_a$ to represent the audio features $f_a$. The neural network $T_\theta$ then takes $\{s_v, s_a\}$ as input and outputs a scalar for MI estimation, where $T_\theta$ is a 3-layer classifier with a 1-dimensional scalar output. Furthermore, since we are not concerned with the exact value of MI when maximizing it, we employ the Jensen-Shannon (JS) representation [@hjelm2018learning] to approximate the $KL$-divergence in Eq. [\[eq4\]](#eq4){reference-type="ref" reference="eq4"}, which has been shown to yield more stable neural network optimization. Therefore, the mutual information between clustered visemes and phonemes is estimated as: $$\begin{equation}
|
| 88 |
+
\label{eq7}
|
| 89 |
+
\begin{aligned}
|
| 90 |
+
I_\Theta^{J\hspace{-0.02cm}S}(s_v, s_a) = \sup_{\theta\in \Theta} &\mathbb{E}_{p(s_v, s_a)}[-\text{sp}(-T_\theta(s_v, s_a))] \\
|
| 91 |
+
- &\mathbb{E}_{p(s_v)p(s_a)}[\text{sp}(T_\theta(s_v, \tilde{s}_a))],
|
| 92 |
+
\end{aligned}
|
| 93 |
+
\end{equation}$$ where $\tilde{s}_a$ is the shuffle-ordered version of $s_a$ that subjects to the marginal distributions of phonemes, and $\text{sp}(z) = \log(1 + e^z)$ is the softplus function.
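For illustration, a minimal PyTorch sketch of the estimate in Eq. [\[eq7\]](#eq7){reference-type="ref" reference="eq7"} is shown below; the small `Critic` network stands in for $T_\theta$, and the layer sizes and toy dimensions are assumptions rather than the exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Critic(nn.Module):
    """Stand-in for T_theta: scores a (viseme, phoneme) frame pair with a scalar."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, v, a):
        return self.net(torch.cat([v, a], dim=-1)).squeeze(-1)

def js_mi_estimate(critic, s_v, s_a):
    """I^JS(s_v, s_a) from Eq. (7): joint pairs vs. frame-shuffled (marginal) pairs."""
    t_joint = critic(s_v, s_a)                        # aligned viseme/phoneme frames
    s_a_shuffled = s_a[torch.randperm(s_a.size(0))]   # \tilde{s}_a: break the pairing
    t_marg = critic(s_v, s_a_shuffled)
    return (-F.softplus(-t_joint)).mean() - F.softplus(t_marg).mean()

# toy usage: T frames of 512-d viseme and phoneme features
T_frames, D = 64, 512
critic = Critic(D)
s_v, s_a = torch.randn(T_frames, D), torch.randn(T_frames, D)
print(js_mi_estimate(critic, s_v, s_a).item())
```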
|
| 94 |
+
|
| 95 |
+
As stated in @belghazi2018mutual, the neural network $T_\theta$ can be used to estimate the MI between generated data ($s_v, s_a$ in our case) by training directly on them. However, this suffers greatly from the poor quality of the generated data at the early training stage. One feasible scheme [@zhu2021arbitrary] is to train $T_\theta$ on real data ($f_v, f_a$ in our case) and then estimate MI on generated data, but this suffers from the ambiguity of homophenes (see Fig. [8](#fig8){reference-type="ref" reference="fig8"}). To this end, we propose AMIE with adversarial learning to estimate and maximize the MI between corresponding visemes and phonemes, which can construct a strict viseme-phoneme mapping without ambiguity.
|
| 96 |
+
|
| 97 |
+
Inspired by GAN [@Goodfellow2014], we design the AMIE as discriminator and the viseme-phoneme banks as generator. Based on that, the adversarial loss is defined as: $$\begin{equation}
|
| 98 |
+
\label{eq8}
|
| 99 |
+
\begin{aligned}
|
| 100 |
+
\mathcal{L}_{G\hspace{-0.02cm}A\hspace{-0.02cm}N} &= \mathcal{L}_{D} + \mathcal{L}_{G} \\
|
| 101 |
+
&= I_\Theta^{J\hspace{-0.02cm}S}(f_v, f_a) + [-I_\Theta^{J\hspace{-0.02cm}S}(s_v, s_a)],
|
| 102 |
+
\end{aligned}
|
| 103 |
+
\end{equation}$$
|
| 104 |
+
|
| 105 |
+
Our framework employs an adversarial learning strategy for optimization, where $D$ and $G$ play a two-player minimax game as detailed in Alg. [\[alg2\]](#alg2){reference-type="ref" reference="alg2"}. As a result, the estimated MI between corresponding visemes and phonemes would be maximized to construct mapping relationships. The strong distinguishing ability of adversarial learning enables strict viseme-phoneme mapping to overcome the ambiguity of homophenes, as shown in Fig. [5](#fig5){reference-type="ref" reference="fig5"}.
|
| 106 |
+
|
| 107 |
+
<figure id="fig4" data-latex-placement="t">
|
| 108 |
+
<embed src="figs_cr/fig4.pdf" />
|
| 109 |
+
<figcaption>Illustration of (a) viseme-phoneme mapping via AMIE, and (b) modality transfer via retrieval.</figcaption>
|
| 110 |
+
</figure>
|
| 111 |
+
|
| 112 |
+
With constructed viseme-phoneme mapping, we can finally implement modality transfer to restore clean audio from lips. As shown in Fig. [4](#fig4){reference-type="ref" reference="fig4"}, given the visual sequence $f_v$ and clustered phoneme centers $\{c_a^1, c_a^2, ..., c_a^N\}$, we calculate an addressing score $\mathcal{A}^{i,j}$ to indicate the probability that the $i$-th visual frame corresponds to the $j$-th phoneme cluster: $$\begin{equation}
|
| 113 |
+
\label{eq9}
|
| 114 |
+
\begin{aligned}
|
| 115 |
+
\mathcal{A}^{i,j} &= \frac{\exp(\langle f_v^i, c_a^j\rangle/\tau)}{\sum_{k=1}^{N} \exp(\langle f_v^i, c_a^k\rangle/\tau)},
|
| 116 |
+
\end{aligned}
|
| 117 |
+
\end{equation}$$ where $\langle\hspace{0.04cm}\cdot, \cdot\hspace{0.04cm}\rangle$ denotes cosine similarity, $\tau$ is temperature weight. The restored clean audio frames are: $$\begin{equation}
|
| 118 |
+
\label{eq10}
|
| 119 |
+
\begin{aligned}
|
| 120 |
+
\hat{f}_a^i &= \sum_{j=1}^{N}\mathcal{A}^{i,j}\cdot c_a^j,
|
| 121 |
+
\end{aligned}
|
| 122 |
+
\end{equation}$$
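A compact sketch of the retrieval-based modality transfer in Eq. [\[eq9\]](#eq9){reference-type="ref" reference="eq9"} and Eq. [\[eq10\]](#eq10){reference-type="ref" reference="eq10"} follows; the tensor shapes, temperature value, and function name are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def restore_audio(f_v: torch.Tensor, phoneme_centers: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Modality transfer via retrieval (Eqs. 9-10).

    f_v: [T, D] visual frame features; phoneme_centers: [N, D] clustered centers c_a.
    Returns restored clean audio features of shape [T, D].
    """
    # cosine-similarity addressing scores A in R^{T x N}, softened by temperature tau
    sim = F.normalize(f_v, dim=-1) @ F.normalize(phoneme_centers, dim=-1).T
    A = F.softmax(sim / tau, dim=-1)
    # each restored frame is the score-weighted sum of phoneme centers
    return A @ phoneme_centers

f_v = torch.randn(75, 512)          # e.g. 75 video frames with 512-d features
centers = torch.randn(40, 512)      # N = 40 phoneme clusters
print(restore_audio(f_v, centers).shape)   # torch.Size([75, 512])
```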
|
| 123 |
+
|
| 124 |
+
To supervise the quality of restored audio $\hat{f}_a = \{\hat{f}_a^i\}_{i=1}^{T}$, we first employ AMIE to maximize the MI between $\hat{f}_a$ and $f_v$, where Eq. [\[eq8\]](#eq8){reference-type="ref" reference="eq8"} is rewritten as: $$\begin{equation}
|
| 125 |
+
\label{eq11}
|
| 126 |
+
\small
|
| 127 |
+
\begin{aligned}
|
| 128 |
+
\mathcal{L}_{G\hspace{-0.02cm}A\hspace{-0.02cm}N} &= I_\Theta^{J\hspace{-0.02cm}S}(f_v, f_a) + [-I_\Theta^{J\hspace{-0.02cm}S}(s_v, s_a)-I_\Theta^{J\hspace{-0.02cm}S}(f_v, \hat{f}_a)],
|
| 129 |
+
\end{aligned}
|
| 130 |
+
\normalsize
|
| 131 |
+
\end{equation}$$ along with a reconstruction loss $\mathcal{L}_{rec} = \Vert \hat{f}_a - f_a\Vert_2$ to enable restoration of high-quality clean audio.
|
| 132 |
+
|
| 133 |
+
The UniVPM is optimized in an end-to-end manner (see Alg. [\[alg2\]](#alg2){reference-type="ref" reference="alg2"}), with the final training objective as: $$\begin{equation}
|
| 134 |
+
\label{eq12}
|
| 135 |
+
\small
|
| 136 |
+
\begin{aligned}
|
| 137 |
+
\mathcal{L} &= \mathcal{L}_{A\hspace{-0.01cm}S\hspace{-0.01cm}R} + \lambda_{G\hspace{-0.02cm}A\hspace{-0.02cm}N} \cdot \mathcal{L}_{G\hspace{-0.02cm}A\hspace{-0.02cm}N} + \lambda_{rec} \cdot \mathcal{L}_{rec} + \lambda_{var} \cdot \mathcal{L}_{var},
|
| 138 |
+
\end{aligned}
|
| 139 |
+
\normalsize
|
| 140 |
+
\end{equation}$$ where $\mathcal{L}_{A\hspace{-0.01cm}S\hspace{-0.01cm}R}$ denotes the downstream speech recognition loss. $\mathcal{L}_{var}$ is a variance regularization term to disperse the clustered viseme and phoneme centers, which aims to ease their mapping construction. $\lambda_{G\hspace{-0.02cm}A\hspace{-0.02cm}N}$, $\lambda_{rec}$ and $\lambda_{var}$ are weighting parameters.
|
| 141 |
+
|
| 142 |
+
::: table*
|
| 143 |
+
:::
|
| 144 |
+
|
| 145 |
+
::: table*
|
| 146 |
+
:::
|
2307.07942/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2307.07942/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,71 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Being a crucial component in the realm of scene understanding, LiDAR-based 3D object detection [@DBLP:conf/cvpr/ShiGJ0SWL20; @DBLP:journals/sensors/YanML18; @DBLP:conf/cvpr/LangVCZYB19; @DBLP:conf/cvpr/ShiWL19] identifies and accurately localizes objects in a 3D scene with the oriented bounding boxes and semantic labels. This technology has facilitated a wide range of applications in environmental perceptions, including robotics, autonomous driving, and augmented reality. With the recent advancements in 3D detection models [@DBLP:conf/cvpr/SchinaglKPRB22; @DBLP:conf/cvpr/DengLSJ22; @DBLP:conf/cvpr/HeLLZ22], highly accurate recognition of objects can be achieved through point cloud projection [@DBLP:conf/cvpr/YangLU18], point feature extraction [@DBLP:conf/cvpr/ShiWL19; @DBLP:conf/iccv/YangS0SJ19; @DBLP:conf/cvpr/YangS0J20; @DBLP:conf/cvpr/LangVCZYB19; @DBLP:conf/eccv/ShiLM22] or voxelization [@DBLP:conf/cvpr/ShiGJ0SWL20; @DBLP:conf/aaai/DengSLZZL21; @DBLP:journals/sensors/YanML18]. However, achieving such performance often comes at the expense of requiring a large volume of labeled point cloud data, which can be costly and time-consuming.
|
| 4 |
+
|
| 5 |
+
To mitigate the labeling costs and optimize the value of annotations, active learning (AL) [@DBLP:journals/csur/RenXCHLGCW22; @DBLP:journals/csur/LiuWRHZ22] has emerged as a promising solution. Active learning involves iteratively selecting the most beneficial samples for label acquisition from a large pool of unlabeled data until the labeling budget is exhausted. This selection process is guided by the selection criteria based on *sample uncertainty* [@DBLP:conf/icml/LewisC94; @5206627margin; @roth2006margin; @Parvaneh_2022_CVPR_feature_mix] and/or *diversity* [@DBLP:conf/iclr/SenerS18; @DBLP:conf/iccv/ElhamifarSYS13; @DBLP:conf/nips/Guo10; @DBLP:journals/ijcv/YangMNCH15]. Both measures are used to assess the ***informativeness*** of the unlabeled samples. Aleatoric uncertainty-driven approaches search for samples that the model is least confident of by using metrics like maximum entropy [@DBLP:conf/cvpr/WuC022] or estimated model changes [@DBLP:conf/cvpr/YooK19; @DBLP:journals/corr/abs-2206-12569]. On the other hand, epistemic uncertainty based methods attempt to find the most representative samples to avoid sample redundancy by using greedy coreset algorithms [@DBLP:conf/iclr/SenerS18] or clustering based approaches [@DBLP:conf/iclr/AshZK0A20].
|
| 6 |
+
|
| 7 |
+
While active learning has proven to be effective in reducing labeling costs for recognition tasks, its application in LiDAR-based object detection has been limited [@DBLP:conf/ivs/FengWRMD19; @DBLP:conf/ivs/SchmidtRTK20; @DBLP:conf/accv/KaoLS018]. This is largely due to its high computational costs and involvement of both detection and regression tasks, which pose significant challenges to the design of the selection criteria. A very recent work [Crb]{.smallcaps} [@DBLP:conf/iclr/Luo23] manually designed three heuristics that allow the acquisition of labels by hierarchically filtering out concise, representative, and geometrically balanced unlabelled point clouds. While effective, it remains unclear how to characterize the sample informativeness for both classification and regression tasks with *one unified measurement*.
|
| 8 |
+
|
| 9 |
+
In this paper, we propose a novel AL strategy called kernel coding rate maximization ([Kecor]{.smallcaps}) for efficient and effective active 3D detection. To endow the model with the ability to reason about the trade-off between information and performance autonomously, we resort to the coding rate theory and modify the formula from feature selection to sample selection, by replacing the covariance estimate with the empirical neural tangent kernel (NTK). The proposed [Kecor]{.smallcaps} strategy allows us to pick the most informative point clouds from the unlabeled pool such that their latent features require the maximal coding length for encoding. To characterize the non-linear relationships between the latent features and the corresponding box predictions spending the least computational costs, we train a proxy network of the 3D detector head with labeled samples and extract the outer product of Jacobians from all proxy layers to form the NTK matrix of all unlabeled samples. Empirical studies evidence that the NTK kernel not only captures non-linearity but takes the aleatoric and epistemic uncertainties into joint consideration, assisting detectors to recognize challenging objects that are of sparse structure. To accommodate both one-stage (*i.e.*, [Second]{.smallcaps}) and two-stage detectors (*i.e.*, [Pv-rcnn]{.smallcaps}), we further incorporate the classification entropy maximization into the selection criteria. Our contributions are summarized as below:
|
| 10 |
+
|
| 11 |
+
1. We propose a novel information-theoretic based criterion [Kecor]{.smallcaps} for cost-effective 3D box annotations that allows for the greedy search of informative point clouds by maximizing the kernel coding rate.
|
| 12 |
+
|
| 13 |
+
2. Our framework is flexible to accommodate different choices of kernels and 3D detector architectures. Empirical NTK kernel used in [Kecor]{.smallcaps} demonstrates a strong capacity to unify both aleatoric and epistemic uncertainties from the model perspective, which helps detectors learn a variety of challenging objects.
|
| 14 |
+
|
| 15 |
+
3. Extensive experiments have been conducted on both 3D benchmarks (*i.e.*, KITTI and Waymo Open) and 2D object detection dataset (*i.e.*, PASCAL VOC07), verifying the effectiveness and versatility of the proposed approach. Experimental results show that the proposed approach achieves a 44.4% reduction of annotations and up to 26.4% less running time compared to the state-of-the-art active 3D detection methods.
|
| 16 |
+
|
| 17 |
+
![An illustration of the workflow of the proposed kernel coding rate maximization for active 3D detection. Dotted boxes indicate the unique components in two-stage 3D detectors (*e.g.*, [Pv-rcnn]{.smallcaps}), while solid boxes indicate the shared components in both one-stage (*e.g.*, [Second]{.smallcaps}) and two-stage detectors.](images/ultimate_flowchart.pdf){#fig:flowchart width="98%"}
|
| 18 |
+
|
| 19 |
+
# Method
|
| 20 |
+
|
| 21 |
+
In this section, we present the mathematical formulation of the problem of active learning for 3D object detection, along with the establishment of the necessary notations.
|
| 22 |
+
|
| 23 |
+
**3D Object Detection**. The typical approach for detecting objects in an orderless point cloud $\mathcal{P}_i$ involves training a 3D object detector to identify and locate the objects of interest, consisting of a set of 3D bounding boxes and their labels $\mathfrak{B}_i = \{b_k, y_k\}_{k\in[N_i]}$, with $N_i$ indicating the number of bounding boxes in the $i$-th point cloud. Each point in $\mathcal{P}_i = \{(x, y, z, r)\}$ is represented by xyz spatial coordinates and additional features such as reflectance $r$. The box annotations $b_k\in\mathbb{R}^7$ include the relative center xyz spatial coordinates to the object ground planes, the box size, the heading angle, and the box label $y_k\in\mathbb{R}^C$, where $C$ indicates the number of classes. As illustrated in Figure [1](#fig:flowchart){reference-type="ref" reference="fig:flowchart"}, modern 3D detectors extract latent features $\mathfrak{m}_i=\bm g(\mathcal{P}_i;\bm \theta_g)\in\mathbb{R}^{d}$ through projection [@DBLP:conf/cvpr/YangLU18], PointNet encoding [@DBLP:conf/cvpr/ShiWL19; @DBLP:conf/iccv/YangS0SJ19; @DBLP:conf/cvpr/YangS0J20; @DBLP:conf/cvpr/LangVCZYB19; @DBLP:conf/eccv/ShiLM22] or voxelization [@DBLP:conf/cvpr/ShiGJ0SWL20; @DBLP:conf/aaai/DengSLZZL21; @DBLP:journals/sensors/YanML18], where dimension $d=W\times H\times F$ is the product of width $W$, length $H$, and channels $F$ of the feature map. The detection head $\bm h(\cdot;\bm \theta_h)$ uses $\mathfrak{m}_i$ as inputs and generates detection outcomes $\mathfrak{\hat{B}}_i = \{\hat{b}_k, \hat{y}_k\}$:
|
| 24 |
+
|
| 25 |
+
$$\begin{equation}
|
| 26 |
+
\mathcal{P}_i \xmapsto[]{\bm g(\cdot; \bm \theta_g)}\mathfrak{m}_i \xmapsto[]{\bm h(\cdot; \bm \theta_h)} \mathcal{\mathfrak{\hat{B}}}_i.
|
| 27 |
+
\end{equation}$$
|
| 28 |
+
|
| 29 |
+
**Active Learning Setup**. In an active learning setup, a small set of labeled point clouds $\mathcal{D}_L=\{\mathcal{P}_i, \mathfrak{B}_i\}_{i\in L}$ and a large pool of raw point clouds $\mathcal{D}_U=\{\mathcal{P}_j\}_{j\in U}$ are given at training time, where $L$ and $U$ are the index sets corresponding to $\mathcal{D}_L$ and $\mathcal{D}_U$, respectively, and the cardinalities of the two sets satisfy $|L| \ll |U|$. During each active learning round $r\in\{1,\ldots, R\}$, a subset of point clouds $\mathcal{D}_r^*$ is selected from $\mathcal{D}_U$ based on a defined active learning policy. The labels of 3D bounding boxes for the chosen point clouds are queried from an oracle $\bm{\Omega}: \mathcal{P}\mapsto \mathfrak{B}$ to create a labeled set $\mathcal{D}_S=\{\mathcal{P}_j, \mathfrak{B}_j\}_{\mathcal{P}_j\in\mathcal{D}_r^*}$. The 3D detection model is pre-trained with $\mathcal{D}_L$ for active selection and then retrained with $\mathcal{D}_{S}\cup \mathcal{D}_L$. The process is repeated until the selected samples reach the final budget $B$, *i.e.,* $\sum_{r=1}^{R}|\mathcal{D}_r^*| = B$.
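This protocol can be summarized by the schematic loop below (a sketch only; `pretrain`, `select`, `retrain`, and `oracle` are placeholder callables standing for detector pre-training, the acquisition policy, model re-training, and the human annotator).

```python
def active_learning_loop(D_L, D_U, oracle, budget_B, rounds_R,
                         pretrain, select, retrain):
    """Schematic pool-based loop: select a subset, query the oracle, retrain."""
    model = pretrain(D_L)
    labeled, per_round = list(D_L), budget_B // rounds_R
    for _ in range(rounds_R):
        chosen = select(model, D_U, n=per_round)       # acquisition policy
        labeled += [(p, oracle(p)) for p in chosen]    # query 3D box labels
        D_U = [p for p in D_U if p not in chosen]      # shrink the unlabeled pool
        model = retrain(labeled)                       # retrain on D_S union D_L
    return model, labeled
```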
|
| 30 |
+
|
| 31 |
+
As explained in Section [2.2](#sec:related_work){reference-type="ref" reference="sec:related_work"}, information theory [@DBLP:books/daglib/0016881] defines the coding rate $\mathfrak{R}(\cdot, \epsilon)$ [@DBLP:journals/pami/MaDHW07] as a measure of lossy data compression, quantifying the achievability of maximum compression while adhering to a desired error upper bound. It is commonly used as an empirical estimation of rate distortion [@DBLP:books/daglib/0016881; @DBLP:journals/tit/BiniaZZ74] indicating the minimal number of binary bits required to represent random variable $\mathbf{Z}$ with the expected decoding error below $\epsilon$. Given a finite set of $n$ samples $\mathbf{Z} = [\mathbf{z}_1,\mathbf{z}_2,...,\mathbf{z}_n] \in \mathbb{R}^{d\times n}$, the coding rate [@DBLP:journals/pami/MaDHW07] with respect to $\mathbf{Z}$ and a distortion $\epsilon$ is given by: $$\begin{equation}
|
| 32 |
+
\vspace{-1ex}\label{eq:coding}
|
| 33 |
+
\mathfrak{R}(\mathbf{Z}, \epsilon) = \frac{1}{2}\log \text{det}(\mathbf{I} + \frac{d}{\epsilon^2 n} \hat{\Sigma}),
|
| 34 |
+
\end{equation}$$ where $\mathbf{I}$ is the $d$-dimensional identity matrix and $\hat{\Sigma} = \mathbf{Z}\mathbf{Z}\tran\in \mathbb{R}^{d\times d}$ is an estimate of the covariance. Theoretical justifications have been provided in [@DBLP:journals/pami/MaDHW07] that the coding vectors in $\mathbf{Z}$ can be explained by packing $\epsilon$-balls into the space spanned by $\mathbf{Z}$ (*sphere packing* [@DBLP:books/daglib/0016881]) or by computing the number of bits needed to quantize the SVD of $\mathbf{Z}$ subject to the precision. As the coding rate produces a good estimate of the compactness of latent features, a few attempts [@DBLP:conf/cvpr/ChristoudiasUD08; @DBLP:journals/corr/abs-2210-11464] have been made in the areas of multi-view learning and contrastive learning, which select informative features from the $d$ dimensions by maximizing the coding rate.
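For concreteness, the coding rate of Equation [\[eq:coding\]](#eq:coding){reference-type="eqref" reference="eq:coding"} can be evaluated directly from a feature matrix; the short NumPy sketch below follows the formula above, with features stacked as the columns of $\mathbf{Z}$ (names and shapes are illustrative only).

```python
import numpy as np

def coding_rate(Z: np.ndarray, eps: float) -> float:
    """R(Z, eps) = 0.5 * logdet(I + d / (eps^2 * n) * Z @ Z.T) for Z of shape (d, n)."""
    d, n = Z.shape
    sigma_hat = Z @ Z.T                          # covariance estimate (d x d)
    mat = np.eye(d) + (d / (eps ** 2 * n)) * sigma_hat
    _, logdet = np.linalg.slogdet(mat)           # numerically stable log-determinant
    return 0.5 * logdet
```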
|
| 35 |
+
|
| 36 |
+
The core task in pool-based active learning is to select the most informative samples from the unlabeled pool $\mathcal{D}_U$, which motivates us to replace the **covariance estimate** of features with the **kernel matrix** of samples in the coding rate formula (see Equation [\[eq:coding\]](#eq:coding){reference-type="eqref" reference="eq:coding"}). For each point cloud subset $\mathcal{D} = \{\mathcal{P}_i\}_{i=1}^n\subset \mathcal{D}_U$ of size $n$, we refer to this new coding length $\mathfrak{R}^{K}(\mathfrak{M}, \epsilon)$ as the **kernel coding rate**; it represents the minimal number of bits to encode the features $\mathfrak{M}$: $$\begin{equation}
|
| 37 |
+
\nonumber
|
| 38 |
+
\mathfrak{M} = \bm g(\mathcal{D}, \bm \theta_g)= [\mathfrak{m}_1,\mathfrak{m}_2,...,\mathfrak{m}_{n}] \in\mathbb{R}^{d\times n}.\vspace{-1ex}
|
| 39 |
+
\end{equation}$$ The latent features extracted from $\bm g(\cdot;\bm \theta_g)$ can help find the most informative samples irrespective of the downstream tasks of classification and/or regression. We mathematically define the kernel coding rate $\mathfrak{R}^{K}(\mathfrak{M}, \epsilon)$ as: $$\begin{equation}
|
| 40 |
+
\mathfrak{R}^{K}(\mathfrak{M}, \epsilon) := \frac{1}{2} \log\text{det} (\mathbf{I} + \frac{n}{\epsilon^2d} \mathbf{K}_{\mathfrak{M}, \mathfrak{M}}),
|
| 41 |
+
\end{equation}$$ with the kernel matrix $\mathbf{K}_{\mathfrak{M}, \mathfrak{M}} = [K(\mathfrak{m}_i, \mathfrak{m}_j)]\in\mathbb{R}^{n \times n}$. In each round $r\in\{1,\ldots, R\}$, we use *greedy search* to find an optimal subset $\mathcal{D}_r^*$ with size $n$ from the unlabeled pool $\mathcal{D}_U$ by maximizing the kernel coding rate: $$\begin{equation}
|
| 42 |
+
\label{eq:old_acquire}\vspace{-1ex}
|
| 43 |
+
\mathcal{D}_r^* = \argmax_{\mathcal{D}\subset \mathcal{D}_U \text{ with } |\mathcal{D}|=n} \mathfrak{R}^{K}(\mathfrak{M}, \epsilon),
|
| 44 |
+
\end{equation}$$ where $\mathfrak{M} = \bm g(\mathcal{D}; \bm \theta_g)$. Notably, in the above equation, we consider a positive semi-definite (PSD) kernel $K: \mathfrak{m}\times\mathfrak{m} \rightarrow \mathbb{R}$, which characterizes the similarity between each pair of point cloud embeddings and hence helps avoid redundancy. The most basic PSD kernel to consider is the linear kernel, defined by the dot product between two features: $$\begin{equation}
|
| 45 |
+
K_{\operatorname{Linear}}(\mathfrak{m}_i, \mathfrak{m}_j) = \langle\mathfrak{m}_i, \mathfrak{m}_j\rangle = \mathfrak{m}_i \tran \mathfrak{m}_j.\vspace{-1ex}
|
| 46 |
+
\end{equation}$$ This kernel can be computed very quickly, yet it has limitations when dealing with high-dimensional input variables, such as in our case where $d = W\times H \times F$. The linear kernel may capture the noise and fluctuations in the data instead of the underlying pattern, making it less generalizable to unseen data. Therefore, while the linear kernel can be a useful starting point, it may be necessary to consider other PSD kernels that are better suited to the specific characteristics of the point cloud data at hand. More discussion on non-linear kernels (*e.g.*, the Laplace RBF kernel) is provided in the supplementary material. In the following subsection, we explain a more appropriate PSD kernel $K$ to be used in [Kecor]{.smallcaps}, with which we can jointly consider *aleatoric* and *epistemic* uncertainties from the model perspective.
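As an illustration of how the acquisition in Equation [\[eq:old_acquire\]](#eq:old_acquire){reference-type="eqref" reference="eq:old_acquire"} could be instantiated, the sketch below evaluates the kernel coding rate on a candidate subset and grows the subset greedily, using the linear kernel purely for simplicity; the feature matrix, the default distortion $\epsilon$, and the brute-force inner loop are illustrative simplifications rather than the exact implementation.

```python
import numpy as np

def kernel_coding_rate(K: np.ndarray, eps: float, d: int) -> float:
    """0.5 * logdet(I + n / (eps^2 * d) * K) for an n x n kernel matrix K."""
    n = K.shape[0]
    return 0.5 * np.linalg.slogdet(np.eye(n) + (n / (eps ** 2 * d)) * K)[1]

def greedy_select(features: np.ndarray, budget: int, eps: float = 0.5):
    """features: (|U|, d) matrix of embeddings m_j; returns indices of the chosen subset."""
    d = features.shape[1]
    K_full = features @ features.T               # linear kernel K(m_i, m_j) = <m_i, m_j>
    selected = []
    for _ in range(budget):
        best_j, best_rate = None, -np.inf
        for j in range(features.shape[0]):
            if j in selected:
                continue
            idx = selected + [j]
            rate = kernel_coding_rate(K_full[np.ix_(idx, idx)], eps, d)
            if rate > best_rate:
                best_j, best_rate = j, rate
        selected.append(best_j)
    return selected
```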
|
| 47 |
+
|
| 48 |
+
Compared with the linear kernel, the empirical neural tangent kernel (NTK) [@DBLP:conf/nips/JacotHG18; @DBLP:conf/icml/NovakSS22], defined as the outer product of the neural network Jacobians, has been shown to lead to improved generalization performance in deep learning models. The resulting NTK matrix quantifies *how changes in the inputs affect the outputs* and captures the relationships between the inputs and outputs in a compact and interpretable way.
|
| 49 |
+
|
| 50 |
+
To efficiently compute the NTK kernel matrix, we first consider an $(L+1)$-layer fully connected neural network $\bm f(\cdot;\bm \theta): \mathfrak{m}\mapsto \hat{\mathfrak{B}}$ as a proxy network for the detection head $\bm h(\cdot;\bm \theta_h)$, as shown in Figure [1](#fig:flowchart){reference-type="ref" reference="fig:flowchart"}. The $l$-th layer of the proxy network $\bm f$ has $d_l$ neurons, where $l$ ranges from $0$ to $L$. In the forward pass, the output of the $l$-th layer is defined as $$\begin{equation}
|
| 51 |
+
\label{eq:mlp}
|
| 52 |
+
\bm f^{(l)}(\mathfrak{m}_i; \bm \theta^{(l)}) = \sigma(\frac{1}{\sqrt{d_l}} \bm W^{(l)}\bm f^{(l-1)}(\mathfrak{m}_i) + \beta \bm b^{(l)}),
|
| 53 |
+
\end{equation}$$ where $\beta \geq 0$ is a constant controlling the effect of the bias and $\bm f^{(0)}(\mathfrak{m}_i) = \mathfrak{m}_i$. $\sigma(\cdot)$ stands for a pointwise nonlinear function. Note that the weight matrix $\bm W^{(l)}\in\mathbb{R}^{d_l\times d_{l-1}}$ is rescaled by $1/\sqrt{d_l}$ to avoid divergence, which is referred to as *NTK parameterization* [@DBLP:conf/nips/JacotHG18]. For notational simplicity, we denote $\bm f^{(l)}(\mathfrak{m}_i; \bm \theta^{(l)})$ as $\bm{f}^{(l)}_i$. We omit the bias term and rewrite Equation [\[eq:mlp\]](#eq:mlp){reference-type="eqref" reference="eq:mlp"} as $$\begin{equation}
|
| 54 |
+
\label{eq:mlp_new}
|
| 55 |
+
\bm{f}^{(l)}_i = \bm{\tilde{W}}^{(l)} \mathfrak{\tilde{m}}_i^{(l-1)},
|
| 56 |
+
\end{equation}$$ where $\tilde{\bm W}^{(l)} = [\bm W^{(l)}, \bm b^{(l)}]\in\mathbb{R}^{d_l\times(d_{l-1}+1)}$ and $\mathfrak{\tilde{m}}_i^{(l-1)} = [\frac{\sigma}{\sqrt{d_l}}\bm f_i^{(l-1)}; \sigma \beta]\in\mathbb{R}^{d_{l-1}+1}$. We denote all parameters in the proxy network as $\bm\theta = [\tilde{\bm W}^{(1)}, \ldots, \tilde{\bm W}^{(L)}]$. To endow the proxy network $\bm f$ with the capability to mimic the behavior of the detector head, we train the proxy $\bm f$ on the labeled data $\mathcal{D}_L$ using an empirical regression loss function $\mathcal{L}: \mathbb{R}^{d_{L}}\rightarrow \mathbb{R}_+$, *e.g.*, the mean squared error (MSE), to supervise the 3D box and ROI predictions. It has been shown that training neural networks with the MSE loss amounts to solving a linear regression problem with the kernel trick [@DBLP:conf/nips/JacotHG18], where the kernel $K_{\operatorname{NTK}}$ is defined as the inner product of the derivatives of the $l$-th-layer output with respect to the parameters $\bm\theta$, evaluated at initialization: $$\begin{equation}
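A minimal sketch of the proxy network's forward pass under NTK parameterization (Equation [\[eq:mlp\]](#eq:mlp){reference-type="eqref" reference="eq:mlp"}) is shown below; the ReLU nonlinearity, the linear output layer, and the random initialization are assumptions made for illustration rather than the exact configuration.

```python
import torch

def init_proxy(dims):
    """dims = [d_0, d_1, ..., d_L]; returns randomly initialized weights and biases."""
    weights = [torch.randn(d_out, d_in, requires_grad=True)
               for d_in, d_out in zip(dims[:-1], dims[1:])]
    biases = [torch.randn(d_out, requires_grad=True) for d_out in dims[1:]]
    return weights, biases

def proxy_forward(m, weights, biases, beta=0.1):
    """Applies f^(l) = sigma(W^(l) f^(l-1) / sqrt(d_l) + beta * b^(l)) layer by layer."""
    f = m
    for l, (W, b) in enumerate(zip(weights, biases)):
        pre = (W @ f) / (W.shape[0] ** 0.5) + beta * b       # 1/sqrt(d_l) NTK rescaling
        f = torch.relu(pre) if l < len(weights) - 1 else pre  # keep the last layer linear
    return f
```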
|
| 57 |
+
K_{\operatorname{NTK}}(\mathfrak{m}_i, \mathfrak{m}_j) = \langle\nabla_{\bm \theta} \bm f^{(l)}(\mathfrak{\bm m}_{i}), \nabla_{\bm \theta} \bm f^{(l)}(\mathfrak{\bm m}_{j})\rangle.
|
| 58 |
+
\end{equation}$$ By incorporating Equation [\[eq:mlp_new\]](#eq:mlp_new){reference-type="eqref" reference="eq:mlp_new"} and the chain rule, we obtain the factorization of derivatives that gives the final form of the empirical NTK kernel: $$\begin{equation}
|
| 59 |
+
\begin{split}
|
| 60 |
+
\hspace{-1ex}K_{\operatorname{NTK}}(\mathfrak{m}_i, \mathfrak{m}_j) & =\sum_{l=1}^L\langle\frac{\mathrm{d} \boldsymbol{f}_i^{(L)}}{\mathrm{d} \boldsymbol{f}_i^{(l)}}\left(\tilde{\mathfrak{m}}_i^{(l-1)}\right)\tran, \frac{\mathrm{d} \boldsymbol{f}_j^{(L)}}{\mathrm{d} \boldsymbol{f}_j^{(l)}}\left(\tilde{\mathfrak{m}}_j^{(l-1)}\right)\tran\rangle_F \nonumber\\
|
| 61 |
+
& =\sum_{l=1}^L \left\langle\tilde{\mathfrak{m}}_i^{(l-1)}, \tilde{\mathfrak{m}}_j^{(l-1)}\right\rangle \cdot\left\langle\frac{\mathrm{d} \boldsymbol{f}_i^{(L)}}{\mathrm{d} \boldsymbol{f}_i^{(l)}}, \frac{\mathrm{d} \boldsymbol{f}_j^{(L)}}{\mathrm{d} \boldsymbol{f}_j^{(l)}}\right\rangle,
|
| 62 |
+
\end{split}
|
| 63 |
+
\end{equation}$$ where $\langle \cdot, \cdot \rangle_F$ indicates the Frobenius inner product. The above equation demonstrates that the NTK kernel is constructed by taking into account the gradient contributions from multiple layers, which naturally captures the *epistemic uncertainty* in the detector's behavior.
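The empirical NTK entry can equivalently be computed with automatic differentiation, as in the hedged sketch below; summing the network output to a scalar before differentiating is a simplification (the NTK is defined per output dimension), and `forward_fn`/`params` refer to the illustrative proxy sketched earlier.

```python
import torch

def param_grad(forward_fn, params, m):
    """Flattened gradient of a scalar surrogate of the output w.r.t. the parameters."""
    out = forward_fn(m).sum()                     # scalar surrogate of the network output
    grads = torch.autograd.grad(out, params)
    return torch.cat([g.reshape(-1) for g in grads])

def ntk_entry(forward_fn, params, m_i, m_j):
    """K_NTK(m_i, m_j) = <grad_theta f(m_i), grad_theta f(m_j)>."""
    return torch.dot(param_grad(forward_fn, params, m_i),
                     param_grad(forward_fn, params, m_j))

def last_layer_entry(forward_fn, last_layer_params, m_i, m_j):
    """The same inner product restricted to the last-layer parameters (the K_Last variant discussed below)."""
    return ntk_entry(forward_fn, last_layer_params, m_i, m_j)
```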
|
| 64 |
+
|
| 65 |
+
To verify the validity of aggregating gradients from multiple layers, we derive a simplified variant of the NTK kernel $K_{\operatorname{NTK}}$, which only considers the gradients *w.r.t.* the parameters of the last layer of the proxy network: $$\begin{equation}
|
| 66 |
+
\hspace{-1ex}K_{\operatorname{Last}}(\mathfrak{m}_i, \mathfrak{m}_j) = \langle\nabla_{\tilde{\bm W}^{(L)}} \bm f^{(l)}(\mathfrak{\bm m}_{i}), \nabla_{\tilde{\bm W}^{(L)}} \bm f^{(l)}(\mathfrak{\bm m}_{j})\rangle.
|
| 67 |
+
\end{equation}$$ We have conducted extensive experiments to compare the impact of the different kernels used in the kernel coding rate maximization criterion, as shown in Section [5.4.1](#sec:abla_kernel){reference-type="ref" reference="sec:abla_kernel"}. Empirical results suggest that one-stage detectors generally favor $K_{\operatorname{Last}}$, while two-stage detectors tend to perform better with $K_{\operatorname{NTK}}$ on 3D detection tasks.
|
| 68 |
+
|
| 69 |
+
As described in Equation [\[eq:old_acquire\]](#eq:old_acquire){reference-type="eqref" reference="eq:old_acquire"}, our approach selects the most informative point clouds based on the extracted features $\mathfrak{m}$ and gradient maps, and thereby facilitates downstream predictions in the detector head. However, for two-stage detectors like [Pv-rcnn]{.smallcaps}, the classification prediction is made in the region proposal network (refer to the dotted boxes and lines in Figure [1](#fig:flowchart){reference-type="ref" reference="fig:flowchart"}) before features are fed into the detector head. Therefore, the features $\mathfrak{m}$ alone cannot determine the informativeness for the box classification task. To make the proposed [Kecor]{.smallcaps} strategy applicable to both one-stage and two-stage detectors, we introduce a modified acquisition function with an entropy regularization term: $$\begin{equation}
|
| 70 |
+
\hspace{-1ex}\mathcal{D}_r^* = \argmax_{\mathcal{D}\subset \mathcal{D}_U \text{ with } |\mathcal{D}|=n} \mathfrak{R}^{K}(\mathfrak{M}, \epsilon) + \sigma_{\operatorname{ent}} \mathcal{H}(\hat{Y}),\vspace{-0.5ex}
|
| 71 |
+
\end{equation}$$ where $\mathcal{H}(\cdot)$ represents the mean entropy of all classification logits generated from the classifier. The effect of the hyperparameter $\sigma_{\operatorname{ent}}$ is studied in Section [5.4.2](#sec:abla_ent){reference-type="ref" reference="sec:abla_ent"}. The overall algorithm is summarized in the supplementary material.
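A small sketch of the entropy-regularized acquisition score is shown below; `class_probs` is an illustrative $(n, C)$ array of softmax outputs from the classifier, and the default value of the hyperparameter is arbitrary rather than the setting studied in the ablation.

```python
import numpy as np

def kernel_coding_rate(K, eps, d):
    """0.5 * logdet(I + n / (eps^2 * d) * K) for an n x n kernel matrix K."""
    n = K.shape[0]
    return 0.5 * np.linalg.slogdet(np.eye(n) + (n / (eps ** 2 * d)) * K)[1]

def mean_entropy(class_probs):
    """Mean Shannon entropy over all classification outputs."""
    p = np.clip(class_probs, 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=-1)))

def acquisition_score(K_subset, class_probs, eps, d, sigma_ent=1.0):
    """Kernel coding rate of a candidate subset plus the entropy regularizer."""
    return kernel_coding_rate(K_subset, eps, d) + sigma_ent * mean_entropy(class_probs)
```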
|
2307.07988/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2307.07988/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,82 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
This work addresses continuous space-time video super-resolution (C-STVSR). The task of C-STVSR is to increase simultaneously the spatial resolution and temporal frame rate of an input video by arbitrary scaling factors with a single model. It is to be distinguished from fixed-scale space-time video super-resolution (F-STVSR), for which a model is learned to perform space-time super-resolution for only one specific spatiotemporal scale. Compared to F-STVSR, C-STVSR is more flexible and practical in real-world scenarios, which often call for up-scaling low-resolution and low-frame-rate videos of varied spatiotemporal resolutions on heterogeneous video-enabled devices.
|
| 4 |
+
|
| 5 |
+
C-STVSR remains largely under-explored. One trivial solution to C-STVSR is to perform continuous video frame interpolation [2, 12, 22, 21, 36], followed by
|
| 6 |
+
|
| 7 |
+

|
| 8 |
+
|
| 9 |
+
Figure 1: Illustrations of (a) VideoINR [6] and (b) MoTIF. The red dash lines highlight their major differences.
|
| 10 |
+
|
| 11 |
+
interpolating individual video frames with continuous image super-resolution [5, 37, 15], or the other way around. However, their divide-and-conquer nature of treating C-STVSR as two independent sub-tasks, i.e., temporal interpolation and spatial super-resolution, misses the opportunity to attain the best achievable performance. By leveraging the spatiotemporal information in an end-to-end optimized fashion, some recent works [11, 34, 35, 9] for F-STVSR adopt a one-stage approach, combining the extraction of individual frame features and the temporal aggregation of these features into a unified task. Nonetheless, these F-STVSR methods can hardly be extended straightforwardly to C-STVSR.
|
| 12 |
+
|
| 13 |
+
Inspired by continuous image super-resolution [5, 37, 15], VideoINR [6] presents an early attempt at C-STVSR. Given any query coordinates (x, y, t) in the continuous
|
| 14 |
+
|
| 15 |
+
|
| 16 |
+
|
| 17 |
+

|
| 18 |
+
|
| 19 |
+
Figure 2: Illustration of backward and forward motion. The circles denote pixels accessible in the input video. The dashed lines display the motion trajectories of pixels in the reference frame at t=0. The blue arrows are backward/forward motion in the form of displacement vectors. The red arrows show the displacement vectors for an arbitrary time instance that are to be predicted from blue arrows.
|
| 20 |
+
|
| 21 |
+
spatiotemporal space, it takes the latent representation of the input video as the contextual information to decode the corresponding RGB value. The process involves learning a spatial implicit neural function (S-INF in Fig. 1 (a)) for super-resoluting the frame features, followed by learning another temporal implicit neural function (T-INF in Fig. 1 (a)) to generate motion estimates at time t to backward warp the super-resoluted frame features. However, learning implicitly backward motion (indicating displacement vectors that identify matching pixels/features in the reference frame) as a function of time is challenging. Essentially, the backward motion at the same spatial coordinates (x, y) yet at different time instances t may capture the motion trajectories of different pixels/features in the reference frame. For example, in Fig. 2 (a), the backward motion vectors of $p_2$ at t=1 and t=2 are governed by the two distinct motion trajectories that originate from pixels $p_1$ and $p_2$ in the reference frame at t = 0, respectively. In other words, the backward motion vectors at $p_2$, when viewed as a function of time, are a mixture of multiple motion trajectories. This could potentially introduce undesirable randomness and discontinuities in the resulting time function, which must be learned by T-INF in Fig. 1 (a). Furthermore, learning implicitly such a time function based solely on frame features complicates the task.
|
| 22 |
+
|
| 23 |
+
To circumvent the aforementioned issues, we propose learning *forward motion* of pixels in the form of motion trajectories with a space-time implicit neural function (ST-INF in Fig. 1 (b)). Considering each reference frame in the input video as sitting at the origin in time, our ST-INF takes (x,y,t) as input and outputs a displacement vector that specifies where the pixel at the coordinates (x,y) of the reference frame will appear in a synthesized frame at time t. That is, it encodes the motion trajectory of the
|
| 24 |
+
|
| 25 |
+

|
| 26 |
+
|
| 27 |
+
Figure 3: Illustration of fixed-scale video frame interpolation (F-VFI), continuous video frame interpolation (C-VFI), fixed-scale video super-resolution (F-VSR), fixed-scale space-time video super-resolution (F-STVSR), TM-Net [35], and continuous space-time video super-resolution (C-STVSR) in terms of their supported space-time scales.
|
| 28 |
+
|
| 29 |
+
pixel at (x,y), e.g. the highlighted motion trajectory of $p_2$ in the reference frame at t=0 in Fig. 2 (b). Moreover, to facilitate the learning of such a neural function in an explicit way, we supply forward optical flow maps estimated between reference frames as the contextual information (i.e. $M_{0\to 1}^L, M_{1\to 0}^L$ in Fig. 1 (b)). Our space-time neural function is also learned to predict the reliability of every motion trajectory (i.e. $\hat{Z}_{0\to t}^H, \hat{Z}_{1\to t}^H$ in Fig. 1 (b)), which is essential to ensure the quality of forward warping. Explicit motion modeling allows us to extract rough reliability estimates from the input video for better prediction.
|
| 30 |
+
|
| 31 |
+
Fig. 1 (b) depicts our end-to-end trainable C-STVSR framework, MoTIF. The main contributions of our work include: (1) we propose a space-time local implicit neural function that predicts *forward* motion and its reliability in a continuous manner; (2) we propose a reliability-aware splatting and decoding scheme that fuses simultaneously information from multiple reference frames; and (3) our MoTIF achieves the state-of-the-art performance on C-STVSR and provides out-of-distribution generalization.
|
| 32 |
+
|
| 33 |
+
# Method
|
| 34 |
+
|
| 35 |
+
Given two low-resolution RGB video frames $I_0^L, I_1^L \in \mathbb{R}^{3 \times H \times W}$ of size $H \times W$ , our task is to interpolate a high-resolution video frame $I_t^H \in \mathbb{R}^{3 \times H' \times W'}$ with an arbitrary scale $s = W'/W = H'/H \geq 1$ and at any time $t \in [0,1]$ .
|
| 36 |
+
|
| 37 |
+
Fig. 4 depicts our proposed MoTIF, which comprises four major components and operates as follows. First, given $I_0^L$ and $I_1^L$, (1) the encoder $E_I$ converts them into their latent representations $F_0^L, F_1^L, F_{(0,1)}^L \in \mathbb{R}^{C \times H \times W}$, where $F^L_{(0,1)}$ serves as a rough estimate of the feature of the target frame $I_t^H$. Similar to recent STVSR works [35, 6], we adopt the off-the-shelf video-based encoder from [34], which fuses information from both $I_0^L$ and $I_1^L$ in generating $F_0^L, F_1^L$ and $F_{(0,1)}^L$. Second, (2) the spatial local implicit neural function (S-INF) is queried to super-resolute $F_0^L, F_1^L$ as $F_0^H, F_1^H \in \mathbb{R}^{C \times H' \times W'}$, respectively. Our S-INF follows the design of LIIF [5]. Third, considering $I_0^L$ as sitting at the origin in time, (3) the motion encoder $E_M$ encodes $M_{0 \to 1}^L \in \mathbb{R}^{2 \times H \times W}$, namely the forward optical flow map capturing the forward motion from $I_0^L$ to $I_1^L$, together with its reliability map $Z_{0 \to 1}^L \in \mathbb{R}^{3 \times H \times W}$ into $T_0^L \in \mathbb{R}^{C \times H \times W}$. The optical flow estimation is not always perfect; $Z_{0\rightarrow 1}^L$ indicates how reliable $M_{0\to 1}^L$ is across spatial locations (x,y) (Section 3.2). Fourth, using $T_0^L$ as the motion latent, (4) our space-time local implicit neural function (ST-INF) renders a high-resolution forward motion map $\hat{M}_{0 \to t}^H \in \mathbb{R}^{2 \times H' \times W'}$ and its reliability map $\hat{Z}_{0 \to t}^H \in \mathbb{R}^{H' \times W'}$ according to the query space-time coordinates (x,y,t). $\hat{M}_{0 \to t}^H$ specifies the forward motion of the features in $F_0^H$ and is utilized to forward warp $F_0^H$ to $F_t^H$ (Section 3.2). The same motion encoding, rendering and warping processes are repeated for $I_1^L$, aggregating temporally the information from all the reference frames. Lastly, we follow [22] to perform softmax splatting to create $F_t^H$ and $Z_t^H$, which are further combined with $F_{(0,1)}^{H}$ to decode the high-resolution video frame $\hat{I}_t^H$ at time t (Section 3.3). $Z_t^H$ indicates how good $F_t^H$ is across spatial locations. It is used to condition the pixel-based decoding of the RGB values from $F_t^H$ and $F_{(0,1)}^H$.
|
| 38 |
+
|
| 39 |
+

|
| 40 |
+
|
| 41 |
+
Figure 4: The proposed MoTIF for C-STVSR, where the dash double arrows represent the shared-weight networks.
|
| 42 |
+
|
| 43 |
+

|
| 44 |
+
|
| 45 |
+
Figure 5: Illustration of low-resolution coordinates (blue dots) and high-resolution coordinates (green dots).
|
| 46 |
+
|
| 47 |
+
The very core of our C-STVSR scheme is the space-time local implicit neural function (ST-INF) in Fig. 4. Our ST-INF has the striking feature of predicting forward motion rather than backward motion. That is, it specifies how the feature at coordinates p=(x,y) in $F_0^H$ or $F_1^H$ is propagated temporally to any designated time t. The forward motion is represented in the form of displacement vectors along with their reliability values. For example, to get the forward motion $\hat{M}_{0\to t}^H(p)$ and its reliability value $\hat{Z}_{0\to t}^H(p)$ for propagating the feature $F_0^H(p)$ of $F_0^H$ at p=(x,y), it is queried as follows:
|
| 48 |
+
|
| 49 |
+
$$\{\hat{Z}_{t_r \to t}^H(p), \hat{M}_{t_r \to t}^H(p)\} = f_{\theta}(v_r, p - p_r, t - t_r),$$
|
| 50 |
+
(1)
|
| 51 |
+
|
| 52 |
+
where $v_r = T_0^L(p_r)$ is the motion latent at $p_r = (x_r, y_r)$ that is nearest to the query coordinates p = (x, y), $t_r = 0$ is the temporal location where the reference frame $I_0^L$ sits, and $\theta$ denotes the network parameters. Fig. 5 depicts an example of the geometrical relationship between p and $p_r$. The sum $p + \hat{M}_{0 \to t}^H(p)$ gives the landing location of the query feature
|
| 53 |
+
|
| 54 |
+
$F_0^H(p)$ at time t. In much the same way, $\hat{M}_{1 \to t}^H(p)$ and $\hat{Z}_{1 \to t}^H(p)$ for propagating the feature $F_1^H(p)$ can be obtained by having in Eq. (1) $v_r = T_1^L(p_r)$ and $t_r = 1$ , i.e. the temporal location of $I_1^L$ .
|
| 55 |
+
|
| 56 |
+
In Eq. (1), both p=(x,y) and t can take any values. Together they can refer to any space-time coordinates. Therefore, $f_{\theta}$ is able to generate forward motion in a continuous manner to warp $F_0^H, F_1^H$ of any spatial resolution to any time instance $t \in [0,1]$ . However, in essence, $f_{\theta}$ is a local function that predicts forward motion in the vicinity of the reference space-time coordinates $p_r, t_r$ by referring to the local motion latent $v_r$ .
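To make the query of Eq. (1) concrete, a minimal PyTorch sketch of such a local implicit function is given below; the MLP depth, the layer widths, and the way the nearest latent and relative coordinates are concatenated are illustrative assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn

class STQuery(nn.Module):
    """f_theta(v_r, p - p_r, t - t_r) -> (reliability, 2-D displacement)."""
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))               # 1 reliability value + 2 motion components

    def forward(self, v_r, rel_xy, rel_t):
        # v_r: (B, latent_dim) nearest motion latent; rel_xy: (B, 2) p - p_r; rel_t: (B, 1) t - t_r
        out = self.mlp(torch.cat([v_r, rel_xy, rel_t], dim=-1))
        z_hat, m_hat = out[:, :1], out[:, 1:]
        return z_hat, m_hat
```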
|
| 57 |
+
|
| 58 |
+
**Learning motion trajectories.** Learning forward motion can be interpreted as learning motion trajectories along the temporal axis. To see this, in Eq. (1), we fix p = (x, y) at some coordinates, e.g. $p_2$ in Fig. 2 (b), take $t_r = 0$, and view $f_{\theta}$ as a function of time t. With this setting, the forward motion predicted by $f_{\theta}$ specifies a displacement vector indicating where $p_2$ should appear at time t. Collectively, the displacement vectors evaluated at different time instances t's define the motion trajectory of $p_2$. Generally, this motion trajectory is a smooth function of time and is relatively easier to approximate. While it is completely feasible to change the output semantics of $f_{\theta}$ to learn backward motion, the resulting time function can be discontinuous. The reason is illustrated in Fig. 2 (a), where fixing the query coordinates p at $p_2$, $f_{\theta}$ returns at every time instance t a backward displacement vector identifying the location of the matching pixel/feature in the reference frame
|
| 59 |
+
|
| 60 |
+
at $t_r=0$ . In this case, the displacement vectors evaluated for the same $p_2$ yet at different time instances t's may correspond to the distinct motion trajectories of different matching pixels. This suggests that $f_\theta$ has to model a less smooth function of time. Section 5.1 presents an ablation study to justify the use of forward motion.
|
| 61 |
+
|
| 62 |
+
**Learning motion latents.** Predicting the forward motion of a pixel (or a feature vector) at any given p = (x, y) and for any t is a non-trivial task. We formulate the problem as learning a $f_{\theta}$ that interpolates between forward motion sampled sparsely in both the spatial and temporal dimensions. This is achieved by providing $f_{\theta}$ with the motion latent that encodes the sparsely sampled forward motion as the contextual input. Take Eq. (1) as an example, where $f_{\theta}$ is queried to predict the forward motion of $F_0^H(p)$ for time t. The prediction is conditioned on the nearest motion latent $T_0^L(p_r)$, which captures the forward motion $M_{0\to 1}^L$ estimated from $I_0^L$ to $I_1^L$ in the vicinity of $p_r$. In this work, we adopt Raft-lite [31] to estimate the forward optical flow map $M_{0\rightarrow 1}^L$. Recognizing that the flow estimation is often not perfect, we follow [20] to quantify the reliability of the resulting flow map $M_{0\rightarrow 1}^L$ based on three metrics: (1) the intensity warping error, (2) the flow warping error, and (3) the local variances of the flow map. Further details of these metrics are provided in the supplementary document. The reliability evaluation with each of these metrics yields a real-valued map of the same size as $M_{0\to 1}^L$, reflecting the reliability of $M^L_{0 \to 1}$ across spatial locations. These maps are concatenated channel-wise to form $Z_{0\to 1}^L$, which is encoded jointly with $M_{0\rightarrow 1}^L$ by the motion encoder $E_M$ as $T_0^L$. Section 5.1 shows that $Z_{0\to 1}^L$ benefits $f_\theta$ considerably in interpolating forward motion.
|
| 63 |
+
|
| 64 |
+
To come up with a prediction of $F_t^H$ for decoding a high-resolution video frame $\hat{I}_t^H$ at time t, we aggregate temporally $F_0^H, F_1^H$ , each of which represents the high-resolution feature of a reference frame (Fig. 4). Inspired by [22], we adopt softmax splatting to resolve the potential issue that multiple features from $F_0^H, F_1^H$ or both may be forward warped to the same location in $F_t^H$ . Considering that our task is to interpolate and super-resolute a new frame from the ground up, we perform softmax splatting after $F_0^H, F_1^H$ have both been forward warped to time t. Our approach differs from [22], which targets video frame interpolation and applies softmax splatting separately to individual reference frames for late fusion. In symbols, we have
|
| 65 |
+
|
| 66 |
+
$$F_t^H(p) = \sum_{i=0}^{1} \sum_{q} \frac{b(u) \cdot \exp(\alpha \cdot \hat{Z}_{i \to t}^H(q)) \cdot F_i^H(q)}{\sum_{i=0}^{1} \sum_{q} b(u) \cdot \exp(\alpha \cdot \hat{Z}_{i \to t}^H(q))}, \quad (2)$$
|
| 67 |
+
|
| 68 |
+
where the feature $F_t^H(p)$ of $F_t^H$ at p is formulated as a weighted sum of all the reference features $F_0^H(q), F_1^H(q)$ , with the weighting determined by the distance $u=p-(q+\hat{M}_{i\to t}^H(q))$ , the bilinear kernel $b(u)=\max(0,1-|u_x|)\cdot\max(0,1-|u_y|)$ , as well as the reliability $\hat{Z}_{i\to t}^H(q)$ of the forward motion at q. $\alpha=-20$ is the temperature of the softmax operation. Since the bilinear kernel has a finite support, only those $F_0^H(q), F_1^H(q)$ warped to the neighborhood of p will actually contribute to the evaluation of $F_t^H(p)$ .
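A simplified, per-pixel Python sketch of this reliability-aware splatting is shown below; it loops over all source positions for clarity, whereas in practice only the pixels whose warped positions fall inside the bilinear support of p contribute, and the tensor layouts are illustrative.

```python
import math

def splat_at(p, feats, flows, reliab, alpha=-20.0):
    """feats: list of (C, H, W) features F_i^H; flows: list of (2, H, W) maps M_i->t;
    reliab: list of (H, W) reliability maps Z_i->t; p = (x, y) target coordinates."""
    num, den = 0.0, 0.0
    for F, M, Z in zip(feats, flows, reliab):
        _, H, W = F.shape
        for y in range(H):
            for x in range(W):
                ux = p[0] - (x + float(M[0, y, x]))      # u = p - (q + M_{i->t}(q))
                uy = p[1] - (y + float(M[1, y, x]))
                b = max(0.0, 1 - abs(ux)) * max(0.0, 1 - abs(uy))   # bilinear kernel b(u)
                if b > 0:
                    w = b * math.exp(alpha * float(Z[y, x]))
                    num = num + w * F[:, y, x]
                    den = den + w
    return num / (den + 1e-8)
```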
|
| 69 |
+
|
| 70 |
+
Additionally, we generate a map $Z_t^H$ to indicate how good $F_t^H$ is across spatial locations. Intuitively, if $F_t^H(p)$ is synthesized from those $F_0^H(q)$ , $F_1^H(q)$ whose forward motion is unreliable, the quality of $F_t^H(p)$ should be downgraded. $Z_t^H(p)$ serves as a conditioning factor for decoding the RGB values at p, and is obtained by
|
| 71 |
+
|
| 72 |
+
$$Z_t^H(p) = \max_{i=0,1} \max_{q} b(u) \cdot \exp\left(\alpha \cdot \hat{Z}_{i \to t}^H(q)\right), \quad (3)$$
|
| 73 |
+
|
| 74 |
+
which takes the maximum value among the (unnormalized) contributing weights from $F_0^H(q), F_1^H(q)$ . When none of these contributing $F_0^H(q), F_1^H(q)$ has reliable forward motion, the quality of $F_t^H(p)$ is regarded as poor.
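Under the same simplified setting as the splatting sketch above, the quality value of Eq. (3) can be evaluated as the maximum unnormalized contributing weight, for instance:

```python
import math

def quality_at(p, flows, reliab, alpha=-20.0):
    """Z_t^H(p): maximum unnormalized contributing weight over both reference frames."""
    best = 0.0
    for M, Z in zip(flows, reliab):
        _, H, W = M.shape
        for y in range(H):
            for x in range(W):
                ux = p[0] - (x + float(M[0, y, x]))
                uy = p[1] - (y + float(M[1, y, x]))
                b = max(0.0, 1 - abs(ux)) * max(0.0, 1 - abs(uy))
                best = max(best, b * math.exp(alpha * float(Z[y, x])))
    return best
```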
|
| 75 |
+
|
| 76 |
+
To synthesize a high-resolution video frame $\hat{I}_t^H$ , we implement a pixel-wise decoder that incorporates a multi-layer perceptron. It decodes the RGB values at p by taking as inputs $F_t^H(p)$ , $F_{(0,1)}^H(p)$ , $Z_t^H(p)$ , and t (the rightmost part of Fig. 4).
|
| 77 |
+
|
| 78 |
+
We train our MoTIF end-to-end with the following objective:
|
| 79 |
+
|
| 80 |
+
$$\mathcal{L} = \mathcal{L}_{char}(\hat{I}_t^H, I_t^H) + \beta \sum_{i=0}^{1} \mathcal{L}_{char}(\hat{M}_{i \to t}^H, M_{i \to t}^H), \quad (4)$$
|
| 81 |
+
|
| 82 |
+
where $\mathcal{L}_{char}(\hat{x},x) = \sqrt{\|\hat{x}-x\|^2 + \epsilon^2}$ is the Charbonnier loss [13] and $\beta$ is a hyper-parameter. $\epsilon, \beta$ are set empirically to $10^{-3}$ and 0.01, respectively. Our objective requires both the decoded frame $\hat{I}_t^H$ and the predicted forward motion $\hat{M}_{i \to t}^H$ to approximate their respective ground-truths.
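A small PyTorch sketch of the Charbonnier loss and the combined objective of Eq. (4) is given below; the tensor shapes and the availability of the forward-motion ground truths $M_{i \to t}^H$ are assumed as in the text.

```python
import torch

def charbonnier(x_hat, x, eps=1e-3):
    """L_char(x_hat, x) = sqrt(||x_hat - x||^2 + eps^2)."""
    return torch.sqrt(((x_hat - x) ** 2).sum() + eps ** 2)

def motif_loss(I_hat, I_gt, M_hats, M_gts, beta=0.01):
    """Frame reconstruction term plus beta-weighted forward-motion terms (i = 0, 1)."""
    loss = charbonnier(I_hat, I_gt)
    for M_hat, M_gt in zip(M_hats, M_gts):
        loss = loss + beta * charbonnier(M_hat, M_gt)
    return loss
```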
|
2310.04742/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1,545 @@
|
| 1 |
+
<mxfile host="Electron" modified="2023-09-28T17:41:39.242Z" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/21.6.8 Chrome/114.0.5735.289 Electron/25.5.0 Safari/537.36" version="21.6.8" etag="ZlXd6RoAd3X_GmC2N4wM" type="device">
|
| 2 |
+
<diagram name="第 1 页" id="G8rS29YyAsj4sIlPqWDc">
|
| 3 |
+
<mxGraphModel dx="2852" dy="2738" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="0" pageScale="1" pageWidth="827" pageHeight="1169" math="1" shadow="0">
|
| 4 |
+
<root>
|
| 5 |
+
<mxCell id="0" />
|
| 6 |
+
<mxCell id="1" parent="0" />
|
| 7 |
+
<mxCell id="2" value="" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#FFFFFF;fontColor=#333333;strokeColor=#FF6666;arcSize=5;fontSize=28;strokeWidth=7;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 8 |
+
<mxGeometry x="-50" y="-40" width="420" height="530" as="geometry" />
|
| 9 |
+
</mxCell>
|
| 10 |
+
<mxCell id="3" value="" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f5f5f5;fontColor=#333333;strokeColor=#666666;arcSize=5;fontSize=30;fontFamily=Times New Roman;strokeWidth=4;" vertex="1" parent="1">
|
| 11 |
+
<mxGeometry x="-40" y="-10" width="400" height="470" as="geometry" />
|
| 12 |
+
</mxCell>
|
| 13 |
+
<mxCell id="4" value="" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f5f5f5;fontColor=#333333;strokeColor=#666666;arcSize=5;fontSize=30;fontFamily=Times New Roman;strokeWidth=4;" vertex="1" parent="1">
|
| 14 |
+
<mxGeometry x="-520" y="-10" width="400" height="470" as="geometry" />
|
| 15 |
+
</mxCell>
|
| 16 |
+
<mxCell id="5" value="" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f5f5f5;dashed=1;fontColor=#333333;strokeColor=#666666;fontSize=30;fontFamily=Times New Roman;strokeWidth=4;" vertex="1" parent="1">
|
| 17 |
+
<mxGeometry x="-460" y="230" width="300" height="170" as="geometry" />
|
| 18 |
+
</mxCell>
|
| 19 |
+
<mxCell id="6" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.077;entryY=1.002;entryDx=0;entryDy=0;entryPerimeter=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="7" target="14" parent="1">
|
| 20 |
+
<mxGeometry relative="1" as="geometry" />
|
| 21 |
+
</mxCell>
|
| 22 |
+
<mxCell id="7" value="\(W_q\)" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 23 |
+
<mxGeometry x="-440" y="330" width="40" height="40" as="geometry" />
|
| 24 |
+
</mxCell>
|
| 25 |
+
<mxCell id="8" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;endArrow=none;endFill=0;startArrow=classic;startFill=1;strokeWidth=2;entryX=0.5;entryY=0;entryDx=0;entryDy=0;fontSize=20;fontFamily=Times New Roman;" edge="1" source="10" parent="1">
|
| 26 |
+
<mxGeometry relative="1" as="geometry">
|
| 27 |
+
<mxPoint x="-310" y="490" as="targetPoint" />
|
| 28 |
+
</mxGeometry>
|
| 29 |
+
</mxCell>
|
| 30 |
+
<mxCell id="9" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="10" target="14" parent="1">
|
| 31 |
+
<mxGeometry relative="1" as="geometry" />
|
| 32 |
+
</mxCell>
|
| 33 |
+
<mxCell id="10" value="\(W_v\)" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 34 |
+
<mxGeometry x="-330" y="330" width="40" height="40" as="geometry" />
|
| 35 |
+
</mxCell>
|
| 36 |
+
<mxCell id="11" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.922;entryY=1.01;entryDx=0;entryDy=0;entryPerimeter=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="12" target="14" parent="1">
|
| 37 |
+
<mxGeometry relative="1" as="geometry" />
|
| 38 |
+
</mxCell>
|
| 39 |
+
<mxCell id="12" value="\(W_k\)" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 40 |
+
<mxGeometry x="-220" y="330" width="40" height="40" as="geometry" />
|
| 41 |
+
</mxCell>
|
| 42 |
+
<mxCell id="13" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="14" target="20" parent="1">
|
| 43 |
+
<mxGeometry relative="1" as="geometry" />
|
| 44 |
+
</mxCell>
|
| 45 |
+
<mxCell id="14" value="Attention" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 46 |
+
<mxGeometry x="-440" y="260" width="260" height="40" as="geometry" />
|
| 47 |
+
</mxCell>
|
| 48 |
+
<mxCell id="15" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;strokeWidth=2;fontSize=20;fontFamily=Times New Roman;" edge="1" target="7" parent="1">
|
| 49 |
+
<mxGeometry relative="1" as="geometry">
|
| 50 |
+
<mxPoint x="-310" y="490" as="sourcePoint" />
|
| 51 |
+
<Array as="points">
|
| 52 |
+
<mxPoint x="-310" y="410" />
|
| 53 |
+
<mxPoint x="-420" y="410" />
|
| 54 |
+
</Array>
|
| 55 |
+
</mxGeometry>
|
| 56 |
+
</mxCell>
|
| 57 |
+
<mxCell id="16" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;strokeWidth=2;fontSize=20;fontFamily=Times New Roman;" edge="1" target="12" parent="1">
|
| 58 |
+
<mxGeometry relative="1" as="geometry">
|
| 59 |
+
<mxPoint x="-310" y="490" as="sourcePoint" />
|
| 60 |
+
<Array as="points">
|
| 61 |
+
<mxPoint x="-310" y="410" />
|
| 62 |
+
<mxPoint x="-200" y="410" />
|
| 63 |
+
</Array>
|
| 64 |
+
</mxGeometry>
|
| 65 |
+
</mxCell>
|
| 66 |
+
<mxCell id="17" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" target="20" parent="1">
|
| 67 |
+
<mxGeometry relative="1" as="geometry">
|
| 68 |
+
<mxPoint x="-470" y="400" as="targetPoint" />
|
| 69 |
+
<mxPoint x="-311" y="425" as="sourcePoint" />
|
| 70 |
+
<Array as="points">
|
| 71 |
+
<mxPoint x="-310" y="425" />
|
| 72 |
+
<mxPoint x="-310" y="426" />
|
| 73 |
+
<mxPoint x="-490" y="426" />
|
| 74 |
+
<mxPoint x="-490" y="180" />
|
| 75 |
+
<mxPoint x="-438" y="180" />
|
| 76 |
+
</Array>
|
| 77 |
+
</mxGeometry>
|
| 78 |
+
</mxCell>
|
| 79 |
+
<mxCell id="18" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="20" target="22" parent="1">
|
| 80 |
+
<mxGeometry relative="1" as="geometry" />
|
| 81 |
+
</mxCell>
|
| 82 |
+
<mxCell id="19" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" target="24" parent="1">
|
| 83 |
+
<mxGeometry relative="1" as="geometry">
|
| 84 |
+
<mxPoint x="-309" y="147" as="sourcePoint" />
|
| 85 |
+
<Array as="points">
|
| 86 |
+
<mxPoint x="-310" y="147" />
|
| 87 |
+
<mxPoint x="-310" y="146" />
|
| 88 |
+
<mxPoint x="-490" y="146" />
|
| 89 |
+
<mxPoint x="-490" y="40" />
|
| 90 |
+
</Array>
|
| 91 |
+
</mxGeometry>
|
| 92 |
+
</mxCell>
|
| 93 |
+
<mxCell id="20" value="Add &amp; Layer Norm" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 94 |
+
<mxGeometry x="-440" y="160" width="260" height="40" as="geometry" />
|
| 95 |
+
</mxCell>
|
| 96 |
+
<mxCell id="21" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="22" target="24" parent="1">
|
| 97 |
+
<mxGeometry relative="1" as="geometry" />
|
| 98 |
+
</mxCell>
|
| 99 |
+
<mxCell id="22" value="Feed Forward" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 100 |
+
<mxGeometry x="-440" y="90" width="260" height="40" as="geometry" />
|
| 101 |
+
</mxCell>
|
| 102 |
+
<mxCell id="23" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="24" parent="1">
|
| 103 |
+
<mxGeometry relative="1" as="geometry">
|
| 104 |
+
<mxPoint x="-310" y="-40" as="targetPoint" />
|
| 105 |
+
</mxGeometry>
|
| 106 |
+
</mxCell>
|
| 107 |
+
<mxCell id="24" value="Add &amp; Layer Norm" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 108 |
+
<mxGeometry x="-440" y="20" width="260" height="40" as="geometry" />
|
| 109 |
+
</mxCell>
|
| 110 |
+
<mxCell id="25" value="\(\times N\)" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=30;fontFamily=Times New Roman;fontColor=default;" vertex="1" parent="1">
|
| 111 |
+
<mxGeometry x="-188" y="-10" width="60" height="30" as="geometry" />
|
| 112 |
+
</mxCell>
|
| 113 |
+
<mxCell id="26" value="" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f5f5f5;dashed=1;fontColor=#333333;strokeColor=#666666;fontSize=30;fontFamily=Times New Roman;strokeWidth=4;" vertex="1" parent="1">
|
| 114 |
+
<mxGeometry x="20" y="230" width="300" height="170" as="geometry" />
|
| 115 |
+
</mxCell>
|
| 116 |
+
<mxCell id="27" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.077;entryY=1.002;entryDx=0;entryDy=0;entryPerimeter=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="28" target="35" parent="1">
|
| 117 |
+
<mxGeometry relative="1" as="geometry" />
|
| 118 |
+
</mxCell>
|
| 119 |
+
<mxCell id="28" value="\(W_q\)" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 120 |
+
<mxGeometry x="40" y="330" width="40" height="40" as="geometry" />
|
| 121 |
+
</mxCell>
|
| 122 |
+
<mxCell id="29" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;endArrow=none;endFill=0;startArrow=classic;startFill=1;strokeWidth=2;entryX=0.5;entryY=0;entryDx=0;entryDy=0;fontSize=20;fontFamily=Times New Roman;" edge="1" source="31" parent="1">
|
| 123 |
+
<mxGeometry relative="1" as="geometry">
|
| 124 |
+
<mxPoint x="170" y="490" as="targetPoint" />
|
| 125 |
+
</mxGeometry>
|
| 126 |
+
</mxCell>
|
| 127 |
+
<mxCell id="30" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="31" target="35" parent="1">
|
| 128 |
+
<mxGeometry relative="1" as="geometry" />
|
| 129 |
+
</mxCell>
|
| 130 |
+
<mxCell id="31" value="\(W_v\)" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 131 |
+
<mxGeometry x="150" y="330" width="40" height="40" as="geometry" />
|
| 132 |
+
</mxCell>
|
| 133 |
+
<mxCell id="32" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.922;entryY=1.01;entryDx=0;entryDy=0;entryPerimeter=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="33" target="35" parent="1">
|
| 134 |
+
<mxGeometry relative="1" as="geometry" />
|
| 135 |
+
</mxCell>
|
| 136 |
+
<mxCell id="33" value="\(W_k\)" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 137 |
+
<mxGeometry x="260" y="330" width="40" height="40" as="geometry" />
|
| 138 |
+
</mxCell>
|
| 139 |
+
<mxCell id="34" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="35" target="41" parent="1">
|
| 140 |
+
<mxGeometry relative="1" as="geometry" />
|
| 141 |
+
</mxCell>
|
| 142 |
+
<mxCell id="35" value="Attention" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 143 |
+
<mxGeometry x="40" y="260" width="260" height="40" as="geometry" />
|
| 144 |
+
</mxCell>
|
| 145 |
+
<mxCell id="36" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;strokeWidth=2;fontSize=20;fontFamily=Times New Roman;" edge="1" target="28" parent="1">
|
| 146 |
+
<mxGeometry relative="1" as="geometry">
|
| 147 |
+
<mxPoint x="170" y="490" as="sourcePoint" />
|
| 148 |
+
<Array as="points">
|
| 149 |
+
<mxPoint x="170" y="410" />
|
| 150 |
+
<mxPoint x="60" y="410" />
|
| 151 |
+
</Array>
|
| 152 |
+
</mxGeometry>
|
| 153 |
+
</mxCell>
|
| 154 |
+
<mxCell id="37" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;strokeWidth=2;fontSize=20;fontFamily=Times New Roman;" edge="1" target="33" parent="1">
|
| 155 |
+
<mxGeometry relative="1" as="geometry">
|
| 156 |
+
<mxPoint x="170" y="490" as="sourcePoint" />
|
| 157 |
+
<Array as="points">
|
| 158 |
+
<mxPoint x="170" y="410" />
|
| 159 |
+
<mxPoint x="280" y="410" />
|
| 160 |
+
</Array>
|
| 161 |
+
</mxGeometry>
|
| 162 |
+
</mxCell>
|
| 163 |
+
<mxCell id="38" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" target="41" parent="1">
|
| 164 |
+
<mxGeometry relative="1" as="geometry">
|
| 165 |
+
<mxPoint x="10" y="400" as="targetPoint" />
|
| 166 |
+
<mxPoint x="169" y="425" as="sourcePoint" />
|
| 167 |
+
<Array as="points">
|
| 168 |
+
<mxPoint x="170" y="425" />
|
| 169 |
+
<mxPoint x="170" y="426" />
|
| 170 |
+
<mxPoint x="-10" y="426" />
|
| 171 |
+
<mxPoint x="-10" y="180" />
|
| 172 |
+
<mxPoint x="42" y="180" />
|
| 173 |
+
</Array>
|
| 174 |
+
</mxGeometry>
|
| 175 |
+
</mxCell>
|
| 176 |
+
<mxCell id="39" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="41" target="43" parent="1">
|
| 177 |
+
<mxGeometry relative="1" as="geometry" />
|
| 178 |
+
</mxCell>
|
| 179 |
+
<mxCell id="40" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" target="45" parent="1">
|
| 180 |
+
<mxGeometry relative="1" as="geometry">
|
| 181 |
+
<mxPoint x="171" y="147" as="sourcePoint" />
|
| 182 |
+
<Array as="points">
|
| 183 |
+
<mxPoint x="170" y="147" />
|
| 184 |
+
<mxPoint x="170" y="146" />
|
| 185 |
+
<mxPoint x="-10" y="146" />
|
| 186 |
+
<mxPoint x="-10" y="40" />
|
| 187 |
+
</Array>
|
| 188 |
+
</mxGeometry>
|
| 189 |
+
</mxCell>
|
| 190 |
+
<mxCell id="41" value="Add &amp; Layer Norm" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 191 |
+
<mxGeometry x="40" y="160" width="260" height="40" as="geometry" />
|
| 192 |
+
</mxCell>
|
| 193 |
+
<mxCell id="42" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="43" target="45" parent="1">
|
| 194 |
+
<mxGeometry relative="1" as="geometry" />
|
| 195 |
+
</mxCell>
|
| 196 |
+
<mxCell id="43" value="Feed Forward" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 197 |
+
<mxGeometry x="40" y="90" width="260" height="40" as="geometry" />
|
| 198 |
+
</mxCell>
|
| 199 |
+
<mxCell id="44" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="45" parent="1">
|
| 200 |
+
<mxGeometry relative="1" as="geometry">
|
| 201 |
+
<mxPoint x="170" y="-40" as="targetPoint" />
|
| 202 |
+
</mxGeometry>
|
| 203 |
+
</mxCell>
|
| 204 |
+
<mxCell id="45" value="Add &amp; Layer Norm" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 205 |
+
<mxGeometry x="40" y="20" width="260" height="40" as="geometry" />
|
| 206 |
+
</mxCell>
|
| 207 |
+
<mxCell id="46" value="\(\times N\)" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=30;fontFamily=Times New Roman;fontColor=default;" vertex="1" parent="1">
|
| 208 |
+
<mxGeometry x="294" y="-9" width="60" height="30" as="geometry" />
|
| 209 |
+
</mxCell>
|
| 210 |
+
<mxCell id="47" value="" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f5f5f5;fontColor=#333333;strokeColor=#666666;arcSize=5;fontSize=30;fontFamily=Times New Roman;strokeWidth=4;" vertex="1" parent="1">
|
| 211 |
+
<mxGeometry x="440" y="-10" width="400" height="470" as="geometry" />
|
| 212 |
+
</mxCell>
|
| 213 |
+
<mxCell id="48" value="" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f5f5f5;dashed=1;fontColor=#333333;strokeColor=#666666;fontSize=30;fontFamily=Times New Roman;strokeWidth=4;" vertex="1" parent="1">
|
| 214 |
+
<mxGeometry x="500" y="230" width="300" height="170" as="geometry" />
|
| 215 |
+
</mxCell>
|
| 216 |
+
<mxCell id="49" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.077;entryY=1.002;entryDx=0;entryDy=0;entryPerimeter=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="50" target="55" parent="1">
|
| 217 |
+
<mxGeometry relative="1" as="geometry" />
|
| 218 |
+
</mxCell>
|
| 219 |
+
<mxCell id="50" value="\(W_q\)" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 220 |
+
<mxGeometry x="520" y="330" width="40" height="40" as="geometry" />
|
| 221 |
+
</mxCell>
|
| 222 |
+
<mxCell id="51" value="\(W_v\)" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 223 |
+
<mxGeometry x="630" y="330" width="40" height="40" as="geometry" />
|
| 224 |
+
</mxCell>
|
| 225 |
+
<mxCell id="52" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.922;entryY=1.01;entryDx=0;entryDy=0;entryPerimeter=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="53" target="55" parent="1">
|
| 226 |
+
<mxGeometry relative="1" as="geometry" />
|
| 227 |
+
</mxCell>
|
| 228 |
+
<mxCell id="53" value="\(W_k\)" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 229 |
+
<mxGeometry x="740" y="330" width="40" height="40" as="geometry" />
|
| 230 |
+
</mxCell>
|
| 231 |
+
<mxCell id="54" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="55" target="60" parent="1">
|
| 232 |
+
<mxGeometry relative="1" as="geometry" />
|
| 233 |
+
</mxCell>
|
| 234 |
+
<mxCell id="55" value="Attention" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 235 |
+
<mxGeometry x="520" y="260" width="260" height="40" as="geometry" />
|
| 236 |
+
</mxCell>
|
| 237 |
+
<mxCell id="56" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;strokeWidth=2;fontSize=20;fontFamily=Times New Roman;" edge="1" target="53" parent="1">
|
| 238 |
+
<mxGeometry relative="1" as="geometry">
|
| 239 |
+
<mxPoint x="650" y="490" as="sourcePoint" />
|
| 240 |
+
<Array as="points">
|
| 241 |
+
<mxPoint x="650" y="410" />
|
| 242 |
+
<mxPoint x="760" y="410" />
|
| 243 |
+
</Array>
|
| 244 |
+
</mxGeometry>
|
| 245 |
+
</mxCell>
|
| 246 |
+
<mxCell id="57" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" target="60" parent="1">
|
| 247 |
+
<mxGeometry relative="1" as="geometry">
|
| 248 |
+
<mxPoint x="490" y="400" as="targetPoint" />
|
| 249 |
+
<mxPoint x="649" y="425" as="sourcePoint" />
|
| 250 |
+
<Array as="points">
|
| 251 |
+
<mxPoint x="650" y="425" />
|
| 252 |
+
<mxPoint x="650" y="426" />
|
| 253 |
+
<mxPoint x="470" y="426" />
|
| 254 |
+
<mxPoint x="470" y="180" />
|
| 255 |
+
<mxPoint x="522" y="180" />
|
| 256 |
+
</Array>
|
| 257 |
+
</mxGeometry>
|
| 258 |
+
</mxCell>
|
| 259 |
+
<mxCell id="58" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="60" target="62" parent="1">
|
| 260 |
+
<mxGeometry relative="1" as="geometry" />
|
| 261 |
+
</mxCell>
|
| 262 |
+
<mxCell id="59" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" target="64" parent="1">
|
| 263 |
+
<mxGeometry relative="1" as="geometry">
|
| 264 |
+
<mxPoint x="651" y="147" as="sourcePoint" />
|
| 265 |
+
<Array as="points">
|
| 266 |
+
<mxPoint x="650" y="147" />
|
| 267 |
+
<mxPoint x="650" y="146" />
|
| 268 |
+
<mxPoint x="470" y="146" />
|
| 269 |
+
<mxPoint x="470" y="40" />
|
| 270 |
+
</Array>
|
| 271 |
+
</mxGeometry>
|
| 272 |
+
</mxCell>
|
| 273 |
+
<mxCell id="60" value="Add &amp; Layer Norm" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 274 |
+
<mxGeometry x="520" y="160" width="260" height="40" as="geometry" />
|
| 275 |
+
</mxCell>
|
| 276 |
+
<mxCell id="61" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="62" target="64" parent="1">
|
| 277 |
+
<mxGeometry relative="1" as="geometry" />
|
| 278 |
+
</mxCell>
|
| 279 |
+
<mxCell id="62" value="Feed Forward" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 280 |
+
<mxGeometry x="520" y="90" width="260" height="40" as="geometry" />
|
| 281 |
+
</mxCell>
|
| 282 |
+
<mxCell id="63" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="64" parent="1">
|
| 283 |
+
<mxGeometry relative="1" as="geometry">
|
| 284 |
+
<mxPoint x="650" y="-40" as="targetPoint" />
|
| 285 |
+
</mxGeometry>
|
| 286 |
+
</mxCell>
|
| 287 |
+
<mxCell id="64" value="Add &amp; Layer Norm" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 288 |
+
<mxGeometry x="520" y="20" width="260" height="40" as="geometry" />
|
| 289 |
+
</mxCell>
|
| 290 |
+
<mxCell id="65" value="\(\times N\)" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=30;fontFamily=Times New Roman;fontColor=default;" vertex="1" parent="1">
|
| 291 |
+
<mxGeometry x="776" y="-10" width="60" height="30" as="geometry" />
|
| 292 |
+
</mxCell>
|
| 293 |
+
<mxCell id="66" value="" style="shape=trapezoid;perimeter=trapezoidPerimeter;whiteSpace=wrap;html=1;fixedSize=1;size=10;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 294 |
+
<mxGeometry x="570" y="350" width="40" height="20" as="geometry" />
|
| 295 |
+
</mxCell>
|
| 296 |
+
<mxCell id="67" value="" style="shape=trapezoid;perimeter=trapezoidPerimeter;whiteSpace=wrap;html=1;fixedSize=1;size=10;rotation=-180;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 297 |
+
<mxGeometry x="570" y="330" width="40" height="20" as="geometry" />
|
| 298 |
+
</mxCell>
|
| 299 |
+
<mxCell id="68" value="" style="shape=trapezoid;perimeter=trapezoidPerimeter;whiteSpace=wrap;html=1;fixedSize=1;size=10;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 300 |
+
<mxGeometry x="680" y="350" width="40" height="20" as="geometry" />
|
| 301 |
+
</mxCell>
|
| 302 |
+
<mxCell id="69" value="" style="shape=trapezoid;perimeter=trapezoidPerimeter;whiteSpace=wrap;html=1;fixedSize=1;size=10;rotation=-180;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 303 |
+
<mxGeometry x="680" y="330" width="40" height="20" as="geometry" />
|
| 304 |
+
</mxCell>
|
| 305 |
+
<mxCell id="70" value="" style="endArrow=classic;html=1;rounded=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;shape=connector;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" target="66" parent="1">
|
| 306 |
+
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
| 307 |
+
<mxPoint x="540" y="390" as="sourcePoint" />
|
| 308 |
+
<mxPoint x="590" y="310" as="targetPoint" />
|
| 309 |
+
<Array as="points">
|
| 310 |
+
<mxPoint x="590" y="390" />
|
| 311 |
+
</Array>
|
| 312 |
+
</mxGeometry>
|
| 313 |
+
</mxCell>
|
| 314 |
+
<mxCell id="71" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;entryX=0.076;entryY=1.009;entryDx=0;entryDy=0;entryPerimeter=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="67" target="55" parent="1">
|
| 315 |
+
<mxGeometry relative="1" as="geometry">
|
| 316 |
+
<Array as="points">
|
| 317 |
+
<mxPoint x="590" y="318" />
|
| 318 |
+
<mxPoint x="540" y="318" />
|
| 319 |
+
</Array>
|
| 320 |
+
</mxGeometry>
|
| 321 |
+
</mxCell>
|
| 322 |
+
<mxCell id="72" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;strokeWidth=2;fontSize=20;fontFamily=Times New Roman;" edge="1" target="50" parent="1">
|
| 323 |
+
<mxGeometry relative="1" as="geometry">
|
| 324 |
+
<mxPoint x="650" y="490" as="sourcePoint" />
|
| 325 |
+
<Array as="points">
|
| 326 |
+
<mxPoint x="650" y="410" />
|
| 327 |
+
<mxPoint x="540" y="410" />
|
| 328 |
+
</Array>
|
| 329 |
+
</mxGeometry>
|
| 330 |
+
</mxCell>
|
| 331 |
+
<mxCell id="73" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="51" target="55" parent="1">
|
| 332 |
+
<mxGeometry relative="1" as="geometry" />
|
| 333 |
+
</mxCell>
|
| 334 |
+
<mxCell id="74" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;endArrow=none;endFill=0;startArrow=classic;startFill=1;strokeWidth=2;entryX=0.5;entryY=0;entryDx=0;entryDy=0;fontSize=20;fontFamily=Times New Roman;" edge="1" source="51" parent="1">
|
| 335 |
+
<mxGeometry relative="1" as="geometry">
|
| 336 |
+
<mxPoint x="650" y="490" as="targetPoint" />
|
| 337 |
+
</mxGeometry>
|
| 338 |
+
</mxCell>
|
| 339 |
+
<mxCell id="75" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;entryX=0.076;entryY=1.009;entryDx=0;entryDy=0;entryPerimeter=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" parent="1">
|
| 340 |
+
<mxGeometry relative="1" as="geometry">
|
| 341 |
+
<mxPoint x="700" y="330" as="sourcePoint" />
|
| 342 |
+
<mxPoint x="650" y="300" as="targetPoint" />
|
| 343 |
+
<Array as="points">
|
| 344 |
+
<mxPoint x="700" y="318" />
|
| 345 |
+
<mxPoint x="650" y="318" />
|
| 346 |
+
</Array>
|
| 347 |
+
</mxGeometry>
|
| 348 |
+
</mxCell>
|
| 349 |
+
<mxCell id="76" value="" style="endArrow=classic;html=1;rounded=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;shape=connector;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" parent="1">
|
| 350 |
+
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
| 351 |
+
<mxPoint x="651" y="390" as="sourcePoint" />
|
| 352 |
+
<mxPoint x="701" y="370" as="targetPoint" />
|
| 353 |
+
<Array as="points">
|
| 354 |
+
<mxPoint x="701" y="390" />
|
| 355 |
+
</Array>
|
| 356 |
+
</mxGeometry>
|
| 357 |
+
</mxCell>
|
| 358 |
+
<mxCell id="77" value="" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f5f5f5;fontColor=#333333;strokeColor=#666666;arcSize=5;fontSize=30;fontFamily=Times New Roman;strokeWidth=4;" vertex="1" parent="1">
|
| 359 |
+
<mxGeometry x="920" y="-10" width="400" height="470" as="geometry" />
|
| 360 |
+
</mxCell>
|
| 361 |
+
<mxCell id="78" value="" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f5f5f5;dashed=1;fontColor=#333333;strokeColor=#666666;fontSize=30;arcSize=10;fontFamily=Times New Roman;strokeWidth=4;" vertex="1" parent="1">
|
| 362 |
+
<mxGeometry x="980" y="230" width="300" height="170" as="geometry" />
|
| 363 |
+
</mxCell>
|
| 364 |
+
<mxCell id="79" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.077;entryY=1.002;entryDx=0;entryDy=0;entryPerimeter=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="80" target="85" parent="1">
|
| 365 |
+
<mxGeometry relative="1" as="geometry" />
|
| 366 |
+
</mxCell>
|
| 367 |
+
<mxCell id="80" value="\(W_q\)" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 368 |
+
<mxGeometry x="1000" y="330" width="40" height="40" as="geometry" />
|
| 369 |
+
</mxCell>
|
| 370 |
+
<mxCell id="81" value="\(W_v\)" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 371 |
+
<mxGeometry x="1110" y="330" width="40" height="40" as="geometry" />
|
| 372 |
+
</mxCell>
|
| 373 |
+
<mxCell id="82" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.922;entryY=1.01;entryDx=0;entryDy=0;entryPerimeter=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="83" target="85" parent="1">
|
| 374 |
+
<mxGeometry relative="1" as="geometry" />
|
| 375 |
+
</mxCell>
|
| 376 |
+
<mxCell id="83" value="\(W_k\)" style="rounded=0;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 377 |
+
<mxGeometry x="1220" y="330" width="40" height="40" as="geometry" />
|
| 378 |
+
</mxCell>
|
| 379 |
+
<mxCell id="84" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="85" target="90" parent="1">
|
| 380 |
+
<mxGeometry relative="1" as="geometry" />
|
| 381 |
+
</mxCell>
|
| 382 |
+
<mxCell id="85" value="Attention" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 383 |
+
<mxGeometry x="1000" y="260" width="260" height="40" as="geometry" />
|
| 384 |
+
</mxCell>
|
| 385 |
+
<mxCell id="86" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;strokeWidth=2;fontSize=20;fontFamily=Times New Roman;" edge="1" target="83" parent="1">
|
| 386 |
+
<mxGeometry relative="1" as="geometry">
|
| 387 |
+
<mxPoint x="1130" y="490" as="sourcePoint" />
|
| 388 |
+
<Array as="points">
|
| 389 |
+
<mxPoint x="1130" y="410" />
|
| 390 |
+
<mxPoint x="1240" y="410" />
|
| 391 |
+
</Array>
|
| 392 |
+
</mxGeometry>
|
| 393 |
+
</mxCell>
|
| 394 |
+
<mxCell id="87" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" target="90" parent="1">
|
| 395 |
+
<mxGeometry relative="1" as="geometry">
|
| 396 |
+
<mxPoint x="970" y="400" as="targetPoint" />
|
| 397 |
+
<mxPoint x="1129" y="425" as="sourcePoint" />
|
| 398 |
+
<Array as="points">
|
| 399 |
+
<mxPoint x="1130" y="425" />
|
| 400 |
+
<mxPoint x="1130" y="426" />
|
| 401 |
+
<mxPoint x="950" y="426" />
|
| 402 |
+
<mxPoint x="950" y="180" />
|
| 403 |
+
<mxPoint x="1002" y="180" />
|
| 404 |
+
</Array>
|
| 405 |
+
</mxGeometry>
|
| 406 |
+
</mxCell>
|
| 407 |
+
<mxCell id="88" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="90" target="92" parent="1">
|
| 408 |
+
<mxGeometry relative="1" as="geometry" />
|
| 409 |
+
</mxCell>
|
| 410 |
+
<mxCell id="89" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=1;orthogonalLoop=1;jettySize=auto;html=1;entryX=0;entryY=0.5;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" target="95" parent="1">
|
| 411 |
+
<mxGeometry relative="1" as="geometry">
|
| 412 |
+
<mxPoint x="1131" y="147" as="sourcePoint" />
|
| 413 |
+
<Array as="points">
|
| 414 |
+
<mxPoint x="1130" y="147" />
|
| 415 |
+
<mxPoint x="1130" y="146" />
|
| 416 |
+
<mxPoint x="950" y="146" />
|
| 417 |
+
<mxPoint x="950" y="40" />
|
| 418 |
+
</Array>
|
| 419 |
+
</mxGeometry>
|
| 420 |
+
</mxCell>
|
| 421 |
+
<mxCell id="90" value="Add &amp; Layer Norm" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 422 |
+
<mxGeometry x="1000" y="160" width="260" height="40" as="geometry" />
|
| 423 |
+
</mxCell>
|
| 424 |
+
<mxCell id="91" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="92" target="95" parent="1">
|
| 425 |
+
<mxGeometry relative="1" as="geometry" />
|
| 426 |
+
</mxCell>
|
| 427 |
+
<mxCell id="92" value="Feed Forward" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 428 |
+
<mxGeometry x="1000" y="90" width="260" height="40" as="geometry" />
|
| 429 |
+
</mxCell>
|
| 430 |
+
<mxCell id="93" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="95" parent="1">
|
| 431 |
+
<mxGeometry relative="1" as="geometry">
|
| 432 |
+
<mxPoint x="1130" y="-40" as="targetPoint" />
|
| 433 |
+
</mxGeometry>
|
| 434 |
+
</mxCell>
|
| 435 |
+
<mxCell id="94" value="" style="rounded=1;whiteSpace=wrap;html=1;fontFamily=Times New Roman;fontSize=30;fontColor=default;fillColor=none;strokeColor=#FF6666;strokeWidth=5;" vertex="1" parent="1">
|
| 436 |
+
<mxGeometry x="994" y="313" width="100" height="82" as="geometry" />
|
| 437 |
+
</mxCell>
|
| 438 |
+
<mxCell id="95" value="Add &amp; Layer Norm" style="rounded=1;whiteSpace=wrap;html=1;fillColor=#dae8fc;strokeColor=#6c8ebf;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 439 |
+
<mxGeometry x="1000" y="20" width="260" height="40" as="geometry" />
|
| 440 |
+
</mxCell>
|
| 441 |
+
<mxCell id="96" value="\(\times N\)" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=30;fontFamily=Times New Roman;fontColor=default;" vertex="1" parent="1">
|
| 442 |
+
<mxGeometry x="1255" y="-10" width="60" height="30" as="geometry" />
|
| 443 |
+
</mxCell>
|
| 444 |
+
<mxCell id="97" value="" style="shape=trapezoid;perimeter=trapezoidPerimeter;whiteSpace=wrap;html=1;fixedSize=1;size=10;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 445 |
+
<mxGeometry x="1050" y="350" width="40" height="20" as="geometry" />
|
| 446 |
+
</mxCell>
|
| 447 |
+
<mxCell id="98" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;entryX=0.076;entryY=1.009;entryDx=0;entryDy=0;entryPerimeter=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="99" target="85" parent="1">
|
| 448 |
+
<mxGeometry relative="1" as="geometry">
|
| 449 |
+
<Array as="points">
|
| 450 |
+
<mxPoint x="1070" y="318" />
|
| 451 |
+
<mxPoint x="1020" y="318" />
|
| 452 |
+
</Array>
|
| 453 |
+
</mxGeometry>
|
| 454 |
+
</mxCell>
|
| 455 |
+
<mxCell id="99" value="" style="shape=trapezoid;perimeter=trapezoidPerimeter;whiteSpace=wrap;html=1;fixedSize=1;size=10;rotation=-180;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 456 |
+
<mxGeometry x="1050" y="330" width="40" height="20" as="geometry" />
|
| 457 |
+
</mxCell>
|
| 458 |
+
<mxCell id="100" value="" style="shape=trapezoid;perimeter=trapezoidPerimeter;whiteSpace=wrap;html=1;fixedSize=1;size=10;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 459 |
+
<mxGeometry x="1160" y="350" width="40" height="20" as="geometry" />
|
| 460 |
+
</mxCell>
|
| 461 |
+
<mxCell id="101" value="" style="rounded=1;whiteSpace=wrap;html=1;fontFamily=Times New Roman;fontSize=30;fontColor=default;fillColor=none;strokeColor=#FF6666;strokeWidth=5;" vertex="1" parent="1">
|
| 462 |
+
<mxGeometry x="1104" y="313" width="100" height="82" as="geometry" />
|
| 463 |
+
</mxCell>
|
| 464 |
+
<mxCell id="102" value="" style="shape=trapezoid;perimeter=trapezoidPerimeter;whiteSpace=wrap;html=1;fixedSize=1;size=10;rotation=-180;fillColor=#d5e8d4;strokeColor=#82b366;fontSize=30;fontFamily=Times New Roman;" vertex="1" parent="1">
|
| 465 |
+
<mxGeometry x="1160" y="330" width="40" height="20" as="geometry" />
|
| 466 |
+
</mxCell>
|
| 467 |
+
<mxCell id="103" value="" style="endArrow=classic;html=1;rounded=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;shape=connector;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" target="97" parent="1">
|
| 468 |
+
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
| 469 |
+
<mxPoint x="1020" y="390" as="sourcePoint" />
|
| 470 |
+
<mxPoint x="1070" y="310" as="targetPoint" />
|
| 471 |
+
<Array as="points">
|
| 472 |
+
<mxPoint x="1070" y="390" />
|
| 473 |
+
</Array>
|
| 474 |
+
</mxGeometry>
|
| 475 |
+
</mxCell>
|
| 476 |
+
<mxCell id="104" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;strokeWidth=2;fontSize=20;fontFamily=Times New Roman;" edge="1" target="80" parent="1">
|
| 477 |
+
<mxGeometry relative="1" as="geometry">
|
| 478 |
+
<mxPoint x="1130" y="490" as="sourcePoint" />
|
| 479 |
+
<Array as="points">
|
| 480 |
+
<mxPoint x="1130" y="410" />
|
| 481 |
+
<mxPoint x="1020" y="410" />
|
| 482 |
+
</Array>
|
| 483 |
+
</mxGeometry>
|
| 484 |
+
</mxCell>
|
| 485 |
+
<mxCell id="105" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;endArrow=none;endFill=0;startArrow=classic;startFill=1;strokeWidth=2;entryX=0.5;entryY=0;entryDx=0;entryDy=0;fontSize=20;fontFamily=Times New Roman;" edge="1" source="81" parent="1">
|
| 486 |
+
<mxGeometry relative="1" as="geometry">
|
| 487 |
+
<mxPoint x="1130" y="490" as="targetPoint" />
|
| 488 |
+
</mxGeometry>
|
| 489 |
+
</mxCell>
|
| 490 |
+
<mxCell id="106" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=0;exitDx=0;exitDy=0;entryX=0.5;entryY=1;entryDx=0;entryDy=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" source="81" target="85" parent="1">
|
| 491 |
+
<mxGeometry relative="1" as="geometry" />
|
| 492 |
+
</mxCell>
|
| 493 |
+
<mxCell id="107" value="" style="endArrow=classic;html=1;rounded=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;shape=connector;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" parent="1">
|
| 494 |
+
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
| 495 |
+
<mxPoint x="1130" y="390" as="sourcePoint" />
|
| 496 |
+
<mxPoint x="1180" y="370" as="targetPoint" />
|
| 497 |
+
<Array as="points">
|
| 498 |
+
<mxPoint x="1180" y="390" />
|
| 499 |
+
</Array>
|
| 500 |
+
</mxGeometry>
|
| 501 |
+
</mxCell>
|
| 502 |
+
<mxCell id="108" style="edgeStyle=orthogonalEdgeStyle;shape=connector;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;entryX=0.076;entryY=1.009;entryDx=0;entryDy=0;entryPerimeter=0;labelBackgroundColor=default;strokeColor=default;strokeWidth=2;fontFamily=Times New Roman;fontSize=30;fontColor=default;endArrow=classic;" edge="1" parent="1">
|
| 503 |
+
<mxGeometry relative="1" as="geometry">
|
| 504 |
+
<Array as="points">
|
| 505 |
+
<mxPoint x="1180" y="318" />
|
| 506 |
+
<mxPoint x="1130" y="318" />
|
| 507 |
+
</Array>
|
| 508 |
+
<mxPoint x="1180" y="330" as="sourcePoint" />
|
| 509 |
+
<mxPoint x="1130" y="300" as="targetPoint" />
|
| 510 |
+
</mxGeometry>
|
| 511 |
+
</mxCell>
|
| 512 |
+
<mxCell id="109" value="" style="rounded=1;whiteSpace=wrap;html=1;fontFamily=Times New Roman;fontSize=32;fillColor=#d5e8d4;strokeColor=#82b366;" vertex="1" parent="1">
|
| 513 |
+
<mxGeometry x="-215" y="-160" width="120" height="60" as="geometry" />
|
| 514 |
+
</mxCell>
|
| 515 |
+
<mxCell id="110" value="Trainable Parameters" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=42;fontFamily=Times New Roman;fontColor=default;" vertex="1" parent="1">
|
| 516 |
+
<mxGeometry x="-75" y="-145" width="241" height="30" as="geometry" />
|
| 517 |
+
</mxCell>
|
| 518 |
+
<mxCell id="111" value="" style="rounded=1;whiteSpace=wrap;html=1;fontFamily=Times New Roman;fontSize=32;fillColor=#dae8fc;strokeColor=#6c8ebf;" vertex="1" parent="1">
|
| 519 |
+
<mxGeometry x="239" y="-160" width="120" height="60" as="geometry" />
|
| 520 |
+
</mxCell>
|
| 521 |
+
<mxCell id="112" value="Fixed Parameters" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=42;fontFamily=Times New Roman;fontColor=default;" vertex="1" parent="1">
|
| 522 |
+
<mxGeometry x="379" y="-145" width="200" height="30" as="geometry" />
|
| 523 |
+
</mxCell>
|
| 524 |
+
<mxCell id="113" value="" style="rounded=1;whiteSpace=wrap;html=1;fontFamily=Times New Roman;fontSize=32;strokeColor=#FF6666;strokeWidth=8;gradientColor=#A9C4EB;gradientDirection=east;fillColor=#B9E0A5;" vertex="1" parent="1">
|
| 525 |
+
<mxGeometry x="659" y="-160" width="120" height="60" as="geometry" />
|
| 526 |
+
</mxCell>
|
| 527 |
+
<mxCell id="114" value="Linearized<br style="font-size: 42px;">Part" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=42;fontFamily=Times New Roman;fontColor=default;" vertex="1" parent="1">
|
| 528 |
+
<mxGeometry x="796" y="-145" width="200" height="30" as="geometry" />
|
| 529 |
+
</mxCell>
|
| 530 |
+
<mxCell id="115" value="(a)" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=43;fontFamily=Times New Roman;fontColor=default;" vertex="1" parent="1">
|
| 531 |
+
<mxGeometry x="-430.5" y="520" width="241" height="30" as="geometry" />
|
| 532 |
+
</mxCell>
|
| 533 |
+
<mxCell id="116" value="(b)" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=43;fontFamily=Times New Roman;fontColor=default;" vertex="1" parent="1">
|
| 534 |
+
<mxGeometry x="49.5" y="520" width="241" height="30" as="geometry" />
|
| 535 |
+
</mxCell>
|
| 536 |
+
<mxCell id="117" value="(c)" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=43;fontFamily=Times New Roman;fontColor=default;" vertex="1" parent="1">
|
| 537 |
+
<mxGeometry x="529.5" y="520" width="241" height="30" as="geometry" />
|
| 538 |
+
</mxCell>
|
| 539 |
+
<mxCell id="118" value="(d)" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=43;fontFamily=Times New Roman;fontColor=default;" vertex="1" parent="1">
|
| 540 |
+
<mxGeometry x="1009.5" y="520" width="241" height="30" as="geometry" />
|
| 541 |
+
</mxCell>
|
| 542 |
+
</root>
|
| 543 |
+
</mxGraphModel>
|
| 544 |
+
</diagram>
|
| 545 |
+
</mxfile>
|
2310.04742/main_diagram/main_diagram.pdf
ADDED
|
Binary file (63.5 kB). View file
|
|
|
2310.04742/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,113 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Pre-trained models play a crucial role in machine learning systems, serving as foundational components. In order to optimize their performance for specific downstream tasks (Ilharco et al., 2022; Wortsman et al., 2022b; Matena & Raffel, 2022), address biases or undesired behavior (Santurkar et al., 2021; Ribeiro & Lundberg, 2022; Murty et al., 2022), align them with human preferences (Ouyang et al., 2022; Ribeiro & Lundberg, 2022), or incorporate new information (Mitchell et al., 2022a;b), it is often necessary to further customize or edit these models after pre-training.
|
| 4 |
+
|
| 5 |
+
Multi-task model fusion is a powerful approach to extracting knowledge from models fine-tuned on different downstream tasks, allowing us to create a unified model that performs well across multiple tasks. This approach is particularly helpful when only the fine-tuned models can be obtained while the underlying data remain private and inaccessible (Wu et al., 2019; Lou et al., 2020; Tang et al., 2023); moreover, it can expedite the fine-tuning of the multi-task model, since the merged model offers a better starting point from which to fine-tune than the pre-trained model (Kaddour, 2022; Sanyal et al., 2023). In recent studies, researchers have introduced many powerful methods for editing pre-trained models and merging task-specific fine-tuned models. We further introduce this field in Section 2.
|
| 6 |
+
|
| 7 |
+
The vast parameter size of pre-trained models poses challenges in terms of computational efficiency and memory usage during the fine-tuning process; these difficulties further lead to inefficient multi-task model fusion. Fine-tuning large-scale models requires significant computational resources and
|
| 8 |
+
|
| 9 |
+
<sup>1</sup>Wuhan University, China <sup>2</sup> JD Explore Academy, China <sup>3</sup>Beijing Institute of Technology, China
|
| 10 |
+
|
| 11 |
+
<sup>4</sup>Washington University, USA <sup>5</sup>Nanyang Technological University, Singapore
|
| 12 |
+
|
| 13 |
+
<sup>1</sup>{anketang,luoyong,dubo}@whu.edu.cn
|
| 14 |
+
|
| 15 |
+
<sup>2</sup>mathshenli@gmail.com,zybjy@mail.ustc.edu.cn <sup>3</sup>hhu@bit.edu.cn
|
| 16 |
+
|
| 17 |
+
<sup>4</sup>chen@cse.wustl.edu <sup>5</sup>dacheng.tao@ntu.edu.sg
|
| 18 |
+
|
| 19 |
+
<sup>∗</sup>Corresponding authors.
|
| 20 |
+
|
| 21 |
+
memory, making the process inefficient. To address this concern, many parameter-efficient fine-tuning (PEFT) techniques have been proposed; these approaches significantly reduce the number of parameters that need to be fine-tuned while achieving performance comparable to full-parameter fine-tuning. However, naively combining models that were fine-tuned in a parameter-efficient manner can more readily result in representational interference between tasks, which makes model fusion algorithms suboptimal. While some research has explored fusing parameter-efficient fine-tuned models for multi-task model fusion (Chronopoulou et al., 2023; Zhang et al., 2023; Huang et al., 2023), performance still lags considerably behind fusing fully fine-tuned models. Therefore, the key challenge is performing PEFT while also preventing negative interference between task-specific representations. Motivated by these concerns, we aim to enhance the multi-task model fusion capabilities of parameter-efficient fine-tuned models.
|
| 22 |
+
|
| 23 |
+
In this work, we present a novel approach to improve the multi-task fusion capability of parameter-efficient fine-tuning models. Recent advances in understanding task arithmetic and weight disentanglement have demonstrated that linearizing the entire model and fine-tuning the corresponding tangent model in tangent space can enable more effective task arithmetic (Guillermo Ortiz-Jimenez et al., 2023). While promising, completely linearizing a large pre-trained model can be computationally expensive. Typically, this approach requires two to three times the computational resources needed for fine-tuning and inference. Our key insight is that we can perform efficient fine-tuning and disentangle task representations by only linearizing a subset of parameters appended to a fixed pre-trained backbone. In essence, we propose a hybrid approach that leverages parameter-efficient fine-tuning for efficiency, while locally linearizing the adaptable modules to attain enhanced disentanglement and improved multi-task fusion capabilities.
|
| 24 |
+
|
| 25 |
+
Our experiments on image classification and natural language processing tasks demonstrate that our partial linearization technique enables more effective model fusion, achieving superior performance across tasks compared to conventional PEFT methods and model fusion algorithms alone. In some cases, our proposed method is even comparable to full fine-tuning. In addition to the direct comparison of multi-task model fusion performance, we also visualize the weight disentanglement gain of our method on different downstream task pairs. The results show that our method can effectively improve the weight disentanglement of parameter-efficient fine-tuning models, which is the key to improving the multi-task fusion capability of parameter-efficient fine-tuning models.
|
| 26 |
+
|
| 27 |
+
To summarize, our contributions are as follows:
|
| 28 |
+
|
| 29 |
+
- We propose a novel partial linearization method for parameter-efficient fine-tuning models in order to improve the multi-task fusion capability of fine-tuned task-specific models with a low computational cost overhead.
|
| 30 |
+
- We apply our method to the LoRA modules to construct Linearized LoRA (L-LoRA) modules and conduct extensive experiments on seven tasks from the GLUE benchmark to demonstrate that our method is effective in improving the multi-task fusion capability of fine-tuned task-specific models.
|
| 31 |
+
- We present an extension of weight disentanglement property and weight disentanglement error for parameter-efficient fine-tuning models to analyze the impact of the linearization process on parameter-efficient modules. We evaluate fine-tuned models to visualize and analyze the weight disentanglement gain of L-LoRA on downstream tasks.
|
| 32 |
+
|
| 33 |
+
# Method
|
| 34 |
+
|
| 35 |
+
Under the framework of supervised learning, assume we have a set of tasks $T = \{\tau_1, \tau_2, \cdots, \tau_n\}$ and a pre-trained language model $f_{\theta_0}$ parameterized by $\theta_0$ which is trained on a massive dataset. Each task $\tau_i$ is associated with a dataset $D_{\tau_i} = \{(x_{\tau_i}^{(j)}, y_{\tau_i}^{(j)})\}_{j=1}^{s_i}$ , where $s_i$ is the dataset size of $D_{\tau_i}$ , $\{x_{\tau_i}^{(j)}\}_{j=1}^{s_i}$ is the input data, and $\{y_{\tau_i}^{(j)}\}_{j=1}^{s_i}$ is the output data.
|
| 36 |
+
|
| 37 |
+
Given a specific task $\tau_i$ , the pre-trained model $f_{\theta_0}$ is fine-tuned on the task-specific dataset $D_{\tau_i}$ to obtain a task-specific model. We consider full parameter fine-tuning and parameter-efficient fine-tuning methods. If the model $f_{\theta_0}$ is fully fine-tuned on task $\tau_i$ , the parameters $\theta_0$ are updated to $\theta_i$ by minimizing the loss function $\theta_i = \arg\min_{\theta} \mathcal{L}(\tau_i;\theta)$ on the task-specific dataset $D_{\tau_i}$ . Otherwise, if the model $f_{\theta_0}$ is parameter-efficiently fine-tuned on task $\tau_i$ , the original parameters $\theta_0$ are fixed and some extra added parameters $\phi$ (the number of parameters of $\phi$ is much less than that of $\theta_0$ , i.e., $|\phi|/|\theta_0| \ll 1$ ) are updated to $\phi_i$ by minimizing the loss function $\phi_i = \arg\min_{\phi} \mathcal{L}(\tau_i;\theta_0,\phi)$ on the task-specific dataset $D_{\tau_i}$ . Here, $\mathcal{L}$ is typically the cross-entropy loss, which maximizes the conditional posterior probability $p(y|x)$ ; more specifically,
|
| 38 |
+
|
| 39 |
+
$$\mathcal{L}(\tau_i; \theta_0, \phi) = \frac{1}{s_i} \sum_{j=1}^{s_i} \mathcal{L}_{CE} \left( f_{\theta_0} \left( x_{\tau_i}^{(j)}; \phi \right), y_{\tau_i}^{(j)} \right) \in \mathbb{R}^+.$$
|
| 40 |
+
(1)
|
| 41 |
+
|
| 42 |
+
**Task vector**. We define the task vector as the difference between the fine-tuned *trainable* parameters and their initial state, i.e., $\nu_i = \theta_i - \theta_0$ for full fine-tuning or $\nu_i = \phi_i - \phi_0$ for parameter-efficient fine-tuning. We elaborate on this definition in Appendix A, discussing its rationale and associated benefits.
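As a concrete illustration, a minimal sketch of extracting a task vector from two parameter dictionaries is given below; the function name and the dict-of-tensors representation are assumptions made for this example, not the released implementation.

```python
import torch

def task_vector(finetuned_params, initial_params):
    """Task vector nu_i: difference between fine-tuned trainable parameters and
    their initial state (theta_i - theta_0 for full FT, phi_i - phi_0 for PEFT)."""
    return {
        name: finetuned_params[name] - initial_params[name]
        for name in initial_params          # only the trainable parameters appear here
        if name in finetuned_params
    }

# Example (hypothetical checkpoints): for a LoRA run, `initial_params` holds the
# freshly initialized LoRA weights phi_0 and `finetuned_params` holds phi_i.
```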
|
| 43 |
+
|
| 44 |
+
In this subsection, we discuss and present an extension of the weight disentanglement property and weight disentanglement error to parameter-efficient fine-tuning models, which were originally introduced in (Guillermo Ortiz-Jimenez et al., 2023).
|
| 45 |
+
|
| 46 |
+

|
| 47 |
+
|
| 48 |
+
Figure 1: Loss landscape visualization. Here, we visualize the loss landscape $\mathcal{L}(\tau_1;\theta) + \mathcal{L}(\tau_2;\theta)$ of the CLIP model on combinations of three downstream image classification tasks by interpolating on the 2D plane. $\theta = \theta_0 + \sum_{i=1}^2 \lambda_i (\theta_i - \theta_0)$ , where $\theta_0$ are the pre-trained weights and $\theta_i$ are the task-specific fully fine-tuned weights for task $\tau_i$ . From these heatmaps, we observe that task-specific models reside in the same loss basin when evaluated on the joint task.
|
| 49 |
+
|
| 50 |
+
**Weight disentanglement for parameter-efficient fine-tuning**. Weight disentanglement is defined in terms of the function outputs of the merged model. Consider a PEFT model denoted as $f_{\theta_0}(x;\phi_0)$ whose tunable weights are initialized as $\phi_0$ . We say this model possesses the weight disentanglement property with respect to a specific set of task vectors $\{\nu_i|\nu_i=\phi_i-\phi_0\}_{i\in[n]}$ and their corresponding support datasets $\{D_{\tau_i}\}_{i\in[n]}$ if the following condition is satisfied:
|
| 51 |
+
|
| 52 |
+
$$f_{\theta_0}\left(x; \phi_0 + \sum_{i=1}^{n} \lambda_i \nu_i\right) = \sum_{i=1}^{n} g_i(x; \lambda_i \nu_i) + g_0(x),$$
|
| 53 |
+
(2)
|
| 54 |
+
|
| 55 |
+
where $g_i(x;\lambda_i\nu_i)=0$ for $x\notin D_{\tau_i}$ , $i=1,2,\ldots,n$ , and $g_0(x)=0$ for $x\in\bigcup_{i\in[n]}D_{\tau_i}$ . This condition ensures that the model $f_{\theta_0}(x;\phi_0)$ can be expressed as a linear combination of individual terms $g_i(x;\lambda_i\nu_i)$ , incorporating the respective task vectors, plus an additional term $g_0(x)$ . By adhering to this disentanglement condition, the model demonstrates the desired capability of effectively separating and disentangling the weights associated with the function outputs, enhancing its capacity to capture task-specific information.
|
| 56 |
+
|
| 57 |
+
However, it is important to highlight that weight disentanglement is a characteristic of the function outputs and is not directly linked to performance. In other words, a model may exhibit the weight disentanglement property yet still perform poorly: weight disentanglement of the function outputs does not guarantee task success by itself, since other factors, such as non-linear evaluation metrics, also determine the overall performance. Therefore, it is crucial to consider weight disentanglement alongside other performance metrics when analyzing model behavior and effectiveness across different tasks.
|
| 58 |
+
|
| 59 |
+
**Weight disentanglement error in the PEFT setting**. Given two task vectors $\{\nu_i\}_{i=1,2}$ , we define the extension of the disentanglement error for the model $f_{\theta_0}(x;\phi_0)$ as follows:
|
| 60 |
+
|
| 61 |
+
$$\xi(\lambda_1, \lambda_2) = \sum_{i=1}^{2} \mathbb{E}_{x \sim P(D_{\tau_i})} \left[ \text{dist}(f_{\theta_0}(x; \phi_0 + \lambda_i \nu_i), f_{\theta_0}(x; \phi_0 + \lambda_1 \nu_1 + \lambda_2 \nu_2)) \right]. \tag{3}$$
|
| 62 |
+
|
| 63 |
+
Here, $\operatorname{dist}(\cdot,\cdot)$ denotes any chosen distance metric between two output vectors. Taking classification tasks as an example, dist can be chosen as the prediction disagreement, i.e., $\operatorname{dist}(y_1,y_2)=\mathbb{1}(y_1\neq y_2)$ . Intuitively, this measures how much the prediction changes when adding each task vector individually versus adding both together. If task knowledge is well disentangled, adding the specialized vectors independently or together should yield similar predictions, minimizing $\xi$ . The disentanglement error thus quantifies the degree of interference between task vectors: lower values indicate task knowledge that can be combined with less destructive interaction during model merging.
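For concreteness, a minimal sketch of how Eq. (3) could be estimated for a classification model is shown below; `model_fn`, the dict-of-tensors parameter format, and the prediction-disagreement distance are assumptions for this example rather than the paper's implementation.

```python
import torch

@torch.no_grad()
def disentanglement_error(model_fn, phi0, nu1, nu2, loaders, lam1, lam2):
    """Estimate xi(lam1, lam2) of Eq. (3) with dist(y1, y2) = 1[y1 != y2].

    model_fn(params, x) -> logits; phi0, nu1, nu2 are dicts of tensors sharing the
    same keys (initial PEFT weights and the two task vectors);
    loaders = (loader for D_tau1, loader for D_tau2)."""
    def shifted(terms):
        return {k: phi0[k] + sum(lam * nu[k] for lam, nu in terms) for k in phi0}

    merged = shifted([(lam1, nu1), (lam2, nu2)])        # phi_0 + lam1*nu1 + lam2*nu2
    disagreements, total = 0.0, 0
    for (lam, nu), loader in zip([(lam1, nu1), (lam2, nu2)], loaders):
        single = shifted([(lam, nu)])                   # phi_0 + lam_i*nu_i
        for x, _ in loader:
            pred_single = model_fn(single, x).argmax(dim=-1)
            pred_merged = model_fn(merged, x).argmax(dim=-1)
            disagreements += (pred_single != pred_merged).float().sum().item()
            total += pred_single.numel()
    return disagreements / total
```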
|
| 64 |
+
|
| 65 |
+
In previous work, it is generally believed that the closer task vectors are to orthogonal, the less interference there is between tasks and the better the weight disentanglement (Ilharco et al., 2023; Chen et al., 2023; Guillermo Ortiz-Jimenez et al., 2023). The intuition is that orthogonal
|
| 66 |
+
|
| 67 |
+

|
| 68 |
+
|
| 69 |
+
Figure 2: **Similarity heatmaps**. These figures show heatmaps of the cosine similarity between task vectors from task-specific CLIP models (Radford et al., 2021) fine-tuned on different tasks. (a) Cosine similarity matrix of task vectors when using full fine-tuning of the entire model. (b) Task vector similarities when using LoRA. (c) Cosine similarity of task vectors when using L-LoRA, our proposed partial linearization approach that linearizes PEFT modules and fine-tunes in tangent space.
|
| 70 |
+
|
| 71 |
+
task vectors indicate that the specialized knowledge captured for each task lies in distinct subspaces with minimal overlap or redundancy. This enables better preservation of specialized knowledge and avoids destructive interference during model merging.
|
| 72 |
+
|
| 73 |
+
We visualize the loss landscape of the joint tasks in Figure 1 (Li et al., 2018), where we interpolate between the pre-trained weights $\theta_0$ and two task-specific fine-tuned weights $\theta_1$ , $\theta_2$ for CLIP models (Radford et al., 2021). The 2D heatmaps show the loss values $\mathcal{L}(\tau_1;\theta) + \mathcal{L}(\tau_2;\theta)$ evaluated on the joint tasks. Notably, we observe that the task-specific models reside at the edge of the same low-loss basin, with no significant barriers or discontinuities in between. This provides geometric intuition for why a simple linear arithmetic operation on the task-specific parameters $\theta_1$ and $\theta_2$ can produce effective merging for multi-task learning, as empirically observed.
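The interpolation behind Figure 1 can be reproduced with a few lines; the sketch below is a simplified, assumed version (the evaluation helper `joint_task_loss` and the dict-of-tensors checkpoints are placeholders, not the paper's code, and all state-dict entries are assumed to be floating-point).

```python
import numpy as np
import torch

@torch.no_grad()
def loss_landscape_grid(model, joint_task_loss, theta0, theta1, theta2, lams):
    """Evaluate L(tau1; theta) + L(tau2; theta) on the 2D plane
    theta = theta0 + lam1*(theta1 - theta0) + lam2*(theta2 - theta0)."""
    grid = np.zeros((len(lams), len(lams)))
    for i, l1 in enumerate(lams):
        for j, l2 in enumerate(lams):
            theta = {k: theta0[k] + l1 * (theta1[k] - theta0[k])
                              + l2 * (theta2[k] - theta0[k]) for k in theta0}
            model.load_state_dict(theta)
            grid[i, j] = joint_task_loss(model)   # sum of the two task losses
    return grid

# e.g. lams = np.linspace(-0.5, 1.5, 9); plot `grid` as a heatmap.
```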
|
| 74 |
+
|
| 75 |
+
The results in Figures 2(a-b) and 8(a-b) show the cosine similarity between task vectors from CLIP and Flan-T5 (Chung et al., 2022), which are fully fine-tuned or LoRA fine-tuned on image classification tasks and natural language processing (NLP) tasks, respectively. We observe that vectors from full fine-tuning are closer to orthogonal than those from LoRA, which indicates that models fine-tuned with full fine-tuning are more independent than those fine-tuned with LoRA. This finding is consistent with the discussion about task addition in (Ilharco et al., 2023), and the experimental results in Figures 4 and 5 also support this statement. The experimental details are described in Appendix C.
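The heatmaps in Figure 2 boil down to pairwise cosine similarities between flattened task vectors; a minimal sketch (assuming each task vector is a dict of tensors with identical keys in identical order) is:

```python
import torch

def cosine_similarity_matrix(task_vectors):
    """Pairwise cosine similarity between task vectors, as in the Figure 2 heatmaps."""
    flat = torch.stack([torch.cat([v.flatten() for v in tv.values()])
                        for tv in task_vectors])        # one row per task
    flat = flat / flat.norm(dim=1, keepdim=True)        # unit-normalize each row
    return flat @ flat.T
```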
|
| 76 |
+
|
| 77 |
+
**Remark 3.1** At first glance, it may seem unfair to compare the task vectors from full fine-tuning to those from parameter-efficient fine-tuning methods, since full fine-tuning has access to many more trainable parameters. In fact, for full fine-tuning we have $f_{\theta}(x) = f_{\theta}(x; \phi_0)$ , so that
|
| 78 |
+
|
| 79 |
+
$$\frac{\langle \nu_i, \nu_j \rangle}{\|\nu_i\|_2 \|\nu_j\|_2} = \frac{\langle [\theta_i - \theta_0, \mathbf{0}], [\theta_j - \theta_0, \mathbf{0}] \rangle}{\|[\theta_i - \theta_0, \mathbf{0}]\|_2 \|[\theta_j - \theta_0, \mathbf{0}]\|_2} = \frac{\langle [\theta_i - \theta_0, \phi_0 - \phi_0], [\theta_j - \theta_0, \phi_0 - \phi_0] \rangle}{\|[\theta_i - \theta_0, \phi_0 - \phi_0]\|_2 \|[\theta_j - \theta_0, \phi_0 - \phi_0]\|_2};$$
|
| 80 |
+
|
| 81 |
+
on the other hand, for parameter-efficient fine-tuning we have
|
| 82 |
+
|
| 83 |
+
$$\frac{\langle \nu_i, \nu_j \rangle}{\|\nu_i\|_2 \|\nu_j\|_2} = \frac{\langle [\mathbf{0}, \phi_i - \phi_0], [\mathbf{0}, \phi_j - \phi_0] \rangle}{\|[\mathbf{0}, \phi_i - \phi_0]\|_2 \|[\mathbf{0}, \phi_j - \phi_0]\|_2} = \frac{\langle [\theta_0 - \theta_0, \phi_i - \phi_0], [\theta_0 - \theta_0, \phi_j - \phi_0] \rangle}{\|[\theta_0 - \theta_0, \phi_i - \phi_0]\|_2 \|[\theta_0 - \theta_0, \phi_j - \phi_0]\|_2}.$$
|
| 84 |
+
|
| 85 |
+
Therefore, the comparison between full fine-tuning and parameter-efficient fine-tuning methods can be made fair by viewing them as updating different subsets of the joint parameter space $(\theta, \phi)$ .
|
| 86 |
+
|
| 87 |
+
Based on the above observations, we propose a novel partial linearization method for parameter-efficient fine-tuning models in order to enhance the weight disentanglement and improve the multi-task fusion capability of fine-tuned task-specific models with a low computational cost overhead.
|
| 88 |
+
|
| 89 |
+
To monitor the progression of the trainable parameters $\phi$ throughout the training trajectory, we treat $\phi$ as a function of the training time t, denoted as $\phi(t)$ , where t ranges from 0 to T. By employing a
|
| 90 |
+
|
| 91 |
+

|
| 92 |
+
|
| 93 |
+
Figure 3: **Four types of fine-tuning paradigms**. (a) Full parameter fine-tuning. (b) Full-model linearization. (c) Parameter-efficient fine-tuning. (d) Linearized parameter-efficient fine-tuning. In this paper, we explore LoRA fine-tuning and linearized LoRA (L-LoRA) fine-tuning.
|
| 94 |
+
|
| 95 |
+
first-order Taylor expansion, we can streamline the dynamics of network learning as (Weng, 2022):
|
| 96 |
+
|
| 97 |
+
$$f_{\theta_0}(x;\phi(t)) \approx f_{\theta_0}^{\text{lin}}(x;\phi(t)) = f_{\theta_0}(x;\phi(0)) + \nabla_{\phi} f_{\theta_0}(x;\phi(0))^{\top} (\phi(t) - \phi(0)). \tag{4}$$
|
| 98 |
+
|
| 99 |
+
The linearized model described by Eq. (4), also referred to as a tangent model, approximates the behavior of the neural network $f_{\theta_0}(x;\phi(t))$ at a specific time t, where $\theta_0$ , $f_{\theta_0}(x;\phi(0))$ , and $\nabla_\phi f_{\theta_0}(x;\phi(0))$ are all constants. Consequently, the linearized function $f_{\theta_0}^{\text{lin}}(x;\phi(t))$ is a linear function of $\phi(t)$ .
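A minimal sketch of Eq. (4) using `torch.func` is shown below; it assumes a recent PyTorch where `functional_call` and `jvp` are available, and that only the trainable (PEFT) parameters are passed in as a dict while everything else stays frozen inside the module. It illustrates the tangent-model idea, not the paper's released implementation.

```python
import torch
from torch.func import functional_call, jvp

def linearized_forward(model, phi0, phi, x):
    """f_lin(x; phi) = f(x; phi0) + grad_phi f(x; phi0)^T (phi - phi0)   (Eq. 4).

    phi0, phi: dicts mapping the *trainable* (e.g. LoRA) parameter names to tensors;
    parameters not in the dict are taken from `model` and stay constant."""
    def f(params):
        return functional_call(model, params, (x,))

    tangent = {k: phi[k] - phi0[k] for k in phi0}
    y0, delta = jvp(f, (phi0,), (tangent,))   # forward-mode JVP: one extra forward pass
    return y0 + delta
```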
|
| 100 |
+
|
| 101 |
+
Grounded in the key result detailed in Appendix B, where we characterize the evolution of model outputs, we hypothesize that by partially linearizing only a subset of parameters within the adaptable modules, the model can benefit from a more disentangled representation. This partial linearization facilitates the separation of task-specific knowledge; hence, model fusion becomes more robust against negative transfer effects and task interference in multi-task learning settings.
|
| 102 |
+
|
| 103 |
+
Figure 3 illustrates four different types of fine-tuning paradigms. The first three are existing methods, while the fourth is our proposed partial linearization fine-tuning method. (a) The full fine-tuning paradigm: all parameters $\theta$ are updated during fine-tuning. (b) The full-model linearization paradigm: the tangent model is fine-tuned in the tangent space. It is worth noting that although the Jacobian-vector products can be computed in a single forward pass (Pearlmutter, 1994), training and inference in this paradigm are usually two to three times as expensive as full fine-tuning; a quantitative comparison is shown in Tables 3 and 4. (c) The PEFT paradigm: only a small number of parameters $\phi$ are updated while $\theta$ is fixed at $\theta_0$ . (d) The linearized PEFT (L-PEFT) paradigm: as in PEFT fine-tuning, $\phi$ is updated while $\theta_0$ is fixed, but only the linearized PEFT modules are fine-tuned in the tangent space. This adds only a small overhead in training and inference cost relative to standard PEFT fine-tuning, far below the cost of full-model linearization. In this paper, we explore LoRA fine-tuning and linearized LoRA (L-LoRA) fine-tuning.
|
| 104 |
+
|
| 105 |
+
From Figures 2 and 8, we observe that task vectors from L-LoRA fine-tuning are closer to orthogonal than those from LoRA, which indicates that models fine-tuned with L-LoRA are more task-independent than those fine-tuned with LoRA. This is because the trainable parameters $\phi$ in L-LoRA are fine-tuned in the tangent space, which is a linear space, while $\phi$ in LoRA are fine-tuned in the original ambient space, which is non-linear. Besides, in some cases, the task vectors from full fine-tuning are closer to orthogonal than those from L-LoRA. This may be due to the larger space of trainable parameters available to full fine-tuning, providing an extremely high-dimensional space in which to encode specialized task knowledge. So even though L-LoRA tunes parameters in a linear subspace, the total capacity is restricted compared to full fine-tuning.
|
| 106 |
+
|
| 107 |
+
Previous findings in (Ilharco et al., 2023; Guillermo Ortiz-Jimenez et al., 2023) also suggest an empirical parameter-scaling law wherein weight disentanglement improves with an increased number of trainable model parameters. Intuitively, (1) the larger the number of parameters, the stronger the expressive ability of the model, which can better learn the correlations between different tasks, thus facilitating weight disentanglement; (2) the over-parameterization brought about by a large number of parameters provides more degrees of freedom to learn the feature representations required for
|
| 108 |
+
|
| 109 |
+

|
| 110 |
+
|
| 111 |
+
Figure 4: Pairs of model fusion. These figures show scatter plots demonstrating the performance of different model fusion techniques on pairs of tasks. Each plot corresponds to a different fusion method. The x and y axes in each plot denote the normalized scores on the two tasks. Points indicate the performance of specific instances. Dashed lines represent the average performance per task for each method. (a) Image classification tasks. (b) NLP tasks.
|
| 112 |
+
|
| 113 |
+
each task and reduce parameter competition or interference between different tasks. Figure 5(a) shows that even though L-LoRA fine-tuning has far fewer trainable parameters compared to full fine-tuning (<1%), it achieves comparable average normalized scores to full fine-tuning.
|
2310.06148/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2310.06148/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,48 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Deep learning techniques have enabled breakthroughs in various areas such as game-playing [@silver2016mastering; @mnih2015human], image recognition [@krizhevsky2012imagenet; @he2015delving], and machine translation [@wu2016google]. However, deep neural networks are notoriously *data-hungry* [@lecun2015deep], limiting their successes to domains where sufficient data and computing resources are available [@hospedales2020meta; @huisman2021]. *Meta-learning* [@schaul2010metalearning; @schmidhuber1987evolutionary; @thrun1998lifelong; @brazdil2022metalearning] is one approach to reduce these limitations by learning efficient deep learning algorithms across different tasks. By presenting the learning algorithm with different tasks that presumably share similarities with the task of interest, the learning algorithm is expected to learn that task more efficiently than when it has to learn it from scratch. This approach involves two different time scales of learning: at the *inner-level*, a given task is learned, and at the *outer-level* the learning algorithm is improved over tasks by adjusting the hyperparameters. Seminal approaches for this are MAML and Reptile.
|
| 4 |
+
|
| 5 |
+
While the field attracted much attention, recent results [@chen2019closer; @tian2020rethinking; @mangla2020charting] suggest that simply pre-training a network on a large dataset and *finetuning* only the final layer of the network may be more effective at learning new image classification tasks quickly than more complicated meta-learning techniques such as MAML [@finn2017model] and Reptile [@nichol2018reptile] when the data distribution is different from the one used for training. In contrast, MAML and Reptile often outperform finetuning when the data distribution is similar to the one used during training. These phenomena are not well understood and are surprising, as @raghu2020rapid have shown that the adaptation behaviour of MAML resembles that of finetuning when learning new tasks: most of the changes take place in the final layer of the network while the body of the network is mostly kept frozen.
|
| 6 |
+
|
| 7 |
+
In this work, we aim to find an explanation for the observed performance differences between MAML and finetuning. More specifically, we aim to answer the following two research questions:
|
| 8 |
+
|
| 9 |
+
1. Why do MAML and Reptile outperform finetuning in *within-distribution* settings?
|
| 10 |
+
|
| 11 |
+
2. Why can finetuning outperform gradient-based meta-learning techniques such as MAML and Reptile [@nichol2018reptile] when the test data distribution diverges from the training data distribution?
|
| 12 |
+
|
| 13 |
+
Both questions focus on the **few-shot image classification settings**. We base our work on MAML, Reptile and finetuning, as these are influential techniques that have sparked a large body of follow-up methods that use the underlying ideas. Since the questions that we aim to answer are inherently harder than just a simple performance comparison, answering them for the models that are at the basis of this body of literature will be the right starting point. We think that developing a better understanding of these influential methods is of great value and can cascade further onto the more complex methods built on top of these.
|
| 14 |
+
|
| 15 |
+
Based on our analysis of the learning objectives of the three techniques (finetuning, MAML, Reptile), we hypothesize that MAML and Reptile specialize for adaptation in low-data regimes of tasks from the training distribution, giving them an advantage in within-distribution settings. However, because they neglect, or relatively neglect, the initial performance, they may settle for initial features that are inferior to those learned by finetuning, and may therefore perform comparatively worse when the test data distribution diverges from the training distribution.
|
| 16 |
+
|
| 17 |
+
The primary contributions of our work are the following. First, we show the importance of the output layer weights and data scarcity during training for Reptile and MAML to facilitate specialization for quick adaptation in low-data regimes of similar distributions, giving them an advantage compared with finetuning. Second, we show that the pre-trained features of the finetuning technique are more diverse and discriminative than those learned by MAML and Reptile, which can be advantageous in out-of-distribution settings.[^2]
|
| 18 |
+
|
| 19 |
+
# Method
|
| 20 |
+
|
| 21 |
+
The three discussed techniques can be seen as part of a general gradient-based optimization framework, as shown in [\[alg:gengradopt\]](#alg:gengradopt){reference-type="ref+label" reference="alg:gengradopt"}. All algorithms try to find a good set of initial parameters as specified by their objective functions. The parameters are initialized randomly in line 1. Then, these initial parameters are iteratively updated based on the learning objectives (the loop starting from line 2).
|
| 22 |
+
|
| 23 |
+
This iterative updating procedure continues as follows. First, the data distribution is selected to sample data from (line 3). That is, finetuning uses the full joint distribution $p_s(\mathbf{x}, \mathbf{y})$ of the source problem, whereas Reptile and MAML select task distributions $p_j(\mathbf{x}, \mathbf{y})$ (obtained by sub-sampling a set of instances coming from a subset of labels from the full distribution $p_s$). Next, we make $T$ task-specific updates on mini-batches sampled from the distribution $p$ that was selected in the previous stage (lines 4--8). Lastly, the initial parameters $\mathbf{\theta}$ are updated using the outcomes of the task-specific adaptation phase.
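To make the shared structure concrete, the following is a minimal sketch of this loop on a toy quadratic loss using plain NumPy; the function names, hyperparameters, and the use of the first-order MAML update are our own illustrative choices, not the exact setup of the original papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_distribution():
    # Stand-in for line 3: finetuning would draw from the full source
    # distribution p_s, while Reptile and MAML sample a task p_j ~ p(T).
    # Here a "task" is just a random target vector of a toy quadratic loss.
    return rng.normal(size=5)

def grad(theta, target):
    # Gradient of the toy loss L(theta) = 0.5 * ||theta - target||^2.
    return theta - target

def optimize(method, T=5, alpha=0.1, beta=0.5, iterations=200):
    theta = rng.normal(size=5)                    # line 1: random initialization
    for _ in range(iterations):                   # line 2: outer loop
        target = sample_distribution()            # line 3: select distribution p
        phi = theta.copy()                        # theta^(0) = theta
        for _ in range(T):                        # lines 4-8: T task-specific updates
            phi = phi - alpha * grad(phi, target)
        if method == "finetuning":                # keep the adapted parameters
            theta = phi
        elif method == "reptile":                 # move theta towards theta^(T)
            theta = theta + beta * (phi - theta)
        elif method == "fo-maml":                 # first-order MAML: gradient at theta^(T)
            theta = theta - beta * grad(phi, target)
    return theta

print(optimize("reptile"))
```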
|
| 24 |
+
|
| 25 |
+
Note that in this general gradient-based optimization framework, all techniques update their initialization parameters based on a single distribution $p$ at a time. One could also choose to use batches of distributions, or *meta-batches*, in order to update the initialization $\mathbf{\theta}$. This can be incorporated by using the average of the losses of the different distributions as an aggregated loss function.
|
| 26 |
+
|
| 27 |
+
:::: algorithm
|
| 28 |
+
::: algorithmic
|
| 29 |
+
Randomly initialize $\mathbf{\theta}$
Select data distribution $p=$ [$p_s$]{style="background-color: red!30"} [$p_j \sim p(\mathcal{T})$]{style="background-color: green!30"} [$p_j \sim p(\mathcal{T})$]{style="background-color: blue!30"}
Set $\mathbf{\theta}^{(0)} = \mathbf{\theta}$
Sample a batch of data $\mathbf{x}, \mathbf{y} \sim p$
Compute $\mathbf{\theta}^{(t+1)} = \mathbf{\theta}^{(t)} - \nabla_{\mathbf{\theta}^{(t)}}\mathcal{L}_{t+1}(\mathbf{\theta}^{(t)})$
Update $\mathbf{\theta}$ by [$\mathbf{\theta} = \mathbf{\theta}^{(T)}$]{style="background-color: red!30"} [[\[eq:defreptupdt\]](#eq:defreptupdt){reference-type="ref+label" reference="eq:defreptupdt"}]{style="background-color: green!30"} [[\[eq:mamlupdate\]](#eq:mamlupdate){reference-type="ref+label" reference="eq:mamlupdate"}]{style="background-color: blue!30"}
|
| 30 |
+
:::
|
| 31 |
+
::::
|
| 32 |
+
|
| 33 |
+
[1](#tab:algorithms){reference-type="ref+label" reference="tab:algorithms"} gives an overview of the three algorithms. As we can see, finetuning only optimizes for initial performance and does not take the performance after adaptation into account. This means that its goal is to correctly classify any input $\mathbf{x}$ from the source problem distribution $p_s$. Reptile, on the other hand, optimizes both for initial performance and for performance after every update step. This means that Reptile may settle for an initialization with somewhat worse initial performance compared with finetuning, as long as the performance during task-specific adaptation makes up for this initial deficit. MAML is the most extreme in the sense that it can settle for an initialization with poor initial performance, as long as the final performance is good.
|
| 34 |
+
|
| 35 |
+
::: {#tab:algorithms}
|
| 36 |
+
**Algorithm** **Loss function** **Focus**
|
| 37 |
+
--------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------
|
| 38 |
+
Finetuning $\displaystyle \mathop{\mathrm{\mathbb{E}}}_{\mathbf{x}_i, \mathbf{y}_i} \left[ \mathcal{L}_{\mathbf{x}_i, \mathbf{y}_i}(\mathbf{\theta}) \right]$ Initial performance
|
| 39 |
+
Reptile $\displaystyle \mathop{\mathrm{\mathbb{E}}}_{\mathcal{T}_j \sim p(\mathcal{T})} \left( \sum_{t=0}^{T-1} \mathop{\mathrm{\mathbb{E}}}_{\mathbf{x}_i, \mathbf{y}_i \sim p_j} \left[ \mathcal{L}_{t+1}(\mathbf{\mathbf{\theta}^{(t)}_j}) \right] \right)$ Multi-step performance
|
| 40 |
+
MAML $\displaystyle \mathop{\mathrm{\mathbb{E}}}_{\mathcal{T}_j \sim p(\mathcal{T})} \left( \mathop{\mathrm{\mathbb{E}}}_{\mathbf{x}_i, \mathbf{y}_i \sim p_j} \left[ \mathcal{L}_{T}(\mathbf{\mathbf{\theta}^{(T)}_j}) \right] \right)$ Final performance
|
| 41 |
+
|
| 42 |
+
|
| 43 |
+
: Overview of the loss functions and corresponding focus of finetuning, Reptile, and MAML.
|
| 44 |
+
:::
|
| 45 |
+
|
| 46 |
+
In short, Reptile and MAML can be interpreted as *look-ahead algorithms*, as they take the performance after task-specific adaptation into account, whereas finetuning does not. Moreover, fo-MAML relies purely on the look-ahead mechanism and neglects the initial performance, while Reptile also takes the initial and intermediate performances into account. This means that MAML may outperform finetuning with a *low-capacity* network (with the worst initial performance), where there is not enough capacity to store features that are directly useful for new tasks. The reason is likely that finetuning is unable to obtain good embeddings for all of the training tasks and has no mechanism to anticipate which features would help it learn future tasks better. MAML, on the other hand, does have this capability and can thus settle for a set of features with worse initial performance that lends itself better to learning new tasks. In contrast, when we have *high-capacity* networks with enough expressivity to store all relevant features for a task, finetuning may outperform MAML: finetuning optimizes purely for initial performance without any additional adaptation, whereas the adaptation step can be prone to overfitting to the training data of the tasks due to the limited amount of available data. Lastly, one may expect Reptile to fall between MAML and finetuning: it works better than finetuning when using low-capacity backbones, while it may be slightly worse than finetuning when using larger-capacity networks (but better than MAML).
|
| 47 |
+
|
| 48 |
+
Although MAML focuses on the performance after learning, it has been shown that its learning behaviour is similar to that of finetuning: it mostly relies on feature re-use rather than on fast learning [@raghu2020rapid]. This means that when a *distribution shift* occurs, that is, when the test tasks become more distant from the tasks that were used for training, MAML may be ill-positioned due to its poor initial performance, compared with finetuning, which can fall back on more directly useful initial features.
|
2310.09130/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2310.09130/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,176 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Large Language Models (LLMs) have shown powerful capability in natural language understanding by capturing hidden semantics in vector space. Consequently, users can leverage LLMs to obtain embeddings and subsequently apply them to their own downstream tasks, known as \"embedding as a service\" (EaaS). However, EaaS is typically provided as an online service, giving rise to significant privacy concerns. In particular, users may input sensitive information, such as names, phone numbers, and email addresses, that needs to be kept hidden from the service provider. With the growing concern around the potential leakage of confidential data, certain companies, such as Samsung, have temporarily prohibited the usage of online LLM services.
|
| 4 |
+
|
| 5 |
+
Recent research on privacy-preserving model inference centers on two directions: cryptography [@liu2023llms; @chen2022x] and perturbation [@du2023dp]. Cryptography typically employs homomorphic encryption (HE) to compute the inference result on the users' encrypted input. Unfortunately, the application of cryptographic techniques is constrained by the significant computation overhead of cryptographic operations, especially on large transformer models. Perturbation provides a differential privacy (DP) guarantee by adding calibrated noise to the original data. A key challenge of this approach is how to balance the utility--privacy tradeoff in a local differential privacy (LDP) setting, where users' inputs are privatized before being released to the server. Furthermore, privatization of text data is particularly difficult when the randomized algorithm is required to map text input to text output.
|
| 6 |
+
|
| 7 |
+
Split learning [@gupta2018distributed; @vepakomma2018split] has emerged as a solution to privacy-preserving computation between two parties. During inference, the user performs affordable computation locally to obtain intermediate results (IRs), and forwards them to the service provider for subsequent operations. To mitigate privacy leakage, recent research has integrated DP with split learning by injecting noise into the IRs before sharing them with the server [@yang2022differentially]. In the split inference setting, a crucial problem is to design an algorithm that minimizes the impact on model performance while ensuring LDP.
|
| 8 |
+
|
| 9 |
+
A notable approach involves the application of denoising techniques to conduct error correction and enhance model utility. Existing studies incorporate denoising layers on the server side, leveraging the post-processing properties of DP [@nasr2020improving; @wang2019dplssgd; @xu2022denoising]. However, the effectiveness of denoising is hindered by the fact that the server is ignorant of the injected noise levels. Driven by this limitation, a question arises: *can we improve the utility by conducting denoising on the user side, leveraging the knowledge of noise levels and raw IRs?* This is a nontrivial task: there is no straightforward closed-form mapping from the noise and the raw IRs to a denoised embedding, since the inputs have undergone a series of complex transformations.
|
| 10 |
+
|
| 11 |
+
In this paper, we answer this question affirmatively by proposing Split-N-Denoise (SnD), a framework that integrates split inference and denoising techniques to enhance utility under an LDP bound. To minimize the computational overhead for users, we deploy only the token representation layer on the client side. A denoise model that enhances noisy embeddings using raw inputs and noise levels is pre-trained on the server side and subsequently shared with the user. Upon receiving the output from the server, users feed it, together with their private inputs and noise levels, into the denoise model to improve the utility of the embeddings. The implementation is available at https://github.com/NusIoraPrivacy/eaas-privacy.
|
| 12 |
+
|
| 13 |
+
Our main contributions involve the following:
|
| 14 |
+
|
| 15 |
+
- We propose SnD, a framework that integrates split inference and denoising techniques to protect user's privacy during LLM inference with strong privacy guarantee. Empirical studies demonstrate that our method outperforms existing DP-based baselines by more than 10% on average and maintains utility even in extremely low privacy budget settings ($\eta\leq 0.01$).
|
| 16 |
+
|
| 17 |
+
- We design an innovative denoising method deployed on the user side. In this approach, a denoise model is pre-trained on the server side using a public dataset and synthetic noise. Subsequently, this trained model is deployed on the user side, where it leverages the specific noise levels and raw IRs provided by the user to enhance the embeddings.
|
| 18 |
+
|
| 19 |
+
# Method
|
| 20 |
+
|
| 21 |
+
Differential privacy (DP) [@dwork2006differential; @dwork2014algorithmic] is considered the gold standard for data privacy. Its definition is as follows:
|
| 22 |
+
|
| 23 |
+
::: definition
|
| 24 |
+
**Definition 1** (($\epsilon, \delta$)-Differential Privacy). A randomized mechanism $M$ with domain $D$ and range $R$ preserves $(\epsilon, \delta)$-differential privacy if and only if for any two neighboring datasets $D, D' \in D$ and for any subset $S \subseteq R$, the following inequality holds: $$\Pr[M(D) \in S] \leq e^{\epsilon} \Pr[M(D') \in S] + \delta$$ where $\epsilon$ is the privacy budget and $\delta$ is the failure probability.
|
| 25 |
+
:::
|
| 26 |
+
|
| 27 |
+
Local differential privacy (LDP) is a particular case of DP, where the server is not trusted and data privatization is conducted by the client. For any inputs $x$, $x'$ $\in D$, LDP requires a randomized mechanism $M$ to satisfy: $$\begin{equation}
|
| 28 |
+
\Pr[M(x) \in S] \leq e^{\epsilon} \Pr[M(x') \in S] + \delta
|
| 29 |
+
\end{equation}$$ for any measurable subset $S \subseteq \mathrm{Range}(M)$.
|
| 30 |
+
|
| 31 |
+
In the context of local privacy preservation, we employ $d_\chi$-privacy [@Chatzikokolakis2013], a specialized variant of local differential privacy tailored for textual data [@feyisetan2019privacy; @Qu_2021]. $d_\chi$-privacy allows one to impose a high probability of observing the same output for inputs with similar semantics. We state the formal definition below:
|
| 32 |
+
|
| 33 |
+
::: definition
|
| 34 |
+
**Definition 2** ($d_\chi$-privacy). For an input domain $X$ and an output domain $Y$, $d_\chi$ serves as a metric space over $X$. A stochastic mechanism $M: X \rightarrow Y$ is said to adhere to $\eta d_\chi$-privacy if, for any two elements $x, x' \in X$, the output distributions $M(x)$ and $M(x')$ satisfy the following inequality: $$\frac{P(M(x) = y)}{P(M(x') = y)} \leq e^{\eta d_\chi(x, x')}, \quad \forall y \in Y,$$ where $\eta \geq 0$ is a tunable privacy parameter that modulates the level of privacy protection.
|
| 35 |
+
:::
|
| 36 |
+
|
| 37 |
+
The privacy guarantee indicates that the log-likelihood ratio of producing the same outcome $y$ is bounded by $\eta d_\chi(x, x')$ for any two possible inputs $x$, $x'$.
|
| 38 |
+
|
| 39 |
+
<figure id="framework" data-latex-placement="htbp">
|
| 40 |
+
<embed src="architecture.pdf" />
|
| 41 |
+
<figcaption>Overview of our privacy-preserving SnD framework. Users first obtain an initial embedding from a local encoder, followed by a noise addition via the privatization module. This privatized embedding is then transmitted to the server for processing. Upon completion, users receive a noised output, which is subsequently refined using a pre-trained denoising model to achieve an optimal balance between privacy and utility.</figcaption>
|
| 42 |
+
</figure>
|
| 43 |
+
|
| 44 |
+
Denote $G: \mathcal{V}^n \rightarrow \mathbb{R}^d$ as the language model that maps an $n$-token input to an embedding. In Split-N-Denoise (SnD), we split the language model $G$ into a local encoder $G_l: \mathcal{V}^n \rightarrow \mathbb{R}^{n\times d}$ on the user side and a cloud encoder $G_c: \mathbb{R}^{n\times d} \rightarrow \mathbb{R}^d$ on the server side. The local encoder consists of only the token representation layer to minimize the computation cost for the user, and the server performs the subsequent operations on the IRs uploaded by the clients. The architecture of SnD is depicted in Figure [1](#framework){reference-type="ref" reference="framework"}, containing four main components:
|
| 45 |
+
|
| 46 |
+
- *Local encoder module*: the user retrieves the token embeddings of their input locally.
|
| 47 |
+
|
| 48 |
+
- *Privatization module*: the token representations are privatized by the user before being transmitted to the server to satisfy LDP.
|
| 49 |
+
|
| 50 |
+
- *Cloud encoder module*: the server performs the transformation on the privatized token representations and returns the embedding to the user.
|
| 51 |
+
|
| 52 |
+
- *Denoise module*: user conducts local denoising on the received embedding leveraging their raw inputs and specific noise levels.
|
| 53 |
+
|
| 54 |
+
We adopt $d_\chi$-privacy to privatize the token representation layers on user side. Given an input sequence $x = [x_1, \ldots, x_n]$, the token representation layer transforms $x$ into a vector sequence $X=[\boldsymbol{x}_1, \ldots, \boldsymbol{x}_n] \in \mathbb{R}^{n\times d}$ via embedding model $E \in \mathbb{R}^{|\mathcal{V}| \times d}$, where $|\mathcal{V}|$ denotes the vocabulary size and $d$ represents the dimensionality of the embeddings.
|
| 55 |
+
|
| 56 |
+
Assuming the $L_2$ norm as the distance metric, the application of $d_\chi$-privacy, parameterized by $\eta$, to a given word embedding $\boldsymbol{x}_t \in \mathbb{R}^d$ is realized by the addition of Laplacian noise $z \sim c\exp(-\eta ||z||)$, where $c$ is a real-valued constant [@wu2017bolton]. To sample $z$ from the Laplacian distribution, consider $z = l\boldsymbol{v}$, where $l$ is sampled from a Gamma distribution $\Gamma(d, 1/\eta)$ and $\boldsymbol{v}$ is uniformly sampled from the unit ball $B^d$. Consequently, the privatized representation $M(\boldsymbol{x}_t)$ can be succinctly expressed as: $$\begin{equation}
|
| 57 |
+
\label{eq:noise}
|
| 58 |
+
M(\boldsymbol{x}_t) = \boldsymbol{x}_t + \boldsymbol{z}.
|
| 59 |
+
\end{equation}$$
|
| 60 |
+
|
| 61 |
+
The supports of $z$ and thus of $M(\boldsymbol{x}_t)$ are unbounded, imposing difficulties on the subsequent denoising procedure, especially under low levels of $\eta$. To improve the performance of the denoise model introduced in Section [3.4](#sec:denoise){reference-type="ref" reference="sec:denoise"}, the client clips the $l_2$ norm of the privatized representation to within $C_{x_t}$: $$\begin{equation}
|
| 62 |
+
M' (\boldsymbol{x}_t) = M (\boldsymbol{x}_t) \cdot \min \left(1, C_{x_t}/\|M(\boldsymbol{x}_t)\|\right)
|
| 63 |
+
\end{equation}$$ where $C_{x_t}=\max_{\boldsymbol{x}_t \in \mathcal{X}_t}\|\boldsymbol{x}_t\|$ is chosen to be an upper bound on $\|\boldsymbol{x}_t\|$. The user then updates its noise matrix locally according to the clipped representations for subsequent denoising. Appendix [7.12](#app:ablation){reference-type="ref" reference="app:ablation"} demonstrates the benefits of norm clipping empirically.
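As a rough illustration, the user-side privatization step can be sketched as follows in NumPy; the helper name `privatize` and the choice of sampling the noise direction on the unit sphere (rather than the unit ball) are simplifying assumptions on our part.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(x, eta, clip_norm):
    """Add d_chi-privacy noise to a token embedding x and clip its norm."""
    d = x.shape[-1]
    l = rng.gamma(shape=d, scale=1.0 / eta)      # magnitude l ~ Gamma(d, 1/eta)
    v = rng.normal(size=d)
    v = v / np.linalg.norm(v)                    # noise direction (sphere used here for simplicity)
    noisy = x + l * v                            # M(x_t) = x_t + z
    scale = min(1.0, clip_norm / np.linalg.norm(noisy))
    clipped = noisy * scale                      # norm clipping to C_{x_t}
    return clipped, clipped - x                  # privatized embedding and effective noise

x_t = rng.normal(size=8)
m_x, z_eff = privatize(x_t, eta=0.01, clip_norm=np.linalg.norm(x_t))
```

The second return value mirrors the locally updated noise matrix described above, which the user keeps for the later denoising step.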
|
| 64 |
+
|
| 65 |
+
The following theorem states that the noise mechanism $M': \mathbb{R}^d \rightarrow \mathbb{R}^d$ adheres to $\eta d_\chi-$privacy. Refer to Appendix [7.1](#app:etadp){reference-type="ref" reference="app:etadp"} for the proof.
|
| 66 |
+
|
| 67 |
+
::: {#thm:etadp .theorem}
|
| 68 |
+
**Theorem 3**. *For any $d\geq1$ and any $\eta>0$, the mechanism $M': \mathbb{R}^d \rightarrow \mathbb{R}^d$ achieves $\eta d_\chi-$privacy with respect to $d_\chi(\boldsymbol{x}, \boldsymbol{x}') = \|\boldsymbol{x}-\boldsymbol{x}'\|$.*
|
| 69 |
+
:::
|
| 70 |
+
|
| 71 |
+
**Limitation of server-side denoising:** the denoising ability of the server is limited by its lack of knowledge regarding the noise levels. The server's capacity to remove noise is inherently in conflict with the level of privacy protection. Intuitively, if the server could produce an appropriate denoised output on its own, there is a higher probability that it could also reconstruct the original user input. Proposition [4](#prop:mseserver){reference-type="ref" reference="prop:mseserver"} below gives a lower bound on the mean square error (MSE) of server-side denoising algorithms. The proof can be found in Appendix [7.2.1](#app:mseserve){reference-type="ref" reference="app:mseserve"}.
|
| 72 |
+
|
| 73 |
+
::: {#prop:mseserver .proposition}
|
| 74 |
+
**Proposition 4**. *Let $\boldsymbol{y}\in \mathcal{Y}\subseteq \mathbb{R}^k$ be the original vector without noises added, and let $\hat{\boldsymbol{y}}\in \mathbb{R}^k$ be the noisy vector obtained under $\eta d_{\chi}$-privacy mechanism. Denote $D_s: \mathbb{R}^k \rightarrow \mathbb{R}^k$ as the denoising algorithm run by the server. Suppose $D_s$ is unbiased and the token embeddings are bounded by $B_x$: $$\begin{equation}
|
| 75 |
+
\|\boldsymbol{x}'-\boldsymbol{x}\|\leq B_x, \forall \boldsymbol{x}',\ \boldsymbol{x}
|
| 76 |
+
\end{equation}$$ , then: $$\begin{equation}
|
| 77 |
+
\mathbb{E} [\|D_s(\hat{\boldsymbol{y}}) - \boldsymbol{y}\|/k] \geq \frac{\sum_{i=1}^d \rm{diam}_i (\mathcal{Y})^2/4k}{e^{\eta B_x}-1}
|
| 78 |
+
\end{equation}$$ where $\rm{diam}_i (\mathcal{Y}) = \sup_{\boldsymbol{y}, \boldsymbol{y}'\in \mathcal{Y}: \boldsymbol{y}_j=\boldsymbol{y}_j' \forall j\neq i}|\boldsymbol{y}_i-\boldsymbol{y}_i'|$ is the diameter of $\mathcal{Y}$ in the $i$-th dimension.*
|
| 79 |
+
:::
|
| 80 |
+
|
| 81 |
+
::: remark
|
| 82 |
+
*Remark 5*. The vector $\boldsymbol{y}$ can be: (i) the token representations uploaded from users, (ii) output embeddings, or (iii) any intermediate results returned by the language model based on the token embeddings. The instantiation of $\boldsymbol{y}$ is determined by the layer at which the server runs denoising algorithm.
|
| 83 |
+
:::
|
| 84 |
+
|
| 85 |
+
To address this limitation, we propose a denoising framework where users conduct error correction on the noisy embeddings using their specific noises and raw inputs. Given the black-box nature of the neural network transformation applied to the privatized token representations, we propose to train a transformer-based model for embedding denoising.
|
| 86 |
+
|
| 87 |
+
Let $\Tilde{X}=[\boldsymbol{\Tilde{x}}_1, \ldots, \boldsymbol{\Tilde{x}}_n]$, $Z=[\boldsymbol{z}_1, \ldots, \boldsymbol{z}_n]$ $\in$ $\mathbb{R}^{n\times d}$ denote, respectively, the privatized token representations and the noise matrix. Note that the noise vector is updated using the clipped privatized token embeddings, $\boldsymbol{z} = M'(\boldsymbol{x}_t) - \boldsymbol{x}_t$. After a series of operations, the server returns to the user a noisy embedding $\boldsymbol{e}_n$ capturing the context of the input tokens. The denoise model is parameterized by an $L$-layer transformer decoder, $D: \mathbb{R}^{(2n+1)\times d}\rightarrow \mathbb{R}^d$: $$\begin{equation}
|
| 88 |
+
\boldsymbol{e}_d = D(\boldsymbol{e}_n, \Tilde{X}, Z)
|
| 89 |
+
\end{equation}$$
|
| 90 |
+
|
| 91 |
+
The input to the denoise model $H_0$ is a concatenation of vectors: $$\begin{equation}
|
| 92 |
+
H_0 = [\boldsymbol{e}_n; \boldsymbol{\Tilde{x}}_1, \ldots, \boldsymbol{\Tilde{x}}_n; \boldsymbol{z}_1, \ldots, \boldsymbol{z}_n]
|
| 93 |
+
\end{equation}$$
|
| 94 |
+
|
| 95 |
+
Let $\boldsymbol{h}_{t}^l$ represent the hidden state of the $t^{th}$ vector at layer $l$. This state is computed using the following recursive relation: $$\begin{equation}
|
| 96 |
+
\boldsymbol{h}_{t}^l = \boldsymbol{h}_{t}^{l-1} + \boldsymbol{a}_{t}^{l-1} + \boldsymbol{m}_{t}^{l-1}
|
| 97 |
+
\end{equation}$$ where $$\begin{equation}
|
| 98 |
+
\begin{gathered}
|
| 99 |
+
\boldsymbol{a}_{t}^{l-1} = attn^l (\boldsymbol{h}_1^{l-1}, \boldsymbol{h}_2^{l-1}, ..., \boldsymbol{h}_{2n+1}^{l-1}),\\
|
| 100 |
+
\boldsymbol{m}_{t}^{l-1} = W_{proj}^l \sigma (W_{fc}^l \gamma (\boldsymbol{a}_{t}^{l} + \boldsymbol{h}_{t}^{l-1}))
|
| 101 |
+
\end{gathered}
|
| 102 |
+
\end{equation}$$ The denoised embedding is obtained directly from the hidden state representation for $\boldsymbol{e}_n$ at the final layer: $$\begin{equation}
|
| 103 |
+
\boldsymbol{e}_d = \boldsymbol{h}_0^L
|
| 104 |
+
\end{equation}$$ We visualize the architecture of the denoise model in Figure [3](#denoisemod){reference-type="ref" reference="denoisemod"}. Intuitively, the noisy embedding undergoes $L$ steps to transform into the denoised embedding. In each step, the transformation is conditioned on the feature representations of raw IRs as well as specific noises.
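A minimal PyTorch sketch of this component is given below; it uses a stack of standard self-attention layers in place of the paper's exact decoder blocks, and the layer sizes are illustrative rather than the reported configuration.

```python
import torch
import torch.nn as nn

class DenoiseModel(nn.Module):
    def __init__(self, d_model=768, n_layers=4, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, e_n, x_tilde, z):
        # e_n: (B, d) noisy embedding; x_tilde, z: (B, n, d) privatized tokens and noise.
        h0 = torch.cat([e_n.unsqueeze(1), x_tilde, z], dim=1)  # H_0, shape (B, 2n+1, d)
        h = self.blocks(h0)                                     # L attention layers
        return h[:, 0, :]                                       # hidden state at e_n -> e_d

model = DenoiseModel()
e_d = model(torch.randn(2, 768), torch.randn(2, 16, 768), torch.randn(2, 16, 768))
```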
|
| 105 |
+
|
| 106 |
+
To train the denoise model, the server samples a set of noises to be added to the token representations of a public corpus. Subsequently, the clean embedding $\boldsymbol{e}_c$ and the noisy embedding $\boldsymbol{e}_n$ are computed from, respectively, the raw and privatized token representations: $$\begin{equation}
|
| 107 |
+
\boldsymbol{e}_c = G(X),\ \boldsymbol{e}_n = G(\Tilde{X})
|
| 108 |
+
\end{equation}$$
|
| 109 |
+
|
| 110 |
+
The denoise model is trained on the above datasets with the objective to minimize the deviation between denoised and clean embeddings: $$\begin{equation}
|
| 111 |
+
\min_D \mathbb{E} [\|D(\boldsymbol{e}_n, \Tilde{X}, Z) - \boldsymbol{e}_c\|^2]
|
| 112 |
+
\end{equation}$$
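A sketch of this server-side pre-training loop, reusing the `DenoiseModel` sketch above, could look as follows; the data loader yielding clean embeddings, noisy embeddings, privatized tokens, and noise matrices for a public corpus is an assumed component, not part of any released code.

```python
import torch

def pretrain_denoiser(model, loader, epochs=1, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for e_c, e_n, x_tilde, z in loader:      # clean emb., noisy emb., priv. tokens, noise
            e_d = model(e_n, x_tilde, z)
            loss = ((e_d - e_c) ** 2).mean()     # min E ||D(e_n, X~, Z) - e_c||^2
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```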
|
| 113 |
+
|
| 114 |
+
The pretrained model is shared with users to conduct denoising on the received embeddings locally. It is important to note that the denoise model does not expose any information regarding user data. This is primarily due to the fact that the model's training is carried out exclusively on a public dataset, rendering it irrelevant to users' private inputs.
|
| 115 |
+
|
| 116 |
+
In this section, we analyze the communication complexity and user computation complexity of our framework.
|
| 117 |
+
|
| 118 |
+
*Communication complexity*: the communication cost can be broken down as follows: (1) the user uploads the token representations to the server ($O(nd)$ messages); (2) the server shares the embedding with the user ($O(d)$ messages). Hence, the total communication overhead is $O(nd)$.
|
| 119 |
+
|
| 120 |
+
*User computation complexity*: the user's computation cost can be broken down as follows: (1) retrieving token embeddings from the input text ($O(n)$ complexity); (2) performing local denoising with the transformer-based model ($O(n^2dL)$ complexity [@vaswani2017attention]). Therefore, the user's computation cost adds up to $O(n^2dL)$.
|
| 121 |
+
|
| 122 |
+
:::: table*
|
| 123 |
+
::: small
|
| 124 |
+
+-------------+------------------------------+----------------------------+------------------------------+---+
|
| 125 |
+
| | DistillBert (66m) | Bert Base (110m) | Bert Large (340m) | |
|
| 126 |
+
+:============+:========+:========+:=========+:=======+:=======+:=========+:========+:========+:=========+:==+
|
| 127 |
+
| $\eta$ | 100 | 500 | $\infty$ | 100 | 500 | $\infty$ | 100 | 500 | $\infty$ | |
|
| 128 |
+
+-------------+---------+---------+----------+--------+--------+----------+---------+---------+----------+---+
|
| 129 |
+
| CoLA | 0.693 | 0.694 | 0.701 | 0.688 | 0.694 | 0.751 | 0.697 | 0.699 | 0.757 | |
|
| 130 |
+
+-------------+---------+---------+----------+--------+--------+----------+---------+---------+----------+---+
|
| 131 |
+
| QQP | 0.632 | 0.649 | 0.683 | 0.667 | 0.688 | 0.728 | 0.676 | 0.684 | 0.706 | |
|
| 132 |
+
+-------------+---------+---------+----------+--------+--------+----------+---------+---------+----------+---+
|
| 133 |
+
| MRPC | 0.683 | 0.691 | 0.695 | 0.689 | 0.725 | 0.742 | 0.684 | 0.689 | 0.701 | |
|
| 134 |
+
+-------------+---------+---------+----------+--------+--------+----------+---------+---------+----------+---+
|
| 135 |
+
| RTE | 0.578 | 0.580 | 0.592 | 0.592 | 0.610 | 0.616 | 0.590 | 0.601 | 0.621 | |
|
| 136 |
+
+-------------+---------+---------+----------+--------+--------+----------+---------+---------+----------+---+
|
| 137 |
+
:::
|
| 138 |
+
::::
|
| 139 |
+
|
| 140 |
+
:::: table*
|
| 141 |
+
::: small
|
| 142 |
+
+-------------+----------------------------------+----------------------------------+-------------------------------------+---+
|
| 143 |
+
| | T5 Small (60m) | T5 Base (220m) | T5 Large (770m) | |
|
| 144 |
+
+:============+:======+:======+:======+:=========+:======+:======+:======+:=========+:=======+:=======+:=======+:=========+:==+
|
| 145 |
+
| $\eta$ | 0.001 | 0.01 | 1 | $\infty$ | 0.001 | 0.01 | 1 | $\infty$ | 0.001 | 0.01 | 1 | $\infty$ | |
|
| 146 |
+
+-------------+-------+-------+-------+----------+-------+-------+-------+----------+--------+--------+--------+----------+---+
|
| 147 |
+
| CoLA | 0.69 | 0.69 | 0.69 | 0.71 | 0.69 | 0.70 | 0.70 | 0.73 | 0.70 | 0.70 | 0.70 | 0.75 | |
|
| 148 |
+
+-------------+-------+-------+-------+----------+-------+-------+-------+----------+--------+--------+--------+----------+---+
|
| 149 |
+
| QQP | 0.68 | 0.69 | 0.68 | 0.71 | 0.66 | 0.67 | 0.69 | 0.72 | 0.66 | 0.67 | 0.70 | 0.71 | |
|
| 150 |
+
+-------------+-------+-------+-------+----------+-------+-------+-------+----------+--------+--------+--------+----------+---+
|
| 151 |
+
| MRPC | 0.68 | 0.69 | 0.69 | 0.70 | 0.69 | 0.69 | 0.70 | 0.71 | 0.68 | 0.69 | 0.69 | 0.71 | |
|
| 152 |
+
+-------------+-------+-------+-------+----------+-------+-------+-------+----------+--------+--------+--------+----------+---+
|
| 153 |
+
| RTE | 0.55 | 0.56 | 0.58 | 0.60 | 0.57 | 0.58 | 0.62 | 0.63 | 0.57 | 0.59 | 0.61 | 0.62 | |
|
| 154 |
+
+-------------+-------+-------+-------+----------+-------+-------+-------+----------+--------+--------+--------+----------+---+
|
| 155 |
+
:::
|
| 156 |
+
::::
|
| 157 |
+
|
| 158 |
+
:::: table*
|
| 159 |
+
::: small
|
| 160 |
+
+:------------+:------+:------+:---------+:------+:------+:---------+:------+:------+:---------+:-------+:---------+
|
| 161 |
+
| | GPT2 Small | GPT2 Medium | GPT2 large | GPT2 Xlarge |
|
| 162 |
+
+-------------+--------------------------+--------------------------+--------------------------+-------------------+
|
| 163 |
+
| | (120m) | (345m) | (774m) | (1.5b) |
|
| 164 |
+
+-------------+-------+-------+----------+-------+-------+----------+-------+-------+----------+--------+----------+
|
| 165 |
+
| $\eta$ | 1 | 100 | $\infty$ | 1 | 100 | $\infty$ | 1 | 100 | $\infty$ | 100 | $\infty$ |
|
| 166 |
+
+-------------+-------+-------+----------+-------+-------+----------+-------+-------+----------+--------+----------+
|
| 167 |
+
| CoLA | 0.688 | 0.700 | 0.709 | 0.690 | 0.698 | 0.728 | 0.700 | 0.701 | 0.724 | 0.693 | 0.766 |
|
| 168 |
+
+-------------+-------+-------+----------+-------+-------+----------+-------+-------+----------+--------+----------+
|
| 169 |
+
| QQP | 0.645 | 0.657 | 0.716 | 0.647 | 0.652 | 0.711 | 0.637 | 0.650 | 0.721 | 0.650 | 0.741 |
|
| 170 |
+
+-------------+-------+-------+----------+-------+-------+----------+-------+-------+----------+--------+----------+
|
| 171 |
+
| MRPC | 0.688 | 0.691 | 0.720 | 0.688 | 0.693 | 0.710 | 0.674 | 0.691 | 0.701 | 0.686 | 0.705 |
|
| 172 |
+
+-------------+-------+-------+----------+-------+-------+----------+-------+-------+----------+--------+----------+
|
| 173 |
+
| RTE | 0.556 | 0.563 | 0.581 | 0.567 | 0.578 | 0.583 | 0.581 | 0.606 | 0.611 | 0.584 | 0.592 |
|
| 174 |
+
+-------------+-------+-------+----------+-------+-------+----------+-------+-------+----------+--------+----------+
|
| 175 |
+
:::
|
| 176 |
+
::::
|
2310.14017/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2310.14017/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,110 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Time series data is crucial in various real-world applications, ranging from finance [1, 2, 3] and engineering [4, 5] to healthcare [6, 7]. Unlike domains such as computer vision [8, 9] and natural language processing [10, 11], where human-recognizable features exist, time series data often lacks readily discernible patterns, making data labeling challenging. Consequently, the scarcity of labeled data poses a significant hurdle to effectively utilizing time series for analysis and classification tasks.
|
| 4 |
+
|
| 5 |
+
To address the paucity of labeled data in time series analysis, self-supervised contrastive learning has emerged as a promising approach. For example, TimeCLR [12] proposes a DTW data augmentation for time series data; TS2vec [13] designs a cropping and masking mechanism to form positive pairs; ExpCLR [14] introduces a novel loss function to utilize continuous expert features. By leveraging the inherent consistency within unlabeled data, contrastive learning algorithms enable the extraction of effective representations without relying on explicit labels. This paradigm shift opens up possibilities for overcoming the data scarcity issue and enhancing the capabilities of time series analysis.
|
| 6 |
+
|
| 7 |
+
|
| 8 |
+
|
| 9 |
+
Despite recent advancements in contrastive learning methods for time series, existing approaches fail to exploit the full potential of medical time series data, such as electroencephalogram (EEG) signals. Unlike conventional time series data, medical time series often exhibit more data levels (Figure 1), including patient, trial, sample, and observation levels. Current contrastive learning techniques exclusively employ subsets of these levels (as illustrated in Table 1). Additionally, many of these methods are tailored to specific data types, which restricts their capacity to capture the rich complexity of medical time series. For example, CLOCS [15] presents a contrastive learning method for ECG using sample and patient levels. Mixing-up [16] captures sample-level consistency through a mixing data augmentation scheme. TNC [17] exploits trial-level consistency by contrasting neighboring samples in the same trial as positive pairs. None of them leverages all the levels exhibited in the medical time series.
|
| 10 |
+
|
| 11 |
+
After reviewing existing contrastive learning methods within the time series domain, we repeatedly posed a pivotal question to ourselves: Can we design a straightforward yet broadly applicable contrastive learning framework that can be adapted to all forms of medical time series data, akin to the classical model SimCLR [18] in the domain of contrastive learning? Our objective is to craft an innovative framework that utilizes all information within medical time series in the context of self-supervised contrastive learning. It enables us to harness patient and trial information to learn consistency across instances, while leveraging sample- and observation-level information to facilitate conventional instance discrimination.
|
| 12 |
+
|
| 13 |
+
In this paper, we propose a hierarchical framework, COMET, that systematically leverages all four levels of medical time series, namely patient, trial, sample, and observation, to reduce the reliance on labeled data. By incorporating self-supervised contrastive learning, our method aims to bridge the gap between the limited availability of labeled data and the need for robust and generalizable models in medical time series analysis. We conduct extensive experiments with six baselines on three diverse datasets in a challenging patient-independent setting. COMET outperforms SOTAs by 14% and 13% F1 score with label fractions of 10% and 1%, respectively, on EEG-based Alzheimer's detection.
|
| 14 |
+
|
| 15 |
+

|
| 16 |
+
|
| 17 |
+
**Figure 1: Structure of medical time series.** Medical time series commonly have four levels (coarse to fine): patient, trial, sample, and observation. An observation is a single value in univariate time series and a vector in multivariate time series.
|
| 18 |
+
|
| 19 |
+
Further, COMET outperforms SOTAs by 0.17% and 2.66% F1 score with label fractions of 10% and 1%, respectively, on detecting myocardial infarction with ECG. Finally, COMET outperforms SOTAs by 2% and 8% F1 score with label fractions of 10% and 1%, respectively, in the EEG-based diagnosis of Parkinson's disease. The results of downstream tasks demonstrate the effectiveness and stability of our method.
|
| 20 |
+
|
| 21 |
+
# Method
|
| 22 |
+
|
| 23 |
+
In this section, we clarify the key concepts of observation (or measurement), sample (or segment), trial (or recording), and patient (or subject) in the context of medical time series (Figure 1). For better understanding, we illustrate the concepts with an example of Electroencephalography (EEG) signals for Alzheimer's Disease diagnosis (details in Appendix A).
|
| 24 |
+
|
| 25 |
+
Definition 1: Observation. *An observation* $x_{i,t} \in \mathbb{R}^F$ *in medical time series data represents a single data point or a vector captured at a specific timestamp* $t$*.* Here we use $i$ to denote the sample index (see Definition 2) and $t$ to denote the timestamp. It may record physiological status, laboratory test results, vital signs, or other measurable health indicators. The observation is a single real value for a univariate time series and a vector for a multivariate time series. Here, $F$ is the feature dimension if it is a multivariate time series.
|
| 26 |
+
|
| 27 |
+
Definition 2: Sample. *A sample* $x_i = \{x_{i,t} \mid t = 1, \cdots, T\}$ *is a sequence of consecutive observations*, typically measured at regular intervals over a specified period ($T$ timestamps). It can also be called a *segment* or *window*. Here we use $i$ to denote the sample index. In the medical time series, a sample might consist of a sequence of heart rate measurements or blood pressure readings.
|
| 28 |
+
|
| 29 |
+
Definition 3: Trial. *A trial* $r_i$ *is a collection of consecutive samples.* It can also be called a *record*. Here we use $i$ to denote the trial ID. In medical time series, a trial is a continuous set of observations collected over a not-short period (*e.g.*, 30 minutes). Therefore, a trial is generally too long (*e.g.*, hundreds of thousands of observations) to feed into deep learning models for representation learning directly and is usually split into shorter subsequences (*i.e.*, samples/segments). To represent the aggregate of samples stemming from a particular trial $r_i$ with trial ID $i$, we employ the notation $\mathcal{R}_i$.
|
| 30 |
+
|
| 31 |
+
Definition 4: Patient. *A patient* $p_i$ *represents a collection of multiple trials stemming from a single patient.* It can also be called a *subject*. Here we use $i$ to denote the patient ID. It is important to note that trials for a given patient may exhibit variations due to differing data collection timeframes, sensor placements, patient conditions, and other contributing factors. As shown in Definition 3, a trial is typically divided into many samples for better representation learning. In practical scenarios, a patient, which constitutes a cluster of trials, is also divided into samples that may share identical or distinct trial IDs but maintain the same patient ID. To represent the aggregate of samples stemming from a particular patient $p_i$ with the corresponding patient ID $i$, we employ the notation $\mathcal{P}_i$.
|
| 32 |
+
|
| 33 |
+

|
| 34 |
+
|
| 35 |
+
Figure 2: Overview of COMET approach. Our COMET model consists of four contrastive blocks, each illustrating the formulation of positive pairs and negative pairs at different data levels. In the observation-level contrastive, an observation $x_{i,t}$ and its augmented view $\tilde{x}_{i,t}$ serve as a positive pair. Similarly, in the sample-level contrastive, a sample $x_i$ and its augmented view $\tilde{x}_i$ form a positive pair. Moving to the trial-level contrastive, two samples $x$ and $x^+$ from the same trial $r_i$ are considered to be a positive pair. The patient-level contrastive follows a similar pattern, where two samples $x$ and $x^+$ from the same patient $p_i$ are regarded as a positive pair. Positive and corresponding negative pairs will be utilized to build contrastive losses in embedding space after being processed by encoder $G$.
|
| 36 |
+
|
| 37 |
+
In this work, we propose a novel hierarchical contrastive framework to learn representative and generalizable embeddings by comprehensively exploring instance discrimination at observation and sample levels and harnessing consistency across instances at trial and patient levels. Although we elaborate the proposed COMET in the context of medical time series, we note that our model can be extended to other time series beyond healthcare as long as extra information is available. For example, a climate dataset contains multiple meteorological satellites, each satellite contains multiple measuring units, each unit contains multiple sensors, and every sensor measures a specific observation at a certain timestamp. The key is to utilize all available information except label data, such as patient IDs, for contrastive pre-training. To adapt our approach to other domains, researchers must consider a crucial question: Does the dataset have additional information beyond sample labels? If so, can this information be harnessed for contrastive learning? The satellite sensor example underscores the potential existence of supplementary information even in non-medical domains.
|
| 38 |
+
|
| 39 |
+
**Problem (Self-Supervised Representation Learning For Medical Time Series).** Let an unlabeled dataset $\mathcal{D}$ consist of a set of patients, where each patient $p_i$ has multiple trials, each trial $r_i$ can be segmented into many samples, and each sample $x_i$ comprises a series of observations. We aim to pre-train an encoder G that exploits data consistency at all available levels in a self-supervised contrastive manner. For a given time series sample $x_i \in \mathbb{R}^{T \times F}$ with T timestamps and F feature dimensions, the encoder G learns a sample-level representation $h_i \in \mathbb{R}^{T \times K}$ , where $h_{i,t} \in \mathbb{R}^K$ is the observation-level representation at timestamp t with K dimensions.
|
| 40 |
+
|
| 41 |
+
By exploiting hierarchical consistency at multiple data levels, we aim to learn a representation $h_i$ that is both representative (yielding good performance in downstream tasks) and generalizable (maintaining stability across different patients). Depending on the fine-tuning settings [18], a specific fraction of labels $y_i$ corresponding to samples $x_i$ are necessary.
|
| 42 |
+
|
| 43 |
+
In this section, we first present our assumption of data consistency behind designing a hierarchical contrastive framework. Then, we describe the architecture of the proposed model COMET (Figure 2).
|
| 44 |
+
|
| 45 |
+
Capturing data consistency is crucial in the development of a contrastive learning framework [13]. Data consistency refers to the shared commonalities preserved within the data, which provide a supervisory signal to guide model optimization. Contrastive learning captures data consistency by contrasting positive and negative data pairs, where positive pairs share commonalities and negative pairs do not. We propose consistency across four data levels: observation, sample, trial, and patient, from fine-grained to coarse-grained in the medical time series. Although we present four levels here, our model can easily be adapted to accommodate specific datasets by adding or removing data levels.
|
| 46 |
+
|
| 47 |
+
**Observation-level data consistency.** We assume a slightly augmented observation (e.g., channel masked) will carry similar information as the original observation [13]. We use $\boldsymbol{x}_{i,t}$ as the anchor observation at timestamp t, and $\boldsymbol{x}_{i,t^-}$ as the observation at another timestamp $t^-$ in the sample $\boldsymbol{x}_i$ . We consider the anchor observation $\boldsymbol{x}_{i,t}$ and an augmented observation $\widetilde{\boldsymbol{x}}_{i,t}$ as positive pairs $(\boldsymbol{x}_{i,t},\widetilde{\boldsymbol{x}}_{i,t})$ (with closer embeddings). Conversely, we consider the original observation $\boldsymbol{x}_{i,t}$ and the observations $\widetilde{\boldsymbol{x}}_{i,t^-}$ and $\boldsymbol{x}_{i,t^-}$ at another timestamp $t^-$ as negative pairs $(\boldsymbol{x}_{i,t},\widetilde{\boldsymbol{x}}_{i,t^-}), (\boldsymbol{x}_{i,t},\boldsymbol{x}_{i,t^-})$ , with distant embeddings.
|
| 48 |
+
|
| 49 |
+
**Sample-level data consistency.** The sample-level consistency is based on our assumption that a slightly perturbed sample (e.g., temporally masked) should carry similar information as the original sample [18, 16, 26]. We consider the anchor sample $x_i$ and its augmented view $\tilde{x}_i$ as positive pair $(x_i, \tilde{x}_i)$ . We regard the anchor sample $x_i$ and a different sample $x_j$ and its augmented view $\tilde{x}_j$ as negative pairs: $(x_i, \tilde{x}_j)$ and $(x_i, x_j)$ .
|
| 50 |
+
|
| 51 |
+
**Trial-level data consistency.** We assume that samples sliced from the same trial should carry similar information compared to those obtained from different trials. For simplicity, we use x to denote the anchor sample and $x^+$ to denote a sample from the same trial $r_i$ as the anchor sample, while $x^-$ to denote a sample from another trial $r_j$ . In other words, we have $\{x, x^+\} \in \mathcal{R}_i$ and $x^- \in \mathcal{R}_j$ . We treat sample x and the sample $x^+$ from the same trial as positive pair $(x, x^+)$ . We regard sample x and the sample $x^-$ from different trials as negative pair $(x, x^-)$ .
|
| 52 |
+
|
| 53 |
+
**Patient-level data consistency.** We assume samples originating from the same patient are likely to contain similar information when compared to those from different patients. Here, we use $\boldsymbol{x}$ to denote the anchor sample and $\boldsymbol{x}^+$ to denote a sample from the same patient $\boldsymbol{p}_i$ , while $\boldsymbol{x}^-$ from another patient $\boldsymbol{p}_j$ . In other words, there are $\{\boldsymbol{x},\boldsymbol{x}^+\}\in\mathcal{P}_i$ and $\boldsymbol{x}^-\in\mathcal{P}_j$ . We have positive pair $(\boldsymbol{x},\boldsymbol{x}^+)$ including samples from the same patient and negative pair $(\boldsymbol{x},\boldsymbol{x}^-)$ that from different patients.
|
| 54 |
+
|
| 55 |
+
**Disease-level data consistency.** For completeness, we introduce disease-level data consistency, which suggests that samples associated with the same type of disease should exhibit shared patterns, even when collected from different patients in different ways. However, capturing disease-level consistency requires ground truth labels, which are not available in a self-supervised approach. As a result, we do NOT employ disease-level consistency in this paper. Nevertheless, it can be adapted for semi-supervised or supervised contrastive learning and may prove beneficial in learning domain-adaptable representations for certain diseases across patients and even datasets.
|
| 56 |
+
|
| 57 |
+
A common principle underlying all definitions is that the X-level data consistency refers to the positive pair belonging to the same X, where X could be observation, sample, trial, patient, or disease.
|
| 58 |
+
|
| 59 |
+
We assume that *each patient is associated with only one label*, such as suffering from a specific disease, which implies that all samples from the same patient essentially originate from the same distribution. However, in cases where data from a patient is derived from multiple distributions (e.g., a patient could perform various daily activities and thus be associated with multiple labels), the assumptions of trial-level and patient-level consistency are not satisfied. In that case, the user can switch on only the observation-level and sample-level consistency.
|
| 60 |
+
|
| 61 |
+
Building upon the concepts of data consistency, we introduce four contrastive blocks corresponding to the four data levels. Our model is highly flexible, allowing users to enable or disable any of the blocks based on the requirements of a specific task or dataset.
|
| 62 |
+
|
| 63 |
+
For a given time series sample $x_i$ , we apply data augmentation (such as masking) to generate an augmented sample $\tilde{x}_i$ [25, 43]. We input the original sample $x_i$ and its augmented view $\tilde{x}_i$ into the
|
| 64 |
+
|
| 65 |
+
contrastive encoder G to obtain their respective representations $h_i = G(\boldsymbol{x}_i)$ and $\widetilde{h}_i = G(\widetilde{\boldsymbol{x}}_i)$ . It is important to note that we apply data augmentation to the samples, which indirectly extends to augmenting the observations, simplifying the encoding process. To capture observation-level consistency, we assume that, after being processed by encoder G, the representation of observation $\boldsymbol{x}_{i,t}$ is close to the representation of the augmented observation $\widetilde{\boldsymbol{x}}_{i,t}$ . In contrast, it should be distant from the representations of observations $\boldsymbol{x}_{i,t^-}$ and $\widetilde{\boldsymbol{x}}_{i,t^-}$ originating from any other timestamp $t^-$ . Specifically, our positive pair is $(\boldsymbol{x}_{i,t},\widetilde{\boldsymbol{x}}_{i,t})$ and negative pairs are $(\boldsymbol{x}_{i,t},\boldsymbol{x}_{i,t^-})$ and $(\boldsymbol{x}_{i,t},\widetilde{\boldsymbol{x}}_{i,t^-})$ .
|
| 66 |
+
|
| 67 |
+
**Observation-level contrastive loss.** The observation-level contrastive loss $\mathcal{L}_O$ [13] for the input sample $x_i$ is defined as:
|
| 68 |
+
|
| 69 |
+
$$\mathcal{L}_{O} = \mathbb{E}_{\boldsymbol{x}_{i} \in \mathcal{D}} \left[ \mathbb{E}_{t \in \mathcal{T}} \left[ -\log \frac{\exp(\boldsymbol{h}_{i,t} \cdot \widetilde{\boldsymbol{h}}_{i,t})}{\sum_{t^{-} \in \mathcal{T}} (\exp(\boldsymbol{h}_{i,t} \cdot \widetilde{\boldsymbol{h}}_{i,t^{-}}) + \mathbb{1}_{[t \neq t^{-}]} \exp(\boldsymbol{h}_{i,t} \cdot \boldsymbol{h}_{i,t^{-}}))} \right] \right]$$
|
| 70 |
+
(1)
|
| 71 |
+
|
| 72 |
+
where $\mathcal{T} = \{1, \dots, T\}$ is the set of all timestamps in sample $x_i$ and $\cdot$ denotes the dot product. The $\mathbb{1}_{[t \neq t^-]}$ is an indicator function that equals 0 when $t = t^-$ and 1 otherwise.
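A minimal PyTorch sketch of Eq. (1), assuming the encoder outputs $h_i = G(x_i)$ and $\widetilde{h}_i = G(\widetilde{x}_i)$ are given as tensors of shape (batch, T, K), is shown below; the function name is ours.

```python
import torch

def observation_level_loss(h, h_aug):
    # h, h_aug: (B, T, K) representations of a sample and its augmented view.
    sim_cross = torch.einsum("btk,bsk->bts", h, h_aug)   # h_{i,t} . h~_{i,t^-}
    sim_self = torch.einsum("btk,bsk->bts", h, h)        # h_{i,t} . h_{i,t^-}
    T = h.shape[1]
    eye = torch.eye(T, dtype=torch.bool, device=h.device)
    # denominator of Eq. (1): all cross terms plus self terms with t != t^-
    denom = sim_cross.exp().sum(dim=-1) + sim_self.exp().masked_fill(eye, 0).sum(dim=-1)
    numer = torch.diagonal(sim_cross, dim1=1, dim2=2).exp()  # positive pair at t^- = t
    return (-(numer / denom).log()).mean()                   # average over t and samples

loss_o = observation_level_loss(torch.randn(4, 50, 32), torch.randn(4, 50, 32))
```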
|
| 73 |
+
|
| 74 |
+
For an input time series sample $x_i$ and its augmented view $\tilde{x}_i$ , we calculate their representations through $h_i = G(x_i)$ and $\tilde{h}_i = G(\tilde{x}_i)$ . The augmentation applied here could be the same as or different from the augmentation used in Section 4.2. We assume that after passing through the encoder G, the representation of the sample $x_i$ is close to the representation of its augmented view $\tilde{x}_i$ , while far away from the representations of any other samples $x_j$ and $\tilde{x}_j$ . In specific, our positive pair is $(x_i, \tilde{x}_i)$ , and negative pairs are $(x_i, \tilde{x}_j)$ and $(x_i, x_j)$ .
|
| 75 |
+
|
| 76 |
+
**Sample-level contrastive loss.** The sample-level contrastive loss $\mathcal{L}_S$ [18, 43] for the input sample $x_i$ is defined as:
|
| 77 |
+
|
| 78 |
+
$$\mathcal{L}_{S} = \mathbb{E}_{\boldsymbol{x}_{i} \in \mathcal{D}} \left[ -\log \frac{\exp(\boldsymbol{h}_{i} \cdot \widetilde{\boldsymbol{h}}_{i})}{\sum_{j=1}^{|\mathcal{D}|} (\exp(\boldsymbol{h}_{i} \cdot \widetilde{\boldsymbol{h}}_{j}) + \mathbb{1}_{[i \neq j]} \exp(\boldsymbol{h}_{i} \cdot \boldsymbol{h}_{j}))} \right]$$
|
| 79 |
+
(2)
|
| 80 |
+
|
| 81 |
+
where $|\mathcal{D}|$ represents the total number of samples in the dataset $\mathcal{D}$ and $\cdot$ denotes dot product. The $\mathbb{1}_{[i\neq j]}$ is an indicator function that equals 0 when i=j and 1 otherwise.
|
| 82 |
+
|
| 83 |
+
For an input sample $x \in \mathcal{R}_i$ , where $\mathcal{R}_i$ is a collection of all samples segmented from trial $r_i$ , we feed it into the contrastive encoder G to generate a sample-level representation h = G(x). To capture trial-level data consistency, we assume that the representation of the anchor sample $x \in \mathcal{R}_i$ is close to the representation of a sample $x^+$ that also comes from the trial $r_i$ . In contrast, the representation of the anchor sample x is far away from the representation of a sample $x^-$ that comes from a different trial $r_j$ , where $x^- \in \mathcal{R}_j$ . In other words, we have positive pair $(x, x^+)$ and negative pair $(x, x^-)$ .
|
| 84 |
+
|
| 85 |
+
**Trial-level contrastive loss.** The trial-level contrastive loss $\mathcal{L}_R$ [15, 18] for the input sample x is defined as:
|
| 86 |
+
|
| 87 |
+
$$\mathcal{L}_{R} = \mathbb{E}_{\boldsymbol{x} \in \mathcal{D}} \left[ \mathbb{E}_{\boldsymbol{x}^{+} \in \mathcal{R}_{i}} \left[ -\log \frac{\exp(\operatorname{sim}(\boldsymbol{h}, G(\boldsymbol{x}^{+}))/\tau)}{\sum_{j=1}^{J} \sum_{\boldsymbol{x}^{-} \in \mathcal{R}_{j}} (\exp(\operatorname{sim}(\boldsymbol{h}, G(\boldsymbol{x}^{-}))/\tau))} \right] \right]$$
|
| 88 |
+
(3)
|
| 89 |
+
|
| 90 |
+
where J is the total number of trials in the dataset $\mathcal{D}$ . The $\operatorname{sim}(\boldsymbol{u}, \boldsymbol{v}) = \boldsymbol{u}^T \boldsymbol{v} / \|\boldsymbol{u}\| \|\boldsymbol{v}\|$ denotes the cosine similarity, and $\tau$ is a temperature parameter to adjust the scale. The $G(\boldsymbol{x}^+)$ and $G(\boldsymbol{x}^-)$ are learned representations of samples $\boldsymbol{x}^+ \in \mathcal{R}_i$ and $\boldsymbol{x}^- \in \mathcal{R}_j$ , respectively. To measure the trial-level loss for sample $\boldsymbol{x}$ , we iterate over all the $\boldsymbol{x}^+$ in $\mathcal{R}_i$ and average across the $|\mathcal{R}_i| - 1$ positive pairs.
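The trial-level loss (and, by replacing trial IDs with patient IDs, the patient-level loss of Eq. (4)) can be sketched as a simplified in-batch version, assuming PyTorch, pooled sample representations of shape (batch, K), and an integer group ID per sample; this is an illustrative approximation of Eq. (3), not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def group_contrastive_loss(h, group_ids, tau=0.1):
    # h: (B, K) pooled sample representations; group_ids: (B,) trial or patient IDs.
    h = F.normalize(h, dim=-1)
    sim = (h @ h.t()) / tau                               # sim(u, v) / tau
    same = group_ids.unsqueeze(0) == group_ids.unsqueeze(1)
    eye = torch.eye(len(h), dtype=torch.bool, device=h.device)
    pos = same & ~eye                                     # x^+: another sample of the same group
    neg = ~same                                           # x^-: samples of other groups
    denom = (sim.exp() * neg).sum(dim=1, keepdim=True).clamp_min(1e-12)
    log_prob = sim - denom.log()                          # log ratio for every candidate pair
    has_pos = pos.any(dim=1)                              # anchors with at least one positive
    loss = -(log_prob * pos).sum(dim=1)[has_pos] / pos.sum(dim=1)[has_pos]
    return loss.mean()

loss_r = group_contrastive_loss(torch.randn(16, 32), torch.randint(0, 4, (16,)))
```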
|
| 91 |
+
|
| 92 |
+
In this block, we do NOT learn a trial-level embedding representing the entire trial. Instead, we learn a representation for each sample within the trial while considering trial-level data consistency. Similarly, we follow this protocol for the patient-level contrastive block.
|
| 93 |
+
|
| 94 |
+
For an input sample $x \in \mathcal{P}_i$ , where $\mathcal{P}_i$ denotes all samples from patient $p_i$ , we feed it into the contrastive encoder G to generate a sample-level representation h = G(x). Similar to the above
|
| 95 |
+
|
| 96 |
+
trial-level contrastive block, we have positive pair $(x, x^+)$ and negative pair $(x, x^-)$ , in which $x^+$ come from the same patient while $x^-$ come from a different patient.
|
| 97 |
+
|
| 98 |
+
**Patient-level contrastive loss.** The patient-level contrastive loss $\mathcal{L}_P$ [15] for the input sample x is defined as:
|
| 99 |
+
|
| 100 |
+
$$\mathcal{L}_{P} = \mathbb{E}_{\boldsymbol{x} \in \mathcal{D}} \left[ \mathbb{E}_{\boldsymbol{x}^{+} \in \mathcal{P}_{i}} \left[ -\log \frac{\exp(\operatorname{sim}(\boldsymbol{h}, G(\boldsymbol{x}^{+}))/\tau)}{\sum_{j=1}^{M} \sum_{\boldsymbol{x}^{-} \in \mathcal{P}_{j}} (\exp(\operatorname{sim}(\boldsymbol{h}, G(\boldsymbol{x}^{-}))/\tau))} \right] \right]$$
|
| 101 |
+
(4)
|
| 102 |
+
|
| 103 |
+
where M is the total number of patients in the dataset $\mathcal{D}$ . In this block, the $G(\mathbf{x}^+)$ and $G(\mathbf{x}^-)$ are learned representations of samples $\mathbf{x}^+ \in \mathcal{P}_i$ and $\mathbf{x}^- \in \mathcal{P}_j$ , respectively.
|
| 104 |
+
|
| 105 |
+
The overall loss function $\mathcal{L}$ consists of four loss terms. The observation-level loss $\mathcal{L}_O$ and sample-level loss $\mathcal{L}_S$ encourage the encoder to learn robust representations that are invariant to perturbations. The trial-level loss $\mathcal{L}_R$ and patient-level loss $\mathcal{L}_P$ compel the encoder to learn cross-sample features within a trial or a patient. In summary, the overall loss function of the proposed COMET model is:
|
| 106 |
+
|
| 107 |
+
$$\mathcal{L} = \lambda_1 \mathcal{L}_{O} + \lambda_2 \mathcal{L}_{S} + \lambda_3 \mathcal{L}_{R} + \lambda_4 \mathcal{L}_{P}$$
|
| 108 |
+
(5)
|
| 109 |
+
|
| 110 |
+
where $\lambda_1, \lambda_2, \lambda_3, \lambda_4 \in [0,1]$ are hyper-coefficients that control the relative importance and adjust the scales of each level's loss. Users can simply turn off specific data levels by setting $\lambda$ of those levels to 0. We set $\lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = 1$ . We calculate the total loss by taking the expectation of $\mathcal L$ across all samples $x \in \mathcal D$ . In practice, the contrastive losses are calculated within a mini-batch.
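Putting the pieces together, a sketch of the overall objective of Eq. (5), reusing the observation-level and group-level loss sketches above and adding an analogous in-batch sample-level loss for Eq. (2), might look as follows; `h_pooled` stands for a pooled (e.g., time-averaged) sample representation, and the equal $\lambda$ values are only an example.

```python
import torch

def sample_level_loss(h, h_aug):
    # Eq. (2): in-batch instance discrimination over sample representations (B, K).
    sim_cross = h @ h_aug.t()
    sim_self = h @ h.t()
    eye = torch.eye(len(h), dtype=torch.bool, device=h.device)
    denom = sim_cross.exp().sum(dim=1) + sim_self.exp().masked_fill(eye, 0).sum(dim=1)
    return (-(sim_cross.diag().exp() / denom).log()).mean()

def comet_loss(h_obs, h_obs_aug, h_pooled, h_pooled_aug, trial_ids, patient_ids,
               lambdas=(0.25, 0.25, 0.25, 0.25)):
    l_o = observation_level_loss(h_obs, h_obs_aug)        # Eq. (1)
    l_s = sample_level_loss(h_pooled, h_pooled_aug)       # Eq. (2)
    l_r = group_contrastive_loss(h_pooled, trial_ids)     # Eq. (3)
    l_p = group_contrastive_loss(h_pooled, patient_ids)   # Eq. (4)
    l1, l2, l3, l4 = lambdas
    return l1 * l_o + l2 * l_s + l3 * l_r + l4 * l_p      # Eq. (5)
```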
|
2312.00388/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|