Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes. See raw diff
- 2002.01612/main_diagram/main_diagram.drawio +0 -0
- 2002.01612/paper_text/intro_method.md +15 -0
- 2006.04016/main_diagram/main_diagram.drawio +1 -0
- 2006.04016/main_diagram/main_diagram.pdf +0 -0
- 2006.04016/paper_text/intro_method.md +29 -0
- 2006.05580/main_diagram/main_diagram.drawio +1 -0
- 2006.05580/main_diagram/main_diagram.pdf +0 -0
- 2006.05580/paper_text/intro_method.md +96 -0
- 2006.09447/main_diagram/main_diagram.drawio +1 -0
- 2006.09447/main_diagram/main_diagram.pdf +0 -0
- 2006.09447/paper_text/intro_method.md +46 -0
- 2006.11197/main_diagram/main_diagram.drawio +1 -0
- 2006.11197/main_diagram/main_diagram.pdf +0 -0
- 2006.11197/paper_text/intro_method.md +124 -0
- 2007.01072/main_diagram/main_diagram.drawio +1 -0
- 2007.01072/main_diagram/main_diagram.pdf +0 -0
- 2007.01072/paper_text/intro_method.md +85 -0
- 2012.09446/main_diagram/main_diagram.drawio +1 -0
- 2012.09446/main_diagram/main_diagram.pdf +0 -0
- 2012.09446/paper_text/intro_method.md +11 -0
- 2101.08165/main_diagram/main_diagram.drawio +1 -0
- 2101.08165/main_diagram/main_diagram.pdf +0 -0
- 2101.08165/paper_text/intro_method.md +76 -0
- 2103.17242/main_diagram/main_diagram.drawio +1 -0
- 2103.17242/main_diagram/main_diagram.pdf +0 -0
- 2103.17242/paper_text/intro_method.md +62 -0
- 2104.07555/main_diagram/main_diagram.drawio +1 -0
- 2104.07555/main_diagram/main_diagram.pdf +0 -0
- 2104.07555/paper_text/intro_method.md +43 -0
- 2104.08253/main_diagram/main_diagram.drawio +1 -0
- 2104.08253/main_diagram/main_diagram.pdf +0 -0
- 2104.08253/paper_text/intro_method.md +83 -0
- 2104.12437/main_diagram/main_diagram.drawio +1 -0
- 2104.12437/main_diagram/main_diagram.pdf +0 -0
- 2104.12437/paper_text/intro_method.md +171 -0
- 2106.00524/main_diagram/main_diagram.drawio +1 -0
- 2106.00524/main_diagram/main_diagram.pdf +0 -0
- 2106.00524/paper_text/intro_method.md +110 -0
- 2106.00660/main_diagram/main_diagram.drawio +0 -0
- 2106.00660/paper_text/intro_method.md +124 -0
- 2106.01532/main_diagram/main_diagram.drawio +1 -0
- 2106.01532/main_diagram/main_diagram.pdf +0 -0
- 2106.01532/paper_text/intro_method.md +153 -0
- 2106.05321/main_diagram/main_diagram.drawio +1 -0
- 2106.05321/paper_text/intro_method.md +84 -0
- 2106.05409/main_diagram/main_diagram.drawio +1 -0
- 2106.05409/main_diagram/main_diagram.pdf +0 -0
- 2106.05409/paper_text/intro_method.md +50 -0
- 2108.00230/main_diagram/main_diagram.drawio +1 -0
- 2108.00230/main_diagram/main_diagram.pdf +0 -0
2002.01612/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render. See raw diff
2002.01612/paper_text/intro_method.md
ADDED
@@ -0,0 +1,15 @@
# Introduction

Accurate measurements of poverty and related human livelihood outcomes critically shape the decisions of governments and humanitarian organizations around the world, and the eradication of poverty remains the first of the United Nations Sustainable Development Goals [\[1\]](#page-6-0). However, reliable local-level measurements of economic well-being are rare in many parts of the developing world. Such measurements are typically made with household surveys, which are expensive and time-consuming to conduct across broad geographies; as a result, such surveys are conducted infrequently and on limited numbers of households. For example, Uganda (our study country) is one of the best-surveyed countries in Africa, but surveys occur at best every few years, and when they do occur they often cover only a few hundred villages across the whole country (Fig. [1](#page-3-0)). Scaling up these ground-based surveys to cover more regions and more years would likely be prohibitively expensive for most countries in the developing world [\[2\]](#page-6-1). The resulting lack of frequent, reliable local-level information on economic livelihoods hampers the ability of governments and other organizations to target assistance to those who need it and to understand whether such assistance is having its intended effect.

To tackle this data gap, an alternative strategy has been to try to use passively collected data from non-traditional sources to shed light on local-level economic outcomes. Such work has shown promise in measuring certain indicators of economic livelihoods at the local level. For instance, [\[3\]](#page-6-2) show how features extracted from cell phone data can be used to predict asset wealth in Rwanda, [\[4\]](#page-6-3) show how NLP techniques applied to Wikipedia articles can predict asset wealth in multiple developing countries, and [\[5\]](#page-6-4) show how a transfer learning approach that uses coarse information from nighttime satellite images to extract features from daytime high-resolution imagery can also predict asset wealth variation across multiple African countries.

These existing approaches to using non-traditional data are promising, given that they are inexpensive and inherently scalable, but they face two main challenges that inhibit their broader adoption by policymakers. The first is the outcome being measured. While measures of asset ownership are thought to be relevant metrics for understanding longer-run household well-being [\[6\]](#page-6-5), official measurement of poverty requires data on consumption expenditure (i.e., the value of all goods consumed by a household over a given period), and existing methods have either not been used to predict consumption data or perform much more poorly when predicting consumption than when predicting other livelihood indicators such as asset wealth [\[5\]](#page-6-4). Second, interpretability of model predictions is key to whether policymakers will adopt machine-learning-based approaches to livelihoods measurement, and current approaches attempt to maximize predictive performance rather than interpretability. This tradeoff, central to many problems at the interface of machine learning and policy [\[7\]](#page-6-6), has yet to be navigated in the poverty domain.

Here we demonstrate an interpretable computational framework for predicting local-level consumption expenditure using object detection on high-resolution (30cm) daytime satellite imagery. We focus on Uganda, a country with existing high-quality ground data on consumption where performance benchmarks are available. We first train a satellite imagery object detector on a publicly available, global-scale object detection dataset, called xView [8], which avoids location-specific training and provides a more general object detection model. We then apply this detector to high-resolution images taken over hundreds of villages across Uganda that were measured in an existing georeferenced household survey, and use extracted counts of detected objects as features in a final prediction of consumption expenditure. We show that not only does our approach substantially outperform previous performance benchmarks on the same task, but it also yields features that are immediately and intuitively interpretable to the analyst or policy-maker.

<sup>∗</sup>Equal Contribution
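
The step from detected objects to an interpretable consumption prediction can be sketched as a simple count-based regression. The snippet below is a minimal illustration, not the authors' code: the per-village counts and targets are synthetic placeholders, and ridge regression over log-counts is one plausible instantiation of an interpretable final predictor.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Hypothetical shapes: `object_counts` holds per-village counts for each
# detected xView object class (buildings, trucks, ...), and `log_consumption`
# holds survey-derived log per-capita consumption for each village.
n_villages, n_classes = 600, 60
rng = np.random.default_rng(0)
object_counts = rng.poisson(3.0, size=(n_villages, n_classes)).astype(float)
log_consumption = rng.normal(size=n_villages)  # placeholder targets

# log(1 + count) tames heavy-tailed counts while keeping features readable.
features = np.log1p(object_counts)

# A linear model keeps every feature interpretable: each coefficient is the
# marginal association of one object class with log consumption.
model = Ridge(alpha=1.0)
r2 = cross_val_score(model, features, log_consumption, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.3f}")

model.fit(features, log_consumption)
top = np.argsort(-np.abs(model.coef_))[:5]
print("most predictive object classes (by |coefficient|):", top)
```
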
2006.04016/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render. See raw diff
2006.04016/main_diagram/main_diagram.pdf
ADDED
Binary file (11.4 kB). View file
2006.04016/paper_text/intro_method.md
ADDED
@@ -0,0 +1,29 @@
# Method

We built a diacritic restoration joint model and studied the extent to which sharing information across tasks can improve diacritic restoration performance. Our joint model is motivated by the recent success of the hierarchical modeling proposed in [@hashimoto2016joint], in which information learned from an auxiliary task is passed as input to the diacritic-restoration-related layers.[^8]

Since our joint model may involve both character-level and word-level tasks, we began our investigation by asking the following question: *how can information be integrated between these two levels?* Starting from randomly initialized character embeddings and a pretrained set of word embeddings, we follow two approaches (Figure [1](#charactertoword){reference-type="ref" reference="charactertoword"} visually illustrates the two approaches with an example).

<figure id="charactertoword" data-latex-placement="h">
<img src="both.png" style="width:5cm;height:5cm" />
<figcaption><span id="charactertoword" data-label="charactertoword"></span>An example of embedding vectors for the word <em>cat</em> and its individual characters: c, a, and t. (i) A character-based representation for the word <em>cat</em> built from its individual characters; (ii) a concatenation of the word embedding with each of its individual characters.</figcaption>
</figure>

<figure id="architecture" data-latex-placement="h">
<img src="all.png" style="width:12cm;height:8cm" />
<figcaption>The diacritic restoration joint model. All <em>Char Embed</em> entities refer to the same randomly initialized character embedding learned during the training process. <em>Pretrained</em> embeddings refer to fixed word embeddings obtained from fastText <span class="citation" data-cites="bojanowski2017enriching"></span>. <em>(i)</em> shows the input representation for the CharToWord and WordToChar embeddings, which is the same as in Figure <a href="#charactertoword" data-reference-type="ref" data-reference="charactertoword">1</a>. <em>(ii)</em> represents the diacritic restoration joint model; output labels from each task are concatenated with the WordToChar embedding and optionally with the segmentation hidden state.</figcaption>
</figure>

We pass information learned by character-level tasks into word-level tasks by composing a word embedding from the word's characters. We first concatenate the individual embeddings of the characters in that word, and then apply a Bidirectional Long Short-Term Memory (BiLSTM) layer to generate denser vectors.[^9] This helps the model capture morphology and word composition.
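
As a concrete illustration, the CharToWord composition can be sketched as follows. This is a minimal PyTorch sketch under assumed dimensions (vocabulary size, `char_dim`, `hidden`), not the authors' implementation.

```python
import torch
import torch.nn as nn

# Characters of a word are embedded, run through a BiLSTM, and the final
# hidden states of both directions form a dense word-level vector.
class CharToWord(nn.Module):
    def __init__(self, n_chars=40, char_dim=25, hidden=50):
        super().__init__()
        self.char_embed = nn.Embedding(n_chars, char_dim)
        self.bilstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, char_ids):           # (batch, max_word_len)
        chars = self.char_embed(char_ids)  # (batch, max_word_len, char_dim)
        _, (h_n, _) = self.bilstm(chars)   # h_n: (2, batch, hidden)
        # Concatenate the last forward and backward hidden states.
        return torch.cat([h_n[0], h_n[1]], dim=-1)  # (batch, 2*hidden)

word_vec = CharToWord()(torch.tensor([[3, 1, 20]]))  # e.g. ids for c, a, t
print(word_vec.shape)  # torch.Size([1, 100])
```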

To pass information learned by word-level tasks into character-level tasks, we concatenate each word embedding with the embedding of each of its characters during each pass, similar to the approach described in @watson2018utilizing. This helps distinguish the individual characters based on the surrounding context, implicitly capturing additional semantic and syntactic information.

For all architectures, the main component is the BiLSTM [@hochreiter1997long; @schuster1997bidirectional], which preserves the temporal order of the sequence and has been shown to provide state-of-the-art accuracy [@zalmout2017don; @sawsantcn]. After representing characters through random initialization and representing words using pretrained embeddings obtained from fastText [@bojanowski2017enriching], the learning process for each batch runs as follows:

1. We extract the two additional input representations described in Section [3.1](#input_representation){reference-type="ref" reference="input_representation"};

2. We apply a BiLSTM to each of the different tasks separately to obtain their corresponding outputs;

3. We pass all outputs from all tasks, as well as the WordToChar embedding vectors, as input to the diacritic restoration model and obtain our diacritic outputs.

Figure [2](#architecture){reference-type="ref" reference="architecture"} illustrates the diacritic restoration joint model. As can be seen, SYN as well as POS tagging are trained on top of the CharToWord representation, which is the concatenation of the pretrained embedding for each word with the character-based representations described in Figure [1](#charactertoword){reference-type="ref" reference="charactertoword"}. SEG is trained separately on top of the character embeddings. We pass the outputs of all these tasks, along with the WordToChar representation, to train the BiLSTM diacritic restoration model. Omitting a task is straightforward: we simply remove the related components for that task to yield the appropriate model. We optionally pass the last hidden layer for SEG along with the remaining input to the diacritic restoration model.[^10]
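
The final step can be sketched as a simple concatenation of per-character tensors before the diacritic BiLSTM. The sketch below uses illustrative dimensions and label-set sizes (none are specified here) and treats the word-level task outputs as already broadcast to characters.

```python
import torch
import torch.nn as nn

# Assemble the diacritic restoration model's input: per-character task
# outputs (SEG, and POS/SYN broadcast to characters) concatenated with the
# WordToChar embedding, then fed to a BiLSTM and a per-character classifier.
batch, seq_len = 2, 16
word_to_char = torch.randn(batch, seq_len, 150)  # WordToChar embedding
seg_out = torch.randn(batch, seq_len, 4)         # SEG output labels
pos_out = torch.randn(batch, seq_len, 20)        # POS output labels
syn_out = torch.randn(batch, seq_len, 30)        # SYN output labels

diac_input = torch.cat([word_to_char, seg_out, pos_out, syn_out], dim=-1)
diac_bilstm = nn.LSTM(diac_input.size(-1), 100, bidirectional=True, batch_first=True)
diac_logits = nn.Linear(200, 15)(diac_bilstm(diac_input)[0])  # 15 diacritic labels (assumed)
print(diac_logits.shape)  # torch.Size([2, 16, 15])
```
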
2006.05580/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render. See raw diff
2006.05580/main_diagram/main_diagram.pdf
ADDED
Binary file (22.7 kB). View file
2006.05580/paper_text/intro_method.md
ADDED
@@ -0,0 +1,96 @@
# Introduction

In this section, we provide a formulation of the problem statement, followed by a brief description of key concepts in GPs.

Existing image deraining methods assume the additive model, where the rainy image ($x$) is considered to be the superposition of a clean image ($y$) and a rain component ($r$), i.e., $$\begin{equation}
x = y + r.
\end{equation}$$ The single image deraining task is typically an inverse problem where the goal is to estimate the clean image $y$, given a rainy image $x$. This can be achieved by learning a function that either (i) directly maps the rainy image to the clean image [@eigen2013restoring; @fu2017clearing; @Authors17c; @Authors17g], or (ii) extracts the rain component from the rainy image, which can then be subtracted from the rainy image to obtain the clean image [@Authors17f; @Authors18; @li2018recurrent]. We follow the second approach of estimating the rain component from a rainy image.

In semi-supervised learning, we are given a labeled dataset of input-target pairs ($\{x,y\} \in \mathcal{D_L}$) sampled from an unknown joint distribution $p(x,y)$ and unlabeled input data points $x \in \mathcal{D_U}$ sampled from $p(x)$. The goal is to learn a function $f(x|\theta)$ parameterized by $\theta$ that accurately predicts the correct target $y$ for unseen samples from $p(x)$. The parameters $\theta$ are learned by leveraging both labeled and unlabeled datasets. Since the labeled dataset consists of input-target pairs, supervised loss functions such as mean absolute error or cross entropy are typically used to train the networks. The unlabeled data points from $\mathcal{D_U}$ are used to augment $f(x|\theta)$ with information about the structure of $p(x)$, such as the shape of the data manifold [@oliver2018realistic], via different techniques such as consistency regularization [@laine2016temporal], virtual adversarial training [@miyato2018virtual], or pseudo-labeling [@lee2013pseudo].

Following [@wei2019semi], we employ the semi-supervised learning framework to leverage unlabeled real-world data to obtain better generalization performance. Specifically, we consider the synthetically generated rain dataset consisting of input-target pairs as the labeled dataset $\mathcal{D_L}$ and real-world unlabeled images as the unlabeled dataset $\mathcal{D_U}$. In contrast to [@wei2019semi], we follow the approach of pseudo-labeling to leverage the unlabeled data.

A Gaussian process (GP) $f(v)$ is an infinite collection of random variables, any finite subset of which is jointly Gaussian distributed. A GP is completely specified by its mean function and covariance function, which are defined as follows $$\begin{equation}
\begin{aligned} m(v) &=\mathbb{E}[f(v)],
\end{aligned}
\end{equation}$$ $$\begin{equation}
\begin{aligned} {K}\left(v, v^{\prime}\right) &=\mathbb{E}\left[(f(v)-m(v))\left(f\left(v^{\prime}\right)-m\left(v^{\prime}\right)\right)\right], \end{aligned}
\end{equation}$$ where $v,v' \in \mathcal{V}$ denote the possible inputs that index the GP. The covariance matrix is constructed from a covariance function, or kernel, ${K}$, which expresses some prior notion of smoothness of the underlying function. The GP can then be denoted as follows $$\begin{equation}
f(v) \sim \mathcal{GP}(m(v), K(v, v')+\sigma_{\epsilon}^2I),
\end{equation}$$ where $I$ is the identity matrix and $\sigma_{\epsilon}^2$ is the variance of the additive noise. Any collection of function values is then jointly Gaussian as follows $$\begin{equation}
f(V)=\left[f\left(v_{1}\right), \ldots, f\left(v_{n}\right)\right]^{T} \sim \mathcal{N}\left(\mu, K(V, V')+\sigma_{\epsilon}^2I\right),
\end{equation}$$ with mean vector and covariance matrix defined by the GP as mentioned earlier. To make predictions at unlabeled points, one can compute a Gaussian posterior distribution in closed form by conditioning on the observed data. The reader is referred to [@rasmussen2003gaussian] for a detailed review of GPs.
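
As a concrete illustration of this closed-form conditioning, the following self-contained NumPy sketch computes the posterior mean and covariance at test points on toy data. It is not from the paper; an RBF kernel is assumed here purely for illustration (the kernel used in our method is introduced later).

```python
import numpy as np

# Closed-form GP conditioning: given noisy observations y = f(V) + eps,
# the posterior at test inputs V* has mean K_*^T (K + s^2 I)^{-1} y and
# covariance K_** - K_*^T (K + s^2 I)^{-1} K_*.
def rbf(a, b, ell=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

V = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # observed inputs
y = np.sin(V)                                # observed function values (toy)
V_star = np.array([0.5, 1.5])                # test inputs
sigma2 = 0.1                                 # assumed noise variance

K = rbf(V, V) + sigma2 * np.eye(len(V))      # K(V, V) + sigma^2 I
K_star = rbf(V, V_star)                      # K(V, V*)
K_ss = rbf(V_star, V_star)                   # K(V*, V*)

mu_star = K_star.T @ np.linalg.solve(K, y)
cov_star = K_ss - K_star.T @ np.linalg.solve(K, K_star)
print(mu_star)            # posterior mean at V*
print(np.diag(cov_star))  # posterior variance at V*
```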

# Method

<figure id="fig:overview" data-latex-placement="t!">
<div class="center">
<img src="figures/Overview_GP_v2" style="width:100.0%" />
</div>
<figcaption>Overview of the proposed GP-based SSL framework. We leverage unlabeled data during learning. The training process consists of iterating over labeled data and unlabeled data. During the labeled training phase, we use a supervised loss function consisting of <span class="math inline"><em>l</em><sub>1</sub></span> error and perceptual loss between the prediction and the targets. In the unlabeled phase, we jointly model the labeled and unlabeled latent vectors using a GP to obtain the pseudo-GT for the unlabeled sample in the latent space. We use this pseudo-GT for supervision.</figcaption>
</figure>

As shown in Fig. [3](#fig:overview){reference-type="ref" reference="fig:overview"}, the proposed method consists of a CNN based on the UNet structure [@ronneberger2015u], where each block is constructed using a Res2Block [@gao2019res2net]. The details of the network architecture are provided in the supplementary material. In summary, the network is made up of an encoder ($h(x,\theta_{enc})$) and a decoder ($g(z,\theta_{dec})$). Here, the encoder and decoder are parameterized by $\theta_{enc}$ and $\theta_{dec}$, respectively. Furthermore, $x$ is the input to the network, which is mapped by the encoder to a latent vector $z$. In our case, $x$ is the rainy image from which we want to remove the rain streaks. The latent vector is then fed to the decoder to produce the output $r$, which in our case is the rain streaks. The rain streak component is then subtracted from the rainy image ($x$) to produce the clean image ($y$), i.e., $$\begin{equation}
y = x - r,
\end{equation}$$ where $$\begin{equation}
r = g(h(x,\theta_{enc}), \theta_{dec}).
\end{equation}$$

In our problem formulation, the training dataset is $\mathcal{D}=\mathcal{D_L} \cup \mathcal{D_U}$, where $\mathcal{D_L}=\{x_l^i,y_l^i\}_{i=1}^{N_l}$ is a labeled training set consisting of $N_{l}$ samples and $\mathcal{D_U}=\{x_u^i\}_{i=1}^{N_u}$ is a set consisting of $N_{u}$ unlabeled samples. For the rest of the paper, $\mathcal{D_L}$ refers to the labeled "*synthetic*" dataset and $\mathcal{D_U}$ refers to the unlabeled "*real-world*" dataset, unless otherwise specified.

The goal of the proposed method is to learn the network parameters by leveraging both the labeled ($\mathcal{D_L}$) and unlabeled ($\mathcal{D_U}$) datasets. The training process iterates over the labeled and unlabeled datasets. The network parameters are learned by minimizing (i) the supervised loss function ($\mathcal{L}_{sup}$) in the labeled training phase, and (ii) the unsupervised loss function ($\mathcal{L}_{unsup}$) in the unlabeled training phase. For the unlabeled training phase, we generate pseudo-GT using the GP formulation, which is then used in the unsupervised loss function. The two training phases are described in detail in the following sections.

**Labeled training phase:** In this phase, we use the labeled data $\mathcal{D_L}$ to learn the network parameters. Specifically, we minimize the following supervised loss function $$\begin{equation}
\label{eq:loss_sup}
\mathcal{L}_{sup} = \mathcal{L}_1 + \lambda_p \mathcal{L}_{p},
\end{equation}$$ where $\lambda_{p}$ is a constant, and $\mathcal{L}_1$ and $\mathcal{L}_p$ are the $l_1$-loss and perceptual loss [@johnson2016perceptual; @zhang2018multi] functions, respectively. They are defined as follows $$\begin{equation}
\mathcal{L}_{1} = \|y^{pred}_l - y_l\|_1,
\end{equation}$$ $$\begin{equation}
\mathcal{L}_{p} = \|\Phi_{VGG}(y^{pred}_l) - \Phi_{VGG}(y_l)\|^2_2 ,
\end{equation}$$ where $y^{pred}_l=g(z,\theta_{dec})$ is the predicted output, $y_l$ is the ground truth, $z=h(x, \theta_{enc})$ is the intermediate latent space vector, and $\Phi_{VGG}(\cdot)$ represents the pre-trained VGG-16 [@simonyan2014very] network. For more details on the perceptual loss, please refer to the supplementary material.
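
A minimal PyTorch sketch of this supervised loss follows. It is not the authors' code: the choice of VGG-16 feature layer, the mean-reduced MSE as the feature-space distance, and the value of `lambda_p` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Frozen VGG-16 feature extractor for the perceptual term.
vgg_features = models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def supervised_loss(y_pred, y, lambda_p=0.04):
    # l1 error between prediction and target ...
    l1 = nn.functional.l1_loss(y_pred, y)
    # ... plus squared error in VGG feature space (perceptual loss).
    perceptual = nn.functional.mse_loss(vgg_features(y_pred), vgg_features(y))
    return l1 + lambda_p * perceptual

y_pred = torch.rand(2, 3, 64, 64)  # predicted clean images
y = torch.rand(2, 3, 64, 64)       # ground-truth clean images
print(supervised_loss(y_pred, y))
```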
|
| 51 |
+
|
| 52 |
+
In addition to minimizing the loss function, we also store the intermediate feature vectors $z_l^i$'s for all the labeled training images $x_l^i$'s in a matrix $F_{z_l}$. That is $F_{z_l}= \{z_l^i\}_{i=1}^{N_l}$. It is used later in the unlabeled training phase to generate the pseudo-GT for the unlabeled data. In our case, $z_l^i$ is a vector of size $1\times M$, where M = 32,768 for the network in our proposed method. Thus $F_{z_l}$ is a matrix of size $N_l\times M$.
|
| 53 |
+
|
| 54 |
+
In this phase, we leverage the unlabeled data $\mathcal{D_U}$ to improve the generalization performance. Specifically, we provide supervision at the intermediate latent space by minimizing the error between the predicted latent vectors and the pseudo-GT obtained by modeling the latent space vectors of the labeled sample images $F_{z_l}$ and $z^{pred}_u$ jointly using GP.\
|
| 55 |
+
**Pseudo-GT using GP:** The training occurs in an iterative manner, where we first learn the weights using the labeled data ($\mathcal{D_L}$), followed by weight updates using the unlabeled data ($\mathcal{D_U}$). After the first iteration on $\mathcal{D_L}$, we store the latent space vectors of the labeled data in the matrix $F_{z_l}$. These vectors lie on a low-dimensional manifold. During the unlabeled phase, we project the latent space vector ($z_u$) of the unlabeled input onto the space of labeled vectors $F_{z_l}= \{z_l^i\}_{i=1}^{N_l}$. That is, we express the unlabeled latent space vector $z^k_u$ corresponding to the $k^{th}$ training sample from $\mathcal{D_U}$ as
$$\begin{equation}
\label{eq:lin_combination}
z^k_u = \sum_{i=1}^{N_l} \alpha_i z_l^i +\epsilon,
\end{equation}$$
where $\alpha_{i}$ are the coefficients, and $\epsilon$ is additive noise distributed as $\mathcal{N}(0,\sigma_\epsilon^2)$.
With this formulation, we can jointly model the distribution of the latent space vectors of the labeled and the unlabeled samples using a GP. Conditioning the joint distribution yields the following conditional multi-variate Gaussian distribution for the unlabeled sample:
$$\begin{equation}
P(z_u^k|\mathcal{D_L},F_{z_l}) = \mathcal{N}(\mu_u^k,\Sigma_u^k),
\end{equation}$$
where
$$\begin{equation}
\label{eq:mean}
\mu_u^k = K(z_u^k, F_{z_l}) [K(F_{z_l},F_{z_l}) + \sigma_\epsilon^2 I]^{-1}F_{z_l},
\end{equation}$$
$$\begin{equation}
\label{eq:sigma}
\begin{aligned}
\Sigma_u^k = {} & K(z_u^k,z_u^k) - K(z_u^k,F_{z_l})[K(F_{z_l},F_{z_l})+\sigma_\epsilon^2I]^{-1} \\
& K(F_{z_l},z_u^k) + \sigma_\epsilon^2
\end{aligned}
\end{equation}$$
where $\sigma_\epsilon^2$ is set equal to 1, and $K$ is defined by the following cosine-similarity kernel function:
$$\begin{equation}
K(Z,Z)_{k,i}= \kappa(z_u^k,z_l^i) = \frac{ \langle z_u^k, z_l^i\rangle}{|z_u^k|\cdot|z_l^i|}.
\end{equation}$$
Note that $F_{z_l}$ contains the latent space vectors of all the labeled images, so $K(F_{z_l},F_{z_l})$ is a matrix of size $N_l\times N_l$, and $K(z_u^k,F_{z_l})$ is a vector of size $1\times N_l$. Using all the vectors is not necessarily optimal, for the following reasons: (i) these vectors correspond to different regions in the image with a wide diversity in terms of content and density/orientation of rain streaks, and it is important to consider only those vectors that are similar to the unlabeled vector; (ii) using all the vectors is computationally prohibitive. Hence, we use only the $N_n$ nearest labeled vectors for each unlabeled vector. More specifically, we replace $F_{z_l}$ by $F_{z_l,n}$ in Eq. [\[eq:lin_combination\]](#eq:lin_combination){reference-type="eqref" reference="eq:lin_combination"}-[\[eq:sigma\]](#eq:sigma){reference-type="eqref" reference="eq:sigma"}. Here $F_{z_l,n}=\{z_l^j : z_l^j \in nearest(z_u^k,F_{z_l} ,N_n) \}$, where $nearest(p,Q ,N_n)$ is a function that finds the top $N_n$ nearest neighbors of $p$ in $Q$.
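To make the computation explicit, the following PyTorch sketch derives the pseudo-GT mean and predictive variance from the $N_n$ nearest labeled vectors using the cosine kernel above; the helper name, the `n_neighbors` value, and the `farthest` flag (used further below for the farthest-vector variance) are our own illustrative choices.

```python
import torch

def gp_pseudo_gt(z_u, F_zl, n_neighbors=16, sigma_eps2=1.0, farthest=False):
    """Sketch of the GP pseudo-GT for one unlabeled latent vector z_u ([M])
    given labeled latent vectors F_zl ([N_l, M]); returns (mu, var).
    Names and the n_neighbors value are illustrative."""
    unit_u = z_u / z_u.norm()
    unit_l = F_zl / F_zl.norm(dim=1, keepdim=True)
    sims = unit_l @ unit_u                       # cosine kernel k(z_u, z_l^i)
    # Pick the N_n most similar vectors (or the farthest ones instead).
    idx = sims.argsort(descending=not farthest)[:n_neighbors]
    F_n = unit_l[idx]
    k_un = F_n @ unit_u                          # K(z_u, F_{z_l,n})
    K_nn = F_n @ F_n.T                           # K(F_{z_l,n}, F_{z_l,n})
    A = torch.linalg.inv(K_nn + sigma_eps2 * torch.eye(n_neighbors))
    mu = k_un @ A @ F_zl[idx]                    # Eq. (eq:mean)
    var = 1.0 - k_un @ A @ k_un + sigma_eps2     # Eq. (eq:sigma); k(z,z) = 1
    return mu, var
```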
We use the mean predicted by Eq. [\[eq:mean\]](#eq:mean){reference-type="eqref" reference="eq:mean"} as the pseudo-GT ($z_{u,pseudo}^{k}$) for supervision at the latent space level. By minimizing the error between $z^{k}_{u,pred}=h(x_u,\theta_{enc})$ and $z_{u,pseudo}^{k}$, we update the weights of the encoder $h(\cdot,\theta_{enc})$, thereby adapting the network to unlabeled data, which results in better generalization. We also minimize the prediction variance given by Eq. [\[eq:sigma\]](#eq:sigma){reference-type="eqref" reference="eq:sigma"}. Since the GP approximates $z_{u}^{k}$, the latent vector of an unlabeled image, using the latent space vectors in $F_{z_l}$, we may end up computing incorrect pseudo-GT predictions when the latent vectors are dissimilar. This dissimilarity is due to different compositions of rain streaks, such as different densities, shapes, and directions. To address this issue, we minimize the variance $\Sigma_{u,n}^k$ computed between $z^{k}_{u}$ and its $N_n$ nearest neighbors among the labeled latent vectors. Additionally, we maximize the variance $\Sigma_{u,f}^{k}$ computed between $z^{k}_{u}$ and the $N_f$ farthest labeled vectors, in order to ensure that the vectors in $F_{z_l}$ that are dissimilar to the unlabeled vector $z_u^k$ do not affect the GP prediction. The latter variance is defined as
$$\begin{equation}
\label{eq:sigma_far}
\begin{aligned}
\Sigma_{u,f}^{k} = {} & K(z_u^k,z_u^k) - K(z_u^k,F_{z_l,f})[K(F_{z_l,f},F_{z_l,f})+\sigma_\epsilon^2I]^{-1}\\
& K(F_{z_l,f},z_u^k) + \sigma_\epsilon^2,
\end{aligned}
\end{equation}$$
where $F_{z_l,f}$ is the matrix of the $N_f$ labeled vectors that are farthest from $z_u^k$.
Thus, the loss used during training on the unlabeled data is defined as
$$\begin{equation}
\mathcal{L}_{unsup} = \|{z}^{k}_{u,pred} - {z}_{u,pseudo}^{k}\|_2 + \log \Sigma_{u,n}^{k} + \log(1-\Sigma_{u,f}^{k}),
\end{equation}$$
where $z^{k}_{u,pred}=h(x_u^k,\theta_{enc})$ is the latent vector obtained by forwarding an unlabeled input image $x_u^k$ through the encoder $h$, $z_{u,pseudo}^{k} = \mu_u^k$ is the pseudo-GT latent space vector (see Eq. [\[eq:mean\]](#eq:mean){reference-type="eqref" reference="eq:mean"}), and $\Sigma_{u,n}^{k}$ is the variance obtained by replacing $F_{z_l}$ in Eq. [\[eq:sigma\]](#eq:sigma){reference-type="eqref" reference="eq:sigma"} with $F_{z_l,n}$.
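Continuing the sketch, the unsupervised loss can be assembled from two calls to the hypothetical `gp_pseudo_gt` helper above; clamping the logarithm arguments is our numerical-safety addition, not part of the formulation.

```python
import torch

def unsupervised_loss(z_pred, F_zl, n_near=16, n_far=16, eps=1e-6):
    """Sketch of L_unsup; gp_pseudo_gt is the hypothetical helper above."""
    mu_n, var_n = gp_pseudo_gt(z_pred, F_zl, n_neighbors=n_near)
    _, var_f = gp_pseudo_gt(z_pred, F_zl, n_neighbors=n_far, farthest=True)
    l2 = (z_pred - mu_n).norm()                  # || z_pred - z_pseudo ||_2
    return (l2 + torch.log(var_n.clamp_min(eps))
               + torch.log((1.0 - var_f).clamp_min(eps)))
```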
The overall loss function used for training the network is defined as
$$\begin{equation}
\label{eq:loss_total}
\mathcal{L}_{total} = \mathcal{L}_{sup} + \lambda_{unsup} \mathcal{L}_{unsup},
\end{equation}$$
where $\lambda_{unsup}$ is a pre-defined weight that controls the relative contributions of $\mathcal{L}_{sup}$ and $\mathcal{L}_{unsup}$.
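Putting the two phases together, a schematic training epoch might look as follows; `model`, the data loaders, and `lam_unsup` are illustrative names and values, and `supervised_loss`/`unsupervised_loss` refer to the sketches above.

```python
import torch

def train_one_epoch(model, labeled_loader, unlabeled_loader, opt, lam_unsup=1.0):
    """Hypothetical epoch alternating the two phases; all names and values
    here are illustrative rather than the paper's settings."""
    bank = []
    for x_l, y_l in labeled_loader:              # labeled (synthetic) phase
        z_l = model.encoder(x_l)
        loss = supervised_loss(model.decoder(z_l), y_l)
        opt.zero_grad(); loss.backward(); opt.step()
        bank.append(z_l.detach().flatten(1))
    F_zl = torch.cat(bank)                       # F_{z_l}: [N_l, M]

    for x_u in unlabeled_loader:                 # unlabeled (real-world) phase
        z_u = model.encoder(x_u).flatten(1)
        loss = lam_unsup * sum(unsupervised_loss(z, F_zl) for z in z_u)
        opt.zero_grad(); loss.backward(); opt.step()
```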
2006.09447/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-10-22T12:50:00.155Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36" version="15.5.5" etag="5dXtiv9hr7GHUJD9nRhz"><diagram id="uZGRHzetqW6cUJxMxixX">7Vpbc9o4FP41zLQPZJBvmMdAYPchO5NpOrPbJ0bYwlZjLK8sCvTX75Es302grSGZpXmIfY7u5ztXmYE52+z/4DgJ/2I+iQbGyN8PzIeBYSAXGfCQnEPGmYztjBFw6utOJeOZfieaOdLcLfVJWusoGIsETepMj8Ux8USNhzlnu3q3NYvqqyY40CuOSsazhyPS6vY39UWYcV1jXPL/JDQI85WRM8laNjjvrKdIQ+yzXWUtcz4wZ5wxkb1t9jMSSeHlcsk2tDjSWmyMk1icM0AD8Q1HW302vS9xyA/L2Tb2iew/GpjTXUgFeU6wJ1t3AC/wQrGJgELwumax0HghV9I0imYsYlzNZS7UH/CLc8s59R4IF2R/9ByokA6oFWEbIvgBuugBtgZKKxRyNL0r4RmPNC+sQGOZmom1SgTF1KXU4EULrluI5v9EiE5diMaoLUQLdQjRdHsQotWSGfHB6DTJuAhZwGIczUvutC7Vss8jY4mW5VcixEELE28Fq0saBMMP/8jxd3ZOftHTKeJhX6MOmqogZJTClxv+UdFzEmFBv9XHdclRD31iFGYsIDNHdcwsszFFyrbcI3pU1RE0JrKR/fpEAvOAiNZECtbiPGchbb8npNH5SCtbPIJ0JmZtRjoiKYHlptS/QpxrWM5p7wSOJJGvAlwR+c7kyGlCOIW1CK/yn0rmSSdG9yQP3UghKOBcLAZyMmo7tYmxsBcz6dQEZy+k0uJ4Llmt+/FwyKxruenYLRdndMUJ1EecGHcg4USwwnRVw8P5d8sUF3svgdL8oZfJ414qSLD6gCxAFXYES44Myy7fbeNjOR7eAvmcxx6kYDxfDLa5yptaqgCyFXUg64DELCYN9DQLRzSQ6HqAhVIQiRSFpOleN2yo7ytj7tKcuoE3ra6pPD2ogmE1PCdqq4LTFex6UAT3fJOkG5WLVgFpylVIF1hwH/GKRE8spVpeKyYE20CHSDZMC5WqWNha/UEXtdh9mmQ5sxQ0zgllznkXoEMhZLJ9L89sLDw/Nu4opNtrCijyOw9WNBY+Fhgekp/CM8u8h6YJx1+YUlr3irMUIY2XqnXpUe5FZOkd5H9O1pyk4TICJz9EhnuXxMFRl/xrfsE16sowQi1lsN22MuS8X1GGyWllILGvRCWtK8JpSr26Rhw1HpWcnI6ub5a+OA0jNF3759KX8amJ+ktf8gz4JgEbN5xmM/M/Fy90aqIe8UIdeEG1YbGBPUdLkRHvPxAaPQW+ScNS7HaVd6m4l9/7tKDACorBeCqGaDB+uHFILOuKkJxxeXHl8mD4dvVB4YbepD5AVgcWWc6eJji+aI1QqUQyxmeOv0Lqx5R4PhGZwwm+9RRGxuiBvFJSFLxs17deadjO21Ua+VKdwQ/8rXK3txcDnQYkpnVFSLouZPIg+BuSMgheEZKum5kbTenNSU8pfWuiHlP6My5QLo0X2VOR3SfLG+2MVp8O7iaTsabLO2VJHCpEJVOqfL159Vo5z9SuUJXbzeKsp6q8NVGPKtF1jSI96Pfbc6UmOmKJV3Cl+ZeOjhxWnrQzh5UNw1TJQCavyE32ZySoD2wnU1KCNzDmM05f4PHh0+PHEzlptosjOWkJDTqnhKlVKI65WExml0GwuHSqIIjsDgjHfUDYdWHyYxDCWc+BcL4n3jarKH4es/dhxrVv8Re4oS4MS2uEfcVrmzy6Xl4hoOikMY2D3/pw0kO4Vk0fun7x0pM+AFn+ICkLzeXPusz5fw==</diagram></mxfile>
2006.09447/main_diagram/main_diagram.pdf
ADDED
Binary file (15.3 kB). View file
2006.09447/paper_text/intro_method.md
ADDED
@@ -0,0 +1,46 @@
# Introduction
An important aspect of autonomous decision-making agents is the ability to reason about the unknown intentions and behaviours of other agents. Much research has been devoted to this *agent modelling* problem [@albrecht2018autonomous], with recent works focused on learning informative representations about another agent's policy using deep learning architectures for agent modelling and reinforcement learning (RL) [@he2016opponent; @raileanu2018modeling; @grover2018learning; @rabinowitz2018machine; @zintgraf2021deep].

A common assumption in existing methods is that the modelling agent has access to the local trajectory of the modelled agents during execution [@albrecht2018autonomous], which may include their local observations of the environment state and their past actions. While it is certainly desirable to be able to observe the agents' local contexts in order to reason about their past and future decisions, in practice such an assumption may be too restrictive. Agents may only have a limited view of their surroundings, communication with other agents may be infeasible or unreliable [@skkr2010], and knowledge of the perception system of other agents may be unavailable [@gmytrasiewicz2005framework]. In such cases, an agent must reason with only locally available information.

We consider the following question: *Can effective agent modelling be achieved using only the locally available information of the controlled agent during execution?* A strength of deep learning techniques is their ability to identify informative features in data. Here, we use deep learning techniques to extract informative features about the trajectory of the modelled agent from a stream of local observations for the purpose of agent modelling. Specifically, we consider a multi-agent setting in which we control one agent which must learn to interact with a set of other agents. We assume a set of possible policies for the non-learning agents and that these policies are fixed.

We propose *Local Information Agent Modelling* (LIAM)[^1], which can be seen as the general idea of learning the relationship between the trajectory of the controlled agent and the trajectory of the modelled agent. In this work we propose one instantiation of this idea: an encoder-decoder agent modelling method that can extract a compact yet informative representation of the modelled agents given only the local information of the controlled agent (its local state observations and past actions). The model is trained to replicate the observations and actions of the modelled agents from the local information only. During training, the modelled agent's observations and actions are utilised as reconstruction targets for the decoder; after training, only the encoding component is retained, which generates representations using local observations of the controlled agent. The learned representation conditions the policy of the controlled agent in addition to its local observation, and the policy and model are optimised concurrently during the RL learning process.

We evaluate LIAM in three different multi-agent environments: double speaker-listener [@mordatch2017emergence; @lowe2017multi], level-based foraging (LBF) [@as2017aamas; @papoudakis2020comparative], and a modified version of predator-prey proposed by [@bohmer2020deep]. Our results support the idea that effective agent modelling can be achieved using only local information during execution: the same RL algorithm generally achieved higher average returns when combined with representations generated by LIAM than without, and in some cases the average returns are comparable to those achieved by an ideal baseline which has access to the modelled agent's trajectory during execution. We also provide detailed evaluations of the learned encoder and decoder of LIAM as well as comparisons with different instantiations of LIAM.

# Method

We control a single agent which must learn to interact with other agents that use one of a fixed number of policies. We model the problem as a Partially-Observable Stochastic Game (POSG) [@shapley1953stochastic; @hansen2004dynamic] which consists of $N$ agents $\sI = \{1,2,...,N\}$, a state space $\gS$, the joint action space $\gA = \gA^1 \times ... \times \gA^N$, a transition function $P:\gS \times \gA \times \gS \rightarrow [0,1]$ specifying the transition probabilities between states given a joint action, and for each agent $i \in \sI$ a reward function $r^i:\gS\times \gA \times \gS \rightarrow \mathbb{R}$. We consider that each agent $i$ has access to its observation $o^i \in \gO^i$, where $\gO^i$ is the observation set of agent $i$. The observation function $\Omega^i:\gS \times \gA \times \gO^i \rightarrow [0,1]$ defines a probability distribution over the possible next observations of agent $i$ given the previous state and the joint action of all agents.
We denote the agent under our control by $1$, and the modelled agents by $-1$, where for notational convenience we treat the modelled agents as a single "combined" agent with joint observations $o^{-1}$ and actions $a^{-1}$. We assume a set of fixed policies, $\Pi= \{ \pi^{-1, k} | k=1,...,K \}$, which may be defined manually (heuristic) or pretrained using RL. Each fixed policy determines the modelled agent's actions as a mapping $\pi^{-1,k}(o^{-1})$ from the modelled agent's local observation $o^{-1}$ to a distribution over actions $a^{-1}$. Our goal is to find a policy $\pi_\theta$ parameterised by $\theta$ for agent $1$ which maximises the average return against the fixed policies from the training set $\Pi$, assuming that each fixed policy is initially equally probable and fixed during an episode:
$$\begin{equation}
\arg \max_{\theta} \E_{\pi_\theta, \pi^{-1, k} \sim \gU(\Pi)} \left[ \sum_{t=0}^{H-1} \gamma^t r^{1}_{t+1} \right]
\end{equation}$$
where $r^{1}_{t+1}$ is the reward received by agent 1 at time $t+1$ after performing the action $a^1_t$, $H$ is the episode length (horizon), and $\gamma \in (0,1)$ is a discount factor. It is also important to note that the controlled agent has access to the identity $k$ of the modelled agent's policy neither during training nor during execution.
We aim to learn the relationship between the trajectory of the controlled agent and the trajectory of the modelled agent. We denote by $\tau^{-1} = \{o^{-1}_{t}, a^{-1}_t \}_{t=0}^{t=H}$ the trajectory of the modelled agent, where $o^{-1}_{t}$ and $a^{-1}_{t}$ are the modelled agent's observation and action at time step $t$ in the trajectory, up to horizon $H$. These trajectories are generated from the fixed policies in $\Pi$. We assume the existence of some latent variables (or *embeddings*) in the space $\gZ$; at each time step $t$, the latent variables $z_t$ contain information both about the fixed policy used by the modelled agent and about the dynamics of the environment as perceived by the modelled agent. To learn the relationship between the modelled agent's trajectory and the latent variables, we use a parametric decoder, denoted $f_\vu:\gZ \rightarrow \tau^{-1}$, which decodes the latent variables into the trajectory of the modelled agent.

The last step in learning the relationship between the local and the modelled agent's trajectories is to use a recurrent encoding model, denoted $f_{\vw}: \tau^1 \rightarrow \gZ$ with parameters $\vw$, to learn the relationship between the local trajectory of the controlled agent and the latent variables. Specifically, we learn the function that relates the modelled agent's trajectory to the latent variables with an encoder that depends only on local information of the controlled agent. Since during execution only the encoder is required to generate the latent variables of the modelled agent, this approach removes the assumption that the modelled agent's observations and actions are available during execution.
:::: wrapfigure
r0.45

::: center
{width="100%"}
:::
::::
At each time step $t$, the recurrent encoder network generates an embedding $z_t$, which is conditioned on the information of the agent under control, $(o^{1}_{1:t}, a^{1}_{1:t-1})$, up to time step $t$. At each time step $t$, the parametric decoder learns to reconstruct the modelled agent's observation and action ($o^{-1}_t$ and $a^{-1}_t$) conditioned on the embedding $z_t$. The decoder consists of a fully-connected feed-forward network with two output heads: the observation reconstruction head $f^o_\vu$ and the policy reconstruction head $f^\pi_\vu$ (see Figure [\[fig:self_vae\]](#fig:self_vae){reference-type="ref" reference="fig:self_vae"}). At each time step $t$, the decoder receives as input the embedding $z_t$; the observation reconstruction head reconstructs the modelled agent's observation $o^{-1}_t$, while the action reconstruction head outputs a categorical distribution over the modelled agent's action $a^{-1}_t$. Note that the output dimensions of the decoder's two reconstruction heads grow linearly with the number of agents in the environment. We refer to this method as **LIAM** (**L**ocal **I**nformation **A**gent **M**odelling). LIAM uses the information of both the controlled agent and the modelled agent during training, but during execution only the information of the controlled agent is used. The encoder-decoder loss is defined as:
$$\begin{equation}
\label{recon_vae}
\begin{aligned}
& \mathcal{L}_{ED} = \frac{1}{H}\sum_{t=1}^{H} [(f^o_\vu(z_t) - o^{-1}_t) ^ 2 - \log f^\pi_\vu(a^{-1}_t | z_t)] \hspace{0.5cm} \mathrm{where }\hspace{0.2cm} z_t = f_\vw(o^1_{:t}, a^1_{:t-1})
\end{aligned}
\end{equation}$$
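For concreteness, a minimal PyTorch sketch of the encoder-decoder part might look as follows; the module types and sizes are our assumptions rather than the paper's implementation details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LIAM(nn.Module):
    """Hypothetical sketch of LIAM: a recurrent encoder over the controlled
    agent's local trajectory, and a decoder with observation and action
    reconstruction heads. Sizes and names are our assumptions."""
    def __init__(self, local_dim, mod_obs_dim, n_actions, z_dim=64):
        super().__init__()
        self.encoder = nn.GRU(local_dim, z_dim, batch_first=True)  # f_w
        self.obs_head = nn.Linear(z_dim, mod_obs_dim)              # f^o_u
        self.act_head = nn.Linear(z_dim, n_actions)                # f^pi_u

    def ed_loss(self, local_traj, mod_obs, mod_act):
        # local_traj: [B, H, local_dim] = controlled agent's (o^1, a^1) stream
        z, _ = self.encoder(local_traj)                             # [B, H, z_dim]
        recon = ((self.obs_head(z) - mod_obs) ** 2).sum(-1).mean()  # observation term
        nll = F.cross_entropy(self.act_head(z).flatten(0, 1),
                              mod_act.flatten())                    # -log f^pi(a|z)
        return recon + nll                                          # Eq. (recon_vae)
```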
The latent variables $z$, augmented with the controlled agent's observation, can be used to condition the RL-optimised policy. Consider the augmented space $\gO^1_{aug} = \gO^1 \times \gZ$, where $\gO^1$ is the original observation space of the controlled agent in the POSG and $\gZ$ is the representation space of the agent models. The advantage of learning the policy on $\gO^1_{aug}$ compared to $\gO^1$ is that the policy can specialise for different $z \in \gZ$. In our experiments we optimised the policy of the controlled agent using A2C; however, other RL algorithms could be used in its place. The inputs to the actor and critic are the local observation and the generated representation. We do not back-propagate the gradient from the actor-critic loss to the parameters of the encoder, and we use different learning rates for optimising the parameters of the RL networks and the encoder-decoder. We empirically observed that LIAM exhibits high stability during learning, allowing us to use a larger learning rate for the encoder-decoder than for RL. Additionally, we subtract the policy entropy from the policy gradient loss to encourage exploration [@mnih2016asynchronous]. Given a batch $B$ of trajectories, the objective of A2C is:
$$\begin{equation}
\label{rl_equation_a2c}
\begin{aligned}
& \mathcal{L}_{A2C} = \E_{(o_t, a_t, r_{t+1}, o_{t+1}) \sim B} [ \frac{1}{2} \big( r^1_{t+1} + \gamma V_{\vphi}(o^{1}_{t+1}, z_{t+1})-V_{\vphi}(o^1_{t}, z_t) \big)^2 \\
& \hspace{4cm} -\hat{A}\log\pi_{\theta}(a^1_t|o^1_t, z_t) - \mathcal{\beta} H(\pi_{\theta}(a^1_t|o^1_t, z_t))]
\end{aligned}
\end{equation}$$
The pseudocode of LIAM is given in [6](#sec:pseud_liom){reference-type="ref+Label" reference="sec:pseud_liom"} and the implementation details in [9](#sec:impl){reference-type="ref+Label" reference="sec:impl"}. Intuitively, at the beginning of each episode, LIAM starts with uninformative embeddings that "average" over the possible agent trajectories. At each time step, the controlled agent interacts with the environment and the modelled agent, and updates the embeddings based on the local trajectory that it perceives.
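As an illustration, the A2C update on the augmented observation could be sketched as follows; all names and coefficient values are assumptions, and `adv` stands for the advantage estimate $\hat{A}$ computed beforehand.

```python
import torch

def a2c_loss(actor, critic, obs, z, action, reward, obs_next, z_next, adv,
             gamma=0.99, beta=0.01):
    """Sketch of the A2C objective on the augmented observation (o^1, z);
    z is detached so no gradient flows from the actor-critic loss into
    the encoder, as described in the text."""
    x = torch.cat([obs, z.detach()], dim=-1)
    x_next = torch.cat([obs_next, z_next.detach()], dim=-1)
    # Critic: squared TD error with a bootstrapped target.
    td_target = reward + gamma * critic(x_next).squeeze(-1).detach()
    critic_loss = 0.5 * (td_target - critic(x).squeeze(-1)).pow(2).mean()
    # Actor: policy gradient with advantage, plus an entropy bonus.
    dist = torch.distributions.Categorical(logits=actor(x))
    actor_loss = -(adv * dist.log_prob(action)).mean() - beta * dist.entropy().mean()
    return critic_loss + actor_loss
```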
2006.11197/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile modified="2019-09-19T07:57:38.237Z" host="www.draw.io" agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" etag="tGbukb1SD1eyUzqNRfKd" version="11.2.9" type="device" pages="1"><diagram id="m-3gLCOoA3r8bcmHM4LP" name="Page-1">7Vxbc6M2FP41eawHiftjbCe709ltd5ppN3nqKKBguthyAW/s/voKI3ERvig2SI537ZkMCBDwne8cnfNJzo05ma8/pGg5+0xCnNxAI1zfmNMbCC1g079Fw6ZsgACWDVEah2UTqBse4v8wazRY6yoOcdY6MSckyeNluzEgiwUO8lYbSlPy2j7thSTtuy5RhDsNDwFKuq1f4zCfla0edOv2jziOZvzOwPHLI3PET2Zvks1QSF4bTebdjTlJCcnLrfl6gpMCO45Led39nqPVg6V4kctcsPxrvUkfyZ/pr+Cz//D3H/9+Qh9/Ydb5jpIVe2H2sPmGI0B7oWDTnfHrLM7xwxIFxZFXam7aNsvnCd0DdBNly9ICL/Ea05uOWd84zfF670ODCgpKIUzmOE839BR+gcHQY/QBJtt/rY3BAZ417MDbEDN/VPVcI0Q3GEhvAMy9dMBcsw2Y7+sFzLtwwKDAMF8zwfxLx8sXPNLWDBiwJBBbhLfFYED3ggRlWRy0gcLrOH9sbD/RbWNks71p8eoG39mwnfImOOyMHwKS9EHIKg3wsSjcRbyBqL0DUd6W4gTl8ff2Y+yCmd3hC4npA9YRA3pCiPXaXZSPz65qDjBCR9ATOrKFjnKURjjvdLS1evXaZxBBZjAbgAgvcZJMSELS7S3MFy/AQdFtlqfkG24cefZsy34LdbRRAvhtS1rmqZRw2x05QC0lnIugRIiw97KTEk7g4eeX90AJfkkVJdxTKSFwy7bVUkImg3tXlJAYYDyd1DFhO2Pw4InM2Zd5qGKOTCqrMdHQFhoMoeQ42cCe0JED1RpYJvf+AQ0MBQcGhtWXB1tKDczf42eGeDYleGZVUcI+kRKe0JHidIAz4HrSAX3DgNW2pHtyHSl0pLiOhOZFUOIKogQwBOf2+ooSjuIoMZTG1A8hwG3xLS5f0Pd85H0VO0/8jsVOfZvt3gBKFiuuSnMcOM/XSktx8PJPpaUvdOQqpuVQilfPQ9ewtNRGI88ZGS6Nca4H6LZtd2ZeRj70fJOe6AHTsk4VTHzQ6kdU1KzWQ5hqCTiUvtbzQHmtBPQPERAadk8EtA4R0DU0ElCGfzpny0T922WJhK7JMvg29XNBFgVwIcpmBSJbmBqI7U9ELt51rLYSAUUtUdY5xI5ctZokfJsmebX2hH3ZU+xIsT1NCYUqm6FlsZnFiyjBzLTHglsPscwEYl1ud4IZHxyaJgZDBTNTQrvhYAWrNNmMUxR8K+qSY2ilZLUIKw/JylVpxsiyLBrRgeGbrm0Y1Pf6wZUXs3x+RG5BBXAGw1VCAKkRMo7DKVcxsODUA6IWaGdCvEw7huhQgDpmNyjTUPrAdkmaz0hEFii5q1vHbYTrcz4RsmS4/oPzfMPWTKJVTs7XoHYnyuyqA6ny+SOHhLhgsvhzVF1wjN30kB5kznMfCbHoHPcxth+F7lPJeLr8hw/mevyn9pmnxpH36D+OpP9Ye+Q5Rf4jUdSd4z/39w79qPQfoNl/LNBBdBqjKEVz2gg64NI3zw8VBgypJqysCSVxtCiyLYoUpu3jAsc4QMktOzCPwzDZV4u3jdqDIRyhKIDdzMpRWX1bXb22tkOX5FdjB3BxhuiGmNoQ3WTtagwB7XYJB/mSF22G6OpRKIpSHKGcvjd0kgL355RuRcUWKEYvUtpoOyZm9AiaF1gtnrNlBdI12s70dysTDdN5Sk3XlZ4OmQ7+0KYDo3aNT2tSvdaznQ7c2lLqw7M9MrPddX6teL7bYmYcviTdLVdaXjsoAOCPPNsFFmR/1U7H8PnvS2CVbKHW5o5xhDtKCjVpVpU/9tNVqPHHbIwAHzr25wLsMiUBzrLj1dozCr5FW078vsqTuIjnfSnYlpCHAp5+NBXsXVHYHghB+4L85XAUvmhhw4aS/qJX2OCPeTW6OpD8sexgwgafTv8prJ/nP+a7GG/sgeellAvr1W/PtQnrA09VKJdaK6i0TfW9m4ikJLIMVPbYplj2gBHwm592j0OvQnvLqo3V/K5akbbEaUxvXkgUU7ZO7UvddGqmHKUojKmpBHfjzdM4xUEek0IHecVZ3pMnuqagcVS/C2z4InfX9kKGoXxRYnn0OdHNmdyO7ycDRjeR5n53wY3a4DbwzJx7W3wVAlr9bE0bohLrMc9BdOxZPi16FCIKNCPqSsTisxCd3t0rdXr9RVZ39vh9e732tNsdWAgYezSbm6pEVHfa7WqdzrgaIcCVFgL28EONEOBKCAHXn28L2bbf1bKVZttuN5X5jYS4OxN8y2aJC0REq13PlK/hieUQ7Bqop7UWdLf+P6JlUVv/M1bz7n8=</diagram></mxfile>
2006.11197/main_diagram/main_diagram.pdf
ADDED
Binary file (14.5 kB). View file

2006.11197/paper_text/intro_method.md
ADDED
@@ -0,0 +1,124 @@
# Method

In this section we present the exact configurations of all model variants of MXGNet. Due to the complexity of the architectures, we describe each module in sequence. The object-level representation has two variants: (o1) CNN features and (o2) spatial attention features. The models for the PGM and RAVEN datasets also differ in their details. Unless otherwise stated, in all layers we apply Batch Normalization [@ioffe2015batch] and use the Rectified Linear Unit as the activation function.
**CNN features**: The first approach applies a CNN to the input image and uses each spatial location in the final CNN feature map as an object feature vector. This type of representation is widely used, for example in Relation Network [@santoro2017simple] and VQ-VAE [@van2017neural]. Formally, the output of a CNN is a feature map tensor of dimension $H\times W \times D$, where $H$, $W$ and $D$ are respectively the height, width and depth of the feature map. At each of the $H \times W$ locations, an object vector is extracted. This type of object representation is simple and fast, but does not guarantee that the receptive field at each feature map location fully bounds the objects in the image.
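As a concrete illustration, treating each spatial location as an object vector is just a reshape of the feature map; the tensor shapes below are illustrative, not the paper's exact configuration.

```python
import torch

# Minimal sketch: each spatial location of the final feature map becomes
# one object vector. Shapes are illustrative.
feat = torch.randn(8, 128, 10, 10)                       # [B, D, H, W]
B, D, H, W = feat.shape
objects = feat.permute(0, 2, 3, 1).reshape(B, H * W, D)  # [B, H*W, D]
```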
We use a residual module [@he2016deep] with two residual blocks to extract CNN features, as shown in figure [7](#fig:cnn_feat){reference-type="ref" reference="fig:cnn_feat"}, because residual connections showed better performance in our experiments. The structure of a single Residual Convolution Block is shown in figure [6](#fig:arch_res){reference-type="ref" reference="fig:arch_res"}. Unless otherwise stated, convolutional layers in residual blocks have a kernel size of $3\times3$. The output feature map, processed by another residual block, is treated as the background encoding, because we found that a convolutional background encoding gives better results than feature vectors.
<figure id="fig:arch_res" data-latex-placement="h">
<div class="center">
<img src="images/resconv.jpg" style="width:30.0%" />
</div>
<figcaption>Architecture of a single Residual Convolution Block.</figcaption>
</figure>

<figure id="fig:cnn_feat" data-latex-placement="h">
<div class="center">
<img src="images/cnn_feature_module.jpg" style="width:90.0%" />
</div>
<figcaption>CNN feature object-level representation module. 'Conv' denotes a convolution layer, 'Max-Pooling' a max-pooling layer, and 'ResConv Block' a Residual Convolutional Block.</figcaption>
</figure>
**Spatial Attention Object-level representation**: The second approach uses spatial attention to attend to the locations of objects, and extracts a representation for each attended object. This is similar to object detection models such as Faster R-CNN [@ren2015faster], which use a Region Proposal Network to propose bounding boxes of objects in the input image. In practice, we use a Spatial Transformer [@jaderberg2015spatial] as our spatial attention module. Figure [8](#fig:sp_feat){reference-type="ref" reference="fig:sp_feat"} shows the architecture used for extracting object-level representations with spatial attention. A CNN composed of one conv layer and two residual blocks is first applied to the input image, and the last-layer feature map is extracted; this part is the same as in the CNN grid feature module. A spatial attention network composed of two conv layers then processes the information at each spatial location on the feature map, and outputs $k$ tuples $z = (z^{pres},z^{where})$, corresponding to $k$ possible objects at each location. Here, $z^{pres}$ is a binary value indicating whether an object exists at this location, and $z^{where}$ is an affine transformation matrix specifying a sampling region on the feature maps. The binary variable $z^{pres}$ is sampled from the Gumbel-Sigmoid distribution [@maddison2016concrete; @jang2016categorical], which approximates the Bernoulli distribution; we set the Gumbel temperature to 0.7 throughout the experiments. For the PGM dataset we restricted $k$ to be 1 and $z^{where}$ to be a translation and scaling matrix, as 'shapes' objects do not overlap and have no affine transformation attributes other than scaling and translation. For all $z_i;i\subset[1,H\times W]$, if $z^{pres}_i$ is 1, an object encoder network samples a patch from the location specified by $z^{where}_i$ using a grid sampler with a fixed window size of $4\times4$ pixels (more details of the grid sampler can be found in [@jaderberg2015spatial]). The sampled patches are then processed by a conv layer to generate object embeddings.
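For illustration, the $z^{pres}$ sampling step can be written with a generic Gumbel-Sigmoid relaxation as below; the temperature of 0.7 follows the text, while the function itself is a standard formulation rather than the paper's exact code.

```python
import torch

def gumbel_sigmoid(logits, temperature=0.7):
    """Relaxed Bernoulli sample for z_pres (generic sketch).
    Adds logistic (difference-of-Gumbels) noise to the presence logits
    and squashes with a temperature-scaled sigmoid."""
    u1 = torch.rand_like(logits).clamp(1e-9, 1 - 1e-9)
    u2 = torch.rand_like(logits).clamp(1e-9, 1 - 1e-9)
    noise = torch.log(torch.log(u2) / torch.log(u1))  # Gumbel(u1) - Gumbel(u2)
    return torch.sigmoid((logits + noise) / temperature)
```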
<figure id="fig:sp_feat" data-latex-placement="h">
<div class="center">
<img src="images/sp_attn_feature_module.jpg" />
</div>
<figcaption>Spatial attention based object-level representation module. 'Conv' denotes a convolution layer, 'Max-Pooling' a max-pooling layer, and 'ResConv Block' a Residual Convolutional Block. <span class="math inline"><em>z</em></span> is the spatial attention variable <span class="math inline">(<em>z</em><sup><em>p</em><em>r</em><em>e</em><em>s</em></sup>, <em>z</em><sup><em>w</em><em>h</em><em>e</em><em>r</em><em>e</em></sup>)</span>. Sampler is a grid sampler which samples a grid of points from the given feature maps.</figcaption>
</figure>
**Multiplex Edge Embeddings**: Figure [5](#fig:mplx){reference-type="ref" reference="fig:mplx"} in the main paper shows an overview of the multiplex graph architecture. While the motivation for and overview of the architecture are explained in section [5.2](#sec:graph){reference-type="ref" reference="sec:graph"} of the main paper, in this section we provide the exact configuration of each part of the model. Each sub-layer of the multiplex edge is embedded by a small MLP. For the PGM dataset, we use 6 parallel layers for each multiplex edge embedding, with each layer having 32 hidden units and 8 output units. For the RAVEN dataset we use 4 layers with 16 hidden units and 8 output units, because the RAVEN dataset contains fewer relation types than the PGM dataset. The gating function is implemented as a single Sigmoid fully connected layer with hidden size equal to the length of the concatenated aggregated embeddings. The gating variables are element-wise multiplied with the concatenated embeddings to produce the gating effect. The gated embeddings are then processed with a final fully connected layer with hidden size 64.
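A schematic sketch of one such multiplex edge embedding with gating, under our own naming and with the PGM sizes from the text, might look as follows; note that the real model applies the gating to aggregated embeddings, which we simplify here to a single edge.

```python
import torch
import torch.nn as nn

class MultiplexEdge(nn.Module):
    """Sketch of a multiplex edge embedding: parallel MLP sub-layers whose
    concatenated outputs are gated element-wise and fused (PGM sizes)."""
    def __init__(self, in_dim, n_layers=6, hidden=32, out=8, fused=64):
        super().__init__()
        self.sublayers = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, out))
            for _ in range(n_layers))
        self.gate = nn.Sequential(nn.Linear(n_layers * out, n_layers * out),
                                  nn.Sigmoid())
        self.fuse = nn.Linear(n_layers * out, fused)

    def forward(self, edge_feat):
        e = torch.cat([f(edge_feat) for f in self.sublayers], dim=-1)
        return self.fuse(self.gate(e) * e)   # element-wise gating, then fuse
```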
**Graph Summarization**: This module summarizes all node summary embeddings and background embeddings to produce a diagram-subset embedding representing the relations present in the set of diagrams. We experimented with various approaches and found that keeping the embeddings as feature maps and processing them with residual blocks yields the best results. Background feature map embeddings are generated with one additional residual block of size $48$ on top of the lower-layer feature-extracting ResNet. For object representations obtained from CNN grid features, we can simply reshape the node embeddings into a feature map and process it with additional conv-nets to generate feature map embeddings of the same dimension as the background feature map embeddings. For object representations with spatial attention, we use another Spatial Transformer to write the node summary embeddings to their corresponding locations on a canvas feature map. Finally, we concatenate the node summary embeddings and background embeddings and process them with 2 residual blocks of size $64$ to produce the relation embeddings.
Figure [9](#fig:reason_module){reference-type="ref" reference="fig:reason_module"} shows the reasoning network configuration for RPM tasks. We experimented with the approach introduced in [@barrett2018measuring], which computes a score for each answer candidate and finally normalizes the scores. We found that this approach leads to severe overfitting on the RAVEN dataset, and therefore used a simpler approach that concatenates all relation embeddings and processes them with a neural net. In practice we used two residual blocks of size 128 and 256, and a final fully connected layer with 8 units corresponding to the 8 answer candidates. The output is normalized with a softmax layer. For meta-target prediction, all context relation embeddings (context rows and columns for PGM, only rows for RAVEN) are summed and fed into a fully connected prediction layer with Sigmoid activation. For PGM there are 12 different meta-targets, while for RAVEN there are 9.
<figure id="fig:reason_module" data-latex-placement="h">
<div class="center">
<img src="images/reasoning_module.jpg" style="width:80.0%" />
</div>
<figcaption>Architecture overview of the reasoning module. 'RelEmbed' denotes the relation embeddings, 'Concat' a concatenation layer, 'ResBlock' a Residual Convolutional Block, and 'FC' a fully connected layer.</figcaption>
</figure>
The architecture is implemented in the PyTorch framework. During training, we used the RAdam optimizer [@liu2019variance] with learning rate 0.0001, $\beta_1 = 0.9$ and $\beta_2 = 0.999$. We used a batch size of 64, and distributed the training across 2 Nvidia GeForce Titan X GPUs. We early-stop training when the validation accuracy stops increasing.
In the PGM dataset there are two types of elements present in a diagram, namely shapes and lines. These elements have different attributes such as colour and size. In the PGM dataset, five types of relations can be present in a task: $\{Progression, AND, OR, XOR, Consistent\ Union\}$. The RAVEN dataset, compared to PGM, does not have the logical relations $AND, OR, XOR$, but has the additional relations $Arithmetic$ and $Constant$. In addition, the RAVEN dataset only allows relations to be present in rows.
Figures [10](#fig:shape1){reference-type="ref" reference="fig:shape1"} and [12](#fig:shape2){reference-type="ref" reference="fig:shape2"} show two examples from the PGM dataset (images courtesy of [@barrett2018measuring]). The first example contains a 'Progression' relation on the number of objects across diagrams in columns. The second example contains an 'XOR' relation on the position of objects across diagrams in rows.
In addition to shape objects, diagrams in the PGM dataset can also contain background line objects that appear at fixed locations. Figure [13](#fig:line1){reference-type="ref" reference="fig:line1"} and [15](#fig:line2){reference-type="ref" reference="fig:line2"} show two examples of PGM tasks containing line objects.
<figure id="fig:shape2" data-latex-placement="h!">
<div class="center">
<figure id="fig:shape1">
<img src="images/PGM_1.jpg" />
<figcaption aria-hidden="true"><span id="fig:shape1" data-label="fig:shape1"></span></figcaption>
</figure>
<figure id="fig:shape2">
<img src="images/PGM_2.jpg" />
<figcaption aria-hidden="true"><span id="fig:shape2" data-label="fig:shape2"></span></figcaption>
</figure>
</div>
<figcaption>Two examples in the PGM dataset. Task (a) contains a 'Progression' relation on the number of objects across diagrams in columns, while (b) contains an 'XOR' relation on the position of objects across diagrams in rows.</figcaption>
</figure>

<figure id="fig:line2" data-latex-placement="h!">
<div class="center">
<figure id="fig:line1">
<img src="images/PGM_lines_eg_1.png" />
<figcaption aria-hidden="true"><span id="fig:line1" data-label="fig:line1"></span></figcaption>
</figure>
<figure id="fig:line2">
<img src="images/PGM_lines_eg_2.png" />
<figcaption aria-hidden="true"><span id="fig:line2" data-label="fig:line2"></span></figcaption>
</figure>
</div>
<figcaption>Two examples in the PGM dataset containing background line objects.</figcaption>
</figure>
In this section we provide the detailed architecture used for search space reduction, and present additional experimental results.
The node embeddings are generated by applying a conv-net of 4 convolutional layers (32 filters in each layer) with kernel size $3$, followed by a fully connected layer mapping the flattened final-layer feature maps to a feature vector of size 256. Edge embeddings are generated by a 3-layer MLP with $512-512-256$ hidden units. Subset embeddings are generated by a fully connected layer of 512 units. The subset embeddings are gated with the gating variables and summed into a feature vector, which is then fed into the reasoning net, a 3-layer MLP with $256-256-13$ units. The output layer contains 13 units: the first unit gives the probability that the currently combined answer choice is true, and the remaining 12 units give the meta-target prediction probabilities, the same as in [@barrett2018measuring]. The training loss function is:
$$\begin{equation}
\mathcal{L} = \mathcal{L}_{ans} + \beta \mathcal{L}_{meta-target} + \lambda \left\lVert\sum_{(i,j,k)\subset S}{G_{i,j,k}}\right\rVert_{L1}
\end{equation}$$
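A minimal sketch of this objective is given below; the cross-entropy and binary-cross-entropy forms of $\mathcal{L}_{ans}$ and $\mathcal{L}_{meta-target}$ and the value of $\beta$ are our assumptions, while $\lambda = 0.01$ follows the text.

```python
import torch
import torch.nn.functional as F

def total_loss(ans_logits, ans_target, meta_logits, meta_target, G,
               beta=1.0, lam=0.01):
    """Sketch of the combined objective. G holds the gating variables of
    all candidate subsets (e.g. shape [n_subsets, d])."""
    l_ans = F.cross_entropy(ans_logits, ans_target)            # L_ans
    l_meta = F.binary_cross_entropy_with_logits(meta_logits,
                                                meta_target)   # L_meta-target
    l_gate = G.sum(dim=0).abs().sum()                          # ||sum G||_L1
    return l_ans + beta * l_meta + lam * l_gate
```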
In our experiments we tested various values of $\lambda$ and found 0.01 to work best. This model is trained with the RAdam optimizer with a learning rate of 0.0001 and a batch size of 64. After 10 epochs of training, only the gating variables of subsets that are rows and columns are above the 0.5 threshold. The gating variables for the three rows are 0.884, 0.812 and 0.832, and for the three columns 0.901, 0.845 and 0.854. All other gating variables are below 0.5; among these, the one with the highest absolute value is 0.411. Table [4](#tbl:ssr){reference-type="ref" reference="tbl:ssr"} shows the top-16 ranked subsets, with each subset indexed by its 2 connecting edges. Figure [16](#fig:matrix_ordering){reference-type="ref" reference="fig:matrix_ordering"} illustrates this way of indexing a subset. For example, the first column with red inter-connecting arrows is indexed as 0-3-6, indicating that there are two edges, one connecting diagrams 0 and 3, and the other connecting diagrams 3 and 6. Similarly, the subset connected by blue arrows is indexed as 1-2-5. Note that 1-2-5 and 2-1-5 are different, because 1-2-5 contains edges 1-2 and 2-5 while 2-1-5 contains edges 1-2 and 1-5.
<figure id="fig:matrix_ordering" data-latex-placement="h">
<div class="center">
<img src="images/matrix_ordering.png" style="width:50.0%" />
</div>
<figcaption>Illustration of diagram ordering in the matrix and the numbered representation of subsets.</figcaption>
</figure>
:::: center
::: {#tbl:ssr}
------ ----------------- ---------------------
Rank   Diagram subsets   $|Gating Variable|$
1      0-3-6             0.901
2      0-1-2             0.884
3      2-5-8             0.854
4      1-4-7             0.845
5      6-7-8             0.832
6      3-4-5             0.812
7      1-2-5             0.411
8      2-1-5             0.384
9      3-6-7             0.381
10     3-7-4             0.364
11     6-3-7             0.360
12     1-5-4             0.357
13     0-4-6             0.285
14     3-4-7             0.282
15     1-3-4             0.273
16     1-4-5             0.271
------ ----------------- ---------------------

: All subsets ranked by the absolute value of their corresponding gating variables.
:::
::::
The original model in [@Wang2018Investigating] uses a Siamese conv-net to process two input premise diagrams and output all consistent conclusions. Convolutional layers with shared weights are first applied to the two input diagrams. The top-layer feature maps are then flattened and fed into a reasoning network to make predictions. We simply use CNN grid features of the top-layer feature maps as object-level representations, and use the multi-layer multiplex graph to capture object relations between the two input premise diagrams. We use multiplex edge embeddings of 4 layers, with each layer of dimension 32. The cross-multiplexing here becomes self-multiplexing, as there are only 2 diagrams (only 1 node summary embedding for edges from the first diagram to the second). The final node embeddings are processed by a convolutional layer to produce the final embedding, which is fed into the reasoning network along with the conv-net embeddings.
2007.01072/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2020-05-29T19:53:25.672Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36" version="13.1.7" etag="OIYoR8xiMi-57aAf3IsN" type="device"><diagram id="G2JKxd4xbtjrREkr_zZP">7V3XmqNIsn6auTz94c0lRlhJeBDc4YT3IMzTH6jqals92zPTc3b2rFTVXSKBJDMyMjL+ICLyN5ipFr732/TSRHH5GwREy28w+xsEQShI7n+OkvW1hADx14Kkz6LXIvBzgZFt8cdC4GPplEXx8NWFY9OUY9Z+XRg2dR2H41dlft8389eX3Zvy66e2fhJ/V2CEfvl9qZNFY/qxFxD+uVyIsyR9ezKIfexw5b9d/LEnQ+pHzfxFEXz6DWb6phlfv1ULE5cH8d7o8nof94OznxrWx/X4MzdArzc8/HL62LeP7RrXt872zVRH8XE98BtMz2k2xkbrh8fZeR/evSwdq3I/AvevfTP6Y9bU+yF5XP6x+rgf4+WHTQQ/dXznmLip4rFf90s+3oAAyOstH5kFIz4Sc/5MevCt2vQLsiPExxH/ONrJp6o/E2T/8pEm79MHfoc+WDkend5PYcn40sfXknuzd+hL0mHd1Lyd+J/hhYup/QIQa5fPJ99q0aZ4eKHcx9r2lr1W+PVD9uIvn/xXxupXjA0EfwBhEiJQHET2/7GvRgqBwe9GCic+vF769v874wb89XFD/jVf71OvPb5m1ctspw9CZPscP/tBXKrNkH3k46AZx6baLyiPE7QfFskLmZmmbPqXquD7y+eLOqgyS457x+aguT+0r1Loni3H4NAvj6TeSoG3kv175I/+ziWvhxDX1slvEJPZtKLPgMwnDbV/roaVnqxk/3ZCjmOXodz9D23dwwe8fxHPp/Kk2brLxJAXO4sFLo7w2OKzB23q5YrcLhcvqVIpoE9a0oqFOXOeXqGyqMtykkhdJ1eNLVmG1ZacnuznLLylOExu+qA5cV5zOp0qU2q5hrt7PNMy86ozGxRT9hlS1Zy/E15dPaZhn6X0BSKdHLohPRj68F1V1DiCtxyG4Vq5FbZu2/fyGMkbdTqfzix+8PyGecVZPrUEe7IvklXJFGmjcQb4kd09kMWc5iLsFFRNvfiaNeFDeqiPaXG01Y8lH7mg0XTaTChBrX69KBpqFa2oSd1dBZwsb0YOF5Jo5XJQh6RBPS+DhTIEueAXcBUoRIj6ITrf6Gy86ik9z/biKM2pyHnHzxhJ26oLk1YMVYIFkoedtMQLpoZ7N33SirKAXQmJjX3D5rdosjgwyvcm55o1DRjCLHc4JFFWHhov67I2ZIWWqVn8gU7Xg6kheq5juXHEq6mcuOtpbmug1SpozC2aQu5Sye/sQI93lUESNbMKaq2S+iLqpDUfDWgHVHH8SUCj9gHJ/L2TtehWaWfZLzz1sdhB2vj7lKBTRGQuCxCJPhpaHQMzJxFqIM0QNKQn1Rto2Z6TdmLTINy5yMW1Z6WtSgJ0b2LgoVcG87sOXFSPEBPpEakl7sduQN730wLripweIoszx3a6P8rpOBtF76soolKctuGquYhHChCZYxgwhgoa0LH/uGxiKOxXK4V6PIS23BlCr83k2IDlKNHD2PDgtj2EtCLopEpOc7qmd+ByP1eQv+A9L4RxwCXjLk/pGM2i+8FCorALFe6hOUgv2CDd3bpUyzQDcMWqKsLtprV07ljKsN9DLyxyh8+y2iZOKfHIzVV4z1LGartrxV4JpDRUuoZe5zDi2SQe6HkfVWMdZMIuilwLG2NgBt5Dz8Y9DnaGgE3aTYYaQ1NG7VVucUsi93HiIAd50adJrIeLTj1UBfeCbV0huWgGrrjbijtFhIuoGa3DJWTrwKQDuPUYGPc2oQmKuYISpPW5VLu4MHlzcF1oZsCSCYyqayu2Xce2iUkFRxnH4SYjZyI/cNxYddDUXBEt8RdJtTMmkKqHZ427lKbD3LuKdhLsnYTtGbTOMaqG3Foij4NP4LHXtZO93PIWPPOuqe6X4cr96pHw7DmQwdRI1/dIzEx01fVtmiOEr7uUTeI+2x50E04PjoJPVyxAjZEIQiSt66yu6qrZh4oLw4eYs8F1NFh/TbpA2sv0DRse5ws98NkuObhGg5WygbY55q9Yqz7cvUzlHX3vbDKdoXu8sffYtFeFUUFvPfhzBQbX83hXrjKkxci75Jc5kGqd6EJG3ZKicCZxLvNtW25v5mW522V5NZsVgFIk4sIue6wAUtSER7pOcZ3QG2MF6wMmYmOtphyOL6ZVlKtzFZscdHIhJBqiV9uOfDTMnYRlf8h2ItEeq3iPnoLc82Wr5SIS6PrROgaQc2KImGiIhQVq2dG1tCPssqTLsLhIh8W+7xeiep08QtMVZ4xhz9TF5aopNpQNmUme8nuHPUzivMTs3TL37prcVLPokrl5opnd5HbZKbS0vDsBqLHPCSFp6Z2rT3XZm5FQPhxkSFJijE5GjcXKrYfGa1Xm5wLknH3e0X1iXwqw6hJ73o/26glZO4OSZ8QsOCVXAE6brWvTBpuTwuD359Pl6SLfA2iZ477NdEZjvfHq1c44+hecovlN3xcuDpDvOXEJ5gHk6wfvBprC3438Eur3BcNZXCthm9j428FcAAXy0xBPgdt5InEPV1kYFBhvp12gdfjJA2s4WkSnh2JwnTtScxQh33mCxnVnhrVoU4RtxWHvJqJdDGv3bUxjdSUE9bIy4fm0ukRB1gty9YuS8xUQIB5Q7cMJiIDy7WDucOlHASpl6WBF4YQ5tYBXriuLKzhpD4+w3I2J7qUjDAwZ8lKFG+Gpv5bBuVCne87VaLcdS+ysC6/0OwW36RyKrumxCEXDp/C1GLrIuA52oLZLKJqpya5WqCX2j3VvRgVVwAJSE+qrP1KIPShFvQU6xLuIK1vUaelRFiQzOF4P+mOd58Z8pGZn4H6zyv6GxDgYw3nqAzc3orT9EjHb1xVuWU/wQeAc6XMszNlddaIroOadZAKz0wa2BZsixqP1Fts85GbloJMa4h3KYcQxLRkjO0ksCyv5PDzABwqXyWILygMkmCI0r5p87gFQ9OsNl4ejl5jFSRfhJozQw3Yu5OviBqdhM6JKZD8m+ZD22GbvJ+or3Ez+FppxJjN39fq4Ar0t+Cp5ukllViFDpYC5mIP44KrX3tvvoC7YNt7UUxuUaGzzM77thSUxOiEANQYkpcRq1uhWFbrFQsm8aV3XTdfucrNOg2ikgLWVZuvTmjgxwtYO8hyNiv06ZHT0OBSWYgsZQxcu5R0gxvGsIABkb/siqFPm8LJA7cSkj171yLSlgsqIBQwyLXZxkeb8cGPMEOXxZufso67Og2J4vgLDBAZTtbDMoRjpXtKZ/IZckGPVlAiQwSGCgGqjLC0bOZ8r63rrUSJvIXRzz/uzgtkLmPzE
B+OUWD1OhvoNJFV7/+gXEeaSOuLvgNPToNPtV+d7pRaQcZd8SzZLbFplZR0dTxuqpLpMEzHV9wx9GbVHqTHUzRZUi8au/JATXZc7J2/HJzSlaFJ6aNj5tuS+GYkXKGsTd7oYoBfoOAclBwMql7uxRpUuyokQmyHGlSPy6CDApVWT5SQnGUPjdqJZC6p51QdF6s4CNng5X8+nMKIPjU47NLoC20LXcj9qdGoMsqI3kNWg+mPbb5dsQYoTBvu474AjqOP9cp5MWRQfqbhwh+SqQCvExcshw0Lf7Ow6rHc+5Y45fb0nYV81nV+l41kHrpdQ0u3KIW4duas2F/2YkP0lL/Pgem0fIc4XYS22Z3HjbQR6NInknVUvZLsIqgkF6rBORprbLeapWy6iLrSZfgA7ZEw6ddgrnpCYhK1eVFo47fXOXAjcwErcBTCvHUo6LekWeuoLKUmSAzQcv78CgUEg8oGAoc+Q6msIhr1ZVr6EYNgHDCAxCMUwAkFI6HsEtqM6FIdAEkJBGN7xNgH9dUSGPhHZE5E9EdkTkT0R2RORPRHZE5E9EdkTkT0R2ROR/bchMgT4gP5DMBn2xGRPTPbEZE9M9sRkT0z2xGRPTPbEZE9M9sRkT0z234bJMOQfgsjwJyJ7IrInInsisicieyKyJyJ7IrInInsisicieyKy/zZERuD/mLdkxL/GZAeyan9IlY9hqH7wdjnwZ6iFo18EzuFfUQt/C6j9klb7HTjxxT3fE+stkPKvEIf87wCsw3HsvwJW9lgCZP7AsAJpvUFWcIesc4jqL1pBuA0Pfbxj2zWGcCEHecSoRgZQrKQPuU5PGZrTDcnWACVNHrjedl0iWQkpMbLcMZ19TY0maxhD65JzYJwyiZN0UZSSMNeAQpGLcboWpzXq+118QKQqPC74sFgBGdxxGfNUfRVB2yX8+z4QNILDj8e9JPc/d9dTui5rDuAViBpy6YxDI9JFDWCSB9JlyC631CI15IRwmLwzV8+8RbnVbRSee6Ji8wHhR8V0NYJDrR1hzZOzob/qy2Wk1hww0mkvjkqQOBsY8dAkmZMRaDCHywmHHKleDkkPscA+OiHV3jGyU6otGtJRbVMa9ci6cEAhj2Kz5fPONXxfK2kjUOizcqVX8ljVpdsFZ9ljYZ5lxxVr9tBdO+FlbRNC+LrR3WIAlrqTd/WoNG7nq+zsau+8ibFlTnJ4n+l6r4eJNMDwqZlB+axbMmQSrVJMPNJAHi515pvsMk+60vg3nkWA5kG+QFBB4s6nqoluYzKeGz2jLMO63HIfPlPYKBQPNJGLmO7ocB9uyx06fYUzXz/RhHYOcz24dLgVYIWfuZaWnXzd6LTFTcuzme8QTq9Sagk8kMSuNXprZ5UizHtcAremEt37tpTR3up0VY91DjxrKCsoIX9aGOZqpvEyHisobZ/Bhq3jE3i3HONQaVRKjPpDe+P5s71PWlqAINkf0VyRUYWm+HZ41e5WOBxVy5nozeNMasUGJQZRfF8OaDRU5+g6HUCHZtxYCtkrYPs7R2/9Iw2c3GTdfJHjAxthh8xwdx0WlvPxemJmD0Yz6sxMmmdWpraWdWqJKazWok3Sq6lBJhBkWUdlu3IlUIK1PyEJbqhwK8kp9bZda6IB93Gr+8flluxas4JfWMfT5QmjDmUhHNML1B1K72D4RSHU5STNgXkgeL5bNZYvO8OwM0pxHrwJwWQEivfba3eDlEXCOO3uZhbMpKFZjLbKUN83AoYfT4Vu1LrG/PkA4+qhq4cb7DqTmmxRbRUjchVxNDF3bVW5VtGO1BsnZypz3icQdiWXCCySTU9J8OBXH3StR7kj9GsrGAtnV0wQXAwShtdE5KgiON9cQlRGFbF6IBHRNs6xUZ3rNJWRg+cGtbzrKex3V+hMFzwm05O86xtxeqKmwQmMQuZ2+HPFzUw/VDuENnyhSCx4dKXECnQEEBFwxXD1AmLNXJ/CYHwAgqaTjhPWZ8KYg/AKu0GIMl3d4prp4mY0HiplMTcDWMTORZ3G6TZ0PlMzwVAe5I7pDHWpWB+QU08lBgDD5WlualHA+W2cu94hPXqV8hV+rLRrmPzV36YHhXDaOb/ZsS1wG3k/NGsvOa/4UWFzjc6c50G3ZAps2TmnVmhiXIVaO7qgb/IuWG77is+VYnYioX0CHdCgbwK2bgk/a8BtJrprJs4Ve3KQmJ0DJigMmvO2uzNOFlGsm4NdJ4C/+eh2o/urdpfuVqqdxCRhDdfTHPumIxzOdc5tSY0tNIBwS5UUlVxb6JH+DFQc3Vvbo5gEngpZbAeGdRm6CCx6+OXcVMSWybbNdi0Jdty+6qkcikyOaA55l8vJBcCmTRjsJGzs5ayjFQNvAH73+xMDBYaHLqhAr0g+9PWVZ/N7QOr2wADjrkrQJ7Um73iCc/rDXiZR1wIQsFB5o+L6CriJT9298lHtOt/C0Mn1dj0LMhZq7KHEUd4FleK8lAUazvMwHdA7p8LkJGYpnweDArkLEaaTaJABShGDPKi1u4uWw5CFy+uEY4kTqsv9viuctniCWv26y0x6YpqVWcjqQZkUDHDRFudEhPfq3Aa2y5XQ1bQ82chHrQRD88A32HSIzS65ZZciNrUrSCIcZ12dOwuPN/Ew1zjsneh0uUQJclc7ET9X68mMrqYbKmqTFa20zC2AQiPDXZZMqUdjckddUG5aEepdnQKHssyxzmCshfQ6t1sMubtuPceoW7E6dRfmxbvr0wua47ymv7KaChPdfUrA4pBpYQrk1fSCNPdf8FazC1mzlo3ShPJoCFbAZ8rDbbvEBNzDxTXho3scrM3dlkUJOrtgMJigA3v16T6CDnjIQNUv6VXg24e+5FpjlbcHpLpFCwJ5fN0facr8cA9GyrzmfEwzc8zXw461yPWKYPd4trWhOxI8cNbeJ4Q4LTgLu9mxbh6CvJFEUGMH4JDlKstn+D7FA+GBb6qZI3iNamY7SkO78Y1zmGcX5IqPnQpdcH+Xu2GHTQtyTCvtEasmms6LSdVQgZPiHE0rxoByJCDiAabxJnPYpGFjBQnIGA4dtABo+nhs59bxwQ3Si3ybNMshQ9vQYbIxTV/o2RoYFOt68tAgafV9YaVBaNZArjQEWYyyrrFBRtYaufC7plIZqWNt0BVsmAU7X+nNoD2UsfWyN1ud915zl4RWs+US7vxE11DFw3y7umZxCY/1TTkzQrZNDoXVm1svwIEiaSA6iGNh6qMRZJr3gnbh8UJfm1bcsnDh771dcRwI9pY92rQp5o0KD4Y+3+8U32UCXgVbv00lBTmFpmyBr9xSOD1pxflSi9fBzNWBILaEthE20zgW2XkCIXX+BMQ1BqZu0i6OJk3aQDQyPGw1PIdn5RRs5y4qMuYkReyUoKstc4HeFy3kXWllACbCvwB3hBdu8mEs0BfyPBjNrpDSqzLsGJLogSYAOuB62CBO45m6tI9rJ6KwqDIQw1R2ysokKaPFrBK1YFuQC3O36y3Fylz2Lnl1ntspLqcCMJHIiqdkV7f9nFDXgnJ23eDAoGAdPSAjGkZvW3tQa7DLLSPhZHHbk0a
dHBqwMVM4dzIatqbnmtstqrAAHbXwmOZ80zQKvr6I66Q4FnJAINGVLfljkKxmIC4I5ETl0pZqSFReYLcRqXYZ7nobFh7a9NAdHE9dyUuizn1FzfOhblOnkjMLY9IqhvlZ0En8LowCPiDkrmJ9Tj+Cwd/jJvgDhsEgAGIAAuEEgL8DMtEPKIyhCEgQJELAAIb+dRz1ltroCaSeQOoJpJ5A6gmknkDqCaSeQOoJpJ5A6gmknkDqnwakSOKLtI84CH/1Puqtwn8ArAKfsOoJq56w6gmrnrDqCauesOoJq56w6gmrnrDqCav+mbDqB++n3hLqQ/8YXAU9cdUTVz1x1RNXPXHVE1c9cdUTVz1x1RNXPXHVE1f9o3DVWzTZP8fN78cbyr1tH/e22+HbBnHg93vFvd0S/Pnt5s6GefliT7ngnX3mvt5+7htwNx7a5y/fTg4Bv45rA77PywKj348U/At2jAP/S7aMA45j72PqFYDrrvqReiX7lHoFdH1rDsHlJfja5PABH1ziIk4+QQ36nPCnGQkqstUckAtf4q6NsdqZixNNx+8y6VEbdgfste4iIBu7zupKyZ6cypKL1pJv/CAVfT/WQYH0SXVtMCkAMHYKl6xvLBLir5DXexi1KTHswZ4Xc32N69jrKn3HP0Y8Hwr6dn/YjGVk2hHTe4OlTJMgc3TlRVDIYNYNbNUrB81YUcuAq9REEulXa5lQ8GNqOZwk7wnAP2ic8Vjk0KA7TtpBgX1mglvb5tquLHudb1OoKTeCmDbqrd5ATtAw4ZYbVI3c2f5WNmS3iWS5JitkG83NAqJGHgduGtBWChe6aDwXWyHOUL293RxxDXXzRKpHogJu7iXe2b80kSlYeTAuG0UoCUClJ5mVB651azaoON0BLb4irQxFZb64oY69XkQdO2t+rs0OsGkYlgATsFDnk5df9ZleVju5pO3+r1gr/94i9YmfI6rjRZ+bfYZdnJkRDTBwY3HZLttl5meoR3meFQZ6hWWEqKxDM29c6EpviBJ4IwnQLBYK2slYkIPuNLehpN7R48kAlas7xade5nXDXS5pdRohbczr9DKckCNMfjjGysd4ePB9jIY9/8B2F6fk7BVXNH+MzPvKtF0ClpRoWmAWxSc/Kq1b050vXXTkYRBD7c4oQkcap1k51sXhsGyMlHUrxBgK9VrZD9kJSTkXBuwr3V3FOpZoMRIlSna8KJKsjSpTDkJUke0onGRiC6x3yI8lszPVDjjbvZ5hF7eeBLM09S20UbLQiDPv032kWtf5GqwXFzjmFY07Ps6CvQg8RFYGkfBe3HtMWV1aBG3HbiudCZyDS/mBTh/a9X5MoNnIlRdSHI1P3XTwrwZE5g8lZCNlh9ppeyVGDLpWtMmrMQoIAmGB+XazSvLEijF5MDw/HIHOnBZF8RVbUDQzSIChl5QjVxY5c3ENb1sNWqMKXLjAjtxObKV8icEX+gUSdruwCVH4WlXfTJjlVbiwrw0khAyfgJ6WkuquDI39dZSJcD4HEO8Eg4C07F2qN41Ps4hyjhmSEqpwnx3TK1tNIKLIiHstuieOfRhl9n9XA7xPbQeNzUzEbiqMinDOAg+bQblqrF1ph27WKQURJI/R+IyddYdrlM6lMt88YWxFJgrW+A+07kRynPzD1EO/IEH/tg4nahPLmws8GswYDJcufNz2M4w4NL71NqkKTQGwoEupLCEhx4h2oz6084pusk2MrVJdt/symqyOSd6uoeau2L+sBsekpKoDc80n3a4qZuEc9TEcqXgEMMDTqyKZuW5tlbf6un3bCKDIq2PSDoEzAUqq34TwxgC5qBYYOcSsfKSoyGHVIADLi6X7AYNigaer4FSBE6yD9KErne4aBjq3ajy5M4MrIHq0hcgOVV++P1h3TbH4yNiwWp17khVkpHggKO9OD0D6KN52yGwOq7pyjARIY6iQPGTSUmZZzKBO0GH1SVVgIaLSHBdM4RZSZTlSTqEr0KQX0ueBBfKZFczYyyXYHvVh5qAGn8A7admVdCYb1Wb1dGTIJCZsiiIQeJvCCnRVxc4vWeUMSJeQyQho9JmpBuhpbNhaOllGsAuXwzwTpt4KKdaq8Q5yS8oWFzYFh4jWXLrzugiXM9X14CJwVpF7Rz4fe4kNTnUuBUUJ9yP7h+6BTVJB253VzzIqKQVaNravW3iNrZZ+4MuKKSa6YS7hZBuiMuSPyprOC3M2/e0y0WJVlwrSLPEUDewDKC/sjeC9jdODWYWC7obO3kpQ0UH1snKjs9k0UFCd5Dvr5lOgUZlMGxFl3ReOBvWMDvhZRcegqNFjOHP3ZZAXyeFbtgh0YlpsGoerE1Urrdqfx9YKc0D0HnOVFCQYUT0Z7kJ+dNlwO0PDRCfwLn+4ejpxju251mFOVdp88MCro2rK7K8xdxChBc0h4pUELwjwCkqX7RAmm6vomo26k1vLtMjntRrt0wh5XLYekZV00gndmEOnOKvSQdn0YdxQe2Qd9HI6jNPwzAqXWB6LhnG01pFyjB/wVLuqnDXlh2mGWKBddHkn81QnDhXcth6i0caTAd8gdjH6eF2FB/HgVI87S9f5sOvuYkmaL3fidvbkSORrfj2k1Vm7XeDsMLwd9qzjvstGEsZ4o1IxT2+m+6LacSQKhzKFHtZcIHbO7DVNz/lJfsk8hcfyIp0OCnVWexdwTKIwyF7DKIy0jtMwYKrZNEEvIo3a8FWVj4co0UMFnEiZ8yMFSdvKYaaxMGnHWYyoR/ohiZQZSJPE+376DBD5VT3MnX73OKfBATUdmbzeCwu0FLGZ2SvE93B9AwxNBg5Bt9qqhjmQqHv+cqmHiA7dUlBAgmp1Rx2NaWr3lQZuPO/oNJo/OkUgOBvQWjmYA+GiJQKXsxq58M4y4AnmoochLJHjqjgoqR4jH21GG8YK6ZVsQCQsOVTEfPSM5295Y83KXbqzHECmftSxAnOTDvOubjALg4wqLpxzBAkibReoCjQFtXt2D044j726M3LSh/cIfsRzHeQ0hQHqDu59uUIZHrDjXaeLldNQnw9DklcDvlsmzhxT3XqaaLO4L+Smr+Wq5UbhnZ1SXnzJ5AEVkDZ1dC9UQdpyc2KvzdwNTRSy1XnIgnOoz1VbuYs6AezmVZohp1TnX2cuLTX0xrE6UCnyNEmtEswh7yrNct32eX01GA0A5LThuIyWoF52zVPQxlW9Oi88SKN60LH74rt/ffTYRt8h+2j2Wp8uUHYBLCLZTJBjzNJeS4jX56j1DwvvYSsyTMMPT1zehQMdOf3WCqKJIO2c3UjGOOP30dEyQ+nEEzw31XIFu3MI0oVo+3w5KIfpqJM2QTwE9JHFRTxbpG87QL6vwBhD3x/nlTLYhruk0SrDgrrYPRvGZQyVk5usJ3EkvTPbCj1VHmYaToVv27pw3mHQSqJNOV4w3daAvOevcw2QgqHoy5fpU4jGZXQufBX2YW4I1K6gyvaCmkxbD9npLD129eduMjckFxmWvajb2WMXpU+xg8Fd4DZZGLzLVQIlVgdI55PjZ2Vdy659Igui5DSOctj5zrhXWUy2dmjDFpzoaL
yw9THh3+4gjzsOEWgzt+OFCkWYduPL1xGNsHew+9+wHTy6w3ICw7/dQPwTYgS+Q4wY9IFASZAAABjFUAB72xz8K6wPfPiUVgYk8bdsNH8JUL63492v2CleyIaxeSHJqQriKMoOxPdnUfsw9k0RvwHTuqkPXHvPyvKbIv8jPA33YYr7d3BrlUXR8Zh3d53/el/6n0DFv4JREOADDn3BKV8zCgq9k2AIJj/AX6bZQd9llV/AG+/tvPE6hIcR+GtTzF/jFrUps/AgyzUe56Yvfsgqn4q/bME3/PP1OL430r/WIIS9WWXeDHkk8AF+x7kc/H6YkF8xTPh3FIijJDY+Hjb9mDZJU/vl6XPpN7z++Zpzc1h2XiiTx+O4Gq+GPn9Xob+mW7xk4+2L7+5R1c6sr0fs8rHml4P148FrO4/GfUXpoZn68K2I+Gnq93G5i+3H13W9R8iPt6pN9sKiH0cNfgtmfptrxDdjMfp9Eo8f7/o8HFTf++sXl7XHBcOPn/PJavvxOQQJfFndv2zXN9cfy9lLCz7zxiea/By7/ESmsD80g35CUH4pqo9y9Pj5Tq7vZ7CXz0ep8UX56+fvWasxkvwAkCDwJlHRryczSrwjgnfZ+AH5wgUKh76f278icdlbyrT3RDDyp0Uw8p4Ivr54agCneszG9SfkL/JD+bt3IWuHH62yX3DOd/bjXzCa+NtQvI0f8E7WuXckMf47AuRnRwt6Lz3C3zlamF8dJK2DoX05B+gvMnGfHv9/xo94J8PiO29X8F/w3uuttb9cGTb2cdrve+q/3w83iHwAgM/C92tXUgzDP7ypxP8GBfhtZXyHH757FfnXGIQKf27WBk+mOZhmV7BR+LusoZ8UOeTfCJqg996dfzNWr4lY/4Ay8wtys5L4hy+mGQJ9A1kI9B2ageiu5SBf6Dnv+B+gX+WyJQH4F5Dwv+Ql93N/kef+Is/9RZ77izz3F3nuL/LcX+S5v8hzf5Hn/iLP/UX+Y/YX+VOm6Fd4+EOY9vV74v+B/im7i0DvvSb+fwjJnpG0z0jaZyTtM5L2GUn7jKR9RtI+I2mfkbTPSNpnJO3/aSTt3wGrvn7ZhRDfoyrgAw5iKI6jCAACGPmOgwGEfEBRBMbwwyUDwgDoVzgc/NjDcmj9+qfeJ+PvvU9+8+J51/n2ter/0PfI3Mvnb+IT9Ov9Pf8H/t73ACP/pvfG+N/ke/LZJej/nyf238oMyNeOgO/4oRDwBwz+MsP038Qa73ls/grW+OTr940b2UcXl/7TdW98M/x/YZxf6Y0CAfCHbzzAUeL7lADgW9DHL2ePH/uI/nrJ8eSAn+MAAvo+xOdv44C3zBL/ngiAz17/7m9fOP3/2QgA+OfJ/xcjADD0a0977G+KAPhkxv+UIPP3IwC+bdc31//lCIA3Cv8fxPUY+yTeoR7A75Mz/Y8K6vkf7BsPORh7b6934sN7zuTop2v/0rR+z/30By9d7mW87FzRzMeMq6OPX9mw9IchC39PEO+yEHj5+e27OI1P6tVeo/Ox0yD0gURAEMcQHEAJAoKx1/MfxcS3Z1H0t++COT5V+yqdmz6K+y9b8/L5sbD4iTH8YoTQd2TuW9lfjSCCP5Dk58g9+Bv0gGHIBxAlP3++rv9V5n0nXL57CvqXnvIDEfZnpMaPk0L9aqnBU+Z/lKzAwX2IyB/lQ8aQ9/yRXyTHFwFEf1NwIPwHfGmfMuTfIEMA4Ovp/c3s3lnrm5XkZ+UGDP3Rmn+hrPgD7gJPrvvHcR2O/Mm16l/w3Hf1/kKOe8+S+uS4fwzHgTj0u9II27Uc4C/rSiD5V57yC7nxx8bcX60rmb1fD/emr+L+P0pnQkDkO6vZW/TfVzYT5MNbnpNfrhv9RCT8U2b8+2QGgn0KSvv05u6b8LKfFQxHVcAXWVa+Me1/G7X2CyXBe8bZJ4/9g3kM/TaD8i/iMeJv07bfZOGTx/6hPAbCv6+VfGct/GmW+9YE9S9r/oVc9wc2rX1y3b+D65DfxWJvAfB/nOf+WL2/kOOgn+e4/+QghHeTn/P1F8nPHWu17OUI5PEqhXK9MeUQe9FsTgMc6dxTXoYs0jVwYAyQg8MTsApMKfSrwGsA/5bNLOkUZFeOVW74dqtUmcnJHGg03dmS3YSjO5ZlWPYhPobDfXO7160KH0E5keahPj5LsKC8hoaR4Ibnh1NXdZ/uBDQwiUVRJjkxqkxHJ3Voq7tC1Q8EV/rHjJFFENMtsJa87gZ28lrFhDfXez+b1diQED0+rjGq26mXa34emmlGCEwljkg1AuzhqX0hO0BTDm/tSMLLOp2FC05XJ+hIeRxHZG4yC4MZ+Qgq7DIr+pobYs37kjEoixatgyOZynS+AF4S1o0zRgkDn3E9j0etHX2Zqtjr5XLE2EkJ/FCyVcBpdlHqWTKFBlMuK3utkiy6NZNeZ+WFPWLKpNNiawnICzY/ME11ufIvCcUXK3+UlGQy6kjci+VqhsIYDaRQ1bSvXKXmGh1OjE6KQ4+A8KxICHu0yFuGVHUe1aWiUfpQ4U64c0RxYABrCYnq4zp1LwqITDcvuyy2NzLuxGP14dLq3PAgX2+GytOVf0oLJ7/u7SF7DGfiCTq7t7hTrhV1eI9TuBrej2rlQO/NG2k8fAmWMifboMcZ0Hk48l1306mlo4gm0OGqtGoH52GbHAihJsTucblQLwnpb/jEj5K2lpwd6nBZw9qaS7RLDC6GHxEbCcN1o5QYsnQ+vEgTQbnDg3nnjFO0H7JQCjoFkqJhvDaR0qLO4QtPpKm3YRhd3M3hlB/+xTsjHilRj1B5TphxUyx56FY9FiSW+2XUIt/zLpfDiV90IlJOsPv5ZBtxtiDmaZ9LQm7ntXKE6/nF4afolDkklMBVa2SmY/yw04Y8juO+7dmVTQ7hRVv7mDGWxIIW4Qq2KMYt5u+NEJyMY/wtEsAuymdFOKIxKoxl1n67N4Mak1uYPBwjgGhdMtyaRTxq57GS7tBTJs9SrAQ2nzECbbcQSEYalMmSlvuqIF2M3B/BM5WzrrfXE2mlIxDao7HYR3xWGGG+4XCxT8yik1l3dWjx0XMNQVLCdgQhHd66NjqySHuLylQVD1dNpIzWwoa0uZDXoevN8AjrvhYcVNAy0p0e51vAAUyx1URyK0kexMWTvvpCmdKdqWPxGUNMApQ04YjOCFOmmhAS0Pd1GQ0k9IhAkOeIc2rJr06jtFRTtXljNj8iinJsVDQt6KIHc4EqEeRncuImixQl48UupLJnJwl4hFqhUzeh0FIv9vz5ute1WRA6r23ChGgrRmSTPTaqPUU1IBwxvY8L1jgsAKwTca+zoMnd1WSjE+ZztipzucwaEmrITTct2MU4O514hUbtgboPuPFFOyirh0ApMSDNk8IOGAO4GTfembQoYBEBTZ3RHwLi8hm5sZrQmf0Y9OW1yMRGvh9Jqjk5ra0Ez9Rgi465ttP3cScuSN8ONpOZl7rfVX9J8i5GTZ+rrfFTkUUWRXzQeVMnirRQxj476wCdr6pjNRhUCcDhfR2Jtq01hn8IHDhs7HI9AvqHBppwYED3mmnu5
bKZXW8bmK9QZxi7tDwLBltzgI2iU/bIaSBq8Qccz3Jll6CV5YdYXY7pxga+T1oqXLN2Xkw11mk8NbDoGWp2wXi4GINyhfon9bzpmJSP7cqj7SM6jfPFovU4ri3VfHUathSQQwnpInDJ5V7OAFne52qiN144gm8fO4v1slIBysg5CoFmSUt22uUBeaEeDkfkmYycGl2WCVFQiEjfWzZapC1sAXPpZchyJRG/eYwe+Ufv3cQDLl0J7bJXktqBtGsHZtsVf8wrqC2kwvgdPmgeUldGzsoLmfKgm41Byt4FfKg6S0HxK3HXlGm8jQzmOZPZmBeR2JKudX28fREnC+nMoAbNXeOsq1vZS1iKBjCyECvLN6viZfsELFOzHG78ML1cqQtV42KZBVnYOMb2cErIznBVmxyTvt6GlyQiwkklUQsHxmNpWrxUL+UmlKMjq3HiohbqnCnYhuOctN1jIWXxlkHUiCIKT16u+sQPqCxDSVKDmU9743gHm17QD3Gt+LdAFNtmlc6v4dicqdFDgHuTg4nDiTi3ZCORR2SWZ2t3q+93iaRfd+npkuy9wI8ZDG0Jv4tBEe8zDhe5oSPR1sQ79HX9HTC4L8NWdgpnJ0Dgcv2WQPdDQOPbAogK3oQ8P3JwE9oJg/PjUt3g+TISu5A2ZzJFiwZub1MKlMzSHzG9hxMNXSCAkjfyRMIKaWJ3ghNv8rIrBccQp/V8CKlxyS4gLrj155UdHD6u7FJ111sI60Gzf1wpKTqnYSWt9zRim/IBVwv14NxtSrPL+eBAaYG2I8EKXT6qCFwrYVAvhcWd/PykFRJj8DVwbMDCeZ5snWVT4m3gMqb79CgXiMW3I8ydnjHFXSMJrcLlSCFPMhC/bt0aHTRnnd93U/9FeSgRCP0AH+mfv04p+WY/IL83jhJHGiwChhAEAjD8kyb+lXvZjgBB4gvfduLHOOJn7abIT6S4eiK/f6dNC/lwOBt++nzjyQQRH1D881noz5q74A84/oW56xvbA/K7T/mFqPA9D4df8WZFm+LhY1a+p3/0X0lxDn6d1QAlv/eG+dtcY5GnJ8I/WlbhIPqVNelrUYUD5Icvfdv+5GthHP4LD/mFkurpo/CP5kUUh/9+Xtz1uH8EL77nofDkxX8OLwLY71rZQeArPsH/JDNC2O8x4+8/5Bcy4y8IcIPe3T+2aYqpfSplf3TfGQL/Pnrp/yqB8puP4C9nBurhZ+VLCmUIeM23/VTX39sOA4M+7OLmB5zx3nYKfxNn7Id9c4zgZ6lyRBFdmig+rvhf</diagram></mxfile>
2007.01072/main_diagram/main_diagram.pdf
ADDED
Binary file (33.7 kB). View file

2007.01072/paper_text/intro_method.md
ADDED
@@ -0,0 +1,85 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction

<figure id="fig:example_image_sg" data-latex-placement="ht">
<figure id="fig:subim1">
<p><img src="motorbike_img.jpeg" style="height:6.5cm" alt="image" /> <span id="fig:subim1" data-label="fig:subim1"></span></p>
</figure>
<figure id="fig:subim2">
<p><img src="motorbike_sg.png" alt="image" /> <span id="fig:subim2" data-label="fig:subim2"></span></p>
</figure>
<figcaption>Example of an image and the corresponding scene graph.</figcaption>
</figure>

Visual Question Answering (VQA) is a demanding task that involves understanding and reasoning over two data modalities: images and natural language. Given an image and a free-form question about the depicted scene, the task is for the algorithm to find the correct answer.

VQA has recently attracted interest in different research communities, and various real-world datasets, such as the *VQA* dataset [@antol2015vqa], have been created. It has been argued that, in the *VQA* dataset, many of the apparently challenging reasoning tasks can be solved by exploiting trivial prior knowledge, and thus by shortcuts to proper reasoning (e.g., clouds are white, doors are made of wood). To address these shortcomings, the *GQA* dataset [@hudson2019gqa] was developed. Compared to other real-world datasets, *GQA* is more suitable for evaluating reasoning abilities, since the images and questions are carefully filtered to make the data less prone to biases.

Many VQA approaches are agnostic towards the explicit relational structure of the objects in the presented scene and rely on monolithic neural network architectures that process regional features of the image separately [@anderson2018bottom; @yang2016stacked]. While these methods led to promising results on previous datasets, they lack explicit compositional reasoning abilities, which results in weaker performance on more challenging datasets such as *GQA*. Other works [@teney2017graph; @shi2019explainable; @hudson2019learning] perform reasoning on explicitly detected objects and the semantic and spatial relationships among them. These approaches are closely related to scene graph representations [@johnson2015image] of an image, where detected objects are labeled as nodes and relationships between the objects are labeled as edges.

In this work we aim to combine VQA techniques with recent research advances in the area of statistical relational learning on knowledge graphs (KGs). KGs provide human-readable, structured representations of knowledge about the real world via collections of factual statements. Inspired by multi-hop reasoning methods on KGs such as [@minerva; @hildebrandt2020reasoning], we model the VQA task as a path-finding problem on scene graphs. The underlying idea can be summarized with the phrase: learn to walk to the correct answer. More specifically, given an image, we consider a scene graph and train a reinforcement learning agent to conduct a policy-guided random walk on the scene graph until a conclusive inference path is obtained. In contrast to purely embedding-based approaches, our method provides explicit reasoning chains that lead to the derived answers. To sum up, our major contributions are as follows.

- To the best of our knowledge, we propose the first VQA method that employs reinforcement learning for reasoning on scene graphs.

- We conduct an experimental study to analyze the reasoning capabilities of our method. Instead of generating our own scene graphs, we consider manually curated scene graphs from the *GQA* dataset for these first experiments. This setting allows us to isolate the noise associated with the visual perception task and to focus solely on the language understanding and reasoning task. Thereby, we can show that our method achieves human-like performance.

# Method

<figure id="fig:architecture">
<div class="center">
<img src="Architecture_Compact.png" style="width:80.0%" />
</div>
<figcaption>The architecture of our scene graph reasoning module.</figcaption>
</figure>

The task of VQA is framed as a scene graph traversal problem. Starting from a hub node that is connected to all other nodes, an agent sequentially samples transitions to neighboring nodes on the scene graph until the node corresponding to the answer is reached. In this way, the reasoning chain is successively extended by adding transitions to the current path. Before describing the decision problem of the agent, we introduce the notation that we use throughout this work.

A scene graph is a directed multigraph where each node corresponds to a scene entity which is either an object associated with a bounding box or an attribute of an object. Each scene entity comes with a type that corresponds to the predicted object or attribute label. Typed edges specify how scene entities are related to each other. More formally, let $\mathcal{E}$ denote the set of scene entities and consider the set of binary relations $\mathcal{R}$. Then a scene graph $\mathcal{SG} \subset \mathcal{E} \times \mathcal{R} \times \mathcal{E}$ is a collection of ordered triples $(s, p, o)$ -- subject, predicate, and object. For example, as shown in Figure [3](#fig:example_image_sg){reference-type="ref" reference="fig:example_image_sg"}, the triple *(motorcycle-1, has_part, tire-1)* indicates that both a motorcycle (subject) and a tire (object) are detected in the image. The predicate *has_part* indicates the relation between the entities. Moreover, we denote with $p^{-1}$ the inverse relation corresponding to the predicate $p$. For the remainder of this work, we impose completeness with respect to inverse relations in the sense that for every $(s, p, o) \in \mathcal{SG}$ it is implied that $(o, p^{-1}, s) \in \mathcal{SG}$. Moreover, we add a so-called hub node (*hub*) to every scene graph which is connected to all other nodes.
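
For illustration, a minimal Python sketch of these conventions (triples, inverse-relation completion, and the artificial hub node); the entity and relation names here are toy examples, not taken from the dataset:

```python
# Toy scene graph as a set of (subject, predicate, object) triples.
triples = {("motorcycle-1", "has_part", "tire-1"),
           ("tire-1", "has_attribute", "black-1")}

# Completeness w.r.t. inverse relations: for every (s, p, o) add (o, p^-1, s).
triples |= {(o, p + "^-1", s) for (s, p, o) in set(triples)}

# Artificial hub node connected to all other nodes.
entities = {s for s, _, _ in triples} | {o for _, _, o in triples}
triples |= {("hub", "connects", e) for e in entities}
```
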
The state space of the agent $\mathcal{S}$ is given by $\mathcal{E} \times \mathcal{Q}$ where $\mathcal{E}$ are the nodes of a scene graph $\mathcal{SG}$ and $\mathcal{Q}$ denotes the set of all questions. The state at time $t$ is the entity $e_t$ at which the agent is currently located and the question $Q$. Thus, a state $S_t \in \mathcal{S}$ for time $t \in \mathbb{N}$ is represented by $S_t = \left(e_t, Q\right)$. The set of available actions from a state $S_t$ is denoted by $\mathcal{A}_{S_t}$. It contains all outgoing edges from the node $e_t$ together with their corresponding object nodes. More formally, $\mathcal{A}_{S_t} = \left\{(r,e) \in \mathcal{R} \times \mathcal{E} : S_t = \left(e_t, Q\right) \land \left(e_t,r,e\right) \in \mathcal{SG}\right\}\, .$ Moreover, we denote with $A_t \in \mathcal{A}_{S_t}$ the action that the agent performed at time $t$. We include self-loops for each node in $\mathcal{SG}$ that produce a *NO_OP*-label. These self-loops allow the agent to remain at the current location if it reaches the answer node. To answer binary questions, we also include artificial *yes* and *no* nodes in the scene graph. The agent can transition to these nodes in the final step.
We initialize words in $Q$ with GloVe embeddings [@pennington2014glove] of dimension $d=300$. Similarly, we initialize entities and relations in $\mathcal{SG}$ with the embeddings of their type labels. In the scene graph, the node embeddings are passed through a multi-layered graph attention network (GAT) [@velivckovic2017graph]. Extending the idea of graph convolutional networks [@kipf2016semi] with a self-attention mechanism, GATs mimic the convolution operator on regular grids, where an entity embedding is formed by aggregating node features from its neighbors. Thus, the resulting embeddings are context-aware, which makes nodes of the same type but with different graph neighborhoods distinguishable. To produce an embedding for the question $Q$, we first apply a Transformer [@vaswani2017attention], followed by a mean pooling operation.

We denote the agent's history until time $t$ with the tuple $H_t = \left(H_{t-1}, A_{t-1}\right)$ for $t \geq 1$ and $H_0 = hub$ along with $A_0 = \emptyset$ for $t = 0$. The history is encoded via a multilayered LSTM [@hochreiter1997long] $$\begin{equation}
\label{eq:lstm_agent}
\mathbf{h}_t = \textrm{LSTM}\left(\mathbf{a}_{t-1}\right) \, ,
\end{equation}$$ where $\mathbf{a}_{t-1} = \left[\mathbf{r}_{t-1},\mathbf{e}_{t}\right] \in \mathbb{R}^{2d}$ corresponds to the embedding of the previous action, with $\mathbf{r}_{t-1}$ and $\mathbf{e}_{t}$ denoting the embeddings of the edge and the target node in $\mathbb{R}^{d}$, respectively. The history-dependent action distribution is given by $$\begin{equation}
\label{eq:policy_agent}
\mathbf{d}_t = \textrm{softmax}\left(\mathbf{A}_t \left(\mathbf{W}_2\textrm{ReLU}\left(\mathbf{W}_1 \left[ \mathbf{h}_t, \mathbf{Q} \right]\right)\right)\right) \, ,
\end{equation}$$ where the rows of $\mathbf{A}_t \in \mathbb{R}^{\vert \mathcal{A}_{S_t} \vert \times d}$ contain the latent representations of all admissible actions. Moreover, $\mathbf{Q} \in \mathbb{R}^{d}$ encodes the question $Q$. The action $A_t = (r,e) \in \mathcal{A}_{S_t}$ is drawn according to $\textrm{categorical}\left(\mathbf{d}_t\right)$. Equations [\[eq:lstm_agent\]](#eq:lstm_agent){reference-type="eqref" reference="eq:lstm_agent"} and [\[eq:policy_agent\]](#eq:policy_agent){reference-type="eqref" reference="eq:policy_agent"} induce a stochastic policy $\pi_{\theta}$, where $\theta$ denotes the set of trainable parameters.
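
As an illustration, a minimal PyTorch sketch of one policy step implementing the two equations above; the dimensions follow the text ($d=300$), but the module and variable names are our own, so this is a sketch rather than the exact implementation:

```python
import torch
import torch.nn as nn

d = 300  # embedding dimension (GloVe size used above)

class PolicyStep(nn.Module):
    def __init__(self):
        super().__init__()
        # Multi-layered LSTM over previous-action embeddings a_{t-1} = [r_{t-1}; e_t].
        self.lstm = nn.LSTM(2 * d, d, num_layers=2, batch_first=True)
        self.W1 = nn.Linear(2 * d, d)  # acts on [h_t; Q]
        self.W2 = nn.Linear(d, d)

    def forward(self, prev_action, lstm_state, question, action_embs):
        # prev_action: (1, 1, 2d); question: (d,); action_embs: (|A_{S_t}|, d).
        out, lstm_state = self.lstm(prev_action, lstm_state)
        h_t = out[0, 0]                                    # history encoding, (d,)
        x = torch.relu(self.W1(torch.cat([h_t, question])))
        logits = action_embs @ self.W2(x)                  # one score per action
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()                             # index of an (r, e) pair
        return action, dist.log_prob(action), lstm_state
```
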
<figure id="fig:example_paths">
|
| 57 |
+
<figure id="subfig:VQA_ex1">
|
| 58 |
+
<img src="bat-min.png" />
|
| 59 |
+
<figcaption>Question: Is the color of the number the same as that of the wristband?<br />
|
| 60 |
+
Answer: No.</figcaption>
|
| 61 |
+
</figure>
|
| 62 |
+
<figure id="subfig:VQA_ex2">
|
| 63 |
+
<img src="kitchen-min.png" />
|
| 64 |
+
<figcaption>Question: What is the name of the appliance that is not small?<br />
|
| 65 |
+
Answer: Refrigerator. </figcaption>
|
| 66 |
+
</figure>
|
| 67 |
+
<figure id="subfig:VQA_ex3">
|
| 68 |
+
<img src="food-min.png" />
|
| 69 |
+
<figcaption>Do both the pepper and the vegetable to the right of the ice cube have green color?<br />
|
| 70 |
+
Answer: Yes. </figcaption>
|
| 71 |
+
</figure>
|
| 72 |
+
<figcaption>Three examples question and the corresponding images and paths.</figcaption>
|
| 73 |
+
</figure>
|
| 74 |
+
|
| 75 |
+
After sampling $T$ transitions, a terminal reward is assigned according to $$\begin{equation}
R = \begin{cases}
1 &\text{if $e_T$ is the answer to $Q$,} \\
0 &\text{otherwise.}
\end{cases}
\end{equation}$$ We employ REINFORCE [@williams1992simple] to maximize the expected reward. Thus, the agent's maximization problem is given by $$\begin{equation}
\label{eq:objective_agent}
\mathop{\mathrm{arg\,max}}_{\theta} \mathbb{E}_{Q \sim \mathcal{T}}\mathbb{E}_{A_1, A_2, \dots, A_T \sim \pi_{\theta}}\left[R \left\vert\vphantom{\frac{1}{1}}\right. e_c \right] \, ,
\end{equation}$$ where $\mathcal{T}$ denotes the set of training questions.
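
A hedged sketch of the corresponding update for a single rollout, reusing the log-probabilities returned by the `PolicyStep` sketch above (plain REINFORCE, without baseline or entropy terms):

```python
import torch

def reinforce_loss(log_probs, reward):
    # REINFORCE: maximize E[R] by descending on -R * sum_t log pi_theta(A_t | S_t).
    return -reward * torch.stack(log_probs).sum()

# Typical use: run T policy steps, collect the log-probs, assign the terminal
# reward (1 if the agent stops on the answer node, 0 otherwise), backpropagate.
```
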
2012.09446/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
See raw diff
2012.09446/main_diagram/main_diagram.pdf
ADDED
Binary file (50.3 kB)
2012.09446/paper_text/intro_method.md
ADDED
@@ -0,0 +1,11 @@
# Introduction
Discourse Parsing is a key Natural Language Processing (NLP) task for processing multi-sentential text. Most research in the area focuses on one of the two main discourse theories -- RST [@mann1988rhetorical] or PDTB [@prasadpenn]. The latter postulates shallow discourse structures combining adjacent sentences and mainly focuses on explicit and implicit discourse connectives. The RST discourse theory, on the other hand, proposes discourse trees over complete documents in a constituency-style manner, with tree leaves, the so-called Elementary Discourse Units (or EDUs), representing span-like sentence fragments. Internal tree nodes encode discourse relations between sub-trees as a tuple of {Nuclearity, Relation}, where the nuclearity defines the sub-tree salience in the local context, and the relation further specifies the type of relationship between the binary child nodes (e.g. Elaboration)[^1]. While both discourse theories are of great value to the field of NLP, and have stimulated much progress in discourse parsing, there are major drawbacks when data is annotated according to these theories:\
**(1)** Since both theories rely on annotation guidelines rather than data-driven algorithms, the human factor plays a substantial role in generating treebanks, posing a difficult task for linguistic experts. In this work, we eliminate the human component from the annotation process by employing a data-driven approach to generate discourse trees directly from natural language, capturing commonly occurring phenomena in an unsupervised manner.\
**(2)** The annotation process following human-generated guidelines, especially for the RST discourse theory, is expensive and tedious, as the annotation itself requires linguistic expertise and a full understanding of the complete document. This limits available RST-style discourse corpora in both size and number of domains for which gold-standard datasets exist. The automated, data-driven approach described in this paper allows us to crucially expand the size and domain coverage of datasets annotated with RST-style discourse structures.

With the rapidly growing need for robust and general discourse structures in many downstream tasks and real-world applications (e.g. @gerani2014abstractive [@nejat2017exploring; @ji2017neural; @xiao-etal-2020-really; @huber-carenini-2020-sentiment]), the current lack of high-quality, high-quantity discourse treebanks is a severe shortcoming.

Fortunately, more data-driven alternatives to infer discourse structures have been previously proposed. For example, our recently published MEGA-DT discourse treebank [@huber2020mega] with automatically inferred discourse structures and nuclearity attributes from large-scale *sentiment* datasets already reached state-of-the-art (SOTA) performance on the inter-domain discourse parsing task. Similarly, @liu2018learning infer latent discourse trees from the *text classification* task, and @liu2019single employ the downstream task of *summarization* using a transformer model to generate discourse trees. Outside the area of discourse parsing, syntactic trees have previously been inferred according to several strategies, e.g. @socher2011semi [@yogatama2016learning; @choi2018learning; @maillard2019jointly].
In general, the approaches mentioned above have been shown to capture valuable structural information. Some models outperform baselines trained on human-annotated datasets (see @huber2020mega), and others have proven to enhance diverse downstream tasks [@liu2018learning; @liu2019single; @choi2018learning]. However, despite these initial successes, one critical limitation that all aforementioned models share is their task-specificity, possibly only capturing downstream-task related information. This potentially compromises the generality of the resulting trees, as for instance shown for the model using *text classification* data [@liu2018learning] in @ferracane2019evaluating. In order to alleviate this limitation of task-specificity, we propose a new strategy to generate tree structures in a task-agnostic, unsupervised fashion by extending the latent tree induction framework proposed by @choi2018learning with an auto-encoding objective. Our system thereby extracts important knowledge from natural text by optimizing both the underlying tree structures and the distributed representations. We believe that the resulting discourse structures effectively aggregate related and commonly appearing patterns in the data by merging coherent text spans into intermediate sub-tree encodings, similar to the intuition presented in @drozdov2019unsupervised. However, in contrast to the approach by @drozdov2019unsupervised, our model makes discrete structural decisions rather than joining possible subtrees using a soft attention mechanism. We believe that these discrete tree structures allow the model to achieve the autoencoder objective of reconstructing the inputs more efficiently, directly learning how written language can be aggregated in the wild (comparable to previous work in language modelling [@jozefowicz2016exploring]). In general, the proposed approach can be applied to any tree-structured objective, such as syntactic parsing, discourse parsing, and further problems outside of NLP, like tree planning [@guo2014deep] and decision-tree generation [@irsoy2016autoencoder]. Yet, due to the especially difficult annotation process for discourse trees, we initially develop a method to generate much larger and more diverse discourse treebanks.
2101.08165/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
See raw diff
2101.08165/main_diagram/main_diagram.pdf
ADDED
Binary file (24.3 kB)
2101.08165/paper_text/intro_method.md
ADDED
@@ -0,0 +1,76 @@
# Introduction
Video relation detection aims to find all object trajectories in a video together with the relations between them, expressed as triplets of the form subject, predicate, object. It bridges visual and linguistic information, enabling cross-modal information transfer. Compared to other computer vision tasks like object detection and semantic segmentation, visual relation detection requires not only localizing and categorizing single objects but also understanding the interactions between different objects. Capturing these relations requires exploiting more of the video content.

Visual relation detection in videos is much more difficult than in static images. On the one hand, spatio-temporal localization of objects is needed instead of just spatial localization, which requires tracking the same object across frames along the temporal axis. On the other hand, relations between objects become more variable: relations between the same object pair can change over time, and temporally dependent relations are introduced, which makes relation prediction harder. Thus, it is difficult to directly apply existing methods for visual relation detection [@liao2020ppdm] or scene graph generation [@ren2020scene] to this task.

Several methods have been proposed to solve this problem. [@shang2017video] first introduced a baseline for video relation detection. It divides the video into several segments of the same temporal length using a sliding window, then performs relation detection within each segment using an object trajectory proposal module and a relation prediction module, and finally generates the video-level results by greedily merging the per-segment results. [@tsai2019video] introduced a Gated Spatio-Temporal Energy Graph model as the relation prediction module of the baseline proposed by [@shang2017video]. By constructing a Conditional Random Field on a fully-connected spatio-temporal graph, the statistical dependency between relational entities can be better exploited both spatially and temporally. [@qian2019video] introduced graph convolutional networks into the relation predictor, which takes better advantage of the spatio-temporal context.

In this paper, we propose a method for the video relation detection problem. We follow the scheme of [@shang2017video] and build our system from an object trajectory detector module and a relation predictor module. For the object trajectory detector, we first perform object detection on each video frame with the state-of-the-art detector Cascade R-CNN [@cai2018cascade] with ResNeSt101 [@Zhang2020ResNeStSN] as backbone. We then use a dynamic programming algorithm improved from seq-NMS [@han2016seq-nms] to associate the object detection results across frames and generate a trajectory for each object. For the relation predictor, we combine a motion feature, a visual feature, a language feature and a location mask feature for each trajectory pair to predict the relation between them. The use of multi-modal features helps to increase the accuracy of relation prediction. The framework of our method is shown in Figure [1](#framework){reference-type="ref" reference="framework"}. Our method achieved first place in the video relation detection task of the Video Relation Understanding Grand Challenge [@shang2019relation] at ACM Multimedia 2020.

<figure id="framework" data-latex-placement="ht">
|
| 12 |
+
<img src="framework-report.png" style="width:100.0%" />
|
| 13 |
+
<figcaption>Framework of our method</figcaption>
|
| 14 |
+
</figure>
|
| 15 |
+
|
| 16 |
+
We choose Cascade R-CNN as our object detection model and ResNeSt101 as the network backbone. To train the object detector, we extract frames from each video to build the training and validation sets. Due to the high similarity between frames of the same video, using all frames is unnecessary. Thus, for each video in the training set of the VidOR dataset [@shang2019annotating], we sample at most 15 key frames whose bounding boxes are human-annotated. The resulting detection training set consists of 97,221 images. Likewise, the validation set consists of 31,220 human-labeled frames extracted from the validation set of the VidOR dataset.

During training, we notice a class imbalance issue: classes with more annotations (adult, child, baby, etc.) reach a high AP of up to 0.7, while classes with fewer annotations (crocodile, frisbee, etc.) get an AP close to zero. To overcome this imbalance, we extend our training set with part of the images from the MS COCO dataset, which is more balanced than our training set.

During testing, we perform object detection on all video frames and keep bounding boxes with a confidence score higher than 0.01 as our final detection results.

We take the tracking-by-detection strategy to generate object trajectories. Based on the object detection results of all video frames, we use a dynamic programming algorithm improved from seq-NMS to associate bounding boxes that belong to the same object and generate its trajectory. This algorithm consists of two parts: graph building and trajectory selection. By regarding each bounding box as a node of a graph, we can link bounding boxes that are likely to belong to the same object in consecutive frames. Paths in this graph then represent trajectories, and we can run a dynamic programming algorithm to pick the paths that are most likely to be object trajectories.

**Graph Building:** First, we regard each bounding box as a node and build the initial graph with no edges. Let $category_{t,n}$, $conf_{t,n}$, $bbox_{t,n}$, $in_{t,n}$, $out_{t,n}$ denote the object category, confidence score, bounding box, set of precursor nodes and set of successor nodes of the $n_{th}$ bounding box in frame $t$. We initially set $in$ and $out$ to empty for all nodes. Then, for each $t$ $(0 \le t < T$, where $T$ is the frame count$)$ and all $(i,j)$ such that $node_{t,i}$ and $node_{t+1,j}$ exist, if $category_{t,i}$ and $category_{t+1,j}$ are the same and the IoU of $bbox_{t,i}$ and $bbox_{t+1,j}$ is higher than a threshold, we add $node_{t+1,j}$ to $out_{t,i}$ and add $node_{t,i}$ to $in_{t+1,j}$. In this way, we link pairs of bounding boxes from consecutive frames whose IoU exceeds the threshold, which we set to 0.2 in our experiments. A minimal sketch of this linking rule is given below.
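
A minimal sketch of the consecutive-frame linking rule, assuming boxes in (x1, y1, x2, y2) pixel format; the helper names are illustrative:

```python
def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def link_consecutive(frame_t, frame_t1, thresh=0.2):
    # frame_t, frame_t1: lists of (category, box) detections in frames t and t+1.
    # Returns index pairs (i, j) to record in out_{t,i} and in_{t+1,j}.
    return [(i, j)
            for i, (cat_i, box_i) in enumerate(frame_t)
            for j, (cat_j, box_j) in enumerate(frame_t1)
            if cat_i == cat_j and iou(box_i, box_j) > thresh]
```
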
We notice that when the camera or an object moves violently, the IoU of bounding boxes belonging to the same object in consecutive frames can be very low. In this case the original seq-NMS algorithm will not link them, causing lost tracks. To solve this problem, we introduce a new linking mechanism. First, for bounding boxes $B1=(x_1,y_1,w_1,h_1)$ and $B2=(x_2,y_2,w_2,h_2)$, we define scale_ratio and area_ratio as: $$\begin{equation}
scale\_ratio(B1,B2) =
\begin{cases}
\frac{h_2w_1}{w_2h_1}& \frac{h_2}{w_2}\leq\frac{h_1}{w_1}\\
\frac{h_1w_2}{w_1h_2}& \frac{h_2}{w_2}>\frac{h_1}{w_1}
\end{cases}
\end{equation}$$ $$\begin{equation}
area\_ratio(B1,B2) =
\begin{cases}
\frac{w_1h_1}{w_2h_2}& w_1h_1\leq w_2h_2\\
\frac{w_2h_2}{w_1h_1}& w_1h_1>w_2h_2
\end{cases}
\end{equation}$$ so that both ratios lie in $(0,1]$. Then, for each $(t_1,t_2,i,j)$ that satisfies

- $0 \le t_1 < t_2 < T$ and $t_2 - t_1 > 1$

- $node_{t_1,i}$ and $node_{t_2,j}$ exist

- $scale\_ratio(bbox_{t_1,i},bbox_{t_2,j}) > 0.5$

- $area\_ratio(bbox_{t_1,i},bbox_{t_2,j}) > 0.5$

we create a path from $node_{t_1,i}$ to $node_{t_2,j}$ by interpolating nodes at each time $t$ $(t_1<t<t_2)$, as shown in Fig. [2](#cflm){reference-type="ref" reference="cflm"}. The $bbox$ of an interpolated node is obtained by linear interpolation of $bbox_{t_1,i}$ and $bbox_{t_2,j}$, and its confidence score is set to 0. This linking mechanism makes the trajectory generation module more robust to violent movement of the camera and of objects. In our experiments, we limit $t_2 - t_1$ to be less than 8 as a trade-off between performance and complexity. A sketch of this cross-frame test follows.
<figure id="cflm" data-latex-placement="ht">
|
| 51 |
+
<img src="link.png" style="width:80.0%" />
|
| 52 |
+
<figcaption>Cross-frame Linking Mechanism</figcaption>
|
| 53 |
+
</figure>
|
| 54 |
+
|
| 55 |
+
**Trajectory Selection:** After building the graph, we regard each full path (a path that cannot be extended) as an object trajectory and take the sum of the confidence scores of its nodes as the path score. We then repeatedly select the path with the highest score and remove its nodes from the graph, using the dynamic programming algorithm of [@han2016seq-nms]. The selected trajectories are returned as the trajectory detection result. A simplified sketch is given below.
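
A simplified sketch of this selection loop (seq-NMS-style dynamic programming over the box graph); the container layout is illustrative:

```python
def select_trajectories(frames, succ, score):
    # frames: node ids grouped by frame in temporal order; succ[n]: successors
    # of node n (edges always point forward in time); score[n]: confidence.
    trajectories = []
    alive = {n for frame in frames for n in frame}
    while alive:
        # dp[n]: best path score ending at n; back[n]: predecessor on that path.
        dp = {n: score[n] for n in alive}
        back = {n: None for n in alive}
        for frame in frames:  # frame order is a topological order of the DAG
            for n in frame:
                if n not in alive:
                    continue
                for m in succ.get(n, ()):
                    if m in alive and dp[n] + score[m] > dp[m]:
                        dp[m], back[m] = dp[n] + score[m], n
        # Trace the highest-scoring path and remove its nodes from the graph.
        node = max(dp, key=dp.get)
        path = []
        while node is not None:
            path.append(node)
            node = back[node]
        trajectories.append(path[::-1])
        alive -= set(path)
    return trajectories
```
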
<figure id="rp" data-latex-placement="hb">
|
| 58 |
+
<img src="fc_eng.png" style="width:80.0%" />
|
| 59 |
+
<figcaption>Relation Prediction Network</figcaption>
|
| 60 |
+
</figure>
|
| 61 |
+
|
| 62 |
+
Following the scheme of [@shang2017video], we first divide the video into overlapping segments of the same length and perform object trajectory detection in all segments. We set the segment length to 32 frames and the overlap to 16 frames in our experiments. After that, we predict the relation between all possible object pairs within the same segment.

To fully capture the video context and temporal movement, we use multi-modal features, including a motion feature, a visual feature, a language feature and a location mask feature, to support relation prediction.

**Motion Feature:** For a trajectory pair in a 32-frame segment, we first calculate the location feature following the method of [@sun2019video] for frames 0, 8, 16, 24 and 31. Let $feat_t$ be the feature calculated for frame $t$. To capture the relative location of the pair in a static frame, we generate a static feature $feat_{static}$ by concatenating the features of frames 0, 8, 16, 24 and 31. To capture the dynamic movement of the pair, we generate a dynamic feature $feat_{dynamic}$ by concatenating $feat_8-feat_0$, $feat_{16}-feat_0$, $feat_{24}-feat_0$ and $feat_{31}-feat_0$, as sketched below.
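
A short sketch of assembling the two parts; `pair_feature` is a hypothetical stand-in for the per-frame pair location feature of [@sun2019video]:

```python
import numpy as np

def pair_feature(t):
    # Hypothetical stand-in for the per-frame pair location feature.
    return np.zeros(12, dtype=np.float32)

keyframes = [0, 8, 16, 24, 31]
feats = [pair_feature(t) for t in keyframes]
feat_static = np.concatenate(feats)                               # relative location
feat_dynamic = np.concatenate([f - feats[0] for f in feats[1:]])  # movement
```
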
**Visual Feature:** Due to the high computational cost of extracting features from video with networks like I3D [@carreira2017quo], we choose to extract visual features only from static frames using a 2-D network. Most previous work used an object detection model to extract features for relation prediction. However, detection models focus on the category of a single object in the image and cannot capture relational information properly. Thus, to better capture the interaction between objects, we use a scene graph generation model [@tang2020unbiased] pre-trained on the Visual Genome dataset [@krishna2017visual] as the feature extractor for relation prediction. We extract features only from the middle frame of the segment. For each pair, we extract a 4096-d feature for the bounding box of the subject, the bounding box of the object, and the union of their bounding boxes, respectively.

**Language Feature:** For language context, we follow [@sun2019video] to generate a 300-d feature for the subject and object categories respectively and concatenate them as the final language feature.

**Location Mask Feature:** Since coordinates have only a very limited ability to represent location, we further introduce binary masks of the bounding boxes to better capture the relative location of subject and object. We follow the method of [@zellers2018neural] to generate a mask based on the bounding boxes of the subject and the object in the middle frame of the segment as an input to the relation predictor (a sketch follows).
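
A minimal sketch of rasterizing the two boxes into a binary mask in the spirit of [@zellers2018neural]; the mask resolution and pixel-coordinate box format are assumptions:

```python
import numpy as np

def location_mask(subj_box, obj_box, img_h, img_w, size=27):
    # subj_box, obj_box: (x1, y1, x2, y2) in pixels; returns (2, size, size).
    mask = np.zeros((2, size, size), dtype=np.float32)
    for c, (x1, y1, x2, y2) in enumerate((subj_box, obj_box)):
        xs, ys = int(x1 / img_w * size), int(y1 / img_h * size)
        xe, ye = int(np.ceil(x2 / img_w * size)), int(np.ceil(y2 / img_h * size))
        mask[c, ys:ye, xs:xe] = 1.0
    return mask
```
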
Using the features above as input, we design a simple neural network to predict the relation. The structure of the network is shown in Figure [3](#rp){reference-type="ref" reference="rp"}.

After analysing the dataset, we find that about 99% of the object pairs in the training set have no more than one spatial relation and one action relation. Thus, we convert the multi-label classification problem posed by the VidOR dataset [@shang2019annotating] into two single-label classification problems. We use focal loss [@lin2017focal] to supervise the spatial label and the action label separately, to deal with the severe imbalance issue.
2103.17242/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
See raw diff
VR/GAJeIuc8HSplQZs0oU+hCzJTLqMS8IHLB/h/grHwVKfq/u1FOfJUluClNmea5BQBaiylANs92jD9MQKTJa3QwZlUZUY3WxUXwoKBJBgkzC5WXRxtOnotDDe5OXWK47CWPabIY9fWfMUDG+t0+97Y41C8S2KbcyQXc3QXrbwBITn+KJ+BntQqFd7WsodW6qFUrAYvqR3P0m4Xx9v80XWMJJNxW/7aMisJvaBqU8KoCgD7mJtaknl3Jf2x1gEMuGz8k3uVZe/H8bNDfcJ0utAkvb1o6xUt4bIiyaLDkfbrLT8z0MRioksgQvukyVOVfq3G7tPQXA5q6OfRPLsOB3LOR/tUr1XpOTL6uEtzRzKaIuCaCDgCHcz9aDcTTSCy0jeX7svMmRfSIgkLHAJIWldJ9Rl1DzpWLJFN2rloVLpSjJlWT1ngPCLsR0XV5c/pMAsKgcHTBvaMb6Pq2JBaxPF4SvVqRKUH4tMSym5lJSsHtm7HZFSwr+N4Jj+BNB5uR+r6vn9RzdLPwMpc7YlqC3tK+28T/a5b7L8DXbMavKig5EwtdBsAa/mYPuLOnDJJyxcijKTrXFgKcie7pJeQ21b5Wg1I+d7oAr91xsp5w0viOj8/FVB9KwN+4VdHQrGZGx+hFEl5WEBKofY74ziC5K9QOKG5TcqMM1+x/c7amg8rE+FMvCyybXrhZ3eeqMzF31k7rzWJNiwg0AKI6ZhlQwAKB1ps1nR0Oxn2pG2Jwo+U5OszNt6kRZvZUEIAq4fMl3wBYJSSKVgbJDGZkE2Hxpa6PjUIn9ma0sBqm3bw6KL+6uwN7L2ZOLTYh6UUoS95p4o1p+U22Hgbg/eov0cMW2xBgcI30xhS8I6kIEEwJ/I2Rd9jwGdIgQaqzCstJBnIVGejLVCo7yRepQkSOXm904s37W0PSbwrAymrjgZJXrgkNloj1clDYqajinjSEMv8AVkWDIlCnPOPsy6hiYrSo93xr5H6uHelSXVB0pMc/8BwOAgww8D0s0ifv+6aHp565DRsRvxMCetnEXUBSyzuMjMIKHng9a1Z2S58jFpiCRKgd7y6muTJtipOIc1mUH8+pDRwGzP3lPAW0eI56mdAJm/bnvgCoQAKIGXdUaTFsSsw5JWT0hkRYo6nNegycfHN79pdjLWSvagPG35vWO5R+DuKQJ4MQxprQM11uHIjCMpS7RkDCP+F1w4FSGOrz7NaF4EEYm188GZJ1OQgQVbs+BGfh1EX/S2Ckh6ZtwMlOuw2WbGtLGj/dDzNGEYFM38+luThYTOmUUgGD0j4IoK9D5FaupB3JLXN4lUOOLuUPo1Mp6k0f792Vj5rqC7NAPp17Cir7OuMoszQ1SSC07Kw+WKRR+qa71DudS2OY+Vn7Ws3ItcDeP4BYuagW4hzUUzMwGg/PWhOtQjo9Up3okddEsDw23u0jnAtn6ztVWUp8B79Zp1nmiahGIhFN8BfyAkA/SUw93djx/HRU9cQf/ZfYw0CkwdDN0IcDZ5lmNMcqbt55gF9OsWTgCj9c3M+6HkAFPjtqjG58+DEotJmYRnIMCnQY5pDca76cWgCCeKNY0Hw+2UPk8b9lahfX1A2V/IiajmbQn++jxtAACb8jTK16j4M2/6Tjwfr4t/leQsnzAoZZTk+lPCrJUjecPEGD+P0KPAI5twhj+f461BEtGwu7P2/sA45z219SiXDwIcdj+BabhCRaDRi27xK5Fp+VH6jaP4unC29ii4OjDBIg2LZvcB8YdJA7dLnF/VWO3/B5FZmv7XZPa/Q//zP+g/l9v+/el8/4W5rfz+H9w22RNtNZhLBBi79ETW/l//s4rVmKzvzLZZHdCNgyZLzvokblRuYm7vXnrvKUECtXST+RtrTsX68gLINJZchIMl1LSyVW25CHIjakR5xaf0fkXu3L4fIH9TDHrvB7IG9hv8cXZWv+3siLgsDFH0dmG6TL7sBuYd7w1fHxYoQFyfz3fysgn0Kl7+Bt7oLTn9HjjUNnzymj81PQ8AyRoyWzHjBDoLgGsQexCmZoSe19vBPvGdZyq+GmQwZx/wkq2mp+QAawqS04/Uc1thnxjyKTfOrCxuCdVi5vpeOWMgBCs+GAlfANyKO5i0HwOmntVIp6sHggPMzrwseKtqpr2IXeD7250gE+K5Og/gArU0fq0Cwd08EpY9iaJpXcRurUC97OKw84emCBv9hgzAyBn+eKUMToWKVfGH4MleqQHYuQqwhdy7xWlordIYfzMJ0JVZy1kCvA++lIGWRj/quA7J4Vb24GZUXtqXjAe4GT6X3JvcCIqPkGwhA9T2wdXXcxCYHpoJvgaJS+l8WukI9hr4Tq44QHWb/AFygGNRZpFhhE8MA77/ejgM1Yz+fqhUmlJVWFs0TQdm8rE4R9fA1rxhYsHN5pvHu2Xe+PCDWSiLE5Osr9nDot0s15o5mIY8HvdT2UJA5g6KBV19kXCP0dZ0ZKt0KDtCM/Nh5OYCl4KJIO1O+w3/2ft3RtRbD+4PW4D0nspos+LlOh1q6+MgBzDtc2RdUGbIFPNE9GQQPoKhrP7yHeUL8Jd2poFN2C+0FN6K/GJPrVZ6gdgC+uFiNZAs5gI1OATJQgWA0xdIon9msgSONu6/iByEG2T108ZluENlJMPV167tTlwKerD2WUNB2v7mE05chPDl5Yv0xkn4uEkT10ruBJjrRTL5B9N2k1McxCDLPIeOGspVcNYEm8O7BDo7sL/ujzSOUNCLfJ3jd8T0c9ngCOSnE5SQhgyRu3ad0fdrrzkvpr+4oi5ZxptEnjycL3fNbAdOpUPZEXuB0R9KF2WWO2JGuqwGoX1QLNHkEQ4g45nFYP4qDSvnDal4J8inEj4mm35wmBautEo1NjZAOBHBBiXAIWR/QNfENZiuyjiDWx8w/3i7+vs3phjLYZit8mij7jMxFOv+Q0v9YexDuI4HuJitpuR1dS1MxZbgyHLgWlIm5qRvcDnlnxWk4lp/QTwCaY027HCY5Neu+hjvYbzflFg2cH9e599BQtNzszRn4wQlhLCwabnvrVsQrTDsDj0ynVnfDGSG8FgF3JAe8p93Bkb6a1vGtQHqlCXZL9Luwg5UzNavVIcPaoYWypQIme0A2WgqMDLQFY7dXPYFYA30aVjtYiPKMU1LdEOXkRvGxYjJHQLMoql524v5cG0FpgMrj4itGWoc7NtAsFb8MuB2igkl1VAZUAamFdgliA+h41BMk3WQLkd9+te2j/wVSlXmnNh+MRNfjHeII61y5ge62PbHnTFTzzUyFalQbj6l+MHZ6VmyQEFjXmUK+tOL852CQhZR0CjgNQ6PxmxlV0IGyhMfhWNzk61jLUIGmDxn04iwF6tK45/CBuufr8+csMHE07C3KUMLJqMvwcrn6l+/wYUsfh3so9fwitIonenGVldYla2PPaajvGaLbF40ZhWpnstFl/MwVtry0PzykKssW+ITd6Diw8HyBNNfOGySeGdcmr1sLztUSIiEivk+y2p8DOqEp32NqR5pcoY9bWZ1tNLJhxNREPd98HIv5KyIMXGiQjRX1xSzKG+qrg9OH
lYucfP1ehCE8RHRr2VHKVEfj0layQv59NW7NDkeHX1S0ZIwV9ikj0BcKEuKqyHuU34hOCqlYG6NO5xg6xLTjE4aZhzA6QIpNy/F0ms9fG/lDyYfXGzrRRajX3gdifd0z1zQeDnTHXx/NtsRphg6my+Mre/NYmjqJR+yhjDi9mZBOKR0tWBzY4htFcmo7uGyC3Py8+bdeEs7cEgx5aG87DZTCcg4jLAhVaA1IFn6eNp6hxGGGZYyTpWbH7eIpK3awbgPxkhSC0bFx87Cd8V2sNlLOqiiLKZAp376MJVQDOvFtthDC4fIdS+pvkypg5oTzyIfeFlGnYGQqpro905ipAID2D7XSva5U1vJ+A9uuUFUGxjLVt4B1datRPDRmi425+/Ci1LRGcr5FiTmkyNDV4VpOFm0qvXKPsQLeHb2iDaVPhyDOFkKO6WxJDN1I1L3tJMjhlJqmuG+38pwyV/+sQNrZk/+ol+EkPOJ9VwvX7DuM47lsNOQZlfW+YDNxSLDPs6EetbbK1ZK+nd7gSnLP27D4jb0DoWZYmX2zgI6k2/aKkH2U5GfUGv3jDXlbWn/NLOTYferLz8ABpQjJcj7b1+amraHR4E1ejuL9QtfAnonM6wQOV3nGhov+eMBirbXGiIuJPfBVmDvamHFF3CpH02uukIk8Ry2pJ7oErl6wR61k2omml//XT6u7b+BFDb9PMNebvWNLrL4guLgw6aYGZO6wdzPkR4E66wpD1D8U6GOP3mI1f1RZ2s+o9C/sbHQrhX3hrM+hrE/QSpb09pky2exPROrgqlV7b/vutP46lPI9TJs/gbjy6cAGypY63Olc/62e8JGPoGBXCf0+CnQGBbUvopUzfPnuEAPIPvRFNfzTUfFuVBRAJD+v8NUYJL8T01V/v6Y6r+jKoBwjP/4VPzLoe5x8rdPgP63U/SvMpN/e/gfMCP/5rHQ/8CM/MeRt78zyX8vNieAx6z6F5dLIf11Pn+b3/9xBDEIzwiV+mcg0zDNqcJL9ETPhzRXeE1IaUhI+QrdylB9QfTtaSxsxwf7KcSsQ3eAFvb+7BOqMvXmqxdFRvM9Kd9yRNB9tiZznZsfGSNNcMo1qAgXbysv8u29IkmMn/D5ONR1XQYkvengk/QtfKvXHAAMgMdtin6M40r6rY9gyyXQCI/pfuyjjY7jHt8GpEhzAsRzY3U7UHS/SQ/ugLkKXn0EeEhwfXHygYx1CAILuFT9nBdpiFGAu0Ek+DwxaRU9GTvKw2ukgWK1DhUW6WFNw8x3bW/YeRP2m87hNtUysCA51iJWBzHoaAW7EbKWy/sU2TfptQwlMakfxtLWFdP0BjO74Woo+dYsgc0hYi3Zdhk6EjGhkW7zTrg8MQumhyvUm0rRR39Q7Sr3hJg73/bIHfDIOiHnfrNfh+DFYKH4WqAm+EZu+TUt5shsKJ6ZUU9FVyRKGESTcK5M1W7RyVYnoQotteWLN8JzqGYVAltO7jHA9XOtqntCmeMhqQrx1zZjNmzd6bcTmGbz+qhuVbf5stNHQ1IM5u5UXy8A170sQLfx4SEtw79U2oVkV3y5jhQjewbZ/AdJeOLeiqkLc5OrKiP+vkLeXdi5IKZBkpQhdsqh1BGUJFvRfsAnc0GX6rU5f5N7AZUvw5M55uiKHv/Cmoa8+vE2LxRVPDdlxVRqIvdhCdXAO9FSl5ptEe+kX4a/tE4gLRPGr6WJ/Q769YDeq3ULzkLtvuqNCtdy+xdn4s9oS/I0LrPSj1jcBuUIKjaIqiTk6t5zCFFnDipwmRd8xtQ483nkP5H3AXwQGRjQddSTOYjCRVj25R0d8V1wRG4pRhF5ErFtgCTofNxpna22Iw7hRTU/BLGwrN2f321U2o/bUHSHQRyosHGHzIp9Jg8f62s+qlpaPrGdoSFxbWi8Vjrelfd5ewzH0cvgRPvGCsHhK+ev+nRQmAp2fLyM6+uAuO+sVuUqH52o+A/P5z63aPi943qvMVSXWy2OARwPy1Si2WkFc7MKIcp7q1w4o6kBzyzO0XgQtkukkvhA0yWIG1AUxoFtL72ijyC+2trWVZ1Xmdoeptvf47nqHHCsE8sJNb+2UCYODR4NCidyvuNsuk8zlWEfhjK7+GG2HICCLKyLY8x7ZCfmIdhkJIove5KZ1JtaRXGqr4i7ox0PGTWhvTXMfoLQ0UieZS4QfJ1vo8V8WvIlslNoTuPjs7xpVK+RqzoWHypRESrX1mdtPUUnDtA6Yob+mIZvHpuGv5FwN+8PdDhxrDwbfx0SJ4mLl5XCb/rTw7sVEBO8fz89o+vu1vBZMh0RW7+RM0vfXz93tGPDsWtUiaahybik3z77mE8fZ3DEOLlnHtJcdOez0l4fq1gbolBAkxqfVtuc9R4u7i/fV+75E5wQ8bERzRDlHLXzmrgN4sLdAJF1NOu6LYZxVdSijWvA8WcgCb9aX4oqVnW5K6PdW3w0M00XUZ0KuJF4sSEnBhH2mGEFhTgtNWg8u/BsvLMbesjf7mn7S+U68OKHPcouZF6wjSoZhbs6fd0hiwhVqVRvVXDYVrH8NRWJ9qE2n3T/8dEI2y9V6vJ9PeIol5gNgWyuG0eCpfKGueDjsTUIxkTRb5g+cj3tV5soOffe5QNXf96Y0VoSM3m78kBtl0Ny+NUSZWzPYCUMFPOKMlWvXPOcVwCR15m4T/EBwYd+ohlyfuOsiIHOhr4SYiMJkhFe+6fnq2kCMNhTHm/o5aicfEaUpy/ncZlhRvvMWy0Vt9rniSCT/kt8YAaw44Lphuf2WuR8YX2e5Frh3RoqGtPVmDuhdwcA0ptmy0H80kANwtC+bNS0seEkypJX0U1CAxBFSGOebxJohDsFD+pKszge1uh74Pg02QhCS2JEi8bcHiRxiIHGtuq/nhxMNcXDyEAtWRSSio7zjEB0Uaz2FqCzbO3js/BNd9APU/0+/phaDGaJ3856Y4Pq7dfb8WCefcue8euv0phktulvE1Qt/FSgRPrQKY3rX38yYb00Nd8Bkyd5Xzbn0C6DRxxG7oX3y8cYsBHSdPjtJHz4PeSxIimUdPM90OKgQBPR2klNzldAA9LlvbkFjr+KPMLVGv/mSBF1YpyqDwkXltJRdWWicdjU1L2FaRLYM4Iy1I5Y+r5v0URMgMSnItNAoz65peHRJiqiwcnN3tc2zF+FMdWnbWQukh46uM/Q3FzwUrJuM7uKPTGqCFrGL6H2qFyUA3rCcYpMqItOlz39Iwir26rxKPSlIh5XFtoIzvQVd9kEQj+X2bbHe5XxlZFwZagv4xJ0aDuC+caipp6ncKTK1MZU5kza928n2wPiQB4m/ea5JcLoSbNMvuql9SqqxUwdhW+q020LW/2Uy1w0yWu0S7v9yPfGqaeZgaDQC1v+E9xUwAENLYHblcSGuTqPvr06Kp30wJyXrgJTiMnfwJ4T1TsUfz38Kc4/bJhRNI0YJJm/9uSDtstbYm6BRH9KuvyC2khvi0F9B8Pe+jnpzfTiqZaz1ZLXypBX/omTG/ntL2gnydLT5Z+oej6BsegmdeC+tbuR2nSkJrGDsofN5EqPdcCRtAkn
E4PAVdQxP19RX5sBzrdl13bvJ5TcjIirEVlIPW7X+PP8sGy0T3oyOAvjqlb0NYR4N2ANvMUv8qXZ9mD+GOhMZ25cqaUdDUshTBNiOQT927e8fjc4tgg4INlpa17pxUxc/CmVvvO9KHhAw4canPlVB/6bbGSaWULQh4qFsxX7nQ5DFQaaF6ZBf0DWih4JaLPQF1MKr8h42Dq4EzAoOQ0mp20sz81g0v8dazq/Zw9/VnrPM6r6soWBG0GsgmzDfvAbHZYGOCZe/DX1Akt+6OltmQ0YdtD0r073s0lN17AMTGMGOzcAzXczuKbuCWd8oDzSvafm1V/bJIjOfJ9JQh/ZsgV7iRz3txwCdeQIqh/dslHe+OKJ70asO+xzg4vG6OxnRb+dvHeX5dsKRMMItAsVk+AisMkB88ecQtwj8Ke82ugoH6d41Mo7yxuWeBFDjDi2JfmLKDK/ukDeFaCRpRzvyh3BUlkHxgB4LpDog1okD69emuetifoHLe82M5G2m0lO5vE94w78Na57vqBJv8Ntn2ZgVkGBjMER7pHjy29XRsk6LOEYq+oO8unxLhKI4s95nlOBVjhJ322k9h1LlPj3S9iM5B0NDgbGuWUget8efgAKF7zYryM6t80JjTJMzOUpTG+XHG235tQVpVTbIRsTN4JMNKj0idisKFIgceZK29xNmrtbzHmHq4bUAF81Mt64/6aWjmlFSlCZQsxLqpj4pFtB+/7gaIfAKsRUm5ghrnJdY47zWRJfjz5Yt6sVQhA7vrIZYeFJqJsONDnbezrwtUCM4PHpLvDFtAx7TgDySuEXnJAh2ngLRaY5mWaefeF1gNqQLJoNglj1sMQ0H93C+5SZQcdiXIbpMjo/uf4rIgLuKGznHVfJCOgTqasx7sYO/uA0nLijSS34h3VEfc28csrnx/e0q6izYlg8X7L9sU1gPIA8BDR2v/BtVo+Q7fsr/l7Pax8GMihqd3uls0KGLla9EuKkZMD4AwZ8lZuImERA9ofmGkUz3LbH6sD2dMxvia+KhUlQfLb4Rr91hy+8Ba0vJviWugPZEiO/14by9cE7s4Kx+XDgPP00MrFSXTbge0oz7t0puh2hIIlD2Itxkillq5nmvRB3jzV4w1TVvgg5qF2tCHLckn3uLaoAqIrq19Ju8vXhooDxiYnB355toLzGDUPbf+hB2EYqIZQXnHmiW044E5rhHjACQecRpx/1qke4S8ntglbvY1LpG/kEUiWbbd1UGCuamrQQGCXGD/sERGi+vyDiLltbZiJXc5ozM5HAkE9AT854yaUpJyocL4vXwONp/HmffmXl40DrlCfymayFnNLwpzy/EahND2yeOa2fWoE0GWxgSjqg7FkqMJHAO977viw79VeQTuh+p384tqocLwJqWd2DFeKUUuJFSSoTxlTrGjLvLbRW4baKrFb9jgTvq5VUmy5c/FUBzuJlxOIbFmOn4vTsQnotUEhTpYqXsGgJ0NsYbHtmSiVGSMJt1PL10IfjLAlTQCfGbj3O2xLNfn1+B9kRPXkL6DA5qZem4hg072DlVVuAp3zj9+7Y6epjxVtS0rHPPvN9xAW89M6nyUWfwr3Gy9jTnnDr+QSvbceEgP/0g2VLPzN5TkLSdBoG9HtDrq+MquI/i+qxer8eGHv037YKitHwV6YnTcpCBPUawgyPxGe5jt/lAX5ikkCQpXuKW98Rj42e0ZEfURzOAPlloEzve80TPaI/LwVQFBAUXtJaha2nP1C19XxVpxMUYm3BidVmGoHn1xIrgCQHNo2YrsCOO/azG9ROvv7qZe6Jwqegc9X0r8od71MmAO4ovZZNK1UaJm40xEIT5OZlDmEqtwaxi8yDavgDt97oCr5COW9lVrelCIsnJsAqUIDl1Lq3x6R2LiAJNOh0/0thIcxUy9xMms65y+59hrewIPO8oCC4s+kiKG9ffDkioTGF74Hzlh67sVVXY02mHENLhugo7juDDR4Wsx1KVauqW4l9w4zO3Q1KU08sWxA8j1J7nU/TZA+423ILd+ilXSicJgyrkOQtgaJ/hT8aagTTVZfEsspUaIU3vPP1yrBgCMLMOyQoBRKR2Ml3dlrUUPblMfVtQdW5buCEHRViYH724DyAiDL0+KINt/liV1Hyn9QTX24+Mnw1Jlwb/ZogFfXxhVvcnVWzKmA/Zryh/ZSTWqgc37HeiF+t+ykJLp/UZ/Er2FjtXs7galMenfohc3q33Y4ZdGGcWLcqX9BH1eiCrUZOKqPHJ3Z45VDA3JVNlb42bgf8sTVsrh4Pfenvz83kE1SUmBF67wetqcHC2nXz2nzeP+t8GKmth152IK4NQP/FHah/Dt92NnolNiwvh0GGyAfUhAUPjSCS7+HIcM5ZhoteT5q7UAePGALDunfoeFznLR3iOUOWMhmYdOAbQkVkcfFKwCR/I2PjYFA3yGiPlRmfj7wayKct0qhSF8OURnulxczK14dWofCm0XsH9gCW1RaHK+XH+XhrDpb6Dc6OAY8Rhx4d/Nf8IrpMC4ZAPywUR88IBdX9Ud7bGV68sUOOP+7ha1WpuvZD7kGBBSV2cvkpHUUri6GQ8+klSiMVuMq/rc7/vQL9vxW0/88K/T9LYKb+zwLzTyUGeu9P4T2+1Zq7Y5yCZ485BsLwd+2eL+Xh58eiatu/Kcr90ANBelnnock/fw0Q+Zff/E/CsyhCkCj+35jXv55F/5WsT/53/O+EfRj++3n/Wxbh3z3Vgf/zUx3/qm7tn53q+Adm5L9squMVev9SthYXhFNEzi1b+OvVNJcLTsduKvtuLvvuvLZ7jZb+5pLXm4ter7GLuvTUX7X57tK2iWo16qJeD3gTTxe06artfUX+DA4yuLcF9O1giyQFUodJo/uvFfCFoeDCQWeSXcBwlLTyiLx/PeBBKmQ/MWvLHyaWxutb0Z3pufrOeyKfO32i9PLa57pz5PPJnqCm/frE3HDmxB/mu0sU6Tx+G+PVnb66I/BZ2lkwC/udEpE8XL6Mab8cPSjm+GNXTnaDAxoFMTdZ4TryWVjGb2Q+1nrfyXEhplw7RoBc0t4CCg/LAgZdgIPyRA0dStfXugd0auj2pvIlncnBjj8aUX6+JBBFxo9i5vyBDEvNdxtC2/avN8JQ2jCKaHOwFyKPvlJBZGGRjsC2WEgoU1DHVw/GGHrGWrel/5Hgd8LBnUKFa4lH9PPzb2+6DK6bxP2oz6KdVD6Y/229Lm38Mmt2HZ6d1QVwqY1a+ddjBWKqMVsU65urbAP0rr25afOsKEz+Rmoc/Nnfw7aejIOkMTil9nd0GF2mDP7SLF6vl2pBdFpaM8IoZldSiu6woEi2ELxYbqsiafoe9JzwN3y5qaWPWKsZur5U2Nt7jCQK4s2ul3WB9cs8sS9Dvx9qfcg4UFT6TidmvnosEYDP0mvKpdI/J6Qsl4QXkW+8tuu6C3IplyTTrw5/Rx/2if0MsXKvmncbGQ+3JY0xq8Le2Pgqm+FWDHHgFO/kXvt1Em9SCF7BgK/fjzuH/Wosddp8LitC1jpvupPerdS57HMQbpWU9VSUpNmYrm2f+In
ZJboe2MMdF17wHNWen2E4e5xGLK1kTXUpuPI7E/lqIjVtzauI8rQBhzF3SBC8XpWbtOaJ7FIuRzojZC0Dyl0KOP8cKHO+PY20H0Ar2bHQvQbiNahu9Bakwu2j92fCkrwJAloFVWnQuBOes0N7RvrcB32DTqpjCGp+mpvMhLyIYx6ZiBqZhnfeFjvBr9ELe9EBqfXl20vgKODS5tdsWmVtiKyX4QP0r5GwA1pZEcLbV+h6PMK6cF/eID54AW0TOYoZaqc7sr1QJ+ChJQNsgAgNqf/mAwNrWfu93vzrxCTxuUVuAD1ouPEMDiSQRPZ3xC1AUa0YFFQAS+04xpg0Btd8F2MuoQ47HWYJisH8hbHcoG3K+7VCCJBO1SvOwCiRDjS+dF9014GamWXWs8hoLvoQGO9zMKKTe0FgaDlaVGZzhizmNYCQvy3nYUx+wsI818qkYwKY/zuaOVYCZnC3cop0qS7Gpen7Nz+OUCdoxKJ4PR2v94tem/yIbFNOKYJRXhMxZKoI3+cin55z4o3diF+R7f3icIDvmt4+6ruTVFmk43UD1OHlqDL5M5JP3mlfV1E2RI1YauQOkufS0cfzhp1s8RtoqtG0iLdu46Br/hNFwGiJVdZ5QqqIK8oM82FcBps3NwYTtvk2umQ6hpye3/pXMWqzk8KH/kaWZ2MqkuszI3xcM9rYrXqtMCSwalTGe1e7oyYO0fJ8XMe7+BXRkaF0DR3Hqz41yhlR0kt2yKZ+3O+14gPPaehIYHKbgcn9b+AQ4y6vmuHVHDkbf022G3Ulf8FKhuj3Pi5sow4cbyXh+iIaGLsJ7/NuGx+T7Z1O/BjQ9Lf4ivJrG3a1hBoPSTMsVlXFVtMX1HZI4VGV7zISEa2Z5n3wK4bOOMyUX/MPfNEFD2bcNBtu3+6rlx7Zfa+EvtDQLjOZqbnb29oStNrz3Ste6tx9jzwsGVbFUs0xxepndRywv2jKlY23u3epdS+GVrPGJGyZln1HfDdv180jw+Cs38Etzf713W8Hbd91q5vq4corwiInrQrN8aARQEipeJV6Fm3tkSkxWhMWwn1ftbfsxCcHpZHfskM/62OoSDGCpTqXmtbAgpfd4FHNQ2BsO3P/+iIrNaX5dkANIHNvcu6LZ47R1/2+kCxLikhlYqK9lELvcbSAeXFM02BGjc6ZboW/HWg5ie1rvJHosY5MtgSw5eY4zGBpNYCVU0TEBNjWmNMEVqMvexdRvQx9PfIX85jHgFNL8y00UqnjaW4eL9fFA+KhaehiEh3sBQ+Neldl47OadhXfjzC2puA3LK5tnlm4ym3Rrt22j79zNM35eBEoOfj43Sn8DpZUeurBVRJNJQKd2WbKL5RAey+sWpJ+GjYOveMD8RIMP4mH20azyck83mXUJU6MU75bInC71Q0GipN+J0iwrWxztGQ+ZDuwcor0XqiecDVjxge/fE5O+Dm776cjlqqO79St5C7S8o+fey8iM3Co0pSqCkvJWirafw+5aH4kdeIGq9qGLq6UN3ti08id+vi2PTIYxnj6sp73Ce/v74xm1j0Qo6qEOaxbJTx3Uneuvoa8WxetamKo1S0vITU0t9xX+n65czWU/eCOjlp16qJS0MqGQZipnk29WanEJVW8x5Oq7UrT0cy2Txm+VDPURcHyTMf1cNAV7XClO9TCb8dx09Fm34djq85JGnV/zIVUM47aLnTpiye7Et7LivwhpDg117lYmWR+fabyibQWK5ev2Uyn1YZdp1UEzf1SdAZrFUHYQhudb9E9G8IT7BSdzh7H3kfHqQdJWRYlQfs3TIq0tYTfJqf0+jWD9PGyhY9GQj/v9/4XgCi+LiuN0wCWU39+V5S6DHY/6xF9EVtLwdJ7vjqhePDlg79KLC1SwtYtkBEJiB7U0pUsU1tD/KVxJ2gHi/WzJN1piW4EkDJXTJjpiABT/Vwc8sLC1DnMJkU+3n1kTKHrktuZ5vLZHgA2fXeLF9nAqoN94LiC21590Ux5Y/CPYeDx7xQ0SCzL/MbKNH5eIYUvm7VlhseDERyznaXYTRKD68q4NF9kuJDU6XgCMU6OL+q3IJyUYz9Wz0t2Lwvaa/5wF6OyXMxsCqh9ofV0Z+wGfu75OHGvcN6n6YQjFv9IuJQomFRu7fKpYgZ1FX5U6v30WLeScML5sNO0TU4jn+t6WttzrYtY6oehSeEM4SfrNipWAE2fx2Tc1I95Tr3nkjqEmwbxFfLpVVjhMMXar8wnhEEHDhGDe0j+dhjBppzvryZwQRbYGmnWb3IuICcL6agskF9/FXm3bs1aUf/X0u1jdGYEkPydRqACEZ9kkNG5cCJYG1ykocBWP7y4ot4JsO0ItosRGkWOc2ca1MfjMwbkIgCYntHSuD/g0RF5HsQS9Vvn/ExVyVKAbzMJ3AM2fiPjDP/lLQ948EShpsjlcZTq47Dm77uorK9T1S+i7HzfEyZz3LlDagm2W0tiUEfh+mHvD0xnq10er+6D8BcojQwFUP3KnvdKt7Aow30939L03suBvzj/I0LxT0mjUWRxWA8BhTQwVSysKu/GicxAZwJCp/HehCJ7qB/3k3DYl+Phwty8yrL8aTj/H2Qc/P+Fcv/nkXHgf2CX139CHecfn9i/6TjIfzIdh/jn6zj/uqr3ny3k/ANT8l9XyIn/LSHn8l9NZ3fdx+5cN/7pOTreRW6k6UDEaSo36ryja6i20YGg8/+U915LsgPLdtjX6FXR8MAjvPcebw1vG95+PVFzLkWK1A2dEBWXh9KOmdkzbdFlMtdamZXpD7vHxk9wx0+HAYFnCO7EXwIcy/Zo+VHeYwH7TyR7kRPHXhZAxvj9iRDAyRfK9TzPr8Sex8LBWZm/+0qg/XhP9FjfMY38eFOZmZ19AYqVGZpD0B80gNxpVoX47JUyDGNMIpIuDMy9DWsSEtg1vrtNeoCb8kWf1OYjcJaY/Da1r2PfACp1dbRyk/3+IB/0EI+rgCuKAeQ9JiSOwj2jhVm5peWAmet4J2QyqHJcCr+n8pxLM/e/HKXJRbYg8q2uoshB31oy/0c0WCAHwBTAxxBMh959L/cq2myax8b7K5qzvFM6YPwP9q+RVvKUcIGgZEOTI48rMp31W6cMMaIQghd6667nzRfKkp/zzAFs9cgPdFMU3nvbGXQteWkAaTfWXzc7KBKvlRkRyyGzCs5cT24VPRc0fVmn9R6SJ6tX7/ctE5pSqLHuvrdDOIsGdQPsE1ykbDP4AMHdUlgXsy4IqdAWphVoxs3zsI9AyqiQQPbor+Dx9osfWrgevMNNk2hdTfeXaGs3uN5nQfAAP6ABIsIwRLwX/M9gPMJMW6tB4dYKTmIe+sZeYXgvEGNW3klbkuOD9ta5e0WM3rN/L8HaGbPQz91mSMzdvzOIuXMQa36ndce5H7XYRe8mljlExF8l+CmOTlfOG41hq4n7RSfvabIeMCKQ1QgodrmnFChLEzK/094hCY6vm0hrZ95aUmTDMPklVD/pqCDwCNJ/wy2vHPkfCVD1n3SRKNPnxGND9XHn2CDRwiPvG9qiuNf10Kj8mfJkmQ3QSxcX5xiZ6l
OR2Zrprbd92Jrtrs3r5+UmJf7YNJDZ0XeH0/ivybi70k02A/5l5wJVVVM5JLLSvlG66MiW536rnQX6TS2FyVZirdtMXf6uLfgxUyODok6GgnQbbEubV9o+go4Atn5CoI6MftEP/qYUHYXeeDB4XD+gGwRQl8NGDWaSjCCvXNPA8xxkYwLySA7xrhDrw9XOnbbfKh5/VtFVzUDNhYWHYAvMAQ57yazgG0dOrF9ZQ+kydbQSVIysbj8aXECPfUlgL6aOrZ5LFGVHoakwxZTlzuaWuxATfesd/QuheDhFjoVwKWqeQXkt98z1sDvvLxVTQBL4xtMKd5qT7kSe/LIy4PzYDu1vL6zwyrp9V9KJezAdRPF2KQRwDJ27YN4KHnEC135ALjM1AxoL7R5qcP3j3YUDv1esut9NX+Evzsz1D21hP4HMfTaGLgDS0zbUEyG67q7F1WO2MAq/FmFu9/EhbKJatTU0fXpLeR5SGiHIhUGRVs3JpbO0CVUq++aCfmCv22E1HNUtxAvrvGhWp2QgYzsLS9cX9lLxffjZJ/khrUGNq/d1xRhtafelaZBDF6h+JYFdNiNW+2se9og5YPLcduPDvsQQzOJYHN+/1sWc/yM8vK4rvv5Wc/cZFnu4uxw98UQWpr+Gb+Dh219Bfn38NqrAoIYN2Yr87hSv0pZIctzLK1Kd9iPajapT/iL+7+ZBjYePXOJXQY0W9elRd+am3+fXvevX0fWZwgyekEDQs5ykDQh4fc2KRvxSiSDz7bFsrwJDIoutV8SESeiDExWrVYcan1/nxPGsVw4Hc/uEhX86zj5FCeYsPpURGMllwaqyaz3rgtTOjU7bIKUm0ucKy8uZVblj1GSU7UIHv2DC8jQQwBVoba4A08enSVE+zzempLV9V/5TTcpdW67IPLi6Xoe+Bl4vkqf92ni7U56vROi0E1kzykGc43KX5ai2Ii2/eL8E65cCeXy8FN0d8eWlgGK8DTt7YSAQIcawprNd38vlA/NkcMpEdZWMv8gCj9+rrNNZSnxtfNOW6o5p0Qc5E+nY62ux/HbDqiv/ta+2/0jVxRf1rNg+hrMm/oFyKLDcSPIKOwg7fc82mvUCNh4bMKGbvA4Z5bE0RjUzO8D0a3kAbc6+xBMzV+jj3PhcYZC1iKrRbmZKhtt5g8Cvx2LWHeuRMu82gGkz3HV8atGzxqEXJkcE3VmYzPRQ83Oui/QJ6Lm29gniPH8KXuNUMFUym4G/q4ALObiBy890a5W1z2u0fcuw7jz00aep0SyRKKpN4ZbHn2BVCGj7jBiosJcIxsCQHtIzkuZo1mlwGSRHP75334be1c6gj7EUcIpzj8pXsHj/9gQMwTq5GaefK4ryFVSxveDRTHZ5YeMieY/jFW19R8dGM3hgVeaGvjCYba4SgupAM2bYPLT8cZs+cLR5KR+xFRB+1exHx1Gyh9TdQes6MJevfCqO/vhYwqV4HwMjMq4PuwII00YL0v9IvFcroHLiaRjTp6pCHg8kNYOA93ZYfkGGZk/UdzhFRFUwfVQ2lqVfcbFg7mqDrOtZxcX3WgQT+xMS7Fm/4qBOBvFDM/3629oAfXfJUoNThlc5Oks7Wz/Ejghi9rIVVvwiizXxsvmPGjh86yegoZnAy7NNcaAWiYB518E8n3XUb9Y/TKP21Ge+wJImDJMAIYKqVLL01na47GAhRdXpQZ0Nr+WO4zeJ26C/3oIf2xWc33j0QpTMVb0kGLFtzJ2URDYBMhsX4AA7c/LpzVaNJO4X3YdEsRlOovNsuz4DC4a+QVQ9ym7OU+d0YKCAtbUvovAIne01tnGDnQ5Yz8FgDHEu3QHBL+jQwLG/Rsyx+7QZkcMVNeBxmwm1844RMLKeudC5Ey0c3sb5Gfvjr74dqbBtkxDLmvpddKTYv1lDRWXE9b1MbdPP89Ny8ZRAuthiW9w2mBbMZNofo4Q7N2f6fj5p2EBUk2CuMuu28fiNSRDWM2hge8vhyUoPJ6Rj5Oh/VNfRTGv7g7OkZJqGr3b8X0VEeKb/UOGZW0r81StPCWM1cYBixX4G5oVJjkZh/P4o6YnPhHROH25mx8Uw573/IHZJoy+eiK7eHuhyqImObgiXE+VGFAN+pu2/Cv0kMJfpSTsgYOHSu/591HcPjW6KX6mD1XfJ5CGP2Qz1bLDZjjQ3ajdoSLA+ZuL1XGldTMUcisF/TkNF//qz0G28qLT2m/aEunkpNK0Xi/11+eTvmphfazrjCBgL0wVpfKZAFozNbjOofvs8y73+go40ErbhxB6kqiP48rQV5J/7hmhgPdQlxMPs7cJQ4YCjBJRGaO3RxUPbHuxd030F7ymvUhG7ESAxl9l7H5d7JPC8HCH+PCk6WuvGBH50NhUFwdTJfljjs7uROQGh0VZWacsdpi21Y6i8cADQGf+g/MZCIGLJR5sDazIL2fuZ2MPMtSDz6gK4ShMnh5slY7VzX1/67V2r7WyoL9D+nvcZhu3MyWeSZ/iZjq2d+Eyo31CdgldlzvsmdG+4KZr9v6jy80+qD/i/w8D/hWSdfyIp4l9Q1vnnB/ZfVdYh/ufLOv/NSWSK/J+r6kD/9yPy/wVVp/uTiv+h6jBXdh7Z+0tAfv5N1cngDMP07kIECUM3ni/KcH/WuGuZH7WlKZxAUI4WXFXRZjUbfKXQNX3Kk9g1aqe7lWoh0xTVIBM8nYAeM3wvmm4EiuXbGd+IYvlrW/VoWq6T+jOi0U314zHdpaZ4T0f8Qu39cePmU5caWnLS+vilhhW/tve3dfYu6hC8otSm+LfgEGQsLUBVhrAj1Se80R1bLfMZUu3O+o8YQ8O9SU5gtModnpj5orK/qIK2pHyeh9mCec/q/VUCAocsFvjKgpv1AEWNXPRF/UBkwSM5vjA8Rxr8p9xbn2uX76yZhoUVCkAF8UnU4ZMvk+3Vx2Pie6nB+Y+Eg9PqR9Vmil9zf16Q2auhUnDzxBaW0M1w7LIM+Uy4/6CcmrPDOGgcy9DfhrNhY7b7sBq5L25E74IaySiuyYwemY9RZTwcK9/G4u5MuyZIwa2eqs8k+K1Gluqh2rgvb4p8tehXQ1hPme5UxxamdRGO1QMw9QXTPwwxBP+BabuNYrzos4j2ZtsRt2bFQrt+7WcntVyCbvWZPUlUoSIcJ33gLh07y8RiFiAIPc/L6NNKxPmK6s7+5FL+C38VchWEIFx8TXX12T1o0NGCWZhHpVclUrKPIJqMP4MW8YKZDIc7zP75ibyQCe6CYyLuSH43KRWl8SvTbCFesG/88DGWU0Dqvmqoi+2KHRs8z38duexAygbuq2tw68V1R5r9Z2pJPDx7z1T4Rvxxw0xQndWILwOoj0v7svN09HfOOR4HwsO/e5WYJeBe4o194ByAFt+LkaipiZ0oATPbMJ+mNR8tMXYQBW5g2C9QdbLDqZv5Z8QlchP0J2a9E+O2YjiNKEMlRfGNUaDdjsFL7SnseZZ16/IHh3skaoU91meOUB1840isb4cxjM8lyDIlW5sctQiv0b0vHIFF0gZwH0mA4wJUp/ljgrcnHsKrZ/tfOTZGX8fU21NdvxtnA
MYXbQw6BwGx/A8vqbw4sspDbpsoxtSHnO1rD3KBmZ9GYAJsXUZI2gK7qtsXQVrb733zsDLaYhdu+oWdBpEiBc7RZHi9f38l6lBqV7zwyrP31QLQVHloDVX5rexeDGrEIajJmRx+egTmGQTd12yua92c5Lex/I9dhxK2f18Cm9gdPLtzv7Enn6GSaRZYEVL+UcZT6sROV7TLpi70DgYMncnaxdRDA8dAzUD9rrKK2c05zwTcWAOu3+CA6IsbJ7YZOXw0VQZEC12Pit30knf0hZ0VUUa0Q0c9ZozDXZhBP9GVXomujHLd8cJjqshs8swSCVmG66ipQ6+HODOoOLmrkw+gnw3kBMBpRwF2Xq4L53q7NFVPrhNuRme/DTAxtyfm35YAjn25ux5zLjt+Req37dfq+fnIv2RMdr2qkh6dVjjSJ3MhevdXuY80O6eHSnFXJ5RY9zylYdNifu4lgWu9WKXy4YGoIPjmuhRIlhwQWfGW1mVFNIJbiljYwEv12mZjXcDo6ojqvZ/xwd6BYX36s1TKt915Rq0Dl9mFay8ILKunzcAQWeEZnmm/p5/6F4kIv/2dwZSpajL3YUurcmA5nkObQuqXhSzxoC3GTGb1ooqZzmVKPF8fwhi3AU4NSg+fJtnM8pXIa7nZWXwZQSUQFXtoB7V6mfYoyFP46y8SnVhuP/evQKpc4QCbr8axOtmX2MVwsh7DX6ZZEiqrCVN4pj5baoNPo6X2a7X1l9Ycsi3J0jjaKCnn2iOYY8fDNs2VjKD+JnWsnjXALLVQT3u3vxQt1p2wueKHRy9Fr96xcemcuE9BNmm2scQh3XfEGFL1gQo3/+U/2Y65udetF2Y4BRYibv5w2PSR3WsIzp3hvwZaHi94X5CMJQ4t93i4zkpDt2l7fE2h41m+wtpO7we9TAVQOFqBX4ATIfYeBm3mZhZb/9BU+fJ4dhI/QBxE+p1KPFpieCEC+2B79hxMjq1yWkjkeZjtTlBChi7KusK2nipxiNy5AhATKZIodA2vwxuda6fFuePR5vJaUHps/ZYui3NHH1HhyTpyuWIUsC47HugaMgYVTkNRRqE4xQIVDI5tcao72agetOOPw0ETqJkVUfLlBoylQJ50aqqT7RpCqnR1lomFB2bZjGxfTjiRLetHKr7EX/uVxlNyygyJPfWL2t4kOcvPv7Pitm7Jouf8xbyNEOR02LiUPoKrPwgBoQtFYejo3WhOQMZHpwkTGTBLI1Df++PQXz/XoRBtx2EeDjWuCRVT0CJ4r72SxJnmfTyHOBn+zrH1hGyYZuKkrehP5mCbEQzC/6in98dUzwVb6ZNWmpcuMs1ZDRGnGB5I9x3Z1J88yb6YmsPd7kHVJ3x0px2/R2G5aMwykx3wqOgkgwjmIL3GC1RG7i0pSeuZISw/pHgIrugi1W9WX2f319SKrrYsZu45rKMftWNY+X6t/u83PeTa3lTBwmZarFLneyPmSiZMxIFFbgRGaIbLcLLS7ItSCU49W06lf2NoUrRMz4NxiN8J3F78izCwtth1NslYzSBt0JV6zbhhRlnoqtIFBmf5UuqC/lUOQfFqltWRVSDDll8uHUEODG9/JkR3mcXR+KozveQn7VYehSFttF1gY5lV9vw1b3btuiejJD6OsVVsQCROJytasR8aSakE26EB4o5y2xNA0CX53XZWpbtcjXmWuwgDXkdkcCIX7NI9XbB4ueH2nhf3KbbU4EAfvrIgFLaSWYnu693UpzUSpifkKxEEkFZsw8ORTB9Ln6dQcMCdzfwuejoZtZPm5a59wd7Oi3dhTwzB/mg7+MShN/NIAa5Rsid7AUiCiTy8JK8HXVCFpz/P9zqLJ0k00A1Ty4M84WGVflePTcpkyFYknq2l69il8+FhUvX2k9gUS/C7tS9cSHLGH8hBZWISAEuQNLR7KOaxP81N0RtE+YxmVGthz18jHF3XQ7HmNpdqUJWQEpy1khXK2Sl52g1Ou3+WyZysVKaXNDds1dQlig70U9Ljs8Fvs3Lhq9n4vwrwoWX4r98VAmm/j/lJiSh9GQYQHEHYIIMaCJcyOPJDOoGK9orWg7f7AlJIUc6P6FepzBWFDSVqV7wuWDcxh+kiXXvWP9OjLzl6YR15GcOmQ35l63M1CRiXPU8WdizFRKC62AUNWI7BJ89ohc7waszWC630nSJN+MUk5opOePm7f/Hr/aF3VgjLgESY5zTEvSN8bD+YdxEeZqRsQhKnVwBN7VMFKEMqTS4zSFRMJiZCbuCfk7j/wQaZ/VqyjpYtSruygLPquZZtyRBTh7/7EyQKKUQZD38liniBtfxkGgJnGJO6p++ZQ8n25DMrJzDc32vVKKMyfK9qXCo4RamwRlXHgJwOc739p4BGq/GsKQG2FXwAYFmtVb131IqLvB9k6NTqzL4rOXsgcmirX0GOMxquQoRpb5PaYLGj93cxSZh9+gI9oEM02ddQfwzrVF3TRs6n4fZuUpCn607/ueemPGjqEeDs6GFvX7DELIkpsCyqOyTkL1KXYxQGEdhN5IUGKnP+49zjFXXfqP0pPr/tvMzyr0dwjnNzfrVisUscJaregKMGJFpf4U3t5YaNYsRpkG6wnX7NzbAQKPgQcNP8I5BJIikW/G//udtNitz/VlgT9vOqLP+aIGLp/3G/+MPQC2z5cCFI5K+63POPO+tJe207L2DUTlAVn6H8Mcct8ZLQ7UGOLmm9WgVv3Lymq5F9p/ypmbwlWbSsPApeMd8Xjf3Idv1yCV6lh7p4PchrN//GA40iArpy69Yg8k8gbel/AKd/XJegeuBwyAZ+aCAVDKOCoxf+ir0ASRE8JCfLfsXIvK/+ivKxlPV3AsM9Du7MXZ5xvKjmUggujUQPIWY0Vi4Glt88askNusp1Y5vWMiNxmKZWc0ssFWeAWQ/2qdSw6kEsy2j01VpzEQSFNYaidka0d5HW65wzktN30iY5eCQTgVbaaHZ47JqnZzdySZGP7x5yHzVjS95Ve5hsb9w7yYIufaDse5VKSnF0jmJHep72UVdxqpaHdgVzJdumu3IUVqooWR7Os9isA8uqOPzkvOm4IVO/ep2Cj2z0pnCak+uhNjPuQlYGZz+v84wHgcFnTdjdtDGlcS0mtSWDPDnB5J1G14FLfv2I8ssj5ym/I1mWCggYFtoDyEn4rpI+/MffwChGT/TcKknbFfuzVfm2g4nPlUqbVNfW2PlWSIcnHxythe0zcPMX2+cIryj0Ki6nDpHTtq9HmRkUtWTnxtZLVhUGtI0RSMjiGuVXlr80v7W/bLkj3y6K3xli97Yg6/B4o1f9GWjHrzsvSX3mk34vbzzPaYHk29B61y2+Hcd+o+Pu3qHjVcprmUm5fgh+RyBFBKD8vGnnGyhXjArUZPJ6OeOKlKH9C4nhKTVcYuvH50fcIn2/8TRK+8bk9aM2NOr8KwgmTaUdqt4WA1aZF+2X3vgXxSaC9rTcf2xlauvH/ZEwty1Lzb0PsnQ5YSCZMGRLIrV5GRL59bS6+3w4WZ8xeKGAFMbvW+QpSKntAGy+xqM8Ut3FzeFYk0+BnIgswykkuVzw
6KT1OiYrDdAwNd1BtJ7LxdcS4KIXinm7gCys7W3w8dOqjPP4duHwvMESS7A29KKZRyKTn4f6IwbPK3aNadqcYqds1O42Ct2SP+zvZMSWwKO7EHC6qjifMDIIO1A3Vx04oOKCsDlkjR+TfSPLwfOQVg6FJ7ApITX4NuPIRPPtTPu5dlmVJUweB0qhCRJTittaYhH2sjDAUn5mJrlk1Co7wyXV2c77D0hOu6bnStBJsktghdA9H5psB8YBFpBdXdJCK5pVFZnY7c0iOkUosF2v7lEXu6L4bK0pS2ryl2C0uaFPd6WR1dfk+U3JEZVY5ARO7STc4sudy/75ToL+9BKaFNJVvNSeFqWawTnTDirROI+LpOreBT2tmR0kq+dGrbpNqlYRjqTwXojgpNWi3NaE3yA1NlpI4mvo54M3leRvKN919K0Ql3ZDqMxLzHQZ39c1lJdZB74hxLJ3wXj8PEMOJA4f6zZysqRbjKFOeFHOS/cGreb8MG2ayTjSOzOtZif2HCE4A9joaJOK6YyoNnk5pfvLqUtM2hoZzjTfSL/bFz65w2mZwayF8mzGEThtTF/uy6BY2APXzIwV0moptPXbvlHb8Z2aAZFL5wWeRRRgoOMLo8hgC/WJet0vFKv9hEzNK4Y9cFYsqHvp+6tchTzZvG2kNMPoW2B52eXbkMJ5GUP60oYKgda7om3HCp2HZ4KNsRMF29quXjhIUmHC39wkzOpi+JILKlEoXlXnn5/4Vz7Judzjw38OEPhi0vSq/lpW/OLPwIpRHv/1MJ9NoqxB4Rpk8ASO5l9++wMWLLqSZqpi5as3LqxfVdpolOr6GsuehIVBY7p0bqw7Kce0bnhtzZNHAyhY8LKw763uv9ZPW4FMas+3dcr5Ljsp0X4azMwZKC5XfdVxoAm+7p8QrNBx92D4HXTVua2x7EFXS4Z1jMpgWb7eM7fS+6fOiOOoM89aOaem/QLFGo7GGMywpYlOHN0Fp95wagm2umYeO2BKaVcRNjCSPqZt6n6dGWSadO8Liy0uAcReIRw19S7DiHZsoRvn3ug+JoRmfEM/9Gt4FL+ibOGde/1CMJau4tYzPwyjGiZWlcB4qbOMgHcd/haVdh5TLK2wc3w5iRkTp/q6OCp56hfsq9uoSpEoq6En3I8W1fTGc62nEe8YJ0pzHv7ZQob+iPCnXZC/JihUT0pdD+ZmwAyUFdtjclX+0752XDnpIABc4NchwFR/CYNX0To4eVousg4tIjzis3F0bn/ikRkN3Ayl78ctpE9ejJljzc0Vs9FeIN/6a6rODoOg6OgBiJ01sXSy86yBTVpuSlbe1uIKmy3G1ewP0erDkVoSFRE9LMIh5RBvKy4wbIuFX/UTFGGlqbcyhfl68ktQ8bwgFeR0gawLtvhZTmr+2zmG6Zv1SUaLm0R0n2NgphU9haqtVSDuoZ40yH17CFXVCdxV6bM9cxn54sT5GV6q6c6cH0XJfHsAMDEc3ePyl6DWhIpJHq/JvzMUnl+WdVgVn+U60RAnm/xqwYHT09sqgVZtX1FmftIs/Hdp/ZDDB4cZZfrF04mYh9HR0NW/PZ6DMTie/66/ed3U7tCHkrZT4ewNRd2mTXAvcoFh1LCXORCUsqRZPQYmgjw1lX2BM+VLFVyfopYgOnu/bove/We/Ni/OOIczwtXkuY/nBcNA69UnjDxwvK+/q3XG2BG3SUkAQfzTJNJpqrkztsfQtnvMbyll/B2nRo/XUrP6BuX7ARnMe3eVVHAEsZAGztMQZiussvB6R3vRMmTJOfub29V0Es2qsQ5twvTnOltJ1xfoXEEmoeXiFK/EZSPIYi3Rt0lL9RN82D27LHoNr+ljsMQHTSA+hg64B8Zms3me9VpOtY05IIpxQanlgK8ogkNtvLNRlpjt+L0G0qC8p2N55Q+bHxHwD0L+kf3wxT5f8oAePF8c6YJYo3Q/XUnpgQzbBfcFSaSVJl3RGcllJPyjER2TzUZ+851jQkmZ9FTpafQXUj66ZtMfH4rsL+BPPnsn+EMTld2hdsV7IBAphDjLPRkAEqqwvVgfDBRwr3nbdVNLx35EcxgCwOLL4qDnmVsGY2qfuVh3Cz4C/c3MeYMJ82ibf1xKJDTe9g1rVz+hxKAYfBDbGNnta9n16YpVqTJiVedGA50qAqbEU3x53ZSbDj2CqwCmubfwv6JHwcTSiNGZT67P9/qrblA7q/9LtOxF3RcYz0J57RqsUzLKfrQk+x5H5tmIITV++cB0Dz4ScLAul7VTujPiEzASKc9KK/oyi3gc7FgM2V26d0TFthypXt9xpE8+Oj7bbszoOyKW3g3AG8YnapAQphOVGZSksr6iWtl24P60JG2IiH791CqbW11qxJpkcICnarBaE7SlB2ArTyztKYkIGYv+nI211qDuJtfe2wOlcNqbku2Qht/fwDMYTX/cJw/n5EdJIcnpXwRr5ao+GdzehGi0eFBGTjC3IGA0xqM/Ulvjsgjv3hRCxVOqJ3FE51177YnMknqnoBfxzwo5sNToR1Re3yYV8rTJiuQN7BXFMAyg7c0XtjBBBk9ptPnX7cSq8bXG7JLBX//LIuh1Yf5fhwF84enXmijM3ZZeL7zmIxMSv2Xp7KFeiloSBzGCPB/+pc0qLJF4sKU7cjXipDAmYiXQX7VJoqWZr/pOPgBIqNg2hwhFH3EU6pDPkVP4CEwJVt4EAAhcHHYlhk1wBkHEQQph5Mfc1Uzi5lTenfe8HRG7WtGPrE+BFoWR8SEV5GAJQY99w96Wnb8acppvXWCnkiSbA2Q6wNF3evl8I82/Ek5D+ERtWnU1YgC+wHrZrFfwHNF1BpWeo4etyf2C/S727N8coEushCo104yOJYgPQ7LArnJ+IOy9LBQpHwh2qL3TIrEVLXUJnOR8RGpjB5ihyUnZSz5bq/pfCcmVA6Q71MtK4JrRTsPX4jMKiecW9u63IaKZxG5qqScomXTGXEeP9E/u9Yh8eAiXuqZrm9p3mGSdYQtfZv72sQF2C7x5U0qzdndWr/+S/Ex4fsrB/0GOX/6QGaJoEEwItVUjNXnnHskdzuekGaeGfptgxi8zBhYkLphpR+HfijUj9VjRObOsdJfpnBs9rgs+TGwf/XlS0fgRP9R/khAjlx3en7g8luYTpSG0jqKxSwN51DmU/u4tpYZUo2ItsaE+edkqAdFZD8mDCDXHn4hgqKnykVlxCBi0RsqLaaTtyxXdmFQP49N+5c/0h4uh1Ug5lyMDyVNyfX/tK45GNSClZQNXC4Df1Cda1LQy9PA1PzDCLJMdBWBzljCxWtj5++vciIiBUhiLaD/4//OMHOLfSZ7418nIof6XTMj558f1PyefkP9NQs5/n45D/fej/p/Tlf5fT8f5J6oU/QdXy6GI/536P/37n5udA///ODuHLz7/5cyVnNiZ4zjC70NX/PslVypjs7QN2pg7tN3Qo0DLDPj6u735t57nDc/wzvuwv9//u8e07Q/Lb8AJpk2d3PQvJIbuyQ/IpoWaE/myLAH1b2XlABue9fuY5t4iRNbtWH9wO0Zz9f7rglNfvdJ9FnoWmXy
JEuCjQdoN6vq/yS0M9VTryscTcDo3MYiC7xXc/7Xuyo6xAPclApjERdndZLkd+Q2qiKNr+AYvcacwKdD7kE6bMSO7Coi4dL5/0YwJTt1Dri59jfWcu9neyXQYdrxvCKH9IQS6cXOT5djzsY37CfbhN20miC2I3DvByalNKUC9H1tamiztzAJns2x19wF5HduAx2d/WWXGGgPoeyZIXyd7yINpuuHWeGmqYtcX2jXyjxKkjC8Xk06/2Ymi7KbaxYbmzK+YRv5FsdTeWuIRfY2Bkgjrz12ApM6slDfnnx3UeFcIgs5PzvByAj+2qDrdSlxPSwmMGOYN7cCcP809mroyVDdkCYK9gkJcitI4KlSjHNtTxZg14Jr53oVn+ZWCIOdOeuX6kKmqn2c0c595SmwtOu5Ds0BNkI7Z/r54uWQZKLTijlRsMS2J+xct/7VDW0u3dIIvGV4gW3/8whO9V0+nazJ3Ye7AgaUD6XtHKkCx3m4/DDOXtYgUcr3+jva1DXFENljmOORLN2vT1s2mbG6ptf/yeqXDMO/kRS6qZDUP/tQeutEDbXSlltq6RuPR9WNrOW969aHC9BJqoM4sX+hoFhymknDTnqOqdDCJDKO0qbZUH59tio+LofYG7ako42KLfv7aDnFHAofumeTGC36DI0qD16dZId2FAipcyGX6lYTk8GCL8lED7hL4q3m+4OrGVXwziWSHiSmg0+1b0eGq6t1i+TWKBgIb94W+xvHPSHre6ZkQpGgL/J5AKL9z3C6TIt8E0l76eCTT89G52CFEsK0FTv0RvSS0ddniq9Pggpeu+/iaprjSUE/hZk9oEASsprfbICTrV2vaaSk663GvJ1HxspM4U2m4AKKHQCQGJKo5h1WyyN64/y7WD0EmemnAmui0krKAdUHiqk33LkXCGENGIzsb1dMXEH1IBK+/ONTU4+Umv2MV60UfZOmLhSn4Qb+uIhh+FDPN8tG9C7J/MSZBz+rAqJPLlTHRkqNfg5LpU446ZzH1EM+oGzGdcxkLRvLXVPYw9fC8edA1TpgHL/LXaMCcbBIM4/uuX5+Doh86nQlxrkKSOde7jkYkzp57PKLSOUPlVOrX7DStqsEvRY+CYS2a4muJ27djHSdzErFmlrAKXO76ehxUZXSesCexzfS7HeiO2Gx4pcMack9TjofcL74IS5Eol5iIQlkBmqfk7wk8Y7o7b3FSAvDpxEYznMCQWzKG13MXl+XWPB7/Wb7tQ+h396jvM0LG19nHBXTdUW7xz7xx352N45e0yZMT39+OUr3PJArB+0BbtahY/mCvHX1vvd6nZzq7FHwwvjPmmuDpwxLw0pHgUECPyN3MUbOJ3Ff6/Czja4fcRKW2BLjohjFfmwszqimh7MwntgbKGl+Yr9vhGI0eY5Rbr+38Hy0XQP47mONfB8VC/0R+878gjP3nB/ZfE8b+E+ThP7pYwL8Yjv2Xalb8H4xjJe6/4Ng4+3phcLXS7xH/q9oAva5MlpJwauKxX89NFL0b7HtwnahrvEjhBxd8Y/ocqJ/BjbT3mx/2SOtvLBmXd+4fE3j4ZzMoCsaRsEDwn0Gi5V+pdApBgMCDPc8vIwiLev8fjM6cQirX5x1R1lSGw6ztJMWU9TxmcnlJBJUJWLYn872ZVLUbZ2cNE2YbmSvNA6MV+Fbo0PGXZDub0jr0+rC/xrmw3TRZ4JFzyOSdeO987sWQIzOBhf8Yi9yPdOrlKgFKjDEpK4dO/aN3sNFHp5u/3u/5QIbMg5G+XS6oSgbLcbAhhBkBCAeyvbQEK3FOMYuK8BnBDHquyiOzUnXRIgyBQJ4lDluP6YRx0VKKeQR0WWogFO4ZHHyR0bmWkClo3LfeXebzrU5WOGkGDuFS+JqvL0vxvMd2eH3E9odSICEDuO6/wP0n7/OWNe+85VFL7BjF9/mRloedTRKgB+nFEIUKYSz6stYrgidIIiVl0hl3ePHJoN8DWm+RGH/KejdacEqv5rJSLFDzisiktpivSBZ7hTc/79kLWLaGeY9HGaJ5hnFqvdF78+ZwmxAt5Zi4VST7P+n3zhOJkre/uohHokBX8Q/C4hJJsvZkD4ovaphGCUBUx6JFPQSVv5lxZ749FMw+3wP50Z7lObC5e6WU34pUR+/qTSrVsnCHmlLcBaaBuLPjk0igZUgQAfHOscFtAW80UjKIHtdVkAqBtJlEyHpc4dZ0tLXvV228XAoEKDAPyY7ewYWYsbZePzw1StA7wRQH7d9xyK54lln2yveZMW1DGiBLTuwijjJQ7vaSha735n5K57FO571IR0rMp5HJfVFuXkzlinbIMBEPL/uXr3iWdULhBUs9mBymL550VrW8m+BDqdaqEh9PmxAvmxBb/QVLhyTzTiTjD0qXQUmXH5MuWc99Uw5BAzik4aZKyrjwnQUeBzmaC76Wpa7i+r/seYXqf3oenaKhrsdnvBQwpBULzYOCzTuDzVuIzYcLzY0PzRNNzVWULzBnVHdkV6p8Q5tljdHXOUniM/9AwE31T6zMdktUeGewLVBtQ2yuslZuXb0bTniGWqBVkEvanYkhtj4FeTYFsU8O0UoOGWq1SsiXH1+ANVdakPfOnaIc366teAa9nWXzbONgVY91BRe4pH7aNQHn0CWk2L/2Oq8fpss/x1xJ2+fCv4xyfV7+Wwyi6+tDm6zFF032L4buX0SwQerTAIa5qz0cPai15piD7SRRowp2abHtMFaYaBesg05uHII2YWQ2TXD/j/xNrjwU/UebCl5OMEgZ3ndqVs4cfnY/D4debe+MDpjspKcm5EqkMiD0Qddy4SpAxBYS1jbPWJ1USd0F3g0BACTgmXvf2lc+vehBU/P5fO1c2WELlSZDtj+MGQr4pPyY9VMzj4HEhF/X+Ur9zOuZkHePvJ4lVoPjKbhQAaFnlZ8lfRAMVYC04WdGc5iuDGtHn4HLXF98LGoqixKs2XCDeyvwGTcHU8lM0S98qgpXxk+10njdz4ohm7n9kXXEl3HEZR3EwwfOpe1FbMNF9ID0uvwKRjp2I4e+6skS2oKGfo3EcQeWAqp2XDv49Bftmopr1M+feK/81dzHOrWreO9dF5HqZ4nNzPEeWO4Td08uUj7PNXpxDYZ0S3oF3Pk5X4oGEpHrdqg3lT5iPG97uwEnTSXWmw2WHC3vWxImoAMiKvCgEQ8nssIFhzFIBsy2Z6D9XvVy1wysL+J3Kv0yRVvtfdZ02tatCEMKfc6qnXyHdLy1ic5rf7PbbefoXidY75iM6ctyKeaJPz9A5LrQKXovpi1TDIcqpReZ44MeUVl0kjMFa+eJcDsg538kKGfKnnATr0ueElEAG/zGBt5JXMaglL/Rst8zEGRSKMP6vhWIMcp2jtc9WkIbRCAHpOnUyiy/K+o8aad552e/JBwN7Ua97HJx9xF9J4dXNHIfvwz/XlVD5O20M+Z3pvH+tLRHMlYUKi0lVHiu7bQrbB+qg1LfMU55NaHHJpnf0kcxr2sUTwaL/2SMm7i8nX7nJoDFoupjhibPYFqUkcVJtVPbuEl6U70akmY7Wo+4imb0tiZjC8ntwK
RIrs3oWXGFmHnE3DBg4RvFaPLXtLuh7EAjP1WTGRKzSPUzokrVkeI8cvL4cenKZ3s6ttOmpPpas0aQcINPh0PfpZ9WjEuTagjLOa+cXtPts0z+ZafkYm1dQZXJDl0qNfMZfeazupWwHyWQZyjTmr+2Ymdds410izs2LGvwaAL84StfCV9UvCiiufN78UJIFMu8Eu3wd/f9lcL7XZj1oh4vt1jaoqx/5HQKEQAA2wEi0BRmzYNju3YKjdU8sM8GWdlrcapspNbEaw1bnbhm6leQKzAvZFwd304xPGGvSlvNP6jgj6KPdY7TC64fKK6v2fvN5R4PTffynOpUCNo1iVfSiXAVWYRzcZXgH2KHdr9kbbjU5iHU+QLxhsW9xs1Cbx2CFxgJ4CC3n3no5fwleIVdeYJlyPYbrpkbKf8PF6uj/h2q8C/EPv/XDKL88wP7H80+gRMYx+2/uk98x6jWx7wAj/hP</diagram></mxfile>
|
2103.17242/main_diagram/main_diagram.pdf
ADDED
|
Binary file (36.9 kB).
|
|
|
2103.17242/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,62 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Drones are actively used in several everyday applications such as agriculture [@UAV_Agriculture], wildfire fighting [@UAV_WildfireFighing], inventory management [@UAV_Inventary_Application], cinematography [@UAV_Cinema] and surveillance [@UAV_Survillance1; @UAV_Survillance2]. Owing to this large-scale deployment of drones, computer vision researchers have recently put forward several new techniques for object detection [@UAV_Object_Detection], tracking [@UAV_Survillance2], agriculture monitoring [@UAV_Agriculture] and human action recognition [@UCFARG; @Okutama-Action] in imagery obtained from drones. In addition to detecting objects in a drone video, it is also important to detect drones themselves in videos captured by other drones, in order to prevent drone attacks [@UAV_Attack], avoid drone collisions [@UAV_PAMI] and enable safe multi-drone flights [@UAV_IROS; @UAV_CVPR2015].
|
| 4 |
+
|
| 5 |
+
Detection of ground objects and aerial drones from drone videos is a very challenging problem due to large and abrupt camera motion, arbitrary drone shapes and view changes, occlusion and, most importantly, small object size. Although considerable recent research has addressed detecting and tracking ground objects and recognizing human actions from drones [@UAV_Object_Detection; @UAV_Survillance1; @UAV_Survillance2; @UCFARG; @Okutama-Action; @sultani2020human], limited work has been done on detecting drones from drone videos [@UAV_IROS; @UAV_PAMI]. To tackle the drone detection problem, Li et al. [@UAV_IROS] proposed a new drone-to-drone detection dataset and employed handcrafted features for background estimation and moving foreground object detection. Similarly, Rozantsev et al. [@UAV_PAMI] introduced a new challenging dataset of drones and aircraft. They employed regression-based approaches to achieve object-centric stabilization and performed cuboid classification for detection.
|
| 6 |
+
|
| 7 |
+
Flying drones usually occupy only a few pixels in video frames. For instance, the average drone size is 0.05% and 0.07% of the average frame size in the drone detection datasets proposed in [@UAV_IROS] and [@UAV_PAMI], respectively. Note that this is much smaller than in PASCAL VOC (22.62%) and ImageNet (19.94%). Small objects, including drones, usually appear against cluttered backgrounds and are oriented in different directions, which makes their detection quite difficult. This issue was also pointed out by Huang et al. [@small_object_low], who demonstrated that the mean Average Precision (mAP) of small objects is much lower than that of larger objects. Furthermore, object detection performance worsens further in videos [@chen2020memory]. To address this, Noh et al. [@noh2019better] proposed a feature-level super-resolution approach that uses high-resolution target features to supervise a low-resolution model. However, this requires the availability of both low- and high-resolution drone images, which are difficult to obtain in drone videos where the drone is already flying at a far distance. Similarly, Yang et al. [@yang2019scrdet] employed a region-proposal-based multi-level feature fusion approach to detect small objects and introduced a new loss function to handle rotated objects. However, due to very small and less salient objects and cluttered backgrounds (clouds, buildings, etc.), it is difficult to obtain well-localized region proposals, specifically in drone detection datasets. Through adversarial learning, Wu et al. [@ObjectDetection_UAVs] proposed to learn domain-specific features from metadata (flying altitudes, weather and view angles). Given the recently lowered prices of drones, it is more practical to use an RGB camera for detection and collision avoidance instead of relying on expensive hardware for metadata collection. The authors of [@zhu2017flow; @liu2018mobile; @wu2019sequence] proposed aggregating convolution features across video frames to improve video object detection. Our experimental results and analysis reveal that although feature aggregation techniques [@zhu2017flow; @liu2018mobile; @wu2019sequence] work well for detecting large objects in videos, explicit motion information is more useful for drone detection.
|
| 8 |
+
|
| 9 |
+
In this paper, we propose a two-stage segmentation-based approach to detect drones against cluttered backgrounds. The first stage uses only appearance cues, while the second stage exploits spatio-temporal cues. Given a video frame, we divide it into overlapping frame regions. Each frame region is passed through a deep residual network [@he2016deep] to obtain convolution feature maps, followed by pyramid pooling layers [@zhao2017pyramid] to embed contextual information. After that, pixel-wise and channel-wise attention is applied to the convolution feature maps to discriminate drone boundaries from the background and achieve improved drone localization. The purpose of the second stage is to discover missed detections, remove false detections and confirm true positive detections by employing motion information. To discover missed drones, we use motion boundaries to find probable drone locations. Given the detections from the first stage and the motion boundary locations, we track each location forward and backward for a few (eight) frames. Cuboids are then extracted along those tracks and fed to a 3D convolutional neural network [@carreira2017quo] for spatio-temporal feature extraction, again followed by pyramid pooling layers. As in the first stage, we employ pixel-wise and channel-wise attention in the second stage to obtain improved localization. The proposed approach significantly outperforms several competitive baselines, and in the experimental section we validate the efficacy of each step. The rest of the paper is organized as follows. Section 2 provides a brief overview of related developments in small object detection, including drones, in videos and images. Section 3 presents our proposed methodology and Section 4 covers experimental results. Finally, Section 5 concludes the paper.
|
| 10 |
+
|
| 11 |
+
<figure id="fig:pipeline" data-latex-placement="t">
|
| 12 |
+
<embed src="images/pipeline.pdf" style="height:7cm" />
|
| 13 |
+
<figcaption>Our pipeline is divided into two stages. Stage-1 extracts Resnet50* features from the overlapping regions of each frame, followed by pyramid pooling to retain global and local contextual information; channel-wise and pixel-wise attention helps in learning better localization of drones. Resnet50* refers to the modifications we have applied (see Section 3.1). Stage-2 combines spatial information with the temporal information in the videos. Detections from stage-1, together with regions discovered using motion boundaries, serve as candidate regions where a UAV may exist. All proposals are tracked forward and backward for 8 frames to generate cuboids of size 224<span class="math inline">×</span>224<span class="math inline">×</span>8. Each cuboid is passed through the I3D network followed by the attention network to accurately locate drones within it. In the figure, MD, TP, FP and MB correspond to missed detections, true positives, false positives and motion boundaries, respectively.</figcaption>
|
| 14 |
+
</figure>
|
| 15 |
+
|
| 16 |
+
# Method
|
| 17 |
+
|
| 18 |
+
Our goal is to detect and localize drones in video frames captured by other flying drones. Our approach to this challenging problem is based on three observations: (1) due to the very small drone size, region-proposal-based methods may not capture enough discriminative foreground-background information, so a bottom-up segmentation-based approach that classifies each pixel is preferable; (2) the model should learn subtle visual differences between a drone and the background (clouds, etc.); (3) due to the large, abrupt motion of the target and source drones, feature aggregation methods may not be sufficient, and we need to explicitly use optical flow information, as has been done successfully in several action recognition works [@twostream_CVPR2018]. In the following, we first discuss the details of the segmentation network (Sec. 3.1), followed by the attention networks used to obtain improved localization (Sec. 3.2). Finally, we discuss how we use motion information to discover missed detections and thus improve recall.
|
| 19 |
+
|
| 20 |
+
We start with appearance-based pixel classification to accurately localize drones. For spatial feature computation, we resort to deep residual networks [@he2016deep]. However, given the extremely small size of drones (0.05% or 0.07% of the image size), obtaining good discriminative features from the whole image is not possible. Standard 2D CNN networks such as Resnet50 require a fixed-size input image (473$\times$473), so resizing a high-resolution frame to low resolution (1080$\times$1920 to 473$\times$473) for feature computation can further reduce the spatial extent of the drone to one or two pixels. Secondly, as the network goes deeper, we lose local information. We address this in two steps: 1) to avoid resizing the image, we divide each frame into overlapping regions; 2) we modify Resnet50 to keep local information intact as we go deeper into the network. Specifically, we extract features from all four blocks of Resnet50 [@he2016deep] and concatenate them after spatially resizing the first block's output to avoid a dimension mismatch. Finally, we use a 1$\times$1 convolution to get back to the original dimension. We call the modified Resnet50 'Resnet50*'. Inspired by the use of pyramid pooling [@PSPNet] in several applications, we employ pyramid pooling in our framework: after obtaining features from Resnet50*, we apply pyramid pooling with four different kernel sizes and concatenate the resulting multi-scale features after up-sampling.
|
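
As an illustration of this multi-block feature extraction, a minimal PyTorch sketch is given below. It is not the authors' released code: the layer names follow torchvision's `resnet50`, and aligning all four block outputs by bilinear interpolation before concatenation is our assumption about the resizing step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class ResNet50Star(nn.Module):
    """Sketch of Resnet50*: concatenate features from all four blocks."""

    def __init__(self):
        super().__init__()
        r = torchvision.models.resnet50(weights=None)
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
        self.blocks = nn.ModuleList([r.layer1, r.layer2, r.layer3, r.layer4])
        # 256 + 512 + 1024 + 2048 = 3840 concatenated channels, projected
        # back to the original 2048 with a 1x1 convolution.
        self.proj = nn.Conv2d(3840, 2048, kernel_size=1)

    def forward(self, x):                       # x: (B, 3, H, W) frame region
        x = self.stem(x)
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        # Align spatial sizes so channel-wise concatenation is possible.
        h, w = feats[-1].shape[-2:]
        feats = [F.interpolate(f, size=(h, w), mode="bilinear",
                               align_corners=False) for f in feats]
        return self.proj(torch.cat(feats, dim=1))
```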
| 21 |
+
|
| 22 |
+
In experiments, we observe that although the above network provides decent drone detection, in several cases it is unable to accurately detect and localize drones. Therefore, to make the feature maps more focused on the foreground, we use pixel-wise and channel-wise attention. Both attention networks are described in the next section.
|
| 23 |
+
|
| 24 |
+
For a drone of size 16$\times$11 (the average drone size in [@UAV_IROS]), missing only a few pixels on each side drops the intersection over union (IoU) below 0.5. It is therefore crucial to obtain accurate localization for a true drone detection. To achieve this, we introduce detailed pixel-wise and channel-wise attention on the convolution feature maps. Recently, several attention networks [@NIPS2017_3f5ee243; @SEC_Network; @yang2019scrdet; @Choe_2019_CVPR] have been introduced for different computer vision applications.
|
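
The sensitivity claim is easy to verify numerically. The small sketch below (illustrative, not from the paper) computes the IoU of a 16$\times$11 ground-truth box against a prediction that misses two pixels on each side.

```python
def iou(a, b):
    """a, b: boxes as (x1, y1, x2, y2) with exclusive right/bottom edges."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) \
          + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

gt   = (0, 0, 16, 11)   # 16 x 11 average ground-truth drone box
pred = (2, 2, 14, 9)    # shrunk by 2 px on every side -> 12 x 7
print(iou(gt, pred))    # 84 / 176 = 0.477..., already below the 0.5 threshold
```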
| 25 |
+
|
| 26 |
+
Inspired by [@SEC_Network], we use a channel-wise attention network that automatically learns to give more weight to informative feature channels and to suppress less informative ones. The architectural details of the channel-wise attention network are given in Figure [3](#fig:channel_pixel weighting_network){reference-type="ref" reference="fig:channel_pixel weighting_network"} (a). The attention is applied through channel-wise multiplication of the attention vector with the convolution feature maps.
|
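
For concreteness, a squeeze-and-excitation-style sketch of such channel-wise attention is shown below. The reduction factor and layer sizes are placeholders, since the paper's exact unit counts are only given in Figure 3 (a).

```python
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Sketch: learn a per-channel weight vector and reweight the maps."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                    # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))               # global average pool -> (B, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                         # channel-wise multiplication
```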
| 27 |
+
|
| 28 |
+
<figure id="fig:channel_pixel weighting_network" data-latex-placement="ht">
|
| 29 |
+
<embed src="images/pixel_channel_weighting.pdf" />
|
| 30 |
+
<figcaption>Architectural details of (a) the channel-wise and (b) the pixel-wise attention network, where ‘FC’ represents a fully connected layer with its number of units in (a), and ‘C’ and ‘F’ represent a convolution layer and its number of filters, respectively, in (b).</figcaption>
|
| 31 |
+
</figure>
|
| 32 |
+
|
| 33 |
+
Similar to the channel-wise attention vector, we generate a pixel-wise attention matrix that assigns more weight to spatial locations corresponding to drones and less weight to non-drone regions, similar to [@Choe_2019_CVPR; @yang2019scrdet]. The architectural details of the pixel attention network are given in Figure [3](#fig:channel_pixel weighting_network){reference-type="ref" reference="fig:channel_pixel weighting_network"} (b). To suppress background information, we perform element-wise multiplication of the pixel attention mask with all channels of the convolution maps. This is followed by the addition of the attention mask to give high weight to regions containing useful information.
|
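
A minimal sketch of this multiply-then-add pixel attention is given below. The filter counts are illustrative assumptions; the paper's configuration is shown in Figure 3 (b).

```python
import torch.nn as nn


class PixelAttention(nn.Module):
    """Sketch: predict a spatial mask, multiply it into every channel,
    then add it back to emphasise informative regions."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels // 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels // 8, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):                    # x: (B, C, H, W)
        mask = self.conv(x)                  # (B, 1, H, W) attention mask
        return x * mask + mask               # suppress background, add mask
```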
| 34 |
+
|
| 35 |
+
In the experiments, we observe that the attention networks significantly help in achieving better drone localization. Note that the complete stage-1 is trained end-to-end, where attention is learned automatically through the network architecture, training data and losses. Figure [4](#fig:attention_vs_non_attention){reference-type="ref" reference="fig:attention_vs_non_attention"} demonstrates the difference in network outputs when training with and without the attention networks.
|
| 36 |
+
|
| 37 |
+
<figure id="fig:attention_vs_non_attention" data-latex-placement="t">
|
| 38 |
+
<embed src="images/Attention_vs_non_attention.pdf" />
|
| 39 |
+
<figcaption>Role of attention. (a) Input image; output (b) without and (c) with attention networks. The top two rows show examples where the attention networks help the network learn to give more weight to the pixels associated with drones, and the last row shows a case where the attention networks suppress non-drone pixels.</figcaption>
|
| 40 |
+
</figure>
|
| 41 |
+
|
| 42 |
+
Drone detection datasets pose two major challenges. First, there is a large drone-versus-non-drone class imbalance, i.e., the majority of pixels belong to the background and only a few pixels (if any) belong to a drone. Secondly, due to the extremely small drone size, a difference of even 1 or 2 pixels between the detected box and the ground truth brings the IoU score below 0.5. Therefore, we use multiple losses to train our network. Specifically, to address the class imbalance, focal loss [@lin2017focal] is used, and Distance-IoU loss [@Distance_IOU] is employed to achieve better localization. Distance-IoU not only penalizes low IoU between the ground-truth and detected bounding boxes but also reduces the distance between the centers of the two boxes. Finally, we use a smooth-L1 loss [@ren2015faster] to jointly train the pixel-wise attention network shown in Figure [3](#fig:channel_pixel weighting_network){reference-type="ref" reference="fig:channel_pixel weighting_network"} (b).
|
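
A hedged sketch of how these losses could be combined is shown below, using the focal and Distance-IoU implementations available in torchvision (version 0.13 or later is assumed). The equal weighting of the two terms is our assumption, and the smooth-L1 term on the attention mask is omitted for brevity.

```python
from torchvision.ops import sigmoid_focal_loss, distance_box_iou_loss


def stage1_loss(pixel_logits, pixel_targets, pred_boxes, gt_boxes):
    # Focal loss handles the extreme drone-vs-background class imbalance.
    l_focal = sigmoid_focal_loss(pixel_logits, pixel_targets,
                                 alpha=0.25, gamma=2.0, reduction="mean")
    # Distance-IoU additionally penalises the offset between box centres.
    l_diou = distance_box_iou_loss(pred_boxes, gt_boxes, reduction="mean")
    return l_focal + l_diou          # 1:1 weighting assumed for illustration
```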
| 43 |
+
|
| 44 |
+
The purpose of this stage is to confirm true detections, reject false detections and discover detections missed by stage-1. To find new probable drone locations, we use motion gradients, as explained below.
|
| 45 |
+
|
| 46 |
+
Drones can be characterized as locations undergoing motion. However, since drone detection datasets involve a moving camera, the simple optical flow magnitude is not very useful. Therefore, we propose to use optical flow gradients to capture the change in motion. Specifically, for every three frames of a video, we first stabilize them using key-point detection and then compute forward and backward optical flow. After that, the maximum motion gradients across all three frames are computed as follows:
|
| 47 |
+
|
| 48 |
+
$$\begin{gather}
|
| 49 |
+
G = \max\left(\sqrt{u_{x}^2 + u_{y}^2}, \sqrt{v_{x}^2 + v_{y}^2}\right), \\
|
| 50 |
+
M = \max(G_{0\rightarrow1}, G_{1\rightarrow2}, G_{2\rightarrow1}, G_{1\rightarrow0}),
|
| 51 |
+
\end{gather}$$ where $u_{x}$ and $u_{y}$ (resp. $v_{x}$ and $v_{y}$) are the gradients of the horizontal (resp. vertical) optical flow component along the x and y axes, $M$ denotes the motion boundaries, and $G_{0\rightarrow1}$, $G_{1\rightarrow2}$, $G_{2\rightarrow1}$ and $G_{1\rightarrow0}$ represent the motion gradients computed between frames 0 and 1, 1 and 2, 2 and 1, and 1 and 0, respectively.
|
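
The following sketch implements these two equations with OpenCV's Farneback optical flow; the key-point-based stabilization is assumed to have been applied to the three grayscale frames beforehand, and the Farneback parameters are illustrative defaults rather than the paper's settings.

```python
import cv2
import numpy as np


def motion_gradient(f_a, f_b):
    """G of Eq. (1) between two stabilised grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(f_a, f_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]
    uy, ux = np.gradient(u)                  # gradients along y then x
    vy, vx = np.gradient(v)
    return np.maximum(np.hypot(ux, uy), np.hypot(vx, vy))


def motion_boundaries(frames):               # frames: [f0, f1, f2]
    f0, f1, f2 = frames
    G = [motion_gradient(a, b) for a, b in
         [(f0, f1), (f1, f2), (f2, f1), (f1, f0)]]
    return np.max(np.stack(G), axis=0)       # M of Eq. (2)
```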
| 52 |
+
|
| 53 |
+
Motion boundaries have two limitations. First, they have high magnitude across the boundaries of drones and, in most cases, do not completely cover the drones. Secondly, due to the approximations underlying optical flow computation, the maximum of the optical flow gradient magnitude usually does not align exactly with the moving drone. To address these issues, we dilate the motion boundaries and then apply a conditional random field [@crf] to obtain better localization of candidate drone regions.
|
| 54 |
+
|
| 55 |
+
Given the detections from stage-1 and the newly discovered locations obtained using motion boundaries, our next step is to extract spatio-temporal features from all candidate drone locations. To this end, we initialize a correlation tracker at each candidate location (including stage-1 detections and newly discovered locations). Due to the small drone size and complex camera motion, trajectories tend to drift away from their initial location within a few frames; therefore, we restrict the trajectory length to eight frames. Specifically, given a candidate drone location, tracking is done three frames forward and four frames backward. Note that tracking is performed after motion stabilization of the corresponding eight frames. To capture contextual information around the candidate location and to compensate for trajectory drift, N$\times$N patches are extracted from the video frames along each track, resulting in a cuboid of size N$\times$N$\times$8. Finally, to extract spatio-temporal features from each cuboid, we employ the Inflated-3D (I3D) network [@carreira2017quo]. We choose I3D due to its fast speed, small memory consumption and excellent ability to capture detailed spatio-temporal characteristics. To make the cuboid size consistent with the standard I3D input dimensions, we use bilinear interpolation on each patch to resize the cuboid from N$\times$N$\times$8 to 224$\times$224$\times$8. 3D convolution features are extracted from the third-last layer of the I3D network, which has dimensions 14$\times$14$\times$480. For consistency with stage-1, we use bilinear interpolation to resize the feature maps to 60$\times$60$\times$480, followed by 2D convolution layers that convert the 60$\times$60$\times$480 maps into 60$\times$60$\times$2048 feature maps. Experimentally, we also tried patch super-resolution and feature-map super-resolution instead of bilinear resizing; however, we did not observe any performance improvement.
|
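
A simplified sketch of the cuboid construction is given below. The correlation tracker is abstracted into precomputed per-frame centres, and N = 64 is an illustrative choice, not the paper's value.

```python
import cv2
import numpy as np


def extract_cuboid(frames, centres, n=64):
    """frames: 8 stabilised RGB frames (3 forward, 4 backward around the
    detection frame); centres: tracked (cx, cy) for each of the 8 frames.
    Returns an (8, 224, 224, 3) cuboid ready for the I3D network."""
    cuboid = []
    for frame, (cx, cy) in zip(frames, centres):
        x0 = max(0, int(cx - n // 2))
        y0 = max(0, int(cy - n // 2))
        patch = frame[y0:y0 + n, x0:x0 + n]  # N x N context patch
        # Bilinear interpolation brings each patch to the 224 x 224
        # spatial size expected by the standard I3D input.
        cuboid.append(cv2.resize(patch, (224, 224),
                                 interpolation=cv2.INTER_LINEAR))
    return np.stack(cuboid)
```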
| 56 |
+
|
| 57 |
+
Finally, the spatio-temporal convolution feature maps of each cuboid are aggregated across different scales using spatial pyramid pooling. This is followed by the attention networks and network losses discussed in Section 3.2.
|
| 58 |
+
|
| 59 |
+
<figure id="fig:frame_samples_dataset1" data-latex-placement="t!">
|
| 60 |
+
<embed src="images/Dataset_1_samples.pdf" />
|
| 61 |
+
<figcaption>Sample frames from the NPS-drone <span class="citation" data-cites="UAV_IROS"></span> dataset. The green boxes enclose drones.</figcaption>
|
| 62 |
+
</figure>
|
2104.07555/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-04-15T13:10:04.797Z" agent="5.0 (Windows)" etag="Qb6H-ZQaXxO4RjOmcavL" version="14.6.0" type="device"><diagram id="t-eswc73T96jCtbiEeQb" name="Page-1">(base64-encoded diagram payload, not human-readable)</diagram></mxfile>
|
2104.07555/main_diagram/main_diagram.pdf
ADDED
|
Binary file (22.2 kB).
|
|
|
2104.07555/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,43 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Data-to-Text Generation (DTG) aims at generating natural language descriptions of structured input, e.g. a table [@Gatt-2018-survey]. The reliability and precision of generated texts are currently regarded as a major issue in DTG [@Gardent-2020-book], with experimental surveys showing that real-life end users of DTG systems care more about accuracy than about readability [@Reiter-2009-investigation]. Neural NLG systems are known to be fluent but prone to hallucinations [@lee2018hallucinations], i.e. they tend to include non-factual information. However, their evaluation remains an open research problem [@Novikova-2017-need].
|
| 4 |
+
|
| 5 |
+
<figure id="fig:questeval-flowchart" data-latex-placement="t">
|
| 6 |
+
|
| 7 |
+
<figcaption><strong>Data-<span class="smallcaps">QuestEval</span> Flowchart.</strong> Figure adapted from the work of <span class="citation" data-cites="scialom2021safeval"></span> (equation numbers refer to equations in the original paper).</figcaption>
|
| 8 |
+
</figure>
|
| 9 |
+
|
| 10 |
+
A recent approach, QuestEval [@scialom2021safeval], has shown significant improvement over standard metrics on summarization tasks. To measure the semantic matching between an evaluated summary and its source document, QuestEval relies on Question Generation and Question Answering (QG/QA) systems. As illustrated in [1](#fig:questeval-flowchart){reference-type="ref+Label" reference="fig:questeval-flowchart"}, a QG system generates a set of relevant questions conditioned on the source document, which are then asked on the generated summary. Conversely, questions generated from the summary are answered using only the source input. If the answers provided by the QA systems are correct, the summary is deemed consistent with its source document.
|
| 11 |
+
|
| 12 |
+
Can QuestEval be adapted for evaluation on DTG tasks? So far, QuestEval's QG/QA systems have been trained on a purely textual dataset, SQuAD [@rajpurkar2016squad], which restricts the evaluation to comparisons between two texts. Unfortunately, DTG inputs are of different modalities than text (e.g. structured tables). In the absence of specific multimodal-QA datasets, how can one obtain these multimodal-QG/QA models required for a data-QuestEval?
|
| 13 |
+
|
| 14 |
+
To fill this gap, we propose an effective method for creating synthetic multimodal-QG/QA datasets, by relying only on existing, purely textual, QG/QA datasets. Trained on such synthetic multimodal datasets, QA and QG models can now be used in QuestEval, enabling direct comparison between an evaluated text and its structured input, removing the need for costly gold references. Furthermore, this method does not rely on any task-specific annotated QA dataset, which makes the approach general and suitable for any DTG task.
|
| 15 |
+
|
| 16 |
+
# Method
|
| 17 |
+
|
| 18 |
+
<figure id="fig:synth-corpus" data-latex-placement="t">
|
| 19 |
+
|
| 20 |
+
<figcaption><strong>Synthetic corpus creation</strong> We are able to create a dataset of (table, question, answer) triples, by transcribing references into questions via a textual QG-model trained on SQuAD. Numerals refer to steps explained in <a href="#subsec:refless-safeval" data-reference-type="ref+Label" data-reference="subsec:refless-safeval">3.2</a>.</figcaption>
|
| 21 |
+
</figure>
|
| 22 |
+
|
| 23 |
+
To evaluate semantic matching between two input/output texts (e.g. a document and its summary), [QuestEval]{.smallcaps} [@scialom2021safeval] proceeds in two steps: 1) a Question Generation system generates a set of questions and (true) answers given the input text; 2) a Question Answering system predicts the (candidate) answers to these questions relying only on the output text currently evaluated. Candidate answers are evaluated based on F1-score against the true answers and Semantic Matching is then computed as the mean of all F1-scores.
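
To make the scoring loop concrete, here is a minimal sketch; `generate_qas` and `answer` are stand-ins for the trained QG and QA systems (hypothetical helpers, not the official [QuestEval]{.smallcaps} API), and `token_f1` follows the standard SQuAD token-overlap score.

```python
from collections import Counter

def token_f1(pred: str, gold: str) -> float:
    """Token-overlap F1 between a candidate answer and a true answer."""
    pred_toks, gold_toks = pred.split(), gold.split()
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_toks), overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

def semantic_matching(source: str, output: str, generate_qas, answer) -> float:
    """Mean F1 over both directions: questions generated from the source are
    answered on the output, and vice versa."""
    scores = []
    for asked_from, answered_on in [(source, output), (output, source)]:
        for question, true_answer in generate_qas(asked_from):
            scores.append(token_f1(answer(question, answered_on), true_answer))
    return sum(scores) / len(scores) if scores else 0.0
```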

To apply [QuestEval]{.smallcaps} to DTG, and still remain in the textual modality, one can consider a simple baseline: comparing an evaluated description with its (textual) reference, instead of its (data) source. Since the predicted description and the reference are both texts, this approach enables us to re-use [QuestEval]{.smallcaps} in its vanilla form without any multimodal requirements. However, this is not satisfactory, as this metric ignores the structured input, contrary to the original intent of [QuestEval]{.smallcaps}. Further, this makes the metric dependent on human annotations, which may be costly to obtain.

In the following, we present our proposed method to make [QuestEval]{.smallcaps} data-compatible, allowing it to measure the similarity directly between a structured input and the evaluated description.

To make QG/QA metrics usable for reference-less evaluation on DTG tasks, specific QG/QA datasets are needed to train data-aware systems. Relying on an existing corpus is not generalizable: it is unreasonable to expect a multimodal-QA dataset for every DTG dataset. The annotation necessary to build such corpora is costly and time consuming. For this reason, we propose a general approach applicable to any DTG dataset, and requiring no annotated multimodal-QA dataset. The overall process entails four steps (illustrated in [2](#fig:synth-corpus){reference-type="ref+Label" reference="fig:synth-corpus"}):

1. Following [QuestEval]{.smallcaps}, we train a textual QG model on SQuAD.

2. Given the training set of any DTG dataset, composed of (structured-input, textual description) pairs, we generate synthetic questions for each *textual description* using the *textual QG* (from step 1).

3. Each example in the training set is constituted of i) the source (i.e. structured data), ii) the textual description, and iii) the synthetic (Question, Answer) pairs generated during step 2. We can therefore match each structured input to its corresponding set of synthetic Questions & Answers to build a data-QG/QA dataset.

4. The newly built synthetic multimodal-Question corpus is used to train multimodal QG/QA models. For QA, a source corresponds to the structured data and a synthetic question; the target is the corresponding answer.

QG can be seen as the dual task of QA: any QA dataset can be used as a QG dataset by considering the question as the target. To learn representations from structured data, several approaches have been proposed -- e.g. a hierarchical network [@fang2019hierarchical]. We adopt the T5 [@Kale-2020-t5] paradigm, where any task is considered as a Text-to-Text task: we linearize the tables and encode them directly using T5.

In Question Answering, answer correctness (i.e. did the system find the correct answer?) is traditionally measured via F1-score, as popularized by the SQuAD evaluation script [@rajpurkar2016squad]. However, F1-score is based on exact matching of n-grams and is not accurate when evaluating a correct answer that is realized differently from the reference (e.g. a synonym). This is especially concerning in DTG, where input tables often contain data that are not found verbatim in texts (e.g. "*Place of birth: France*" can be realized as "*She is a French \[\...\]*"). To deal with this issue, we move away from the F1-score and measure answer correctness via BERTScore [@Zhang-2020-bertscore], which compares the contextualized representations of the two answers. This smooths the similarity function and provides a more accurate view of answer correctness in most DTG settings.
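
With the reference `bert-score` package, this soft correctness can be computed as sketched below (our own usage example; the choice of underlying model and any thresholds would need task-specific tuning):

```python
from bert_score import score  # pip install bert-score

candidate_answers = ["She is French"]
true_answers = ["France"]

# Returns precision/recall/F1 tensors, one entry per answer pair; unlike
# exact-match F1, paraphrases such as "French" vs. "France" score well above 0.
precision, recall, f1 = score(candidate_answers, true_answers, lang="en")
print(float(f1[0]))
```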

Beyond enabling the comparison of different versions of a given model for a specific project, evaluation metrics make it possible to compare different models altogether between different projects. To avoid inconsistencies in the reporting of [QuestEval]{.smallcaps} scores, our code follows the guidelines of [@post2018call] and produces a short version string that facilitates cross-paper comparisons, identifying the model checkpoints used, preprocessing steps, etc. Reporting this version string will ensure fair comparison across future works.
2104.08253/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-04-20T04:33:22.956Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" etag="CBUvU5zySyu8srKfbAan" version="14.3.0" type="device"><diagram id="RsR2iyIuz9Biq40EbhNH" name="Page-1">7Z1bd6JIEMc/jY+ZA81FfYwmbubMZHfOyc7O5Y1Aq2xQXMSJzqffRkC5tAaSRip0zcw5I81Nqn7d9L8Kyp42Xmz/CKzV/N53qNcjirPtaTc9QlSdkF70T3F2cUt/qMcNs8B1ko2ODQ/ub5o0KknrxnXoOrdh6Pte6K7yjba/XFI7zLVZQeA/5zeb+l7+rCtrlpxROTY82JZHS5t9c51wHrcOjMzWd9SdzdMzq0qyZmGlGycN67nl+M+ZJu22p40D3w/jT4vtmHqR8VK7xPtNTqw9fLGALsMqO3y8XuvbL9OJs/xKNqvFz5F5q14l3vlleZvkgpMvG+5SCwT+ZunQ6CBKTxs9z92QPqwsO1r7zHzO2ubhwmNLKvvoWY/UG1n202y/29j3/ICtWvpLtv1o6i/DxMVEj5Zdzytssg4D/4mmjT2iKUp/MmHXNEq+KQ1Cuj1pAvVgWEYk9Rc0DHZsk2QHo2/GuyQwqqlvnjOu1ZO2ecatBydaCU6zw7GPFmcfEqPzHbB274y7vz9ezz4N7T/7I+I+hXdXRNFLLvjrF12ecYP6shuyhmU2dAw6cHSedQfkUTNNnmui5eTsqgDT900jZ3rDKJv+4I6s6c3mLG+U4TdG99cPn3rGTdfsPyTw7G+W7G+tVuyKO2Z6VRnAs32/ZPuV2z3LEx2e5Qe8UWf8+aGDg46pwDP/kDPjMb0wstPKWubMb/63iSZne7Ncrfd2umYbEH213c9H0vXs0yz6P75vxwdj3y4+Xryqa54t3s719j2bKgYpb+cQ7K+W7S+qZyXzAim6Vmm6AMG3PJ0oyLf7eYccni1ORyB4VpN3OgLB/GX1X7I6XTrXUSSrdwiSZKxcsuZo/5etYUYJdt9Zo/LBSBd/9OIozn7hZptb2qVLWzfM7MaWfiTnij4fd4oW0n0qeI46aaDtlN/YRfubwKZVbvKhFcxoeHaip/NZyPja4Lg6bQuoZ4Xur/xX5vk/OcMX32UXk+nq/cLUd1BgKL7WZLds2K54JK2o3LXCkWJblI60B/Jw4W9hlBMnQUZfngghoxdktBxLQkYryGBk9IKMlmNuyGgF0YGMXpBRTnQSGX1ZPiGjF2SUF8IVxCho1qprHwNZE8NaGg0Awtprx1CI2gcZFcUoL/COjJ5mtPq8EhkVxShBRuswWkOfI6OiGOWkQpBRIYyayKggRhvMF3WR0RqaCRkVxWiD+aJOMlpdMyGjohhtMF/USUarayZkVBSjDeaLOslo9Vg8MiqK0QbzRZ1ktPq9vo+MCmIUVr4IPqPVNRMyKojR9HzIqPDYEzIqilHMMzWlmZBRUYwSZLQWo9U1EzIqilHMMzU1jg6QUUGMYp6pKV2PjIpiFPNMTel6ZFQUo5hnakrXI6OiGMU8U1OaCRkVxSgvz/TOyvZMp1Ni27z+4piPptFG2R6t/QoIGic700zZHgD2L5btAWD/tNzluy7bA8C1pbI9EHzbYEmmC5XtgeDZYtkeCJ4lvFGzgbI9AOxfLNsDwfwNhnC7+Jp0fJMHMxHWXzsRHhTL7unDD8PMn0H+sE3PinWsHlULw3iugxiKxhALRNUbDWGFBTqDIdaAqjcanph2IYZvwxDLPNXDkCCGTWAI68l8lBo1cRoO9Bw+sODCR+qlFB6goUythVDKNf+DDSU+NC+lNoYNJUEoZYRSVXXIVOJz8VIqHeBU4pPwUkod4FTis+9Sah3gVOLT7vWoJEjlBajE59vlvINrCmQqYeVnwFPZGbUDm0pM7MgZGQJNZfogL1IpmdqBTSWmduRUO7CpJEiljGOlacLCEJM5UopuaBhi9kZKlQ0NQ0zXSCmroWGI+RkpFQssDDWlinBmDkx96gfh3J/5S8u7PbaOjm+1R0wct/ns+6sEvX9pGO4SUqxN6Odh5iOrnEX2wuhVKDKkKSdmjm+kz0yDbgkzB6pfgE/YYDXkDVbvrOSPY9CBo/MGzgF51Mw2Sv6QfuvVE4achGwzJX8A2L9Y8geC/cupx6RST7dMXyrJA8H25QTbvpJOxyxfLJnTvuWT+6ScJXMgmJ/Um/RR79F/zs739g1sxdwP3N/MXpa3FwYHtWJ71nrt2vw5Xvr5R26+94IsUXOi5KhR2pYl1YukDtv9oTNlqPAxfMUDZoUjkcKRTmgWBoe1y2y2ijZYn/7KQzV/Hi0pdHlEPz6iYEFUJTyEfaMS8e3+TIVSuO/p/bNKuzL/SoF/tRgKEMS/qvRb6QBVAlPYASp1gJZDp0ozHYAoF+oAh59iu2wHqPI0KHaASh2g3RSWVoy1CuoAmnmhDqANW+kAvAdP32u07zzrqTD8lgAZtcwCy3EZ6zduQO3Q9ZesnaERrT+sy5xpMpmQ8Zgv8UexxO+9XUUWQ4fqJVXk2Pm+3NHRz/tPd/7ubvPPWu8/X9XUkA0lDkpGV/Z/LigBNd5vuXAtdsLDlUe5t4n+9GteMNL7+r538f5VDA1ftH+dcBgvNffeqsG/RxZKsWoIMPCG2/dVPv5cLHxUJRZeFxIRKBSD5xBQ4DzOKUvwHIL5sdD3K6ZHYB7cOeSe3lpTlJitPrijYqHvehjCeregMxhioe9aGCqwHuruDIZY6LveaEgQwyYwbLCQRCcx1BDDJjCEVTkCpYbQgrUtw5WqbSBwQX9xpTPCAzaUWP9ByvkfbCgJQlkHyq5oY9hQYjEIKaF8oS5o21RibQgplQ5wKrFUhJRSBziVWDlCSq0DnEqs7F2Pyo6EvoFTiZW95byDn68L2jaVsPIz8KnsitoBTWX6sh1SKVlkCDaVmNmRU+3AppIglbWo7IragU0l5nakHCuLNUXbxhCTOVKKbmgYYvZGSpUNDUNM10gpq6FhiPkZKRVLaxieLoGSg3BeovBNr6wPbMp/Zf1xYOhGRYLEVhJJ46ovvbJeLKP5mlfWP16v9e2X6cRZfiWb1eLnyLxVr3jT8bjcQ3TtOeOfKB2hmatttm6EYh8Me2xkiPan02m5vsQdtZxMgYn4pCcKTDC7h+fGnWRcyro9abI8dxaVarKZnyhrH0VedG3Lu05WLFzH2Rcd4iGVL0TkWY/UG1n202zfXjx5gSAB0KiHacIZanQx0LDFwI8cdOzYzAbze9+h0Rb/Aw==</diagram></mxfile>
2104.08253/main_diagram/main_diagram.pdf
ADDED
Binary file (25.4 kB).
2104.08253/paper_text/intro_method.md
ADDED
@@ -0,0 +1,83 @@
# Introduction

Language model (LM) pre-training has been very effective in learning text encoders that can be finetuned for many downstream tasks [\(Peters et al.,](#page-10-0) [2018;](#page-10-0) [Devlin et al.,](#page-9-0) [2019\)](#page-9-0). Deep bidirectional Transformer encoder [\(Vaswani et al.,](#page-10-1) [2017\)](#page-10-1) LMs like BERT [\(Devlin et al.,](#page-9-0) [2019\)](#page-9-0) are the state-of-the-art. Recent works fine-tune the CLS token to encode the input text sequence into a single vector representation [\(Lee et al.,](#page-9-1) [2019;](#page-9-1) [Chang et al.,](#page-8-0) [2020;](#page-8-0) [Karpukhin et al.,](#page-9-2) [2020\)](#page-9-2). The resulting model is referred to as a dense encoder or bi-encoder. Fine-tuning associates vector similarities with practical semantics, e.g., textual similarity or relevance, and therefore the vectors can be used for efficient text comparison or retrieval by inner product. Despite their efficiency, bi-encoders are hard to train. Even with sufficient data, bi-encoders still require carefully designed, sophisticated methods to train effectively [\(Xiong et al.,](#page-10-2) [2021;](#page-10-2) [Qu et al.,](#page-10-3) [2020;](#page-10-3) [Lin et al.,](#page-10-4) [2020\)](#page-10-4). They can also take big performance hits in low-data situations [\(Karpukhin](#page-9-2) [et al.,](#page-9-2) [2020;](#page-9-2) [Thakur et al.,](#page-10-5) [2020;](#page-10-5) [Chang et al.,](#page-8-0) [2020\)](#page-8-0). Another common use of a deep LM is the cross-encoder, which takes the compared text pair directly as input and uses attention over all tokens to make a prediction. In contrast to the bi-encoder, the cross-encoder trains more easily and is effective in low-data settings for similarity and ranking tasks [\(Devlin](#page-9-0) [et al.,](#page-9-0) [2019;](#page-9-0) [Yang et al.,](#page-10-6) [2019\)](#page-10-6).

Based on the same LM, however, bi-encoders and cross-encoders have similar language understanding capabilities. To explain the difficulty in training bi-encoders that is not seen in cross-encoders, we look into the internal structure of pre-trained LMs. We find that an LM like BERT, directly out of pre-training, has a non-optimal attention structure. In particular, it was not trained to aggregate sophisticated information into a single dense representation. We term the effort during fine-tuning to adjust the LM's internal activations so that its knowledge can be channeled out for the target task *structural readiness*. We argue that bi-encoder fine-tuning is inefficient due to this lack of structural readiness: many updates are spent adjusting the model's attention structure rather than learning good representations.

Based on our observations, we propose to address structural readiness during pre-training. We introduce a novel Transformer pre-training architecture, Condenser, which establishes structural readiness by performing LM pre-training that actively CONditions on a DENSE Representation. Unlike previous works that pre-train towards a particular task, Condenser pre-trains towards the bi-encoder structure. Our results show the importance of structural readiness. We experiment with sentence similarity tasks, and retrieval for question answering and web search. We find that under low-data setups, with identical test-time architecture, Condenser yields sizable improvement over a standard LM and shows comparable performance to strong task-specific pre-trained models. With large training data, we find that Condenser retrievers optimize more easily, outperforming, with a single round of negative mining, previous models trained with complicated techniques.

<span id="page-0-0"></span><sup>1</sup>Code available at [https://github.com/luyug/Condenser](https://github.com/luyug/Condenser)

# Method

This section discusses the motivation behind Condenser, its design, and its pre-training procedure.

**Transformer Encoder** Many recent state-of-the-art deep LMs adopt the architecture of the Transformer encoder. It takes in a text sequence, embeds it, and passes it through a stack of $L$ self-attentive Transformer blocks. Formally, given input text $x = [x_1, x_2, \dots]$, we can write iteratively,

$$h^0 = \text{Embed}(x) \tag{1}$$

$$h^{l} = \operatorname{Transformer}_{l}(h^{l-1}) \tag{2}$$

Intuitively, Transformer blocks refine each token's representation conditioning on all tokens in the sequence to effectively embed them.

**Transformer LM Pre-training** Many successful Transformer encoder LMs such as BERT are trained with the masked language model (MLM) task. MLM masks out a subset of input tokens and requires the model to predict them. For a masked-out token $x_i$ at position $i$, its corresponding final representation $h^L_i$ is used to predict the actual $x_i$. Training uses a cross-entropy loss,

$$\mathcal{L}_{\text{mlm}} = \sum_{i \in \text{masked}} \text{CrossEntropy}(Wh_i^L, x_i) \quad (3)$$
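
A minimal PyTorch rendering of this loss (our own sketch, with `W` as the vocabulary projection) may help fix notation:

```python
import torch
import torch.nn.functional as F

def mlm_loss(hidden: torch.Tensor,     # (batch, seq, d): final representations h^L
             W: torch.Tensor,          # (vocab, d): output projection
             token_ids: torch.Tensor,  # (batch, seq): original token ids
             mask: torch.Tensor):      # (batch, seq) bool: True at masked positions
    """Cross-entropy over masked positions only, as in Eq. (3)."""
    logits = hidden[mask] @ W.t()      # (n_masked, vocab)
    return F.cross_entropy(logits, token_ids[mask])
```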

A special token, typically referred to as CLS, is prepended and encoded with the rest of the text.

$$[h_{cls}^0; h^0] = \text{Embed}([\text{CLS}; x]) \tag{4}$$

$$[h_{cls}^l; h^l] = TF_l([h_{cls}^{l-1}; h^{l-1}]) \tag{5}$$

Some models train CLS explicitly during pre-training, notably BERT's next sentence prediction (NSP; [Devlin et al.](#page-9-0) [\(2019\)](#page-9-0)), while others do so implicitly [\(Yang et al.,](#page-10-6) [2019;](#page-10-6) [Liu et al.,](#page-10-7) [2019\)](#page-10-7).

Recall that in Transformers, all tokens, including the CLS, receive information about other tokens in the sequence only through attention. Attention patterns therefore define how effectively CLS can aggregate information. To understand the attentive behaviors of CLS, we borrow the analysis of BERT from [Clark](#page-8-5) [et al.](#page-8-5) [\(2019\)](#page-8-5): 1) in most middle layers, the CLS token has attention patterns similar to other text tokens and is not attended by other tokens; 2) only in the last layer does CLS have unique broad attention over the entire sequence, in order to perform the NSP task. In other words, the CLS token remains dormant in many middle layers and reactivates only in the last round of attention. We argue that an effective bi-encoder should actively aggregate information of different granularity from the entire sentence through all layers, and this structure in a standard pre-trained LM is not immediately ready for fine-tuning. We will verify this claim with experiments in [section 4](#page-3-0) and with a quantitative analysis of the attention of BERT, ICT, and the proposed Condenser in [section 5.](#page-7-0)

<span id="page-2-0"></span>

Figure 1: Condenser: We show 2 early and 2 late backbone layers here; in our experiments each group has 6 layers. The Condenser head is dropped during fine-tuning.

Building upon Transformer encoder LMs, which condition on left and right context [\(Devlin et al.,](#page-9-0) [2019\)](#page-9-0), we present the bi-encoder pre-training architecture Condenser, which actively CONditions on a DENSE Representation in LM pre-training.

**Model Design** Like a Transformer encoder, Condenser is parametrized as a stack of Transformer blocks, shown in [Figure 1.](#page-2-0) We divide them into three groups: $L^e$ early encoder backbone layers, $L^l$ late encoder backbone layers, and $L^h$ Condenser head layers. The input is first encoded by the backbone,

$$[h_{cls}^{early}; h^{early}] = \text{Encoder}_{\text{early}}([h_{cls}^{0}; h^{0}]) \quad (6)$$

$$[h_{cls}^{late}; h^{late}] = \text{Encoder}_{\text{late}}([h_{cls}^{early}; h^{early}]) \quad (7)$$

**Condenser Head** The critical design is that we put a short circuit from the early output to the head, which takes in a pair of *late-early* representations,

$$[h_{cls}^{cd}; h^{cd}] = \text{Condenser}_{\text{head}}([h_{cls}^{late}; h^{early}]) \quad (8)$$

We train with an MLM loss on the head's output,

$$\mathcal{L}_{\text{mlm}} = \sum_{i \in \text{masked}} \text{CrossEntropy}(Wh_i^{cd}, x_i) \quad (9)$$

We follow the masking scheme in Devlin et al. (2019) to combat train-test difference.

Within Condenser, the late encoder backbone can further refine the token representations but can only pass new information through $h_{cls}^{late}$, the late CLS. The late CLS representation is therefore required to aggregate information newly generated later in the backbone, and the head can then *condition* on the late CLS to make LM predictions. Meanwhile, by skip-connecting the early layers, we remove the burden of encoding local information and the syntactic structure of the input text, focusing the CLS on the global meaning of the input. The layer numbers $L^e$ and $L^l$ control this separation of information.

The architecture of Condenser is inspired by Funnel Transformer (Dai et al., 2020), which is itself inspired by U-net (Ronneberger et al., 2015) from computer vision. Funnel Transformer reduces the sequence length by a factor of 4 during the forward pass and uses a 2-layer Transformer to decode the length-compressed sequence onto a skip-connected full-length representation. Funnel Transformer was designed to speed up pre-training, while our Condenser learns dense information aggregation.

**Fine-tuning** The Condenser head is a pre-train-time component and is dropped during fine-tuning. Fine-tuning trains the late CLS $h_{cls}^{late}$ and back-propagates gradients into the backbone. In other words, Condenser reduces to its encoder backbone, effectively becoming a Transformer encoder for fine-tuning; the head is only used to guide pre-training. During fine-tuning, Condenser has the same capacity as a similarly structured Transformer. In practice, Condenser can be a drop-in weight replacement for a typical Transformer LM like BERT.

In this paper, we opted to initialize Condenser with pre-trained Transformer LM weights. This accommodates our compute budget, avoiding the huge cost of pre-training from scratch. It also gives us a direct comparison to the original LM. Given a pre-trained LM, we initialize the entire Condenser backbone with its weights and randomly initialize the head. To prevent gradients back-propagated from the random head from corrupting the backbone weights, we place a semantic constraint by performing MLM also with the backbone's late outputs,

$$\mathcal{L}_{\text{mlm}}^{c} = \sum_{i \in \text{masked}} \text{CrossEntropy}(Wh_i^{late}, x_i) \quad (10)$$

The intuition behind this constraint is that encoding the per-token representations $h^{late}$ and the sequence representation $h^{late}_{cls}$ rely on similar mechanisms and will not interfere with each other. As a result, $h^{late}$ can still be used for LM prediction. The full loss is then defined as the sum of the two MLM losses,

$$\mathcal{L} = \mathcal{L}_{\text{mlm}} + \mathcal{L}_{\text{mlm}}^c \tag{11}$$

The output projection matrix $W$ is shared between the two MLM losses to reduce the total number of parameters and memory usage.
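
Putting Eqs. (6)-(11) together, a schematic PyTorch sketch of Condenser pre-training follows; this is our reading of the architecture, not the released implementation, and `layer_factory` is assumed to build a standard Transformer block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Condenser(nn.Module):
    def __init__(self, layer_factory, n_early=6, n_late=6, n_head=2,
                 d_model=768, vocab_size=30522):
        super().__init__()
        self.early = nn.ModuleList(layer_factory() for _ in range(n_early))
        self.late = nn.ModuleList(layer_factory() for _ in range(n_late))
        self.head = nn.ModuleList(layer_factory() for _ in range(n_head))
        self.W = nn.Linear(d_model, vocab_size, bias=False)  # shared projection

    def forward(self, h, token_ids, mask):
        # h: (batch, 1 + seq, d) embeddings with CLS at position 0.
        # mask: boolean (batch, 1 + seq), True at masked token positions.
        for layer in self.early:
            h = layer(h)
        h_early = h
        for layer in self.late:
            h = layer(h)
        h_late = h
        # Short circuit (Eq. 8): late CLS paired with early token states.
        h_cd = torch.cat([h_late[:, :1], h_early[:, 1:]], dim=1)
        for layer in self.head:
            h_cd = layer(h_cd)
        # Dual MLM losses over masked positions, shared W (Eqs. 9-11).
        loss_head = F.cross_entropy(self.W(h_cd[mask]), token_ids[mask])
        loss_late = F.cross_entropy(self.W(h_late[mask]), token_ids[mask])
        return loss_head + loss_late

# At fine-tuning time the head is dropped: h_late[:, 0] is the dense vector.
```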
2104.12437/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="Electron" modified="2021-05-25T17:20:49.317Z" agent="5.0 (Macintosh; Intel Mac OS X 11_3_1) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.6.13 Chrome/89.0.4389.128 Electron/12.0.7 Safari/537.36" etag="i46baX3KRHBME1E13h_5" version="14.6.13" type="device"><diagram id="VdQI2MI_SHDW01Efr5z5" name="Page-1">7Vptk6I4EP41ftwpkvAiHx10Zq+urs6qma27uW9ZyQi7SDyIo+6vvwQChICKijq3O07VmHRIA/083elOHCBvsXlM8DL4g/okGkDD3wzQeAChZQL+Xwi2uQANzVwwT0I/F4FK8BT+IFJoSOkq9Emay6SIURqxcJnWZs9oHJMZq8lwktB1/bJXGvk1ZUs8Jw3B0wxHpDZPSP8KfRbk0iF0KvlnEs6D4s7AdvORBS4ulorTAPt0rYjQZIC8hFKWtxYbj0TCdmHtCR92jJYPlpCYdZkwxS9/snQzdaabr99+GKMZCZJP8jXecLSSLywflm0LC5DYHwlD8l5MYy68D9gi4j3AmynDCdOGM9lDGIlrjKyf0O/EoxFNMo3Izj7FlRJzk/fJJmR/i1l3roVk/yXrI9OW/fFGqs06W6UzJUm4IIwkUtY0kLRZSlfJjOyxijQC8ecqC6RZHwnld0m2/IJ1xQhLohwoZChkCYkwC9/qjMKSmPNSXXmHKQ35E0ND+hBEUo90IdMw6iry95GzVAZoikxQV4TgsK6IwzEnrKGIN5TXrkQZwY4g27BPss0inKbhTOMbOIpvOxnyTpC3oAbYcHga8sjSFCHjqsi7RyFfQXtkpOE6FB5wUJNtHk6sovuijlWhJOsVseSIeBX7XdiUG3dfvEHvinZoqLHFPTHg6IqArujCtAMN2o2bvIsinksINq2DkJGnJc6WhjXPZuoEfOXMUkgBATKGrXQxXMBHxQwaM0X+mn24HKfLPFV5DTfE38ecN5IwstlLicLUtqXFdplnKZRBLZTRA4HKjhocx9reatjeO8P2XS2pgfTw4CAHtYHkeQb/XMb2sFhXb2V7AA/HW56OLkWTvzmOIhLReYIX3B5LJYuqjSnp1WFX4WjIyCj6CWU8HtGYd93MZfIhJMa4aj8k7fBSfoeQCas67Z5m2LbnNWF3rXs4nijKx2HCeZI/wZqkrB/cea5/Z5p16B3rzmqADxFooq/Hwt7QRw3wJ71FvQ4Odc2oZ+pRz7q15zWNvy/TkXmMj9NAmCSzuGJ8IZ9ixn0uziTQgKXpi2oUKHVT0X6pkp7dFVOZHam5kZIqdc2OnMnInpxXbxXx6mCiVFz4XhIlUPd+nu/cueaJuZKt6UJNXRdOl9pWDTti0qfF7gmWMNr/rsS2xf0zDugCV33emovvUTGRP0c+N5f/rNkXdLSM2YY3jkNttXYNywqEAjwx8Clfmjl+BgDLTRPZz3xEjIuJxsC5z2wgWl7eyu+Rrr4WE4BCBUWcqRE6QDZ73JUwHBCmF4YqA2REVekiRTgK5yKMzjjMWRIj4A1nOBrJgUXo+9EuKiZ0FftZjDb0gN0epavcxWpZFcvMpY/0U98fKrZ5FPIBo4V98FLsg83CS2PfrDRFRTBk+uKvybmjaPV/otEuUmjc6SNVdd16gHKbHDGvGaDgwcXmbIrAD4ocQxELgDpFHOfGFGnLpfulCPqgyHEbHdp+noM6UQTa51Mknvzz/DhbBDEckkX8Zj0Y1ral1G1mmL2cKNT3fU+ogd5JxQIdzcmhdXLFAof6liO4YMXSir95Lfy7nwiUZw4GsOuVtQNLwY7aOut1PsY8fKzwvqjX2zGmpgjpeeylTxWMQ+tS5zq580pz8Z3yDgV1DwuIXidDs5ljXKpO/jJ5hi+j529fPPLb9/U8WQe/P7WcEHUIIN2366qd0mK7DpaG1NzynJ2zm7m0flCo49R580tz6VJx/y7dSoOT1pFzd20PZRGHCXKzfU9Yryah4xSnHkcnES6sq7KtqyLfPKb8QH5v/qg5quWciLu5YyW4Eu7H/SLll8dd93jkwr48Hl051rembx/Qd3Z55MB+XL48JDkbeN6tfkebX179GBlN/gM=</diagram></mxfile>
2104.12437/main_diagram/main_diagram.pdf
ADDED
Binary file (24 kB).
2104.12437/paper_text/intro_method.md
ADDED
@@ -0,0 +1,171 @@
# Introduction

In set theory, the notion of function is built from the concept of *binary relation*. We adapt the definition and notation from @hamilton1982numbers.

::: definition
**Definition 1** (Binary relation). *A binary relation from a set $\mathcal{X}$ to a set $\mathcal{Y}$ is a subset of the Cartesian product of the two sets. If $R$ is such a relation and $(x, y) \in R$, we say that $x$ is related to $y$ and for convenience we may write $xRy$.*
:::

What differentiates a relation from a function is that multiple outcomes can be in the image of a single input element of a relation. The second difference is that some points from $\mathcal{X}$ may not be related to any point in $\mathcal{Y}$. Hence the following definition:

::: {#def:function .definition}
**Definition 2** (Function). *A partial function $f$ is a binary relation that is single-valued. For all $x \in \mathcal{X}$, $y, z \in \mathcal{Y}^2$: $$\begin{equation*}
((x,y) \in f) \; \wedge \; ((x,z) \in f) \; \Rightarrow \; y = z
\end{equation*}$$ To obtain a function, we additionally require this partial function $f$ to be left-total: $$\begin{equation*}
\forall x \in \mathcal{X}, \; \exists y \in \mathcal{Y}, \; (x, y) \in f
\end{equation*}$$ When these two conditions are met, we can write the familiar expression $f(x)$ that denotes for all points of $\mathcal{X}$ the existing and unique element $y \in \mathcal{Y}$ such that $xfy$.*
:::

These two definitions are the starting point of our formalisation. We consider a given dataset of samples and associated labels. We want to express it as a dependence between the given input and associated labels. By definition, a dataset induces a binary relation between an input set $\mathcal{X}$ and a target set $\mathcal{Y}$ (continuous or discrete, it does not matter at this point). We denote it $D$. Without loss of generality, we assume $D$ to be left-total, or reduce $\mathcal{X}$ accordingly. The single-value condition in definition [2](#def:function){reference-type="ref" reference="def:function"} tells us when it is possible or not to uniquely assign a target label/value to a point in space, hence creating a functional dependence, *i.e.* given a dataset, this point is always related to a specific target, it *implies* it. For a given binary relation $R$, we define the subset of the domain $\mathcal{X}$ such that this condition is met: $$\begin{equation*}
A(R) = \{ x \in \mathcal{X} \mid \forall y,z \in \mathcal{Y}^2, \; xRy \wedge xRz \Rightarrow y = z \}
\end{equation*}$$ By construction, our dataset $D$ with its domain restricted to $A(D)$ is a function, meaning that points of $D$ are *uniquely associated* on $A(D)$. By contrast, all points in $\bar{A}(D)$ are such that multiple target labels are related to a single input: the information given by the point's position alone is intrinsically insufficient to predict or assign a label. This is aside from the probabilistic considerations we will have in [2.4](#sec:relaxation){reference-type="ref" reference="sec:relaxation"}.

For selection though, we are interested in *defining dependence to only a subset among the input dimensions*. To do that, we first ignore some features by setting them to zero. There, it is convenient to use canonical projections. We denote $\mathcal{X} \subseteq \mathbb{R}^n$, $(\vec{e_1},... \vec{e_n})$ the canonical base of $\mathbb{R}^n$ and for a subset of indices $I \subset [n]$ the canonical projection on those indices $P_I(x) = \textrm{proj}(x, \{ \vec{e_k} \mid k \in I \})$, and then define for a relation $R$, its relation with projected domain $R_I$: $$\begin{equation*}
R_I = (P_I \times \textrm{Id}_{\mathcal{Y}})(R)
\end{equation*}$$ For our dataset $D$, $D_I$ is the dataset such that all input features with indices not in $I$ are set to zero[^2]. As a result, multiple points in the domain of $D$, with potentially different labels, may be collapsed into a single representative in $D_I$, thus killing the functionality property they may have verified in $A(D)$. On the point-wise level, the construction of a projected relation $R_I$ implies that if $xRy$ then $(P_Ix)R_Iy$, and reciprocally if $x_IR_Iy$, there exists an antecedent $x$ such that $xRy$ and $P_Ix = x_I$. We refer to figure [2](#fig:intro){reference-type="ref" reference="fig:intro"} for a simple example to reason about the different concepts introduced in this section.

We now extend the previous definition of the functional domain $A$ to the case where we only consider subsets of features.

::: {#def:local_dependance .definition}
**Definition 3**. *For a given relation $R \subset \mathcal{X} \times \mathcal{Y}$, a subset of indices $I \subset [n]$ and $R_I$ its projection to $I$, $A_I(R) \subset \mathcal{X}$ is the subset such that for all $x \in \mathcal{X}$, $y, y' \in \mathcal{Y}$, $x_I = P_Ix$, $$\begin{equation*}
x \in A_I(R) \Leftrightarrow (x_IR_Iy \wedge x_IR_Iy' \Rightarrow y = y')
\end{equation*}$$ Or equivalently, $x$ is in $A_I(R)$ if and only if $$\begin{equation*}
\forall x' \in \mathcal{X} \; \textit{s.t.}\, P_Ix' = P_Ix, \; xRy \wedge x'Ry' \Rightarrow y = y'
\end{equation*}$$*
:::

::: proof
*Proof.* $(P_Ix' = P_Ix = x_I) \wedge (xRy) \wedge (x'Ry') \Leftrightarrow (x_IR_Iy) \wedge (x_IR_Iy')$ ◻
:::

By construction, $D_I$ with its domain restricted to $P_I(A_I(D))$ is a function. Said differently, for a given subset of indices $I$, for all points $x \in A_I(D)$, a target label $y$ can be uniquely associated to $x$ by the mere knowledge of its subset $I$ of features. Compared to definition [2](#def:function){reference-type="ref" reference="def:function"}, the only added condition is that single-valuedness must be verified not only by $x$ but also by all points with the same projection as $x$ on $I$.

Now, once we have computed all $2^n$ domain subsets $A_I(D)$, the selection problem is formulated as the task of finding minimal subsets of input indices that all points functionally depend on, which leads to two possible settings:

::: problem
**Problem 1** (Global subset selection). *Given a relation $R$, find a subset of indices $I^* \subset [n]$ that minimises $$\begin{align*}
\min\limits_{J \subset [n]} \; & \textrm{Card}(J) \\
\textrm{s.t.} \quad & \forall x, \; x \in A_J(R)
\end{align*}$$*
:::

::: problem
**Problem 2** (Instance-wise subset selection). *Given a relation $R$, for all $x \in \mathcal{X}$, find a local subset of indices $I^*(x) \subset [n]$ that minimises $$\begin{align*}
\min\limits_{J \subset [n]} \; & \textrm{Card}(J) \\
\textrm{s.t.} \quad & x \in A_J(R)
\end{align*}$$*
:::

Note that it is not assured that these minima are unique, which is not problematic and rather natural, for instance when some input features are correlated.

Our derived definition of dependence/contribution and *global* selection coincides with @blum1997selection. In the rest of the paper, we study its *instance-wise* extension, for it is the most difficult case, with the largest risk of providing degenerate explanations if not done carefully.

The above definitions allow us to derive properties a given instance-wise selection solution $\hat{I}(x)$ should verify.

::: {#prop:hyperspace .property}
**Property 1** (Complementary dependence). *If a point depends on a subset of indices, all points in directions in the complement of this subset have the same dependence: for $x \in \mathcal{X}$, if there exists $I \subset [n]$ such that $x \in A_I(R)$, then for all $x' \in \mathcal{X}$ such that $P_Ix' = P_Ix$, one has $x' \in A_I(R)$.*
:::

::: proof
*Proof.* $[(P_I(x'),y') \in R_I] \wedge [(P_I(x'),y'') \in R_I] = [(P_I(x),y') \in R_I] \wedge [(P_I(x),y'') \in R_I] \Rightarrow y' = y''$ ◻
:::

This property is illustrated in figure [1](#fig:front){reference-type="ref" reference="fig:front"}. We will see in the experiment section [4](#sec:exp){reference-type="ref" reference="sec:exp"} that this property is not verified by some widely used attribution methods.

::: {#prop:parents .property}
**Property 2** (Dependence hierarchy). *Any point that depends on a subset also depends on its parent subsets: $I \subset J \Rightarrow A_I(R) \subset A_J(R)$.*
:::

::: proof
*Proof.* $R_I = ((P_I \times \textrm{Id}_{\mathcal{Y}}) \circ (P_J \times \textrm{Id}_{\mathcal{Y}}))(R)$, thus $(P_JxR_Jy) \wedge (P_Jx'R_Jy') \Rightarrow (P_IxR_Iy) \wedge (P_Ix'R_Iy') \Rightarrow y = y'$ ◻
:::

We have formalised the notion of binary feature contributions for the selection task in quite an unrealistic case where we could find a perfect dependence. We now propose to *frame attribution values as its probabilistic relaxation*. Indeed, there are several reasons we may want to adopt a probabilistic framework and relax functional dependence:

- Real data is noisy, we only have access to a sample of it, and we may wish to control a certainty of dependence;

- For continuous $\mathcal{Y}$, we may tolerate having several outcomes for $x \in \bar{A}_I(R)$, provided they are close to one another; and for categorical $\mathcal{Y}$, a small stochasticity of the label;

- Generally, we want to accurately model probable associations of input and target label/values while minimising the weight of rare and out-of-distribution points.

Instead of a dataset $D$, we now consider probabilistic densities $p_X$ and $p_{Y\mid X}$ on $\mathcal{X}$ and $\mathcal{Y}$ with their usual associated input and target random variables $X$ and $Y$. We relax our notion to *approximate functional dependence*. At that point, we have to consider task-dependency, as there is no one-relaxation-fits-all rule (*P1*). Attribution values should however still allow one to differentiate between relevant and non-relevant subsets of features to be meaningful (*P2*). As a general framework, we first define an attribution relaxation $\textrm{attr}_I(x)$ for all subsets $I \subset [n]$ and all samples $x \sim X$, and we then create the link to selection with a comparison to a chosen threshold parameter $\eta$. For instance, we could choose that all subsets of features with absolute attribution value higher than $\eta$ should be selected. We cannot define an all-encompassing comparison mechanism: the implication mechanism from attribution to selection is part of the relaxation's elaboration and directly translates the meaning of the degree of approximation we choose with $\eta$. We give some examples of attribution relaxations to clarify this framework.

**Regression setting** Let $Y$ be continuous, *e.g.* $\mathcal{Y} = \mathbb{R}$, and let the function we want to interpret be the mean mapping $f(x) = \mathbb{E}[Y \mid X = x]$. To define an instance-wise responsibility measure $g_I$ that will imply functional dependence on $I$, we can use the conditional variance: $$\begin{align}
\label{measure:variance}
g_I(x) & = \textrm{Var}_{X \mid X_I}[ Y \mid X_I = P_I(x) = x_I ]
\end{align}$$ where $X_I$ denotes the projected random variable $P_I(X)$. We verify that $g_I(x) = 0$ if and only if, for all samples $(x', y')$ such that $P_Ix' = x_I$, the associated value $y'$ is equal to the conditional mean $\mathbb{E}_{X_{\bar{I}} \mid X_I}[Y \mid X_I = x_I]$, hence verifying $x \in A_I(f)$ and thus (*P2*) in the perfect setting.

In the literature, it is more usual that attribution values near zero denote independence from a subset. To do that, we could use the reciprocal notion of *precision*: $\textrm{attr}_I(x) = 1 / g_I(x)$. When the precision is low, the samples with common features on the indices $I$ are spread out, and it is thus not possible to assign a value that will be representative enough of these points. When precision is high, the mean value will be a relevant predictor of the points, and we can state that we have a dependence on $I$ *with a given precision/variance*.

With this measure and for a given variance threshold $\eta$, we have obtained **approximated functionality domains**: $$\begin{equation}
A_I^\eta(f) = \{ x \in \mathcal{X} \mid |\textrm{attr}_I(x)| \geq 1/\eta \}
\end{equation}$$ Again, we verify that $A_I^0(f) = A_I(f)$ (*P2*).

To fix ideas through a simple application example, let us consider a bidimensional uniform input $X = (X_1, X_2)$ on $\mathcal{X} = [-1, 1]^2$, and $Y$ such that, $$\begin{align*}
& p_X = p_{X_1}p_{X_2} = 1/4 \\
& Y = X_1 + \alpha X_2, \quad | \alpha | < 1
\end{align*}$$ which corresponds to a deterministic identity mapping from $X_1$ to $Y$ with a small tilt effect from $X_2$ with coefficient $\alpha$. Then, for all $x_1 \in [-1, 1]$, $$\begin{equation*}
\textrm{Var}_{X_2 \mid X_1}[Y \mid X_1 = x_1] = \frac{1}{2} \int\limits_{-1}^{1} (\alpha t)^2 {dt} = \alpha^2 / 3
\end{equation*}$$ For a given variance threshold $\eta$, the attribution measure [\[measure:variance\]](#measure:variance){reference-type="eqref" reference="measure:variance"} states that if $\alpha \leq \sqrt{3\eta}$, the target variable $Y$ can be approximated with $Y' = X_1$, [i.e.]{.roman} $X_2$ is ignored and only $X_1$ is responsible for $Y$.

Similarly, let us have $Y = X_1 + \epsilon$, with $\epsilon$ a noise variable following $\mathcal{N}(0, \sigma^2)$. Because of the noise, there is no region of the domain where samples of $Y$ can be uniquely determined on a set of variables. But when $\sigma^2 \leq \eta$, the noise can be ignored and this distribution can be approximated by the univariate distribution of $Y' = X_1$ with variance $\eta$.
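
A quick Monte Carlo check of the first example (a sketch of ours) recovers the $\alpha^2/3$ conditional variance:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.3
x1, x2 = rng.uniform(-1, 1, 10**6), rng.uniform(-1, 1, 10**6)
y = x1 + alpha * x2

# Estimate Var[Y | X1] by binning x1 (20 bins); independence from X2
# makes the conditional variance essentially flat across bins.
bins = np.digitize(x1, np.linspace(-1, 1, 21))
cond_var = np.mean([y[bins == b].var() for b in range(1, 21)])
print(cond_var, alpha**2 / 3)  # both close to 0.03
```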

With the attribution measure [\[measure:variance\]](#measure:variance){reference-type="eqref" reference="measure:variance"}, we were able to relax dependence to a probabilistic framework, allowing us to control noise and small feature effects and to yield approximate feature contributions. Our choice of relaxation through the conditional variance works well when $Y$ is assumed to follow a normal law $\mathcal{N}(\mu(X), \sigma(X)^2)$. This is of course not the only possible attribution measure, in particular if we want to study more than the mean effects $f$ we chose.

**Classification setting** When $Y$ takes values in a set of $n$ labels $c_1, \dots, c_n$, it seems natural to define an attribution measure as the probability of assigning the label with maximum probability: $$\begin{align}
P^c_I(x) & = \mathbb{P}( Y = c \mid X_I = P_Ix ) \nonumber \\
\textrm{attr}_I(x) & = \max\limits_c P^c_I(x) \label{measure:prob} \\
A_I^\eta(f) & = \{ x \in \mathcal{X} \mid \textrm{attr}_I(x) \geq 1 - \eta \}
\end{align}$$ The attribution value is bounded between $1/n$ (uniform) and $1$ (deterministic label). These responsibilities have a nice interpretation since they directly represent the proportion of samples in the same class when conditioning on the variables in $I$. Adjusting $\eta$ also means that we control the error on the prediction of a class for these samples.

In the perfect setting, for $\eta = 0$ we check that $x \in A_I^0(f) \Rightarrow \textrm{attr}_I(x) = 1 \Rightarrow x \in A_I(f)$, thus (*P2*). In the imperfect setting, our function $f$ under study is noisy, and **the goal is to tune $\eta$ to maximise the verification of (*P2*)**, which we will evaluate in section [4](#sec:exp){reference-type="ref" reference="sec:exp"}.

Alternatively, it may be more relevant to take into consideration all label probabilities with an entropy measure: $$\begin{align}
\textrm{attr}_I(x) & = 1 - \sum_c \frac{P^c_I \ln(P^c_I)}{\ln(1/n)} \label{measure:entropy}
\end{align}$$ We have normalised the entropy to obtain a value between 0 (uniform label distribution) and 1 (deterministic label).
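
Both classification measures are simple statistics of the empirical conditional label distribution; a small sketch (ours) for discrete features:

```python
import numpy as np

def label_distribution(dataset, x, I):
    """Empirical P(Y = c | X_I = P_I x) from a list of (x, y) samples."""
    proj = tuple(x[i] for i in I)
    labels = [y for x2, y in dataset if tuple(x2[i] for i in I) == proj]
    _, counts = np.unique(labels, return_counts=True)
    return counts / counts.sum()

def attr_maxprob(p):
    """Probability of the majority label; 1/n (uniform) to 1 (deterministic)."""
    return p.max()

def attr_entropy(p, n_labels):
    """Normalised entropy measure: 0 (uniform) to 1 (deterministic label)."""
    h = -(p * np.log(p)).sum()  # p only contains observed (non-zero) classes
    return 1.0 - h / np.log(n_labels)
```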

# Method

We present classic and state-of-the-art selection/attribution methods in the light of the formalism we propose, with a specific focus on instance-wise methods. Due to space constraints, it is impossible to present all variations of assumptions and clever solutions of these methods; we will thus only present four general ideas that, we think, constitute the bulk of research on instance-wise feature attribution.

The first thing we have to mention is that the attribution relaxation [\[measure:variance\]](#measure:variance){reference-type="eqref" reference="measure:variance"} we introduced in the context of regression is strongly inspired by the success of the classical *analysis of variance* diagnostics and its more recent formulation of *weighted functional ANOVA* [@hooker2007generalized], which decomposes $\mathcal{L}_2$ functions into the sum of all $n$-variate subfunctions under a hierarchical orthogonality constraint, weighted by the data distribution. Given that one takeaway of our paper will be that we have to consider the full input distribution for relevant interpretations, not just local information, we should have been happy with weighted fANOVA. Specifically, one key consequence of fANOVA is that the overall variance can be decomposed as a sum of variances from each subfunction, and hence each input subset. However, this decomposition is made identifiable through an *integration-to-zero* constraint on the subfunctions, allowing one to formulate global selection criteria but not to distinguish the non-null instance-wise contributions we seek from any centering effects (see Supplementary A).

Another idea, similar in spirit to fANOVA, is to directly learn a mixture of $n$-variate functions. Since there is a potentially exponential number of subfunctions, one approximation making training tractable is to consider only summed univariate contributions -- *e.g.* *GAM* [@hastie1990generalized]; or interactions up to a fixed order -- *e.g.* *GA^2^M* [@lou2013accurate], *NIT* [@tsang2018neural]; or with a fixed structure -- *e.g.* *Archipelago* [@tsang2020does], *InterpretableNN* [@afchar2020making]. The key advantage of mixture models is that they disentangle the different orders of interaction effects. In our formulation of dependence, no distinction can for instance be made between $f(x) = x_1 + x_2$ and $f(x) = x_1x_2$ with a uniform input distribution. This may be useful in some applications. But conversely, and beyond the trivial limitation that these models provide solutions within a restricted candidate space, additive models strongly suffer from an identifiability issue and can produce contradictory interpretations. Identifiability can be achieved with fANOVA-like regularisation [@lengerich2019purifying], but we have argued that this does not allow one to obtain exact attributions in an instance-wise setting. This effect gets worse with high-order interactions and redundant or correlated features. Meanwhile, our attribution formalisation allows us to distinguish multiple possible candidate solutions, hence isolating redundancies, but at the cost of interaction hierarchical decomposability. We may assert that both approaches are complementary.

A large body of work on instance-wise attribution circumvents the above tractability issue by providing proxy measurements of attribution. Two large classes of methods are **gradient-based** analysis -- *e.g.* saliency methods [@simonyan2013deep], *SmoothGrad* [@smilkov2017smoothgrad], \... ; and **baseline-comparison** methods -- *e.g.* *LIME* [@ribeiro2016should], *SHAP* [@NIPS2017_7062], \... ; the line between these two classes is fuzzy -- *e.g.* *Integrated Gradient* [@sundararajan2017axiomatic], *Expected Gradient* [@erion2019learning]. Again, we will not discuss the profusion of variations but only their general spirit. For a good meta-analysis on a unification of these methods, we recommend [@covert2020feature] and [@sundararajan2020many]. Nevertheless, the underlying principle behind the computation of a gradient as an indication of feature contribution can be found in its simplest form in @friedman2008. In substance, it says that *a function $F(x)$ is said to exhibit an interaction between $k$ variables with indexes $I = (i_1, \dots i_k)$ if $\mathbb{E}_X[{\partial^k F}/{\partial x_{i_1} \dots \partial x_{i_k}}]^2 > 0$*, meaning that the difference in value of $F(x)$ as a result of changing some variables of $I$ depends on the remaining variables of $I$. Beyond noise considerations that may create nuisance interactions, this approach is rather sound for global selection. Problems occur in its extension to the instance-wise setting, when $\mathbb{E}_X$ is dropped without any further considerations. This is the foundation of saliency methods, and subsequent papers have focused on providing gradient estimates that are robust to noise. To adopt the same formalism as before, we can write these gradient-based selection measures in the general form: $$\begin{equation}
\label{measure:gradient}
G_I(f) = \{ x \in \mathcal{X} \mid ({\partial^{|I|} f(x)} / {\partial X_I})^2 > 0 \}
\end{equation}$$ with $f$ a function. For relaxed formulations of attribution, many aspects have to be considered to provide a relevant estimate of the derivatives for a given task; we will not discuss them here and assume an ideal favorable setting where this measure is available.

Baseline-comparison methods, in the spirit of counterfactual reasoning, determine the extent to which a function output differs from an output considered "neutral" -- the baseline. Many choices exist to model the baseline; a common one is to estimate a conditional expectation. We may formalise them in the general form: $$\begin{equation}
\label{measure:baseline}
C_I(f) = \{ x \in \mathcal{X} \mid f(x) \neq \mathbb{E}[f(X) \mid X_I = P_Ix ] \}
\end{equation}$$ Choosing another baseline, such as $f(X_{I}, \mathbb{E}_{\bar{I}}(X_{\bar{I}}))$ [@NIPS2017_7062], does not change our discussion.

To link these two subsets with the previous notions, we introduce the following subset of $\mathcal{X}$: $$\begin{equation}
\label{eq:B}
B_I(f) = \{ x \mid \exists x', \; P_{\bar{I}}x' = P_{\bar{I}}x, \; f(x) \neq f(x') \}
\end{equation}$$ *i.e.* the set of points $x$ for which, when fixing the $I$ features, there is still an alternate value for $f$. This notion is reminiscent of the functionality property in the subsets $(A_I)$, and indeed we have the trivial connection $B_I = \overline{A_{\bar{I}}}$. Then, for gradient-based methods, having a finite non-null gradient implies that there exists a neighborhood containing distinct values for $f$, and hence $G_I \subset B_I$. But gradient methods miss some cases: for instance, if $f$ is constant in the neighborhood of $x$ but varies further away, $x$ will not be included in $G_I$. Similarly, we have $C_I \subset B_I$: for a probable point to differ from an average, there must exist points with different values that counterweight its deviation from the mean. Note that the case of improbable points can be handled with a restriction of $\mathcal{X}$. $C_I$ also misses some points of $B_I$: if a point is associated with the baseline target value, there still may be other points with the same projection on $I$ and with different labels. Thus,

::::: center
::: minipage
$$\begin{equation}
\label{c_dual}
A_I \subset \overline{C_{\bar{I}}}
\end{equation}$$
:::

::: minipage
$$\begin{equation}
\label{g_dual}
A_I \subset \overline{G_{\bar{I}}}
\end{equation}$$
:::
:::::

Gradient-based and baseline-comparison proxies are **linked to the formalisation we derive and provide upper bounds for functionality domains**. In section [4](#sec:exp){reference-type="ref" reference="sec:exp"} we quantify how good these two approximations are.

A final recent idea is to incorporate and learn the instance-wise selection task during training [@chen2018learning; @yoon2018invase; @arik2019tabnet; @yamada2020feature]. These techniques have been referred to as *selector-predictor* [@camburu2020explaining]. The idea is to use two models: a *selector* $\textrm{Sel}: \mathcal{X} \mapsto \{ 0,1 \}^n$ whose goal is to determine a map $S$ of the most relevant features for each point; and a *predictor* $\textrm{Pred}: \mathcal{X} \mapsto \mathcal{Y}$ acting as the usual prediction model of $Y$, with the twist that it takes $X \odot S$ as input. The training objective varies between methods, but the general spirit is to maximise the performance of $\textrm{Pred}(X \odot \textrm{Sel}(X))$ at predicting $Y$ while either minimising the number of selected features in $\textrm{Sel}(X)$ or enforcing the constraint that $k < n$ features are selected. A first issue is that most of these methods are only evaluated on performance-degradation metrics or on rather global synthetic selection tasks, which do not truly evaluate instance-wise interpretations. A second, more alarming, issue is that the selector model is completely free and prone to degenerate selection solutions (see Supplementary B). In particular, the selector does not verify properties [1](#prop:hyperspace){reference-type="ref" reference="prop:hyperspace"} and [2](#prop:parents){reference-type="ref" reference="prop:parents"}.

We should lastly mention that we found our formalisation to resemble the concept of *functional dependency* from relational database theory [@armstrong1974dependency]. Our simple categorical attribution [\[measure:prob\]](#measure:prob){reference-type="eqref" reference="measure:prob"} is strikingly similar to [@kivinen1995approximate]. But the purpose there is not interpretation: in that field, global multi-dependences among all columns of a table are sought, rather than between a subset of the input and a designated output, and, to our knowledge, not in an instance-wise manner.
2106.00524/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-01-20T18:40:53.789Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" etag="NuqxaY9hRTJwkIfU2TxV" version="14.2.4" type="google"><diagram id="fG35N57EhYjlVLFmYCkx" name="Page-1">7Ztbc9o4FMc/DTO7D2QsC3x5DJBL23R6yXbT7EtGwQq4NZZjCwL99Csh+SYJAhSTMGlm2qBjcyT99bN8dKS0YH8yv0hRMv5IAhy1bCuYt+CgZduuZ7P/uWEhDNCzhGGUhoEwgdJwHf7C0pjfNg0DnNVupIRENEzqxiGJYzykNRtKU/JUv+2BRPVaEzTCmuF6iCLdehMGdCysnu2W9kscjsZ5zcDxxZUJym+WLrIxCsiTMC07B89asJ8SQsWnybyPI65drotQ4HzF1aJhKY7pJl/I6Jcv/fPz9qfhV4f+cx1++e8uaEPhZYaiqezw9c9w+bWzyT0OgjAeZbL5dJFrkpJpHGDuFrRg72kcUnydoCG/+sQgYLYxnUTy8gOJqRxW4LByRlPys9CSqdDTOyL7NsMpxfOKSXbsApMJpumC3SKv5lBJyDqeKD6VIwbyW8aV0YLShiQko8JxqSP7IKXcQtaOJutXnCUkzvCxKQstRVr/haXt6sQmiIaIf3GQkoRM6RHK6r+0rO4msjpowsWK77OkkOFVy6zMCyBX/cVk9jSZ+ySekWhKQxJzsY9O0u5LS+qbJB0iimPERT1CaoHr1DQugqGKxoXtIBrnlVVE7oVBmLKIa8lt++LrtyMU2rbsutCOAWbnoEIDTegLNM2yEMVH9X5TlYUGhA+srK0pO8DLgOz4xPTAS4uprxukmEc/B3Tgi4urh7iajswNWwHj5zVEWSKWxQ/hnOt+IFFdRdQ8DVB9gxk0tRvT1NlA0zg45RkDVhpGiM25w7qUulKqliuVw0Ety6DrVtGla9Alt6U4YkHNrJ6bMIkla/hMQtaSclg8ZViA4iIj03SI5bequQTd0UnX/NjkrihKR5hqrpjCaFG5LeE3ZFs0WdZTwiA8lmgUqv4GLfpqqAFa8Dyk31nBYlKK0q38Lv884AJYeWEhCysJEyO3QWTxalAEJ45jFT+g/oZzfgNMsNqt1xSkvDPGqprlVF9OvslZbR1Ktq+8U/aEElDcNogSqCV2G0JJX0b/QUlFqdMISsWqs3mU8qoaRcnWkwV/UFLGvNPMC65zuBdc5xAvuJyct40S9C0l2vZ2nIigX+elSIjvmRe9xXlNzeKipyXeIC7A8U66VuXHr4+EmrjYFB7g+CfWGr/dZljSugON1TYLlr6h+0bBWgPArpPSc7w2NEdpvYHGapvlapNU30Zc9UlE0uUXoOiNmbgwiip3+n6fNUkhsXMUJLrrSLS9XUl015HYUdzuDUR3HYiy1mY5/JMebfGcjadELba/G0jQU8hR/OwrzNIaLCtqFpZGsqOMiXQh0qF2Ny/zhKiVF8qM6LKUp0R3SaNuTOazaVXvVRGsbmaBnZNfTt2RnW+fPkPw3hhrJLNZYczdirGDpe7/HX9Y0Oj05nZwCT5Ep58f330bt+1XxRj06okAsPNiVD190RxjV7Pet8GiC67e31717Di6+EiTtn7UomU7EZXDXWPNeZyS/EI7W4Jwym4AbjIvL7JPI/77cYozfiwm03fDrVa3/9fjHXfesvuiHBA+xfPC413L7dH2VcsdMPvfeWtY50SDhHvtOaB4TteFhDGJsRL7SROKwlHMHx+GJGb2Ht/hDYcoOpUXJmEQ8GqMu8zlXv5WU+rmu8huPRCSb7rqvrxvwHwfe8hGXky5qH3wksqjwat4SQUWgGPBOCmISVVcrDdPjO0qS1INGeOxA+A1hIwpHyWGKAhn5fAIU5YsD3ltSJF4pVjLaw9oEkYLcbWAiK3vIF/LjXE0w3yMtCs6imRKE9NBX51L2eb7ohMMVPZvjCiDMmVIskmuPofdq/1lNtFlzVwTx8zulkeHNG73AVueSMvnp/z9VZ2hLANu0P593IyBgumN9vZWcTz3XFlNKxnvXbOf0Ae1VbritqHkpyGRvud857qI84iPqcNVA3SIc+r4sff9x6d3l90f4P2ob7+f3PgfNnk4h9N0Vqj43JNaLma2WclsIfYreZ61Q9rejtk95qj2BNv18NJ2mkrvOcb2r0xCdtU9+/0mAI1wNhXb6nFC3e0DGta9sr6koxDpnmarglsWeGwUaxxZRNyV5XMZ2w0uizhuP9GLcrASGiZI6OlP9Q7LK1Ys/8xVEFv+rTA8+x8=</diagram></mxfile>
2106.00524/main_diagram/main_diagram.pdf
ADDED
Binary file (20.8 kB).
2106.00524/paper_text/intro_method.md
ADDED
@@ -0,0 +1,110 @@
# Introduction
|
| 2 |
+
|
| 3 |
+
Knowledge is distinguished by the ability to evolve over time. This progression of knowledge is usually incremental and its formation is related to the cognitive areas being studied. The process of Knowledge Tracing (KT) defined as the task of predicting students' performance has attracted the interest of many researchers in recent decades [@corbett1994knowledge]. The Knowledge State (KS) of a student is the degree of his or her mastering the Knowledge Components (KC) in a certain domain, for example "Algebra" or "Physics". A knowledge component generally refers to a learnable entity, such as a concept or a skill, that can be used alone or in combination with other KCs in order to solve an exercise or a problem [@koedinger2012knowledge]. Knowledge Tracing is the process of modeling and assessing a student's KS in order to predict his or her ability to answer the next problem correctly. The estimation of the student's knowledge state is useful for improving the educational process by identifying the level of his/her understanding of the various knowledge components. By exploiting this information it is possible to suggest appropriate educational material to cover the student's weaknesses and thus maximize the learning outcome.
|
| 4 |
+
|
| 5 |
+
The main problem of Knowledge Tracing is the efficient management of the responses over time. One of the factors that adds complexity to the problem of KT is the student-specific learning pace. Knowledge acquisition may differ from person to person and may also be influenced by already existing knowledge. More specifically, KT is predominantly considered a supervised sequence learning problem whose goal is to predict the probability that a student will answer future exercises correctly, given his or her history of interactions with previous tests. Thus, the prediction of the correctness of an answer is based on the history of the student's answers combined with the skill that is currently being examined.
|
| 6 |
+
|
| 7 |
+
Mathematically, the KT task is expressed as the probability $P(r_{t+1} = 1|q_{t+1}, X_t)$ that the student will offer the correct response in the next interaction $x_{t + 1}$, where the student's learning activities are represented as a sequence of interactions $X_t = \{ x_1,x_2,x_3,...,x_t\}$ over time $T$. Each interaction $x_t$ consists of a tuple $(q_t, r_t)$ which represents the question $q_t$ being answered at time $t$ and the student response $r_t$ to the question. Without loss of generality, we shall assume that knowledge components are represented by skills from a set $S = \{s_1, s_2,..., s_m\}$. One simplifying assumption, used by many authors [@zhang2017dynamic], is that every question in the set $Q = \{q_1, q_2,..., q_T\}$ is related to a unique skill from $S$. Then the knowledge levels of the student for each one of the skills in $S$ compose his or her knowledge state.
|
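To make this representation concrete, here is a minimal Python sketch of the interaction sequence and the inputs used for next-response prediction; the helper and its names are illustrative, not taken from any published implementation.

```python
from typing import List, Tuple

# One interaction x_t = (q_t, r_t): the skill tested at time t and the
# binary response (1 = correct, 0 = wrong).
Interaction = Tuple[int, int]

def prediction_input(history: List[Interaction], next_skill: int):
    """Assemble the inputs used to estimate P(r_{t+1} = 1 | q_{t+1}, X_t)."""
    skills = [q for q, _ in history] + [next_skill]  # includes q_{t+1}
    responses = [r for _, r in history]              # r_{t+1} is the target
    return skills, responses

# Example: skill 3 answered correctly, then skill 7 wrongly; we now want to
# predict the response to skill 3.
skills, responses = prediction_input([(3, 1), (7, 0)], next_skill=3)
print(skills, responses)  # [3, 7, 3] [1, 0]
```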
| 8 |
+
|
| 9 |
+
The dynamic nature of Knowledge Tracing calls for approaches that can model time-series or sequential data. In this work we propose two dynamic machine learning models implemented with time-dependent methods, specifically recurrent and time delay neural networks. Our models outperform the current state-of-the-art approaches on four out of the five benchmark datasets that we have studied. The proposed models differ from the existing ones in two main architectural aspects:
|
| 10 |
+
|
| 11 |
+
- we find that attention does not help improve the performance and therefore we make no use of attention layers
|
| 12 |
+
|
| 13 |
+
- we experiment with and compare two different skill embedding types: (a) embeddings initialized from pre-trained vectors of the textual descriptions of the skill names, using standard methods such as Word2Vec and FastText, and (b) randomly initialized embeddings based on skill ids
|
| 14 |
+
|
| 15 |
+
The rest of the paper is organized as follows. Section 2 reviews the related work on KT and the existing models for student performance prediction. In Section 3 we present our proposed models and describe their architecture and characteristics. The datasets we prepared and used are presented in Section 4, while the experimental setup and the results are explained in Section 5. Finally, Section 6 concludes this work and discusses future work and extensions of the research.
|
| 16 |
+
|
| 17 |
+
# Method
|
| 18 |
+
|
| 19 |
+
As noted in the relevant literature, the change of knowledge over time is often modeled by dynamic neural networks. Dynamic models produce output based on a time window, called the "context window", that contains the recent history of inputs and/or outputs.
|
| 20 |
+
|
| 21 |
+
There are two types of dynamic neural networks (Figure [1](#fig:DNN_architecture){reference-type="ref" reference="fig:DNN_architecture"}): (a) Time-Delay Neural Networks (TDNN), with only feed-forward connections and finite-memory of length $L$ equal to the length of the context window, and (b) Recurrent Neural Networks (RNN) with feed-back connections that can have potentially infinite-memory although, practically, their memory length is dictated by a forgetting factor parameter.
|
| 22 |
+
|
| 23 |
+
<figure id="fig:DNN_architecture" data-latex-placement="h!">
|
| 24 |
+
<img src="dynamic_models.png" />
|
| 25 |
+
<figcaption>Dynamic model architectures: (a) Time-Delay Neural Network (b) Recurrent Neural Network.</figcaption>
|
| 26 |
+
</figure>
|
| 27 |
+
|
| 28 |
+
We approach the task of predicting the student response (0=wrong, 1=correct) on a question involving a specific skill as a dynamic binary classification problem. In general, we view the response $r_t$ as a function of the previous student interactions: $$\begin{equation}
|
| 29 |
+
r_t = h( q_t,q_{t-1},q_{t-2},\dots,r_{t-1},r_{t-2},\dots ) + \epsilon_t
|
| 30 |
+
\label{eq:total_recurrent_model}
|
| 31 |
+
\end{equation}$$ where $q_t$ is the skill tested at time $t$ and $\epsilon_t$ is the prediction error. The response is therefore a function of the current and previously tested skills $\{q_t, q_{t-1}, q_{t-2}, \dots\}$, as well as the previous responses $\{r_{t-1}, r_{t-2}, \dots\}$ given by the student.
|
| 32 |
+
|
| 33 |
+
We implement $h$ as a dynamic neural model. Our proposed general architecture is shown in Figure [2](#fig:EDM_architecture){reference-type="ref" reference="fig:EDM_architecture"}. The inputs are the skill and response sequences $\{q\}$, $\{r\}$ collected during a time-window of length $L$ prior to time $t$. Note that the skill sequence includes the current skill $q_t$ but the response sequence does not contain the current response which is actually what we want to predict. The architecture consists of two main parts:
|
| 34 |
+
|
| 35 |
+
- The Encoding sub-network. It is used to represent the response and skill input data using different embeddings. Clearly, embeddings are useful for encoding skills since skill ids are categorical variables. We found that using embeddings to encode responses is also very beneficial. The details of the embeddings initialization and usage are described in the next section.
|
| 36 |
+
|
| 37 |
+
- The Tracing sub-network. This firstly estimates the knowledge state of the student and then uses it to predict his/her response. Our model function consists of two parts: (i) the Knowledge-Tracing part, represented by the dynamic model $f$, which predicts the student knowledge state $\mathbf{v}_t$ and (ii) the classification part $g$, which predicts the student response based on the estimated knowledge state: $$\begin{eqnarray}
|
| 38 |
+
\mathbf{v}_t &=& f(q_t,q_{t-1},q_{t-2},\dots,r_{t-1},r_{t-2},\dots)
|
| 39 |
+
\label{eq:kt_estimation}
|
| 40 |
+
\\
|
| 41 |
+
\hat{r}_t &=& g(\mathbf{v}_t)
|
| 42 |
+
\label{eq:classification}
|
| 43 |
+
\end{eqnarray}$$ Depending on the memory length, we obtain two categories of models:
|
| 44 |
+
|
| 45 |
+
- models based on RNN networks, which can potentially have infinite memory. In this case the KT model is recurrent: $$\mathbf{v}_t = f(\mathbf{v}_{t-1}, q_t,q_{t-1},\dots,q_{t-L},r_{t-1},\dots,r_{t-L})$$
|
| 46 |
+
|
| 47 |
+
- models based on TDNN networks, which have finite memory of length $L$. In this case the KT model has a finite impulse response of length $L$: $$\mathbf{v}_t = f(q_t,q_{t-1},\dots,q_{t-L},r_{t-1},\dots,r_{t-L})$$
|
| 48 |
+
|
| 49 |
+
Although RNNs have been used in the relevant literature, it is noteworthy that TDNN approaches have not been investigated in the context of knowledge tracing. The classification part is modeled by a fully-connected feed-forward network with a single output unit.
|
| 50 |
+
|
| 51 |
+
<figure id="fig:EDM_architecture" data-latex-placement="h!">
|
| 52 |
+
<img src="edm_architecture.png" />
|
| 53 |
+
<figcaption>General proposed architecture. The dynamic model can be either a Recurrent Neural Network (with a feedback connection from the output of the dynamic part into the model input) or a Time Delay Neural Network (without feedback connection).</figcaption>
|
| 54 |
+
</figure>
|
| 55 |
+
|
| 56 |
+
We investigated two different architectures: one based on recurrent neural networks and another based on time delay neural networks. The details of each proposed model architecture are described below.
|
| 57 |
+
|
| 58 |
+
The first part in all our proposed models consists of two parallel embedding layers with dimensions $d_q$ and $d_r$, respectively, which encode the tested skills and the responses given by the student. During model training the weights of the Embedding layers are updated. The response embedding vectors are initialized randomly. The skill embedding vectors, on the other hand, are initialized either randomly or using pretrained data. In the latter case we use pretrained vectors corresponding to the skill names obtained from Word2Vec [@mikolov2013efficient] or FastText [@joulin2016fasttext] methods.
|
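As an illustration of this initialization step, the following sketch builds skill embedding vectors by averaging pretrained word vectors over the tokens of each skill name. It uses gensim's downloader with the Google News Word2Vec vectors; the averaging and the random fallback for out-of-vocabulary tokens are our assumptions rather than details stated above.

```python
import numpy as np
import gensim.downloader as api

# Pretrained Word2Vec vectors (300-dimensional); FastText vectors could be
# substituted in the same way.
wv = api.load("word2vec-google-news-300")

def skill_vector(name: str, dim: int = 300) -> np.ndarray:
    """Average the word vectors of the tokens in a skill name."""
    tokens = [t for t in name.lower().split() if t in wv]
    if not tokens:
        # Fallback for names with no in-vocabulary tokens: random init.
        return np.random.normal(scale=0.1, size=dim)
    return np.mean([wv[t] for t in tokens], axis=0)

# Stack one vector per skill; the matrix can seed a trainable Embedding layer.
init = np.stack([skill_vector(n) for n in ["Pattern Finding",
                                           "Order of Operations"]])
```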
| 59 |
+
|
| 60 |
+
A 1D spatial dropout layer [@tompson2015efficient] is added after each Embedding layer. The intuition behind adding spatial dropout was the overfitting observed in the first epochs on each validation set. We postulated that correlations among skill-name embeddings, which might not actually exist, confused the model.
|
| 61 |
+
|
| 62 |
+
We experimented with two types of main dynamic sub-networks, namely Recurrent Neural Networks and Time Delay Neural Networks. These two approaches are described next.
|
| 63 |
+
|
| 64 |
+
The model architecture based on the RNN method for the knowledge tracing task is shown in Figure [3](#fig:Bi_GRU){reference-type="ref" reference="fig:Bi_GRU"}.
|
| 65 |
+
|
| 66 |
+
<figure id="fig:Bi_GRU" data-latex-placement="h!">
|
| 67 |
+
<img src="GRUModel.png" />
|
| 68 |
+
<figcaption>Bi-GRU model</figcaption>
|
| 69 |
+
</figure>
|
| 70 |
+
|
| 71 |
+
The spatial dropout rate following the input embedding layers is $0.2$ for most of the datasets used. Next, we feed the skills and the responses input branches into a Convolutional layer consisting of 100 filters, with kernel size 3, stride 1, and ReLU activation function. The Convolutional layer acts as a projection mechanism that reduces the input dimensions from the previous Embedding layer; this is found to help alleviate the overfitting problem. To the best of our knowledge, Convolutional layers have not been used in previously proposed neural models for this task. The two input branches are then concatenated to feed a Bidirectional Gated Recurrent Unit (GRU) layer with 64 units [@cho2014learning]. Batch normalization and ReLU activation layers are applied between the convolutional and concatenation layers. This structure resulted from extensive experiments with other popular recurrent models, such as LSTM, plain GRU, and bi-directional versions of those models; we found the proposed architecture to be the most efficient one. On top of the RNN layer we append a fully connected sub-network consisting of three dense layers with 50 units, 25 units, and one output unit, respectively. The first two dense layers have a ReLU activation function, while the last one has a sigmoid activation which is used to make the final prediction ($0 < \hat{r}_t < 1$).
|
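The description above maps to the following Keras sketch; layer sizes follow the text, while the window length, vocabulary sizes, embedding dimensions, and the concatenation axis are assumptions made for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers

L, n_skills, d_q, d_r = 50, 110, 100, 50  # window length and dims (assumed)

skills_in = layers.Input(shape=(L,), name="skills")            # q_t ... q_{t-L+1}
responses_in = layers.Input(shape=(L - 1,), name="responses")  # r_{t-1} ...

def encode(x, vocab, dim):
    x = layers.Embedding(vocab, dim)(x)
    x = layers.SpatialDropout1D(0.2)(x)
    x = layers.Conv1D(100, 3, strides=1)(x)  # projection; reduces overfitting
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

s = encode(skills_in, n_skills + 1, d_q)
r = encode(responses_in, 3, d_r)             # responses 0/1 plus a padding id

h = layers.Concatenate(axis=1)([s, r])
h = layers.Bidirectional(layers.GRU(64))(h)  # knowledge state v_t
h = layers.Dense(50, activation="relu")(h)
h = layers.Dense(25, activation="relu")(h)
out = layers.Dense(1, activation="sigmoid")(h)  # P(r_t = 1)

model = tf.keras.Model([skills_in, responses_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
```

Pretrained skill-name vectors, as in the earlier sketch, could seed the skills Embedding layer through its `weights` argument.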
| 72 |
+
|
| 73 |
+
In our TDNN model (Figure [4](#fig:Tdnn_v1){reference-type="ref" reference="fig:Tdnn_v1"}) we add a Convolutional layer after each embedding layer with 50 filters and kernel size equal to 5.
|
| 74 |
+
|
| 75 |
+
<figure id="fig:Tdnn_v1" data-latex-placement="h!">
|
| 76 |
+
<img src="TDNN_Model.png" />
|
| 77 |
+
<figcaption>TDNN model</figcaption>
|
| 78 |
+
</figure>
|
| 79 |
+
|
| 80 |
+
Batch normalization is used before the ReLU activation is applied. As with the RNN model, the two input branches are concatenated to feed the classification sub-network, which consists of four dense layers with 20, 15, 10, and 5 units respectively, using the ReLU activation function. This funnel schema of hidden layers (starting with wider layers and continuing with narrower ones) helped achieve better results for all datasets we experimented with. At the beginning of the classification sub-network we insert a Gaussian Dropout layer [@srivastava2014dropout], which multiplies neuron activations by a Gaussian random variable with mean value 1. This has been shown to work as well as classical Bernoulli-noise dropout, and in our case even better.
|
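A corresponding sketch of the TDNN variant is given below; the flatten step, the Gaussian dropout rate, and the final sigmoid output unit are assumptions based on the general architecture described earlier.

```python
import tensorflow as tf
from tensorflow.keras import layers

L, n_skills, d_q, d_r = 50, 110, 100, 50  # assumed, as in the sketch above

skills_in = layers.Input(shape=(L,))
responses_in = layers.Input(shape=(L - 1,))

def encode(x, vocab, dim):
    x = layers.Embedding(vocab, dim)(x)
    x = layers.SpatialDropout1D(0.2)(x)
    x = layers.Conv1D(50, 5)(x)          # 50 filters, kernel size 5
    x = layers.BatchNormalization()(x)   # batch norm before the ReLU
    return layers.ReLU()(x)

s = encode(skills_in, n_skills + 1, d_q)
r = encode(responses_in, 3, d_r)

h = layers.Concatenate(axis=1)([s, r])
h = layers.Flatten()(h)                  # finite memory: no recurrence
h = layers.GaussianDropout(0.1)(h)       # Gaussian noise dropout (rate assumed)
for units in (20, 15, 10, 5):            # funnel schema of hidden layers
    h = layers.Dense(units, activation="relu")(h)
out = layers.Dense(1, activation="sigmoid")(h)

model = tf.keras.Model([skills_in, responses_in], out)
```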
| 81 |
+
|
| 82 |
+
::: table*
| Dataset                | Skills | Students | Responses | Baseline Accuracy |
|------------------------|--------|----------|-----------|-------------------|
| ASSISTment09           | 110    | 4,151    | 325,637   | 65.84%            |
| ASSISTment09 corrected | 101    | 4,151    | 274,590   | 66.31%            |
| ASSISTment12           | 196    | 28,834   | 2,036,080 | 69.65%            |
| ASSISTment17           | 101    | 1,709    | 864,713   | 62.67%            |
| FSAI-F1toF3            | 99     | 310      | 51,283    | 52.98%            |
:::
|
| 91 |
+
|
| 92 |
+
We tested our models using four popular datasets from the ASSISTments online tutoring platform. Three of them, "*ASSISTment09*", "*ASSISTment09 corrected*"[^1], and "*ASSISTment12*"[^2], were provided by the above platform. The fourth dataset, named "*ASSISTment17*", was obtained from the 2017 Data Mining competition page[^3]. Finally, a fifth dataset, "*FSAI-F1toF3*", provided by "Find Solution Ai Limited", was also used in our experiments. It is collected from the 4LittleTrees[^4] adaptive learning application.
|
| 93 |
+
|
| 94 |
+
The ASSISTments datasets contain data from student tests on mathematical problems [@assistmentsdata], and the content is organized in a columnar format. Each line records one student interaction, and one or more interactions are recorded for each student. We take into account the information concerning the responses of students to questions related to a skill. Thus, we use the following columns: "*user_id*", "*skill_id*", "*skill_name*", and "*correct*". The "*skill_name*" column contains a verbal description of the skill tested. The "*correct*" column contains the values of the students' responses, which are either $1$ (for correct) or $0$ (for wrong).
|
| 95 |
+
|
| 96 |
+
The original "*ASSISTment09*" dataset contains 525,534 student responses. It has been used extensively for the KT task by several researchers, but according to [@assistmentsdata] data quality issues have been detected concerning duplicate rows. In our work we used the "*preprocessed ASSISTment09*" dataset found in the GitHub repositories of the DKVMN[^5] and Deep-IRT[^6] models. In this dataset the duplicate rows and the empty field values were cleaned, so that finally 1,451 unique students participate with 325,623 total responses and 110 unique skills.
|
| 97 |
+
|
| 98 |
+
Even after this cleaning some problems remain, such as duplicate skill ids for the same skill name. These problems have been corrected in the "*Assistment09 corrected*" dataset, which contains 346,860 student interactions and has recently been used in [@xu2020dynamic].
|
| 99 |
+
|
| 100 |
+
The "*ASSISTment12*" dataset contains students' data until the school year 2012-2013. The initial dataset contains 6,123,270 responses and 198 skills. Some of the skills have the same skill name but different skill id. The total number of skill ids is 265. The "*Assistment17*" dataset contains 942,816 students responses and 101 skills.
|
| 101 |
+
|
| 102 |
+
Finally, the "*FSAI-F1toF3*" dataset is the smallest dataset we used. It involves responses to mathematical problems from 7th grade to 9th grade Hong Kong students, and consists of 51,283 student responses from 310 students on 99 skills and 2,266 questions. As is commonly the case in studies using this dataset, we have used the question tag as the model input $q_t$.
|
| 103 |
+
|
| 104 |
+
No preprocessing was performed on the "*ASSISTment09*" and "*FSAI-F1toF3*" datasets. For the remaining datasets we followed three preparation steps.
|
| 105 |
+
|
| 106 |
+
First, the skill ids were repaired by replacement. In particular, the "*ASSISTments09 corrected*" dataset contained skill ids of the form "*skill1_skill2*" and "*skill1_skill2_skill3*" that correspond to the same skill names, so we merged them into the first skill id found before the underscore. In other words, the skill "*10_13*" was replaced with skill "*10*", and so on. Moreover, a few misspellings were observed and corrected, and the punctuation found in three skill names was converted to the corresponding words. For example, in the skill name "*Parts of a Polnomial Terms Coefficient Monomial Exponent Variable*" we corrected "*Polnomial*" to "*Polynomial*". Also, in the skill name "*Order of Operations +,-,/,\*() positive reals*" we replaced the symbols "*+,-,/,\* ()*" with the words that express them, i.e. "*addition subtraction division multiplication parentheses*". The latter preprocessing action was preferred over removing the punctuation because the datasets refer to mathematical methods and operations, and without the symbols we would lose the meaning of each skill. A similar procedure was followed for the "*ASSISTments12*" dataset. Furthermore, trailing spaces after some skill names were removed, i.e. the skill name "*Pattern Finding* " became "*Pattern Finding*". In the "*ASSISTment17*" dataset we came across skill names such as "*application: multi-column subtraction*" and corrected them by replacing the punctuation marks, yielding "*application multi column subtraction*". These text preparation operations were made to ease the generation of word embeddings from the textual descriptions of the skill names. In addition, in the "*ASSISTment17*" dataset the problem ids are used instead of the skill ids; we had to match and replace the problem ids with the corresponding skill ids for uniformity across the datasets.
|
| 107 |
+
|
| 108 |
+
Second, all rows containing missing values were discarded. After this preprocessing, the statistics of the datasets are as described in Table [\[tab:datasets\]](#tab:datasets){reference-type="ref" reference="tab:datasets"}.
|
| 109 |
+
|
| 110 |
+
Finally, we split the datasets so that 70% was used for training and 30% for testing. Then, the training subset was further split into five train-validation subsets using 80% for training and 20% for validation.
|
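A sketch of this splitting protocol is shown below; it splits by student id (an assumption, since the text does not state the unit of splitting) using scikit-learn.

```python
import numpy as np
from sklearn.model_selection import train_test_split, ShuffleSplit

students = np.arange(310)  # hypothetical unique student ids

# 70% of students for training, 30% for testing.
train_students, test_students = train_test_split(
    students, test_size=0.30, random_state=0)

# Five train/validation splits of the training subset (80% / 20%).
folds = ShuffleSplit(n_splits=5, test_size=0.20, random_state=0)
for fold, (tr_idx, va_idx) in enumerate(folds.split(train_students)):
    tr, va = train_students[tr_idx], train_students[va_idx]
    print(f"fold {fold}: {len(tr)} train / {len(va)} validation students")
```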
2106.00660/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2106.00660/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,124 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Improvements to machine learning (ML) have enabled automatic content creation [\(Ramesh et al.,](#page-9-0) [2021\)](#page-9-0) and manipulation [\(Yu et al.,](#page-9-0) [2018\)](#page-9-0): a user just needs to provide an image and describe the desired changes as the input to a generative model [\(Goodfellow,](#page-9-0) [2016;](#page-9-0) [Korshunova et al.,](#page-9-0) [2017;](#page-9-0) [Antic,](#page-8-0) [2018\)](#page-8-0). Computer graphics tools brought us digital *inpainting*: programs such as Photoshop enable manipulation of digital images with powerful software and, more recently, ML support [\(Vincent,](#page-9-0) [2020\)](#page-9-0). Modern inpainting software lets the user select a patch to be filled in; it then fills this area in with artificially generated content.
|
| 8 |
+
|
| 9 |
+
One increasingly popular application of inpainting is the removal of objects from photographs. This can be done for malicious purposes. For example, many images are distributed with a watermark that asserts copyright or carries a marketing message; people wishing to reuse the image without permission may want to remove the mark and restore a plausible background in its place. This naturally leads to the question of how we can make watermarks more robust, i.e. difficult to remove. There is substantial literature on using classic signal-processing techniques for mark removal, e.g. from [Cox et al.](#page-8-0) [\(2007\)](#page-8-0), but such tricks predate recent advances in ML and inpainting more specifically.
|
| 10 |
+
|
| 11 |
+
In this paper we investigate whether ML inpainters can be manipulated using techniques adapted from the field of adversarial machine learning. Our technique, which we dub *markpainting*, allows for arbitrary manipulation of how inpainters fill in the patch defined by a given binary image *mask*. We do this by setting an arbitrary target image which we wish to appear in the filled-in area. We then generate *perturbations* – small pixel-wise augmentations – which are applied to the original image to manipulate the inpainting algorithms into producing something resembling our target. For example, in Figure [1a,](#page-1-0) the original image is a black-and-white cartoon; we set the target image to be the same cartoon but with *La Gioconda* pasted onto the otherwise blank canvas. After the application of our technique, the perturbations to the original image ensure that the resulting infilled patch does indeed resemble our target.
|
| 12 |
+
|
| 13 |
+
We find that the introduction of minor perturbations to input images can force many inpainters to generate arbitrary patches — even patterns not present in the training datasets. Consequently, setting the target to be the original image and applying our markpainting technique makes the image robust against watermark removal as shown in Figure [2.](#page-1-0) The original (left-most) image has an unsightly watermark that was removed successfully in the middle image by an inpainter. However, after treating the original image with our markpainting technique – setting the target to be the original image itself, to preserve the watermark – the attempt to paint out the watermark fails.
|
| 14 |
+
|
| 15 |
+
Figure [1b](#page-1-0) demonstrates the effect of markpainting on six different inpainters. The resulting markpainted sample (bottom
|
| 16 |
+
|
| 17 |
+
<sup>\*</sup>Equal contribution <sup>1</sup>Computer Laboratory, University of Cambridge <sup>2</sup>University of Toronto and Vector Institute. Correspondence to: Ilia Shumailov <ilia.shumailov@cl.cam.ac.uk>.
|
| 18 |
+
|
| 19 |
+
<span id="page-1-0"></span>
|
| 20 |
+
|
| 21 |
+
Figure 1. Demonstration of the proposed markpainting technique. The target image is set to be Leonardo da Vinci's *La Gioconda* pasted onto the otherwise-blank cartoon canvas. Figure 1a shows a visual abstract of the proposed markpainting technique, using the CRFILL model. Figure 1b shows the application of markpainting to multiple different inpainting models simultaneously — our technique can target multiple models at once and is not limited to just a single model. The *Adversarial* pane shows the combination of the original input image and the resulting perturbations. The top six images show the result of various inpainters filling-in the rectangular patch on the canvas as defined by the mask. Note that all six inpainters use the same input, namely *Adversarial*. Original cartoon from [freesvg.](https://freesvg.org/artist-with-blank-canvas)
|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
|
| 25 |
+
Figure 2. Example of countering watermark removal using markpainting on Vincent van Gogh's *Boats at Sea*. The left-most image depicts the original image with the watermark. The middle image is the result of inpainting the mark without any perturbations, resulting in the successful removal of the watermark. The right-most image contains generated perturbations and has been treated with an inpainter for watermark removal; the output simply restores the mark. Performed on the CRFILL inpainting model with ε = 0.3.
|
| 26 |
+
|
| 27 |
+
right in Figure 1b) is a combination of the original image (top left in Figure 1a) and the accumulated perturbations. We can see that *La Gioconda* (the target) appears on the canvases (the patch to fill in as dictated by the mask) of the final inpainted images (top two rows in Figure 1b). These final images are obtained by running the markpainted sample through each of the inpainting models.
|
| 28 |
+
|
| 29 |
+
We find that markpainting can work even if the colors and structures of the target image are not present in the input image itself or the dataset the model was trained on. We evaluate the extent to which markpainting transfers from one inpainter to another and within the same inpainter trained on different datasets; the impact of perturbation size; and the viability of mask-agnostic markpainting.
|
| 30 |
+
|
| 31 |
+
Overall, we make the following contributions:
|
| 32 |
+
|
| 33 |
+
- We show that inpainting can be manipulated to produce arbitrary content, a technique we name *markpainting*.
|
| 34 |
+
- We present a mask-agnostic markpainting method that works regardless of the mask used.
|
| 35 |
+
- We evaluate the performance of markpainting thoroughly and find that markpainting a specific target is significantly more effective against more advanced inpainters (a 38% reduction in loss to target in the case of a weak Generative model, compared to a 78% reduction in EdgeConnect's case).
|
| 36 |
+
- In a robustness test, we show that markpainted samples sometimes transfer within the same inpainter trained on different datasets, and across different inpainters for markpainting with a target.
|
| 37 |
+
|
| 38 |
+
Malicious actors now manipulate public discourse with artificially generated or manipulated images, such as deepfakes [\(Goodfellow et al.,](#page-9-0) [2014;](#page-9-0) [Zhang et al.,](#page-10-0) [2020\)](#page-10-0). For example, as shown in Figure 3, it takes no special knowledge to remove a participant from a photo of the 6 January 2021 raid on the United States Congress; the change is not noticeable without inspecting the image closely. This motivating example led us to study the capacity of inpainting tools to remove or replace objects in images.
|
| 39 |
+
|
| 40 |
+

|
| 41 |
+
|
| 42 |
+

|
| 43 |
+
|
| 44 |
+
Figure 3. Photo taken from the 6 January 2021 raid on United States Congress. Original on the left; the right photo has been modified using an inpainter to remove a participant. It is near impossible to tell which of the two images is the original one, without closer inspection.
|
| 45 |
+
|
| 46 |
+
Markpainting can provide protection against evidence tampering by preserving the integrity of published images. Consider an image of a crowd and an attacker who wants to forge evidence by removing a person from the crowd. The defender – e.g. the distributor of the image – does not know which person will be removed, but wants to stop the attacker. If they use our mask-agnostic markpainting technique with a solid color target image (such as pure red), then any attempt to remove a person from the image via inpainting will result in a red patch, clearly marking the image. In practice one would use more subtle techniques, which we discuss later.
|
| 47 |
+
|
| 48 |
+
# Method
|
| 49 |
+
|
| 50 |
+
*Inpainting* fills in information that is missing in an input image. During training, a part of the image is masked out and the inpainter aims to learn how to restore this area.
|
| 51 |
+
|
| 52 |
+
We define an input RGB image $\mathbf{I} \in \mathbb{R}^{H \times W \times 3}$ and a binary mask $\mathbf{M} \in \mathbb{R}^{H \times W}$. The binary mask $\mathbf{M}$ has 0s for the areas to be inpainted and 1s otherwise. We then assume an inpainter $f$ that populates the region covered by $1 - \mathbf{M}$, taking as input the masked input $\hat{\mathbf{I}} = \mathbf{I} \odot (1 - \mathbf{M})$, where $\odot$ represents the Hadamard product. The function $f$ was trained to minimize a dissimilarity $\mathcal{L}_{\text{train}}$ between $\hat{\mathbf{I}}$ and $\mathbf{I}$. Training here may involve images of different sizes and irregular masks, depending on the system.
|
| 55 |
+
|
| 56 |
+
We present two different flavors of markpainting: targeted and mask-agnostic. Targeted markpainting forces the reconstruction to resemble the target image, whilst mask-agnostic markpainting aims to generalize the technique to work with an arbitrary mask. These are presented in Algorithm 1 and Algorithm [2](#page-4-0) respectively. Algorithm 1 is visualized in Figure [1a.](#page-1-0) The formal setup is similar to adversarial
|
| 57 |
+
|
| 58 |
+
```
Input: image I, mask M, target T, perturbation step size \epsilon',
iterations t, targeted models \Theta
for j = 0 to t do
    \eta \leftarrow \mathbf{0}
    for \theta \in \Theta do
        \eta \leftarrow \eta + \epsilon' \operatorname{sign}(\nabla_{\mathbf{I}} \mathcal{L}_{\text{mark}}(\theta, \mathbf{I}, \mathbf{T}))
    end for
    \mathbf{I} \leftarrow \mathbf{I} - (\eta \odot (1 - \mathbf{M}))
end for
\mathbf{I}_{adv} \leftarrow \mathbf{I}
```
|
| 75 |
+
|
| 76 |
+
Output: markpainted sample $\mathbf{I}_{adv}$ (combination of the original input image $\mathbf{I}$ and the accumulated perturbations)
|
| 77 |
+
|
| 78 |
+
example generation [\(Szegedy et al.,](#page-9-0) [2013;](#page-9-0) [Madry et al.,](#page-9-0) [2019\)](#page-9-0), where the perturbation $\eta$ is accumulated iteratively from the scaled gradients $\epsilon' \operatorname{sign}(\nabla_{\mathbf{I}} \mathcal{L}_{\text{mark}}(\theta, \mathbf{I}, \mathbf{T}))$.
|
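For illustration, a minimal PyTorch sketch of this iterative procedure is given below; `inpainters` and `loss_mark` are placeholders for differentiable inpainting models and the dissimilarity defined next, not the paper's code, and the mask convention follows the text (0s mark the region to be inpainted).

```python
import torch

def markpaint(I, M, T, inpainters, loss_mark, eps_step=1/255, iters=100):
    """Targeted markpainting in the spirit of Algorithm 1.
    I, T: (1, 3, H, W) tensors in [0, 1]; M: (1, 1, H, W) binary mask.
    `inpainters` are differentiable models taking (masked image, mask).
    """
    I = I.clone()
    for _ in range(iters):
        eta = torch.zeros_like(I)
        for f in inpainters:  # accumulate signed gradients over all models
            x = I.detach().requires_grad_(True)
            loss = loss_mark(f(x * (1 - M), M), T)
            loss.backward()
            eta = eta + eps_step * x.grad.sign()
        # Perturbation step of Algorithm 1, using the text's mask convention.
        I = (I - eta * (1 - M)).clamp(0.0, 1.0)
    return I  # the markpainted sample I_adv
```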
| 79 |
+
|
| 80 |
+
We define $\mathcal{L}_{\text{mark}}(\theta, x, x') = \mathcal{L}_{\text{network}}(\theta, x) + \alpha\, l_2(x - x')$, where $\mathcal{L}_{\text{network}}$ is the VGG perceptual loss [\(Johnson et al.,](#page-9-0) [2016\)](#page-9-0) and $l_2$ is the MSE loss. We use the VGG perceptual loss to measure the human visual similarity of the markpainting, which is usually missed by a pure L2 loss. L2 penalizes large deviations from the target, whilst VGG promotes human-understandable granularity. We set $\alpha = 4$ based on experimentation. The effect of different $\alpha$ values on the markpainted result can be found in Section D of our Appendix.
|
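A sketch of this loss under stated assumptions (the VGG-16 feature cut-off layer and the omission of input normalization are ours, not from the text):

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen VGG-16 features as the perceptual backbone (layer choice assumed).
_vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def loss_mark(x, target, alpha=4.0):
    """L_mark = VGG perceptual term + alpha * MSE term (alpha = 4 as above)."""
    perceptual = F.mse_loss(_vgg(x), _vgg(target))
    return perceptual + alpha * F.mse_loss(x, target)
```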
| 81 |
+
|
| 82 |
+
Notice that the perturbation propagated to the natural input is $(\eta \odot (1 - \mathbf{M}))$, because the regions to be infilled are masked out and do not receive gradients.
|
| 84 |
+
|
| 85 |
+
The technique aims to find a perturbation $\eta$ within a given perturbation budget $\epsilon$ such that the dissimilarity function $\mathcal{L}_{\text{mark}}$ parameterized by $\theta$ is minimized.
|
| 86 |
+
|
| 87 |
+
$$\begin{split} & \underset{\boldsymbol{\eta}}{\text{minimize}} & & \mathcal{L}_{\text{mark}}(\boldsymbol{\theta}, f((\mathbf{I} + \boldsymbol{\eta}) \odot (1 - \mathbf{M})), \hat{\mathbf{I}}) \\ & \text{subject to} & & ||\boldsymbol{\eta}||_p < \epsilon \end{split}$$
|
| 88 |
+
|
| 89 |
+
$||\eta||_p$ is the $l_p$ norm of $\eta$; in this paper we use $p = \infty$.
|
| 90 |
+
|
| 91 |
+
We represent the original input image by $\mathbf{I}$, the original image with our carefully crafted perturbation by $\mathbf{I}_{\text{pert}}$, the naturally inpainted image by $\mathbf{I}_{\text{benign}}$, and the inpainted result of $\mathbf{I}_{\text{pert}}$ by $\mathbf{I}_{\text{mark}}$. We denote the target image by $\mathbf{T}$ and the mask by $\mathbf{M}$.
|
| 92 |
+
|
| 93 |
+
We find that we can apply our technique to a collection of models $\Theta$ simultaneously using a single input image $\mathbf{I}$, as detailed in Algorithm 1. An example of applying markpainting to multiple models simultaneously is presented in Figure [1b](#page-1-0) and in Section A of our Appendix, where the *same* markpainted sample produces a visually recognizable face, similar to the target, after being run through six different inpainters.
|
| 94 |
+
|
| 95 |
+
```
Input: image I, target T, number of masks n, mask size
range [m_{\min}, m_{\max}], perturbation step size \epsilon', iterations t,
targeted models \Theta
Initialize set \hat{\mathbf{M}} to contain n random rectangular masks
of size s \in [m_{\min}, m_{\max}]
Initialize \mathbf{M} \leftarrow \emptyset
for j = 0 to t do
    \mathbf{M} \leftarrow \hat{\mathbf{M}}_i for a random 0 \le i < n
    \eta \leftarrow \mathbf{0}
    for \theta \in \Theta do
        \eta \leftarrow \eta + \epsilon' \mathbf{U}(0, 1) \operatorname{sign}(\nabla_{\mathbf{I}} \mathcal{L}_{\text{mark}}(\theta, \mathbf{I}, \mathbf{T}))
    end for
    \mathbf{I} \leftarrow \mathbf{I} - (\eta \odot \mathbf{M})
end for
\mathbf{I}_{adv} \leftarrow \mathbf{I}
Output: markpainted sample \mathbf{I}_{adv} (combination of original
input image \mathbf{I} and the accumulated perturbations)
```
|
| 113 |
+
|
| 114 |
+
|
| 115 |
+
|
| 116 |
+
Although Algorithm 1 works well against a known mask M, there are other cases where we do not know which parts of an image might be tampered with. We adapt our technique to generate an image that will cause a system to markpaint regardless of the mask used. This problem is related to the construction of adversarial examples that work in physical environments under different conditions of lighting and viewing angles. We therefore extend an approach first introduced by Athalye et al. (2018) called *Expectation over Transformation* (EoT).
|
| 117 |
+
|
| 118 |
+
This extension is presented in Algorithm 2. For this technique, a set of random masks is produced with a given size range $[m_{\min}, m_{\max}]$ . We iteratively sample a single mask from the set and apply an algorithm similar to Algorithm 1. We find that further adding stochasticity helps to transfer to unseen masks: we weight the gradient step with a random uniformly-distributed vector $\mathbf{U}(0,1)$ .
|
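A minimal sketch of this mask-agnostic variant, under the same placeholder assumptions as the targeted sketch above (mask sizes and counts are illustrative):

```python
import torch

def random_rect_mask(H, W, m_min, m_max):
    """A mask with 0s inside one random rectangle (the region to inpaint)."""
    s = int(torch.randint(m_min, m_max + 1, (1,)))
    top = int(torch.randint(0, H - s + 1, (1,)))
    left = int(torch.randint(0, W - s + 1, (1,)))
    M = torch.ones(1, 1, H, W)
    M[..., top:top + s, left:left + s] = 0.0
    return M

def markpaint_agnostic(I, T, inpainters, loss_mark, n=32, m_min=32, m_max=96,
                       eps_step=1/255, iters=100):
    """Mask-agnostic markpainting in the spirit of Algorithm 2 (EoT over masks)."""
    H, W = I.shape[-2:]
    masks = [random_rect_mask(H, W, m_min, m_max) for _ in range(n)]
    I = I.clone()
    for _ in range(iters):
        M = masks[int(torch.randint(0, n, (1,)))]  # sample a mask per iteration
        eta = torch.zeros_like(I)
        for f in inpainters:
            x = I.detach().requires_grad_(True)
            loss_mark(f(x * (1 - M), M), T).backward()
            # U(0, 1) weighting adds stochasticity, helping transfer to
            # unseen masks.
            eta = eta + eps_step * torch.rand_like(I) * x.grad.sign()
        I = (I - eta * M).clamp(0.0, 1.0)
    return I
```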
| 119 |
+
|
| 120 |
+
Inpainting is a complex task, with neural networks trained to manipulate images of arbitrary size and with arbitrary patches. Furthermore, modern inpainters can fill irregular holes. As they are trying to be semantically aware and display both local and global consistency, they need to understand the global scenery well. That in turn makes them dependent not only on the area around the patch, but on the whole image. Imagine trying to fill in a hole around the squirrel eye depicted in Figure 4. Here, local information (shown in pink) would suggest that it has to be filled with fur.
|
| 121 |
+
|
| 122 |
+
Global information (shown in orange), on the other hand, should tell the inpainter that the picture features a squirrel in a particular pose and that an eye should be located there. As illustrated in the gradient visualization in Figure 4, gradients focus on both the area around the eye and the rest of the image. This dependency on global information makes inpainting both complex and prone to manipulation. The markpainter does not need to concentrate their perturbation around the patch area but can scatter it all over the image.
|
| 123 |
+
|
| 124 |
+
While at first glance markpainting seems similar to older techniques, such as those proposed by Levin et al. (2004), there are fundamental differences between the two approaches. Inpainting requires a semantic understanding of the scenery and depends heavily on global information, as shown in Figure 4. Furthermore, markpainting can produce artifacts that are semantically meaningless for the model and not present in its training distribution.
|
2106.01532/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="Electron" modified="2021-05-31T08:51:54.367Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.6.13 Chrome/89.0.4389.128 Electron/12.0.7 Safari/537.36" etag="W13R_d7N4dG1Pap-fEmm" version="14.6.13" type="device" pages="2"><diagram id="ietwdwISqTG3krwyUXy9" name="第 1 页">pPzX0qtIty4IX8067Ai8OQQhvPdwhvcehLn6Tt6qb6+9Yv9/R0d0RU1NCVJAjhz5mJGp+V/oZ7iENZlrbcqL/r8QKL/+C+X+C0EIigKv74H7nwM48u+Bam3yfw7B/33AaZ7i34PQv0ePJi+2/9Fwn6Z+b+b/eTCbxrHI9v9xLFnX6fyfzcqp/593nZOq+D8OOFnS/59Hgybf63+PwgT93yfEoqnqf29NIeQ/J4bkP43/7clWJ/l0/m+H0O9/oZ91mvZ/3g3Xp+jf2P0nLv98j///c/Z/PdhajPv/my/MGV50hh/KREP+wl9us1j7fyH/XOWX9Me/Hf73Yff7PxFYp2PMi/ci8H+h7Fk3e+HMSfaePcGQg2P1PvT/ni6bvv9M/bSCz+M0gkZsnmz1//r6//nQ//bjV6x7cf1vh/7thFBMQ7GvN2jy71kERf/5yv2fz/8G+Pzv8UH/kz71/zY0/2skkn9zovpf1/7vsIE3/0bu/3cUa6fM9Z9sCv9XEh/8yAVDGf2/iSIY/fl92wx/Ccf+/c1s8z85C4EjyX8+lM31xot9Q9KATFSTtOjNaWv2ZhrB+XTa92n43xowfVO9J/Zp/s+Vwac82ZP/Qpl/PiJ8OxfVfyFv/+gWvGAMYzldLNsVwzIWAz6BvxnwpDxyshzDVAJbZQJrVSJ7ZvLHqhSOtXSO2VSO7ZzvSXmiPZViDSWgQSV/sFH6SJt0nZnCZbPKMZfOWZB4Z7jpVrf5wR6dU/9zdasDVzzBlSqds6tbss9aa6X7/9uf9+onw9gf5vv2hmMciWHqL8tcX5B2PGUx4vn20wP9/b5/mP/+7/thTosHPZY+1qT+03q3wYnqyzGn9GVOj/8nTDzNWBKIj2WztlRr3lf4wnzN3vLFcwrbJV9JgpTrtH0HKplOQ+S7qjqFr7NIsKdebrJJcSZIdyPU4DrCgmze7nLJ8WbX5/3Ah+s4GPQuDuIxEfolQ204H3OsCBeulgKhxhq5mRW3l4Ogx5t4WJR2VoJhJpp5WZVnVwNkJxv8AMN0aaFwUa1876oL62EA022MHGKD/RfCuj1+lBNRrBf1MyG6TMGUYs2R+E+cQGzO73/i9E/H/x/i9H3j9AHNOJbJ/omTJaN/ra/vl3W+F1vLrGfldWZrjTZ9Jf6r2FMqsh9He7wf40ejan/qqNONb0/Zf7EaQORmpfUQ/alwo9O/theLzrdXPNj2/T6PAn9uY8EfYqSek1GH8jBGC7Enqsn41lEiNtKgdLPh9XESDvKQTIvRzUkyLsqwbKsB7WmCHupAnJv5vbJUvLVRgXbTg/M0RPQxwQ6zw4t0JIxx+T+iRE8gwTQwYyQb9JExzw/d9Je7ZVC0EhlolgjYdLmUdyj+1RVKyJuX3er8KWCXw3jgCrygQZwQzVmWK2U7tLzEDqCZgWBz5jHa5WAzwoGXkEmbljc/ODi8ttJ7xXKf8x3cfxcUgLeXxTwiLyAjBo6CiyD9B+Acn14OWSbVPxe9LqfyGB2BmAI2melyBP29wRmAb7MOrLpR7m/hRlY77mJo6laUqBdR0lLm7jrkAPpzW+nqGXnLn+5OZzlB55dT2+Mc6EH7WX6FnuSJb0YT6LU3IG0aehkutm9M6F1AdYbTvGgG1wFPA4KQgjiBiFx202fRwzPgPfmG6hTYhHk7dSaMEGGPW01IutGUrwqep4ktirMMfgEcY/fvl4sSH/EnP1Nups201sjiVfGQWYJbXtc7A9zKdRq1BK1xrtZy0dQ1BJ7G7EkE5i+MEwJ9hDR5A+RgE4jJ+xA5eGiHiZv+pQswTAoDhvLvPUMrHtdU4IEV1qUKWHgfWKximeHtBt8Pk5aas6jRdgSNDTtQMeTAVfO8jKjhC7tv7+A6ix+Mkf6D1uXZ9pJu/VpZacUiQiKlCsfFvA0FB8idLMqXz5DaddtHHydDq3LipF0qINxZx/rd1MKM2qJN+4Lu/czgyA/zmxBJsEspPqV8dKuRfdXINbZ4UxgYLuLYG/+uEEwOxhPbMUAHCTCHWTpNnmwcofdCOPzdbCXFlQJ/J8xGt9u+CwjILEGhP/j9LRTNl9/AMTaZvG9BWg9vlum7kyb6znz/TfZ9AjFseXDQbr7WlmHMczmHtqL+w8XB1j/ZzUrT1Q+6XwJWYqcx2A344xh6s4ZDhJTkvfAi4Gbe7XzAWKzEsQVSHbp9/LZjXBDe29uiDCcnIbmz5CdhKL7w8ds3zf9l7o5nA/4bDSgmRYhWKPQbjas4FW1NGas5EJ1sLRfLsiLLTkDO+eT09eghJZCj71BFZNFqAfdmjwTv1jC012UVCMPY7VzxB9Bz0MUPmOrRDWZw/jOdEeQ96YG5uj/2/m+eS2zFsyAUDpmo1d+Exni16bQ3LOabcMwa6n2KLrKY2arUGOJnM7R2DmJs3nQLdh0uJnIZz7q1fUSUzGRUxOCDBDNk8Hw1W7ShvQackkKQcxBzlPKokcxVbVnM4mJUjuHM94v2DUbbCrbmRqWQbG+y0fIClwUjucxvWaNBHsIBcQjKnlg2uRbaHbGGkXEwgT/25KWBx8NFsGVqKO9a89E/jiACzGLbcfjJvY4lR9cnKjkmMBnwNJJX/Jwfye8Xp/ybWw/hnQF6KHTv2ifV2g6Cg2QKHsZ2CveCJrzOWmUKqsmG+eD6ZrSSRRd8OXhNzQ4Cpck+SOPqugKLzNR5eGsEhiv5Wgj+OOvHAK94hQe+BibuOHz1nwB1W+XzREezQW6uyGDfuKOhRv5jSFE0MX80tmnYjQYXR7Np7mMlXdpyBCFI/DzJWM/r6lKlTXcfjr8pAY81eBWEDumguDkmcghzUaYOjeKGWXeOmLZ2ckHrTfh6ENYJ+M3wCFsYhjFkBb4kOu24rkTqfFN9XFM/5FvbspFJzXLUi9gLNbqFMMTZ8nT297ykmI+BqBKHf/WG0qrbcH8cAx4AI2AfGaE1pT4WakNDSNq5aiz77iYs9ctoFsIJP4JLu7ODI9IURtvG6P3eq4rW8ANHZykBnvBCXUmM9I0smP/RCq3j0BUAsD1AFIKiVjYAOxjM8MvOXi75BwTVHfBQvP/3mSp4v4G0vIJdbytscXkm6DNVyJdhKJExVZkYj3yuHrT4OkDskSWB3niiG/6Bj9k1KYYbq+IwXT38MIxTMIbrVP6ojB/3J4eqN8uBC9QKi0UeXSrpfCYLphv+vgcYXXkvqCdaZEqXayoy8RUm2JySYtU4Z7uP+BWycpb7M
/fNcs4j3I47kp6inUM/knZIVMfV8F4yGKVR4JWxddiCBKQqO6YahqVvQve3xcLHnJ2qy7DV1ptP8VGiVMLoZEUamR7uCkMXP2Kq9gWRleMPCTUfzMst/tMco76cpHAEsIW9nabpKTZxYVse194t1ydHhvhRLssWJjfnI68KbVoagwyalgSTL4R7zljfUC4CtTqzaRTL5ExW4SIr4sLvW5xXOL0DCxrBLniTv5L+5OIFm+60y7dJVi36lWfMx4P8ecrRzNW+TRsdpDgx/O0GNbcYI0FOLPx5Om2flVpmBO6CBeNaR9pk05Nfq3DFv5P4Mp9I6pF4jmPXUUqCJr9O/WIZB8nhS3WMzq5CSFlwCHIMCr203Z92HwZs8Xt81vfiHqdCom9NfcAIyzs4ZGD18Jd0b4aFTItAh1IA5NgVjPwWaDOpi4fcQph9JsGpiY5bm/XbAaKPZZa9V4aq4CfH1/yZO2rjSaG/ePVJj8Ec6Bb5MwLKs7ienw/0p8NvCOZ8Uyg6IFIaHz8RPJuaGO3anSzb0DApH7MjrIXo336lCfO092lcMmZi5eoVMwGERr8dKFfRIgg2q5VWbsx67JNYsGH0grhs1H58u2FMLYrNB6ftcHxWOFxAt2vaawu4PX7c2DoQTIQkAI6gRyvxHlMDQBPZG4YdWjfqeZK3ubvKZu240ub5aoifQcaTagCBQfWSFxYDytm01n25Lj+LQUF+ZFEVH8k1jQBl548USuYPIvKXf5YRGAUqrlXm2sYvrOew6q2pKw8QAX3ha5et+mQGAKDAbbN9/KuJ9Or1lengRHl17hUpX+lkvIq2WNh0V2R5CuKITd2yQjutmMKnVjDLWGo2zMn236/A8lowJp/cm3fhi/MTBvahkBxlsXXCeyXAa6OB2SCYIbaQKkbSil0rb0RgDsXl5XY4husMgTYUesGGdDBevYAiM/qVBImEMEcnhQGbLc66ULmPH4iZicE1BWXFpblIhsvpAhAzjTH0U6aXbJueG/bOJDR/HtUo+E3QZxM9UsVighsyyZxdehI2CqbqX6JlSUxPzMb1pWm6kl/q3PrBRVHk1+Q3g2/1dJ7LaBAHmBYP57RMbCGRIs5FVy1CSVLYVGHKRLELH5FCFUXyxiwwcdg8pyk9/TjP+Du3Tr+fxsSjQL4Au76nIaMPvpRWt12QJd2YZfPtPsRWevASBF3ojx3yNF+90ncpNIwRtUzLf+YAtmHDMR39mxSlo9vQiRDXezNWrgfAII0ChufECXgoudRJUQcomNUADepEiWqurmFBwFniW2CLecZRqBIUp+b7Tu3NXY3voF6ATFD6FzcPBX9BOvFqFn1dUQpfrNEjF+rzofNG0cLjiXsuz24l0l/DLDndQfuOlTiuwGYT30gvZmHO5bF3ZiRvTcQDhL5/57luL5+6BsCXSKV6yAkN95j5jZlap+T1PipuGvYAdFh6YqsagHxN7PylG0loJgGFQp+34W8tZNjxf2rUcVwraLT21jmu4fAp1zIF8k2HvkjaTQgkJsNlowyX8gGx96dQWK2d4X9q5szUzWMUD59c1YlbcucOrCgwzNifuE1X7tc4sQFMrcBwt1EAa8fXgYbL15bDn1H5rkOYmKI4CG652c1iuDWGELWNpIQ4QjOba60r68LmkYqPlLm94an03PNbbgK4yfuigXfdOOwhxzaWJQrvZPw9IiC9iHxdTkzdN5qRLshlqakcjhXs+63mlL9w9M08ANCoh0ty/dh0VVl5zzjD56o+i0GbKQO5QBBdnUmefX5dg8FN5oHEmebo5FFjOeY+neeoCoAaFvyfiQVjFaZFlX1Q6xIFz1J/+l6jmQex4GgmWYUWq+DK7KjHVoP/PGiUbUQ30LFAfsOe7n/dYoGyIVFXuhkO9cbOt3gobZC27VpzJJsQ+UZx1gJJw48IGa29HIvV4Hrk6IWfuv/Yi6MBa8tItJnur+7nGckZXikOKXrXNcsk2V8JwQxHe23gWCJzgSapg/D3ifbTgzypXXGi7iO/L6qLUG53LbwoAXkCiZFhF0T6JLfawYV9k+94hMDYIJN/WF5E4aIrAQ31eo3odVUztkwwr0aNr3j5daAD4KsDkJaPauQ3AMQFxBCdtGN0nj9JmmVnerb2Okzotu1M/XwZTjZYaM78W1jB5WRywOqvXs/d4rBn8OJl7AoGelMwp2Ib/Cq17BULZy3ShGMPf/zdJE+eyNur9Wc9egt39PtK//H9lOCEQ3o6ninge6kMvbUdwlySB99dePQ9LYOfxFj0bmYO/bdl2stfuR7AsMR9UOVakc9kUso1GiSkY1bmxz15p9dAnahw/l1sEqCydi8zt947fvYmsIEzE9+KmnIkR/p2o+oSg4AbT3uNT6S47Y8o3+Yun2RBZ7s2+2hQo2lVv+EL/z5qoyb7kc+YBbcutseoiypWeQwPfn3e3n0pH4JRyjI/QGTLu/19uJ39UtUL7xxw0ujeslFSZqECfBPlxZniWB+bVu7cAF6FhB1kRzeM5SVEkPzGMgdyJD4bDqEyqk/+d0JsIl7Fi4WW6Bvkc+FqLLXuPF64RBxVGE35CVT6MV0AE+BBBNYEuiV/AAAZBsJ1f6lBwjcFcmfQJJZWaJ7iWXeiluhqUpP4DKgydJB9X8s+91+h51NkSgB8ikMrtaXgFC9avwJuUc6mZs8sK+onoI1lHlV/NoZL3+WNXtbQp8XkjfSTBhgumRaOi0Cxiw0VOgiMZ8C4Jsuv16YOfViHu0bGTDtx9e/+bmZoNCOCnW7f+Nhks33QYw6aggB3RoqY+6KD9pvmpxNh0aUSeLfgwTAcuGkR8YmdlvmmGxMbZrhDa63nEILVMBI4wQss6hEgWi1ycxeFEiAyc71GcRllBLIzJUuRVBUb4Ih2/Ove8HHewHSW+db5z10OZNWC/MQbDFCsoRi2FHbua/IHTu0hxrTrkX3ra81HCT1Vvpb2ep7fN0JkP5ZNzWP4rdS4cb/opDpaOQ3c9gGOLRpTvCtEhKmqKqPSXzmHut+hOWJa0+U4/m0EmD1FdmrpAnTFw2n2LCPaTvniecUGPT/7r8+Zr74/nB2udO57cAdrSiJBifPjp66aQEeNB1N9v2lYnDsnOI7BWC6ZZ9Rw7Hse7KvwMZLX4xj6r5e7dMgdtsJJLC5+M0n5D3QE3nrrCzTb/jQKQY7BeybJQEOag/y0snxnm/aqN+P3N9sWIMaQX7jNE2ToOPrBqIcBHJQB+SnSCTqdenast4cs7JA+GX+ds2iJouiaQIkZ/aAG9LROzZw+kswgEAnHW4WZoWCPK8xONPE4iECaZYY7OHSX7qpPLauaVBUjZS1VxUk/Lxb6ydAjSGdGSHuuqgi0RUYaznW7tP1SVJ+zS4sqJDkEMEXStFRZ+hHiFtthmAKQWCzN1RcuYFg5k1dTXcDL9OuFbGt6XfZSmVvl78R9nNCTFPSn0Z8EGbSVrPFGsus+VV8vZyQ1p+FxIL0PdFqX4Ev4mfwsDUzCw6a675gg+hlCy74YVyvQe4XmJ8vuSpaNBvbSSIybvzCPA8jg9dXq6rRUKMcQlc3J
yhPOA8LsU8Oy8h+I7Dpz/Eff6ri3+uFNkeuLy6+b7l9YS7J7V7nvbXWYhz1HCONEbJbQ5euLrNI4MVyieSAvwxVZfJF+dH60ZhsQs0cZQXzdSc5exMvNlw9YdB/OSIAojP5RI60cegCl2zVaPwWITjBUwHsvCjxQl6WJBrBxxGdWByUJ0DgtMuseybVnThHzlxn1n49n+Eh8F1MlemqFCcAiAbjihU9F2ELgfT/pe/tjhFf318MWUWf2HRKCHaaZB4TT9iL+S0E2dtNkaUQw/s5N5nlZI9bfE9AVDr7JJpO5Y7bD9dofKqM/y4cgcpHG8AFG6MNktHKSqUMGieZYz2AXgx/5pjjzv9EEhsdY9VdIOcdLd2hCP7X6O3Fl/0eIlMqAOkqpJW5tMRlmZNQoEnnyZtv1A3Jlk8DdPzuevAzlfWmKroM4D4c+8O9IaBK7NwSllV6O+sKbGl+CLtSOykWS7C6cZ1XYQaijOfiDt/uw8qE2tYErtMeju0oF5pPgb1rm2rCRCHpYqxGFHGnV+Lvmy1/rmvYTKTz92IVSYx3WT8j8msixqj92u6n6ptGyh2R3+ebeCcRDIKN26uF5IhGwYwnTb6qb0hu1YYXEU8yTjtJPYD45MWXr83BeLZa9QN+Uqkd9yz6cyspsrqO780WbQI/ZCu4DF88fp4GfVHQP4j7NivZNasGjKndD7z7Xegjx4GWaN6bT2q90dSLK1Gq/m0sQTa9o856jmOywOdwyHKsctCqUf4bAbcAEC9WS+I24276I7K7h+GtcMoyJwYlleapQLRswsTXQBDEKDn4gbJ3X1vtzGrucml4cND1Hup+H3c+nZSUJiUxIFAD6UwOsIFySl8EDXKL0UqEuYPIe5RcsAP3IJ/4cNn49HaqMlV81DMYroIpvgmS/D3OJQpEVPzGk3JZ3keJAM3o2ZKEgEl9zeFTBZCxMJhIY+gXDU3tQje5mlKvNrdN4V4tBvp5rsuG64UjHs6doF8mBLUZdx5lKF11ddHNdTC+4kiehHm+ksiuimkuvE0o8HGgN3imfj9/lDIJ9ozPLk2Nakzo5Vd5ktbk9X0mNURFrOPat/7JhQyj+oO9zl/quiioSuEurKOynnX3UqWOdSQMHrdsesPJAZK3SVd30wisIOzpfSwatn4znFZvPbBtRyiSvgqgkFjsQYIeDaar9Aeh4NWAW6jNT9YPjCjzKNW9FiUjfM+T6vMPY+lED69tGYZkpt0wllVXOMI4WYZj5IhycuIFHtyZbXH9YJ6U9E7YvBp4j0FIsJMkX+eUISDFWpVdt/k5JPb6AGlHONnCQ3o6uiyI6nkEt6qu3sMnHPLpsGcTvPJ+acrxQ2dx9rRAvuoGFbq7KytZPvd1ph7GujuqyY0hlIbyVsVsbBePnAm+hBvMsIT9rq4uzr/I1719qioMoMsr5uRV19XNWUd3DCZyHeGa6nynt3OtElhWqitq0sUitfd1VaGLwSL5YYiTug+FidzBBHeECiRKcKdPH7pEB1X+Nb2HH97NLFx4thfEy1Vau/kLAr3Q/7m2j8b6ZWywUoAhjHsSUDJPwW+VdLpmFUZ23YiLoCJm3WI7IuleqtOHxD/6O0rhZWlc626Kr9otS/vkDIwbeRRWYn06/k7x9yCCLGex22w6Gj98TwOvE08b2CsIDE5X6YNyqemkGGHEMfo4OyUcspbS1uA+LOIXAp8BJAttvyx7hUyFmJz4tx84GKCdrTtczxQLhkgWBF6eLEFePJ2f63L7uJRiWDlud4mi0YZ5zP69wg4H5yPHM3Verw1TjYWhU26KLUnppFaavQysaVezv/PC9/C5f4nCm3t8aDCJ7RSkwLb/JCfpcsba1+KmJEE1B+0/tlEdjdbljAxFl4ecfZcz2PimFx3I6aP4InRN+8EZ7sEB3LK0qr/1kyewf8ZEmqmlmHzf/ZgsjxjxOO8Wfc1IaDaEhw/Ma5RP37WeakM0KVXExnzV+rerYcOl+mR4ctdB61/XUv8iwvZO8hJdLFNOponZ9il6a+b2g7EPXGOFKp46j8U9dVBp4BNjs/S01jOG7mqYMJsXEVPi8Rqag36rLy1LvhhR2flnv0lbQSX7d/p7SOCLKuUwzulZhPbD+qS2nPEZoDSdbzGpKHj/opJYXcJB8cHTAb/F+K86hUQhSPLWvWExauVt3NKfidXLH+5/qZyKqGcYF8J+OlNd8+2hvU/StOrkkjOoVlrqkXdnPdwSk0Xq5dKyv5TzkCLv2/L7HHBbFnRSOduML5dqnkEGva1lLjxu6/bqL6gmuAxoocX13lbzKrM/SWWV1b+e6jdV+T2Qu5Vupp/NfaQvYjkf2HX9jAxXsZhNgSjO57HqGKLZfvGQmBgNz0rITx5ieCkbRit9OdKJfxAbs89yspf+xk57TGOsVbuY0tvs79cl0/7Z4MCvch+D8F3oZ5ZsPyGf+eywZqzaZJCB/DSh+g9KYOs39gk6rGD1uPS3XJygYgSB6hV+wfKsYT/kLEMNtxb2jyjFseXTtn1nv4HHupiO1AwrXn4+j2VuRUZz8nKVrunNousvCwcaQ888GvW5b2q13BRqS0G4f9txysx4hlHLoREhUsDtrN5vwuzvBFSn95aKotJuO9+YgXWQkE9F12pAIzIz847UdVw11kJy0siYWf23j6zVc4L5AAsv8Xji2qkx9V7/ayvQwo7QosaJ8PlhQSt7dappDO3eKAt9/8iF/EaDNMCFgn5Eq3xBDW3hrQmKan+XAMEY0w4oucXhcVvtW8kthqAwjqNLZc9q9YjKuBQR2GpdhG0fLNBtex+qHt4k30E0BwbZ/yIGvqWPl5J6QZRxCG4DYN71NiXWm6t/uaa4VATF0/UiFcVGDhciI8B/696HxeLmX+XaqkUJ77jqxq+PGJePQBSHMxC7sQk/vj2qIrbK5Xa4ViEtzQ6NmYTkZrusfM5yvU17FDjUi+eTjaXZe4tWSiNvhgg1MqzPJY70PHXVQPQ9QaOXv4DQYw9C0rGNinE5umt3IhPMkSb0JQCTfuD6k072ZmX5eVV2HuVkUfLitN9Ic/jbzTS7Yb75Tk8EUxixLJgBm2Frs7rjuu1VUcvwSS2qg25rgg/+u5WJW5Uw+hTNZJRKiCLP0soQ8ktsH9/MnbB4NmIU+O/L81QuBiRQojSaz23iaxFzufiC+ufUWRkOsGfcMWbSo1D5KrTJkhpPAfGYYCV/+0qZbu8+25+fN6aiirH42IHdiJANTGZX+MEPzTlPmE/sidUJjkI++Mq8QflGxdX9agHfhDaeP40rkBhI/JsRfUMpAt93ocGsoSlEkEDTilL2G7bN5K9kfLgHwtrnJIx2BGbjRFPqli/BdEQZNmwKlkTn39y4hIETJbiddq3s7oyrlw4lnTyP/4dlw5MdedvDvDOnFWnZ61qlxAER2cl/Fj7IYyHKugiFtWO3wmJGNviN6/yWNxxsD4VXl7cFEsgT+eBTubU52UsG/zCni48GfqHPjliFf8SOGyLIu/iPy5ow0Qt7rSKl1dneLHwx7oDN4Xit3lF+9TlxUOqxvsz99YOtnaQo
sSCjqCkWbaHPk13jJj2IkclqljgMXSV7YZ2n7IWN51e8GzeCaJ8XlGZzP9ycBYx1WG8Kw7fRhAd7QybO6l6jR2OZ5nW2QkVYMbkcVvXbo7o6TQun1CYMAemIiTDKj9/Jac4TVT+iPeIDgHKFSB/7tFAoSmmEC5FWjoriGDjRur5b7hbCBw+Yli/ELnMbhWRnQ9NhFFD9zDnmipixyKfwfacbSye7ftvu2BENxVAaLS9AnBRDHP3lIaDs/P8fcuuOU4TjR7ug6XlGAywgjinOzeCrRHcke/faV5B2oS517vx/BKrUCX+Q97tKR/LYi2r357aydEvv+1/YULT2UXJXZlCz14ZO2dVZlw77eb4U/BwF4Z4Ow5u/qo490rySytJLwqKRfavGrWX87HOEwmqW12qT5g6AVFc3q4hjwjstzUEbVjXzcy6HJQzzoBtio9VtjyEpLL3q2zksnrt9298bR/Euauvfud/PlKcWxTva/p+HLz8tM1/vku3EBBzAtdXPiteN1tKYji4egCe76fXF9o9xO+3U+Xyo4DhP/vARFmmOGF8RDx4JJJ/CIXFffWxkzAhrhLucxP99dOmXBi5Owi9/7tBatI9/0FwvVMPwObBDKt1TNw+vvxxR1n94B2SeJjUk05HSYQFKFgtsQt9av2qivMCyYYJ/T5WWk8ydKQf2tm7eiGcUscwN4mII2O3qFap/warG6hv8WUephY7px6J5ryJ8jYjCEwkT8K5oln31lsi1e54AaFMWMMvCtl43ZrfALz0AHCXwjqr59VRgcgy/kGQn4dgENHzj1FhRSiq0mLfBN0bSoii76otT3BAjSPbWub6NBybopOLFlb31cODr3fBb/2PQrsp2wjlrFVrAOmNC8rsyLAQ9k63jU2Gir4JINd6ZFW+4zChzho/m+wvSUBBqkKdrYM49dZ1ptlIN/ZO2LgeMF6faapQZ7xCBItmnUUShqaYW7goELxi7rrNBtKdsh+PNQSCYKHxaW6UIh1YAu/vbVOkWAaQ5C3BZTOVy2LS9VFOyr059mazjSL9F6l8Q92qHW0C+JSdMzv/CztEZef0rPCK7hY5xdoDjy7Pc2h7/PxtoSkuHMu6sEsAT4vOXItvdyqkp87vad7QQI1b7F6J1YeTEJnYnqkrDZDkKHENJBJcMGUgK4JWTlf93QjB1mY1iofgLTAIoAKIwrgXj+Ne/zIlDcBGVtnOHckjEALgwTBjnDczkq9w7ef9siOM2hCLirAl65yut2XAm4CqgrCIswrPj2Gd+1ASHZ7RDqtiFGKfT5CszAHeerz+dAWPLH/k1N6Ks0OVgo+5nN45IkXOPfJS6Wg/5ohDAIYDa4b51mZKnM5FhFF5nBUeE4cRfYO2ndbDWbz2iwgJvLOfDjS01RaO3wd99ilDrGaDor1C9u4UD6ASRq4IfG5SMpYzGm+4OJPzL3E2CcSVl41C9KUxknPWPTGKzLQKpDhkpfHYaQru34GyYG6ntD7F8PgTq4mvaGJiKCVsCiPE+ivsajW6gpv8tq6M+8tC8Pq3apprQMzjKnobWvL1v951x0CxaesWsSxYF9SqOOwG8OvWkFw3DfFfU+BX4vX5At8NF4ZtOfBBnU2XV7xdAXhF9ngQPxcVIu6s/Yn0V1kGNODSehuNudEDQWK6aVcKISKQWZg37JejS5IOFQo7NQMCBFxmOtpelRpe1ry9/a/NRFxtpvUQ9IFwMALMtLKnK1w7timZFjS71QHSha+6uiZZyNnuC52Anpvtg3rx+KUhr6aFR6GjU+prsOw+Wg9IytOxCvXqwmqPZtL6VZCo3e3tuoREwb+7ePoOY1S/os1K8P6QMD+6n9KhjvxM/tMX19VMDKknQa5uui93yUZxxC0DqA610t46tlrOmTMfgDETlFwU9dHbm8f3OS4iM45nAvuNruLWmy3ANLZF70vu78yJ+3OgZqrCHbtDcPkgjLMrpwCsNI0Plpk6s2h1+GqAC76u2VRObvXY8RIC2HfuHYjfL1ltn4iGsFi2220MVXeL3lOf5QKsdEwEZz810kiBh1w7HUstNOGezQtOs2EtVl9HKVX7eWJEG6bSkre5oL1u9J/dq12dxLTt2wXzkxXX95/HH0U2B3mgBMrgi4jOWHMOjoEX3rnmDanYRzoMNt8l1AWLvkdYOcPmeBifxIW/hqreK/hRPHbc05GJd3taKc6esze7os3CvgDNe7fnfbpwPwznDi+/DgZtztmekf5wK7fBDKu4pUskD/rB+NPiqZj4OJ0M08adJGx/L5NYTFK8MilRT20l1jMkm4HBBxcNYA8RZuNCXCHz8qvg4gJsG13m/xl4yK6QVi5/jUqdh/pUvZ5lswXz9G+qjWRoLzsQnGf/Z+IpH+jDnlib2WaSTafJU3bCwDsn4hXJZRY4yPH8h5zN5SoQzfgZNH5+n5t8wdY+iFx6yYzihO/jl5v60b77Zs4HgbCoCtDxTBW2QKldjbR2T/WKIoYThmQrqxUwxhdI8Gjw7tHb8vtdhyeqMNykSGWu0CFhqf8xbQd7WMidZ6Lykt6WdkdjQqS+6OatC4vgsnE01sNIsVBYbp93t1c3MXxQk7hY+FXBP3yiGY0q1Ff+4WhnUeI394nQfI8Trmgmo4cXjLdm/fUcH6aLkxhr9wcN4tKXYz7GAgZ9kGPLUtT1oOz/OTaGWsx9boFbHrk3z4TM10sDuOywaWif+u4a9r6iXhktgIfN/1u+3jZD2vf8emMTQcZwEJ0QtZ3NQnSYtCiFJPT8tkJ1NZ9HpIzugLZkWBuyDCO96C2SI+ARSfeZ8FQZusqql0PA3kqp+gve+Ke3ocpIxyJo7jy2/abju5JmhBoA3lpvvq2P34sNtz9WXZ7uFbIdfW4IZaF97U+xxc75NvGuGkXZ5VG0t2uMyKfKbdF5EC47qxzcByq/JxrL/y78mtfvaGjVwsNCfEMBQGl3hmyJiSVUzuXxZ0z0J1r3XkxHGaBK5jsuJ4xYlsUIkuHpLdI7jJKuP2MdDg2GABhQXXWRPiWAknSHyh+/QbyHMwDMe73OMAz2RkwX27fV/7MNq3nYICPXrdJryq9tAcTfMge+ehVh8HEfO36erpogd4vuyrhLLWBDOnUtPnwq7nFYPaZ9Xh2fNhMU9nCZf0Oo7dB9g5wzzGTQMGdhfGbqeD4D5wpM01wXh1caa0I8cJ7si8g83yZsfCQnOfX+9yMj6oBQdqMDASs89uFzGrIqdtTdUt0QGmZGomx3TsbgfHcOcgjdwiY9bgoWDYLQE3HxTYjqayYQXIyXfbqicuxi98Cbb1fq41J53nQ/CG0FGt9xkfpiq3t7r/LqyMjgAs5tB1XlUH0SixJwN8lki1Qc/mM8splMbeZKVtJ8uexSWaIl0mo5vUMe/tuN4Fgqp3OAqI8yJF2vUWN6hQXqODbc2S36yjuq7Bm7WqnCabaqKRxQKg1zJMaMUR/wNBCH5+prhxjFlMqYSGUk8dQvdKnViZzlTBT3MdFzNF1gsNrW1sAjrg4czMRC46P07p7IseuzS6ZW0bo2a5KgIEPRamImRff09H/s
QakzIVRmTc+FeVKgtHfbch1LDgo03b7MCO0WjkMBkH3ddqwDc19FAgIP3B4dfXDb0uyey9+r4LJuyhjTuIuTlDXYc2YmociJckZ+zfQKOFBvNapdlzFqcj10f1/C9ELoLoqc/jAcT1n/u1jYJxmDDwRfDWphLlDIod3FbwVNzfj1hgXzh0c0wCHkfiDsDLbDc/I4sJwK00mcncV2iPaBpR9t2L4761W9OKWaKA42uU00QDd0q/X3rkBLVwBOuhpVi6PzU4aWZFxSt/VQ1OOHyIDCXVwQ4pVhc2CiN9jGHsOcslARPIIx18QwOBbOLmJbS8HNdwwWxq1zBiS2/IeOpS4QgPrzgYX9b0XanErJKU5jhtFLRVH2cVJmIjIz7XH+WKBBaZ0IL2ptze1f7uf9nlbGxXi3/7pT/dwRqoltUhwvDycYvRQdb2jyGEQ/uV3zvoSXosiCj92O/6Comz9ILo2tECY7I/vClB6rf6LuxtJrNvh5uH6QJ8YLPSFJZ99QPBxHqsFEoudX0gqxOPRAq7mzIB9EgujX28f4HPRsQX4s13W4ItFVcble+2MTo5VJUq+jylTBn7B6y/dag1LaNQG3tbSbNScq4XVM6v/JrehYrDffOuMmg1k2YaLk4JdVNhEmIFdPswRAOuZJxuM+VAy/D7o40QwO3rTAjz6jEKct91CcGbXM9WIuwSsB1GsbT9QbeCGTjlEG6RjFloOUn4iGp1i5TBDj9j2SUyMhC8C4/+lwpKglC3CrI3sSn2ObOlvEQxOG6I/PsxFDIU3+GnJgREWdeRTbg0yLd5VJQopTui23/89/ETe+F3PnVZ3+eEvJvPFvNemDk+2t+WN+NoYXt92ZL7kSEgJjHABLhzjZO3yuF6qkh0EV5RXCQOddT4wuaV5sSzQCi9Bxp5/dpYQL5Kfr0O60jOObe/3fPTVdns91OOcPp0ShxeQpqMiDELqA6Z3Ue561i6xF3pmLKxh6VJQjV3dVxTdUC+eOG+e8/GV8ToCbUN65eC/CXtgi1xzQl+UuWqx779yH052LBkohWQn9qI0snV/9Tmw054vloPx6V15zBcm5mnWExzLsKJgh8cyx2aywdMpGXZvzohT/LEtR4dt9oAPSxnPe4vQxsfF8VXIxOOOlM6N4GsM3TcoDvbTUK8Gm1S4a3jP20XKZjIXCO8mrBqpwI8fDWf2s+1LUXJhmibjF7+MaEl7g4VTNfY79tsVAOczXmiVD5FiUN3Elw/MoASeOy6Nq9fAzzUbhXj6dw7WkfBWuTscdsrSk72rqN7rUM1+3AJ3zStaJxFTfGPo2fwFK1vL8iB7nhivVY4dr/Sco+fDmPkqP113z9+6dHZuE8gsFLH0fCuysQU72JDKy6WzCm4u24RaMWcrof7BJrqVXMYbb+jol94zsqHPbv2z7DE7mMxHSeaaTsCb1wmS7f/trzcKJs/BGSVSI7Tq2LE/+ufXZzK0uswfwTc4wzf+3MYjI0+CL00flT51JAHebZ5MmsY5rMaUBcMQ5t3R6wyfNh0DCdmGR7KNPfWRfyOzQiYEaOoiPhBei7g0l+qKMbC0DnhlaOif3g/1QmMIpvxygWzt+JW5AQqZA97f8wG6OcdfBtYFx4p1/nJcuLoP3e6LxlZHPNbCty3/sZ7HpOn+IPwcRUwnMICzDfdmYTlBPW0nCx3eo+xiTbeDQHRGivlz8foBVvwI0WIDzALLghVFrGmaJWtvzo3EQyHfQASv5WNe7eZFLK3jHHSZ8gkeuGXTHE0YjA27SsDeSvWemxMRKsWbhT/UOdb88C/TW18/obCCV5UUBggQhjqi6+BlgqJdWDQQTYDLH9+g8MWjEZ1tAIJxRVoB4E07wQ7Tdy3KS9K3+3zOf7axHcRZvd0ibJt530sku6stlKGLJrzAP6onq72+zR+dkmLywmogT3rsFGNF/v1o2xJMZ36QiBjhuKfDh74nHpYuwMCSn6Qtf5x+6GLP3QcDymwIaSU3OvmAAnmvLzyoyzxiUWyTysG83xAuf34SCZjwzuFzo6HbEhm/yzPQwkHh+Wt9425vSvbUEuu+HVS8CchA6y2g4ua0RhIyR211oFzaI3fVWk1RPmuDA04/EbokHsuP7lpWNHwEG/d0i/coL5WTdyzt4BJ8OHUHRuy2ttytzl0WiWlYLAuUgcZZCgk8B2Hneg6Qssy7Ounno+i3SxfMvWMxtojTlVnxiMhlrkq7TETyAMKjJaXdnesrCPzFlPY6gllYl3/dkqqKuN/EBKjNj7TR93Ius74Bmh2/vTUADZ7D8nAyxixFaULXY2uTbRK04R5tQHeAC9MwuftQzm7X3csOBbXVtgKh6e4mPRWCH8/+v4ZYPKaw4kzxhtJ/iEjEYdxMH8L/ZSVvx8PZtEDPeG7tsvdRPOaAG8dyEGmRhLksiiYv+8feiqbRLiWQQM8o7SABnbluPT1KotINLAIw57Lq7aWTx37qqincaaXlL/99jCkaJQe7A2UF9zS+bWBsj3eHRdzuNqe1zb9o/f4Kjjx3Nlg8ByyuswIGFl41nTcqLWZYYCjUKN3Fzs7ws+7H4DVZSmQmz0bzfv3uyr/yYx9+GDepnaVPjKdgFJbCS+xMdA3RlF5m6f2M2fX+PVokqvaRwoR1+aIXGT+1njwlSmK+9htO6XHAEU18WBa+wd822G8c7JP/gG1T3gthST87VJauvDqmQsZEFPC2bcMQMDxoRmt6s20w3krkTr06xoOibagw//0P6MSXmtxTKyErYPq3fcT0DvPisqVyfyrFvlfKJ/6hZHDGuzm3KMfqu/TAR5kVFtM6i2lUixgwJWdQ/jDWg/bYEgnig83ZXDvBUe9u3n31tSrPa1ACpAPWQ6fr/oq0cExYO0gswAXJFEWtFsB2PEu4kuP7xQ4bKCHkEuzrf4MbBecjCb8fzbGscP7o3oGk13eoNJAogl8BRYoDpePADnQhh3Z9hsOksEwozCQZA6OEYE30lEXI1D418BPl6G1+wMvaZ514vf5Y+RS5hkd/vutFL9Sf0v57lhmoUEDr4l0yjnjPKzrUs34dtoXWjsayRF8hXEukjaR4DuaU61GlFLjfixRWr74G9P2id9UuRbl9EZfhTYn4H801uHs5dtjdmgDkoLgfDJ7EOhjcrqovcRoKUbDpVAPtsw6tbN+PGg3JYKlNQnoFgqsibT9JDCaghcEfvIwvFflTKHLFpKijTP5KghYy4FCNZQPJ6/qN//JdpKKUaIyKxvlnkGuh68g+KrggyHrpP5STjgsHdIno/CBHLcv/nJxftz+UeNadglpBpneeKlEO6GYJtVsaptJHtM/+tYoMY+nLZ+e3235ACE7r8jL6I4QSQyNpEW2A29wc1Fkki9KFRapP5M/23lgkFIDM6PwmqEXK2rPPtST+qgtlSRyipXUhr6jwjwuPauHCvjqXZUAuCtUsMgKhv6jmNpp/kQwxYztgrw0w5SY71OkZ9zOu5xGwto7qF5Gy9lfO2YJdzm/KNA9Y6RLGU5b9L3Cp0cD+Ro4/3ci4BKefi0yZrwP+xl1e3gLJWL4aAWe1G6Pc
mOcFqIjRho9L7NE53/VWEMhpJiB9nZpMYERYuvdEdF7pnG8jPJOc3z/W7J18zAi7zcnK7fEoAH5Z2sNn2jwUJ7I+2+BsB26dhyDjibFBGOZmJwVaTnf3wYClRjclunapndgwn+/PICgMENINW3J9PH+piijREdhQ1/j3WYhrrn/SUzlM13j569MqrsqVkOJpG8QcEvHT8L/Uj/SHh3mSpqGxmEU/v7RkZOlveJx/yIhCvgdwzLlJIZdEbl0ElTng1RD2v7dhWN5iEm/JTuCzlC8po1ydd55UGImPNHDzyTvdEMRuLWbQpcMs/hbDITf3s6EdRm6Gr13LFdWTkVXhGZzoIlkvqzdfcFv55yN+nO/5rt55yUPSFefa14xsvxxgVrRnxKaP847bMRbd7O5LdHxHOk6tRFgzUbeYdKGLeCkwcPxBRhqVij218PEsGiN7S/YUttvHS1Vs7dOmzBF/kDTYSDh0mktc2elM1ajLAIxo3cWVsQoFy7aAAX07y5+/Wdy89TRbOtvbJmloS++Nce5BaOYza1zaZlqCu/uED5fW4+HrPXFdNLhYEcqbTtQuezlZlndEYVaeIzIlTFxsw8rixV9GslNhvgejbE5sibHZMJOePqNzaP7sUSBYcBMyzpSytaR4nztnt5qRJpw8M06Zm5hJ8A26ApM6S780UnKJTK+nfW3TXiL+YRAIwkbUnF+obEOOTm36rZ/LoTBVnfTiwY8raxKDT0Ctbg633F7ptkfd4kQsoC+BzlElzI07L+sl2nJWtw0NA3z2PCzfAvxvLFKgDIq4e10a1q1YpJWw2UoSDKrXh1fd3Rc0faHGapAsez7uAHCBBI+TztWfUpDdLd3sOoA13Nku3k5HXTY9ARTWtXREXQOZBnXcXXodaVremU7bDIkr5jbj9a+8r13XhQTsm//fp+Bf01MhnJKMaJV0pYe3xqIjfoH4hEjQROtsVj1EabmHzs87KorCWrXm1NHB3zdBZJ40kyETtvC76+aSL+GzT+p+6caex5OqbYoSnpYd3WTBgTHNKowo6Gjpi7SmnGE0mGqi2C6P2REMvmS7E+K4Z0Jc3/6kB+0Hvx9LziuvAK3kudy66IXVfAuP5KUnvfDRIcM2Bjbaai9cz9uRVF4RCtmQTD6/93emXW7iWSJ+tf0Y3kxD4/MIEAIxCRe7mIGgQAxw69vQk5Xpu1TlVnVdnXebGnlyiNzOAyxd+whIvYXVCk+EAz0svJCqQldl1Kyrdq9q/iGVm80HlWvmOF4/4lmZs3J676I6WLYrSdDbkqNIxbI4Mj2aSlidtKTw+baoVqAASqhW2oYJsnjupMt71uLnUhFL+vQv3HLOFb3iAeXZx9bIy3o60bnaAKqj6pdFmEbxtTmjVbWK++a3XJizHR+hUJdPEz5ofpAOkDabSyBGUfWqyL73G5wL7Doa+GeJz2PLlLZxO0pjo4mk1AGrTZ+Yjz+dmRNXmUyFxqCukFwjnDsLqXzRqvh1GYTJR9pgMAE7ZUGc06x1VDWNayYGfEQF3aeDyDBAVajcDVzWb74NKIWIlbUEqFepIv/snHXB3V3VItrZD8H8hoPeW28gYOXSNrMK1ANZ4tyJfCzzSEWZxW5BfGqGQa445GqfL+4T3TKYXVbnrt8HuWJNg+jlDJ9WNVaItIv275w5+sTHpZXT+72/J7s9qwVycNpu8Mhp6ERd3pfBoy0Co/MHOyrhCMV41HZ5XLtI3M9goVWrw5LeIOSvo7NcSdGaz/RnH9k96bP309Ux+9I584nsTqpzI1wQwQh8vs2pYjHdusVpYAtdGMMwwmpw2tEuIyjT7X5FbhF4hUVO25/SpF7nk2bmaVDrHEPTgGzhcgUzt4ZZQWTZ2cnO7zm8+6N090Tr8hda0MoarmL42AmMuRYxtK8S0jpVjOpIQjJHRj5xKtT6LXUc8ciRGb7rCFlr+0eBtJ4Q1l7lS+3luv5Cxnp3K0vDtuTup8DlNG3WfKwG+MMPY9MHxYu8qQ+7HniuxN5dMAxH+fDli/SnSWufA0C7UvYNHZ04x/BlGK4mCCQ+CQaGh1J4bZdVjHoiVOorNUYnDQwYwcSd9G4Z67t6e1MavoJA/PrDq69njjGxLWZK6cY2usRx+rJZJVgdD+VvD3CMfYsZQ8d4iiZKJ/VETA9oL5+XBRQoIPD9323h72p2RXVu9Fw8EsTyA5oC7GxvaFdJpA3KfSiwpRXldVVcgSeXPxYTXCxdfD6hLHb5qRwcCrPty6Xs+msakdS5SXwOUSRdFdQFXqt2ZG3MN2Q9ug+yj2PczW8hPON50Mo5+tMzy5tzD0eu52sRiyP+RGCDJJmb0d/eVrILQktAPdQVI2gAvYJOzdPj5HnaQser0TFejQlB+upV1XojYjd3uxu9Yze25V2WZ9KRwWq875mWvx4tC3MRslYX6VBpIOtl8PIZvqoagayeAIJbMphBMZLUPq1oN4hUQ2OVHbRz0QR51ke+qMcu3cpREZonSZvyazQtA7j64EYR3eHmnQCiUFlO9jL9lAtVx/yFVrHFWQB3B25y7eNF4To4dxo96yKWdvnwEfmIXwey70uhHtPxLCMJ9VOWjLVkuIr4uJX95lVr+/g0X8QU4/6GqmHf4/Ugwnie6Te3w/+cKQe+UbqvZF6b6TeG6n3Ruq9kXpvpN4bqfdG6r2Rem+k3hup90bqvZF6b6TeG6n3Ruq9kXpvpN4bqfdG6r2Rem+k3hup90bqvZF6b6TeG6n3Ruq9kXpvpN4bqfdG6r2Rem+k3hup90bqvZF6b6TeG6n3Ruq9kXpvpN4bqfdG6r2Rem+k3n+9kXpvpN4bqfdG6r2Rem+k3hup90bqvZF6b6TeG6n3Ruq9kXpvpN4bqfdG6r2Rem+k3hup90bqvZF6b6TeG6n3Ruq9kXpvpN4bqfdG6r2Rem+k3hup9weQegQM/dmYegj0AVSPqI/7sll7vCgCxW3d9q/fEM+pHV8tgUKvz28P/YbC9+UguMDfhnI/jh5mFEKhbv3tXxA5+GmlIXhI5TM/7/OdQdz4uvnnU76j/B3NDY4X4+N4Vx4+vvbpcZ8wep3wovxNY/v5zq9fh7+g++JDdmn/AdPvUSYJ+GO2a8tmfDUyzv4XzoO3KOsjFPzcBk3bgJOGsW+r9JuDfTs1CSAKvh6gBjRBNoyr/HX8m3PB23F/b1eUpkWR436MjsHwVyqGIdj3KkZC36vYF0X48RoG/69rWNw2h8imeDzE81a1H6ZqOEZ9o2vId7qGQj9J15Q9t9O/Cfdb69RWdbay1K7/9pGqfSPQtEmYvm8XIKI6HIYy/lq8nxvc++UNkH/WUmmSp/+0nX7TDvgHzfDlWJ/W4VjO6VcX/6htfrnDBSjOb3o8jX0lBhRFPuFfX2Ropz5Of/m7X9v4u0shOPx7lxrDPk/H7y71EtffX/3flyDyxxmvY3/83FtwObZL+/K4Nehwvx6//HqQXYpyTK9dGIO/XI4zvpb6CwZ7/bUX9+0Y/sKBpUEf+6WDY+gnCqJ//WD4N7326GERnCQZ9F3XPX4DQyRKpz+IzosSn2D6
62jiS5DwVTSBfSLQDwIK9Gd1QexPJ8C//asSzLKUiOOPJJiQdARBP0aCKPlnFB/xByzoYfauv/yz7ceizdsmrIVfj37jn349R2sBOvkln3s6jtsv8gIe9OcZ4f+hdcWRr8X0P7CuBER/giAc+vL537a11O8FZv9mwCU0cZscnfavE2L9CIONo5++9a4I+n2PR/+T8RL9kzSAT98a8L3Bp4lPKPIn0wD4o/z/H/jr4+3HMqyPfGoMm/zVvL/jlb8w9D/L9JfXAd/rNPvy9YXTf7XxB5I5hDCWTc695OwDZw59wv81B/EvpM4Q+pV0MBL/Tjo4jXzgjvGfJR30D7jjv1xCgxBfD5Nh9Dfa/4fTmW8uhMPfXOgnO1j4TxgMw9SHuQl1ZOrIB7GwwBDsEZj+mP6Fkx+L4zdqhUD493pF/jTrh/8sB1gO8SGusgnH9q/kBsHz/6JYyI+yugT+tVYgHwyK0x9Ym5+WA8F/JAn661ldnP70jSSoH2R3Ceg/bHf/wE5Bfz0B4jj0CYV/M7bxY5zoP7/sf9ylfpS0fiPaX00W/Pu+8jtPiBIojSbfe8I4SSMq+kE2j/y2g8Df2zyS+l5VCPpn2byflQr+mKswydG4Q9iXYLKQCB9AiE00dL+5QdR/OVlrh+Ef+txvT/4LOecfopjE14oJfzAo+dH0IfaznPGHE9Q/tMPjBA6T6fcdPoHiNP1BQQ5GfT1V9lHoCyMfzPxjyM9q19+dlv03u6p2XBj6p13w/2TPwr4MHPw9zP3A5CM/aeini/G0Mlz/RJTk7M+JxWL3v300tvCfnJdnDqmmf7mh4n9vNv63edWPGnAmkU/oNwNaH1mdn5VaQbkIIwHW68v/gy0794yYFv/IBP23W3j+7v6c/6DFXxN54POB1D9v4fndVqDf7xb6D3b57BqwyWfpsoa1QKqUt2DXxvPVKQQnB/s3Ksf/2IRl9OMnD5Y9PhEXnKKLrO4K/udFZZ9LGP79z4yGLAm+8Kh1tWudccFa+WT3Q/LhmKbS1g/9ObJ5fwnAWZzPKp4PnogEe3IaKyO56oJF4FddVQumayl8CnsXX7d1i3O5rFmMW4OzsWlLTwYvnnLDlvdWA+tHc31kH3itXbHg4YiylLBlF+IrDVYEV3LsFDC/7zkm8MRVOZNqQiTJ1Q8PB/RguceMksHZyOru9IRt15vlan+em+4O0xJYetvfkiiDtS0ZnGLUGoOb65kf7jR2g3NS3sxh5oVWXE5rkyR5pE05JepLlDdC4vAms2ZgdXLeKv7e5IgFLfsdcRjCDvkFx5Ii1QVQs1gvPHbjzJiaFv4OCTpuyhpkpHygSPkF15mQu1xqj7Huc7Fqt8vWy5z/uF7u3IUf+BuJxYZV0NG6yOYlP07xK2loVoggVe8sNfiydEOigioazy4ZBRpqRS4CISbhpvf1XAugWM21eGJwGkueGJ+eb7t5uzVCb90oTR7B2kXvptJ+Kq+Bz9/LtF/V9srZdcq2t3tB6cpmCIuuOH2BXAX8xOktu3Fxg9ahhtwyI9AsOzgZoUadDQfUV6Kqg2M6wJQmo4ENlbVejdrpIrkpFi3RJAOJzhhDDSbiHq80ZcN4bdirscqub6n740I/KclYjvYZMkxs7TBLllUCZSJCf053dvEFPLRZ//5QtNb3kcDv2j3IeiUTGLdLE4NrxwffPVd7FTx2sexkYPI1rmX3wncMjMTsIPQuBTO0XV7MXXiEFuhll0AfLyR+FXmX0iN00/e+NarMFeVne9En+uanI+c7uFLmzwnROKKQUlDO5zEzawbekDyH9iZYqYlIkPesFkehAZRm6+TCdTx7e+KYWNb0POtdZs0BoMY97rIuIhXFgWW96+IkTRrKwoA4kHPbhxtkJCSCCvwdp88bPhaD3+obg9bTNrLUKrCF0pxPFFIg/s1KrjF7O9fVYx1vhqSkC0DeXS/LgnDyVbru9SY5jygzZkMAq4vv9NZpi7hsBHOWM1DVgDWYZu+bA9rZYW4PD4W22QTdj2FNBb0+PXL32c87xtaiXV0n8/HDlhFS364j/GCmiv4EfxCyIuRPch7I/w3nUQHnof3iPIw6yqqf7zr0RxhMFJqQxFAmBXdyBdMhHfr3HAdvfXEcN9c1PMo2LTM9vWBmdbGVOXMS7TZt7yzLlXpYXG+uIhXMdTUjQTEJi1Nap4xsnMWqlgs0OzdKveDKllTIK6Ta8FhclZjPI+XCY+hzb/ZATPSbDwpGSBLdM/+xuyWoDE7RFgnHUzIhDivdBC875NDvCSmN/MNEESe7biQlN3GIw0l19gMTtcizJt+9+8YmIsyDer3rSLFu6YlENUVParPuruplhv0qoiT1xbcDfQ4BpvFaOSft9ixNM32UGOl0opmcxA4+qZ6xZLbimkHreVLhEBjCicYCyf3M6Ra8rr5o8bBvSpuin65+P13aJOm71WduUt90Us5c8FY/m7Qwd8hgX2QX4+vRnOGnC2oTalNYS26WEMUpUBMWVT5lUbt45tbAg8Xu2mEGo6Uf894TDY9jQ10z9aAgiqbIIvGEMQRe3654C4Ha3UJ3t05nE2pImxKUBe3TWm4TMq8QlnWjsuHPq/YsWRU/n5FgvdKaKLzIYbmShFAPqSLyRAfeqfw4j3fLBagyiYQYQl+gW3x16bgCmAt/holL1EaRH+wehgrdbofdY2sfvhsoI2oAlo0zl1W+zc09CG1QGbOq3KKmsKKdJmhN4RPz8DlXjq47tjacG7j44J/rjXyqkikloG61tvi+TzEpbAtQFWBYMbgMvwxZcAoX3ZWEpwrJUUHRBEwrV1iR+pFTq33tQLgCIc9H6/Tpma1ZZr6NHtJZ9ShcoRGNapqd8Lm14mTRzNsEP+2O5TZmz9NpqnOGlmmP5xl/Erj7ZrfceIk7IAqJ4OpVvWD0NiRZ/hhBiYcI7PbduGVX1Zo0H4F3yXXOgYa3mkwKpwR2K56xp1HHmHF6bQUt+dIJtCFvWpUHbz3eheSpR5yxllRQs+SYeIw06+CiT+21tS8/wDKbZKTZ6fiEcqUIHw3U6O2ha9rTMRbNfbZLuGpGfS1BLdojv58G8xRArTfG02VfcLVv6+B52kXQMDKC8pTh6Zo9xuaOizNCJSmKJYh8GAgYUajmBgCSKs4eIdy18Eool/udPTz9sCDNNCsWKNC8gvJOFLmSkxA3vtX5q4FIaGRoNhkUXTzxLVYPZ3S24Q636YfjtAYyvTi1QhTBe7ea5GtHBeax7fI0ZtaoQryDTwZ8DrNenxxTvOOookGxfhLMmQE1yfp+9aPerFuAMcnBu9S80QVsfIKBhWXYk+XgQl+d8jwHxh7890NyfOIbpwl/v4AAgT99GdP5ymn+gHGeD53mH1hB8BdwmucYiLX97DQ5c0fzx5/WaZ7Mz06zZWM4tG/r3VrlifJL6mUmsFlvC1nVlBgbt7gwi4K/C8JCamwq4cKJM1PBcARWMP3c6Z5IeZ8NX4CuF6G8srhWstPYmiYyXrB1lAth8h/J8lRrPY82ZUVvzmV3QYR83AmGfRrU07N7R+68LfqwC3rd1D/7CSa
DK2g1v2mQNAKx+H3eQ5HFz2R95vIL9yr95NoWFQcyQ6zyCV+imrB9U24sMrmi8JJ4xSU7rklrLo9T8wwew7kAQMCW4dwTX493P/5xGeUXoLgPrw1U72cfalOuf0gBIbTGa0cCCyl8esZDHjcXMZ3mxrXrZAbWFKcpct+suc2YawhMJuY0W+5Gyl4EWRZnJJkC1Qkv3qkDG7yxNRWTTErcKJ3H3TmF9vPUqldUGEB71KlPybTiOi5Kq0a2zws6dkOKxJdbCqp0E0io9wrPWQLdZiujXWZ0T/SEXxniMu/d2aZt7OgjoA356urQfgVrSHHan9o6eLZvPbyuzUAZ7AmZn7c4CCvCyuzoljl+dW/C652qWtGxE9odZE0JQI2wfV7rZrNx6L4t/TMUMnZAaEAphjNSOGNbbZNRSUzrquWXB4OZVxJ2L+xNNph5EhEd3kHdvAR8w65lcGYh6HnpkmZw7KnsnJxKYiYjhHjHqgeSsjXclhVliouIEVsQTE6YbR2T4rrSBh3qNwR91QoIAekzPt2qi69wD5NLr2aJxSCmEAP6VKyc4o6FTkPw6T4FmCP1KVGMtyJ78G7YawzpsDI2rYw9+jBnZec8JjBFTboYVLeD5EwJUWia1pPPy+KUXIFkOu00tvzjchEdZt4G7GSFhaCWyq2Du0ZMk6up0W3JFvjTH47QbDlP+g3Ui665Y1w8WZQAlTuuksDr1+ZsXIrSUEfLkc6e651zcr3x2CuVJdcmGrL9lt+JTJUk7sEiXpA3zOMW50ug+p4TP2rXy083yrFM4olYz0df3LvUhVNBGW9CzYm+KD4IxUXyImKspk8fL4o0fTIiejwvz1zne3bHoVObik+kqc0TKVYsk+ikZl+s6g52xeAW1q3ZqaQQtWhFkL3j2vNy8SwLACcC7hzLCsft+kQZxWFIdlEM5lUPJjyKZvXeC6wmXQ1bU5Nb95Rz/fZo41nFodtArY6un1aoQnWSAMbGPk25rBzhBQeqk9kb2kCD9XSDMAJcqHufxOH11Pf7c8i0jvKua19Yxvl4uq62yMZyaPNSSRsAT40javVC9eSWvYv4xqaqI9p4VFvBWA+aLiJNIYcLy6xGvqdmYAkYPjqQIiMU0gknSxfX0LhuuH5SQykZ6kCsbooaKtf5yJAHntujE/98LOCh2F1FbPMBz9695vebkiCn6yNW1USSiyuCRkHZyucy35HJ7LRVwY7mE53LLTLTEnnw+tPwldSEG1t5OlSFRbGpBoq+OUTHB+tuXzBSsLB8YQss5HA3oPZX8DAyRuE9r5SzGIXW7Ii02o/oIU6FKZ+sYoC4kzkganJJd/mqe4/dFxNP5tX+5BHagpG1iDnuE7CLJ8Gr0156XDBJfMT2FeuWkxPAhiUiym2fGtPQ6fNZvbJ3zrpet6STPBIJ3ZKshBjBuDoP893jCkNSHED+eZL7tbXvpowv9zmizDGjGggwNbyI1y8zVgpUf4MMdig05ohyG7DRK7sl0vFmZLuAgu6ToCuDJjaKr+5HOlGpodpo1YtgJXsguCYpWFysOZA8bn+qjRKTg8F5XmwFzFSMAbZWuGs8wmENRJq9r5h2UcBNmqADVj2OI6K7xma0enxO5c1jYaAjfAMOBWWRsAM8E0UlvYGK7q8d3IdkWALNsR43S1YzXuXYbq+tLbJYTzNuOjMOGMufhBIEnlXKmpGe1MYo5icJHasjdpeAmTfT8yDPY4WOrrSe60cQRL4HwAmGPa8MwfWnZwbOc7eTtMeGeD9LlqQhK4eP02GsMHbRL5OkVr4lTdOwLnkr7RJBnpcZuzHbaVfaI9J7PTmgA4DG0iZoHEVjV8hAhC67oQlz0yJs9CBz7LDVBjEmMZSwjwthJCtdneOJUZi18b1byEt9ieETs1WedK0tNNyMAnIjQAuIinGbbsZlIUeeGzdHRW+ZTSZH34L3JuAEnrJerpy9WRd3JaZHQlTLBpq/VJP7i8vObrxSnLHVD6qyhrior90HnwypRW9Pv6HQ9jlzKYntQlxfEi4glNlWo/2Qg6IOUjXvPeN3Nfe0A8lQZKWNEkx8ZNc29b3EgCZVbOJOjK6YEuhtagEbIgBby0Z24nOkIbk2PAvMpi8VCevCjt2bO1qJD1k6cT1gydAhf47TS+To6urEmDpPjeLiwpMMsYXDQm/JK+AAKjrxNLwPhIW6LAZvl+KR0qo2O5Td3OXhNhOFF1iHVTLD9JkYeXpRVVtI6H0bKIbwpXRyUh6kIVl7D2FgY4kzVhbIclI1kuzloCAFahn5hZFKeuBgq+NMczv61wyoReF0hnhIQf2BDhR5esCdPuvmSFbyFpSLSt8bWqevtcYb9dpKdMiIYPz2MuEXt6MzZlevGeac9820Q34T9hlRjjCRLjR1Jl/kslVwRh8/2cw41iJ+WxPIbditcuR67yMNTkiI6CSHHCfsjCe7mjnAkmxTnB6Bh+kjzFBZEGA31Jja3wa/gWUUQE6GfWMfZzAYc5uddJYnj4prhVAMDmyvyQK1hXwAzPBnsQojZB43yADKc0rs7kErqUttMJoNNZlDwJs6u5DXz9Vwz89JMOZaAOMB7RR1QcBbYD8DRXb3HCvhNZwkFCU3t69J0p2CVm9uhVToUcVZYTUOAZIi6KqH+L0uA5XBz9iQB8ElI/aqdO4+fldsH+S9gzs0V+/OCingC8rxauauq6L5gsPYODFlBiYLuNzLIBBVXpEko0YnurKydlqTE1HKCs6iAotudRE5VOug88lHdS+nTy88E0mXhzEAkkJ6oRUz1KiQEaIGBGjwkKsETPYDcl7D/s6e07xEqd2I4GHHIOSBUpoP66eOZTEmr8BzzC5yoT0NBYbpaT5KunfVeLmX/QP1xPwIqO19vWcjiMVOIKqNGLMRzVaK3NeGnz0phnJ4OXxBmjQMF6Invr/r0X3UCKQFjbHDpYVcnEjOcjinGh1l7iC4G1QGdPOKbtYjnJlI3WsJ59ye0OgUdqMT0Gzh45tuFrxdzElwD0D0hl33a5Oduo0x7D1tBoMQYW+neBovPF4E6QGN0igetpB+BlNKN/KBWRSRUZeXHqRRuJgNCvjuIhw9HG5aD7UjBn283Pisnk37/NC94+md7tJrseTOizokWp09rhq36iwerBIPxxuc9Q+cIJGza895ee5g4gngsqvoZblMEpFs1MnavBhjYvu4M5Zqrv0rE/vho90EiX36khh/yd2/FDf/dvEe/on6oBrvpw14/wvLx/8/zt0FDOTu8ufcnY1KZP8P5O7Pz0lwMuG1I3Cnh94EQhlDjv/Q5t/J37kF+mXQ23e9CcNs02IHuc3O0ipdDzNcnXLuVMqukreYskzOZdDzfPD0q3Zk7Uip1MaFzDVOEjgBj68S4MssBkdiC8zTIY6OZNckxRl14QQlaRu6i2mVNj1AVLNDREOJO17IMa2j4ugkA2dN92rzTthNaMrGAhvLiu4y3CoKgWfWd+h7Wy7WJJAYUMFcFhYFwUt8AZOklLdo3g1ezgOEnqGmApN1jNZU2kNbptOTDzLZLjWncg2Q3WU2FYWM72s33QNxj8
wFRy5CxUmxUfNwqW70jqWhKvsM246sGqwBrxg3brhkPcwIMOUvcuvfOc/GDQQEY0VT1Zctt1nejM/aZoK0n2RX1aKUImMRJEmdnrjSeI0jAzsLqdIwMl2g+c7Uinx1n8Aq6vTC65KUjwLFnZ86XMrJkQ30JHBw1UgVTrQfidQL437kJHNrFHe/vG0Pft9dcs9gGABsQQQVeN1FxdACBKRmT+do1tMyxvkVhAkIQaGFTRw+URu05uJr0Ky4EDSTcCWFF3IVYcsBjRsGYEwVXQgakW/Uua8MwqDSBZntiNi3Zr5RerAUGBx5eyvd3CvZziodJnsIlU1D4UfaRGfWOuZAL/sjTIzwNYGpwFdOsv48Xga3vAd1b1LoUgN+0qNLddA7uB5BtyqrVbx1hpHdWtpAb9feOLqRdCG2/qwEhpglXQV8XUM1F4wiAj0IKx3k22LuQvIRR4AwAfeeNrmLWMbcI1ycuBHY3tc+rLZ9zhMBUOHZWX8W/okYytvCHpHYOspWMCF7QD8CiK6s7UQRZTrPrptvyqoE9FQNtbER8b7aqXYV21dy4HVhMHlHSpyGeqHYoNXTqzhMqks3YuBq2xyrLM0MYgPvo8zd5HaNuFPVMEjOEKHWVzdf5dc0tcDoSFoSFL+Pi5/yiSBq2VkbGckvGoLtJsPok4RieSe98xkJLYgP9/ARA1Y8S7y23Bt6i8NuuRuVWNwn+brBczbPcwqGIoCWkhYhnhAqS1jzZUiujmtYKs7dFOWHDRsf2cQn/Ot1lzj+PbYFQT9B5Afe558sFP8H3gcoWAvWYv26MrsPu0JvkxSc8d8=</diagram><diagram id="3EPRlQL3GglhN5UbCisT" name="第 2 页">ldFfD4IgEADwT8NjW0GzfM3+uDXXVmv2yoSEQs+Qyvr06dCM9VIv7Pjt4LgDkSCrVpoWIgLGFcJDViEyRxj7Y69eG3hY8LBvIdWSWRr1sJNP3uKw1atkvHQSDYAysnAxgTzniXGMag13N+0Iyq1a0JR/wS6h6ltjyYywOsWT3kMuU9FVHnltfxntkttOSkEZ3D+ILBAJNICxUVYFXDWz6+ayjTbn+e1yOlzCeCaj9Z6ocGAvW/5z5N2C5rn59eo66J9Wb5z/JYsX</diagram></mxfile>
|
2106.01532/main_diagram/main_diagram.pdf
ADDED
|
Binary file (35.6 kB).
|
|
|
2106.01532/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,153 @@
| 1 |
+
# Method
|
| 2 |
+
|
| 3 |
+
We use three different deep inpainting techniques, **GL** [@iizuka2017globally], **CA** [@yu2018generative] and **GC** [@Yu_2019_ICCV], to generate inpainted images on two datasets, Places2 [@zhou2017places] and CelebA [@liu2015deep]. For each of the two datasets, we randomly select (without replacement) 50K, 10K and 10K images to create the training, validation and test subsets respectively, generating the inpainted images either with our universal data generation or with one of the three inpainting techniques above (GL, CA or GC). We train the detection models on the training subset and test their performance on the test subset.
|
| 4 |
+
|
| 5 |
+
To simulate more diverse and complex real-world scenarios, we adopt the irregular mask setting of [@Yu_2019_ICCV], with masks of arbitrary shape at random locations, for both training and testing. In addition, object-shaped masks are adopted for visual comparison, as shown in Figure [6](#ijcai2021:QualitativeCompare1){reference-type="ref" reference="ijcai2021:QualitativeCompare1"}.
|
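For concreteness, a minimal sketch of a free-form irregular mask generator in the spirit of this setting is given below; the stroke counts, segment lengths and widths are illustrative assumptions rather than the exact values used in our experiments.

```python
import numpy as np
import cv2  # opencv-python

def random_irregular_mask(h=256, w=256, max_strokes=8, max_width=40, rng=None):
    """Draw random free-form brush strokes; 1 marks the region to inpaint."""
    rng = rng if rng is not None else np.random.default_rng()
    mask = np.zeros((h, w), dtype=np.uint8)
    for _ in range(int(rng.integers(1, max_strokes + 1))):
        x, y = int(rng.integers(0, w)), int(rng.integers(0, h))
        for _ in range(int(rng.integers(1, 10))):  # random polyline segments
            nx = int(np.clip(x + rng.integers(-60, 61), 0, w - 1))
            ny = int(np.clip(y + rng.integers(-60, 61), 0, h - 1))
            cv2.line(mask, (x, y), (nx, ny), 1, thickness=int(rng.integers(5, max_width)))
            x, y = nx, ny
    return mask  # (h, w) binary mask with arbitrary shape and random location
```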
| 6 |
+
|
| 7 |
+
We consider two baseline models: 1) **LDICN** [@li2019localization], a fully convolutional network designed for deep inpainting detection; and 2) **ManTra-Net** [@Wu_2019_CVPR], a state-of-the-art detection model for traditional image forgery such as splicing.
|
| 8 |
+
|
| 9 |
+
We use Intersection over Union (IoU) as the performance metric and report the mean IoU (mIoU) over the entire test subset of inpainted images.
|
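Concretely, for a predicted binary inpainting mask and its ground truth, IoU and mIoU can be computed as in the following NumPy sketch:

```python
import numpy as np

def iou(pred, gt):
    """IoU between two binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0  # both masks empty -> perfect score

def mean_iou(preds, gts):
    """mIoU over a test set of (prediction, ground truth) mask pairs."""
    return float(np.mean([iou(p, g) for p, g in zip(preds, gts)]))
```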
| 10 |
+
|
| 11 |
+
We train the networks using the Adam optimizer with an initial learning rate of $1\times10^{-4}$. We also adopt an early stopping strategy based on the mIoU on the validation set: the model with the highest validation mIoU is saved as the final model. All of our experiments were run on an NVIDIA Tesla V100 GPU.
|
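The training loop with this early-stopping rule can be sketched as follows (PyTorch-style; `model`, `train_loader`, `val_loader` and `evaluate_miou` are placeholders, and the per-pixel binary cross-entropy objective is an assumption about the detection loss):

```python
import copy
import torch

def train_detector(model, train_loader, val_loader, evaluate_miou, epochs=100, device="cuda"):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.BCEWithLogitsLoss()             # per-pixel mask prediction
    best_miou, best_state = -1.0, None
    for _ in range(epochs):
        model.train()
        for images, masks in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(images.to(device)), masks.to(device))
            loss.backward()
            opt.step()
        miou = evaluate_miou(model, val_loader)        # early stopping on validation mIoU
        if miou > best_miou:
            best_miou, best_state = miou, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)                  # keep the best validation model
    return model
```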
| 12 |
+
|
| 13 |
+
:::: center
|
| 14 |
+
::: {#ijcai2021:tab0}
|
| 15 |
+
| **Model**   | **Training data** | **Places2 GL** | **Places2 CA** | **Places2 GC** | **CelebA GL** | **CelebA CA** | **CelebA GC** |
|:------------|:-----------------:|:---:|:---:|:---:|:---:|:---:|:---:|
| LDICN       | GL      | 83.47 | 66.70 | 56.24 | 87.27 | 67.61 | 64.16 |
| ManTra-Net  | GL      | 88.76 | 70.18 | 64.60 | 92.53 | 76.22 | 70.98 |
| **NIX-Net** | GL      | **91.82** | **80.55** | **77.63** | **93.37** | **84.48** | **81.24** |
| **NIX-Net** | GL + UT | **92.14** | **86.09** | **81.98** | **93.71** | **89.63** | **87.95** |
| LDICN       | CA      | 69.53 | 82.48 | 57.73 | 75.85 | 87.04 | 68.49 |
| ManTra-Net  | CA      | 76.22 | 86.08 | 69.61 | 81.21 | 89.40 | 77.39 |
| **NIX-Net** | CA      | **83.57** | **88.75** | **76.49** | **87.93** | **92.30** | **83.77** |
| **NIX-Net** | CA + UT | **90.50** | **89.16** | **83.80** | **92.49** | **92.74** | **88.36** |
| LDICN       | GC      | 70.55 | 68.16 | 84.24 | 77.62 | 73.81 | 87.29 |
| ManTra-Net  | GC      | 80.85 | 74.69 | 84.90 | 83.31 | 81.25 | 88.46 |
| **NIX-Net** | GC      | **84.77** | **81.03** | **85.38** | **90.57** | **86.44** | **88.97** |
| **NIX-Net** | GC + UT | **91.48** | **87.25** | **85.61** | **93.11** | **91.82** | **90.34** |
| LDICN       | UT      | 82.95 | 80.79 | 78.29 | 85.52 | 82.98 | 81.43 |
| ManTra-Net  | UT      | 88.42 | 83.15 | 80.52 | 89.71 | 86.64 | 85.38 |
| **NIX-Net** | UT      | **91.33** | **88.46** | **84.71** | **93.06** | **91.59** | **88.20** |

: Quantitative comparison on Places2 and CelebA datasets (test mIoU, %). The *Training data* column gives the data each model was trained on (images inpainted by GL, CA or GC, or our universal training data, UT); the remaining columns give the mIoU on test images inpainted by GL, CA and GC for each dataset.
|
| 60 |
+
:::
|
| 61 |
+
::::
|
| 62 |
+
|
| 63 |
+
<figure id="ijcai2021:QualitativeCompare1" data-latex-placement="t!">
|
| 64 |
+
<table>
|
| 65 |
+
<tbody>
|
| 66 |
+
<tr>
|
| 67 |
+
<td style="text-align: center;"><img src="fig6/places/60.jpg" style="width:1.44cm" alt="image" /></td>
|
| 68 |
+
<td style="text-align: center;"><img src="fig6/places/61.png" style="width:1.44cm" alt="image" /></td>
|
| 69 |
+
<td style="text-align: center;"><img src="fig6/places/62.png" style="width:1.44cm" alt="image" /></td>
|
| 70 |
+
<td style="text-align: center;"><img src="fig6/places/63.png" style="width:1.44cm" alt="image" /></td>
|
| 71 |
+
<td style="text-align: center;"><img src="fig6/places/64.png" style="width:1.44cm" alt="image" /></td>
|
| 72 |
+
<td style="text-align: center;"><img src="fig6/places/65.png" style="width:1.44cm" alt="image" /></td>
|
| 73 |
+
<td style="text-align: center;"><img src="fig6/celeba/00.png" style="width:1.44cm" alt="image" /></td>
|
| 74 |
+
<td style="text-align: center;"><img src="fig6/celeba/01.png" style="width:1.44cm" alt="image" /></td>
|
| 75 |
+
<td style="text-align: center;"><img src="fig6/celeba/02.png" style="width:1.44cm" alt="image" /></td>
|
| 76 |
+
<td style="text-align: center;"><img src="fig6/celeba/03.png" style="width:1.44cm" alt="image" /></td>
|
| 77 |
+
<td style="text-align: center;"><img src="fig6/celeba/04.png" style="width:1.44cm" alt="image" /></td>
|
| 78 |
+
<td style="text-align: center;"><img src="fig6/celeba/05.png" style="width:1.44cm" alt="image" /></td>
|
| 79 |
+
</tr>
|
| 80 |
+
<tr>
|
| 81 |
+
<td style="text-align: center;"><img src="fig6/places/40.jpg" style="width:1.44cm" alt="image" /></td>
|
| 82 |
+
<td style="text-align: center;"><img src="fig6/places/41.png" style="width:1.44cm" alt="image" /></td>
|
| 83 |
+
<td style="text-align: center;"><img src="fig6/places/42.png" style="width:1.44cm" alt="image" /></td>
|
| 84 |
+
<td style="text-align: center;"><img src="fig6/places/43.png" style="width:1.44cm" alt="image" /></td>
|
| 85 |
+
<td style="text-align: center;"><img src="fig6/places/44.png" style="width:1.44cm" alt="image" /></td>
|
| 86 |
+
<td style="text-align: center;"><img src="fig6/places/45.png" style="width:1.44cm" alt="image" /></td>
|
| 87 |
+
<td style="text-align: center;"><img src="fig6/celeba/20.png" style="width:1.44cm" alt="image" /></td>
|
| 88 |
+
<td style="text-align: center;"><img src="fig6/celeba/21.png" style="width:1.44cm" alt="image" /></td>
|
| 89 |
+
<td style="text-align: center;"><img src="fig6/celeba/22.png" style="width:1.44cm" alt="image" /></td>
|
| 90 |
+
<td style="text-align: center;"><img src="fig6/celeba/23.png" style="width:1.44cm" alt="image" /></td>
|
| 91 |
+
<td style="text-align: center;"><img src="fig6/celeba/24.png" style="width:1.44cm" alt="image" /></td>
|
| 92 |
+
<td style="text-align: center;"><img src="fig6/celeba/25.png" style="width:1.44cm" alt="image" /></td>
|
| 93 |
+
</tr>
|
| 94 |
+
<tr>
|
| 95 |
+
<td style="text-align: center;"><img src="fig6/places/10.jpg" style="width:1.44cm" alt="image" /></td>
|
| 96 |
+
<td style="text-align: center;"><img src="fig6/places/11.jpg" style="width:1.44cm" alt="image" /></td>
|
| 97 |
+
<td style="text-align: center;"><img src="fig6/places/12.png" style="width:1.44cm" alt="image" /></td>
|
| 98 |
+
<td style="text-align: center;"><img src="fig6/places/13.png" style="width:1.44cm" alt="image" /></td>
|
| 99 |
+
<td style="text-align: center;"><img src="fig6/places/14.png" style="width:1.44cm" alt="image" /></td>
|
| 100 |
+
<td style="text-align: center;"><img src="fig6/places/15.png" style="width:1.44cm" alt="image" /></td>
|
| 101 |
+
<td style="text-align: center;"><img src="fig6/celeba/10.png" style="width:1.44cm" alt="image" /></td>
|
| 102 |
+
<td style="text-align: center;"><img src="fig6/celeba/11.png" style="width:1.44cm" alt="image" /></td>
|
| 103 |
+
<td style="text-align: center;"><img src="fig6/celeba/12.png" style="width:1.44cm" alt="image" /></td>
|
| 104 |
+
<td style="text-align: center;"><img src="fig6/celeba/13.png" style="width:1.44cm" alt="image" /></td>
|
| 105 |
+
<td style="text-align: center;"><img src="fig6/celeba/14.png" style="width:1.44cm" alt="image" /></td>
|
| 106 |
+
<td style="text-align: center;"><img src="fig6/celeba/15.png" style="width:1.44cm" alt="image" /></td>
|
| 107 |
+
</tr>
|
| 108 |
+
<tr>
|
| 109 |
+
<td style="text-align: center;"><img src="fig6/places/50.jpg" style="width:1.44cm" alt="image" /></td>
|
| 110 |
+
<td style="text-align: center;"><img src="fig6/places/51.png" style="width:1.44cm" alt="image" /></td>
|
| 111 |
+
<td style="text-align: center;"><img src="fig6/places/52.png" style="width:1.44cm" alt="image" /></td>
|
| 112 |
+
<td style="text-align: center;"><img src="fig6/places/53.png" style="width:1.44cm" alt="image" /></td>
|
| 113 |
+
<td style="text-align: center;"><img src="fig6/places/54.png" style="width:1.44cm" alt="image" /></td>
|
| 114 |
+
<td style="text-align: center;"><img src="fig6/places/55.png" style="width:1.44cm" alt="image" /></td>
|
| 115 |
+
<td style="text-align: center;"><img src="fig6/celeba/40.png" style="width:1.44cm" alt="image" /></td>
|
| 116 |
+
<td style="text-align: center;"><img src="fig6/celeba/41.png" style="width:1.44cm" alt="image" /></td>
|
| 117 |
+
<td style="text-align: center;"><img src="fig6/celeba/42.png" style="width:1.44cm" alt="image" /></td>
|
| 118 |
+
<td style="text-align: center;"><img src="fig6/celeba/43.png" style="width:1.44cm" alt="image" /></td>
|
| 119 |
+
<td style="text-align: center;"><img src="fig6/celeba/44.png" style="width:1.44cm" alt="image" /></td>
|
| 120 |
+
<td style="text-align: center;"><img src="fig6/celeba/45.png" style="width:1.44cm" alt="image" /></td>
|
| 121 |
+
</tr>
|
| 122 |
+
<tr>
|
| 123 |
+
<td style="text-align: center;"><img src="fig6/places/20.jpg" style="width:1.44cm" alt="image" /></td>
|
| 124 |
+
<td style="text-align: center;"><img src="fig6/places/21.jpg" style="width:1.44cm" alt="image" /></td>
|
| 125 |
+
<td style="text-align: center;"><img src="fig6/places/22.png" style="width:1.44cm" alt="image" /></td>
|
| 126 |
+
<td style="text-align: center;"><img src="fig6/places/23.png" style="width:1.44cm" alt="image" /></td>
|
| 127 |
+
<td style="text-align: center;"><img src="fig6/places/24.png" style="width:1.44cm" alt="image" /></td>
|
| 128 |
+
<td style="text-align: center;"><img src="fig6/places/25.png" style="width:1.44cm" alt="image" /></td>
|
| 129 |
+
<td style="text-align: center;"><img src="fig6/celeba/30.png" style="width:1.44cm" alt="image" /></td>
|
| 130 |
+
<td style="text-align: center;"><img src="fig6/celeba/31.png" style="width:1.44cm" alt="image" /></td>
|
| 131 |
+
<td style="text-align: center;"><img src="fig6/celeba/32.png" style="width:1.44cm" alt="image" /></td>
|
| 132 |
+
<td style="text-align: center;"><img src="fig6/celeba/33.png" style="width:1.44cm" alt="image" /></td>
|
| 133 |
+
<td style="text-align: center;"><img src="fig6/celeba/34.png" style="width:1.44cm" alt="image" /></td>
|
| 134 |
+
<td style="text-align: center;"><img src="fig6/celeba/35.png" style="width:1.44cm" alt="image" /></td>
|
| 135 |
+
</tr>
|
| 136 |
+
<tr>
|
| 137 |
+
<td style="text-align: center;">Original</td>
|
| 138 |
+
<td style="text-align: center;">Inpainted</td>
|
| 139 |
+
<td style="text-align: center;">Mask GT</td>
|
| 140 |
+
<td style="text-align: center;">LDICN</td>
|
| 141 |
+
<td style="text-align: center;">ManTra-Net</td>
|
| 142 |
+
<td style="text-align: center;">Ours</td>
|
| 143 |
+
<td style="text-align: center;">Original</td>
|
| 144 |
+
<td style="text-align: center;">Inpainted</td>
|
| 145 |
+
<td style="text-align: center;">Mask GT</td>
|
| 146 |
+
<td style="text-align: center;">LDICN</td>
|
| 147 |
+
<td style="text-align: center;">ManTra-Net</td>
|
| 148 |
+
<td style="text-align: center;">Ours</td>
|
| 149 |
+
</tr>
|
| 150 |
+
</tbody>
|
| 151 |
+
</table>
|
| 152 |
+
<figcaption>Qualitative comparisons on Places2 and CelebA. The original images are inpainted by CA. Mask GT refers to the ground truth of the inpainting mask. LDICN and ManTra-Net are trained only on data generated by GL. Our model is trained only with UT data.</figcaption>
|
| 153 |
+
</figure>
|
2106.05321/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-06-03T21:05:12.898Z" agent="5.0 (X11)" version="14.7.4" etag="IGvDsISunJoW5IsCKRA6"><diagram id="odx6Gn4GY6az9FsfMO4h">7V1bj6M4Fv41kXYfGvkCBh67qmd6V5rp6d0aaXaeViRxqlCRkCWkq2p+/dqAuRgncRJiCElpNJ2YS8z5zjn2uTLBj8v3r0mwfvk1ntNogsD8fYK/TBCyEcbsHz7ykY9g34b5yHMSzvOx2sBT+BctBkExug3ndNM4MY3jKA3XzcFZvFrRWdoYC5Ikfmuetoij4leL+6+DZ9oaeJoFEW1cx0f/COfpSz7qIVKN/4OGzy/ilyHx8yPTYPb6nMTbVfF7q3hF8yPLQNymuPnmJZjHb7VZ4J8m+DGJ4zT/tHx/pBGna9iY+887jpZTTugq1bkA5Rf8CKJt8dTFvNIPQQY2xTX/mDKY6V8xv/JhTZNwSVOa1Me/V4MPby9hSp/WwYxf+cbOYGMv6TJi3yD7uAjfqUCcf8+IRfmcAPtWEoUf2rzSdPZSHEniNEjDeMW++tmZ+S2Ia+H6X/YLUfQYR3GSPQReLBZoNuNXpEn8SmtH5mRKHMKOFJSgSUrfd1ITlhgxvqcxe9zkg51SXPAJ+q5FChJ+iDHgeuXgW8VI2C7Y5aXORB6xXKfg4YJ/n8vfqYBkHwos1bjiw7hyDl3vfOhCqIJpJMuINjEwhhIlEHIt225RAnrY8mCbGNhxLOSC6s8+nzC2gjAkSjnHxOy56hQi/9vG4sCnnNM+sxOws36vDnLFknN5NfY7E4MNO/CNvrH//zteBqv6BeSZ//u0Xa/jJPtFmoo5sNnn08jPaWHGUEqbkpRQNrMCJS4RwTaNN5VgBVH4zKVlxkDLBJNDHTId97k4sAznc37xwzoOV2lGWOdh4nyRRChTYbL0FINN4eXzLyTbBrpihfdzkmMBv8FKjDFUfES8NhPZ7vlM4xyWpgMKL9is8yUq03y7NR6QNZ6sx7wZVeuxqefYTlcEx6BBbogYAsiv/twW7W1fKcK14XMAIDcGgO0Mi/7uYOg/d6g3t1X099AUE+2F/Cj6I+hYvlOjf3s1vyz9veHQP6DeQsn/ZObR6eIi9MfAZvxvmOb+YGhuRudA1NyuQUQsu777woYBEHbgABAwo3VkBBBkdk1N6/ieaQTgcBAwondkBDBgG03StkDMIaAy0HNbYR7+qOyE40wYG3RiwjwEs9dp5tvYYb2Uw4q59j/9bzR9i5PXI2d/1RYZRuUlwqWFunJ9AJXrA7d3qtAFbXERO6qzZEXD6XFl68UhDwuRPSwAQsv2Kn0FTesrlX/luleMYzHwEbZs1CMEGt6Kgfp0Pw3RqQsdzKy/9i6slDkIPdXxQ65eZtVZoAPnFNRwjgi8Z9spPYzkNEful2k5UAY0ftumUbgSEM+D5PU3dpsw5cQAFnCagygbPYYZCuzRBa2g/WgjLK9hELQBdRXiS3AHWGo4WsaApZn1ETpEXh5Rez9yMSw1nDajwNLIOgsFJOUyaxNzUGr4gga6pA5xRR1OmBRpuJjGIKNm1k5ot7ZGBtdOMd87ll1giaG8dhrFUuX2GiGWZvZBWOx6am4Cg1hquGXuWGrbJ3Zr3RT0NIGlhntnFFga2dMiR8LSVyS1XAzKI9xEdygPqliR21pCiX1zUGp4gG4z20+ZuHJP95v0nO6XM+xw8/0GlG/T1W7/EMmHlvE3wvSbAxAMLecPjy/95jgEes/6wxoejSsLph6HQC95fyIANACqG9I8g8v8w+PL5DgSgt5T//D4EjmOhKD33D98IxY6daANhC3RDEaxv27sc0daWT5BKKhpwEDHN5KiMZvTqTdVSSom2MfzjrB0gIQlwAaxvJEUDTgNIEUXlkpHCkz4dnurcTEgj8jPuAf1D2JJiCyVkO3eUdsLqk6Gw7CbuucjMjWuWToNBfR9T47oA9+3iLkkKvtGEjQMJcT5op6kHglmeO7LVr4YtDeSr2EoP84XSWpVBMo1KqkabpMxwGlK8wpzszfNeyNpG4bSyoHb0ryG8byV1A0zaTjAwzKewCyeN+IdMrUzapX1AIj72hndiLvIUJYVABK0PnaMSuqNeIxMwenLTiMMjMKp8hsNpab9XpJ/bVlToy7Jt1XuuF446zFezUPuPmUAjzL7DnfENS62LTlWBhBbMOuZMC0OQkChbYWmPKv1nsoB2AsHfaWMd4I0vvPP/p00QUTBQJ4F2vWaSr3TCdeofItHcg3qhGug9TtdbTjTgAUN0m3CLxkj/9gd8Y8NLSSzDxapEHWN4yvCSI7fAe+oHJm1DdLxCufELVU3DIgtduwxCjb8zJlYBYPo9B2iMQLIensjWoSLMdwBfTJ+nTG1vlsux7rr7EBavVbUFytyMbAIDtRl1etCVlVO6uuVVZvL6lOwXEfZuYskXt7FtLAM42QZZC8YCBmnh9NtlgqxnzjT5C7DOju2su3MfiEWxVSdC/HunuzXKMQuF+Kvn79zqqzmHIkkTuP0Y32GU2agsqz8kdOoNuM7lHDBZOdUwb6v2dplx1ju7aDaYGNXscHuRN5VkavrlXeHy/u5gtZwMTBFAdLCajzE3uNQJCeS/qBFfd8HHOW5IXLOiVIviMyUzvWCRtiTruaf+audOK2zFWPWxIm+h+l/ChLxz3/WPn/hz2k7liO+88ckqNG70feLg7WE3uwGaZA802qUx+bC1XPxqxt2NG3PKxv+OYzKya1E3m8WPtvE22RG992yfnZOCjoX76wq4M1vUih2WxvyhEZsof3RvJcKvOLS75xta0uI3OnJdSSvXfFw+WUVDyjuhCSm81zSvFVBevlWjN7BR+20QrJ2z1mO8JPifV27pgaBVBF08Ir2w5AiqbkSgXzalUCUoOjJyPkNDS6k4E/1vhBm7rIPq+lmnd/o8/Z5SbNJVD0T8tVwky+08gXnbxZlJcOE7JdgSiOF4q/Xt2kr/fwXdjXZOKC0nR1la9oirK1/dwe2DW+pUOU0L42nkbrNu1q8bU/SVWztLgN5DVccbq/edher9+5Q7zXu6om1QxWdbYNeL01KLXy3uY+2uQFp+9hclXyK8FPXu2tyfiDdRWpm1F5YrwD73VXpXXAB9KXQpu87FoT7kkxV/CC8s2fxw42U35hqGy/Wv176pYosr7GDaSi5H7WT+w02jieqoOYYwTSTL4x8KURtsnO88KiNHUtDWha5rTYeJrXsjZTcGALTtmUtaxTMWymyMdSW2pFjByZ7jJNbKbExAyYm8mbWZJNxovJDjhFMM/ufMo7fR5dxovIK3rE8WcsSKYZlss24O5hSjH9tKZ/6aJxOl3A4uq5ttXsOOETZSBteqA7DVTmZ7k5Ho05HCNv98iC2ao0tDXodBUcOoa3orb6j1x1Qd90bfUevq+EWapL0ACB6FC+VT7OTYBdKRm7dzRustqNbwkhsvMuvgxc2uBqemeuiJ27l5Bulp4Zz5MroKUfezNJTwz9xX
fQsu5rUGgtAiyh2EpciqYaX4LpIiuUOfsZJqmGsXxlJ5V4xpknqafQvvS6StgTfx14zhdoccTVSGK6LuC0V0CNxNSy1KyNuu3FUb8TVMMEuQNz6yxZkCl+wzTuzeP12UwcIDdK7H3ur/kqES3A0QIpmGVmzHoMrXD+mV/0NBUZZWUFyH6O+9Eg/dlpPegQRJIU/ICDtIPPFiN2PEXdhJYIIlvOjXNsioBd27sem60mXtCnvebAvRTI2049ZXshyay5eKdfId7HJVdLXsAMP1ba2SkKPqSc9UL8qed9zkk1q9bRl1eyfk6rSNquoFadl5bQVensrVQW7aeBZB0cRpRRj5xa0QjlLwoOnVrQSIrdXAHJ17I6S1hMKQv0OOt15HZVy2vw/53EZpC+zIJq4D98m7pe/MYDYU4B//pcNvLKBvxcnnhy+VQZUFVpFP46r0mU6SSGwK5+KeNtiyXw+sHy8Lz5VRmMbdULHv0GJfU1ijmPFf+z5X36N57z89af/Aw==</diagram></mxfile>
|
2106.05321/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,84 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Deep learning continuously improves the state of the art in different fields, such as *natural language understanding* [\[42\]](#page-9-0) and *computer vision* [\[30\]](#page-8-0). However, a fundamental limitation of representation learning from raw data is the dependence on large amounts of task-specific or domain-specific data, labeled or not. This limitation inhibits the application of deep learning to real-world problems, such as rare species classification, where the cost of obtaining and annotating data from a new domain is high.
|
| 4 |
+
|
| 5 |
+
To address this limitation, *few-shot learning* [\[65,](#page-9-1) [58,](#page-9-2) [10\]](#page-8-1) has attracted significant interest in recent years. Few-shot learning is concerned with learning not only under limited
|
| 6 |
+
|
| 7 |
+

|
| 8 |
+
|
| 9 |
+
Figure 1. CUB original images (row 1) followed by images generated from separately trained reconstructors using as input tensor features (row 2) or vector features (row 3). More results and implementation details are given in the supplementary material.
|
| 10 |
+
|
| 11 |
+
<span id="page-0-0"></span>supervision, but also from limited data. This constraint excludes representation learning from scratch and inhibits adapting the representation, which is otherwise common in *transfer learning* [\[9,](#page-8-2) [28\]](#page-8-3) *domain/task adaptation* [\[11,](#page-8-4) [51\]](#page-9-3) and *continual learning* [\[52\]](#page-9-4).
|
| 12 |
+
|
| 13 |
+
*Data augmentation*, commonly based on simple input transformations, is a universal way of regularizing and improving the generalization ability of a model [\[30\]](#page-8-0), as well as exploiting unlabeled data [\[5,](#page-8-5) [59\]](#page-9-5). In few-shot learning, recent methods go beyond input transformations towards *synthetic data generation* and *hallucination*, either in the *image space* [\[7,](#page-8-6) [73\]](#page-10-0) or in the *feature space* [\[8,](#page-8-7) [34,](#page-8-8) [40\]](#page-9-6). Hence, they address the data deficiency by augmenting real data with synthetic, achieving a greater extent of diversity.
|
| 14 |
+
|
| 15 |
+
The vast majority of generative models focuses on high-quality, high-resolution images, assuming a large amount of data. Also, the metrics used to evaluate generative models focus on whether the generated data are realistic [\[18,](#page-8-9) [54\]](#page-9-7). Generating high-quality, realistic data may not be necessary for "downstream tasks" such as classification. It is unclear whether and how state-of-the-art generative models in the image space can succeed in the few-shot setting.
|
| 16 |
+
|
| 17 |
+
Most of the recent few-shot feature hallucination methods focus on generating vectors in the feature space [\[8,](#page-8-7) [34,](#page-8-8) [40\]](#page-9-6). These vectors are most commonly obtained by *global average pooling* (GAP) on the output feature maps. This
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
|
| 21 |
+
<span id="page-1-0"></span>Figure 2. Overview of our method. At inference: 1) Map the support examples $x_i^j$ (each color indicates a different class j) into tensor features $f_{\theta'}(x_i^j)$ through the pre-trained embedding network $f_{\theta'}$ . 2) Average $x_i^j$ into a tensor prototype $p_j$ per class j (3). 3) Map each $p_j$ to a class conditional vector $h(p_j)$ through the conditioner network h. Draw M samples $z_m$ per class from a k-dimensional normal distribution $\mathcal{N}(\mathbf{0}, I_k)$ . 5) Generate M class-conditional tensor features $g(z_m; h(p_j))$ per class j using generator network g. 6) Augment the support tensors with the generated tensors. 7) Perform global average pooling (GAP) and average the augmented features into vector prototype $\bar{p}_j$ per class j (5) and classify queries q to nearest prototype. At training (not shown): a) Train $f_\theta$ using cross-entropy (1). b) Fine-tune $f_\theta$ to $f_{\theta'}$ using self-distillation (2). c) Train tensor feature hallucinator (TFH) $\{h,g\}$ using reconstruction loss (4).
|
| 22 |
+
|
| 23 |
+
discards spatial details that might be necessary to model the underlying data distribution. We hypothesize that working with the feature map tensors directly may be more effective. To investigate this, we train two image reconstructors separately: one for tensor features and the other for vector features obtained by GAP. The latter has the same architecture as the former, except for one additional upsampling layer. As shown in Figure 1, the feature map tensors do indeed preserve more information.
|
| 24 |
+
|
| 25 |
+
Motivated by this finding, we explore the potential of using *tensor features* instead of vector features in a simple generative model to improve few-shot classification. We employ a simple *conditioner-generator* architecture and we introduce a simple *reconstruction loss* between the generated tensor and the corresponding class prototype tensor. This allows the generation of a diverse set of synthetic data, not necessarily realistic, from a limited amount of real data from a previously unseen task. An overview is shown in Figure 2. We demonstrate empirically that our model provides state of the art results, outperforming more sophisticated generative models on a number of benchmarks. Our contributions are summarized as follows:
|
| 26 |
+
|
| 27 |
+
- 1. We are the first to generate *tensor features* instead of vector features in the few-shot setting and to leverage their structural properties (subsection 3.3).
|
| 28 |
+
- 2. We introduce a novel loss function that is simpler than alternatives in state of the art few-shot synthetic data generation methods [34, 40, 4] (subsection 3.4).
|
| 29 |
+
- 3. Our *tensor feature hallucinator* (TFH) sets new state of the art on three common few-shot classification benchmarks: *mini*Imagenet, CUB and CIFAR-FS.
|
| 30 |
+
- 4. We demonstrate the robustness of our hallucinator against using different backbone networks and classifiers, as well as its applicability to the challenging
|
| 31 |
+
|
| 32 |
+
setting of *cross-domain* few-shot learning.
|
| 33 |
+
|
| 34 |
+
# Method
|
| 35 |
+
|
| 36 |
+
We are given a labeled dataset $D_{\text{base}} := \{(x_i, y_i)\}_{i=1}^I$ of $I$ examples, with each example $x_i$ having a label $y_i$ in one of the classes in $C_{\text{base}}$. This dataset is used to learn the parameters $\theta$ of a mapping $f_{\theta} : \mathcal{X} \to \mathbb{R}^{d \times h \times w}$ from an input image space $\mathcal{X}$ to a *feature* or *embedding* space, where *feature tensors* have $d$ dimensions (channels) and spatial resolution $h \times w$ (height $\times$ width).
|
| 39 |
+
|
| 40 |
+
The knowledge acquired at representation learning is used to solve *novel tasks*, assuming access to a dataset $D_{\mathrm{novel}}$ , with each example being associated with one of the classes in $C_{\mathrm{novel}}$ , where $C_{\mathrm{novel}}$ is disjoint from $C_{\mathrm{base}}$ . In few-shot classification [65], a novel task is defined by sampling a support set S from $D_{\mathrm{novel}}$ , consisting of N classes with K labeled examples per class, for a total of L := NK examples. Given the mapping $f_{\theta}$ and the support set S, the problem is to learn an N-way classifier that makes predictions on unlabeled queries, also sampled from novel classes. Queries are treated independently of each other. This is referred to as inductive inference.
|
| 41 |
+
|
| 42 |
+
The goal of representation learning is to learn the embedding function $f_{\theta}$ that can be applied to $D_{\text{novel}}$ to extract embeddings and solve novel tasks. We use $f_{\theta}$ followed by global average pooling (GAP) and a parametric base classifier $c_{\phi}$ to learn the representation. We denote by $\bar{f}_{\theta}: \mathcal{X} \to \mathbb{R}^d$ the composition of $f_{\theta}$ and GAP. We follow the two-stage regime of [61] to train our embedding model. In the first stage, we train $f_{\theta}$ on $D_{\text{base}}$ using the standard cross-entropy loss $\ell_{\text{CE}}$:
|
| 43 |
+
|
| 44 |
+
<span id="page-3-1"></span>
|
| 45 |
+
$$J(D_{\text{base}}; \theta, \phi) := \sum_{i=1}^{I} \ell_{\text{CE}}(c_{\phi}(\bar{f}_{\theta}(x_i)), y_i) + R(\phi), \quad (1)$$
|
| 46 |
+
|
| 47 |
+
where R is a regularization term. In the *second stage*, we adopt a *self-distillation* process: the embedding model $f_{\theta}$ and classifier $c_{\phi}$ from the first stage serve as the teacher, and we distill their knowledge to a new student model $f_{\theta'}$ and classifier $c_{\phi'}$ with identical architecture. The student is trained using a linear combination of the standard cross-entropy loss, as in stage one, and the Kullback-Leibler (KL) divergence between the student and teacher predictions:
|
| 48 |
+
|
| 49 |
+
$$J_{\text{KD}}(D_{\text{base}}; \theta', \phi') := \alpha J(D_{\text{base}}; \theta', \phi') + \beta \sum_{i=1}^{I} \text{KL}(c_{\phi'}(\bar{f}_{\theta'}(x_i)), c_{\phi}(\bar{f}_{\theta}(x_i))),$$
|
| 50 |
+
(2)
|
| 51 |
+
|
| 52 |
+
where $\alpha$ and $\beta$ are scalar weights and $\theta$ , $\phi$ are fixed.
|
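A minimal sketch of the distillation objective (2) in PyTorch is given below; the direction of the KL term follows the common distillation convention (teacher as target), and the equal default weights are illustrative assumptions:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, alpha=0.5, beta=0.5):
    """Cross-entropy on labels plus a KL term to the fixed teacher, cf. (2)."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits, dim=1),
                  F.softmax(teacher_logits, dim=1),  # teacher treated as a constant target
                  reduction="batchmean")
    return alpha * ce + beta * kl
```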
| 53 |
+
|
| 54 |
+
Existing feature hallucination methods [34, 4, 40, 72, 16] are trained using *vector features*, losing significant spatial and structural information. By contrast, our hallucinator is trained on tensor features before GAP and generates *tensor features* as well. In particular, we use the student model $f_{\theta'}: \mathcal{X} \to \mathbb{R}^{d \times h \times w}$ , pre-trained using (2), as our embedding network to train our tensor feature hallucinator.
|
| 55 |
+
|
| 56 |
+
The hallucinator consists of two networks: a *conditioner* network h and a *generator* network g. The conditioner
|
| 57 |
+
|
| 58 |
+
aids the generator in generating class-conditional examples. Given a set $X_j := \{x_i^j\}_{i=1}^K$ of examples associated with each class $j=1,\ldots,N$ , conditioning is based on the *prototype tensor* $p_j := p(X_j) \in \mathbb{R}^{d \times h \times w}$ of class j,
|
| 59 |
+
|
| 60 |
+
<span id="page-3-0"></span>
|
| 61 |
+
$$p(X_j) := \frac{1}{K} \sum_{i=1}^{K} f_{\theta'}(x_i^j). \tag{3}$$
|
| 62 |
+
|
| 63 |
+
The conditioner $h: \mathbb{R}^{d \times h \times w} \to \mathbb{R}^{d'}$ maps the prototype tensor to the class-conditional vector $s_j := h(p_j) \in \mathbb{R}^{d'}$ . The generator $g: \mathbb{R}^{k+d'} \to \mathbb{R}^{d \times h \times w}$ takes as input this vector as well as a latent vector $z \sim \mathcal{N}(\mathbf{0}, I_k)$ drawn from a k-dimensional standard normal distribution and generates a class-conditional tensor feature $g(z; s_j) \in \mathbb{R}^{d \times h \times w}$ for each class j.
|
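A minimal PyTorch sketch of the two networks is shown below; only the input and output shapes are fixed by the text, so the single-linear-layer architectures are illustrative assumptions:

```python
import torch
import torch.nn as nn

class Conditioner(nn.Module):
    """h : R^{d x h x w} -> R^{d'}, maps a prototype tensor to a conditioning vector."""
    def __init__(self, d, h, w, d_cond):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(d * h * w, d_cond), nn.ReLU())
    def forward(self, p):            # p: (B, d, h, w)
        return self.net(p)           # (B, d')

class Generator(nn.Module):
    """g : R^{k+d'} -> R^{d x h x w}, latent + condition -> tensor feature."""
    def __init__(self, k, d_cond, d, h, w):
        super().__init__()
        self.d, self.h, self.w = d, h, w
        self.net = nn.Sequential(nn.Linear(k + d_cond, d * h * w), nn.ReLU())
    def forward(self, z, s):         # z: (B, k), s: (B, d')
        out = self.net(torch.cat([z, s], dim=1))
        return out.view(-1, self.d, self.h, self.w)
```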
| 64 |
+
|
| 65 |
+
**Algorithm 1:** Training the tensor feature hallucinator (TFH).

```
input : training set D_base
input : pre-trained embedding f_θ
output: trained tensor hallucinator {h, g}

1  while not done do
2      Sample an N-way K-shot episode E := {E_j}_{j=1}^N from D_base
3      for class j = 1, ..., N do
4          Obtain class prototype tensor p_j := p(E_j) by (3)
5          Map p_j to class-conditional vector s_j := h(p_j)
6          Draw M samples {z_m}_{m=1}^M from N(0, I_k)
7          Generate M class-conditional tensor features {g(z_m; s_j)}_{m=1}^M
8      Update parameters of hallucinator {h, g} by (4)
```
|
| 68 |
+
|
| 69 |
+
<span id="page-3-2"></span>We train our hallucinator using a meta-training regime, similar to [34, 8, 57]. At every iteration, we sample a new episode by randomly sampling N classes and K examples $E_j := \{x_i^j\}_{i=1}^K$ for each class j from $D_{\text{base}}$ . For each class $j=1,\ldots,N$ , we obtain the prototype tensor $p_j := p(E_j)$ using (3) and the class-conditional vector $s_j := h(p_j)$ by the conditioner h. We then draw M samples $\{z_m\}_{m=1}^M$ from the standard normal distribution $\mathcal{N}(\mathbf{0},I_k)$ and generate M class-conditional tensor features $\{g(z_m;s_j)\}_{m=1}^M$ using the generator g. We train our hallucinator $\{h,g\}$ on the episode data $E := \{E_j\}_{j=1}^N$ by minimizing the mean squared error (MSE) of generated class-conditional tensor features of class j to the corresponding class prototype $p_j$ :
|
| 70 |
+
|
| 71 |
+
<span id="page-3-3"></span>
|
| 72 |
+
$$J_{H}(E; h, g) = \frac{1}{MN} \sum_{j=1}^{N} \sum_{m=1}^{M} \|g(z_{m}; h(p_{j})) - p_{j}\|^{2}.$$
|
| 73 |
+
(4)
|
| 74 |
+
|
| 75 |
+
Algorithm 1 summarizes the overall training process of the hallucinator.
|
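One meta-training iteration of Algorithm 1 may be sketched as follows (assuming the `Conditioner`/`Generator` modules above; `f_embed` is the frozen pre-trained backbone $f_{\theta'}$, and the element-wise MSE equals (4) up to a constant scale):

```python
import torch

def hallucinator_step(f_embed, h_cond, g_gen, episode, M, k, opt):
    """One iteration of Algorithm 1. episode: list of N tensors, each (K, C, H, W)."""
    losses = []
    for class_images in episode:                       # one entry per class j
        with torch.no_grad():
            feats = f_embed(class_images)              # (K, d, h, w); backbone is frozen
        p_j = feats.mean(dim=0, keepdim=True)          # prototype tensor, eq. (3)
        s_j = h_cond(p_j)                              # class-conditional vector
        z = torch.randn(M, k)                          # M latent samples
        gen = g_gen(z, s_j.expand(M, -1))              # M generated tensor features
        losses.append(((gen - p_j) ** 2).mean())       # eq. (4), up to a constant scale
    loss = torch.stack(losses).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```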
| 76 |
+
|
| 77 |
+
At inference, we are given a few-shot task with a support set $S:=\{S_j\}_{j=1}^N$ , containing N novel classes with K examples $S_j:=\{x_i^j\}_{i=1}^K$ for each class j. For each class $j=1,\ldots,N$ , we use our trained backbone network $f_{\theta'}$ to compute the tensor feature $f_{\theta'}(x_i^j)\in\mathbb{R}^{d\times h\times w}$ of each example in $S_j$ and we obtain the prototype $p_j:=p(S_j)$ by (3). Then, using our trained tensor feature hallucinator $\{h,g\}$ , we generate M class-conditional tensor features $G_j:=\{g(z_m;h(p_j))\}_{m=1}^M$ , also in $\mathbb{R}^{d\times h\times w}$ , where $z_m$ are drawn from $\mathcal{N}(\mathbf{0},I_k)$ . We augment the support features $f_{\theta'}(S_j)$ with the generated features $G_j$ , resulting in K+M labeled tensor features per class in total. We now apply GAP to those tensor features and obtain new, vector class prototypes in $\mathbb{R}^d$ :
|
| 78 |
+
|
| 79 |
+
<span id="page-4-0"></span>
|
| 80 |
+
$$\bar{p}_j := \frac{1}{K+M} \left( \sum_{i=1}^K \bar{f}_{\theta'}(x_i^j) + \sum_{m=1}^M \bar{g}(z_m; h(p_j)) \right), \quad (5)$$
|
| 81 |
+
|
| 82 |
+
where $\bar{g}$ denotes the composition of g and GAP. Finally, given a query $q \in \mathcal{X}$ , we apply GAP to the tensor feature $f_{\theta'}(q)$ and assign it to the class of the nearest vector prototype. Algorithm 2 summarizes the inference process.
|
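The inference procedure (Algorithm 2) then reduces to a few lines; a sketch with the same placeholder modules:

```python
import torch

@torch.no_grad()
def classify(query, support, f_embed, h_cond, g_gen, M, k):
    """support: list of N tensors (K, C, H, W); query: (C, H, W). Returns a class index."""
    protos = []
    for s_j in support:
        feats = f_embed(s_j)                              # (K, d, h, w)
        p_j = feats.mean(dim=0, keepdim=True)             # tensor prototype, eq. (3)
        gen = g_gen(torch.randn(M, k), h_cond(p_j).expand(M, -1))
        aug = torch.cat([feats, gen], dim=0)              # K + M tensor features
        protos.append(aug.mean(dim=(0, 2, 3)))            # GAP + average -> eq. (5)
    protos = torch.stack(protos)                          # (N, d) vector prototypes
    q = f_embed(query.unsqueeze(0)).mean(dim=(2, 3))      # GAP on the query, (1, d)
    return torch.cdist(q, protos).argmin().item()         # nearest vector prototype
```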
| 83 |
+
|
| 84 |
+
We refer to the above approach as *prototypical classifier*. In subsection 4.6 we experiment with alternative classifiers such as logistic regression and support vector machine on the same augmented (support + generated) features.
|
2106.05409/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-02-01T16:27:29.938Z" agent="5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36" etag="b1diTbZwm61euOQ7WQJJ" version="14.2.7" type="device"><diagram id="f23JKcAyH8rS2HAs4zZy" name="Page-1">7V1rd6I6F/41fqwLEgjw0WsvS2d6Oe+Zdr6hoPIWxSK22l9/ggJyCQqaAFpds6YQQ4TsJztP9t7Z1GBrurq11fmkb2m6WQOctqrBdg0AwCGE/7gl620JzynStmRsG5pXtit4Mb51v6JXujQ0fRGp6FiW6RjzaOHQms30oRMpU23b+opWG1lm9Ffn6lhPFLwMVTNZ+sfQnMm2VAbSrvxON8YT/5d5pGy/map+Ze9JFhNVs75CRbBTgy3bspzt0XTV0k239/x+2V7XTfk2uDFbnzlZLrib/P7F/+/hTn+325+r1b0D+bcbXvFu91M1l94je7frrP0+sK3lTNPdZrgabH5NDEd/matD99svLHZcNnGmJj7j8eHIMM2WZVo2Pp9ZM1ypuXBs6z3oPuBWsmaOJ2sg4PPks3iP96nbjr4KFXnPdqtbU92x17iK/63fzx7SoOKdf4XEhryySUhkUPIKVQ8q46DtXW/iA69DU2Q1UiajwWvvt9qEapNf393cPdxAjnLfJjtSNY3xDB+b+sghd76mLiab9nnv5FF1HN2ebUoAJ7CRBpSj0hD4pDR4mSQNyEwa/I+VBuJRXRKrJhCQQSBYac7dQwf3vv5tue0157pt4DvQ7XD5467wsIZa6f5MQ5ZjWHA1AEcjHQ2HuNy2HNUxLFdaN25n+deG6mqSMuA4RmIESTEStBwv7qqFJYlERoLk5asg82lHha+DcgVJZAPAF9zFsQEJZWQDAV2j3rno584/vFQmGyAT3wsFOk/oWjLQZWZdKyS6Utfwoso7tWxnYo2tmWp2dqXNaGfv6vQsa+518f91x1l73acuHSsqANyF9vrVvb4u+qdvXnObk/Yqcrb2zk6Q0sJa2kN9T094HeGo9lh39vWY16DbTXuFbusmnrc+o2tTBgIUS5HgynBefRnh47edMPHZTnzuyTosy6OlHpruuc2nIngQFNp48C59tAx8z4HyAAhG9XJ88tveqXdVDFXBbRwPNIGghJHpeH0eASD6WFr+FzeLjTQauAIQ5quNSPzv8dHY+0unoZY1+7TM5YYtAo7328XPu206+nMDm8pdJ1qnyxPC9FfhSDx5BDY8OcmK0QCJiNFUJkSnsoCKhaYyICACH2Y2k5WrBwPd9xb6hoUeZKPt/Fnt8PQHqjX9kbhhtdUSuAi1lKaFEuqqULUEBKFqaom0Sq82QOEVoMwACgWxYgAFZzNvMp3/YMb5DwjVmv/g2amXX1f1wky9iHLVaDmApaiX4in2YeYMaWsOsqEAxayMQIzJdqsMmRkKiB6mnJoE4cGdGNn1eh0fqlN3bM4Gi3mOwY6Hj1M7ZAP2LfJDLNqNS8sdd8ZQNRveF1ND0zb4JGmLKGYLsSeLKCZpn1yERrsIk4OdZzXYRXQuXOK8bJEgqzFSFCvFTsA5GA27rYsgJIeJx0AWBZFgQex2Ffxho6KQrNQlUDVOopSppvxj9i4TRspIzqiMqrVSAhRoSdoQv5/Nl87P5iI8kOsx0wbBuy0JyWHOzLdNDCLMKXGZRESDieH4Jprq8H3gihpw020gdq65Zu9t4c4a5J18CgdlynSVhlKZDkpvxDhjTmIUU7okSJlFGsF0A052eBF1UmvzWPip3FYw3IzZmIyQKxS2i6nk4qlYKIjlxOIQV08BEzkbWiJktcMIcqV4iUCapWIwuAbkhpmGLCQCcmWpnhy9hYfk+osYFhTz0bLMqAav1nxeCMkEAqjHwqBEAssUitTa/iq1Clr7kmxeQtZlpihVSp8H6Dsn60IEDjt0pAHC3x25xYBqOw1392YtFE2Ny7qG23He5ZpfY2iqi4Ux3BZ6VXgSyBoixwlJkEECyGITlSZznBTU87qdKwCJPEc9FvS0+YihyWOP1ZSuXbMSZkxejs06CoFuEDesMuMapL3ALPz1pXrTi41pJRBLgUwsg9KwrAVmC0ORjqzp+FAItqyFa8ea27o2NPaCJrc/pTLOlBhA22JHbgskgMqgCREjgEpKYk9pBZwp3P4AMo8SlBMxVgRRKYQN+yT3cHwaV639SYEX4EqHT0IZaEuIgDKY1E2IUxVeKgt9iDb6Npfi7lLXoQpzN3hpEWo5FgMFhShfQ4ocRnKyfiySJm992Yu82Y2U7R1TDawCJW3WvA6lUoYS/Y2mxw2lE4eGJHN764vwtPqFDD1fagQGvpirs0x8lifx2Y2XzmXOxtALjw55/7ZV5sEvLYOy8NbGUPEBD/D2VgtZweUkyIDWCi5hH0YSDDhzaYt1xDBKf4Og9K2uB/0CUQF5apaxs+CIzBZhWx4hndrpaS4ICyliKgZ2+1cr5EjY7/5lsgQqw7pKnmGEGDREMaYXWAfQcwzttjs/4jWQfidx35oaOBUBwexXqFsxQPvVeH+qdIPskpUx3gNO2qvrL95mVuoSjM+c7ICBOePEbAcUQlpprkPo/DhI1UZHLHIKdFOVs8iBslJHoHqLHB4y3O1zDX9yxSyjum/erxBTgZTck1emQkgaWwGy4qcwqcLC9MdzF0g92o28CJZ86uObWQFX50IfEG2QcfK54LHLJT4R+2mCDPUP8hVSAz+VxhB0XRqNEZUgLqcgJiOXqvFqxzjaCtuJwV7zRFEBJRhtIsX8RmookYI9ngsjRW/ldZLFtaV/z0ydUgENYOZWSF+YXd0KGd+lQXAriIgUPciOn/uLRWYw6V9hknfy85Pk7PM+ETK3MANJEDZ2qoc7FSRpHOuEdj8iJOyEhi7Nnc7ToWh8TJkppFcBFGw4p6DLzham4CiYVnclQQmmcf9OFWBKIeDjbGF6eB38I2HqLnhj1tsqIHV/TCl9T6SUI87EN+dtpXrG4fvBvJXB5ejVrIjLEXD7bSEX76ouCB/obPFBSnoQB4xpGvNF2uoshAJ1Md++GHaTnSKvOJLLxrGtagaWRnyBySKEKJp5XiYt/AmaPW6Qoui2KSfz7s8exzlCS6inAz4RLhny0PyAcYxkPm40dl1fBLdEwaO5nNS6P3UfT2nbc5L7XWKrWymevTnFHZLXhRH/nWK2tAEKC+W0DHj5lsVprXRmC306MFOz51U0IKkaSfVkwqtci02qBwQKFsNrOvoswhdj+yaQnBS+QHJqBCFsDF5uw9BF+pLMqnkG3i/Wri1Zjtq4EFdyfCIQc2RYHC5tc920sWDd2f8Qzd31bB6CIuQgKMlci26NaKoc/KEzgOVYFIMEkwMYEETHM3vzMxApaO+08et1AKaagCPO8tfRnISETIjPUoodziTKWM
RwTslIRxjO0Qx0Rw3nqCBh01QHuulmDR9v7jIGqYllG9/4CtV/FCqilyKiJ0Uik7QBjTRY3EiZjAavvd9qE6pNfn13c/cQvOH1mvKM5ta4RMozkeCTYpfwjCzp/dvjLn0rNPMNzkiui9EAJCglSHjWMEtCa6G0efQDxMmIuaZDLGAzShV0Q5XeV8pONxwb150q9wyvJ6T+7rC0WGkMGEnZfaIsI2OyBUKzUR0U2DML0kCgQnNWlvfoVgSXGXY8iZkRXNDeKNdHJEfXRQjutq7kRS6pOSnZHGv8lrvn5XLx6yfyOIzfolxKAlJo4pfUXPH4hUkrQEXxmzm2kLlrUYaJLXh8grlnhUG21lijgII/8ert2Qu6NG8P6UUsjF4+TBb92bx8mGUEkr9Zn7nu4BKjHQh4Hgjxei9RaH5FEn9JX8ZNm9R0SDlMvsqhNoW9jIU5bMW41jra7iVKQt13LYZaAygTWHOH58TT8AY3TitAhzwWylkVXMdCEctYDqMV0hoOG3OhcOxwoIVXgRTvEQOw7xw0pqrbwQHT6rkutkdrYXgeuoHlONa0lup7C7vr4mzNcZGeDIPd/GTDL+X8EnysqY6K+ef2FHTHxqgGWs+cefu7bU6644ba6PUbjUe3wzb/vhqNRu8DLRff/e/Oc+e5fb/uz/4Czbg37pfSUpobb9Nm46lz+3f69wN/nO8hxF8Z+LJG605sNjruUaO3+b/xp9G08J9n0O9PHj5W/4473Snmwl1Rntw99db3z2vLeL81GsuZvfg7dhVX91n6lN41rf3QaEvPg5c/dueVbwp8a9r+4uTn7uO//SeOG3UetF+Dmamhv/3B0/0Hvm5l3//zzxM/WIEWr5nT7pf2OP21WtiK0tO02eJ7YIybrfdFrzP+Gt9O+r32i3Xfeu6/P0zeYHuumZ2nhkSHwSIlkR9ChoAU6uuDKOLgRunj7SQem+eFoFcI/2wIC6IUxy8r8OJT23KXxDt1jdeek/7m7eWw8x8=</diagram></mxfile>
|
2106.05409/main_diagram/main_diagram.pdf
ADDED
|
Binary file (27.6 kB).
|
|
|
2106.05409/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,50 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Following typical early exit frameworks, we add $M$ shallow Internal Classifiers, $\mathrm{IC}_1,\ldots,\mathrm{IC}_M$, on intermediate layers of $f_\theta$. Namely, let $g_{\phi_m}$, for $m\in\{1,\ldots,M\}$, be the $m$-th $\mathrm{IC}$ network returning $K$ logits, which is attached to hidden layer $f_{\theta_{m}}$ of the base network $f_\theta$. The index $m$ is independent of $f_\theta$ layer numbering. In general, $M$ is lower than the overall number of $f_\theta$ hidden layers since we do not add $\mathrm{IC}$s after every layer (see more details in Appendix [\[sec:appendix_placement_ic\]](#sec:appendix_placement_ic){reference-type="ref" reference="sec:appendix_placement_ic"}).
|
| 4 |
+
|
| 5 |
+
Although using $\mathrm{IC}$s to return an answer early can reduce overall computation time [@kaya2019shallow], in a standard setting each $\mathrm{IC}$ makes its decision independently, ignoring the responses returned by previous $\mathrm{IC}$s. As we show in Section [4.2](#sec:information_loss){reference-type="ref" reference="sec:information_loss"}, early layers often give correct answers for examples that are misclassified by later classifiers, and hence discarding their information leads to waste and performance drops. To address this issue, we need mechanisms that collect the information from the first $(m-1)$ $\mathrm{IC}$s to inform the decision of $\mathrm{IC}_m$. For this purpose, we introduce two complementary techniques: *cascade connections* and *ensembling*, and show how they help reduce information waste and, in turn, accelerate the model.
|
| 6 |
+
|
| 7 |
+
[Cascade connections]{style="color: black"} directly transfer the already inferred information between consecutive $\mathrm{IC}$s instead of re-computing it. Thus, they improve the performance of initial $\mathrm{IC}$s that lack enough predictive power to classify correctly based on low-level features. Ensembling of individual $\mathrm{IC}$s improves performance as the number of members increases, thus showing the greatest improvements in the deeper part of the network. This is visualized in Figure [1](#fig:re_intro){reference-type="ref" reference="fig:re_intro"}, where [cascade connections]{style="color: black"} are used first to pass already inferred information to later $\mathrm{IC}$s, while ensembling is utilized to form the final $\mathrm{IC}$ prediction. The details of these two techniques are presented in the following paragraphs.
|
| 8 |
+
|
| 9 |
+
Inspired by the gradient boosting algorithm and literature on cascading classifiers [@viola2004robust], we allow each $\mathrm{IC}$ to improve on the predictions of previous $\mathrm{IC}$s instead of inferring them from scratch. The idea of [cascade connections]{style="color: black"} is implemented by adding skip connections that combine the output of the base model hidden layer $f_{\theta_m}$ with the logits of $\mathrm{IC}_{m-1}$ and pass it to $\mathrm{IC}_m$. The prediction is realized by the softmax function applied to $g_{\phi_m}$ (the $m$-th $\mathrm{IC}$ network): $$\begin{equation}
|
| 10 |
+
\label{eq:stacking}
|
| 11 |
+
p_m =
|
| 12 |
+
\mathrm{softmax}(g_{\phi_m} (f_{\theta_m}(x), g_{\phi_{m-1}} \circ f_{\theta_{m-1}}(x))) \text{, for } m > 1,
|
| 13 |
+
\end{equation}$$ where $g \circ f (x) = g(f(x))$ denotes the composition of functions. Formally, $p_m = p_m(x;\phi_m)$, where $\phi_m$ are trainable parameters of $\mathrm{IC}_m$, but we drop these parameters in notation for brevity. $\mathrm{IC}_1$ uses only the information coming from the layer $f_{\theta_1}$ which does not need to be the first hidden layer of $f_\theta$. Figure [1](#fig:re_intro){reference-type="ref" reference="fig:re_intro"} shows the skip connections as red horizontal arrows.
|
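A sketch of the cascaded forward pass is given below (PyTorch-style). The `backbone_blocks` partition of $f_\theta$, the GAP pooling inside each $\mathrm{IC}$, and the zero logits fed to $\mathrm{IC}_1$ are simplifying assumptions; the `detach` implements the stop-gradient discussed next.

```python
import torch
import torch.nn as nn

class CascadeIC(nn.Module):
    """IC head that combines pooled layer features with the previous IC's logits."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.head = nn.Linear(feat_dim + num_classes, num_classes)
    def forward(self, feats, prev_logits):            # feats: (B, feat_dim)
        return self.head(torch.cat([feats, prev_logits], dim=1))

def cascade_forward(backbone_blocks, ics, x, num_classes):
    """Return the logits of every IC; gradients are stopped between consecutive ICs."""
    logits = torch.zeros(x.size(0), num_classes, device=x.device)  # stand-in for IC_0
    all_logits, h = [], x
    for block, ic in zip(backbone_blocks, ics):
        h = block(h)                                  # next chunk of f_theta
        feats = h.mean(dim=(2, 3))                    # GAP over spatial dimensions
        logits = ic(feats, logits.detach())           # detach = stop-gradient
        all_logits.append(logits)
    return all_logits
```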
| 14 |
+
|
| 15 |
+
Each $\mathrm{IC}_m$ is trained in parallel (with respect to $\phi_m$) to optimize the prediction of all output classes using an appropriate loss function $\mathcal{L}(p_m)$, e.g. cross-entropy for classification. However, during the backward step it is crucial to stop the gradient of the loss function from passing to the previous classifier. Allowing the gradients of loss $\mathcal{L}(p_{m})$ to affect $\phi_{j}$ for $j \in \{1, \ldots, m-1\}$ leads to a significant performance degradation of earlier layers due to increased focus on the features important for $\mathrm{IC}_m$, as we show in Appendix [\[sec:stop_gradient\]](#sec:stop_gradient){reference-type="ref" reference="sec:stop_gradient"}.
|
| 16 |
+
|
| 17 |
+
Ensembling in machine learning models reliably increases the performance of a model while improving robustness and uncertainty estimation [@fort2019deep; @lakshminarayanan2017simple]. The main drawback of this approach is its wastefulness, as it requires training multiple models and using them to process the same examples. However, in our setup we can adopt this idea to combine predictions which were already pre-computed in previous $\mathrm{IC}$s, at near-zero additional computational cost.
|
| 18 |
+
|
| 19 |
+
To obtain a reliable zero-waste system, we build ensembles that combine outputs from groups of $\mathrm{IC}$s to provide the final answer of the $m$-th classifier. Since the classifiers we are using vary significantly in predictive strength (later $\mathrm{IC}$s achieve better performance than early $\mathrm{IC}$s) and their predictions are correlated, the standard approach to deep model ensembling does not work in our case. Thus, we introduce a weighted geometric mean with class balancing, which allows us to reliably find a combination of pre-computed responses that maximizes the expected result.
|
| 20 |
+
|
| 21 |
+
Let $p_1,p_2,\dots,p_m$ be the outputs of $m$ consecutive $\mathrm{IC}$ predictions (after [cascade connections stage]{style="color: black"}) for a given $x$ (Figure [1](#fig:re_intro){reference-type="ref" reference="fig:re_intro"}). We define the probability of the $i$-th class in the $m$-th ensemble to be: $$\begin{equation}
|
| 22 |
+
\label{eq:ensemble_eq}
|
| 23 |
+
q_m^i(x){}=\frac{1}{Z_m}\,{}b_m^i\prod_{j\leq{}m}\big(p_j^i(x)\big)^{w_m^j},
|
| 24 |
+
\end{equation}$$ where $b_m^i > 0$ and $w_m^j>0$, for $j=1,\dots,m$, are trainable parameters, and $Z_m$ is a normalization factor, such that $\sum_i q_m^i(x) = 1$. Observe that $w_m^j$ can be interpreted as our prior belief in predictions of $\mathrm{IC}_j$, i.e. large weight $w_m^j$ indicates less confidence in the predictions of $\mathrm{IC}_j$. On the other hand, $b_m^i$ represents the prior of $i$-th class for $\mathrm{IC}_m$. The $m$ indices in $w_m$ and $b_m$ are needed as the weights are trained independently for each subset $\{\mathrm{IC}_j: j\leq m\}$. Although there are viable potential approaches to setting these parameters by hand, we verified that optimizing them directly by minimizing the cross-entropy loss on the training dataset works best.
|
| 25 |
+
|
| 26 |
+
Out of additive and geometric ensemble settings we found the latter to be preferable. In this formulation, a low class confidence of a single $\mathrm{IC}$ would significantly reduce the probability of that class in the whole ensemble. In consequence, in order for the confidence of the given class to be high, we require all $\mathrm{IC}$s to be confident in that class. Thus, in geometric ensembling, an incorrect although confident $\mathrm{IC}$ answer has less chance of ending calculations prematurely. In the additive setting, the negative impact of a single confident but incorrect $\mathrm{IC}$ is much higher, as we show in Appendix [\[sec:additive_vs_geometric\]](#sec:additive_vs_geometric){reference-type="ref" reference="sec:additive_vs_geometric"}. Hence our choice of geometric ensembling.
|
| 27 |
+
|
| 28 |
+
Direct calculation of the product in [\[eq:ensemble_eq\]](#eq:ensemble_eq){reference-type="eqref" reference="eq:ensemble_eq"} might lead to numerical instabilities whenever the probabilities are close to zero. To avoid this problem we note that $$\begin{equation*}
|
| 29 |
+
b_m^i\prod_{j\leq{}m}\big(p_j^{i}(x)\big)^{w_m^j}=b_m^i\exp\bigg(\sum_{j\leq{}m} w_m^j \ln{}p_j^{i}(x)\bigg)
|
| 30 |
+
\label{eq:expln},
|
| 31 |
+
\end{equation*}$$ and that log-probabilities $\ln p_j^{i}$ can be obtained by running the numerically stable log softmax function on the logits $g_{\phi_m}$ of the classifier.
|
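Putting [\[eq:ensemble_eq\]](#eq:ensemble_eq){reference-type="eqref" reference="eq:ensemble_eq"} and this log-space identity together, the ensemble can be computed as in the following sketch (weights and priors are assumed to be already positive, e.g. exponentiated raw parameters):

```python
import torch
import torch.nn.functional as F

def geometric_ensemble(logits_list, w, b):
    """q_m of (eq:ensemble_eq) computed in log space.

    logits_list: outputs of IC_1..IC_m, each of shape (B, K)
    w: (m,) positive weights; b: (K,) positive class-balancing priors
    """
    log_probs = [F.log_softmax(l, dim=1) for l in logits_list]  # stable ln p_j^i
    s = torch.log(b) + sum(w_j * lp for w_j, lp in zip(w, log_probs))
    return F.softmax(s, dim=1)  # softmax normalizes, playing the role of 1/Z_m
```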
| 32 |
+
|
| 33 |
+
::: algorithm
|
| 34 |
+
**Input:** pre-trained model $f_\theta$, cross-entropy loss function $\mathcal{L}$, training set $\mathcal{T}$.
|
| 35 |
+
|
| 36 |
+
**Initialize** $M$ shallow models $g_{\phi_m}$ at selected layers $f_{\theta_m}$.
|
| 37 |
+
|
| 38 |
+
[]{#alg:ztw label="alg:ztw"}
|
| 39 |
+
:::
|
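A schematic Python version of the two training phases described above is given below; this is a sketch rather than the exact published procedure, and it assumes the `cascade_forward` and `geometric_ensemble` helpers from the earlier sketches, a frozen pre-trained backbone, and raw parameters `w`, `b` made positive via `exp`:

```python
import torch
import torch.nn.functional as F

def train_ztw(backbone_blocks, ics, ens_params, loader, num_classes, epochs=10, lr=1e-3):
    """Phase 1: train cascaded ICs; Phase 2: fit ensemble parameters (w_m, b_m)."""
    # Phase 1: the backbone is frozen; ICs learn with stop-gradient between them.
    opt = torch.optim.Adam([p for ic in ics for p in ic.parameters()], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            all_logits = cascade_forward(backbone_blocks, ics, x, num_classes)
            loss = sum(F.cross_entropy(l, y) for l in all_logits)
            opt.zero_grad(); loss.backward(); opt.step()
    # Phase 2: for each m, fit (w_m, b_m) by cross-entropy on pre-computed IC outputs.
    for m, (w, b) in enumerate(ens_params, start=1):
        opt2 = torch.optim.Adam([w, b], lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                with torch.no_grad():
                    all_logits = cascade_forward(backbone_blocks, ics, x, num_classes)
                q = geometric_ensemble(all_logits[:m], w.exp(), b.exp())  # positivity via exp
                loss = F.nll_loss(torch.log(q + 1e-12), y)
                opt2.zero_grad(); loss.backward(); opt2.step()
```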
| 40 |
+
|
| 41 |
+
Cascade connections and ensembling have different impacts on the model. Cascade connections primarily boost the accuracy of early $\mathrm{IC}$s. Ensembling, on the other hand, primarily improves the performance of later $\mathrm{IC}$s, which combine the information from many previous classifiers.
|
| 42 |
+
|
| 43 |
+
This is not surprising, given that the power of the ensemble increases with the number of members, provided they are at least weak in the sense of boosting theory [@schapire_strength_1990]. As such, the two techniques introduced above are complementary, which we also show empirically via ablation studies in Appendix [\[sec:ablation\]](#sec:ablation){reference-type="ref" reference="sec:ablation"}. The whole training procedure is presented in Algorithm [\[alg:ztw\]](#alg:ztw){reference-type="ref" reference="alg:ztw"}.
|
| 44 |
+
|
| 45 |
+
Once a ZTW model is trained, the following question appears: how to use the constructed system at test time? More precisely, we need to dynamically find the shortest processing path for a given input example. For this purpose, we use one of the standard confidence scores given by the probability of the most confident class. If the $m$-th classifier is confident enough about its prediction, i.e. if $$\begin{equation}
|
| 46 |
+
\underset{i}{\max}\, q_m^i > \tau \text{, for a~fixed } \tau>0,
|
| 47 |
+
\label{eq:thres}
|
| 48 |
+
\end{equation}$$ where $i$ is the class index, then we terminate the computation and return the response given by this $\mathrm{IC}$. If this condition is not satisfied, we continue processing $x$ and go to the next $\mathrm{IC}$.
|
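At test time, the resulting early-exit loop may look as follows (a sketch with the helpers above; a batch of one is assumed so that the exit decision applies to a single example):

```python
import torch

@torch.no_grad()
def early_exit_predict(backbone_blocks, ics, ens_params, x, tau, num_classes):
    """Stop at the first IC whose ensemble confidence exceeds tau, cf. (eq:thres)."""
    logits = torch.zeros(x.size(0), num_classes, device=x.device)
    all_logits, h = [], x
    for m, (block, ic, (w, b)) in enumerate(zip(backbone_blocks, ics, ens_params), 1):
        h = block(h)                                 # run only the layers needed so far
        logits = ic(h.mean(dim=(2, 3)), logits)      # cascade connection
        all_logits.append(logits)
        q = geometric_ensemble(all_logits, w.exp(), b.exp())
        if q.max().item() > tau:                     # max class probability above tau
            return q.argmax(dim=1), m                # early exit at IC_m
    return q.argmax(dim=1), len(ics)                 # no exit: return the last answer
```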
| 49 |
+
|
| 50 |
+
Threshold $\tau$ in [\[eq:thres\]](#eq:thres){reference-type="eqref" reference="eq:thres"} is a manually selected value, which controls the acceleration-performance trade-off of the model. A lower threshold leads to a significant speed-up at the cost of a possible drop in accuracy. Observe that for $\tau>1$, we recover the original model $f_\theta$, since none of the $\mathrm{IC}$s is confident enough to answer earlier. In practice, to select its appropriate value, we advise using a held-out set to evaluate a range of possible values of $\tau$.
|
2108.00230/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
| 1 |
+
<mxfile host="app.diagrams.net" modified="2020-06-10T16:47:44.672Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36" etag="pbNu26WYRNLvm54nfBxk" version="13.1.3" type="device"><diagram id="gpe6MXyyHRUA5FygsWAw" name="Page-1">7VxLk6IwEP41HneLgBA87jju7mEfUzVV+7pRmhGqkLiIo+6v3zAQkEZJZJDAyMUibefV+b50pyOOjOlq/yl01u5XuiD+SNcW+5FxP9J1bOvsMxYcEsFY1xLBMvQWiQjlgkfvH0mFXG3rLcimoBhR6kfeuiic0yAg86ggc8KQ7opqT9Qv9rp2lqQkeJw7fln601tEbiK1dZzLPxNv6fKekTVJvlk5XDmdycZ1FnR3JDJmI2MaUholT6v9lPix7bhdknofz3ybDSwkQSRTYTP1VxP6ZfZ9Nbuj3zx3+ePP33dpK8+Ov00nnA42OnALsFaYsVnhbud6EXlcO/P4mx1bbiZzo5XPSog9Opt1sgJP3p6wTu/StkkYkf3ZQaPMFAxChK5IFB6YSlpB11LrcfikxV2+FlzkHi0Dlznp6i+zhnMDsYfURpfYS+uZwZByi/UNYhPVBtN7ZjCk3GJGzyymfhubSFgsWHyIHSgrBTQgRSuRvRf9Onr+zZ6192Zauo/nrfHCgRcCNvSsUlw4qhUX82ovpazegnnkdFw0jFy6pIHjz3Lp3XwbPsdr9dJyMpG4UvVqscnSbTgnEsCKnHBJIpEbLS//0fKaJ5aXy0LiO5H3XBzvqTVPe3igHptJhi4DA3RhAJtknmmt4wgBNJTFZrwhHTSU2KHUEIOJczhSW8cKm/MDzrZY3o+lVY4L6hfV2UMygJwO2RLUZ4gu4+gHhnBnNTCkWYbYoB8kYIitgCEygd3AEH5iGBjSLEPAgJHIh2AFDJGJS6UY8l5DuMASXM0SVnggoccmQEL1DHgbyNbH6BSCGge2CYCN7Ophme0D27QkgK3ywGVYxbVSfd4ycc8MpjxtZNo9s5jqJIgpc6TvksGUp434Bt4biynfxqzbC/mF0Q1HkTC+505TYv3biYxMA8CrbmRk2qChK8X8YxDqiPJGUL+N0MjSB4qcc00DRa5PEXBIESWOoH4rFJEJht8WReocm3kELGYNVkmRMSpusghGH9KZI6vY0ORKDAEJKiRiCNRvhSJdP/2UbjSVB/PDjaZUtCq7qSh1u6UkslF3T4FZYvs6m4oB6GAINhWo38aegm/v8FbL7UoHq7ZSigBvqUNkS0emSMC1pvyuAfoR+V2j0k9fiSNNnd60Y47UYAgS8KNJwEvHmbcN+MYw1vVfq5XSjqpjO2wOnksZ7bQiGHhkc3kopp3MNDTuZqzT4z1/6mw/AYK7fn1qgvhb9b0D7vr1KTSY8utT3PUEArSYch/T9evTEsRUW4zHX72xmPJtzO7bvq+clfbtXRUIz2scRcLzGu7abRoeA3jVTeth8ELAtdJ6FuCDKJiE+m0Ek7aMp6+kSF2416GWsrQeR4iYNWqzHDBLVjfLMYZvEFwprWde+C4P1G8lrWe/+nJo4Ej5PDZwRJYjsB8RR4B+K1fOnJcDRyo5wlMRYo5MbpojjcHy1beWHJZaAZY1QIkqIdkkxmSvIAeMNYMxmXP5G7v14/+l0+3fgcA74KYQhtpG2O3lMWoFoNKpDaWHNOPM29mXwxLi+0qXYtaFP0+C+q88pLFi/jdXiXr+X2HG7D8=</diagram></mxfile>
|
2108.00230/main_diagram/main_diagram.pdf
ADDED
|
Binary file (9.64 kB).
|
|
|