Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes. See raw diff
- .gitattributes +92 -0
- 2003.08413/paper.pdf +3 -0
- 2006.16668/paper.pdf +3 -0
- 2007.02863/main_diagram/main_diagram.drawio +1 -0
- 2007.02863/main_diagram/main_diagram.pdf +0 -0
- 2007.02863/paper_text/intro_method.md +114 -0
- 2009.08257/paper.pdf +3 -0
- 2009.10259/main_diagram/main_diagram.pdf +3 -0
- 2009.10259/main_diagram/main_diagram.png +3 -0
- 2101.09178/paper.pdf +3 -0
- 2104.00678/main_diagram/main_diagram.drawio +1 -0
- 2104.00678/main_diagram/main_diagram.pdf +0 -0
- 2104.00678/paper_text/intro_method.md +75 -0
- 2105.01203/paper.pdf +3 -0
- 2105.03491/paper.pdf +3 -0
- 2105.12774/paper.pdf +3 -0
- 2106.05303/main_diagram/main_diagram.drawio +1 -0
- 2106.05303/main_diagram/main_diagram.pdf +0 -0
- 2106.05303/paper_text/intro_method.md +217 -0
- 2106.05665/main_diagram/main_diagram.drawio +1 -0
- 2106.05665/main_diagram/main_diagram.pdf +0 -0
- 2106.05665/paper_text/intro_method.md +129 -0
- 2107.10140/paper.pdf +3 -0
- 2108.02479/paper.pdf +3 -0
- 2108.13499/main_diagram/main_diagram.png +3 -0
- 2109.09133/paper.pdf +3 -0
- 2109.13432/main_diagram/main_diagram.drawio +0 -0
- 2109.13432/paper_text/intro_method.md +33 -0
- 2110.05892/main_diagram/main_diagram.drawio +1 -0
- 2110.05892/paper_text/intro_method.md +7 -0
- 2111.04239/main_diagram/main_diagram.drawio +0 -0
- 2111.04239/paper_text/intro_method.md +145 -0
- 2112.00735/paper.pdf +3 -0
- 2112.06170/main_diagram/main_diagram.drawio +0 -0
- 2112.06170/paper_text/intro_method.md +198 -0
- 2201.12990/main_diagram/main_diagram.drawio +1 -0
- 2201.12990/main_diagram/main_diagram.pdf +0 -0
- 2201.12990/paper_text/intro_method.md +435 -0
- 2202.01085/paper.pdf +3 -0
- 2203.06345/main_diagram/main_diagram.drawio +0 -0
- 2203.06345/paper_text/intro_method.md +84 -0
- 2203.11894/paper.pdf +3 -0
- 2204.07258/main_diagram/main_diagram.drawio +1 -0
- 2204.07258/paper_text/intro_method.md +177 -0
- 2204.09263/paper.pdf +3 -0
- 2205.05871/paper.pdf +3 -0
- 2205.06688/paper.pdf +3 -0
- 2205.13346/paper.pdf +3 -0
- 2205.15674/main_diagram/main_diagram.drawio +0 -0
- 2205.15674/paper_text/intro_method.md +39 -0
.gitattributes
CHANGED
@@ -4666,3 +4666,95 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2305.10157/paper.pdf filter=lfs diff=lfs merge=lfs -text
 2205.12493/paper.pdf filter=lfs diff=lfs merge=lfs -text
 2206.08311/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2406.05707/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2207.07621/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2403.11641/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2105.01203/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2501.19309/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2009.08257/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2401.01887/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2303.12337/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2301.07944/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2503.11544/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2404.02393/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2210.08559/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2305.05940/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2503.02103/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2107.10140/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2405.01379/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2101.09178/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2311.14464/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2204.09263/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2410.05851/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2305.08381/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2006.16668/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2305.19486/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2506.21080/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2307.14336/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2407.18249/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2505.15229/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2309.17179/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2311.12501/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2209.10222/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2402.11733/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2305.18818/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2306.14479/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2412.07072/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2403.03234/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2405.02154/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2205.13346/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2109.09133/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2003.08413/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2310.09536/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2310.07487/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2210.08884/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2207.09944/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2308.09228/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2212.01026/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2307.01630/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2506.18337/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2112.00735/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2302.12611/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2403.04253/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2305.01139/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2108.02479/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2405.20279/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2502.21315/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2302.03251/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2305.03691/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2211.05568/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2205.06688/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2210.15088/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2206.14754/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2508.03002/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2309.02028/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2211.14646/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2105.03491/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2212.13545/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01085/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2411.03033/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2105.12774/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2306.15222/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2510.26243/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2305.19308/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2511.00524/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2502.06494/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2312.06071/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2502.17377/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2309.15775/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2205.05871/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2402.13077/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2407.10299/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2203.11894/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2309.04914/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2407.04620/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2403.13041/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2405.17509/main_diagram/main_diagram.pdf filter=lfs diff=lfs merge=lfs -text
+2411.01545/main_diagram/main_diagram.pdf filter=lfs diff=lfs merge=lfs -text
+2009.10259/main_diagram/main_diagram.pdf filter=lfs diff=lfs merge=lfs -text
+2403.15605/main_diagram/main_diagram.pdf filter=lfs diff=lfs merge=lfs -text
+2410.18096/main_diagram/main_diagram.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11674/main_diagram/main_diagram.pdf filter=lfs diff=lfs merge=lfs -text
+2309.14052/main_diagram/main_diagram.pdf filter=lfs diff=lfs merge=lfs -text
+2502.12769/main_diagram/main_diagram.pdf filter=lfs diff=lfs merge=lfs -text
+2405.15388/main_diagram/main_diagram.pdf filter=lfs diff=lfs merge=lfs -text
2003.08413/paper.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a7e0e8b00450e605c3b4ea0f368d6a23a03d1ae197ee3b21a02e5986327b653
+size 5532909
2006.16668/paper.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4078d81e9ef6db2e3a982e56d3d8816766d8d49f350e02f7410501ca496ad54e
+size 1747868
2007.02863/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2020-06-02T00:41:46.806Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36" version="13.1.12" etag="ypHZ-JMXBX8cW2hEnisr" type="google"><diagram id="M5fQM6vb_0E8BDtQEJKj">7V1bk6o6Fv41Vs08jAWEJPDY9u4z87BP1a7dNTNnnk6xFZVzUBzE3d3z6ycBopAsFTFEtLEfWiIEyPrWJd/KZYSeV+9/T4PN8tdkFsYjx5q9j9CXkePYhCL2j5d8FCXIIm5RskijWVFmHQpeo/+F5aWidBfNwm1ZVhRlSRJn0aZeOE3W63Ca1cqCNE3e6qfNk3hWK9gEi7D2GLzgdRrEoXLav6NZtixKPVw5+x9htFiKO9tW+csqECeXVWyXwSx5q9wLvYzQc5okWfFt9f4cxrz16u3yy5Ff9w+WhuusyQVOccHPIN6V7zZymCzc1xF++fp7VhyUj5p9iPdPk916FvIqrBGavC2jLHzdBFP+6xsTOStbZquYHdns6zxZZ6UIHX76PIrj5yRO0rwuNAtCbz5l5dssTf4MK7+QqRf+mLNfFnGw3ZY32zcYP5gmq2hafi9fJEyz8P1oY9j7JmbgDJNVmKUf7BRxgUXJ2C1ft8SmQ+yyjd4OkhaCXlaELMqCEluLffWH5mdfSgnA0kAnpPF9kAaXBjUnDfeINJ4G3SilQZE5aeBjuvH7iE6YMCb2iH7hkvlkcnFc76ZaQhrK5fsgF6P6QgG5SE0frmdPPBpiR1PeLPz9q61dtKKIcBDU/uy40sovhP+x8vA9yn4ra+Hf/8MbdozLoy/vZTvnBx/lgSTLuTcNp6Asf3jYxfyKWbBd5sjJb7NmDfSbqJgfVG7KDw93zY8+arIOZyLiE4FZskunYc0tZ0G6CIWEGuOhoajTMA6y6Gf9ISD5l3f4lkTsxge42TZgBlxar6d4qfLSalQo1wYaFSTVVrSHUlsOzH0DNMKqd8LTDnFPbjl8c5bDHyyHLsvh3YPlsABHZbW1HDYAXp9IT6XPcghGYADr9WB1VbCS3oEV2zK8kC0e82KwIkupzfFdqTaNYLWP+LlesS06XJiL6RgTKX4wSKSIW92QSZnPQwIr9oz6PyxdwQLU0ia7GfYxzqpXLElnmDbY0baP8VE9ZUB0tDmx3du2eVPWyQS7YciiQG1usvthQ4zSVSGdYzaka6AbHUZxIsiohnHCdPQojgNA5gu3dXEch6k9Fi5/byY8f+xXPvWqNQZ12om2zwVXgFzrH1xd31f9UGt2DRNXhatPzMD19lzbLSNjk35MO49m2DA0kFOXhsEBDEPvyDOMHNWPyQzCVX7Ms8z4MUc7k/a54ApQvXcBV2SR1nAF/Jhnm/FjDsSlkTjj8t8E6xpuyX93fDxVDr6/bXM0PrETbHfzfviRfVsU/3MHiJ/5WC0W/dDJP5lnZP9ev7KuHnOQrL9X+sjiduxJizuK62WlieNosw0buE0Jz5j/gXFZ/lG1qfjUtY7FUuVx+USi3a7yrYiim/J7wjZdY6zkRmpjgjo0KA7Uj2suKVP9ON9SgICopPUXpJ2B2pzubAjEXXZqQ54GG3Lahhjk9sDxfQ9mQ6DONboHG+LKg00a2xDXUWHlyulsjTYEYoi7jUO+DzbklA0xmf0CRz8+mA3BgA1x7sKG+G7bOASrvXriS3jRaEMgCrnbOGSwIadtiEGeEBz9+GA2hN6DvcC4bb8lp5rxgfTA9YqNcSIQ42yAE8lT4XTCft9O02iT5dU5E/ZTnh0fjIxjY4zUlI/JmWUQu2sgTB2g0QoaJmNYBDGpHfgf8f0i76PB9/hA/Nq7BDPi/mjvQQhBkqlon2umHjA0AnXlggRuHxRMwoz2O8tDLDR2ECXExw7DkjIrCI8J3f+KcHtkSWGO1R2sjDO1Q7ZnL2nfd247K/7xiVrUN1KWjzqQZd4+rwPO5EdmOkXIOEE7JHlOmw6DOR70+Pws6lv4AZmO9ukc0HQYGiuJjPOyQ27npOkw2i1+fFoW9Y2WBU1H6ywOOI8eGzIdvgoWJgqhIEmaLZNFsg7il0Pp5FD6NUk2JU7+CLPsowRRsMuSOsIEoiwDiKrNQ+YH34IsC9N1ofAWH+453aU/K1OV99qxTtZhUfJLxNus8ZRkoL99AWovgyN72OCjcsKG42B7HK0+9ce+hC+vnA5yDJTQNRRTCW3Fk7TFnmuc6x3SiZLbsvYfiXozmVh0DRG7N/RgAuq99mDt84qQBzOVTHQd01ZkSCY2zBjZHr4pIeeaZ2OHZGJ7aBj1OYa42qb5n7Hl1pzOGZeTH30L04i9eJhe4Iecnvkh23IYENAhoagkgVovDcdX4EEH/6OM4LbNuCeI1u0iuGkR2thnUNYEUMDI7d4txuQgb8wX/ILzjNQiWvKMjquGVB7uLNUoJm8OPfereu6il17ruTenqq82f5bKHnpOaxIaqI2SZnMKdLAKEkPQ6Bria2YV6KAXOvQCGE5/Qfa3C71QkHydXjSca6NDLySMN7sGadYLb9CLrvyFQb2wASTT1noB1WZ3pRduC3+hXqNdL4YMSGcZkNvGUXr9hdIT7VAvzmdnAL3QHEeJztegF/r14pH8BcFd6YXTQi/Ua3T7CyyEPOjFpXphAO/5JMm2M9yar8d9+i6tBm9cqh82IvbYIvL7nVYQ8CLdeX0MMJyDhtyJhpzGri4NaTUyUouGnOtygBdpj63QoCF3qyGnsatNQ9pMO9CiIefIKvgi3VGWO2jI3WqIIR9iRkPYu1zuQ4CLtGsIHjTkbjXETD/EUJQFgP18PwTSEN1R1pAh18FgOUDGw6QmudiEJp2+iylfcyHfpV6h3cto3zUCXHhf42rbbTWr8W4T+0GSTXRHVZ2+TYOkCKl7y7Xd4YR6amWeJ1WmbwwU1jD3zSw6L0dZl7MS+o9O7fsFq/gkuDt8at9aA5nFZ492/QRMae8GmkK7flpW220LDO/6SbRvrPGpjCmwr8Z94LP1thqm8alhmuAnDkXvYaA+gM8rdikyjU/nPD4v2p5suwxmOZT5uQsO5xFAAkyTFUe5Apyyu18+03U7lIlpQ/uQSYydreBiPzKyCgzb0TBXi0DT+B6iYamrrJ+IlYZFxB2LRT5qbethDW3rXmRUy3evtVOlYWFCCjBbfMa5Yo6d4ybzjMG8fEJUA4Mp9LlmMfu2/pntimVR9xCS96Fpvjq0J1WF5YXUNBrLy6bF3Qh35ybIXT7jc8DdjXEHLbTGbKD7rxF+iR5mQ3KmAH69TakYc1LBBwHwQXQ4bIg0dosZ8+ydsnQ1opNvwYh++Uve7H/tot2rfr3qyvfOH5gxb2i3eCpU/JRwfCCc0iKce+dMj++fWaxk0203nwDZMNocAoassy8tpc2axm1pnX3LU3ag6JDSJxr2hiCb91yYzZbtiDpckKMycV
8st3FYRkPngh06zJLnWZJZsizFLHW1IAfVwD72eEF2Aeuemw3PqS6FIHEn2EM6FkrgTAGwoWJnIZ9oZiUamZ8K+ZbJ6sdue17F6zHKUWahRiZUYOqpViC0ZzikkBXwCUUB0aTtvquM2nNqYxxUJqKreFEstPzAC8AJENY6db3jXxnAgG0KJQk3jxyg2rD0VBr1HKIJ5V7Hc7JaJWshLY6wdDfNdmlYCQIk5PFrofBX4gMBijCIowUnJNJCUhOusNE0iJ/K8lU0m8XHwogjhsVuTFlKEbmWCEHp9IvceZUIFr6uitAWPDA7TBMevx3wwFpm+WsyC/kZ/wc=</diagram></mxfile>
2007.02863/main_diagram/main_diagram.pdf
ADDED
Binary file (69.1 kB).
2007.02863/paper_text/intro_method.md
ADDED
@@ -0,0 +1,114 @@
# Introduction

High-dimensional dynamical systems are often composed of simple subprocesses that affect one another through sparse interaction. If the subprocesses *never* interacted, an agent could realize significant gains in sample efficiency by globally factoring the dynamics and modeling each subprocess independently [28, 29]. In most cases, however, the subprocesses do *eventually* interact and so the prevailing approach is to model the entire process using a monolithic, unfactored model. In this paper, we take advantage of the observation that *locally—during the time between their interactions—the subprocesses are causally independent*. By locally factoring dynamic processes in this way, we are able to capture the benefits of factorization even when their subprocesses interact on the global scale.

Consider a game of billiards, where each ball can be viewed as a separate physical subprocess. Predicting the opening break is difficult because all balls are mechanically coupled by their initial placement. Indeed, a dynamics model with dense coupling amongst balls may seem sensible when considering the expected outcomes over the course of the game, as each ball has a non-zero chance of colliding with the others. But at any given timestep, interactions between balls are usually sparse.

One way to take advantage of sparse interactions between otherwise disentangled entities is to use a structured state representation together with a graph neural network or other message passing transition model that captures the local interactions [26, 39]. When it is tractable to do so, such architectures can be used to model the world dynamics directly, producing transferable, task-agnostic models. In many cases, however, the underlying processes are difficult to model precisely, and model-free [46, 87] or task-oriented model-based [18, 63] approaches are less biased and exhibit superior performance. In this paper we argue that *knowledge of whether or not local interactions*

<sup>1</sup>Code available at <https://github.com/spitis/mrl>

![Figure 1](_page_0_Figure_10.jpeg)

Figure 1: **Counterfactual Data Augmentation (CoDA)**. Given 3 factual samples, knowledge of the local causal structure lets us mix and match factored subprocesses to form counterfactual samples. The first proposal is rejected because one of its factual sources (the blue ball) is not locally factored. The third proposal is rejected because it is not itself factored. The second proposal is accepted, and can be used as additional training data for a reinforcement learning agent.

occur is useful in and of itself, and can be used to generate causally-valid counterfactual data even in the absence of a forward dynamics model. In fact, if two trajectories have the same local factorization in their transition dynamics, then under mild conditions we can produce new counterfactually plausible data using our proposed **Counterfactual Data Augmentation** (**CoDA**) technique, wherein factorized subspaces of observed trajectory pairs are swapped (Figure 1). This lets us sample from a counterfactual data distribution by stitching together subsamples from observed transitions. Since CoDA acts only on the agent's training data, it is compatible with any agent architecture (including unfactored ones).

In the remainder of this paper, we formalize this data augmentation strategy and discuss how it can improve the performance of model-free RL agents in locally factored tasks.

1. We define local causal models (LCMs), which are induced from a global model by conditioning on a subset of the state space, and show how local structure can simplify counterfactual reasoning.
2. We introduce CoDA as a generalized data augmentation strategy that is able to leverage local factorizations to manufacture unseen, yet causally valid, samples of the environment dynamics. We show that goal relabeling [36, 1] and visual augmentation [2, 46] are instances of CoDA that use global independence relations, and we propose a locally conditioned variant of CoDA that swaps independent subprocesses to form counterfactual experiences (Figure 1).
3. Using an attention-based method for discovering local causal structure in a disentangled state space, we show that our CoDA algorithm significantly improves sample efficiency in standard, batch-constrained, and goal-conditioned reinforcement learning settings.
# Method

The basic model for decision making in a controlled dynamic process is a Markov Decision Process (MDP), described by tuple $\langle \mathcal{S}, \mathcal{A}, P, R, \gamma \rangle$ consisting of the state space, action space, transition function, reward function, and discount factor, respectively [72, 83]. Note that MDPs generalize uncontrolled Markov processes (set $\mathcal{A} = \emptyset$), so that our work applies also to sequential prediction. We denote individual states and actions using lowercase $s \in \mathcal{S}$ and $a \in \mathcal{A}$, and the corresponding random variables using uppercase $S$ and $A$. The transition function $P$ maps each state-action pair to a distribution over next states, and an agent is typically tasked with learning a parameterized policy $\pi$ that maximizes the expected discounted return.

In most non-trivial cases, the state $s \in \mathcal{S}$ can be described as an object hierarchy together with global context. For instance, this decomposition will emerge naturally in any simulated process or game that is defined using a high-level programming language (e.g., the commonly used Atari [7] or Minecraft [35] simulators). In this paper we consider MDPs with a single, known top-level decomposition of the state space $\mathcal{S} = \mathcal{S}^1 \oplus \mathcal{S}^2 \oplus \cdots \oplus \mathcal{S}^n$ for fixed n, leaving extensions to hierarchical decomposition and multiple representations [32, 17], dynamic factor count n [92], and (learned) latent representations [13] to future work. The action space might be similarly decomposed: $\mathcal{A} = \mathcal{A}^1 \oplus \mathcal{A}^2 \oplus \cdots \oplus \mathcal{A}^m$.

Given such state and action decompositions, we may model time slice $(t, t+1)$ using a structural causal model (SCM) $\mathcal{M}_t = \langle V_t, U_t, \mathcal{F} \rangle$ ([65], Ch. 7) with directed acyclic graph (DAG) $\mathcal{G}$, where:

- $V_t = \{V^i_{t[+1]}\}_{i=0}^{2n+m} = \{S_t^1 \dots S_t^n, A_t^1 \dots A_t^m, S_{t+1}^1 \dots S_{t+1}^n\}$ are the nodes (variables) of $\mathcal{G}$.
- $U_t = \{U^i_{t[+1]}\}_{i=0}^{2n+m}$ is a set of noise variables, one for each $V^i$, determined by the initial state, past actions, and environment stochasticity. We assume that noise variables at time t+1 are independent of other noise variables: $U^i_{t+1} \perp U^j_{t[+1]} \ \forall i,j$. The instance $u=(u^1,u^2,\ldots,u^{2n+m})$ of $U_t$ denotes an individual realization of the noise variables.
- $\mathcal{F}=\{f^i\}_{i=0}^{2n+m}$ is a set of functions ("structural equations") that map from $U^i_{t[+1]} \times \operatorname{Pa}(V^i_{t[+1]})$ to $V^i_{t[+1]}$, where $\operatorname{Pa}(V^i_{t[+1]}) \subset V_t \setminus V^i_{t[+1]}$ are the parents of $V^i_{t[+1]}$ in $\mathcal{G}$; hence each $f^i$ is associated with the set of incoming edges to node $V^i_{t[+1]}$ in $\mathcal{G}$; see, e.g., Figure 2 (center).
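
To make the structural equations concrete, here is a minimal Python sketch (ours, not the paper's; the toy dynamics and the `radius` interaction threshold are invented for illustration) in the spirit of the billiards example of Figure 1: each ball's structural equation $f^i$ reads the other ball only when the two are within collision range, so far-apart balls evolve as causally independent subprocesses.

```python
import numpy as np

def f_ball(own, other, noise, radius=1.0):
    """Structural equation f^i for one ball (hypothetical toy dynamics).

    `own`/`other` are (position, velocity) pairs; `noise` plays the role of
    the noise variable U^i. The other ball only enters the equation when
    the balls are within `radius` of each other.
    """
    pos, vel = own
    if np.linalg.norm(pos - other[0]) < radius:  # local interaction active
        vel = 0.5 * (vel + other[1])             # toy collision response
    return pos + vel + noise, vel

rng = np.random.default_rng(0)
ball1 = (np.array([0.0, 0.0]), np.array([0.1, 0.0]))
ball2 = (np.array([5.0, 0.0]), np.array([-0.1, 0.0]))

# Here the balls are far apart, so each f^i ignores the other ball and the
# local causal graph splits into two connected components.
next1 = f_ball(ball1, ball2, rng.normal(scale=0.01, size=2))
next2 = f_ball(ball2, ball1, rng.normal(scale=0.01, size=2))
```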

Note that while $V_t$, $U_t$, and $\mathcal{M}_t$ are indexed by t (their distributions change over time), the structural equations $f^i \in \mathcal{F}$ and causal graph $\mathcal{G}$ represent the global transition function P and apply at all times t. To reduce clutter, we drop the subscript t on V, U, and $\mathcal{M}$ when no confusion can arise.

Critically, we require the set of edges in $\mathcal{G}$ (and thus the number of inputs to each $f^i$) to be *structurally minimal* ([67], Remark 6.6).

**Assumption** (Structural Minimality). $V^j \in \operatorname{Pa}(V^i)$ if and only if there exist some $\{u^i, v^{-ij}\}$ with $u^i \in \operatorname{range}(U^i)$, $v^{-ij} \in \operatorname{range}(V \setminus \{V^i, V^j\})$ and a pair $(v^j_1, v^j_2)$ with $v^j_1, v^j_2 \in \operatorname{range}(V^j)$ such that $f^i(\{u^i, v^{-ij}, v^j_1\}) \neq f^i(\{u^i, v^{-ij}, v^j_2\})$.

Intuitively, structural minimality says that $V^j$ is a parent of $V^i$ if and only if setting the value of $V^j$ can have a nonzero *direct* effect<sup>2</sup> on the child $V^i$ through the structural equation $f^i$. The structurally minimal representation is unique [67].

Given structural minimality, we can think of edges in $\mathcal G$ as representing global causal dependence. The probability distribution of $S^i_{t+1}$ is fully specified by its parents $\operatorname{Pa}(S^i_{t+1})$ together with its noise variable $U^i$; that is, we have $P(S^i_{t+1} \mid S_t, A_t) = P(S^i_{t+1} \mid \operatorname{Pa}(S^i_{t+1}))$ so that $S^i_{t+1} \perp V^j \mid \operatorname{Pa}(S^i_{t+1})$ for all nodes $V^j \not\in \operatorname{Pa}(S^i_{t+1})$. We call an MDP with this structure a factored MDP [37]. When edges in $\mathcal G$ are sparse, factored MDPs admit more efficient solutions than unfactored MDPs [28].

**Limitations of Global Models** Unfortunately, even if states and actions can be cleanly decomposed into several nodes, in most practical scenarios the DAG $\mathcal{G}$ is fully connected (or nearly so): since the $f^i$ apply globally, so too does structural minimality, and edge $(S_k^i, S_{k+1}^j)$ at time k is present so long as there is a single instance—at any time t, no matter how unlikely—in which $S_t^i$ influences $S_{t+1}^j$. In the words of Andrew Gelman, "there are (almost) no true zeros" [24]. As a result, the factorized causal model $\mathcal{M}_t$, based on globally factorized dynamics, rarely offers an advantage over a simpler causal model that treats states and actions as monolithic entities (e.g., [12]).

**LCMs** Our key insight is that for each pair of nodes $(V_t^i, S_{t+1}^j)$ with $V_t^i \in \operatorname{Pa}(S_{t+1}^j)$ in $\mathcal{G}$, there often exists a large subspace $\mathcal{L}^{(j\perp i)} \subset \mathcal{S} \times \mathcal{A}$ for which $S_{t+1}^j \perp V_t^i \mid \operatorname{Pa}(S_{t+1}^j) \setminus V_t^i, (s_t, a_t) \in \mathcal{L}^{(j\perp i)}$. For example, in case of a two-armed robot (Figure 2), there is a large subspace of states in which the two arms are too far apart to influence each other physically. Thus, if we restrict our attention to $(s_t, a_t) \in \mathcal{L}^{(j\perp i)}$, we can consider a *local* causal model $\mathcal{M}_t^{\mathcal{L}^{(j\perp i)}}$ whose local DAG $\mathcal{G}^{\mathcal{L}^{(j\perp i)}}$ is strictly sparser than the global DAG $\mathcal{G}$, as the structural minimality assumption applied to $\mathcal{G}^{\mathcal{L}^{(j\perp i)}}$ implies that there is no edge from $V_t^i$ to $S_{t+1}^j$. More generally, for any subspace $\mathcal{L} \subseteq \mathcal{S} \times \mathcal{A}$, we can induce the Local Causal Model (LCM) $\mathcal{M}_t^{\mathcal{L}} = \langle V_t^{\mathcal{L}}, U_t^{\mathcal{L}}, \mathcal{F}^{\mathcal{L}} \rangle$ with DAG $\mathcal{G}^{\mathcal{L}}$ from the global model $\mathcal{M}_t$ as:

<sup>2</sup>Thus parentage does not capture knock-on effects: e.g., $V_1$ is not a parent of $V_3$ in the Markov chain $V_1 \rightarrow V_2 \rightarrow V_3$.

![Figure 2](_page_2_Figure_1.jpeg)

Figure 2: A two-armed robot (**left**) might be modeled as an MDP whose state and action spaces decompose into left and right subspaces: $S = S^L \oplus S^R$, $A = A^L \oplus A^R$. Because the arms can touch, the global causal model (**center left**) between time steps is fully connected, even though left-to-right and right-to-left connections (dashed red edges) are rarely active. By restricting our attention to the subspace of states in which left and right dynamics are independent we get a local causal model (**center right**) with two components that can be considered separately for training and inference.

- $V_t^{\mathcal{L}} = \{V_{t[+1]}^{\mathcal{L},i}\}_{i=0}^{2n+m}$, where $P(V_{t[+1]}^{\mathcal{L},i}) = P(V_{t[+1]}^i \,|\, (s_t,a_t) \in \mathcal{L})$.
- $U_t^{\mathcal{L}} = \{U_{t[+1]}^{\mathcal{L},i}\}_{i=0}^{2n+m}$, where $P(U_{t[+1]}^{\mathcal{L},i}) = P(U_{t[+1]}^i \,|\, (s_t,a_t) \in \mathcal{L})$.
- $\mathcal{F}^{\mathcal{L}} = \{f^{\mathcal{L},i}\}_{i=0}^{2n+m}$, where $f^{\mathcal{L},i} = f^i|_{\mathcal{L}}$ ($f^i$ with range of input variables restricted to $\mathcal{L}$). Due to structural minimality, the signature of $f^{\mathcal{L},i}$ may shrink (as the range of the relevant variables is now restricted to $\mathcal{L}$), and corresponding edges in $\mathcal{G}$ will not be present in $\mathcal{G}^{\mathcal{L}}$.<sup>3</sup>

In case of the two-armed robot, conditioning on the arms being far apart simplifies the global DAG to a local DAG with two connected components (Figure 2). This can make counterfactual reasoning considerably more efficient: given a factual situation in which the robot's arms are far apart, we can carry out separate counterfactual reasoning about each arm.

**Leveraging LCMs** To see the efficiency therein, consider a general case with global causal model M. To answer the counterfactual question, "what might the transition at time t have looked like if component $S^i_t$ had value x instead of value y?", we would ordinarily apply Pearl's do-calculus to $\mathcal{M}$ to obtain submodel $\mathcal{M}_{\text{do}(S^i_t=x)}=\langle V,U,\mathcal{F}_x\rangle$, where $\mathcal{F}_x=\mathcal{F}\setminus f^i\cup\{S^i_t=x\}$ and incoming edges to $S_t^i$ are removed from $\mathcal{G}_{do(S_t^i=x)}$ [65]. The component distributions at time t+1 can be computed by reevaluating each function $f^j$ that depends on $S^i_t$. When $S^i_t$ has many children (as is often the case in the global $\mathcal{G}$), this requires one to estimate outcomes for many structural equations $\{f^j|V^j\in {\rm Children}(V^i_t)\}$. But if both the original value of $S_t$ (with $S^i_t=y$) and its new value (with $S^i_t=x$) are in the set ${\mathcal L}$, the intervention is "within the bounds" of local model ${\mathcal M}^{\mathcal L}$ and we can instead work directly with local submodel ${\mathcal M}^{\mathcal L}_{{\rm do}(S^i_t=x)}$ (defined accordingly). The validity of this follows from the definitions: since $f^{\mathcal{L},j}=f^j|_{\mathcal{L}}$ for all of $S^i_t$'s children, the nodes $V^k_t$ for $k\neq i$ at time t are held fixed, and the noise variables at time t+1 are unaffected, the distribution at time t+1 is the same under both models. When $S_t^i$ has fewer children in $\mathcal{M}^{\mathcal{L}}$ than in $\mathcal{M}$, this reduces the number of structural equations that need to be considered.

We hypothesize that local causal models will have several applications, and potentially lead to improved agent designs, algorithms, and interpretability. In this paper we focus on improving off-policy learning in RL by exploiting causal independence in local models for Counterfactual Data Augmentation (CoDA). CoDA augments real data by making counterfactual modifications to a subset of the causal factors at time t, leaving the rest of the factors untouched. Following the logic outlined in Subsection 2.2, this can be understood as manufacturing "fake" data samples using the counterfactual model $\mathcal{M}_{\text{do}(S_t^{i\cdots j}=x)}^{[\mathcal{L}]}$, where we modify the causal factors $S_t^{i\cdots j}$ and resample their children. While this is always possible using a model-based approach if we have good models of the structural equations, it is particularly nice when the causal mechanisms are independent, as we can do counterfactual reasoning directly by reusing subsamples from observed trajectories.

<sup>3</sup>As a trivial example, if $f^i$ is a function of binary variable $V^j$, and $\mathcal{L} = \{(s, a) \,|\, V^j = 0\}$, then $f^{\mathcal{L}, i}$ is not a function of $V^j$ (which is now a constant), and there is no longer an edge from $V^j$ to $V^i$ in $\mathcal{G}^{\mathcal{L}}$.

![Figure 3](_page_3_Figure_1.jpeg)

Figure 3: Four instances of CoDA; orange nodes are relabeled, noise variables omitted for clarity. **First:** Goal relabeling [36], including HER [1], augments transitions with counterfactual goals. **Second:** Visual feature augmentation [2, 46] uses domain knowledge to change visual features $S_t^V$ (such as textures, lighting, and camera positions) that the designer knows do not impact the physical state $S_{t+1}^P$. **Third:** Dyna [82], including MBPO [34], augments real states with new actions and resamples the next state using a learned dynamics model. **Fourth (ours):** Given two transitions that share local causal structures, we propose to swap connected components to form new transitions.

**Definition.** The causal mechanisms represented by subgraphs $\mathcal{G}_i, \mathcal{G}_j \subset \mathcal{G}$ are **independent** when $\mathcal{G}_i$ and $\mathcal{G}_j$ are disconnected in $\mathcal{G}$.

When $\mathcal{G}$ is divisible into two (or more) connected components, we can think of each subgraph as an independent causal mechanism that can be reasoned about separately.

Existing data augmentation techniques can be interpreted as specific instances of CoDA (Figure 3). For example, goal relabeling [36], as used in Hindsight Experience Replay (HER) [1] and Q-learning for Reward Machines [33], exploits the independence of the goal dynamics $G_t \mapsto G_{t+1}$ (identity map) and the next state dynamics $S_t \times A_t \mapsto S_{t+1}$ in order to relabel the goal variable $G_t$ with a counterfactual goal. While the goal relabeling is done model-free, we typically assume knowledge of the goal-based reward mechanism $G_t \times S_t \times A_t \times S_{t+1} \mapsto R_{t+1}$ to relabel the reward, ultimately mixing model-free and model-based features. Similarly, visual feature augmentation, as used in reinforcement learning from pixels [46, 41] and sim-to-real transfer [2], exploits the independence of the physical dynamics $S_t^P \times A_t \mapsto S_{t+1}^P$ and visual feature dynamics $S_t^V \mapsto S_{t+1}^V$ such as textures and camera position, assumed to be static $(S_{t+1}^V = S_t^V)$, to counterfactually augment visual features. Both goal relabeling and visual data augmentation rely on global independence relationships.

We propose a novel form of CoDA that exploits *local* independence relations. In particular, we observe that whenever an environment transition is within the bounds of some local model $\mathcal{M}^{\mathcal{L}}$ whose graph $\mathcal{G}^{\mathcal{L}}$ has the locally independent causal mechanism $\mathcal{G}_i$ as a disconnected subgraph (note: $\mathcal{G}_i$ itself need not be connected), that transition contains an unbiased sample from $\mathcal{G}_i$. Thus, given two transitions in $\mathcal{L}$, we may mix and match the samples of $\mathcal{G}_i$ to generate counterfactual data, so long as the resulting transitions are themselves in $\mathcal{L}$.

**Remark 3.1.** How much data can we generate using our CoDA algorithm? If we have n independent samples from subspace $\mathcal{L}$ whose graph $\mathcal{G}^{\mathcal{L}}$ has m connected components, we have n choices for each of the m components, for a total of $n^m$ CoDA samples—an **exponential** increase in data! One might term this the "blessing of independent subspaces."

**Remark 3.2.** Our discussion has been at the level of a single transition (time slice (t, t + 1)), which is consistent with the form of data that RL agents typically consume. But we could also use CoDA to mix and match locally independent components over several time steps (see, e.g., Figure 1).

**Remark 3.3.** As is typical, counterfactual reasoning changes the data distribution. While off-policy agents are typically robust to distributional shift, future work might explore different ways to control or prioritize the counterfactual data distribution [77, 43]. We note, however, that certain prioritization schemes may introduce selection bias [31], effectively entangling otherwise independent causal mechanisms (e.g., HER's "future" strategy [1] may introduce "hindsight bias" [45, 78]).

**Remark 3.4.** The global independence relations relied upon by goal relabeling and image augmentation are incredibly general, as evidenced by their wide applicability. We posit that certain local independence relations are similarly general. For example, the physical independence of objects separated by space (the billiards balls of Figure 1, the two-armed robot of Figure 2, and the environments used in Section 4), and the independence between an agent's actions and the truth of (but not belief about) certain facts the agent is ignorant of (e.g., an opponent's true beliefs).

```
function CODA(transition t1, transition t2):
    s1, a1, s1' ← t1
    s2, a2, s2' ← t2
    m1, m2 ← MASK(s1, a1), MASK(s2, a2)
    D1 ← COMPONENTS(m1)
    D2 ← COMPONENTS(m2)
    d ← random sample from (D1 ∩ D2)
    s̃, ã, s̃' ← copy(s1, a1, s1')
    s̃[d], ã[d], s̃'[d] ← s2[d], a2[d], s2'[d]
    D̃ ← COMPONENTS(MASK(s̃, ã))
    return (s̃, ã, s̃') if d ∈ D̃ else ∅
```

**MASK**$(s, a)$: Returns an $(n+m) \times n$ matrix indicating whether the n next-state components (columns) locally depend on the n state and m action components (rows).

**COMPONENTS**$(\mathtt{m})$: Using the mask as the adjacency matrix for $\mathcal{G}^{\mathcal{L}}$ (with dummy columns for next action), finds the set of connected components $C = \{C_j\}$, and returns the set of independent components $D = \{\mathcal{G}_i = \bigcup_k \mathcal{C}_k^i \mid \mathcal{C}^i \in \operatorname{powerset}(C)\}$.

**Implementing CoDA** We implement CoDA, as outlined above and visualized in Figure 3(d), as a function of two factual transitions and a mask function $M(s_t, a_t) : \mathcal{S} \times \mathcal{A} \to \{0, 1\}^{(n+m) \times n}$ that represents the adjacency matrix of the sparsest local causal graph $\mathcal{G}^{\mathcal{L}}$ such that $\mathcal{L}$ is a neighborhood of $(s_t, a_t)$. We apply $M$ to each transition to obtain local masks $\mathtt{m}_1$ and $\mathtt{m}_2$, compute their connected components, and swap independent components $\mathcal{G}_i$ and $\mathcal{G}_j$ (mutually disjoint and collectively exhaustive groups of connected components) between the transitions to produce a counterfactual proposal. We then apply $M$ to the counterfactual $(\tilde{s}_t, \tilde{a}_t)$ to validate the proposal—if the counterfactual mask $\tilde{\mathtt{m}}$ shares the same graph partitions as $\mathtt{m}_1$ and $\mathtt{m}_2$, we accept the proposal as a CoDA sample. See Algorithm 1.
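
As a concrete reference, here is a minimal, self-contained Python sketch of this procedure (our simplification, not the authors' code: each state/action component is a single array entry, `mask_fn` stands in for $M$, and COMPONENTS is reduced to single connected components rather than unions of them):

```python
import numpy as np

def components(mask):
    """Connected components of a local mask via union-find.

    `mask` is the (n + m) x n matrix returned by the mask function: rows
    are the n state and m action components, columns the n next-state
    components. Column c is merged with row c (the same entity at t and
    t+1), and mask[r, c] = 1 adds an undirected edge between entities
    r and c. Returns a set of frozensets of entity indices in 0..n+m-1.
    """
    n_rows, n_cols = mask.shape
    parent = list(range(n_rows))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for r in range(n_rows):
        for c in range(n_cols):
            if mask[r, c]:
                parent[find(r)] = find(c)
    groups = {}
    for x in range(n_rows):
        groups.setdefault(find(x), set()).add(x)
    return {frozenset(g) for g in groups.values()}

def coda(t1, t2, mask_fn, rng):
    """One CoDA proposal: swap a shared independent component of t2 into t1."""
    (s1, a1, s1n), (s2, a2, s2n) = t1, t2
    n = len(s1)
    shared = sorted(components(mask_fn(s1, a1)) & components(mask_fn(s2, a2)),
                    key=sorted)
    if not shared:
        return None
    d = shared[rng.integers(len(shared))]
    si = [i for i in d if i < n]         # state indices in the component
    ai = [i - n for i in d if i >= n]    # action indices in the component
    s, a, sn = s1.copy(), a1.copy(), s1n.copy()
    s[si], a[ai], sn[si] = s2[si], a2[ai], s2n[si]
    # Validate the proposal: d must still be an independent component
    # under the counterfactual mask.
    return (s, a, sn) if d in components(mask_fn(s, a)) else None
```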

Note that masks $\mathtt{m}_1$, $\mathtt{m}_2$ and $\tilde{\mathtt{m}}$ correspond to different neighborhoods $\mathcal{L}_1$, $\mathcal{L}_2$ and $\tilde{\mathcal{L}}$, so it is not clear that we are "within the bounds" of any model $\mathcal{M}^{\mathcal{L}}$ as was required in Subsection 2.2 for valid counterfactual reasoning. To correct this discrepancy we use the following proposition and additionally require the causal mechanisms (subgraphs) for independent components $\mathcal{G}_i$ and $\mathcal{G}_j$ to share structural equations in each local neighborhood: $f^{\mathcal{L}_1,i}=f^{\mathcal{L}_2,i}=f^{\tilde{\mathcal{L}},i}$ and $f^{\mathcal{L}_1,j}=f^{\mathcal{L}_2,j}=f^{\tilde{\mathcal{L}},j}$. This makes our reasoning valid in the local subspace $\mathcal{L}^*=\mathcal{L}_1\cup\mathcal{L}_2\cup\tilde{\mathcal{L}}$. See Appendix A for proof.

**Proposition 1.** The causal mechanisms represented by $\mathcal{G}_i, \mathcal{G}_j \subset \mathcal{G}$ are independent in $\mathcal{G}^{\mathcal{L}_1 \cup \mathcal{L}_2}$ if and only if $\mathcal{G}_i$ and $\mathcal{G}_j$ are independent in both $\mathcal{G}^{\mathcal{L}_1}$ and $\mathcal{G}^{\mathcal{L}_2}$, and $f^{\mathcal{L}_1,i} = f^{\mathcal{L}_2,i}, f^{\mathcal{L}_1,j} = f^{\mathcal{L}_2,j}$.

Since CoDA only modifies data within local subspaces, this biases the resulting replay buffer to have more factorized transitions. In our experiments below, we specify the ratio of observed-to-counterfactual data heuristically to control this selection bias, but find that off-policy agents are reasonably robust to large proportions of CoDA-sampled trajectories. We leave a full characterization of the selection bias in CoDA to future studies, noting that knowledge of graph topology was shown to be useful in mitigating selection bias for causal effect estimation [5, 6].

**Inferring local factorization** Learning the local factorization is similar to conditional causal structure discovery [81, 75, 67], conditioned on neighborhood $\mathcal{L}$ of $(s_t, a_t)$, except that the same structural equations must be applied globally (if the structural equations were conditioned on $\mathcal{L}$, Proposition 1 would fail). As there are many algorithms for general structure discovery [81, 75], and the arrow of time simplifies the inquiry [27, 67], there may be many ways to approach this problem. For now, we consider a generalization of the global network mask approach used by MADE [25] (eq. 10) for autoregressive distribution modeling and GraN-DAG [44] (eq. 6) for causal discovery, which additionally conditions the mask on the current state and action.

<sup>4</sup>If the Jacobian $\partial P/\partial x$ exists at $x=(s_t,a_t)$, the ground truth $M(s_t,a_t)$ equals the elementwise indicator $\mathbb{1}\big[|(\partial P/\partial x)^\top| > 0\big]$.

<sup>5</sup>To see why this is not trivially true, imagine there are two rooms, one of which is icy. In either room the ground conditions are locally independent of movement dynamics, but not so if we consider their union.

This approach computes a locally conditioned network mask $M(s_t, a_t)$ by taking the matrix product of locally conditioned layer masks: $M(s_t, a_t) = \Pi_{\ell=1}^L M_\ell(s_t, a_t)$. This mask can be understood as an upper bound on the network's absolute Jacobian (see Appendix C). Again, several models allow one to compute conditional layer masks. We tested two such models: a mixture of MLP experts and a single-head set transformer architecture [85, 47]. Each is described in more detail in Appendix C. Both are *trained* to model forward dynamics using an L2 prediction loss and induce a sparse network mask either via a sparsity penalty (in the case of the mixture of experts model) or via a sparse attention mechanism (in the case of the set transformer). In preliminary experiments (Appendix C) we found that the set transformer performed better, and proceed to use it in our main experiments (Section 4). The set transformer uses the attention mask at each layer as the layer mask for that layer, so that the network mask is simply the product of the attention masks. Though trained to model forward dynamics, the CoDA models are used by the agent to *infer* local factorization rather than to directly sample future states as is typical in model-based RL. We found this produced reasonable results in the tested domains (below). See Appendix C for details. Future work should consider other approaches to inferring local structure such as graph neural networks [39, 13].
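
To make the mask composition concrete, here is a tiny sketch (ours; the shapes, toy masks, and thresholding rule are illustrative assumptions, not the paper's code) of the product-of-layer-masks computation:

```python
import torch

def network_mask(layer_masks):
    """Compose locally conditioned layer masks into the network mask.

    `layer_masks` is a list [M_1, ..., M_L] where M_l has shape
    (d_l, d_{l+1}); for the set transformer these would be the per-layer
    attention masks. The product upper-bounds the network's absolute
    Jacobian sparsity pattern: a zero entry means there is no path from
    input i to output j, hence no local dependence.
    """
    M = layer_masks[0]
    for M_l in layer_masks[1:]:
        M = M @ M_l
    return (M > 0).to(torch.int8)

# toy usage: two layers mapping 4 inputs -> 2 hidden -> 3 outputs
m1 = torch.tensor([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
m2 = torch.tensor([[1., 1., 0.], [0., 0., 1.]])
print(network_mask([m1, m2]))  # inputs 0-1 reach outputs 0-1; inputs 2-3 reach output 2
```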
2009.08257/paper.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0aa29be4a847f33e42eafee53c80774c4e3a3084c55cab7fc930b2998f443f1d
+size 212447
2009.10259/main_diagram/main_diagram.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb99b4674e227fbb8526330fc7da7026ad117b4c58ce5f995d453ad5626a5706
+size 258999
2009.10259/main_diagram/main_diagram.png
ADDED
2101.09178/paper.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d1a01e22529e74691d5d70d7523f0b48425686b57d91a56b81e67f4eff4d139
+size 4912989
2104.00678/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-03-18T06:13:31.030Z" agent="5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" version="14.4.8" etag="QqsSGq_pZ9zcI2J9s5z-" type="google"><diagram id="I7LuUSAfMHuN_mEil1Eu">5VpdU9s4FP01mW0fmrEkW7YfSYDtDttOZ+jMbh+VWDhqFStrKxD661eK5a9IgBOSQCA8YF3ry/ccHV1de4DG89WfOVnMvoiE8gH0ktUAnQ8gBDhE6p+23JeWKMalIc1ZYio1hmv2mxqjZ6xLltCiU1EKwSVbdI1TkWV0Kjs2kufirlvtRvDuqAuSmhG9xnA9JZxa1f5hiZwZK8Bxc+MzZenMDB3BsLwxJ1Vl03ExI4m4a5nQxQCNcyFkeTVfjSnXzqv8Ura7fOBuPbGcZrJPA1g2uCV8aZ7NzEveVw+rprjQl1PBOZHqanQ3Y5JeL8hUm+8UxMo2k3OuSkBd5kISyUSmirGnyjcikwZDgE35kswZ1+h/JzMxJ8Y6Flzk61FR+dN2xnnLfoH1n7KbidNc0tWDDw9qlyouUjGnMr9XVaoGMEDDoGxlmAghiofQ2O4abCOD16yFKvCNkRg6pfUIjcfVhXG6GwBkAfB1ADGZa59mk2KxflLM1ZCjYkGyDjT4v6XmyWhCpr/SXCyz5NO09NSZqpenkw/QjwaaAWoaHvTj5jrwPjbt1VW6/j9GgzishlMzL0c0NzdmhSyirGdA9XN5W5Hk0xYs2QfqXowt1GPfhTpADtjRHlD3LdQ1hhORUcur6kFl13WEs1T7baoemCqwR9odTKnTmbkxZ0mim49yWrDfZLLuSvt4IVgm19MORoPgXPe1lKIwXj8qCI6lh/zeSw/uAYPgaemjWXKmtwvtbE6Kgk27SNAVk/9q36pHKUs/WnfOV8bt68J9VcjUPFuNdLFupQtNs3WparcNNDSp9i8DTCGW+ZR2FEeSPKWVO3vD18IlcOBS2XKqtgp2252ECywzwjdNzBY7EI422eFDMAyDbk/lY5nG7V1usz8fWP2h0O6v9InV35pJtRN6kQtb5PpyGrL+QdFjNP54suLuYM6xxT08lLB4jwrL3gQC2gKBT0EgvGcJRNCnv/0JRNSDJAqka1MUuZyJVGSEXzTWUXdRNnX+FmJhmPSTSnlvaKE3+w2euXiojJeMV1U6NPpM+S3VsUaXbyqe6cWs2LH19F/ivTnTd6HGFgYF5TeVUE7ySiOJlGpuWs/UCU4kS25HaQ0S4Gl5fL0HojCMLfUMlHpiSz0xthc3AHtQz6qPFipXqigmP/VJXp3pSZawRB1ECwuENxoq48Czgxf3luaS3H3EygA8f0/b2/6EbRUB0WvboPzYjjg9ONx1fwp8R3foUJsT6JEW2g/cVaBTBT0/OkHPdieo5szkPkHtRq0tFuyxqBXaIS7a4EJvYmFXCqwfsRT85L5VzUjmIxNH9ljmOPrg/KIYPtFGXZQz2Zntdg7uUGyvmdvhbUPj3ZkLfJu6dTb6tTA39IGtYuHOohiGju6ig4minbS7evEzvVef1x843Z/oMR4H1nns2Md4YCcIr/5Q5YlYOWLPU3FsHEOHW+34PvQP5FVnZsxr9o73ENGj2E5+uyP6QyW/gZ2k+itbLOW7gwIAK75AIa4TP22h8Q6FhZ0LYhmTjPD2uddOS9QnYQ2U2oY4y1K71gc6TIeqxtW368eyuiedtvBjKxCA1TcGbQRDB4LhPhC0M0mWq1/uvDRwR4+d87Pf2+dHChUxtpMenn4/6MfVL+x22j9shNb7X99XUejBIkdop7TK+A2/F5WNsNdru/MPJLHQzl9Nc1EU7znXGwUvnuutZPLtvwXpiG3QG6vnqmgcOU7I8bBfsmgXpeuRStlq+SSkmNV126sh09/KqIO2zMWv+iM8sK/XIBXj64XhB0MUWwuj/pCnszJgaL033Glx2PmGy8uvb1WNYn9jh1hrkWOHAIfSoj7fA70JLTqG7sCNBaRVB4RB89sxdrNo8kTHOwdxqth8hVtWb75lRhf/Aw==</diagram></mxfile>
2104.00678/main_diagram/main_diagram.pdf
ADDED
Binary file (24.4 kB).
2104.00678/paper_text/intro_method.md
ADDED
@@ -0,0 +1,75 @@
# Method

We follow recent practice [@qi2019votenet; @xie2020mlcvnet] to use the PointNet++ as our default backbone network for a fair comparison. The backbone network has four set abstraction layers and two feature propagation layers. For each set abstraction layer, the input point cloud is sub-sampled to 2048, 1024, 512, and 256 points with the increasing receptive radius of 0.2, 0.4, 0.8, and 1.2, respectively. Then, two feature propagation layers successively up-sample the points to 512 and 1024, respectively.

In the training phase, we use 50k[^4] points as input and adopt the same data augmentation as in [@qi2019votenet], including a random flip, a random rotation between \[$-5^{\circ}$, $5^{\circ}$\], and a random scaling of the point cloud by \[0.9, 1.1\]. The network is trained from scratch by the AdamW optimizer ($\beta_1$=0.9, $\beta_2$=0.999) for 400 epochs. The weight decay is set to 5e-4. The initial learning rate is 0.006 and decayed by 10$\times$ at the 280-th epoch and the 340-th epoch. The learning rate of the attention modules is set to 1/10 of that in the backbone network. The *gradnorm_clip* is applied to stabilize the training dynamics. Following [@qi2019votenet], we use a class-aware head for box size prediction.
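
A minimal PyTorch sketch of this optimization setup (ours, for illustration; `model`, the `"attention"` name filter, and the clipping norm are placeholder assumptions, not the authors' code):

```python
import torch
from torch.nn.utils import clip_grad_norm_

def build_optimizer(model, base_lr=0.006, weight_decay=5e-4):
    """AdamW with the attention modules at 1/10 of the backbone lr."""
    attn, backbone = [], []
    for name, p in model.named_parameters():
        (attn if "attention" in name else backbone).append(p)
    return torch.optim.AdamW(
        [{"params": backbone, "lr": base_lr},
         {"params": attn, "lr": base_lr / 10}],
        betas=(0.9, 0.999), weight_decay=weight_decay)

def lr_at(epoch, base_lr=0.006, milestones=(280, 340)):
    """Step schedule: decay by 10x at each milestone epoch (of 400)."""
    return base_lr * (0.1 ** sum(epoch >= m for m in milestones))

# inside the training loop, clip gradients before optimizer.step():
#   clip_grad_norm_(model.parameters(), max_norm)   # the "gradnorm_clip"
```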

The implementation settings mostly follow [@qi2019votenet]. We use 20k points as input for each point cloud. The network architecture and the data augmentation are the same as those for ScanNet V2. As the orientation of the 3D box is required in evaluation, we include an additional orientation prediction branch for all decoder layers. The orientation branch contains a classification task and an offset regression task with loss weights of 0.1 and 0.04, respectively.

In training, the network is trained from scratch by the AdamW optimizer ($\beta_1$=0.9, $\beta_2$=0.999) for 600 epochs if not specified. The initial learning rate is 0.004 and decayed by 10$\times$ at the 420-th, 480-th, and 540-th epochs. The learning rate of the attention modules is set to 1/20 of the backbone network. The weight decay is set to 1e-7, and the *gradnorm_clip* is used. We use a class-agnostic head for size prediction.

For a fair comparison, we only switch the feature aggregation mechanism while all other settings remain unchanged. In the following, we introduce the implementation details of the RoI-pooling and voting aggregation mechanisms.

For a given object candidate, the points within the predicted box of the object candidate are aggregated together, and the refined box is predicted from the aggregated features. As in our group-free approach, multi-stage refinement is also adopted; thus the aggregated points and features are updated and refined in multiple stages. We tried two different strategies for feature aggregation: average-pooling and max-pooling. The results are shown in Table [1](#tab::ablation_roi_pooling){reference-type="ref" reference="tab::ablation_roi_pooling"}. We find that the max-pooling variant performs better, so we use it for comparison by default.
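
A toy sketch of this in-box aggregation (our simplification: a single candidate with an axis-aligned box, whereas the real pipeline uses predicted, oriented boxes):

```python
import torch

def roi_pool(points, feats, box_min, box_max, mode="max"):
    """Aggregate the features of points inside an axis-aligned box.

    points: (N, 3) coordinates; feats: (N, C) per-point features;
    box_min/box_max: (3,) box corners. Returns one (C,) feature.
    """
    inside = ((points >= box_min) & (points <= box_max)).all(dim=1)
    f = feats[inside]
    if f.shape[0] == 0:                      # empty box: fall back to zeros
        return feats.new_zeros(feats.shape[1])
    return f.max(dim=0).values if mode == "max" else f.mean(dim=0)
```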

:::: center
::: {#tab::ablation_roi_pooling}
| method  | mAP@0.25 | mAP@0.5 |
|---------|----------|---------|
| average | 64.2     | 44.2    |
| max     | 65.1     | 44.4    |

: Comparison between average-pooling and max-pooling on ScanNet V2.
:::
::::

The voting mechanism was first introduced by VoteNet [@qi2019votenet], and we implement it in our framework. Specifically, each point predicts the center of its corresponding object, and if the distance between a point's predicted center and the center of an object candidate is less than a threshold (set to 0.3 meters), the point and the candidate are grouped. A two-layer MLP with max-pooling is then used to form the aggregated feature of the object candidate, and the refined boxes are predicted from the aggregated features in the multi-stage refinement process.
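
An illustrative sketch of this grouping rule (ours, not the official VoteNet code; it assumes every candidate attracts at least one point):

```python
import torch

def vote_group(pred_centers, point_feats, cand_centers, threshold=0.3):
    """Group points with candidates by vote distance.

    pred_centers: (N, 3) per-point predicted object centers;
    point_feats: (N, C) per-point features; cand_centers: (K, 3).
    Returns K groups of point features, one per candidate.
    """
    dist = torch.cdist(cand_centers, pred_centers)          # (K, N)
    return [point_feats[dist[k] < threshold] for k in range(len(cand_centers))]

def aggregate(groups, mlp):
    """Two-layer MLP followed by max-pooling over each group's points."""
    return torch.stack([mlp(g).max(dim=0).values for g in groups])
```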

:::: table*
::: center
| methods | backbone | cab | bed | chair | sofa | tabl | door | wind | bkshf | pic | cntr | desk | curt | fridg | showr | toil | sink | bath | ofurn | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| VoteNet [@qi2019votenet] | PointNet++ | 47.7 | 88.7 | 89.5 | 89.3 | 62.1 | 54.1 | 40.8 | 54.3 | 12.0 | 63.9 | 69.4 | 52.0 | 52.5 | 73.3 | 95.9 | 52.0 | 92.5 | 42.4 | 62.9 |
| MLCVNet [@xie2020mlcvnet] | PointNet++ | 42.5 | 88.5 | 90.0 | 87.4 | 63.5 | 56.9 | 47.0 | 56.9 | 11.9 | 63.9 | 76.1 | 56.7 | **60.9** | 65.9 | 98.3 | 59.2 | 87.2 | 47.9 | 64.5 |
| H3DNet [@zhang2020h3dnet] | 4$\times$PointNet++ | 49.4 | 88.6 | 91.8 | **90.2** | 64.9 | **61.0** | 51.9 | 54.9 | **18.6** | 62.0 | 75.9 | 57.3 | 57.2 | 75.3 | 97.9 | 67.4 | 92.5 | 53.6 | 67.2 |
| Ours (L6, O256) | PointNet++ | 54.1 | 86.2 | 92.0 | 84.8 | 67.8 | 55.8 | 46.9 | 48.5 | 15.0 | 59.4 | 80.4 | 64.2 | 57.2 | **76.3** | 97.6 | **76.8** | 92.5 | 55.0 | 67.3 |
| Ours (L12, O256) | PointNet++ | 55.4 | 86.6 | 91.8 | 86.6 | **73.0** | 54.5 | 49.4 | 47.7 | 13.1 | 63.3 | **82.4** | 63.3 | 53.2 | 74.0 | 99.2 | 67.7 | 91.7 | 55.8 | 67.2 |
| Ours (L12, O256) | PointNet++w2$\times$ | **56.5** | 88.2 | 92.5 | 88.2 | 71.6 | 57.5 | 48.3 | 53.7 | 17.5 | **71.0** | 79.5 | 63.4 | 58.1 | 71.7 | 99.4 | 71.1 | 93.0 | **57.8** | 68.8 |
| Ours (L12, O512) | PointNet++w2$\times$ | 52.1 | **91.9** | **93.6** | 88.0 | 70.7 | 60.7 | **53.7** | **62.4** | 16.1 | 58.5 | 80.9 | **67.9** | 47.0 | **76.3** | **99.6** | 72.0 | **95.3** | 56.4 | **69.1** |
:::
::::

:::: table*
::: center
| methods | backbone | cab | bed | chair | sofa | tabl | door | wind | bkshf | pic | cntr | desk | curt | fridg | showr | toil | sink | bath | ofurn | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| VoteNet [@qi2019votenet] | PointNet++ | 14.6 | 77.8 | 73.1 | **80.5** | 46.5 | 25.1 | 16.0 | 41.8 | 2.5 | 22.3 | 33.3 | 25.0 | 31.0 | 17.6 | 87.8 | 23.0 | 81.6 | 18.7 | 39.9 |
| H3DNet [@zhang2020h3dnet] | 4$\times$PointNet++ | 20.5 | 79.7 | 80.1 | 79.6 | 56.2 | 29.0 | 21.3 | 45.5 | 4.2 | 33.5 | 50.6 | 37.3 | 41.4 | 37.0 | 89.1 | 35.1 | **90.2** | 35.4 | 48.1 |
| Ours (L6, O256) | PointNet++ | 23.0 | 78.4 | 78.9 | 68.7 | 55.1 | 35.3 | 23.6 | 39.4 | 7.5 | 27.2 | 66.4 | 43.3 | 43.0 | 41.2 | 89.7 | 38.0 | 83.4 | 37.3 | 48.9 |
| Ours (L12, O256) | PointNet++ | 23.8 | 77.2 | 81.6 | 65.1 | **62.8** | 35.0 | 21.3 | 39.4 | 7.0 | 33.1 | 66.3 | 39.3 | 43.9 | **47.0** | 91.2 | 38.5 | 85.2 | 37.4 | 49.7 |
| Ours (L12, O256) | PointNet++w2$\times$ | **26.2** | 80.7 | **83.5** | 70.7 | 57.0 | 37.4 | 21.2 | 47.7 | **8.8** | **45.3** | 60.7 | 42.2 | 43.5 | 42.7 | **95.5** | **42.3** | 89.7 | **43.4** | 52.1 |
| Ours (L12, O512) | PointNet++w2$\times$ | 26.0 | **81.3** | 82.9 | 70.7 | 62.2 | **41.7** | **26.5** | **55.8** | 7.8 | 34.7 | **67.2** | **43.9** | **44.3** | 44.1 | 92.8 | 37.4 | 89.7 | 40.6 | **52.8** |
:::
::::

:::: table*
::: center
| methods | backbone | bathtub | bed | bkshf | chair | desk | drser | nigtstd | sofa | table | toilet | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| VoteNet [@qi2019votenet] | PointNet++ | 75.5 | 85.6 | 31.9 | 77.4 | 24.8 | 27.9 | 58.6 | 67.4 | 51.1 | 90.5 | 59.1 |
| MLCVNet [@xie2020mlcvnet] | PointNet++ | 79.2 | 85.8 | 31.9 | 75.8 | 26.5 | 31.3 | 61.5 | 66.3 | 50.4 | 89.1 | 59.8 |
| HGNet [@chen2020hierarchical] | PointNet++ w/ FPN | 78.0 | 84.5 | **35.7** | 75.2 | **34.3** | 37.6 | 61.7 | 65.7 | 51.6 | **91.1** | 61.6 |
| H3DNet [@zhang2020h3dnet] | 4$\times$PointNet++ | 73.8 | 85.6 | 31.0 | 76.7 | 29.6 | 33.4 | 65.5 | 66.5 | 50.8 | 88.2 | 60.1 |
| Ours (L6, O256) | PointNet++ | **80.0** | **87.8** | 32.5 | **79.4** | 32.6 | **36.0** | **66.7** | **70.0** | **53.8** | **91.1** | **63.0** |
:::
::::

:::: table*
::: center
| methods | backbone | bathtub | bed | bkshf | chair | desk | drser | nigtstd | sofa | table | toilet | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| VoteNet [@qi2019votenet] | PointNet++ | 45.4 | 53.4 | 6.8 | 56.5 | 5.9 | 12.0 | 38.6 | 49.1 | 21.3 | 68.5 | 35.8 |
| H3DNet [@zhang2020h3dnet] | 4$\times$PointNet++ | 47.6 | 52.9 | 8.6 | 60.1 | 8.4 | 20.6 | 45.6 | 50.4 | 27.1 | 69.1 | 39.0 |
| Ours (L6, O256) | PointNet++ | **64.0** | **67.1** | **12.4** | **62.6** | **14.5** | **21.9** | **49.8** | **58.2** | **29.2** | **72.2** | **45.2** |
:::
::::
2105.01203/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:9b9f4c85ee3fda839417d364ac2b4447320dacc5ffbc1866bec3e3380e728ee1
|
| 3 |
+
size 3028214
|
2105.03491/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:629f33f39f2e456b0fc2cabb64bcae3ff9f4288444664258dc53ba1dc171d1c1
|
| 3 |
+
size 5322433
|
2105.12774/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ae166bdddea44b49e26bc187cd1b5183844affaf22af7c1a0e8a9dd7a4442ffa
|
| 3 |
+
size 17041561
|
2106.05303/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-06-03T14:20:40.573Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36" etag="lY6WBpr9X1K-EgdzBQwB" version="14.7.0" type="google"><diagram id="dqVQpVsxduED7lZiNpmL" name="Page-1">7V1rc6M4Fv01rtr94BQP8/qYOI/Z6vTU1Caznd4vU7LBNt3Y8gLuxPPrV4CEeQgbsIVkQmaq2pJByDrnXt17kGCkTtcfTz7Yrr5C2/FGimR/jNT7kaLIE0UZRf9L9j6pMXRcsfRdGx90qHhx/3ZwJandubYT5A4MIfRCd5uvnMPNxpmHuTrg+/A9f9gCevmrbsHSKVW8zIFXrv3m2uEK1yqmdPjiN8ddrvClNQV/sQbkYFwRrIAN3zNV6sNInfoQhsmn9cfU8aLBI+OSnPdY8W3aMd/ZhHVOCB7sv36Hxn/dbwvDDKGk/Ovpr7GsWkk7v4C3wz8ZdzfckzFwNvZtNJSoNPdAELjzkXq3CtceqpDRxyD04c90eDRUU+4b7q5j54Yb9/TJgWsn9PfogPfDIGt44FbZ4cV1vuOB0P2VBwlgrJdpc+kV/oAu6okiYV6qanLGPl8kDQRw588dfE52SAvNaKQ7Fe2EwF86Yakd9CHzmw9VMWAV4H1/fVr/qXq/K/aXlfQW/ra9/XOsU6DTPTRSdwsYd/OAof6/HSRfjIPYym7RAaq0/YjhIt+jT8vo31d37ZDGUN+S9pKvyvRAmD6DGTL8HCuA5y43EWUQBxwfVfxy/NBFlnWLv1i7th21cec7qEdgFrcnofI2GrB4CLW7kXaPu419gypR+XWC3ejSzkeOLNhh4MvmbDJHSHyWdKMphplDW74IFSfSjarl2h3LUr4RuFgEztkMog7QRGJh/th+HB+xCGH/sgVzd7PE6AbIKgiYei1sxfYdiBntvEehIUWr5z0QFmCfOQxbS93+yqpSYFLS4EU9k3EZz1RyS48OCHd+5JnAeovKm1mwTb4W1VP5MERkg1EritHEdaWWebbrGks3yMvoFyH/mMRlhE36zaTQCENvRRwwhVezs6a7AjttEIJxcq4/zzW4CsMo8LyNfltCtuBmCeHSc8DWDW7mcI2q5wE65HEB1q4XDdMzCGH2kmf0862S6Wn1rJL9iEYhzXNPoQcR6+83cBNxfOF6XqGqvoG8r9zQifx9dM13lBLEJrDb2I5N3D8Jg6Umvr9kBZVsV3Q971Yl3ERmdphQZgdFqraFHJGbshZd7pOz9uvA2qasnagmb9Zq1OnzBRehH67gEm6A93CoLQza4ZhnCLcYvR9OGO7x6IEd4tiJWBKNqb9/QwU0jUnyhNR8j65xI1ukfP+BL5qU9tlSGogS/DJcGSnqQov+O4ZjEokdHS2jZox65vyrq4UcpNhERdBYTl0l/XhDFVFs0+hTlzTqdS4Wfh6F41hak+cqzQVkeZm1/5MOCJFKiv8u4xwMOT+GmlbyDSSqzzkHVVNYeQfii7r1DqkvQK5A0vKuwDz4hia+oOxwGnuHxFKOjRYxWh5+5DycVVrEXTSkFVzPdkErI6oz8NF0m6lP/i5jWbpk5SxLla3ytDspWxapYzDgFhfD+nDDt8znxKQ0XDrYU1TImdNbtpA564gZXsDiTtsR4S3r+bg4jarFbLhiGr3YTEeuJ0yYliHA9xz+dDZcMfb6JO880jm4aSymqwVRhVUsVnGdpv1iG7tNFMqUkySKwRZsqJnoPGFQlIX6y9k/IiCmUUaq4n/1fyY5bJyxkhQ3Ohr4LvCSgwKwCcYBCg8WybFR4jnGOWR0qOcscCY8A/Ofy9iGxoULx6FZ1Fj2Q/bazbP6M+SB6cPoVjqdeyej+qnTb70ga6qWXo4DOk2/NVMMv17ln0nEIN1YlpWLGmTTOB43oEIxCj8jtu4q87aIjHdu5m0Wbhqz8vZV12HrvXUu8esFYkr2/CncNyxGirX5MzFahZyN+VNxnap+Vf1Atnwzy/kop/iXcpMcVT663kG9IOubirfBDREJK8vFOynF2e5CN6hlWTrqqU6fUFj6lKda+XRJa6d9tiDtyvoC919CSzHc2f0f/vTv27k8Fi5la5bAMxDITLUbTusnsvVLMVrRtWMEPX0C0RWqTeCcw4+65jqOwGJlL3R6WKI4+UzQq2SC3ujWlDppHPXmJwQGZkVIdVoz6coAZVlRjnOJ1aSiTk6YYKNJ5VJTQbXW8TmWBnwflgac0iYsoxAdmTgu70CboHJWHTiL5cjdeT84uy4mbqvE/cEkapnEROJsEtUrvEQn8zNETddcPvs5yIXC3fwam3RhNi92GYJEv00lDgb39rqKUzU1f2+vrshWI3mSDSYBb7HHE62hiNL4hE4C5BrrT5BD2EYf8YL4LIlp3iQrvEUsnSMXAtxNnJXJcdnzwDZwD6v85yvXs5/BHu5CchlSulv6wHad3AIVwwG6U2udGK66gNMqkIyyQmxCc1mlW9YX81k1thUS3NAPD13g/Tva0rBZ1oGwDJHtw+0r8R7pzgzHf/jlJBs0iv4ID30Ye8Poy+T2bvxxBsMQTbtJwcejVd7ucYcGdBrJZdp9dK9Xu5MP5XgvyBa53SncIBoAN0bWAUH47gQhFfOj9D9NhFNQX2LJEl3NrLGF7Byos3td4rkMDcbCi6ehFYoanE1jGAvzkg0cczGvsteaQFVtNDwBVJc4ydeGU711tro+nbLA6SOPBzfYaBLR1cOmqpb1+MgQNpOyf6JT2GgqyQDbyeSLBCTccKOl8gNuJ3EzKAumOsVNG3BrgVsqWnDDjfYQC6Fx4xs9pvsFeYclNXZYiYWbEPamSrzjkgaCyoDbATeylY4bbowFlZ5mb9SF3F3ipjBWRwYhrIj8Po8oN11M6aXe0tTTpvwXVxgjuFwPUCxC0AZACaKMKYPE0go33tIY9Xk4V49b01CmOW7ctTHl6jQWMRwld3FMuTqRRQyL466O1XmOzfUBx36K4y6PKVcns4jhKrnrY0ovdRb2FsddIFOvbhmKIMDxVshUxjrJoJBVKGTUPLBLhYzkM/0y2aZhqVp/sSc3Ex2UlmZACaKQqYPS0go33gqZOggtrXDjrpCpvRRa2DtK7gqZOggtrYDjrpDVeUCtWMCJ4Sq5K2R1XjwmFnBixCbcFbI6r4wSCzgxLI67QjYZFqS0A463QmY0iCpF2QTrAH2m63UwZLYJVjG474I1GIeVn0XbTC1A4H2w1JeTXb17laTptOI9JscZL66aafQ08mQGlCBqpnl1gWcd3CLU2OLGW800exl3sseNu5pJnq08AHdlaqbZy/t17IHjrmaaPb1hxzo04a5mmld3x04Mi+OuZpq9vGPXAXC81Uyzl3fsOgCOt5ppMtZJBk2sBL0oW2LNXiovTW3WbPlQvw6BsnoptTSNQxs
AJYhEZg1SSyvceEtk5AUn/cKNvb1xl8isXiotHQDHWyKzBqWlFXDcJTJrUFraAcdbIrN6qbSwtzjuEpk1KC3tgOMtkVm9XJHSAXC8JTJreHsCJ4mM+57Y9CUr/TLaptNkagHiamSyNIgtzZASRCSTpUFtaQUcb5VMlga1pRVw3GUyWeql3MLeV3LXyWRp0FtaIcddKJOlQXBphxxvpUyWeqm4sEeOu1QmS4Pk0g453lqZLA1rU9oh16FYRn8n8KT6hfQivX07e8Y8RehQqS7iv/Iruu88MP+JTrqDH8OburPUM3R8Ctnw2+Fr4OlU1BuId6Ls1ObxumKLPAOeIGeVkUuT1ix0RIJgAN3wGMqmmvsJG2gk3VLRZjdn6AJIgFWPJblAFFAbK6GePVnRRwFEv+uCqnudvaLLvRT9jjxw5lLIdSi0V3SZseg3TIZNNml0PTcKoBuK4HC12tDxs9OeCoWVD9E7Gyph5kYBhMLrMjJR5sY6D1egYZKBDARbhA4qLNyPSC05N+5o/rxdufCsNNmiPSuNzbD++GHev8rT1Th8/c/Lt40JXoE5bpB9D8LJATejQ92EihvjPLqHgeJR+ouhmVC7KEAa3mmocZTuQsSE1B4KkHN3GlecCVP38SC1w71cZHMkYrkMah3GgtQOM06Uh8lPAI2E2mMB0m7+TrblM3k7hEmAHJt/yFIfJkHmQgEW4lyTcTGfC1HRh9GKlPS7JzSWq6/QdqIj/g8=</diagram></mxfile>
|
2106.05303/main_diagram/main_diagram.pdf
ADDED
|
Binary file (21 kB). View file
|
|
|
2106.05303/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,217 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
What do we need to trust a machine learning model? If accuracy is necessary, it might not always be sufficient. With the application of machine learning to critical areas such as medicine, finance and the criminal justice system, the black-box nature of modern machine learning models has appeared as a major hindrance to their large scale deployment (Caruana et al., 2015; Lipton, 2016; Ching et al., 2018). With the necessity to address this problem, the field
|
| 4 |
+
|
| 5 |
+
*Proceedings of the* 38 th *International Conference on Machine Learning*, PMLR 139, 2021. Copyright 2021 by the author(s).
|
| 6 |
+
|
| 7 |
+
of explainable artificial intelligence (XAI) thrived (Barredo Arrieta et al., 2020; Das & Rad, 2020; Tjoa & Guan, 2020).
|
| 8 |
+
|
| 9 |
+
Saliency Methods Among the many possibilities to increase the transparency of a machine learning model, we focus here on *saliency methods*. The purpose of these methods is to highlight the features in an input that are relevant for a model to issue a prediction. We can distinguish them according to the way they interact with a model to produce importance scores.
|
| 10 |
+
|
| 11 |
+
*Gradient-based:* These methods use the gradient of the model's prediction with respect to the features to produce importance scores. The premise is the following: if a feature is salient, we expect it to have a big impact on the model's prediction when varied locally. This translates into a big gradient of the model's output with respect to this feature. Popular gradient methods include Integrated Gradient (Sundararajan et al., 2017), DeepLIFT (Shrikumar et al., 2017) and GradSHAP (Lundberg et al., 2018).
|
| 12 |
+
|
| 13 |
+
*Perturbation-based:* These methods use the effect of a perturbation in the input on the model's prediction to produce importance scores. The premise is similar to the one used in gradient based method. The key difference lies in the way in which the features are varied. If gradient based methods use local variation of the features (according to the gradient), perturbation-based method use the data itself to produce a variation. A first example is Feature Occlusion (Suresh et al., 2017) that replaces a group of features with a baseline. Another example is Feature Permutation that performs individual permutation of features within a batch.
|
| 14 |
+
|
| 15 |
+
*Attention-based:* For some models, the architecture allows to perform simple explainability tasks, such as determining feature saliency. A popular example, building on the success attention mechanisms (Vaswani et al., 2017), is the usage of attention layers to produce importance scores (Choi et al., 2016; Song et al., 2017; Xu et al., 2018; Kwon et al., 2018).
|
| 16 |
+
|
| 17 |
+
*Other:* There are some methods that don't clearly fall in one of the above categories. A first example is SHAP (Lundberg & Lee, 2017), which attributes importance scores based on Shapley values. Another popular example is LIME in which the importance scores correspond to weights in a local linear model (Ribeiro et al., 2016a). Finally, some methods such as INVASE (Yoon et al., 2019a) or ASAC (Yoon et al., 2019b)
|
| 18 |
+
|
| 19 |
+
<sup>1</sup>DAMTP, University of Cambridge, UK <sup>2</sup>University of California Los Angeles, USA <sup>3</sup>The Alan Turing Institute, UK. Correspondence to: Jonathan Crabbe´ <jc2133@cam.ac.uk>, Mihaela van der Schaar <mv472@cam.ac.uk>.
|
| 20 |
+
|
| 21 |
+
train a selector network to highlight important features.
|
| 22 |
+
|
| 23 |
+
Time Series Saliency Saliency methods were originally introduced in the context of image classification (Simonyan et al., 2013). Since then, most methods have focused on images and tabular data. Very little attention has been given to time series (Barredo Arrieta et al., 2020). A possible explanation for this is the increasing interest in *model agnostic methods* (Ribeiro et al., 2016b).
|
| 24 |
+
|
| 25 |
+
Model agnostic methods are designed to be used with a very wide range of models and data structures. In particular, nothing prevents us from using these methods for *Recurrent Neural Networks* (RNN) trained to handle multivariate time series. For instance, one could compute the Shapley values induced by this RNN for each input xt,i describing a feature i at time t. In this configuration, all of these inputs are considered as individual features and the time ordering is forgotten. This approach creates a conceptual problem illustrated in Figure 1.
|
| 26 |
+
|
| 27 |
+

|
| 28 |
+
|
| 29 |
+
Figure 1. Context matters. This graph shows a fictional security price over time. A simplistic model recommends to sell just after a local maximum, to buy just after a local minimum and to wait otherwise. When recommending to sell or to buy, 3 consecutive time steps are required as a context.
|
| 30 |
+
|
| 31 |
+
In this example, the time dependency is crucial for the model. At each time step, the model provides a decision based on the two previous time steps. This oversimplified example illustrates that the time dependency induces a context, which might be crucial to understand the prediction of some models. In fact, recurrent models are inherently endowed with this notion of context as they rely on memory cells. This might explain the poor performances of model agnostic methods to identify salient features for RNNs (Ismail et al., 2020). These methods are *static* as the time dependency, and hence the context, is forgotten when treating all the time steps as separate features.
|
| 32 |
+
|
| 33 |
+
If we are ready to relax the model agnosticism, we can take a look at attention based methods. In methods such as RE-TAIN (Choi et al., 2016), attention weights are interpreted as importance scores. In particular, a high attention weight for a given input indicates that this input is salient for the model to issue a prediction. In this case, the architecture of the model allows us to provide importance scores without altering the dynamic nature of the data. However, it has
|
| 34 |
+
|
| 35 |
+
been shown recently that the attention weights of a model can significantly be changed without inducing any effect on the predictions (Jain & Wallace, 2019). This renders the parallel between attention weights and feature importance rather unclear. This discrepancy also appears in our experiments.
|
| 36 |
+
|
| 37 |
+
Another challenge induced by time series is the large number of inputs. The number of features is multiplied by the number of time steps that the model uses to issue a prediction. A saliency map can therefore become quickly overwhelming in this setting. To address these challenges, it is necessary to incorporate some treatment of *parsimony* and *legibility* in a time series saliency method. By *parsimony*, we mean that a saliency method should not select more inputs than necessary to explain a given prediction. For instance, a method that identifies 90% of the inputs as salient is not making the interpretation task easier. By *legibility*, we mean that the analysis of the importance scores by the user should be as simple as possible. Clearly, for long time sequences, analysing feature maps such as (c) and (d) on Figure 3 can quickly become daunting.
|
| 38 |
+
|
| 39 |
+
To address all of these challenges, we introduce Dynamask. It is a *perturbation-based* saliency method building on the concept of *masks* to produce post-hoc explanations for any time series model. Masks have been introduced in the context of image classification (Fong & Vedaldi, 2017; Fong et al., 2019). In this framework, a mask is fitted to each image. The mask highlights the regions of the image that are salient for the black-box classifier to issue its prediction. These masks are obtained by perturbing the pixels of the original image according to the surrounding pixels and study the impact of such perturbations on the black-box prediction. It has been suggested to extend the usage of masks beyond image classification (Phillips et al., 2018; Ho et al., 2019). However, to our knowledge, no available implementation and quantitative comparison with benchmarks exists in the context of multivariate time series. Moreover, few works in the literature explicitly mention explainability in a multivariate time series setting (Siddiqui et al., 2019; Tonekaboni et al., 2020). Both of these considerations motivate our proposition.
|
| 40 |
+
|
| 41 |
+
Contributions By building on the challenges that have been described, our work is, to our knowledge, the first saliency method to rigorously address the following questions in a time series setting.
|
| 42 |
+
|
| 43 |
+
*(1) How to incorporate the context?* In our framework, this is naturally achieved by studying the effect of *dynamic* perturbations. Concretely, a perturbation is built for each feature at each time by using the value of this feature at adjacent times. This allows us to build meaningful perturbations that carry contexts such as the one illustrated in Figure 1.
|
| 44 |
+
|
| 45 |
+
*(2) How to be parsimonious?* A great advantage of using
|
| 46 |
+
|
| 47 |
+
masks is that the notion of parsimony naturally translates into the *extremal* property. Conceptually, an *extremal mask* selects the minimal number of inputs allowing to reconstruct the black-box prediction with a given precision.
|
| 48 |
+
|
| 49 |
+
(3) How to be legible? To make our masks as simple as possible, we encourage them to be almost binary. Concretely, this means that we enforce a polarization between low and high saliency scores. Moreover, to make the notion of legibility quantitative, we propose a parallel with information theory<sup>1</sup>. This allows us to introduce two metrics: the mask information and the mask entropy. As illustrated in Figure 3, the entropy can be used to assess the legibility of a given mask. Moreover, these metrics can also be computed for other saliency methods, hence allowing insightful comparisons.
|
| 50 |
+
|
| 51 |
+
The paper is structured as follows. In Section 2, we outline the mathematical formalism related to our method. Then, we evaluate our method by comparing it with several benchmarks in Section 3 and conclude in Section 4.
|
| 52 |
+
|
| 53 |
+
# Method
|
| 54 |
+
|
| 55 |
+
In this section, we formalize the problem of feature importance over time as well as the proposed solution. For the following, it is helpful to have the big picture in mind. Hence, we present the blueprint of Dynamask in Figure 2. The rest of this section makes this construction rigorous.
|
| 56 |
+
|
| 57 |
+
Let $\mathcal{X} \subset \mathbb{R}^{d_X}$ be an input (or feature) space and $\mathcal{Y} \subset$ $\mathbb{R}^{d_Y}$ be an output (or label) space, where $d_X$ and $d_Y$ are respectively the dimension of the input and the output space. For the sake of concision, we denote by $[n_1 : n_2]$ the set of natural numbers between the natural numbers $n_1$ and $n_2$ with $n_1 < n_2$ . We assume that the data is given in terms of time series $(\mathbf{x}_t)_{t \in [1:T]}$ , where the inputs $\mathbf{x}_t \in \mathcal{X}$ are indexed by a time parameter $t \in [1:T]$ with $T \in \mathbb{N}^*$ . We consider the problem of predicting a sequence $(\mathbf{y}_t)_{t \in [t_u:T]}$ of outputs $\mathbf{y}_t \in \mathcal{Y}$ indexed by the same time parameter, but starting at a time $t_y \geq 1$ , hence allowing to cover sequence to vector predictions $(t_y = T)$ as well as sequence to sequence predictions ( $t_y < T$ ). For the following, it is convenient to introduce a matrix notation for these time series. In this way, $\mathbf{X} = (x_{t,i})_{(t,i) \in [1:T] \times [1:d_X]}$ denotes the matrix in $\mathbb{R}^{T \times d_X}$ whose rows correspond to time steps and whose columns correspond to features. Similarly $^2$ , we denote $\mathbf{Y}=$ $(y_{t,i})_{(t,i)\in[t_y:T]\times[1:d_Y]}$ for the matrix in $\mathbb{R}^{(T+1-t_y)\times d_Y}$ .
|
| 58 |
+
|
| 59 |
+
Our task is to explain the prediction $\mathbf{Y} = f(\mathbf{X})$ of a black box f that has been pre-trained for the aforementioned prediction task. In other words, our method aims to explain individual predictions of a given black box. More specifically, our purpose is to identify the parts of the input $\mathbf{X}$ that are the most relevant for f to produce the prediction $\mathbf{Y}$ . To do this, we use the concept of masks introduced in (Fong & Vedaldi, 2017; Fong et al., 2019), but in a different context. Here, we adapt the notion of mask to a dynamic setting.
|
| 60 |
+
|
| 61 |
+
**Definition 2.1** (Mask). A mask associated to an input sequence $\mathbf{X} \in \mathbb{R}^{T \times d_X}$ and a black box $f : \mathcal{X}^T \to \mathcal{Y}^{T+1-t_y}$ is a matrix $\mathbf{M} = (m_{t,i}) \in [0,1]^{T \times d_X}$ of the same dimension as the input sequence. The element $m_{t,i}$ of this matrix represents the importance of feature i at time t for f to produce the prediction $\mathbf{Y} = f(\mathbf{X})$ .
|
| 62 |
+
|
| 63 |
+
*Remark* 2.1. For a given coefficient in the mask matrix, a value close to 1 indicates that the feature is salient, while a value close to 0 indicates the opposite.
|
| 64 |
+
|
| 65 |
+
Now how can we obtain such a mask given a black box and an input sequence? To answer this question, we should let the mask act on the inputs and measure the effect it has on a black box prediction. As the name suggests, we would like this mask to hide irrelevant inputs contained in **X**. To make this rigorous, it is useful to spend some time to think about perturbation operators.
|
| 66 |
+
|
| 67 |
+
The method that we are proposing in this paper is perturbation-based. Concretely, this means that a mask is used to build a *perturbation operator*. Examples of such operators are given in (Fong et al., 2019) in the context of image classification. Here, we propose a general definition and we explain how to take advantage of the dynamic nature of the data to build meaningful perturbations. By recalling that the mask coefficients indicate the saliency, we expect the perturbation to vanish for features $x_{t,i}$ whose mask coefficient $m_{t,i}$ is close to one. This motivates the following definition.
|
| 68 |
+
|
| 69 |
+
**Definition 2.2** (Perturbation operator). A perturbation operator associated to a mask $\mathbf{M} \in [0,1]^{T \times d_X}$ is a linear operator acting on the input sequence space $\Pi_{\mathbf{M}} : \mathbb{R}^{T \times d_X} \to \mathbb{R}^{T \times d_X}$ . It needs to fulfil the two following assumptions for any given $(t,i) \in [1:T] \times [1:d_X]$ :
|
| 70 |
+
|
| 71 |
+
1. The perturbation for $x_{t,i}$ is dictated by $m_{t,i}$ :
|
| 72 |
+
|
| 73 |
+
$$\left[\Pi_{\mathbf{M}}\left(\mathbf{X}\right)\right]_{t,i} = \pi\left(\mathbf{X}, m_{t,i}; t, i\right),\,$$
|
| 74 |
+
|
| 75 |
+
where $\pi$ is differentiable for $m \in (0,1)$ and continuous at m=0,1.
|
| 76 |
+
|
| 77 |
+
2. The action of the perturbation operator is trivial when the mask coefficient is set to one :
|
| 78 |
+
|
| 79 |
+
$$\pi(\mathbf{X}, 1; t, i) = x_{t,i}.$$
|
| 80 |
+
|
| 81 |
+
<sup>&</sup>lt;sup>1</sup>Many connections exist between explainability and information theory, see for instance (Chen et al., 2018).
|
| 82 |
+
|
| 83 |
+
$<sup>^2</sup>$ In the following, we do not make a distinction between $\mathbb{R}^{T \times d_X}$ and $\mathcal{X}^T$ as these are isomorphic vector spaces. The same goes for $\mathbb{R}^{(T+1-t_y) \times d_Y}$ and $\mathcal{Y}^{T+1-t_y}$ .
|
| 84 |
+
|
| 85 |
+

|
| 86 |
+
|
| 87 |
+
Figure 2. Diagram for Dynamask. An input matrix $\mathbf{X}$ , extracted from a multivariate time series, is fed to a black-box to produce a prediction $\mathbf{Y}$ . The objective is to give a saliency score for each component of $\mathbf{X}$ . In Dynamask, these saliency scores are stored in a mask $\mathbf{M}$ of the same shape as the input $\mathbf{X}$ . To detect the salient information in the input $\mathbf{X}$ , the mask produces a perturbed version of $\mathbf{X}$ via a perturbation operator $\Pi$ . This perturbed $\mathbf{X}$ is fed to the black-box to produce a perturbed prediction $\mathbf{Y}_{\mathbf{M}}$ . The perturbed prediction is compared to the original prediction and the error is backpropagated to adapt the saliency scores contained in the mask.
|
| 88 |
+
|
| 89 |
+
Moreover, we say that the perturbation is *dynamic* if for any given feature at any given time, the perturbation is constructed with the values that this feature takes in neighbouring times. More formally, the perturbation is dynamic if there exist a couple $(W_1,W_2) \in \mathbb{N} \times \mathbb{N} \setminus \{(0,0)\}$ such that for all $(t,i) \in [1:T] \times [1:d_X]$ we have:
|
| 90 |
+
|
| 91 |
+
$$\frac{\partial \left[\pi\left(\mathbf{X}, m_{t,i} ; t, i\right)\right]}{\partial x_{t',i}} \neq 0$$
|
| 92 |
+
|
| 93 |
+
for all
|
| 94 |
+
$$t' \in [t - W_1 : t + W_2] \cap [1 : T]$$
|
| 95 |
+
.
|
| 96 |
+
|
| 97 |
+
Remark 2.2. The first assumption ensures that the perturbations are applied independently for all inputs and that our method is suitable for gradient-based optimization. The second assumption<sup>3</sup> translates the fact that the perturbation should have less effect on salient inputs.
|
| 98 |
+
|
| 99 |
+
Remark 2.3. The parameters $W_1$ and $W_2$ control how the perturbation depends on neighbouring times. A non-zero value for $W_1$ indicates that the perturbation depends on past time steps. A non-zero value for $W_2$ indicates that the perturbation depends on future time steps. For the perturbation to be dynamic, at least one of these parameters has to be different from zero.
|
| 100 |
+
|
| 101 |
+
This definition gives the freedom to design perturbation operators adapted to particular contexts. The method that we develop can be used for any such perturbation operator. In our case, we would like to build our masks by taking advantage the dynamic nature of the data into account. This is crucial if we want to capture local time variations of the features, such as a quick increase of the blood-pressure or
|
| 102 |
+
|
| 103 |
+
an extremal security price from Figure 1. This is achieved by using dynamic perturbation operators. To illustrate, we provide three examples:
|
| 104 |
+
|
| 105 |
+
$$\pi^{g}\left(\mathbf{X}, m_{t,i} ; t, i\right) = \frac{\sum_{t'=1}^{T} x_{t',i} \cdot g_{\sigma(m_{t,i})}(t - t')}{\sum_{t'=1}^{T} g_{\sigma(m_{t,i})}(t - t')}$$
|
| 106 |
+
|
| 107 |
+
$$\pi^{m}\left(\mathbf{X}, m_{t,i} ; t, i\right) = m_{t,i} \cdot x_{t,i} + (1 - m_{t,i}) \cdot \mu_{t,i}$$
|
| 108 |
+
|
| 109 |
+
$$\pi^{p}\left(\mathbf{X}, m_{t,i} ; t, i\right) = m_{t,i} \cdot x_{t,i} + (1 - m_{t,i}) \cdot \mu_{t,i}^{p}$$
|
| 110 |
+
|
| 111 |
+
where $\pi^g$ is a temporal Gaussian blur <sup>4</sup> with
|
| 112 |
+
|
| 113 |
+
$$g_{\sigma}(t) = \exp\left(-\frac{t^2}{2\sigma^2}\right); \ \sigma(m) = \sigma_{max} \cdot (1 - m).$$
|
| 114 |
+
|
| 115 |
+
Similarly, $\pi^m$ can be interpreted as a fade to moving average perturbation with
|
| 116 |
+
|
| 117 |
+
$$\mu_{t,i} = \frac{1}{2W+1} \sum_{t'=t-W}^{t+W} x_{t',i},$$
|
| 118 |
+
|
| 119 |
+
where $W \in \mathbb{N}$ is the size of the moving window. Finally, $\pi^p$ is similar to $\pi^m$ with one difference: the former only uses past values of the features to compute the perturbation<sup>5</sup>:
|
| 120 |
+
|
| 121 |
+
$$\mu_{t,i}^p = \frac{1}{W+1} \sum_{t'=t-W}^t x_{t',i}.$$
|
| 122 |
+
|
| 123 |
+
Note that the Hadamard product $\mathbf{M} \odot \mathbf{X}$ used in (Ho et al., 2019) is a particular case of $\pi^m$ with $\mu_{t,i} = 0$ = so that
|
| 124 |
+
|
| 125 |
+
$<sup>^3</sup>$ Together with the continuity of $\pi$ with respect to the mask coefficients.
|
| 126 |
+
|
| 127 |
+
<sup>&</sup>lt;sup>4</sup>For this perturbation to be continuous, we assume that $\pi^g(\mathbf{X}, 1; t, i) \equiv x_{t,i}$ .
|
| 128 |
+
|
| 129 |
+
<sup>&</sup>lt;sup>5</sup>This is useful in a typical forecasting setting where the future values of the feature are unknown.
|
| 130 |
+
|
| 131 |
+
this perturbation is static. In the following, to stress that a mask is obtained by inspecting the effect of a dynamic perturbation operator, we shall refer to it as a dynamic mask (or Dynamask in short).
|
| 132 |
+
|
| 133 |
+
In terms of complexity, the computation of $\pi^g$ requires $\mathcal{O}(d_X \cdot T^2)$ operations <sup>6</sup> while $\pi^m$ requires $\mathcal{O}(d_X \cdot T \cdot W)$ operations <sup>7</sup>. When T is big<sup>8</sup>, it might therefore be more interesting to use $\pi^m$ with $W \ll T$ or a windowed version of $\pi^g$ . With this analysis of perturbation operators, everything is ready to explain how dynamic masks are obtained.
|
| 134 |
+
|
| 135 |
+
To design an objective function for our mask, it is helpful to keep in mind what an ideal mask does. From the previous subsection, it is clear that we should compare the black-box predictions for both the unperturbed and the perturbed input. Ideally, the mask will identify a subset of salient features contained in the input that explains the black-box prediction. Since this subset of features is salient, the mask should indicate to the perturbation operator to preserve it. More concretely, a first part of our objective function should keep the shift in the black-box prediction to be small. We call it the *error* part of our objective function. In practice, the expression for the error part depends on the task done by the black-box. In a regression context, we minimize the squared error between the unperturbed and the perturbed prediction:
|
| 136 |
+
|
| 137 |
+
$$\mathcal{L}_{e}\left(\mathbf{M}\right) = \sum_{t=t_{u}}^{T} \sum_{i=1}^{d_{Y}} \left( \left[ \left( f \circ \Pi_{\mathbf{M}} \right) \left( \mathbf{X} \right) \right]_{t,i} - \left[ f(\mathbf{X}) \right]_{t,i} \right)^{2}.$$
|
| 138 |
+
|
| 139 |
+
Similarly, in a classification task, we minimize the crossentropy between the predictions:
|
| 140 |
+
|
| 141 |
+
$$\mathcal{L}_{e}\left(\mathbf{M}\right) = -\sum_{t=t_{y}}^{T} \sum_{c=1}^{d_{Y}} \left[f(\mathbf{X})\right]_{t,c} \log \left[\left(f \circ \Pi_{\mathbf{M}}\right)\left(\mathbf{X}\right)\right]_{t,c}.$$
|
| 142 |
+
|
| 143 |
+
Now we have to make sure that the mask actually selects salient features and discards the others. By remembering that $m_{t,i}=0$ indicates that the feature $x_{t,i}$ is irrelevant for the black-box prediction, this selection translates into imposing sparsity in the mask matrix $\mathbf{M}$ . A first approach for this, used in (Fong & Vedaldi, 2017; Ho et al., 2019), is to add a $l^1$ regularisation term on the coefficients of $\mathbf{M}$ . However, it was noted in (Fong et al., 2019) that this produces mask that vary with the regularization coefficient $\lambda$ in a way that renders comparisons between different $\lambda$ difficult. To solve this issue, they introduce a new regularization term
|
| 144 |
+
|
| 145 |
+
to impose sparsity:
|
| 146 |
+
|
| 147 |
+
$$\mathcal{L}_a(\mathbf{M}) = \|\text{vecsort}(\mathbf{M}) - \mathbf{r}_a\|^2,$$
|
| 148 |
+
|
| 149 |
+
where $\|.\|$ denotes the vector 2-norm, vecsort is a function that vectorizes $\mathbf{M}$ and then sort the elements of the resulting vector in ascending order. The vector $\mathbf{r}_a$ contains $(1-a)\cdot d_X\cdot T$ zeros followed by $a\cdot d_X\cdot T$ ones, where $a\in[0,1]$ . In short, this regularization term encourages the mask to highlight a fraction a of the inputs. For this reason, a can also be referred to as the area of the mask. A first advantage of this approach is that the hyperparameter a can be modulated by the user to highlight a desired fraction of the input. For instance, one can start with a small value of a and slide it to higher values in order to see features gradually appearing in order of importance. Another advantage of this regularization term is that it encourages the mask to be binary, which makes the mask more legible as we will detail in the next section.
|
| 150 |
+
|
| 151 |
+
Finally, we might want to avoid quick variations in the saliency over time. This could either be a prior belief or a preference of the user with respect to the saliency map. If this is relevant, we can enforce the salient regions to be connected in time with the following loss that penalizes jumps of the saliency over time:
|
| 152 |
+
|
| 153 |
+
$$\mathcal{L}_c(\mathbf{M}) = \sum_{t=1}^{T-1} \sum_{i=1}^{d_X} | m_{t+1,i} - m_{t,i} |.$$
|
| 154 |
+
|
| 155 |
+
For a fixed fraction a, the mask optimization problem can therefore be written as
|
| 156 |
+
|
| 157 |
+
$$\mathbf{M}_{a}^{*} = \underset{\mathbf{M} \in \left[0,1\right]^{T \times d_{X}}}{\operatorname{arg\,min}} \ \mathcal{L}_{e}\left(\mathbf{M}\right) + \lambda_{a} \cdot \mathcal{L}_{a}\left(\mathbf{M}\right) + \lambda_{c} \cdot \mathcal{L}_{c}\left(\mathbf{M}\right).$$
|
| 158 |
+
|
| 159 |
+
Note that, in this optimization problem, the fraction a is fixed. In some contexts, one might want to find the smallest fraction of input features that allows us to reproduce the black-box prediction with a given precision. Finding this minimal fraction corresponds to the following optimization problem:
|
| 160 |
+
|
| 161 |
+
$$a^* = \min \left\{ a \in [0, 1] \mid \mathcal{L}_e \left( \mathbf{M}_a^* \right) < \varepsilon \right\},$$
|
| 162 |
+
|
| 163 |
+
where $\varepsilon$ is the threshold that sets the acceptable precision. The resulting mask $\mathbf{M}_{a^*}^*$ is then called *extremal mask*. The idea of an extremal mask is extremely interesting by itself: it explains the black-box prediction in terms of a minimal number of salient features. This is precisely the parsimony that we were referring to in the introduction.
|
| 164 |
+
|
| 165 |
+
Finally, it is worth mentioning that we have here presented a scheme where the mask preserves the features that minimize the error. There exists a variant of this scheme where the mask preserves the features that maximize the error. This other scheme, together with the detailed optimization algorithm used in our implementation, can be found in Appendix B.
|
| 166 |
+
|
| 167 |
+
$<sup>^6</sup>$ One must compute a perturbation for each $d_X \cdot T$ component of the input and the sums in each perturbation have T terms
|
| 168 |
+
|
| 169 |
+
<sup>&</sup>lt;sup>7</sup>One must compute a perturbation for each $d_X \cdot T$ component of the input and each moving average involves 2W + 1 terms
|
| 170 |
+
|
| 171 |
+
$<sup>^{8}</sup>$ In our experiments, however, we never deal with T>100 so that both approaches are reasonable.
|
| 172 |
+
|
| 173 |
+
Once the mask is obtained, it highlights the features that contain crucial information for the black-box to issue a prediction. This motivates a parallel between masks and information theory. To make this rigorous, we notice that the mask admits a natural interpretation in terms of information content. As aforementioned, a value close to 1 for mt,i indicates that the input feature xt,i carries information that is important for the black box f to predict an outcome. It is therefore natural to interpret a mask coefficient as a probability that the associated feature is salient for the black box to issue its prediction. It is tempting to use this analogy to build the counterpart of *Shannon information content* in order to measure the quantity of information contained in subsequences extracted from the time series. However, this analogy requires a closer analysis. We recall that the Shannon information content of an outcome decreases when the probability of this outcome increases (Shannon, 1948; MacKay, 2003; Cover & Thomas, 2005). In our framework, we would like to adapt this notion so that the information content increases with the mask coefficients (if mt,i gets closer to one, this indicates that xt,i carries more useful information for the black-box to issue its prediction). To solve this discrepancy, we have to use 1−mt,i as the pseudoprobability appearing in our adapted notion of Shannon information content.
|
| 174 |
+
|
| 175 |
+
Definition 2.3 (Mask information). The mask information associated to a mask M and a subsequence (xt,i)(t,i)∈<sup>A</sup> of the input X with A ⊆ [1 : T] × [1 : dX] is
|
| 176 |
+
|
| 177 |
+
$$I_{\mathbf{M}}(A) = -\sum_{(t,i)\in A} \ln(1 - m_{t,i}).$$
|
| 178 |
+
|
| 179 |
+
*Remark* 2.4*.* Conceptually, the information content of a subsequence measures the quantity of useful information it contains for the black-box to issue a prediction. It allows us to associate a saliency score to a group of inputs according to the mask.
|
| 180 |
+
|
| 181 |
+
*Remark* 2.5*.* Note that, in theory, the mask information diverges when a mask coefficient is set to one. In practice, this can be avoided by imposing M ∈ (0, 1)<sup>T</sup> <sup>×</sup>d<sup>X</sup> .
|
| 182 |
+
|
| 183 |
+
As in traditional information theory, the information content is not entirely informative on its own. For instance, consider two subsequences indexed by, respectively, A, B with |A| = |B| = 10. We assume that the submask extracted from M with A contains 3 coefficients m = 0.9 and 7 coefficients m = 0 so that the information content of A is IM(A) ≈ 6.9. Now we consider that all the coefficient extracted from M with B are equal to 0.5 so that IM(B) ≈ 6.9 and hence IM(A) ≈ IM(B). In this example, A clearly identifies 3 important features while B gives a mixed score for the 10 features. Intuitively, it is pretty clear that the information provided by A is sharper. Unfortunately,
|
| 184 |
+
|
| 185 |
+
the mask information by itself does not allow to distinguish these two subsequences. Hopefully, a natural distinction is given by the counterpart of Shannon entropy.
|
| 186 |
+
|
| 187 |
+
Definition 2.4 (Mask entropy). The mask entropy associated to a mask M and a subsequence (xt,i)(t,i)∈<sup>A</sup> of the input X with A ⊆ [1 : T] × [1 : dX] is
|
| 188 |
+
|
| 189 |
+
$$S_{\mathbf{M}}(A) = -\sum_{(t,i)\in A} m_{t,i} \ln m_{t,i} + (1 - m_{t,i}) \ln (1 - m_{t,i})$$
|
| 190 |
+
|
| 191 |
+
*Remark* 2.6*.* We stress that the mask entropy is *not* the Shannon entropy for the subsequence (xt,i)(t,i)∈A. Indeed, the pseudo-probabilities that we consider are not the probabilities p(xt,i) for each feature xt,i to occur. In particular, there is no reason to expect that the probability of each input decouple from the others so that it would be wrong to sum the individual contribution of each input separately 9 like we are doing here.
|
| 192 |
+
|
| 193 |
+
Clearly, it is desirable for our mask to provide explanations with low entropy. This stems from the fact that the entropy is maximized when mask coefficients mt,i are close to 0.5. In this case, given our probabilistic interpretation, the mask coefficient is ambiguous as it does not really indicate whether the feature is salient. This is consistent with our previous example where SM(A) ≈ 0.98 while SM(B) ≈ 6.93 so that SM(A) SM(B). Since masks coefficients take various values in practice, masks with higher entropy appear less legible, as illustrated in Figure 3. Therefore, we use the entropy as a measure of the mask's sharpness and legibility in our experiments. In particular, the entropy is minimized for perfectly binary masks M ∈ {0, 1} <sup>T</sup> <sup>×</sup>d<sup>X</sup> , which are easy to read and contain no ambiguity. Consequently, the regularization term La(M) that we used in Section 2.3 has the effect of reducing the entropy. Since our adapted notions of information and entropy rely on individual contributions from each feature, they come with natural properties.
|
| 194 |
+
|
| 195 |
+
Proposition 2.1 (Metric properties). *For all labelling sets* A, B ⊆ [1 : T]×[1 : dX]*, the mask information and entropy enjoy the following properties:*
|
| 196 |
+
|
| 197 |
+
$$I_{\mathbf{M}}(A) \ge 0$$
|
| 198 |
+
$S_{\mathbf{M}}(A) \ge 0$
|
| 199 |
+
|
| 200 |
+
$$I_{\mathbf{M}}(A \cup B) = I_{\mathbf{M}}(A) + I_{\mathbf{M}}(B) - I_{\mathbf{M}}(A \cap B)$$
|
| 201 |
+
|
| 202 |
+
$$S_{\mathbf{M}}(A \cup B) = S_{\mathbf{M}}(A) + S_{\mathbf{M}}(B) - S_{\mathbf{M}}(A \cap B)$$
|
| 203 |
+
|
| 204 |
+
*3. Monotonicity If* A ⊂ B *:*
|
| 205 |
+
|
| 206 |
+
$$I_{\mathbf{M}}(A) \le I_{\mathbf{M}}(B)$$
|
| 207 |
+
$S_{\mathbf{M}}(A) \le S_{\mathbf{M}}(B).$
|
| 208 |
+
|
| 209 |
+
<sup>9</sup> It is nonetheless possible to build a single mask coefficient for a group of inputs in order to imitate a joint distribution.
|
| 210 |
+
|
| 211 |
+

|
| 212 |
+
|
| 213 |
+
Figure 3. Mask information and entropy. For a given subsequence and a given mask, the information increases when more features are relevant (a → b or c → d). The entropy increases when the sharpness of the saliency map decreases (a → c or b → d). Masks with high entropy appear less legible, especially for long sequences.
|
| 214 |
+
|
| 215 |
+
*Proof.* See Appendix A.
|
| 216 |
+
|
| 217 |
+
Together, these properties guarantee that I<sup>M</sup> and S<sup>M</sup> define measures for the discrete σ-algebra P ([1 : T] × [1 : dX]) on the set of input indexes [1 : T] × [1 : dX]. All of these metrics can be computed for any saliency method, hence allowing comparisons in our experiments. For more details, please refer to Appendix A.
|
2106.05665/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-10-04T18:10:41.824Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.50 Safari/537.36 Edg/94.0.992.23" etag="zDzr3BvQxQRClGe8pv4D" version="15.4.1" type="google"><diagram id="dS8sRDbr3op2R_7dbu0e" name="Page-1">7Vxtc6M2EP41nrl+SAck8MvHOJf0Ou11MpPO9PqpI4NsNIeRT8ixc7++Ei/mZfHVrglgcL4EFmkF+zxa7S4yI/yw3v8iyMb7zF3qj5Dh7kf44wgh05hN1T8teYslY9OMBSvB3KRRJnhh32naM5FumUvDQkPJuS/Zpih0eBBQRxZkRAi+KzZbcr846oaskhGNTPDiEJ+CZn8xV3qxdIommfwTZSsvHdkcz+Ira5I2ThSHHnH5LifCjyP8IDiX8dF6/0B9bbzULnG/pyNXDzcmaCBP6fApkGxxJ7+hL7/9I+jT5jv68vedNY7VvBJ/mzxxcrfyLTUBDdx7bUl1FvBACecuCT2q1ZrqxJNrPznU8mciJRVBJEEGVtJQCv71YD313HPBt4EbKTDUGXWVxZPhuJAeX/GA+I+ZdB7fkm539NnNg0UVFSlfUyneVJNdhpmd4ODl4EplgvpEsteiepJQZ3VQdxjhmTM1MDISmuNZAnpCcsswiipCvhUOTXrlMfoPRaZdUiSJWFEJFKmD3GNnoogCZ9ABncUGxydhyJwIYiIkFOeo0TfI7VTPpZCrnkXIrdMgV9Ymb7lmG90gPP+GT72vUnt1EN9BrfzDN290OTWtI8idTc2yoqa9kXVjw+VsQNO6HNW0XTbYNzbUEKmgmthQVtQ0GyY3NtTAhnFdbBi3y4bZjQ01hLR1+YayoqbZkNYObnS4KIysK3AoK2qcDiagw+rDfoTGvtRAbhfqcKUPjVSmhsmJfwLkyaDWBNl5TNKXDXH01Z0gmyJ/lsz3H7jPRdQXL6cOdZwDg3JXFlPbso0DM16pkHR/LjeOBWuT5DzHnUkFd5BxnCYFXM4GAZYW8iBsSFAw8fjbVtfF5kseyLswqgreqwamsdlnF1OAKqDE1VDmpdGQ14uwVXayrSMMk/fqaWb2Z5phu2sgwCzJAHZVzyuLxiM+W+nVzVHPTZWd5toqzCH+fXJhzVw3WrUEVbORLCJV2ohJ2Unptecj+6PWtZU8nrGR6qL9k7U2D1YiqsPpje3iimVCNFCjaMAsxRwMGtjoGhpTgAaMC3uLhtU1NGDOhoeDxrRjaCAYI1uDQcNCXUMDBsv2cNDo2iqOYGA7vg40BJdEMh4kemsAx+7aoo7gS6LHPXW20WMj48Ov2sTIeBKUwgQDgnaqZU8HtyptKZaKaoDlkHekeUg6h3KwjCtgwe8GC8xDLqy59AYrjItYoYophBvFCmYp/6M0M/0Z67+G6jO9YYM16xobYJb0noW6I8WgARLB7BoRYIJWr1swhgcynnQM5FTxbbY3PNvtrhEBFgB+J5KG+j3es6Auc+KIuq+AlCPoKkAajaAxrAF8ZuGaSMcbEAp22yjA3P+UmvGVmh+kJhXmb9Yrwez+lCLxtZr/BPY3a36YxffY/CACb9388Mc5pxTlr9X85ZCodfPDukif2V8uRLRufliIKFR2XzzBgq8sWKmTPwnze5wtmHax6o5mEJtZo9j8uDYwiC11aApr7I3u9Ul/ndNQ7o56v6WuvIu9fYRhDvjulXmr9zCXf9bWPsyn7py8vSw7nLcVlVgwKHwSZK3fMf+xXS+UyfqLRXkLWQUWptEoGD9+VzXoiTPp2sSp5XVSY9FKb4hQrmq0TgT7usLW/hDB7hoRYHSL+mt+EMlU/FSkWfPDqLPH5S2wHrZu/qqvW8Q+CTgp7ftG+htXaT6U+T28XBqGDrrKrjDv9+L+wPHlxukr7GD1ax32qs9Y1Ag7vsFetda1Djt8ldP5Sk5/2DDrWuRT9fWSm+9/b9hbdwKwXDHoPVZV87DR3T02rEkMb49V1bSoCQV1mn3fNP6cR/aVWPz4Lw==</diagram></mxfile>
|
2106.05665/main_diagram/main_diagram.pdf
ADDED
|
Binary file (18.7 kB). View file
|
|
|
2106.05665/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,129 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Real-time perception is an important precursor for developing intelligent embodied systems. These systems operate on sensory data on top of a hardware substrate, *viz*, a mix of resource-constrained, embedded and networked computers. Resource planning is a critical consideration in such constrained scenarios. Further, these systems need to be carefully designed to be latency sensitive and ensure safe behaviour (See Figure [2\)](#page-2-0).
|
| 4 |
+
|
| 5 |
+
Many runtime execution decisions exist, e.g. the spatial resolution (scale) to operate at, temporal stride of model execution, choice of model architecture etc. The runtime decisions are influenced by (a) intrinsic context derived from sensor data (e.g. image content) (b) extrinsic context observed from system characteristics, e.g. contention from external processes (other applications). A execution framework is needed to use available context, jointly optimize accuracy and latency while taking these decisions.
|
| 6 |
+
|
| 7 |
+
Traditional execution frameworks [\[1,](#page-10-0) [2,](#page-10-1) [3,](#page-10-2) [4\]](#page-10-3) operate in the following paradigm – optimize model to operate within a latency budget. Largely, they do not jointly optimize accuracy and latency for real-time tasks. Instead, they optimize proxies like latency budget [\[1\]](#page-10-0), input resolution [\[2\]](#page-10-1), latency from resource contention [\[4\]](#page-10-3) or energy [\[5\]](#page-10-4). Their approach requires rule-based decision making – heuristic design for every characteristic, instead of learning the decision function. These frameworks explicitly handcraft rules to accommodate hardware capabilities. Moreover, traditional strategies (Figure [2\)](#page-2-0) budgeting latency are suboptimal when operating on streaming data [\[6\]](#page-10-5).
|
| 8 |
+
|
| 9 |
+
<sup>∗</sup>Work done at Microsoft Research India
|
| 10 |
+
|
| 11 |
+

|
| 12 |
+
|
| 13 |
+
Figure 1: Optimizing real-time perception inference. (a) A powerful detector (HTC [\[7\]](#page-10-6)) has very good offline (non real-time) (38.2 mAP) performance. (b) However, by the time it finishes execution (750ms per frame), inferences are not aligned with objects (6.2 sAP). (c) Solely optimizing for latency via a faster model (30ms, RetinaNet [\[8\]](#page-10-7) at low resolution) leads to suboptimal and unsafe behavior (6.0 sAP), like missing pedestrians. (d) Learning decisions with favourable inference tradeoffs with Chanakya (21.3 sAP) achieves better real-time performance.
|
| 14 |
+
|
| 15 |
+
Operating in the new streaming perception paradigm, we propose Chanakya[2](#page-1-0) – a novel learning-based approximate execution framework to learn runtime decisions for real-time perception. Chanakya operates in the following manner – (a) Content and system characteristics are captured by intrinsic and extrinsic context which guide runtime decisions. (b) Decision classes are defined, such as the input resolution (scale) choices, the model choices etc. Chanakya handles these interacting decisions and their tradeoffs which combinatorially increase. (c) Accuracy and latency are jointly optimized using a novel reward function without approximating either objectives. (d) Execution policy is learnt from streaming data, and is dynamic, i.e. configuration changes during real-time execution of the perception system.
|
| 16 |
+
|
| 17 |
+
Chanakya learns performant execution policies and can incorporate new decision dimensions, stochastic system context and be ported to different hardware. Importantly, our improvements are in addition to, and complementary to models and the decision dimensions themselves – faster & more accurate models [\[9,](#page-10-8) [10\]](#page-10-9), vision-guided runtime decisions [\[11,](#page-10-10) [12,](#page-10-11) [13\]](#page-10-12) and system-focused decisions [\[14,](#page-10-13) [15\]](#page-10-14) can be easily incorporated.
|
| 18 |
+
|
| 19 |
+
# Method
|
| 20 |
+
|
| 21 |
+
```
|
| 22 |
+
Initialize policy \pi
|
| 23 |
+
Set change system configuration probability p
|
| 24 |
+
\tau = 0
|
| 25 |
+
for e=0,1,\cdots,E-1 do
|
| 26 |
+
Reset simulator;
|
| 27 |
+
for every sequence do
|
| 28 |
+
Initialize empty stream replay buffer S;
|
| 29 |
+
i - 1
|
| 30 |
+
while streaming do
|
| 31 |
+
// System executes on streaming data
|
| 32 |
+
value = Draw from Bernoulli(p);
|
| 33 |
+
if value == true then
|
| 34 |
+
Observe x_i from the stream;
|
| 35 |
+
z_{\tau} = context(x_i);
|
| 36 |
+
a_{\tau} = \pi(z_{\tau}) \; ;
|
| 37 |
+
Change system config. using a_{\tau};
|
| 38 |
+
t_{a_{\tau}} = time.now();
|
| 39 |
+
Store (z_{\tau}, a_{\tau}, t_{a_{\tau}}) in stream replay buffer
|
| 40 |
+
\tau = \tau + 1;
|
| 41 |
+
i = i + 1;
|
| 42 |
+
\pi = train\_controller(\pi, S);
|
| 43 |
+
```
|
| 44 |
+
|
| 45 |
+
```
|
| 46 |
+
\begin{split} & \textbf{Function} \; \texttt{train\_controller}(\pi : Policy, \, S : \, \textit{Stream Replay Buffer}) ; \\ & \text{Unpack } K \; \text{length sequence} \\ & \{(z^1, a^1, t^1) \cdots (z^K, a^K, t^K)\} \; \text{from stream replay buffer } S; \\ & \textbf{for } n = l, \cdots, K \; \textbf{do} \\ & \qquad \qquad r^n = R(t^n, t^{n+1}); \\ & \text{Store} \; (z^n, a^n, r^n) \; \text{in controller's replay buffer} \\ & \qquad B; \\ & \pi = update(\pi) \; ; \end{split}
|
| 47 |
+
```
|
| 48 |
+
|
| 49 |
+
```
|
| 50 |
+
Function update (\pi: Policy):
|
| 51 |
+
|
| 52 |
+
// Optimize the Q function
|
| 53 |
+
|
| 54 |
+
Sample N tuples
|
| 55 |
+
\{(z^1, a^1, r^1), \cdots, (z^N, a^N, r^N)\} uniformly from buffer B;
|
| 56 |
+
for n = l, \cdots, N do
|
| 57 |
+
|
| 58 |
+
// Set the targets
|
| 59 |
+
y^n = r^n
|
| 60 |
+
// Calculate the loss for the batch
|
| 61 |
+
\mathcal{L} = \frac{1}{N} \sum_{n=1}^{N} \frac{1}{|D|} \sum_{d \in a^n} [y^n - Q_{\theta}(z^n, d)]^2
|
| 62 |
+
Use optimizer to optimize \theta to minimize \mathcal{L};
|
| 63 |
+
```
|
| 64 |
+
|
| 65 |
+
observation $O_y$ with action $a_y$ is taken at $t_y$ ( $t_y > t_x$ ). We propose a reward,
|
| 66 |
+
|
| 67 |
+
<span id="page-5-0"></span>
|
| 68 |
+
$$R(t_x, t_y) = L(\{y_{t_k}, \hat{y}_{\varphi(t_k)}\}_{k=x}^y)$$
|
| 69 |
+
(1)
|
| 70 |
+
|
| 71 |
+
where L is an arbitrary single frame loss. Intuitively, the reward for action $a_x$ is streaming accuracy obtained for a stream sequence segment until a new action $a_y$ is taken in the future and $\varphi$ ensures only real-time results are accounted for. Reward $R(t_x,t_y)$ ensures learned controller implicitly considers real-time constraints and accuracy of the decision $a_x$ , not assuming any properties of the algorithm f or loss L. Thus, it can applied to other perception tasks.
|
| 72 |
+
|
| 73 |
+
While [6] defined $L_{streaming} = L(\{y_{t_k}, \hat{y}_{\varphi(t_k)}\}_{k=0}^N)$ and corresponds to R(0,T) (Loss over N observations from a video stream of length T), $L_{streaming}$ was employed to characterize performance of f on the sensor stream. Directly employing it as reward made training the controller sample-inefficient, and did not converge before we exhausted our training budget (of 10 epochs).
|
| 74 |
+
|
| 75 |
+
While reward proposed in Equation 1 is learnable, it is biased towards the inherent characteristics of the video sequence. Consider two video sequences A and B, Sequence A with one large object that can be easily detected would have much higher average rewards than Sequence B with many occluding and small objects. To normalize for this factor, we consider a fixed policy $\pi_{fixed}$ , i.e., a fixed configuration of algorithm f and propose,
|
| 76 |
+
|
| 77 |
+
<span id="page-5-1"></span>
|
| 78 |
+
$$R_{fixed\_adv}(t_x, t_y) = R_{\pi}(t_x, t_y) - R_{\pi_{fixed}}(t_x, t_y)$$
|
| 79 |
+
(2)
|
| 80 |
+
|
| 81 |
+
where $R_{\pi_{fixed}}$ is the reward obtained by executing algorithm f following a fixed policy. To implement $R_{fixed\_adv}$ , f with fixed policy $\pi_{fixed}$ are executed and prefetched the predictions and the timestamps are stored. During training, we compute $R_{\pi}$ and load $\pi_{fixed}$ outputs to simulate $R_{\pi_{fixed}}$ .
|
| 82 |
+
|
| 83 |
+
This eliminates the bias in the inital proposed reward, and provides the *advantage* of learned policy over fixed policy (See Figure 6). In Section 4.1, we show any fixed policy configuration ( $\pi_{fixed}$ ) suffices and the policy learned is resilient to variations of this selection. We refer to the reward in Equation 1 as $R_1$ and Equation 2 as $R_2$ in our experiments for simplicity.
|
| 84 |
+
|
| 85 |
+
Our controller design decisions (such as employing deep contextual bandits) are motivated by severe real-time constraints. For e.g., we considered individual frames as independent trials while learning our controller, i.e. we compute context from a single frame to incur minimal overhead (amortized overhead of around 1%). Considering sequences as controller's input incurred significant overheads and impeded performance.
|
| 86 |
+
|
| 87 |
+
<span id="page-6-0"></span>Table 1: **Improvements on a predefined system.** Chanakya outperforms competing execution policies for a predefined system. All the execution policies operate on top of a perception system employing the same components: Faster R-CNN and Kalman Filter.
|
| 88 |
+
|
| 89 |
+
| Approach | sAP | $sAP_{50}$ | $sAP_{75}$ |
|
| 90 |
+
|-----------------------------------------------------------------------------------------------------------------------|-------------|--------------|------------------|
|
| 91 |
+
| 1. Streamer (s=900) [6] (Static Policy) 2. Streamer (s=600) [6] (Static Policy) | 18.2 | 35.3 | 16.8 |
|
| 92 |
+
| | 20.4 | 35.6 | 20.8 |
|
| 93 |
+
| 3. Streamer (s=600, np=300) (Static-Expert Policy) 4. Streamer (s=600, np=500) (Static-Expert Policy) | 20.8 | 36.0<br>35.9 | 20.9 |
|
| 94 |
+
| 5. Streamer + AdaScale [2] (Dynamic-Traditional Policy) 6. Streamer + AdaScale + Our Scheduler (Dynamic-Trad. Policy) | 13.4 | 23.1 | 13.8 |
|
| 95 |
+
| | 13.8 | 23.4 | 14.3 |
|
| 96 |
+
| 7. Chanakya $(s_1, np_1, \mathbf{R} = R_1)$ | 21.0 | 36.8 | <b>21.2</b> 21.1 |
|
| 97 |
+
| 8. Chanakya $(s_1, np_1, \mathbf{R} = R_2, \pi_{fixed} = (\mathbf{s} = 480, \mathbf{np} = 300))$ | <b>21.3</b> | <b>37.3</b> | |
|
| 98 |
+
| 9. Offline Upper Bound (s=600, Latency Ignored) | 24.3 | 38.9 | 26.1 |
|
| 99 |
+
|
| 100 |
+
Combinatorially Increasing Configurations. Consider decision space $\mathbb{D}=\{D_1,D_2..D_m\}$ , with dimensionality $M=|\mathbb{D}|$ , where each decision dimension $D_i$ corresponds to a configuration parameter of the algorithm f discretized as $d_k$ . Thus, an action a has M decision dimensions and each discrete sub-action has a fixed number of choices. For example, if we consider 2 decision dimensions, $\mathbb{D}=\{D_{scale},D_{model}\}$ , the potential configurations would be $D_{scale}=\{720,600,480,360,240\}$ , $D_{model}=\{yolo,fcos,frcnn\}$ . Using conventional discrete-action algorithms, $\prod_{d\in\mathbb{D}}|d|$ possible actions need to be considered. Efficiently exploring such large action spaces is difficult, rendering naive discrete-action algorithms like [3,21] intractable.
|
| 101 |
+
|
| 102 |
+
Chanakya uses a multi-layer perceptron for predicting $Q_{\theta}(z,a_i)$ for each sub-action $a_i$ and given context z, and employs action branching architecture [40] to separately predict each sub-action, while operating on a shared intermediate representation. This significantly reduces the number of action-values to be predicted from $\prod_{d\in\mathbb{D}}|d|$ to $\sum_{d\in\mathbb{D}}|d|$ .
|
| 103 |
+
|
| 104 |
+
**Real-time Scheduler.** Choosing which frames to process is critical to achieve good streaming performance. Temporal aliasing, i.e., the mismatch between the output and the input streams, reduces performance. [6] prove optimality of shrinking tail scheduler. However, this scheduler relies on the assumption that the runtime $\rho$ of the algorithm f is constant. This is reasonable as runtime distributions are unimodal, without considering resource contention.
|
| 105 |
+
|
| 106 |
+
Chanakya changes configurations at runtime, and corresponding $\rho$ changes. Fortunately, our space of configurations is *discrete*, and $\rho$ 's (for *every* configuration) can be assumed to be constant. Thus, a fragment of the sequence where a particular configuration is selected is a scenario where shrinking tail scheduler using that runtime will hold. We cache configuration runtimes of algorithm f and *modify the shrinking tail scheduler* to incorporate this assumption.
|
| 107 |
+
|
| 108 |
+
**Asynchronous Training.** Chanakya is trained in such a way that real-time stream processing is simulated, akin to challenges presented in [19, 20]. Thus, a key practical challenge is to factor in average latency for generating the context from the observations, computing the controller's action during training. We also need to ignore the time taken to train the controller as these would disrupt and change the system performance. Training the controller concurrently on a different GPU incurred high communication latency overhead.
|
| 109 |
+
|
| 110 |
+
Algorithms 1 and 2 describe the training strategy. We train the controller intermittently – after a sequence has finished processing, the controller's parameters are fixed while the sequence is streamed. While the sequence is processing, we store the (z,a,t) tuples in a buffer S along with all the algorithm's predictions (to construct the reward r). During training, first we add these tuples to controller's buffer B and, sample from B to update $Q_{\theta}(z,a)$ .
|
| 111 |
+
|
| 112 |
+
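
The following sketch paraphrases one outer iteration of this strategy; all helper names (`stream`, `build_context`, `reward`, etc.) are illustrative stand-ins, not the paper's code:

```python
import random

class ReplayBuffer:
    def __init__(self):
        self.data = []
    def add(self, item):
        self.data.append(item)
    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def train_on_sequence(sequence, controller, f, build_context, reward, B):
    """Controller parameters stay frozen while the sequence streams; tuples
    are buffered in S and only afterwards pushed into B for Q-updates."""
    S = []
    for t, frame in enumerate(sequence):
        z = build_context(frame)   # average context latency is simulated
        a = controller.act(z)      # no parameter updates during streaming
        S.append((z, a, t, f(frame, a)))
    for z, a, t, preds in S:       # after streaming: build rewards, fill B
        B.add((z, a, reward(preds)))
    controller.update(B.sample(32))
```
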
**Training for edge devices.** We emulate training the controller for edge devices on a server by using the measured runtimes for each configuration on the edge device, since Chanakya only requires the runtimes of the algorithm $f_{\pi}$ on the target device to train the controller.
|
| 113 |
+
|
| 114 |
+
<span id="page-7-0"></span>Table 2: Handling large decision spaces. We incorporate a tracker in our predefined perception system. Choice of detector scale and proposals, tracker scale and stride, combinatorially expands our decision space. Chanakya is able to learn a competitive policy compared to best static policies [\[6\]](#page-10-5).
|
| 115 |
+
|
| 116 |
+
| Approach | sAP |
|
| 117 |
+
|------------------------------------|------|
|
| 118 |
+
| Static Policy (s=900, ts=600, k=5) | 17.8 |
|
| 119 |
+
| Static Policy (s=600, ts=600, k=5) | 19.0 |
|
| 120 |
+
| Chanakya (s1, np1, ts, k, R2) | 19.4 |
|
| 121 |
+
|
| 122 |
+
Table 3: Learning decisions from scratch. Chanakya is able to decide model and other meta-parameter choices even when the optimal algorithmic components are unknown.
|
| 123 |
+
|
| 124 |
+
| Approach | sAP |
|
| 125 |
+
|---------------------------------|------|
|
| 126 |
+
| Static Policy (m=fcos, s=600)   | 16.7 |
|
| 127 |
+
| Static Policy (m=yolov3, s=600) | 20.2 |
|
| 128 |
+
| Static Policy (m=frcnn, s=600) | 20.4 |
|
| 129 |
+
| Chanakya (m=m, s1, np1, R=R2) | 20.7 |
|
2107.10140/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:cc0f981b7c416f139acdad5f6d165d44fdec2c9ce50c96849b5bc21a2eb2243e
|
| 3 |
+
size 3232751
|
2108.02479/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:fa57276c1fa0bd22dcb7f6cdd352dc66894930ff0cab505cf01311fca8de4dd6
|
| 3 |
+
size 17826929
|
2108.13499/main_diagram/main_diagram.png
ADDED
|
Git LFS Details
|
2109.09133/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f0795991fb53b72bbc2d1cb5aec509413abf9ddd43e7e5ae158aeaef367cda49
|
| 3 |
+
size 312596
|
2109.13432/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
2109.13432/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,33 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Semantic segmentation, i.e. assigning a semantic class to each pixel in an input image, is an integral task in understanding shapes, geometry, and interaction of components from images. The field has enjoyed revolutionary improvements thanks to deep learning [\[21,](#page-8-0) [54,](#page-10-0) [35\]](#page-9-0). However, obtaining a large-scale dataset with pixel-level annotations is particularly expensive: for example, labeling takes 1.5 hours on average per image in the Cityscapes dataset [\[12\]](#page-8-1). Despite the recent introduction of datasets that are significantly larger than their predecessors [\[12,](#page-8-1) [43,](#page-9-1) [11,](#page-8-2) [29\]](#page-9-2), scarcity of labeled data remains a bottleneck when compared to other recognition tasks in computer vision [\[34,](#page-9-3) [42,](#page-9-4) [23\]](#page-8-3).
|
| 4 |
+
|
| 5 |
+
In the common scenario where data is provided as videos with labels for sparsely subsampled frames, a prominent way to tackle data scarcity is *label propagation* (LP), which automatically annotates additional video frames by propagating labels through time [\[58,](#page-10-1) [3\]](#page-8-4). This intuitive idea to leverage
|
| 6 |
+
|
| 7 |
+
<span id="page-0-0"></span>
|
| 8 |
+
|
| 9 |
+
Figure 1: Quantitative comparison of different auto-labeling methods for multiple propagation lengths using ApolloScape [\[43\]](#page-9-1) framewise ground-truth. We compare our propagation methods, *warp-refine* and *warp-inpaint*, to the prior-art *semantic-only* and *motion-only* propagation, and show that *warp-refine* is vastly superior to other auto-labeling approaches, especially for large time-steps.
|
| 10 |
+
|
| 11 |
+
motion-cues via temporal consistency in videos has been widely explored, using estimated motion [\[2,](#page-8-5) [13,](#page-8-6) [27\]](#page-9-5), patch matching [\[3,](#page-8-4) [4\]](#page-8-7), or predicting video frames [\[59\]](#page-10-2). However, as discussed in Zhu et al. [\[59\]](#page-10-2), estimating dense motion fields across long periods of time remains notoriously difficult. Further, these methods are often sensitive to hyperparameters (e.g. patch size), cannot handle de-occlusion, or require highly accurate optical flow, thus limiting their applicability.
|
| 12 |
+
|
| 13 |
+
Another promising approach for obtaining large-scale annotation in semi-supervised settings is *self-training* (ST), in which a teacher model, trained to capture semantic cues, is used to generate additional annotations on unlabeled images [\[19,](#page-8-8) [20,](#page-8-9) [61,](#page-10-3) [60\]](#page-10-4). While there have been significant improvements in ST, various challenges still remain in controlling the noise in pseudo-labels, such as heuristic decisions on confidence thresholds [\[37\]](#page-9-6), class imbalance in pseudo-labels [\[14\]](#page-8-10), inaccurate predictions for small segments, and misalignment of category definition between source and target domain.
|
| 14 |
+
|
| 15 |
+
To mitigate the drawbacks of LP and ST, we propose
|
| 16 |
+
|
| 17 |
+
<sup>\*</sup>Work done while A. Ganeshan was at Preferred Networks, Inc.
|
| 18 |
+
|
| 19 |
+
<span id="page-1-1"></span><span id="page-1-0"></span>
|
| 20 |
+
|
| 21 |
+
Figure 2: Accuracy of propagated labels. We visually compare the proposed *warp-refine* propagation (top-center) with the *motion-only* model (bottom-left), the *semantic-only* model (bottom-right), and the ground-truth annotation (bottom-center). The *motion-only* model (i) fails to correctly classify the new regions introduced in the target frame, and (ii) often suffers from *drifting*. In contrast, the semantic-only model (iii) tends to fail for far-away segments, and (iv) cannot handle misaligned class definitions between the teacher and the student model (e.g. the *ignore* label). Our method effectively combines the strengths of both of these approaches to overcome their respective limitations. For details of the *motion-only* and *semantic-only* models, see Section [4.4.](#page-5-0)
|
| 22 |
+
|
| 23 |
+
*Warp-Refine Propagation* (referred to as *warp-refine*), a novel method to automatically generate dense pixel-level labels for raw video frames. Our method is built on two key insights: (i) by combining motion cues with semantic cues, we can overcome the respective limitations of LP and ST, and (ii) by leveraging *cycle-consistency* across time, we can learn to combine these two complementary cues in a *semisupervised* setting without sequentially-annotated videos.
|
| 24 |
+
|
| 25 |
+
Specifically, our method first constructs an initial estimate by directly combining labels generated via motion cues and semantic cues. This initial estimate, containing erroneous conflict resolution and faulty merges, is then rectified by a separate refinement network. The refinement network is trained in a *semi-supervised* setting via a novel *cycleconsistency* loss. This loss compares the ground-truth labels with their cyclically propagated version created by propagating the labels forward-and-backward through time in a cyclic loop (t → t + k → t). Our loss is built on the observation that as our auto-labeling method is bi-directional, it can be used to generate different versions of each annotated frame. Once this network is trained, it is used to correct errors caused by propagation of variable length. In Fig. [2](#page-1-0) we show a qualitative comparison of our method against prior-arts, demonstrating drastic improvements in label quality.
|
| 26 |
+
|
| 27 |
+
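
A minimal sketch of the cycle-consistency idea as we read it (the propagation operator and the exact loss form are assumptions for illustration, not the authors' code):

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(labels_t, propagate, k):
    """Propagate ground-truth labels t -> t+k -> t with the bi-directional
    auto-labeling operator and penalize disagreement with the originals.

    labels_t:  (B, C, H, W) soft/one-hot labels at frame t
    propagate: callable(labels, src, dst) -> labels, assumed given
    """
    forward = propagate(labels_t, 0, k)   # t -> t + k
    cycled = propagate(forward, k, 0)     # t + k -> t
    return F.cross_entropy(cycled, labels_t.argmax(dim=1))
```
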
With quantitative analysis on a large-scale autonomous driving dataset (ApolloScape [\[43\]](#page-9-1)), we concretely establish the superior accuracy of our method against previous *state-of-the-art* auto-labeling methods. Such an analysis of different methods has been starkly missing from prior works [\[59,](#page-10-2) [37,](#page-9-6) [27\]](#page-9-5). As shown in Fig. [1,](#page-0-0) we observe that *warp-refine* accurately propagates labels over significantly longer time intervals, with a notable average improvement of 13.1 *mIoU* over the previous best method on ApolloScape. Further, it accurately labels rare classes such as 'Bicycle' and thin structures such as 'Poles' (cf. Section [4.3\)](#page-4-0). As a result, by training single-frame semantic segmentation models with the additional data labeled by our method, we achieve *state-of-the-art* performance on the KITTI [\[1\]](#page-8-11), NYU-V2 [\[28\]](#page-9-7) and Cityscapes [\[12\]](#page-8-1) benchmarks (cf. Section [4.5\)](#page-6-0).
|
| 28 |
+
|
| 29 |
+
In summary, our main contributions are: 1) A novel algorithm, termed *Warp-Refine Propagation*, that produces significantly more accurate pseudo-labels, especially for frames distant in time; 2) A novel loss function, based on the *cycleconsistency* of learned transformations, to train our method in a *semi-supervised* setting; and 3) A quantitative analysis on the *quality* and *utility* of different auto-labeling methods on multiple diverse datasets. To the best of our knowledge, our work is the first to utilize both semantic and geometric understanding for the task of video auto-labeling.
|
| 30 |
+
|
| 31 |
+
# Method
|
| 32 |
+
|
| 33 |
+
We evaluate the benefit of the propagated labels on different semantic segmentation models. The results are summarized in Table A. We see that the propagated labels are significantly beneficial for smaller architectures, which have lower baseline performance. However, with motion-only propagated labels, performance is unaffected or sometimes even deteriorates. Note that these results use neither the 20000 additional coarse labels nor pretraining on Mapillary Vistas [29].
|
2110.05892/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-06-18T08:41:43.045Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.106 Safari/537.36" version="14.8.0" etag="Pu4lw5w_K8qBGV8JhFnb" type="google"><diagram id="n5fxVVrdoeMvLtedoDiF">7Z1dc6M2FIZ/jWfai+wgwOBc5nPbmabT6e5Mu1cexSg2XUCujGO7v76SERgQJE7WliPr7O5sQF8GHT0g6bw+GXg36fozw/PZA41IMnCdaD3wbgeui0aXLv8hUjYyxXGDImXK4kim7RK+xP+RsqBMXcYRWTQK5pQmeTxvJk5olpFJ3kjDjNFVs9gTTZqfOsdToiR8meBETf0rjvJZkToaOrv0X0g8nZWfjByZk+KysGxiMcMRXRVJ2zLe3cC7YZTmxVG6viGJ6L2yX4qG7ntyqwtjJMv3qSAt8YyTpbw3eV35prxZRpdZRER5Z+Bdr2ZxTr7M8UTkrrh9edosTxN+hvhhgh9Jco0n36fbajc0oYxnZTTj5a+faJaXSQPXu793+N+qFmURYY1sdI9CUS1OklZLi5zR76RZ2EVulVOaxecp8hYJy8m6t5tQ1fl82BKakpxteBFZYeRLA8oR63ryfLUzPxrJtFnN9H4oE7EcctOq7Z1V+IE0TLeRvA4jBUkuO7RhreDfJS0zLhZbcK54Adefr3eZYsgXBtylfY1TTpTr/E5W/P8/aYqzeoVgWvz0+b+H5eL72C2Oywvht1BcS1mwdxCh1wdR3d7ctNGQjCK/y+gj99ELgsOYuG3hUYeF/S4LH8DAfq+BZ0g8xhK8WDSsxe+JbS7yOOf9y/s5zqYJKU9rVua9nM4ZnTfqzgiOkjhrllRH0CNdiwHEmy6GUJzNCIvFaCtqpJhN46zIc8Tokv8jRtKq0BxHUdVCkS/HnxifK9mNItMXj0iZOSksLJLZ9PEntM1xb0Tj9cOfqwridi5mtdbQJ69r9J4QmhXO8sXYe4WaKlnY/QgoPeLJKPK6UHI93x/yRq8FRDF/1V0l8ZSb9zaNo0h89BEY81yNjA2BsXNn7CtZJHg8CK8RGoS3J0aNIP7eCrtQuwxCDx/nrRX4GokKgCj9RP0IHyllZOx//FfQEd40gUYuQuDCLC74SOS3xa0wHtoIh+9ohGMEcJgFx9Wv48BKKnROpS6BCrOoYGS6THAe02wcWkmHzu2xctsf8DAFD34J45GNXAx1bmkhBFyYxUWcTZKluLPxpZV06FyGoy7fJtDxgel4omy7e+ucfvf2JHgEOhfiyFM6jERT8kWekuSRru52CdeTJXuuOpOyfEanNMPJb3QLgkj8h+T5Roo18DKnzQ4m6zj/mx87n4by7NugkBSI49t1/WRTnggga5XE6bd63q7a9qys16czKO5Y3GbDYAu6ZJMySfpicw4ikd1d+o72MCwjYlXw3Gy/y0zbqleM4U2twJzGWb6otfyHSNiNl1LNUTmpHadl8qLF3QCoLm2/MdHviD6JQ+cu4esrdO5KA50bDajfDQpakoNZ+KRiEgR+ubP3dIOapIMyrXISBF6+s6fsM6XcRnYJSk6qKEHgHTRsyW6RpOSkmhIEHkLDyLBNVHJSVUk58wM8TMHDGlnJSXUl5eUCF6ZwYZ2w5KTKEhd8hIbxYY+y5KTSkhe+cwpgfEgwbJOWnFRbAt/YNQ0P67QlJxWXvPBtWxASHM/NrHVp2eVm7u1C5/UuPNPQIS3JTrBv5JDgEDbqd1KC2uNwep5hC0OtK1hwmZ29HxrUHh2UaVV7wBf0zp8yG6OHtJjSKvYoJ0LAlClraIvEHsrbRufukgdeO8PIsE3s0cZDq9jDA6edYXhYI/ZQuNA6oQKfnWFcWCf2UPjQuVXmgdPOMD7sEXu0wdAq9vAgNq5hYNgm9lDw0Lochy/UGoaHdWIPZSNX63o8VHrsxUgimoKH9CkG3htUpBEApbjjFyOJeNIzW48k4l/ubdm3RRJRQoVUE4hNy/plE8VlyVo7Y781JEnQ+pxDhyTx+v3boCQ6noRB675Fv3MVVCoHM3FwSpVK2QZMoXROofRiE5FJzBGwXafS5kyrTsUHz+HZc2ahTqXNlFadig/uRsOW/xbpVJS3jc6NMR8cjoaRITYV1pZoVNpoaNWo+OBrNAwNazQqChdaJ1PgajSMC+s0KgofWjfKwNdoGB8XlihU2lhoVaj4EG3XMCxsU6goeGhdiMOXgA3DwzqFirKFq3U1/sE8zOcpIlA8zDoXluVnQziSV5QeXtNIne7JrngkVcEfslK/fxKkHgcDcRS0QNS5gh2Cu+zsXdAQkKSDMq1CjyG43s6eMguFHm2mtAo9huCzM2wVbZHQQ3nb6NxfGoLXzjAybAtI0sZDq9hjCE47w/CwRuyhcKF1QgVeO8O4sE7sofChdasM3HaG8WFPQJI2GFrlHkMIt2sYGLbJPRQ8dC7Hy88CPEzBwzq5h7KRq3M9HiClx14MSNKI7KEpOsl7o5D0CQ+KO34xIEkgPbP1gCSht7dl3xaQ5K1xRJDTEiYcOpBIefcfRXpwnhogRXqgc78h6HeKgrrkYCZGzinlJQE46c7e8Q3yki7MtOpLAvD4nT1mFupLFKi0CkwC8BMatm63SGCivm+0bmmBq9AwNGxTmCh8aJWYBOAqNIyPOYuf8WRjic5EpUPrxAr8hYbRYZ3QRAVE565ZCB5DwwDJaG6J0kQhQ6vUJISIuYaRYZvUROVD58I87HebAh8fkg/rtCbqpq7OlXn4wZzOX2mEN+cnLFC9zjoXmGGX17nVhySLrhijq0EV2iPCi1nVqfUOfJuGZ49OqvXBsKMLyrQf/J1C1Y6XNEEYtLq2UBspv1Po1YZQu6GeX070DtlP2OXItM5yI3Qgy7UbOqbl+r1l8sE6qSyxexR6zvbPKxOI4zyA7zI+zSmkcvOEt5CS7SN3rycwfzLmzbHWfJyWcYfUAEJYig8m/NMIT+9XJXQ915sxkd4X9OgAj/fL1vj01Ie7izr4cA/xcO/3PS3mws4dA6g1DdxN/WpHuzngdrg94TRONkUd3hBO58WA9cS7M5fjK9uOLybHV7NMs7n26G3kFdcrMjPKUpzsN7dNSM5H0AW/6Uk1MW7VF8P0Qo44kS0HXSM75gMqy9V59TYzZzhb8ClqWjYvJvuywIqyqPnp9eqP1bC8aPX+Nv6h7PVtrLfyeFizQRQvOJSbcsFQrDLKXkkozusX1Cb7hnebALQP7WKcvIz2a7Oq48HV/l2AYccGSydd3iHo6vdcAV1Al/j5EK/5G8gUmpDTnkx14NQZQe/tNPFTRkVf7eZQ/FZnDzQiosT/</diagram></mxfile>
|
2110.05892/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,7 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Recently, deep neural network models have emerged in various fields of natural language processing (NLP) and have displaced conventional count-based methods from their mainstream position [@LB16; @VS17; @SS16]. Besides providing significant performance improvements, neural models often require substantial hardware resources and a large amount of clean training data. However, there is usually only a limited amount of cleanly labeled data available, so techniques such as data augmentation and self-training are commonly used to generate additional synthetic data.
|
| 4 |
+
|
| 5 |
+
Significant progress has been made in recent years in designing data augmentations for computer vision (CV) [@KrizhevskySH12], automatic speech recognition (ASR) [@ParkCZCZCL19], natural language understanding (NLU) [@HouLCL18] and machine translation (MT) [@wang-etal-2018-switchout] in supervised settings. In addition, semi-supervised approaches using self-training techniques [@BlumM98] have shown promising performance in conventional named entity recognition (NER) systems [@KB05; @daume-iii-2008-cross; @tackstrom-2012-nudging]. In this work, the effectiveness of self-training and data augmentation techniques on neural NER architectures is explored.
|
| 6 |
+
|
| 7 |
+
To cover different data situations, we select three different datasets: the English CoNLL 2003 [@SD03] dataset, the benchmark on which almost all NER systems report results; it is very clean, and baseline models achieve an F1 score of around 92.6%. The English W-NUT 2017 [@D17] dataset, which is user-generated and contains inconsistencies; baseline models reach an F1 score of around 52.7%. The GermEval 2014 [@B14] dataset, a fairly clean German dataset with baseline scores of around 86.3%[^2]. We observe that the baseline scores on clean datasets such as CoNLL and GermEval can hardly be improved by data adaptation techniques, while the performance on the W-NUT dataset, which is relatively small and inconsistent, can be improved significantly.
|
2111.04239/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
2111.04239/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,145 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Learning from noisy labels with deep models is a challenging problem in practice [@angluin1988learning; @frenay2013classification]. Since data noise is ubiquitous in the real world, collecting training data with clean labels is resource-intensive, especially for domains with ambiguous labels, such as semantic segmentation. As noisy labels are corrupted from ground-truth labels, the robustness of learned deep models degrades under high noise ratios, due to their high capability of fitting noisy labels [@zhang2017understanding].
|
| 4 |
+
|
| 5 |
+
To reduce the impact of corrupted labels in supervised learning tasks, two effective strategies, sample re-weighting and loss correction, were introduced in previous methods. The key idea of the former is to down-weight training samples that are likely to have incorrect labels by a weighting function. Existing weighting functions [@kumar2010self] commonly leverage the empirical loss of a sample as the input to estimate the corresponding weight value and monotonically decrease the weight as the loss increases. These functions can be pre-designed using certain prior knowledge [@zadrozny2004learning] or dynamically optimized in each iteration of the training process [@khan2017cost; @jiang2018Mentornet]. The strategy of loss correction focuses on changing the form of the loss function. The straightforward method is to correct corrupted labels via a confusion matrix [@ryutaro2019learning]. This matrix characterises the transition probability between true and corrupted labels in the training data, so the annotations can be corrected accordingly and the model trained on a cleaner dataset. Although these strategies can eliminate the impact of noisy labels to some extent, two limitations remain in practice. Firstly, the form of the weighting or correction function needs to be specified manually under certain assumptions on the data, which is infeasible in the real world. Secondly, hyper-parameters in these functions are usually tuned by cross-validation, impairing the stability of the trained models' performance.
|
| 6 |
+
|
| 7 |
+
Methods based on meta-learning for noisy labels have emerged recently [@ren2018learning; @chen2015webly; @hendrycks2018using]. These methods essentially formulate the task as a meta-learning problem. By constructing a small, completely clean data set called the meta-data set, an adaptive weighting or correction meta-function is learned from the meta-data set, which avoids manual tuning of hyper-parameters [@ren2018learning] and omits assumptions on the function form [@shu2019meta]. Although existing meta-learning-based approaches have achieved great efficiency and significantly improved the robustness of prediction models, two deficiencies remain. (1) Methods built on deterministic models usually neglect sample ambiguity [@finn2018probabilistic]: even with an effective prior, there might not be enough information in a sample to estimate the weight or rectify the loss with high certainty. It would be desirable for the meta-function to propose multiple potential solutions to the ambiguous weighting or rectifying task. (2) The meta-function, such as the meta weighting function, utilizes the training loss as the input to estimate the corresponding weights, which is deficient in exploiting structure information in the prediction.
|
| 8 |
+
|
| 9 |
+
In this paper, we build a hierarchical probabilistic model, warped probabilistic inference (WarPI), to adaptively learn from the meta-data set for noisy labels in the meta-learning scenario. In contrast to deterministic models, we treat the rectifying vector as a latent variable. The training process of the classification network can be rectified effectively by learning an amortized meta-network. The meta-network estimates the distribution of the rectifying vector, which can deal with sample ambiguity by modeling its uncertainty and is more robust to serious label noise. Unlike existing meta-functions that leverage the training loss as input, we design a more powerful meta-network that generates the distribution from the input of the logits and labels, which demonstrates a significant improvement in generalization ability in our experiments. WarPI can be seamlessly integrated into the SGD optimization of the classification network and shows favorable properties for alleviating the impact of noisy labels.
|
| 10 |
+
|
| 11 |
+
Contributions can be summarized in three aspects. **(1) Our WarPI is the first probabilistic model to resolve label noise within the meta-learning scenario. (2) We design a powerful amortized meta-network to estimate the distribution of the rectifying vector from the input of labels and predicted vector. (3) WarPI can be directly integrated into the training of the prediction network, demonstrating favorable effectiveness to learn from noisy labels.**
|
| 12 |
+
|
| 13 |
+
We conduct experiments on the CIFAR10, CIFAR100, Clothing1M, and Food-101N datasets to evaluate the proposed WarPI. The experimental results show that our method consistently outperforms state-of-the-art methods under a variety of noise ratios. Extensive analysis and ablation studies further illustrate the effectiveness of WarPI.
|
| 14 |
+
|
| 15 |
+
# Method
|
| 16 |
+
|
| 17 |
+
We propose to learn to rectify the training process within the meta-learning scenario. A meta-network generates a rectifying vector, which promotes robust learning with noisy labels. By treating the rectifying vector as a latent variable, the learning procedure can be formulated as a hierarchical probabilistic model. In contrast to deterministic models, we introduce an amortized meta-network to estimate the distribution of the rectifying vector to enhance its robustness.
|
| 18 |
+
|
| 19 |
+
In addition to the noisy training set $D_N = \{\mathbf{x}^i,\mathbf{y}^i\} _{i = 1}^N$ with $N$ samples, we provide a smaller set of clean samples $D_M = \{ \tilde{\mathbf{x}}^i,\tilde{\mathbf{y}}^i\} _{i = 1}^M$, referred to as the meta-data set, under the setting of meta-learning, where $N \gg M$. Unlike conventional supervised learning, there is an extra meta-network $V(\mathbf{y}^i, \mathbf{z}^i; \phi )$ with parameters $\phi$ that takes the logits $\mathbf{z}^i$ of a sample and its corresponding label $\mathbf{y}^i$ as input. The output of the meta-network is a vector $\mathbf{v}^i$ for the current sample $(\mathbf{x}^i,\mathbf{y}^i)$ that rectifies the learning process of the classifier. Given the rectifying vector $\mathbf{v}^i$, the classification network $F(\mathbf{x}^i, \mathbf{v}^i;\theta)$ with parameters $\theta$ can achieve robust learning on the main classification task with corrupted labels by multiplying $\mathbf{v}^i$ onto the logits.
|
| 20 |
+
|
| 21 |
+
To enhance the robustness of the model, we propose to formulate the inference process as a hierarchical probabilistic model, warped probabilistic inference (WarPI). Unlike those methods that correct the corrupted label to the pseudo one, our model meta-learns a warp $\mathbf{v}$ for the loss surface of the classification network to produce effective update direction under the case of noisy labels. Here, we consider the rectifying vector $\mathbf{v}$ as a latent variable. The goal of our task is to meta-learn accurate approximations to the posterior predictive distribution with shared parameters $\theta$ $$\begin{equation}
|
| 22 |
+
\label{eq:ppd}
|
| 23 |
+
p(\mathbf{y}|\mathbf{x}, \theta) = \int p(\mathbf{y}| \mathbf{x},\mathbf{v}, \theta)p(\mathbf{v}| \mathbf{x}, \theta)\text{d}\mathbf{v}.
|
| 24 |
+
\end{equation}$$
|
| 25 |
+
|
| 26 |
+
The rectified learning process comprises two steps. First, form the posterior distribution $p(\mathbf{v}| \mathbf{x},\theta)$ over $\mathbf{v}$ for each sample. Second, compute the posterior predictive $p(\mathbf{y} | \mathbf{x}, \mathbf{v}, \theta)$. Since the posterior is intractable, we approximate the posterior predictive distribution in Eq. ([\[eq:ppd\]](#eq:ppd){reference-type="ref" reference="eq:ppd"}) by an amortized distribution $$\begin{equation}
|
| 27 |
+
\label{eq:ad}
|
| 28 |
+
q_\phi(\mathbf{y} | \mathbf{x}) = \int p(\mathbf{y} | \mathbf{x}, \mathbf{v})q_\phi(\mathbf{v}| \mathbf{x},\mathbf{y})\text{d}\mathbf{v}.
|
| 29 |
+
\end{equation}$$ Specifically, we construct an amortized distribution $q_\phi(\mathbf{v}| \mathbf{x},\mathbf{y})$ by introducing the meta-network $\phi$ that takes $(\mathbf{x},\mathbf{y})$ as inputs and returns the distribution over the rectifying vector $\mathbf{v}$. In our work, we choose the factorized Gaussian distribution for $q_\phi(\mathbf{v}| \mathbf{x},\mathbf{y})$ where the mean and variance are computed from the meta-network $\phi$. The graphical model corresponding to our framework is illustrated in Figure [1](#fig:prob){reference-type="ref" reference="fig:prob"}.
|
| 30 |
+
|
| 31 |
+
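
A minimal PyTorch sketch of such an amortized meta-network, under our assumption of a plausible architecture (the text specifies only its inputs, outputs, and the factorized Gaussian form):

```python
import torch
import torch.nn as nn

class AmortizedMetaNet(nn.Module):
    """Maps (logits z, one-hot label y) to the mean and std of the
    factorized Gaussian q_phi(v | x, y) over the rectifying vector v."""
    def __init__(self, num_classes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * num_classes),  # [mu, log_sigma]
        )

    def forward(self, z, y_onehot):
        out = self.net(torch.cat([z, y_onehot], dim=-1))
        mu, log_sigma = out.chunk(2, dim=-1)
        return mu, log_sigma.exp()
```
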
![Graphical model corresponding to our WarPI framework.](){#fig:prob width="9cm"}
|
| 32 |
+
|
| 33 |
+
To evaluate the quality of the approximation of the predictive posterior, we choose the KL-divergence between the true predictive posterior and the approximated one $\KL[p(\mathbf{y}|\mathbf{x}) || q_\phi(\mathbf{y} | \mathbf{x})]$. The goal of learning is to minimize the expectation of the KL value over samples $$\begin{equation}
|
| 34 |
+
\label{eq:kl}
|
| 35 |
+
\phi^* = \mathop{\argmin}_{\phi} \mathop{\E}_{p(\mathbf{x})} [\KL[p(\mathbf{y}|\mathbf{x}) || q_\phi(\mathbf{y} | \mathbf{x})]].
|
| 36 |
+
\end{equation}$$ The training process will finally return the amortizing network $\phi$ that best approximates the posterior predictive distribution. Indeed, the optimum will recover the true posterior $p(\mathbf{v}| \mathbf{x},\mathbf{y})$ if $q_\phi(\mathbf{v}| \mathbf{x},\mathbf{y})$ in Eq. ([\[eq:ad\]](#eq:ad){reference-type="ref" reference="eq:ad"}) is powerful enough. The optimization in Eq. ([\[eq:kl\]](#eq:kl){reference-type="ref" reference="eq:kl"}) is closely related to the maximization of the log density of the predictive distribution. In this case, we have $$\begin{equation}
|
| 37 |
+
\label{eq:kl2log}
|
| 38 |
+
\mathop{\E}_{p(\mathbf{x})} [\KL[p(\mathbf{y}|\mathbf{x}) || q_\phi(\mathbf{y} | \mathbf{x})] + \text{H}(p(\mathbf{y} | \mathbf{x})) ] = \mathop{\E}_{p(\mathbf{x},\mathbf{y})} [-\log q_\phi(\mathbf{y} | \mathbf{x})],
|
| 39 |
+
\end{equation}$$ where $\text{H}(p)$ is the entropy of $p$. Thus, we derive the tractable objective function $$\begin{equation}
|
| 40 |
+
\label{eq:log}
|
| 41 |
+
\mathop{\argmax}_{\phi} \mathop{\E}_{p(\mathbf{x},\mathbf{y})} \log \int p_\theta(\mathbf{y} | \mathbf{x}, \mathbf{v})q_\phi(\mathbf{v}| \mathbf{x},\mathbf{y})\text{d}\mathbf{v}.
|
| 42 |
+
\end{equation}$$
|
| 43 |
+
|
| 44 |
+
The Eq. ([\[eq:log\]](#eq:log){reference-type="ref" reference="eq:log"}) indicates the inference procedure: (i) randomly select a sample $(\mathbf{x}_i, \mathbf{y}_i)$; (ii) form the posterior predictive distribution $q(\mathbf{y}_i|\mathbf{x}_i)$ based on Eq. ([\[eq:ad\]](#eq:ad){reference-type="ref" reference="eq:ad"}); (iii) calculate the log-likelihood $q_\phi(\mathbf{y}_i | \mathbf{x}_i)$. Indeed, this framework is generalized from the Bayesian decision theory (BDT) [@gordon2018metalearning]. The optimal prediction in BDT minimizes the expected distributional loss with predictive distribution $q(\mathbf{y}|\mathbf{x})$ over the variable $\mathbf{y}$ $$\begin{equation}
|
| 45 |
+
\label{eq:bdt}
|
| 46 |
+
\mathop{\argmin}_{q} \int p(\mathbf{y} | \mathbf{x})L(\mathbf{y} ,q(\cdot)) \text{d}\mathbf{y},
|
| 47 |
+
\end{equation}$$ where $p(\mathbf{y} | \mathbf{x}) = \int p(\mathbf{y} | \mathbf{x}, \mathbf{v}) p(\mathbf{v} | \mathbf{x}) \text{d}\mathbf{v}$ is the Bayesian predictive distribution and $L(\mathbf{y}, q(\cdot))$ denotes the cost between the true label $\mathbf{y}$ and prediction $q(\mathbf{y}|\mathbf{x})$ which is omitted as $q(\cdot)$.
|
| 48 |
+
|
| 49 |
+
In practice, we implement the amortizing meta-network $V$ with parameters $\phi$ that takes a pair of the logits of the observation and its label as input and outputs the distribution $q$ of the rectifying vector. By sampling a rectifying vector $\mathbf{v}$ from $q$, the classification network $F$ with parameters $\theta$ gets a prediction $\hat{\mathbf{y}}$ rectified by $\mathbf{v}$. We can get an unbiased estimate of the objective in Eq. ([\[eq:log\]](#eq:log){reference-type="ref" reference="eq:log"}) via Monte Carlo sampling, repeating the above process many times and averaging the results.
|
| 50 |
+
|
| 51 |
+
There are two networks in our framework. The amortizing meta-network $V(\cdot)$ takes logits and corrupted labels of the sample as inputs to generate the distribution $q(\mathbf{v})$ of the rectifying vector $\mathbf{v}$, while the classification network $F(\cdot)$ employs the sampled $\mathbf{v}$ to estimate the predictive posterior. The optimization for two networks is conducted via a bi-level iterative updating. We provide the exhaustive derivation for each updating step in the following.
|
| 52 |
+
|
| 53 |
+
In order to achieve better generalization under the case of noisy labels, the objective for our prediction model $F_\theta(\cdot)$ is to minimize the rectified loss with the support of the meta-network $$\begin{equation}
|
| 54 |
+
\label{eq:obj0}
|
| 55 |
+
\mathop{\argmin}_{\theta} L^{train}(\theta) = \frac{1}{N}\sum\limits_{i = 1}^N L(\mathbf{y}^i,\mathbf{v}^i \odot F(\mathbf{x}^i;\theta)),
|
| 56 |
+
\end{equation}$$ where $\mathbf{v}^i$ is sampled from the distribution $q(\mathbf{v})$ computed by the meta-network, $q^i(\mathbf{v}) \leftarrow V(F(\mathbf{x}^i;\theta), \mathbf{y}^i; \phi)$. Note that $F(\mathbf{x}^i;\theta)$ is the output of the fully-connected layer. By multiplying $\mathbf{v}^i$ with $F(\mathbf{x}^i;\theta)$ via the Hadamard product $\odot$, also known as the element-wise product, we compute the cross-entropy loss with the softmax function from the rectified logits. More specifically, since we assume that the variable $\mathbf{v}$ obeys a factorized Gaussian distribution $\mathbf{v}\sim N(\mu ,\sigma^2)$, we adopt the reparameterization trick proposed in [@kingma2013auto] to perform back-propagation through the sampling operation as $$\begin{equation}
|
| 57 |
+
\label{eq:reparam}
|
| 58 |
+
\mathbf{v}=\mu + \sigma \cdot \epsilon \quad \text{with} \;\; \epsilon \sim \mathcal{N}(0 ,\text{I}).
|
| 59 |
+
\end{equation}$$ Here, $(\mu, \sigma)$ are the output of the meta-network. We denote $\text{RP}$ as the sampling operation with the reparameterization trick in the following section.
|
| 60 |
+
|
| 61 |
+
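
As a short illustration of this RP operation (our sketch):

```python
import torch

def rp_sample(mu, sigma, ell=1):
    """Reparameterized draws v = mu + sigma * eps, eps ~ N(0, I), so that
    gradients flow back to (mu, sigma); returns ell stacked samples."""
    eps = torch.randn(ell, *mu.shape)
    return mu + sigma * eps
```
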
**The objective for $\theta$**. Recall the aim of computing the predictive posterior. We attain its unbiased estimation via Monte Carlo sampling. Supposing $\mathbf{v}$ are sampled $\ell$ times, the objective in Eq. ([\[eq:obj0\]](#eq:obj0){reference-type="ref" reference="eq:obj0"}) can be rewritten as $$\begin{equation}
|
| 62 |
+
\label{eq:obj1}
|
| 63 |
+
\mathop{\argmin}_{\theta} \, L^{train}(\theta) = \frac{1}{\ell N}\sum\limits_{i = 1}^N \sum\limits_{j = 1}^\ell L(\mathbf{y}^i, \text{RP}^{(j)}[V(F(\mathbf{x}^i;\theta), \mathbf{y}^i; \phi )] \odot F(\mathbf{x}^i;\theta)).
|
| 64 |
+
\end{equation}$$
|
| 65 |
+
|
| 66 |
+
The Monte Carlo sampling and averaging strategies for estimating the posterior ensure an efficient feed-forward propagation phase of the model at the training time. We have conducted further analysis of balancing their efficiency and accuracy in experiments.
|
| 67 |
+
|
| 68 |
+
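
A minimal PyTorch sketch of this Monte Carlo estimate (our paraphrase; `F_net` and `V_net` stand in for the classification network and the meta-network):

```python
import torch
import torch.nn.functional as F

def rectified_loss(F_net, V_net, x, y, ell=4):
    """MC estimate of the rectified loss in the objective for theta:
    sample ell rectifying vectors via the RP trick and average the
    cross-entropy computed on the Hadamard-rectified logits v * z."""
    z = F_net(x)                               # logits (fully-connected output)
    y1h = F.one_hot(y, z.shape[-1]).float()
    mu, sigma = V_net(z.detach(), y1h)         # V's input is detached (see text)
    losses = []
    for _ in range(ell):
        v = mu + sigma * torch.randn_like(sigma)   # RP sampling
        losses.append(F.cross_entropy(v * z, y))   # loss on v ⊙ z
    return torch.stack(losses).mean()
```
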
**The objective for $\phi$**. Besides the objective for $\theta$, the meta-network in WarPI is also evaluated using a clean, unbiased meta-data set $D_M$. Note that the updated $\theta$ corresponds closely to $\phi$. Once we obtain $F(\cdot)$ with parameters $\theta^*(\phi)$, the objective for the meta-network is $$\begin{equation}
|
| 69 |
+
\label{eq:obj2}
|
| 70 |
+
\begin{split}
|
| 71 |
+
\mathop{\argmin}_{\phi} \, L^{meta}(\phi) = \frac{1}{M}\sum\limits_{i = 1}^M L(\tilde{\mathbf{y}}^i, F(\tilde{\mathbf{x}}^i;\theta^*(\phi))).
|
| 72 |
+
\end{split}
|
| 73 |
+
\end{equation}$$ By minimizing Eq. ([\[eq:obj2\]](#eq:obj2){reference-type="ref" reference="eq:obj2"}) with respect to $\phi$, the learned $V_{\phi^*}$ can generate effective rectifying vectors to guide the following updates of $F_{\theta}$.
|
| 74 |
+
|
| 75 |
+
![Flowchart of our WarPI learning algorithm [\[alg:1\]](#alg:1){reference-type="ref" reference="alg:1"}. The solid and dashed lines denote forward and backward propagation, respectively. The meta-network $V_\phi$ generates the distribution of the rectifying vector $\mathbf{v}$, and then produces multiple examples via the sampling module. By backpropagating through the updating process of step 4, the meta-network can be optimized with step 5. The classification network will be optimized with support of the learned meta-network thereafter.](flowchart.pdf){#fig:flowchart width="8.5cm"}
|
| 76 |
+
|
| 77 |
+
To calculate the optimal parameters ${\theta^*}$ and ${\phi^*}$, we resort to a bi-level iterative optimization, as in MAML [@finn2017model], and use an online strategy to optimize the meta-network.
|
| 78 |
+
|
| 79 |
+
**Learning process for $F_\theta(\cdot)$**. For each iterative step of the classification network, we sample a mini-batch of $n$ training examples $\{(\mathbf{x}^i,\mathbf{y}^i) \}^n_{i=1}$. The update step with step size $\alpha$ for $F_\theta(\cdot)$ *w.r.t.* Eq. ([\[eq:obj1\]](#eq:obj1){reference-type="ref" reference="eq:obj1"}) can be derived as $$\begin{equation}
|
| 80 |
+
\label{eq:theta_step}
|
| 81 |
+
\hat{\theta}^{(t)}(\phi ) = \theta^{(t)} - \alpha \frac{1}{\ell n}\sum\limits_{i = 1}^n \sum\limits_{j = 1}^\ell \nabla _\theta L(\mathbf{y}^i, \text{RP}^{(j)}[V(F(\mathbf{x}^i;\theta^{(t)}), \mathbf{y}^i; \phi )] \odot F(\mathbf{x}^i;\theta^{(t)})).
|
| 82 |
+
\end{equation}$$
|
| 83 |
+
|
| 84 |
+
**Updating $\phi$ with the learning process of $\theta$**. As we obtain parameters $\hat{\theta}^{(t)}(\phi)$ with fixed $\phi$ in Eq. ([\[eq:theta_step\]](#eq:theta_step){reference-type="ref" reference="eq:theta_step"}), the meta-network $V_\phi(\cdot)$ can be updated by using a batch of $m$ meta samples $\{(\tilde{\mathbf{x}}^i,\tilde{\mathbf{y}}^i) \}^m_{i=1}$. Specifically, ${\phi^{(t)}}$ moves along the direction of gradients *w.r.t.* the objective in Eq. ([\[eq:obj2\]](#eq:obj2){reference-type="ref" reference="eq:obj2"}) $$\begin{equation}
|
| 85 |
+
\label{eq:phi_update}
|
| 86 |
+
\phi^{(t + 1)} = {\phi^{(t)}} - \beta \frac{1}{m}\sum\limits_{i = 1}^m {{\nabla _\phi }} L(\tilde{\mathbf{y}}^i, F(\tilde{\mathbf{x}}^i;\hat{\theta}^{(t)}(\phi^{(t)}))),
|
| 87 |
+
\end{equation}$$ where $\beta$ denotes the step size. Typically, the gradient-based update rule for $\phi$ computes gradients through the learning process of $\theta$, similar to MAML.
|
| 88 |
+
|
| 89 |
+
**Update $\theta$ with the learned $\phi$**. We employ the updated $V_{\phi^{(t + 1)}}(\cdot)$ to improve learning of the classification network $F_{\theta}(\cdot)$ $$\begin{equation}
|
| 90 |
+
\label{eq:theta_update}
|
| 91 |
+
\theta^{(t+1)} = \theta^{(t)}- \alpha \frac{1}{\ell n}\sum\limits_{i = 1}^n \sum\limits_{j = 1}^\ell \nabla _\theta L(\mathbf{y}^i, \text{RP}^{(j)}[V(F(\mathbf{x}^i;\theta^{(t)}), \mathbf{y}^i; \phi^{(t+1)} )] \odot F(\mathbf{x}^i;\theta^{(t)})).
|
| 92 |
+
\end{equation}$$
|
| 93 |
+
|
| 94 |
+
The overall steps are summarized in Algorithm [\[alg:1\]](#alg:1){reference-type="ref" reference="alg:1"} and illustrated in Figure [2](#fig:flowchart){reference-type="ref" reference="fig:flowchart"}. Thanks to the reparameterization trick, the sampling operation can be implemented as a linear transformation, which is tractable for gradient computation. Besides, estimating the predictive posterior in Eq. ([\[eq:obj1\]](#eq:obj1){reference-type="ref" reference="eq:obj1"}) by Monte Carlo sampling, averaging $\ell$ results, is also efficient. All gradient computations, including those in the bi-level iterative process, can be efficiently implemented with automatic differentiation tools.
|
| 95 |
+
|
| 96 |
+
:::: algorithm
|
| 97 |
+
::: algorithmic
|
| 98 |
+
Training data $D_N$, meta data $D_M$ $\triangleright$ Datasets\
|
| 99 |
+
Batch size $n, m$, outer iterations $T$,\
|
| 100 |
+
sample number $\ell$, learning rate $\alpha$, $\beta$ $\triangleright$ Hyper-parameters\
|
| 101 |
+
Optimal $\theta^*$\
|
| 102 |
+
Initialize parameters $\theta^{(0)}$ and $\phi^{(0)}$\
|
| 103 |
+
$(\mathbf{x}, \mathbf{y}), (\tilde{\mathbf{x}}, \tilde{\mathbf{y}})\leftarrow \text{SampleBatch}(D_N, n), \text{SampleBatch}(D_M, m)$\
|
| 104 |
+
Formulate learning process of $\theta$ with probabilistic inference $\triangleright$ Eq. ([\[eq:theta_step\]](#eq:theta_step){reference-type="ref" reference="eq:theta_step"})\
|
| 105 |
+
Update $\phi$ with the learning process of $\theta$ $\triangleright$ Eq. ([\[eq:phi_update\]](#eq:phi_update){reference-type="ref" reference="eq:phi_update"})\
|
| 106 |
+
Update $\theta$ with the learned $\phi$ $\triangleright$ Eq. ([\[eq:theta_update\]](#eq:theta_update){reference-type="ref" reference="eq:theta_update"})\
|
| 107 |
+
\
|
| 108 |
+
:::
|
| 109 |
+
::::
|
| 110 |
+
|
| 111 |
+
To demonstrate the property of WarPI and illustrate its effectiveness of the rectification process, we expand the updating steps of $\theta$ and $\phi$ in Eq.([\[eq:theta_step\]](#eq:theta_step){reference-type="ref" reference="eq:theta_step"}-[\[eq:phi_update\]](#eq:phi_update){reference-type="ref" reference="eq:phi_update"}). The sampling operation of the probabilistic inference is not included in following derivation for convenience. Here, $V(F(\mathbf{x}^i;{\theta^{(t)}});\phi)$ denotes one of the sampled rectifying vector $\mathbf{v}$ in the following equation. To facilitate the further derivation we expand Eq. ([\[eq:theta_step\]](#eq:theta_step){reference-type="ref" reference="eq:theta_step"}) for step $t$ as $$\begin{equation}
|
| 112 |
+
\label{eq:theta_step_exp}
|
| 113 |
+
\begin{split}
|
| 114 |
+
\hat{\theta}^{(t)}(\phi)
|
| 117 |
+
&=\theta^{(t)}
|
| 118 |
+
- \alpha \frac{1}{n}\sum\limits_{i = 1}^n
|
| 119 |
+
\nabla_\theta L(\mathbf{y}^i, F(\mathbf{x}^i;\theta^{(t)}) \odot V(F(\mathbf{x}^i;\theta^{(t)}), \mathbf{y}^i;\phi ))\\
|
| 120 |
+
&=\theta^{(t)} -\alpha \frac{1}{n}\sum\limits_{i=1}^n\frac{\partial F(\mathbf{x}^i;\theta^{(t)})}{\partial \theta }\odot V(F(\mathbf{x}^i;\theta^{(t)}), \mathbf{y}^i;\phi ) \cdot \frac{\partial L(\mathbf{y}^i,\mathbf{\hat {y}})}{\partial \mathbf{\hat {y}}}.
|
| 121 |
+
\end{split}
|
| 122 |
+
\end{equation}$$ Here, the prediction $\mathbf{\hat {y}} = F(\mathbf{x}^i;\theta^{(t)}) \odot V(F(\mathbf{x}^i;\theta^{(t)}), \mathbf{y}^i;\phi )$. Note that we detach the input $F(\mathbf{x}^i;\theta^{(t)})$ of $V(\cdot)$ from the computation graph, therefore, the gradient of $\hat{\theta}^{(t)}(\phi)$ with respect to $\phi$ can be written as $$\begin{equation}
|
| 123 |
+
\label{eq:theta_step_exp_gra}
|
| 124 |
+
\frac{\partial \hat{\theta}^{(t)}(\phi)}{\partial \phi} = \frac{\alpha }{n}\sum\limits_{i = 1}^n\frac{\partial V(F(\mathbf{x}^i;\theta^{(t)}), \mathbf{y}^i;\phi )}{\partial \phi }^{T}
|
| 125 |
+
\cdot\frac{\partial L(\mathbf{y}^i,\mathbf{\hat {y}})}{\partial \mathbf{\hat {y}}} \odot\frac{\partial F(\mathbf{x}^i;\theta^{(t)})}{\partial \theta}
|
| 126 |
+
\end{equation}$$
|
| 127 |
+
|
| 128 |
+
By substituting Eq. ([\[eq:theta_step_exp_gra\]](#eq:theta_step_exp_gra){reference-type="ref" reference="eq:theta_step_exp_gra"}) into Eq. ([\[eq:phi_update\]](#eq:phi_update){reference-type="ref" reference="eq:phi_update"}), we have $$\begin{equation}
|
| 129 |
+
\label{eq:second-order}
|
| 130 |
+
\begin{split}
|
| 131 |
+
{\phi^{(t + 1)}} &= {\phi^{(t)}} - \beta \frac{1}{m}\sum\limits_{i = 1}^m {{\nabla _\phi }} L(\tilde{\mathbf{y}}^i, F(\tilde{\mathbf{x}}^i;\hat{\theta}(\phi))) \\
|
| 132 |
+
&= \phi ^{(t)}-\frac{\beta }{m}\sum_{j=1}^{m}\sum_{i=1}^{n} \frac{\partial \hat{\theta} ^{(t)}(\phi )}{\partial V(F(\mathbf{x}^i;\theta^{(t)}), \mathbf{y}^i;\phi )}\\
|
| 133 |
+
& \cdot \frac{\partial V(F(\mathbf{x}^i;\theta^{(t)}), \mathbf{y}^i;\phi )}{\partial \phi}^{T} \cdot\frac{\partial L(\tilde {\mathbf{y}}^j,F(\tilde {\mathbf{x}}^j;\hat{\theta}^{(t)}(\phi ) ))}{\partial \hat{\theta}^{(t)}(\phi )}\\
|
| 134 |
+
& = \phi ^{(t)}+\frac{\beta }{m}\sum\limits_{j = 1}^m\frac{\alpha }{n}\sum\limits_{i = 1}^n\frac{\partial V(F(\mathbf{x}^i;\theta^{(t)}), \mathbf{y}^i;\phi )}{\partial \phi }^{T}
|
| 135 |
+
\cdot\frac{\partial L(\mathbf{y}^i,\mathbf{\hat {y}})}{\partial \mathbf{\hat {y}}}\\
|
| 136 |
+
&\odot \frac{\partial F(\mathbf{x}^i;\theta^{(t)})}{\partial \theta }
|
| 137 |
+
\cdot \frac{\partial L(\tilde {\mathbf{y}}^j,F(\tilde {\mathbf{x}}^j;\hat{\theta}^{(t)}(\phi ) ))}{\partial\hat{\theta}^{(t)}(\phi)}\\
|
| 138 |
+
& = \phi ^{(t)}+\frac{\alpha\cdot \beta }{m\cdot n}\sum\limits_{j = 1}^m\sum\limits_{i = 1}^n
|
| 139 |
+
\frac{\partial V(F(\mathbf{x}^i;\theta^{(t)}), \mathbf{y}^i;\phi )}{\partial \phi }^{T}
|
| 140 |
+
\cdot\frac{\partial L(\mathbf{y}^i,\mathbf{\hat {y}})}{\partial \mathbf{\hat {y}}}\\
|
| 141 |
+
&\odot\frac{\partial F(\mathbf{x}^i;\theta^{(t)})}{\partial \theta}
|
| 142 |
+
\cdot \frac{\partial F(\tilde {\mathbf{x}}^j;\hat{\theta}^{(t)}(\phi ) )}{\partial \hat{\theta }^{(t)}(\phi )}^{T}
|
| 143 |
+
\cdot\frac{\partial L(\tilde {\mathbf{y}}^j,F(\tilde {\mathbf{x}}^j;\hat{\theta}^{(t)}(\phi ) ))}{\partial F(\tilde {\mathbf{x}}^j;\hat{\theta}^{(t)}(\phi ) )}.
|
| 144 |
+
\end{split}
|
| 145 |
+
\end{equation}$$ It can be seen that the gradient of $\hat{\theta}^{(t)}(\phi)$ is computed by virtue of the meta batch and then backpropagated through $V(\cdot)$ using the training batch, analogous to how MAML differentiates through its update steps.
|
2112.00735/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ca5780904c80f32ac1e0fccae4fc7483caff21043f397bf99c824d14a5b188f7
|
| 3 |
+
size 4840119
|
2112.06170/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
2112.06170/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,198 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Most present-day cameras are equipped with CMOS sensors due to advantages such as slimmer readout circuitry, lower cost, and higher frame rate over their CCD counterparts. While capturing an image, the CMOS sensor array is exposed to the scene in a sequential, row-wise manner. The flip side is that, in the presence of camera motion, the inter-row delay leads to undesirable geometric effects, known as rolling shutter (RS) distortions. This is because the rows of an RS image do not necessarily sense the same camera motion. A prominent effect of RS distortion is the manifestation of straight lines as curves, which calls for correction of the RS effect, also termed RS rectification. Rectification of RS distortion involves finding the camera motion for every row of the RS image (row motions). Each row of the RS image is then warped using the estimated row motions, taking one of the rows as reference. Beyond aesthetic appeal, RS rectification is critical for vision tasks such as image registration, structure from motion (SFM), etc., which perform scene inference based on geometric attributes of the captured images.
|
| 4 |
+
|
| 5 |
+
Multi-frame RS rectification methods use videos [@liang2008analysis; @ringaby2012efficient; @kim2011system; @grundmann] and estimate the motion across the frames using point-correspondences. The inter-frame motion helps with estimating the row motion of each RS frame, aiding the warping process to obtain distortion-free frames. Different algorithms are proposed for RS deblurring [@vijay_rolling_deblurring; @mahesh_rolling_deblurring], RS super-resolution [@abhijith_rolling_sr], RS registration [@registraation1], and change detection [@vijay_rolling_changedetection]. The works in [@vasu2018occlusion; @zhuang2017rolling] have addressed the problem of depth-aware RS rectification and again rely on multiple input frames. A differential SFM based framework is employed in [@zhuang2017rolling] to perform RS rectification of input images captured by a slow-moving camera. [@vasu2018occlusion] can handle the additional effects of occlusion that arise while capturing images using a fast-moving RS camera. Some methods have used external sensor information such as a gyroscope [@hee2014gyro; @jia2012probabilistic; @patron2015spline] to stabilize RS distortion in videos. Moreover, these methods are strongly constrained by the availability as well as reliability of external sensor data.
|
| 6 |
+
|
| 7 |
+
The aforementioned methods are data-greedy and time-consuming, except [@grundmann]. Moreover, they are rendered unusable when only a single image is available. [@rengarajan2016bows; @purkait2017rolling; @lao2018robust] rely on straight lines becoming curves as a prominent cue to correct RS distortions. However, these methods are tailored for scenes that consist predominantly of straight lines and hence fail to generalize to natural images where actual curves are present in the 3D world. Moreover, they require knowledge of intrinsic camera parameters for RS rectification.
|
| 8 |
+
|
| 9 |
+
In this paper, we address the problem of single image RS rectification using a deep neural network. The prior work to use a deep network for RS rectification for 2D scenes is [@rengarajan2017unrolling] wherein a neural network is trained using RS images as input, and ground truth distortion (i.e., motion) parameters as the target. Given an RS image during inference, the trained neural network predicts motion parameters corresponding to a set of key rows which is then followed by interpolation for all rows. A main drawback of this approach is that it restricts the solution space of estimated camera parameters to the ground truth parameters used during training. Moreover, arriving at the rectified image is challenging since the association between the estimated motion parameters and the pixel position of ground truth global shutter (GS) image is unknown. [@rengarajan2017unrolling] attempts to solve this problem using an optimization framework as a complex post-processing step.
|
| 10 |
+
|
| 11 |
+
Recent findings in image restoration advocate that *end-to-end* training performs better than decoupled or piece-wise training such as in image deblurring [@mathieu2015deep; @tao2018scale], ghost imaging [@josawang2019learning], hyperspectral imaging [@josafu2020hyperspectral] and image super-resolution [@lim2017enhanced; @kim2016accurate]. As also reiterated in [@yin2018fisheyerecnet], a fisheye distortion rectification network, regressing for ground truth distortion parameters and then rectifying the distorted image gives sub-optimal performance compared to an end-to-end approach for the clean image. To this end, we propose a simple and elegant *end-to-end* deep network which uses ground truth image to guide the rectification process during training. RS rectification is done in a single step during inference.
|
| 12 |
+
|
| 13 |
+
Rolling shutter distortion due to the row-wise exposure of the sensor array depends on the relative motion between camera and scene. Fig. [\[rollingShutter\]](#rollingShutter){reference-type="ref" reference="rollingShutter"} shows a scene captured using an RS camera under different camera trajectories, i.e., different values of $[r_x, r_y, r_z, t_x, t_y, t_z]$, where $t_{\phi}$ and $r_{\phi}$, $\phi \in \{x , y, z\}$, indicate translations along and rotations about the $\phi$ axis, respectively. As observed in the figure, and as also stated in [@rengarajan2017unrolling], the effect of $t_y, t_z$ and $r_x$ on RS distortion is negligible compared to the effect of $t_x, r_y$ and $r_z$. Moreover, the effect of $r_y$ can be approximated by $t_x$ for large focal lengths and when the movement of the camera towards or away from the scene is minimal. Hence, it suffices to consider only $t_x$ and $r_z$ as essentially responsible for RS image formation.
|
| 14 |
+
|
| 15 |
+
The GS image coordinates $(x_{\mbox{gs}}, y_{\mbox{gs}})$ are related to RS image coordinates $(x_{\mbox{rs}}, y_{\mbox{rs}})$ by
|
| 16 |
+
|
| 17 |
+
$$\begin{equation}
|
| 18 |
+
\label{rsImageFormation}
|
| 19 |
+
\begin{split}
|
| 20 |
+
x_{\mbox{rs}} = x_{\mbox{gs}} \cdot \ensuremath{\,\textrm{cos}}\,(r_z(x_{\mbox{rs}})) - y_{\mbox{gs}} \cdot \ensuremath{\,\textrm{sin}}\,(r_z(x_{\mbox{rs}})) + t_x(x_{\mbox{rs}}) \\
|
| 21 |
+
y_{\mbox{rs}} = x_{\mbox{gs}} \cdot \ensuremath{\,\textrm{sin}}\,(r_z(x_{\mbox{rs}})) + y_{\mbox{gs}} \cdot \ensuremath{\,\textrm{cos}}\,(r_z(x_{\mbox{rs}}))
|
| 22 |
+
\end{split}
|
| 23 |
+
\end{equation}$$ where $r_z(x_{\mbox{rs}})$ and $t_x(x_{\mbox{rs}})$ are the rotation and translation motion experienced by the $x_{\mbox{rs}}^{th}$ row of RS image. The GS-RS image pairs required for training our neural network are synthesized using Eq [\[rsImageFormation\]](#rsImageFormation){reference-type="ref" reference="rsImageFormation"}. Given a GS image and the rotation and translation motion for every row of the RS image, the RS image can be generated using either source-to-target (S-T) or target-to-source (T-S) mapping with GS coordinates as source and RS coordinates as target. Since the motion parameters are associated with RS coordinates, S-T mapping is not employed for RS image generation. In T-S mapping, each pixel location of target RS image is multiplied with corresponding warping matrix formed by the motion parameters to yield source GS pixel location. The intensity at the resultant GS pixel coordinate is found by using bilinear interpolation and then copied to the RS pixel location.
|
| 24 |
+
|
| 25 |
+
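
A minimal NumPy sketch of this target-to-source generation step, under our reading of Eq. ([rsImageFormation]) for a grayscale image with per-row motion arrays (our illustration, not the authors' code):

```python
import numpy as np

def generate_rs(gs, r_z, t_x):
    """For every RS (target) pixel, invert Eq. (rsImageFormation) with that
    row's motion to get GS (source) coordinates, then bilinearly sample.
    gs: (H, W) float image; r_z, t_x: per-row motion arrays of length H."""
    H, W = gs.shape
    rs = np.zeros_like(gs)
    for x_rs in range(H):                       # x indexes rows, as in the text
        c, s = np.cos(r_z[x_rs]), np.sin(r_z[x_rs])
        for y_rs in range(W):
            # inverse rotation/translation: RS coords -> GS coords
            x_gs = c * (x_rs - t_x[x_rs]) + s * y_rs
            y_gs = -s * (x_rs - t_x[x_rs]) + c * y_rs
            x0, y0 = int(np.floor(x_gs)), int(np.floor(y_gs))
            if 0 <= x0 < H - 1 and 0 <= y0 < W - 1:
                a, b = x_gs - x0, y_gs - y0     # bilinear weights
                rs[x_rs, y_rs] = ((1-a)*(1-b)*gs[x0, y0] + (1-a)*b*gs[x0, y0+1]
                                  + a*(1-b)*gs[x0+1, y0] + a*b*gs[x0+1, y0+1])
    return rs
```
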
Given an RS image and motion parameters for each row of the RS image, the RS observation can be rectified akin to the process of RS image formation, except that the source is now the RS image while the target is the RS rectified image. In S-T mapping, each pixel location of the RS image, along with its row-wise camera motion, is substituted in Eq. ([\[rsImageFormation\]](#rsImageFormation){reference-type="ref" reference="rsImageFormation"}) to get the RS rectified (target) pixel location. However, some pixels in the target RS rectified image may go unfilled, leaving holes in the resultant image. In T-S mapping, for every pixel location of the RS rectified image (target), the same set of equations (i.e., Eq. ([\[rsImageFormation\]](#rsImageFormation){reference-type="ref" reference="rsImageFormation"})) can be used to solve for the RS image (source) coordinates, provided the camera motion acting on the RS rectified coordinates is known.
|
| 26 |
+
|
| 27 |
+
![Our network architecture.](){#arch height="7cm" width="13cm"}
|
| 28 |
+
|
| 29 |
+
# Method
|
| 30 |
+
|
| 31 |
+
Our main objective is to find a direct mapping from the input RS image to the target RS rectified image. This requires estimation of row-wise camera motion parameters, and the correspondence between estimated motion parameters and target pixel locations. We achieve the above in a principled manner as follows. We propose to use image features of input RS image for estimating camera motion parameters and devise a mapping function to relate estimated motion parameters with pixel locations of target image.
|
| 32 |
+
|
| 33 |
+
Our network architecture (Fig. [1](#arch){reference-type="ref" reference="arch"}) consists of five basic modules: the motion block, trajectory module, row block, RS regeneration module and RS rectification module. The motion block predicts camera motion parameters ($t_x$ and $r_z$) for each row of the input RS image. The trajectory module ensures that the estimated camera motion parameters follow a smooth and continuous curve as we traverse the rows of the RS image, in compliance with real-life camera trajectories. For each pixel location in the target image, the corresponding camera motion is found using the row block. The outputs of the row block and the trajectory module are used by the RS rectification module to warp the input RS image into the RS rectified image. For faster convergence during training and to better condition the optimisation process, we also employ an RS regeneration module, which takes the motion parameters from the trajectory module and warps the GS image to estimate the given (input) RS image. A detailed discussion of each of the modules follows.
|
| 34 |
+
|
| 35 |
+
**Motion block** This consists of a base block followed by $t_{x}$ (translation) and $r_{z}$ (rotation) blocks, respectively. The base block extracts features from the input RS image, which are then used to find the row-wise translation and rotation motion of the input image. Thus, the motion block takes an input color image of size $r \times r \times 3$ and outputs two 1D vectors of length $r$ indicating the rotation and translation motion parameters for each row of the input RS image. The base block is designed using three convolutional layers. Both the translation and rotation blocks, which take the output of the base network, are designed using three convolutional layers followed by two fully connected (FC) layers. The final FC layer is of dimension \[$r$,1\], reflecting the motion for every row of the input RS image. Each convolutional layer is followed by batch normalization.
|
| 36 |
+
|
| 37 |
+
**Row block** As discussed in the second section, every pixel coordinate in the GS (target) image is substituted in Eq. ([\[rsImageFormation\]](#rsImageFormation){reference-type="ref" reference="rsImageFormation"}) along with the corresponding camera motion to get the RS (source) image coordinate. However, camera motion acting on each GS pixel to form the RS image is known only to the extent that it is one of the motions experienced by the rows of RS image. Specifically, for a given input RS image, all the pixels in a row stem from a single camera motion and the motion will generally be different for each row. In contrast, pixels in a row of the GS image need not be influenced by a single motion. The ambiguity of which motion to associate to a pixel in target image was addressed in [@rengarajan2017unrolling] as a post-processing step, which is a complicated exercise and implicitly constrains the estimated motion parameters.
|
| 38 |
+
|
| 39 |
+
We propose to use a deep network to solve this issue thus rendering our network *end-to-end*. The row block takes an image of size $r\times r\times 3$ and outputs a matrix of dimension $r\times r$, with each location indicating which row motion of RS image must be considered from the estimated motion parameters for rectification. In real camera trajectory, camera motion is typically smooth. Consequently, the output of row block at each coordinate location can be expected to be close to its corresponding row number. Hence, for both stability and faster convergence, we learn to solve only for the residual (motivated by [@he2016deep; @kim2016accurate]). The residual image is an $r \times r$ matrix and is the output of the row block. This image is added to another matrix (say $A$) which is initialized with its own row number i.e., $A(i,j) = i$ for the reason stated above. The resultant matrix values indicate which camera motion is to be considered from the estimated motion parameters for rectification of RS image. From now, we refer to output of row block as the sum of learnt residual with a matrix initialized with its row number at each coordinate position. The row block consists of five convolutional layers with each layer followed by batch normalization and an activation function. Use of three layers for base block and a total of 6 convolution layers for motion block is partly motivated by [@rengarajan2017unrolling] (it uses five convolution layers for motion estimation). Since, the objective of row block (finding residual for) is comparatively less complex compared to motion block we used only 3 layers.
|
| 40 |
+
|
| 41 |
+
<figure id="rfImages">
|
| 42 |
+
<table>
|
| 43 |
+
<tbody>
|
| 44 |
+
<tr>
|
| 45 |
+
<td style="text-align: center;"><figure>
|
| 46 |
+
<img src="9.png" />
|
| 47 |
+
<figcaption>Input RS Frames</figcaption>
|
| 48 |
+
</figure></td>
|
| 49 |
+
<td style="text-align: center;"><figure>
|
| 50 |
+
<img src="9_ICCV_2017.jpg" />
|
| 51 |
+
<figcaption><span class="citation" data-cites="purkait2017rolling"></span>, <span class="math inline">|<em>R</em><sub><em>F</em></sub>|</span>=29.0</figcaption>
|
| 52 |
+
</figure></td>
|
| 53 |
+
<td style="text-align: center;"><figure>
|
| 54 |
+
<img src="9_CVPR_2016_rect.png" />
|
| 55 |
+
<figcaption><span class="citation" data-cites="rengarajan2016bows"></span>, <span class="math inline">|<em>R</em><sub><em>F</em></sub>|</span>=35.7</figcaption>
|
| 56 |
+
</figure></td>
|
| 57 |
+
<td style="text-align: center;"><figure>
|
| 58 |
+
<img src="9_CVPR_2018.png" />
|
| 59 |
+
<figcaption><span class="citation" data-cites="lao2018robust"></span>, <span class="math inline">|<em>R</em><sub><em>F</em></sub>|</span>=40.7</figcaption>
|
| 60 |
+
</figure></td>
|
| 61 |
+
<td style="text-align: center;"><figure>
|
| 62 |
+
<img src="9_CVPR_2017_rect.png" />
|
| 63 |
+
<figcaption><span class="citation" data-cites="rengarajan2017unrolling"></span>, <span class="math inline">|<em>R</em><sub><em>F</em></sub>|</span>=41.7</figcaption>
|
| 64 |
+
</figure></td>
|
| 65 |
+
<td style="text-align: center;"><figure>
|
| 66 |
+
<img src="9_Ours.png" />
|
| 67 |
+
<figcaption>Ours,<span class="math inline">|<em>R</em><sub><em>F</em></sub>|</span>=45.92</figcaption>
|
| 68 |
+
</figure></td>
|
| 69 |
+
<td style="text-align: center;"></td>
|
| 70 |
+
</tr>
|
| 71 |
+
</tbody>
|
| 72 |
+
</table>
|
| 73 |
+
<figcaption>Comparison of <span class="math inline">|<em>R</em><sub><em>F</em></sub>|</span> value with different algorithms on a real video.</figcaption>
|
| 74 |
+
</figure>
|
| 75 |
+
|
| 76 |
+
Given an input RS image, using the trajectory module, row block and RS rectification module, an input image can be rectified to give an RS distortion-free or rectified image. We employ different loss functions to enable the network to learn the rectification process.
|
| 77 |
+
|
| 78 |
+
The first loss is the mean squared error (MSE) between rectified RS image and ground truth image but with a modification. In the rectified RS image, it is possible that certain regions in the boundary are not recovered (when compared with GS image) since these regions were not present in the RS image itself due to camera motion. This can be noticed in Fig. 5 (third row) where the building has been rectified but there are regions on the boundary where the rectification algorithm could not retrieve pixel values as they were not present in the original RS image. To account for this effect, we used a visibility aware MSE loss where MSE is considered between two pixels only if the intensity in at least one of the channels in the rectified RS image is non-zero. Let $I_{\mbox{rs}}, I_{\mbox{gs}}, I_{\mbox{rs\_rec}}$ be input RS, ground truth GS, and RS rectified image, respectively. Then, we define mask $M_{\mbox{rs\_rec}}$, such that $$\begin{equation*}
|
| 79 |
+
M_{\mbox{rs\_rec}}(i,j) = \begin{cases} 0 & \mbox{if} \sum \limits_{k=1}^3 I_{\mbox{rs\_rec}}(i,j,k) = 0\\ 1 & \ensuremath{\,\textrm{otherwise}}\end{cases}
|
| 80 |
+
\end{equation*}$$ where $k$ indicates color channel in the RGB image. The error between GS and RS rectified image can be written as $$\begin{equation*}
|
| 81 |
+
L_{\mbox{rs\_rec\_MSE}} = || I_{\mbox{rs\_rec}} - M_{\mbox{rs\_rec}} \otimes I_{\mbox{gs}}||^2_2
|
| 82 |
+
\end{equation*}$$ where $\otimes$ refers to point-wise multiplication.
|
| 83 |
+
|
| 84 |
+
The second loss that we devise is based on the error between the given RS image and the GS image distorted by estimated motion parameters. To account for holes in the boundary, we again define mask $M_{\mbox{rs\_reg}}(i,j)$ such that $$\begin{equation*}
|
| 85 |
+
M_{\mbox{rs\_reg}}(i,j) = \begin{cases} 0 & \mbox{if} \sum \limits_{k=1}^3 I_{\mbox{rs\_reg}}(i,j,k) = 0\\ 1 & \ensuremath{\,\textrm{otherwise}}\end{cases}
|
| 86 |
+
\end{equation*}$$ where $I_{\mbox{rs\_reg}}$ is the image obtained by applying estimated motion parameters on the GS image. The error between the RS image and the RS regenerated image is given by $$\begin{equation*}
|
| 87 |
+
L_{\mbox{rs\_reg\_MSE}} = || I_{\mbox{rs\_reg}} - M_{\mbox{rs\_reg}} \otimes I_{\mbox{rs}}||^2_2
|
| 88 |
+
\end{equation*}$$ Since edges play a very important role in RS rectification, we also compare Sobel edges of RS rectified and RS regenerated images with ground truth GS and input RS images, respectively. Let the Sobel operation be represented as $E(.)$. Then the edge losses for regeneration phase and rectification phase can be formulated as $$\begin{equation*}
|
| 89 |
+
L_{\mbox{rs\_rec\_edge}} = || E(I_{\mbox{rs\_rec}}) - M_{\mbox{rs\_rec}} \otimes E(I_{\mbox{gs}})||^2_2
|
| 90 |
+
\end{equation*}$$ $$\begin{equation*}
|
| 91 |
+
L_{\mbox{rs\_reg\_edge}} = || E(I_{\mbox{rs\_reg}}) - M_{\mbox{rs\_reg}} \otimes E(I_{\mbox{rs}})||^2_2
|
| 92 |
+
\end{equation*}$$ The overall loss function (please refer to Appendix for back propagation equations w.r.t different loss functions) of our network is a combination of the afore-mentioned loss functions and is given by $$\begin{multline}
|
| 93 |
+
L_{total} = \lambda_1L_{\mbox{rs\_rec\_MSE}} + \lambda_2 L_{\mbox{rs\_reg\_MSE}}
|
| 94 |
+
+ \lambda_3 L_{\mbox{rs\_rec\_edge}} + \lambda_4 L_{\mbox{rs\_reg\_edge}}
|
| 95 |
+
\label{total_sum_loss}
|
| 96 |
+
\end{multline}$$
|
| 97 |
+
|
| 98 |
+
<figure id="allComparison">
|
| 99 |
+
<table>
|
| 100 |
+
<tbody>
|
| 101 |
+
<tr>
|
| 102 |
+
<td style="text-align: center;"><figure>
|
| 103 |
+
<img src="16_rs.png" />
|
| 104 |
+
</figure></td>
|
| 105 |
+
<td style="text-align: center;"><figure>
|
| 106 |
+
<img src="16_CVPR_2016.png" />
|
| 107 |
+
</figure></td>
|
| 108 |
+
<td style="text-align: center;"><figure>
|
| 109 |
+
<img src="16_ICCV_2017.png" />
|
| 110 |
+
</figure></td>
|
| 111 |
+
<td style="text-align: center;"><figure>
|
| 112 |
+
<img src="16_CVPR_2018.png" />
|
| 113 |
+
</figure></td>
|
| 114 |
+
<td style="text-align: center;"><figure>
|
| 115 |
+
<img src="16_CVPR_2017.png" />
|
| 116 |
+
</figure></td>
|
| 117 |
+
<td style="text-align: center;"><figure>
|
| 118 |
+
<img src="16_Ours.png" />
|
| 119 |
+
</figure></td>
|
| 120 |
+
<td style="text-align: center;"><figure>
|
| 121 |
+
<img src="16_target.png" />
|
| 122 |
+
</figure></td>
|
| 123 |
+
</tr>
|
| 124 |
+
<tr>
|
| 125 |
+
<td style="text-align: center;"><figure>
|
| 126 |
+
<img src="5_rs.png" />
|
| 127 |
+
</figure></td>
|
| 128 |
+
<td style="text-align: center;"><figure>
|
| 129 |
+
<img src="5_CVPR_2016.png" />
|
| 130 |
+
</figure></td>
|
| 131 |
+
<td style="text-align: center;"><figure>
|
| 132 |
+
<img src="5_ICCV_2017.jpg" />
|
| 133 |
+
</figure></td>
|
| 134 |
+
<td style="text-align: center;"><figure>
|
| 135 |
+
<img src="5CVPR_2018.png" />
|
| 136 |
+
</figure></td>
|
| 137 |
+
<td style="text-align: center;"><figure>
|
| 138 |
+
<img src="5_CVPR_2017.png" />
|
| 139 |
+
</figure></td>
|
| 140 |
+
<td style="text-align: center;"><figure>
|
| 141 |
+
<img src="5_Ours.png" />
|
| 142 |
+
</figure></td>
|
| 143 |
+
<td style="text-align: center;"><figure>
|
| 144 |
+
<img src="5_GT.png" />
|
| 145 |
+
</figure></td>
|
| 146 |
+
</tr>
|
| 147 |
+
<tr>
|
| 148 |
+
<td style="text-align: center;"></td>
|
| 149 |
+
<td style="text-align: center;"></td>
|
| 150 |
+
<td style="text-align: center;"></td>
|
| 151 |
+
<td style="text-align: center;"></td>
|
| 152 |
+
<td style="text-align: center;"></td>
|
| 153 |
+
<td style="text-align: center;"></td>
|
| 154 |
+
<td style="text-align: center;"></td>
|
| 155 |
+
</tr>
|
| 156 |
+
<tr>
|
| 157 |
+
<td style="text-align: center;"><figure>
|
| 158 |
+
<img src="4_rs.jpg" />
|
| 159 |
+
<figcaption>Input RS image</figcaption>
|
| 160 |
+
</figure></td>
|
| 161 |
+
<td style="text-align: center;"><figure>
|
| 162 |
+
<img src="4_2016_CVPR.jpg" />
|
| 163 |
+
<figcaption aria-hidden="true"><span class="citation" data-cites="rengarajan2016bows"></span></figcaption>
|
| 164 |
+
</figure></td>
|
| 165 |
+
<td style="text-align: center;"><figure>
|
| 166 |
+
<img src="4_ICCV_2017.jpg" />
|
| 167 |
+
<figcaption aria-hidden="true"><span class="citation" data-cites="purkait2017rolling"></span></figcaption>
|
| 168 |
+
</figure></td>
|
| 169 |
+
<td style="text-align: center;"><figure>
|
| 170 |
+
<img src="4_CVPR_2018.jpg" />
|
| 171 |
+
<figcaption aria-hidden="true"><span class="citation" data-cites="lao2018robust"></span></figcaption>
|
| 172 |
+
</figure></td>
|
| 173 |
+
<td style="text-align: center;"><figure>
|
| 174 |
+
<img src="4_CVPR_2017.jpg" />
|
| 175 |
+
<figcaption><span class="citation" data-cites="rengarajan2017unrolling"></span> </figcaption>
|
| 176 |
+
</figure></td>
|
| 177 |
+
<td style="text-align: center;"><figure>
|
| 178 |
+
<img src="4_Ours.jpg" />
|
| 179 |
+
<figcaption>Ours</figcaption>
|
| 180 |
+
</figure></td>
|
| 181 |
+
<td style="text-align: center;"><figure>
|
| 182 |
+
<img src="4_GT.png" />
|
| 183 |
+
<figcaption>Ground truth</figcaption>
|
| 184 |
+
</figure></td>
|
| 185 |
+
</tr>
|
| 186 |
+
<tr>
|
| 187 |
+
<td style="text-align: center;"></td>
|
| 188 |
+
<td style="text-align: center;"></td>
|
| 189 |
+
<td style="text-align: center;"></td>
|
| 190 |
+
<td style="text-align: center;"></td>
|
| 191 |
+
<td style="text-align: center;"></td>
|
| 192 |
+
<td style="text-align: center;"></td>
|
| 193 |
+
<td style="text-align: center;"></td>
|
| 194 |
+
</tr>
|
| 195 |
+
</tbody>
|
| 196 |
+
</table>
|
| 197 |
+
<figcaption>Visual comparisons on synthetic examples with different RS rectification methods.</figcaption>
|
| 198 |
+
</figure>
|
2201.12990/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-01-28T02:31:19.314Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" etag="zT7_SMDN0ZAeQ9UOkRbH" version="16.4.7" type="google" pages="3"><diagram id="4F15gPhPdplM0iygjr9l" name="Page-2">7Z3bcptIEIafRpdJMcBwuJTkw9bWencr3tpNrlJEwhKxLByEYzlPv8NhEAxgjTBzsN2+sRgQQvDR/N093ZpY87v9ZRLcr6/iZbiZmMZyP7HOJqaJbNMh/7KRp2LE9e1iYJVEy3Kjw8B19CssB41y9CFahrvGhmkcb9Lovjm4iLfbcJE2xoIkiR+bm93Em+an3gersDVwvQg27dH/omW6LkY90z2M/xZGqzX9ZOT4xZpvweJ2lcQP2/LztvE2LNbcBXQ35c5362AZPxZD+de2zifWPInjtHh1t5+Hm+y00jN2m26Sq+/7q6vfkf2nvXj89HVtfChO2MUpb6m+XBJu08G7vrm99JN4/sPbTx8+/eX/u7bjL+VbjF36RM9kfi7C7D3GxJrFSbqOV/E22PwRx/dkEJHB72GaPpUMBA9pTIbW6d2mXEuOMXn6XL4/X/iSLXw0MV0+29fXnj2VSzfxNi33avpkeZcGSTrN4CAji02w20ULOnwRbQ6ft6Qb5RcvHynXG/n2SXxbUUGu2ozzdFKOg2QVps+cQ1RuGC4bjJZX4TKM70LyNckGSbgJ0uhnE9mgJH9VbVe99e84IgdoGvQutcp7srxHbYvZxS5+SBZh+a4DCeRF7TAOQzkfJ7CCWqxk3/i6XCzP/RBg9lH6ufb6C6WDvD6gki081blpI6YDX3b2QWTlPN7ESX6WrGUQejeLasvamqk/t89nZE3zptvdhuliXX78abAWCDxzES1eqI2xoW7geCp7Jg97z5zFF2BpfHRxjUw0hMxyF6rgrCF3gfy57/Nge+MtwsVCIYGjm9UXEVge9s9g81B+0sQk9tjefzWKFy1C03CfNpGqn+HycgWbaLXNLjE5uSEZn/0MkzQiwmZarriLlstsj7PHdZSG1/dBfnIfiYxrEb8Mdut8AXUBRd4ZbVf/5PQbhwGy5ByWZnGaxnflFvdhEpGTFybX1abdTOV/J5OSfdNw/+ylLdf6RvPBR43T40Hq2eXQuqbyTEMQC3YHC6x5at2YNQ7akqR+tdxyuXaGjfyv71n5kdqn02zSMZPEb4JqBkf3x1UNGdyBDB17oVQzGalmeS6XVGvvyGN25DLHUpwZYZoPA+mSSPfHfiyqIR0b1jikY/ZYBJPunK4wVfo6ihXlcHd6vBtBLw/FlcrPwRIeJ0iWa1F5s23XYnhMqM+VFugiqwOw2zJaJiN/WV0r2DJ6p5M9lu9dN5PPO94a+M9iMBROl9WkC7ty6fI7FGbhVyPwq+X61RiXe1DlV1OL2oLhCYIswmGw2PSC6igL9XQ6aADTIJsG9bahHXmAVCVzeWVnIB02GsXr7LN0ub4jVXQgSw9NOzCgpZJOPodvZC2MMKdPRid8aBIUQO1HlCLO3OdBe3vuE0VGtnfuOFiuKesK0UNiUokD5WDVktnpgAHyNSLyNdz2RU0exjEHZhxZaeY4cuNByAWEJSFs8roNahB27YGpRBZhz5CbSkQDIuaQSxSRS+Qm/MVBdNZoShaBviZ+hhZOhjy/VB5fTaHp+nItGhWsUvnqNFbVGn3n0Z7G2PGQhiWHMYRZxhjhJ7i8gB4xZALVO7KVfVHlyHaIfoj2n2w5ZE0hYKbu8aYFRrMcfXPzf0EITHyi0NYsBkavNgQQxAcQeKfnmx6nJZITabBZgzU0WIZZvw9J1uUwP18a6+7YT101rA+OqrGsu45k1mGGvi5RNe5bgdfsSyoi7kotFDoRPEzZOlG9i+n10ACTTcXTgFkFptxr6JuHDpNN5dOg3DZYXPFu6HWitNeJ+LBWT2TjZOXcZ+wkKWfaJwi6p2jfPUU40y7DNFvtNZRpz5A7g9vqCHRAva2O9baikW5bV765F+17wzmSZBecxrA0KUp4Y4W24vlrYuMhvqf7Ub3hscFk0fxxFSuA6H3TohdZI4ne1pQ42aIXg+gF0Vu8gS3RGSp62xOLJYteuSkQEL3amum2dR0oehFSLHoH9E0C0asBfyOJXlZvSBe97VoNmGiouK2Az+5i6PMZGbbkB7QudRivuq/AzD83pngEK3h0JoPNO5UWjz4J/2UNiFXUY7zSvgKsusOhtxSIkux+Az6SW2pGvyf0G1BfpuEbqhuhmx0wsIYJ5p9KtS9qirU9d6R+Az6nJzGaPesqHgGEhSDM25xJDcK+P1K/AWRgJJfhAW2mYGq0iKnR3IiPHP+TrgIVZlf0azgg0WGVxldTaSLDlmzSBqQ63m3HgXFc3KNBEFtSPTHbiYDAJ7cVgd1XKAKtCKT7uAfLo8zJhRTBy20HltWOk5kNyJtLGM109NWRQC8C+TVmyuNj9EEGwQXxwQXen+3BNqclUlOfPTiQxtZne5Lrs2nMA1gXzjrm7bvB/dRVw/rgiBvLOjIkN97AHWxDxE1FxI3/XuC1+3JS+Li/aRX4mLKFononk9IJ3Qg0qD9X7zf0/agDdCOQT4MGxoErGA6VWW+6MostD6yM1EtLt1s7Eq2dFU75f6+VWQNTQrLbEfisCR3KNDKw3JnfWO5vPuhYmvVMFvw9lWa1zetI/QjaOXjBuQysSTWDJrVZF/PXUZvFNiRABudPOB2VHGRPcgmkBw7C9x0LX7ZEcLDwbU2akyx86c0DwlegLX6dLQkGC9+Oucdyha8jNxOio/CFngQ95nWkngTSha8Dnbheo/BtT1oeKnxZySFf+PalgyZ4nq7DNICsUJUHYGaFOFNvPjs7GU7+/IBtoo+4gYdtK84XOX35ogoXSBv14HI+dWaOIxcXHyvGpetnwhu4mIBLNy7e2XTmnovExXW0sy595RAVLhbg0mNdvKmFhT6M2rgoty5dP6zhbNLy7JA1i+oEOT8e4jQ/PXQ2Z22oRhIdzHbwYZdf0ynZwLTv9/mmhzc5q+J/xuPj14k7I7ucGxP3jCJaHAn5ZsXB0O2PksuPaRKSIwy+5bvKIcpUYn6W8WyS05DJ/+Jb1Ga+MrdFx51SB9pWABtbmFXFh2uooa75o5Yw1rrKLQSydgw0BKCJAM22VINGE9U6gIbAookCDZvKQesqo1AIGlg0MaDR66wONFMP0Cb43ACdNi5s7G8yd1k1VypsXdUOimEDyyYGNlc5bF3Bc4WwgV4TBpvrKIetK/SuGDawbEJgq3qTq4OtK3CvBjYEmm1U2NjGJRpotq6wv2LYwLKJgU29ZpOcNDgGG2g2YbBpoNk0yRrUYQPLJgQ29ZqN5sh0gQ00mzjYlD9GPU2yB6DZhMPmK3cQPH0yCKD
ZxMKGDKScNn1SCCDaRNPmqFZtyNBHtkF6VHBcV7lsI+ZVP9rAuAmhTb1uQ4Y+wg0SpGJp00C4IUMf5QYpUtG4CVRuZDGJswtbrbtMgvv1VbwMsy3+Bw==</diagram><diagram name="Copy of Page-2" id="eGaYcpIOMVecbKvvifkl">7Z1Lc5vIFoB/jZaTonmzlGQndzGpmipP1Z2sUoxEJG5k4UE4lufXX4RohJrGamH6nLZ9solpEELNx+G8e+LM7/df8vhh/TVbJpuJbS33E+dmYtvMtf3yv8PI83EkiNzjwCpPl/VBp4G79N+kHrTq0cd0mezODiyybFOkD+eDi2y7TRbF2Vic59nT+WE/ss35tz7Eq6QzcLeIN93R/6bLYn0cDe3gNP6fJF2t+TczPzru+Tte/Fzl2eO2/r5ttk2Oe+5jfpr65Lt1vMyejkPVz3ZuJ848z7Li+Nf9fp5sDtPKZyze3v2xn+6+Lfbfv7o3v9+k0df1b8cJ+3zNR5oflyfbYtxT2/VPK575TFZzkRw+Y02cWZYX62yVbePN71n2UA6ycvB/SVE81wzEj0VWDq2L+029t7zG/Pmv+vPVxrfDxifb49s3+/bem+d660e2Leqz2lG5vSvivJge4ChHFpt4t0sXfPhzujl935IfVN28aqTeb1XH59nPhoryrs0Up5NzHOer5MXjnOOByfKM0foufEmy+6T8meUBebKJi/TXObJxTf6qOa756B9ZWl6gbfGn1KmfyfoZZZFwil32mC+S+lMnEso/WpdxGqr4uIIVp8PK4Rff1Zv13A8BZp8Wf7X+/sbpKP8+oXLYeG5z00XMBL7cwxeVO+fZJsurWXKWcRL+WDRHtvZMo7l7Oyv3nD90u59JsVjXX38drEcEXjjOU4XaHhvqMxyvZc9VYe+FWXwFltanwGuRyYaQWZ8CC84Wcp9ZNI8iFWx/hItksUAkcHSx+ioC68v+FW8e62+a2CWV7v67dfyjQ2iR7ItzpNozXN+ueJOutodbXE5uUo7PfiV5kZaKzbTecZ8ul4czzp7WaZHcPcTV5D6ValyH+GW8W1cbTAZU+cl0u/qzot86DZRb/mlrlhVFdl8f8ZDkaTl5SX7XHCpnqvp3NSmHX5rsX7y19d7IOn/xBfX200nVc+uhdUvLsy1NLPgSFkTx1HkwWxx0VZL23Qrq7dYMW9W/vnflJy6frpNJl0SSughqCRzTX1ctZDwJMnzslaqaLapqbqCkqnVPFAoncoRrOc6MNp0vINKBSGejmxs4qNu+Mw7qdgiLeni9iolp7CCrlMPt6RGfBLNslAgUoJMsvIwQlHHR2LNd42K4V6jPmNZoJCMSKJeNji1owKGg2mqWjdzNi2F/tyXly8a3ATa0JhC18+Wc89VsQ/HFTcKudc3Iuoa1rh1+M7Csay5UOzA8k6tFOwyOaLli+1q4uSOhgUQDNA34sqF7tylgKdxe6DikH6jFISVax/mJgsiH1To8M7TagW4tTDrVjL6xteFA0Syzo7GfhNcJLd8UzoKXQXuHBlQAZEAJBrrve7CiTOaop/AkigHle9gqcyiBgaI2WqI2qvIFJxjj2wPjjqJq5vvADqGIEAZC2DY7nhi4A+OJIsKhBRtPtAf4zCmgqCOgqEz4q73ootCEVQKbRHVsO8MIIwPOLoXj61zRDMQEcN18Sd7B2vmSCqtmj7nZtNcxdtml4cEwxjyRMUHx01xkYPe5+ykSCG7INvIFy5CV3Gzy9l8rOXjGqPa3k5C/pxoWGE1y9GXo/0suMP2BQtcwH5hNOfpgDgTVJH2uvhriaXBFgTXUWeaJdh8D1sspSx+M9Wjsty4O64O9aiLrgQ/MOqXpm+JVU34UVMU+TDTeloUWjnoiWZjQeiK6icnhpGRTBBo8UQPDthr4W5uSTQ2gAV82KPm7qeMJascT/W6tHs/G1Zpzn7AD0pwdpf491EOlB3fIHiramQ4EpoORmA4t2Axup/s+pppbI2tudSPdla5quRfdZ8O/EGTX3WXNkKKEd1Zqq5+/c2xCpvZ2v6hvhKIzWTd/SsUKpPS+a6WXOSMpvZ2UOGilNyCll5Te4wfEEp2hSm83sRhY6YUNgZDSa6yY7krXgUovY8hK74DeSaT0GsDfSEqvqG9AK73crUyJhua0FYjEUwx9PzPLhX1Bu6bUYbzpvgKz6NaaeiNIwYuZDDwX5mImgzd6Ev7rmqIrxaeor4DMWPeScKkRJeh+AxGDLTVz+8s0KG0CuEwjspCzJlxXAoMomCj/FFS+4BRrh8FI/QYiRUtiNHkmKx4hhLUgrNqcCQfhKBqp3wCzPNjUaHdAmylKjdaRGq2M+Mj+P3AtEDG6Yl7DAUCDFYwvoe245QKLtAGhjg/bcWAcE/eyEwTKH+h14INtRcCfMmpFgG/jniQPlpHrUYhgBNkB1Y5TyAZUjSWMJTq4jKReBAbUmKH7xzxbQgM5F3Q4FzxlSeQrSiKc+uzBjjSxPjsErs/2ZIEBYl0L66p9N5TfujisD/a4iawzC7jxhgdbfkIetxGeBVW5D7SqdH/TKrIxoRVFA4xMWdcq6kYAg4NYQoZvN/Qt6kDdCOBpMEA4KDnDqTLrXVdmieWBjZB6bel250S6dWfElP+PWpk1MCQE3Y4gEkXoUKaZ5cFmfvvdIMFHK816IQr+kUqzuuJ1pH4E3Ri85lgGdyBiJ3kYUpv1ef42arPEhgTMUlzC6aLKUZ4JmECJC5kU3w+m+IolgoMV307SHLDi61MfLv2y+G22JBis+Epyj4EVX2rERT0JesTrSD0J4BVf6sT1FhXfbtLyUMVXVDngFd++cNDEmxfrpIgpKtTEAYSsEH8azmc3V8OpHh9wBTgc3rgPKzzg90WLGlgoaNQDy+3Un/k+JCwRcmjRly0RfgaLTbDIYQlvprPgVicsoWeYZOkrhGhgcQiWHskSTh1P62tIhAVbsvBYzhks/qao56bcs2imx//nMSuqyeFZnK2hFkd88HCC33bVHZ2WB9juw7469PQhf3X8/0Dj0/dJMCtPObcmwQ0H9Hgl5S87Xgw//iK36pDmSXmF8d/VqSqEDtphNcvebFKxcFD7j7+ilfEqPBSS56SNs4uAWqcgy+2ixmR5o4421mRlFhpZuwQaI9C0gCaRacCg2eaAxkii6QLNDtFBk5VPIIJGEk0LaA5faAoPNFnPJgTQJt6tRXrauLCJazHbEqkWgMImq3JAho0kmx7YAnTYZE5zRNhIX9MGW+CjwyZzuiPDRpJNC2xNT3I82GROexzYGOlso8ImNiwxQGeTOf2RYSPJpgc2dJ2N+11MgY10Nm2w4etsoSFRgzZsJNm0wIavs4XmWKOks2mGDf81ao41SjqbXtgidAMhNMwaJZ1NG2zMYui0GWaOktKmkTYfW2tjVnctBer2e223X8aA2v2KSwe6jnAKzXUazFJaCIEqlFErlC82EWRWpEy2WW0EmYWxUML7LCrWvc75qBiOLmBfiWH/kglUqQa8ZIIr0dlBKwGaAl7qBa29F7SOVxfO+mtOOLD
zuS2YEU4gXIvmFgiMyeIKhLsW3Jlqv2f1VyQO7p41sPm5iLsnXot23CV0U/NzjObn1zwNhtktbEAzpHfWN+aNNEw0m0K5jBRXSXdFXVe7jOzaO9RC5vUtZNRZhG424wXQhPUtIEGLFIJb3B5ffQrP4qb1IxCXExGjEPgOGFpAwhwc8KUD90dQaw7OynsJoXfqi9EL2ZltUCU75aFpI63pbIZImiGl7JSEppU0T5bxCEyaIXXslICml7Sgq6lBk2ZOci2159BcV4xeNsBsc5JrqT+HXtrw6waartqm0EY6mzbaDCgcYDwX2CTcSLhpwk1j5UC5mWeHG9vs+5LHD+uv2TI5HPF/</diagram><diagram id="MwR6DceT7Erss7Lp49ei" name="Page-4">7Vxbc6M2FP41nrYPeHQF9LhJdrcPbWc7O223TzsEFFuzGFyQY2d/fSVu5iLHlwDOxs5kxkgCYc736ZyjD8kTfLvYfEy85fz3OODhBIFgM8F3E6T+KFYfuuYpr3EYyStmiQjyKrit+Cy+86ISFLUrEfC0caKM41CKZbPSj6OI+7JR5yVJvG6e9hCHzbsuvRnvVHz2vbBb+48I5DyvdZGzrf+Vi9m8vDO0Wd6y8MqTiy7SuRfE67wqezj8foJvkziW+dFic8tDbbzSLh8p+xL7fvj3tz9X68Vfy/s/WGrlZvlwzCXVIyQ8kv12jfKuH71wVdireFb5VBpQ9aKwUoWb9VxI/nnp+bplreii6uZyEaoSVIdeuswBfBAbrm56E3jpXB9kralM4m8VBFjVPIgwvI3DOFHlKI70HR7iSBYU0va9OfDBCwM98kTyTQ2zwhAfebzgMnlSpxStFsR2fk3BahvTvLzecgSXFJ7X+FFVegUvZ1XnW9urg8L8R0CB90OhH1Aoav/m3fPwU5wKKeJINd3HUsYLZa/yhHehmOkGGbcwUhxe6s4Wm5ke7tN7LxX+1Ev8DCEvke+iWXY3MKUAUgYJs7GLCCX6ah4FVbsqJrH0im9gQRd0QCYXjig5YHApi2ontx0CjUFTR647fl5kWh40/GbXsDW7UYPZyrqEh4oEj01vazJlcYdPsVBfr8INsgZqjNBmD2m8SnxeXFT3bs/3U6Ff9qO4PeOy00+Ga/XQp0NNr1DvhVoNUdoAyWGngu3s6WhgtO2j0PZDL1Vedh/GDTYoXJOnL4WfzQr/1gt3m0bpqSi9jCa50fYnC6+ETq0RfzqZmlxywbhccs7EJXjlUv9cslwwtVXeRCFwcZY8NaMSA1PMXEJcBm3gjks0dwiibbk1hcqIdV81dTHaw7Gs9IknQj0bT8YhHr4Sb2TisaGJR1CTeFAlkVfi7SFeO4MCpxEPolb4HDkVKydvo8TPKb1G0PM6MgZYvXlkrsEhuPYm5na9JePnncZDZECY6H96+5B4/sS5UUdLMXFUE1Alkh1lZ7SZIPlGmrBvS55dFdQr1DtfMUBHp46stxBBEO5SZJN4FQWZ8wJtT9amWlYuvvGhIvPRyhzeMb+qEROWI6vOTDKULAdNSmsBshRhwDW06nDOpfeVHIzu4aglPBXfvftKR11qOmdPSRW97nRfKxmnOVAm3IaCqnQG7VhegwoBk4Q6GFTHSainRPe34X57Etb2S3RDO+DjhNSLCrFvRDyFJvXUDrU3TZde1MDa/m+l367e+HmAfKetOLv/WRlU3RuUH79kkAKNsvXgLUT4lJ865+Ej1y641p57Vd2K0HJTb8hvqluiOFl4Ya1tXaCpGwkAeUvIpfLzlvrOvohm3St1hLCKoKDbqrhQtgkVpqOiV1B+l6xFJl6UPqi+yl6ztEAzLU6C5h2rC+89/9ssi/1Wy1qIuLmhEGHFAa1sFsaebN8lEOky9AojiigURUOJhjqa5Z+7oiYoo2aOq+JJDm15nTmY7nm/PFYUhG5rgNiGN4mGcT5cwmLSiK9Z6QvfF7dAPn9aahJoryj3OfdwyNlBNomhV5B7Hcru2VFGJlFyR6xUh1/xJU0y7eYk06bnnmQik6zXQKtMa+Al4WQ5TaCoQQ2AbFSgdqpzCFgXL95YlT8rR5bBDyLjCrjBANultBnRuqzR5TbBYoZJxshu0KS15WDlILXxuqioZSHUAgyefXSZtLKrkPLDCilWJ9/4oWUUq0oOntFRIBlTSEEHLNz8wdbYj+sDW9PpUtmuAWpKCNtrS/rD85DFk0NsX4HmefXlAnHVrvoHuTVRfgWyxlW8GiBKguYKp/Ovj8DPqFdXmHuC+fxSNH5G9rrC3I/PPv9bJWzSzHpLkeqAThD+8IGpv13oq2w4EMrgtQsY05c001xEa+feiUTdMW9Qc0Jtw9GyrGrre7UYsZtlYdOLYDzYlB+fcZPwUvDuBAZTx2UQQewCh2HqOq0ZzJQyAohDkQsdiBzmHOIa6kw6kDQWdDIPUKNLGq8yupiotOapvHAmHbCyblQmKZswFxCKIIOMULdNJJfaQFGIQOZg14Zun0R6ztvUSEYvnDIHrM0blzKKLRgrGLWWSOyO73EpoQRhB1KivA/ZGZj6okzpcRp+CVw6a86ouZlDlgMgIMRm2HEdBDqOxlZkoqpBhzVqZ3ANEbHYMfGKexcfr3rYJ803Qhb7uAAuyvmOQVqUtju5dOGpVuh3s2C+VveAUfNK1h4T2CTEyYuPKW4xy20RZuDFx7iHXdBvb/sAKdcevhRe7DThZdQeF97j9hrv+R2WN4Etwn0N3fKH3860b4CYZMnLxrY9btmpu7jbHdlgXLdM9i7Bqy+YfMUS5Gg79dpx1O5maKZX62io/IyYdMbdS15/uqKowyVphUtnMBRVcftLlfmw3f7eJ37/Pw==</diagram></mxfile>
|
2201.12990/main_diagram/main_diagram.pdf
ADDED
|
Binary file (23 kB). View file
|
|
|
2201.12990/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,435 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
The majority of machine learning problems take the form: find a function $h_{w_0,...,w_k}$ in some family of hypothesis functions $\mathcal{H}$ that are parameterized over the $w_0,...,w_k$ which best explains the data, $$\begin{equation}
|
| 4 |
+
\label{eq:data_def}
|
| 5 |
+
D = \begin{array}{c|cccccc}
|
| 6 |
+
& x_0 & \dots & x_u &
|
| 7 |
+
y_0 & \dots & y_v \\
|
| 8 |
+
\hline
|
| 9 |
+
|
| 10 |
+
D_0 & x^{0}_1 & \dots & x_u^{0} &
|
| 11 |
+
y_1^{0} & \dots & y_v^{0} \\
|
| 12 |
+
|
| 13 |
+
D_1 & x^{1}_1 & \dots & x^{1}_u &
|
| 14 |
+
y^{1}_1 & \dots & y^{1}_v \\
|
| 15 |
+
|
| 16 |
+
\vdots & \vdots & \ddots & \vdots &
|
| 17 |
+
\vdots & \ddots & \vdots \\
|
| 18 |
+
|
| 19 |
+
D_N & x^{(N)}_1 & \dots & x_u^{(N)} &
|
| 20 |
+
y^{(N)}_1 & \dots & y_v^{(N)} \\
|
| 21 |
+
\end{array},
|
| 22 |
+
\end{equation}$$ where the $D_i$ are the datapoints, the $x_i$ are the input features, and the $y_i$ are the output features.
|
| 23 |
+
|
| 24 |
+
If the $h_{w_0,...,w_k}$ are smoothly parameterized by the $w_0,...,w_k$, then this is usually accomplished by performing gradient descent on the summation of loss functions of the form $l_i(w) = l(h_w(x^{(i)}), y^{(i)})$ to compute
|
| 25 |
+
|
| 26 |
+
$$\begin{align}
|
| 27 |
+
% \resizebox{\hsize}{!}{$
|
| 28 |
+
\min_{w \in \mathcal{W}} \mathcal{L}(D,w) & \overset{\text{def}} {=} \min_{w \in \mathcal{W}} \sum_{D_i}l\left(h_w(x^{(i)}),y^{(i)}
|
| 29 |
+
\right) = \nonumber \\\label{eq:min_loss}
|
| 30 |
+
& =\min_{w \in \mathcal{W}} \sum_{D_i}l_i\left(w
|
| 31 |
+
\right) ;
|
| 32 |
+
% $}
|
| 33 |
+
\end{align}$$ *i.e., find the $w$ that best fits $D$.* For example, one of the most common loss functions, $l(f,y)=\frac{1}{2}||f-y||^2$, gives us the mean squared error and the ubiquitous method of least squares.
|
| 34 |
+
|
| 35 |
+
If the dataset $D$ has many datapoints $D_i$ then the overall computation, or *job*, is distributed as *tasks* amongst *workers*, which model a distributed network of computing devices. This solution creates a new problem; stragglers and other faults can severely impact the performance and overall training time. An emerging technique is to use distributed coded computation to mitigate stragglers and other failures in the network. Many of the current algorithms only encode the data; this paper proposes further encoding the directional derivatives as well in such a way that allows for asynchronous gradient updates using low weight codes. Furthermore the number of weights usually grow quite large as well[^1], which necessitates a "2D" coding scheme which codes both the data and the derivatives.
|
| 36 |
+
|
| 37 |
+
# Method
|
| 38 |
+
|
| 39 |
+
We quickly give some important definitions and background from coding theory, information theory, and geometry. In coded distributed computing an *erasure code* is a pair of functions $\mathcal{C} = (\mathcal{E}, \mathcal{D})$ where the workers tasks are given by the encoding procedure $$\begin{equation*}
|
| 40 |
+
\left\{ \tilde \theta_0 , ..., \tilde \theta_{n-1}\right\} := \mathcal{E} \left\{ \theta_0 , ..., \theta_{k}\right\}
|
| 41 |
+
\end{equation*}$$ and a decoding procedure for some family of fault-tolerant subsets, $\mathcal{F}_\mathcal{C}$, such that $$\begin{equation*}
|
| 42 |
+
\left\{ \tilde \theta_{i_1} , ..., \tilde \theta_{i_m}\right\} \in \mathcal{F}_\mathcal{C} \implies \mathcal{D}\left\{ \tilde \theta_{i_1} , ..., \tilde \theta_{i_m}\right\} = \left\{ \theta_0 , ...,\theta_{k}\right\}.
|
| 43 |
+
\end{equation*}$$ If $\mathcal{F}_\mathcal{C}$ consists of all the $m$-subsets (for some integer $r$) of $\left\{ \tilde \theta_0 , ..., \tilde \theta_{n-1}\right\}$, then $\mathcal{C}$ can correct any $r:= n-m$ erasures or stragglers; furthermore, if $r = n-k$ then the code is a *maximum distance separable* (MDS) code. If the encoder $\mathcal{E}$ is given by a generator matrix $\mathcal{G}_\mathcal{C}$, *i.e.,* if $$\begin{equation*}
|
| 44 |
+
\mathcal{E} \begin{bmatrix}
|
| 45 |
+
\theta_0 & ... & \theta_{k}
|
| 46 |
+
\end{bmatrix}^T =
|
| 47 |
+
\mathcal{G}_\mathcal{C}\begin{bmatrix}
|
| 48 |
+
\theta_0 & ... & \theta_{k}
|
| 49 |
+
\end{bmatrix}^T
|
| 50 |
+
\end{equation*}$$ then $\mathcal{C}$ is called a *linear code*. The *weight* of a linear code is the maximum number of 0's in the rows of the matrix $\mathcal{G}_\mathcal{C}$; the importance of the weight metric stems from the fact that it measures the amount of work that the workers do since the rows of $\mathcal{G}_\mathcal{C}$ are the worker tasks $\tilde \theta_i$. Thus, in order to avoid confusion we will use $t$ to denote the weight of the code as well as the number of *tasks* that each worker does; equivalently $t$ is the number of data partitions on the workers. To further simplify notation we abuse notation and use $\mathcal{C}$ in place of $\mathcal{G}_\mathcal{C}$ and $\mathcal{E}_\mathcal{C}$ when the context is clear.
|
| 51 |
+
|
| 52 |
+
A potential point of confusion is that the $\theta$ *need not be* the weights $w$ of the $h_w$. This is because *the derivative of loss function $\mathcal{L}$ also implicitly takes the data $D_i$ as an input;* this is an important insight used in all gradient coding algorithms. One of the key insights of this paper is to allow the coded gradient to be linear combinations of *both* $\frac{\partial}{\partial D_i}$ *and* the $\frac{\partial}{\partial w_i }$. An important notational convention is that we let the $D_i$ be partitions (or batches) of the data set instead of just datapoints as is common in the gradient coding literature; in particular, $D_0,...,D_{t-1}$ denotes a partitioning of the data-set into $t$ pieces.
|
| 53 |
+
|
| 54 |
+
The reason for the name "maximum distance separable code" is that an MDS maximizes the distances between the codewords $\mathcal{E} \left\{ \theta_0 , ..., \theta_{k}\right\}$ using the Hamming distance; in particular, maximum distance separable means that the code words $\tilde \theta \in \mathcal{C}$ have achieve the maximum $\max _{\mathcal{C}': \text{code on }\Theta} \min_{\tilde \theta , \tilde \theta ' \in \mathcal{C}'}d(\tilde \theta , \tilde \theta ' )$ where $d$ is the Hamming distance. There are two problems with this approach: the first is that MDS codes in this context require arbitrarily large amount of work, *i.e.,* they have a large weight, and the second is that the classical *discrete* MDS codes are using the wrong metric. This paper proposes to use the metric given by the projective geometry[^3] on the space of derivatives. Here we mean maximum distance separable with respect to the distance function $d(\theta , \theta') = \min\{\arccos \langle \tilde \theta ,
|
| 55 |
+
\tilde \theta '\rangle , \arccos \langle - \tilde \theta , \tilde \theta \rangle\}$.
|
| 56 |
+
|
| 57 |
+
Consider the case where there are two derivatives and we wish to create two parity tasks using only summation and subtraction in the encoding procedure. Such a code is given by the following generator matrix $$\begin{equation*}
|
| 58 |
+
\mathcal{C}=\begin{blockarray}{cc}
|
| 59 |
+
\ & \mbox{\scriptsize$ \Theta_0 $} \\
|
| 60 |
+
\begin{block}{c[c]}
|
| 61 |
+
\mbox{\scriptsize$\tilde \Theta_0$} & I \\
|
| 62 |
+
\mbox{\scriptsize$ \tilde \Theta_1$} & P \\
|
| 63 |
+
\end{block}
|
| 64 |
+
\end{blockarray}=
|
| 65 |
+
\begin{blockarray}{ccc}
|
| 66 |
+
\ & \mbox{\scriptsize$\theta_{0}$} & \mbox{\scriptsize$ \theta_{1}$} \\
|
| 67 |
+
\begin{block}{c[cc]}
|
| 68 |
+
\mbox{\scriptsize$\tilde \theta_{0}$} & 1 & 0 \\
|
| 69 |
+
\mbox{\scriptsize$\tilde \theta_{1}$} & 0 & 1 \\
|
| 70 |
+
\mbox{\scriptsize$\tilde \theta_{2}$} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
|
| 71 |
+
\mbox{\scriptsize$\tilde \theta_{3}$} & \frac{1}{\sqrt{2}} &- \frac{1}{\sqrt{2}} \\
|
| 72 |
+
\end{block}
|
| 73 |
+
\end{blockarray}
|
| 74 |
+
\end{equation*}$$ which adds fault tolerance to the job $%\begin{equation*}
|
| 75 |
+
I = \begin{bmatrix}
|
| 76 |
+
1 & 0 \\
|
| 77 |
+
0 & 1 \\
|
| 78 |
+
\end{bmatrix}$ with the *parity* tasks $P = \begin{bmatrix}
|
| 79 |
+
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
|
| 80 |
+
\frac{1}{\sqrt{2}} & - \frac{1}{\sqrt{2}} \\
|
| 81 |
+
\end{bmatrix}.$ *This code has the serendipitous property of having negligible decoding complexity and negligible communication complexity!* For example, if the master receives $\nabla$ in the direction $\tilde \theta_3=\frac{1}{\sqrt{2}}(\theta_0+\theta_1)$, then the master can decrease both $\theta_0$ and $\theta_1$ by the value returned by $\mathcal{W}_2$, *i.e.,* $\tilde \theta _ 2$, if the master receives $\nabla$ in the direction $\tilde \theta_2=\frac{1}{\sqrt{2}}(\theta_0-\theta_1)$, then the master can decrease $\theta_0$ and increase $\theta_1$ by the value returned by $\mathcal{W}_3$, *i.e.,* $\tilde \theta _ 3$. *The master need only perform 2 additions/subtractions, and more generally (see Eq. [\[eq:code\]](#eq:code){reference-type="ref" reference="eq:code"}) if there are $t$ "sub-tasks" the master only needs to perform $t$ additions/subtractions. The multiplication by $\frac{1}{\sqrt{2}}$ can be subsumed by the learning rate; thus, our code has zero multiplication overhead.* Furthermore, this information can be communicated using only one float, since the master knows which direction/worker the derivative was computed from.
|
| 82 |
+
|
| 83 |
+
Looking at Fig. [1](#fig:1){reference-type="ref" reference="fig:1"}, we see that the code $\mathcal{G}$ is MDS in the sense that it maximizes the independence between the vectors. Equivalently [^4] $\mathcal{G}$ minimizes the confusion between codewords or minimizes the mutual information between codewords; thereby maximizing the entropy or the information content. As we will soon see this has the effect of allowing *lossy low-distortion compression* for larger codes. A second contribution of this paper is to show how to preserve an *approximate* MDS property for larger codes which allows for this form of compression.
|
| 84 |
+
|
| 85 |
+
In what sense does $\mathcal{C}$ being MDS imply fault tolerance? The following example illustrates one kind of error which the code is immune to:
|
| 86 |
+
|
| 87 |
+
{reference-type="ref" reference="fn:1"}) to maximum information about the $\theta_i$. ](2circles.png){#fig:1 width=".48\\textwidth"}
|
| 88 |
+
|
| 89 |
+
Consider the case where two workers return the derivatives in the directions $\tilde \theta = \begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{pmatrix}^T, \tilde \theta ' = \begin{pmatrix} \cos\left(\frac{5\pi}{4}- \epsilon\right) & \sin\left(\frac{5\pi}{4}- \epsilon\right)\end{pmatrix}^T.$ By inspecting the second diagram in Fig. [1](#fig:1){reference-type="ref" reference="fig:1"}, it is easy to see that $\lim_{\epsilon \to 0} \tilde \theta ' \to - \tilde \theta,$ so that $%\begin{equation*}
|
| 90 |
+
\lim_{\epsilon \to 0} \arccos \left\langle \tilde \theta , \tilde \theta '\right\rangle \to \pi .$ We will also show that the error in the derivative can get bigger and bigger as $\epsilon \to 0$.
|
| 91 |
+
|
| 92 |
+
If worker one computes $\frac{\partial \mathcal{L}}{\partial \tilde \theta}$, worker two computes $\frac{\partial \mathcal{L}}{\partial \tilde \theta'}$, and $\frac{\partial l}{\partial \theta_i} = (-1)^{i}L,$ then we have that $\frac{\partial \mathcal{L}}{\partial \tilde \theta} = \frac{1}{\sqrt{2}}(L+(-L))=0$ and that $\frac{\partial \mathcal{L}}{\partial \tilde \theta'}= \cos\left(\frac{5\pi}{4}- \epsilon\right)L-\sin\left(\frac{5\pi}{4}- \epsilon\right)L= \left(\cos\left(\frac{5\pi}{4}- \epsilon\right)-\sin\left(\frac{5\pi}{4}- \epsilon\right)\right)L.$ Therefore if both $( \epsilon \approx 0)$ and $( L >>1)$, then $\frac{\partial \mathcal{L}}{\partial \tilde \theta} \approx\frac{\partial \mathcal{L}}{\partial \tilde \theta'}\approx 0 ,$ which is an error; *when the master receives the messages from workers one and two she will think she has arrived at a optimal fit since both $\frac{\partial \mathcal{L}}{\partial \tilde \theta}$ and $\frac{\partial \mathcal{L}}{\partial \tilde \theta'}$ are very small; *i.e.,* the master may halt the algorithm on a terrible fit.* Furthermore it is easy to see that $\max_{\epsilon }\frac{\partial \mathcal{L}}{\partial \tilde \theta'}$ occurs when $\epsilon= \frac{\pm \pi}{2}$, *i.e.,* when $\arccos \left\langle \tilde \theta , \tilde \theta '\right\rangle = \frac{\pi}{2}$ so that our previous choice is optimal.
|
| 93 |
+
|
| 94 |
+
The last example did not allow us to show the more general *lossy compression* phenomenon that can occur for more general codes. Also we will soon prove that it is impossible to have MDS codes for large dimensions where the workers perform a small amount of work[^5]. In a sense, $k = 2$ is a very special case. Therefore before showing the general compression phenomenon, let us show how to *compress the derivative* for $k = 4$.
|
| 95 |
+
|
| 96 |
+
Suppose that we have 8 workers $\tilde \theta_i$, loss function $l(h,y)=\frac{1}{2}||h-y||^2$, $u$ input features $x_i$, and space of hypothesis functions $$\begin{equation*}
|
| 97 |
+
\mathcal{H} = \left\lbrace h_{w} = (y_1,...,y_v) \middle| y_i = \frac{e^{w_i^Tx}}{1 + \sum_j^{w_j^Tx}}\ , \ w_i \in \mathbb{R}^2 \right\rbrace,
|
| 98 |
+
\end{equation*}$$ where $w_j^Tx = w_{j,1}x_1 + ... + w_{j,u}x_u$; *i.e.,* $\mathcal{H}$ is the space of multinomial logistic regression functions (*however, this procedure will work for any feed-forward deep neural network*, see Fig. [2](#fig:weight_partition){reference-type="ref" reference="fig:weight_partition"}). Similarly to the previous design we can give the following directions to the workers $$\begin{equation}
|
| 99 |
+
\label{eq:basecode}
|
| 100 |
+
\mathcal{C}^{(8,4,2)} = \begin{blockarray}{ccccc}
|
| 101 |
+
\ & \mbox{\scriptsize$\theta_{0}$} & \mbox{\scriptsize$ \theta_{1}$} & \mbox{\scriptsize$ \theta_{2}$} & \mbox{\scriptsize$ \theta_{3}$} \\
|
| 102 |
+
\begin{block}{c[cccc]}
|
| 103 |
+
\mbox{\scriptsize$\tilde \theta_{0}$}
|
| 104 |
+
& \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0& 0 \\
|
| 105 |
+
\mbox{\scriptsize$\tilde \theta_{1}$}
|
| 106 |
+
& \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 &0 \\
|
| 107 |
+
\mbox{\scriptsize$\tilde \theta_{2}$}
|
| 108 |
+
& 0 & 0 & \frac{1}{\sqrt{2}} &\frac{1}{\sqrt{2}}\\
|
| 109 |
+
\mbox{\scriptsize$\tilde \theta_{3}$}
|
| 110 |
+
& 0 & 0 & \frac{1}{\sqrt{2}} &-\frac{1}{\sqrt{2}}\\
|
| 111 |
+
\mbox{\scriptsize$\tilde \theta_{4}$}
|
| 112 |
+
& 0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0\\
|
| 113 |
+
\mbox{\scriptsize$\tilde \theta_{5}$}
|
| 114 |
+
& 0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} &0 \\
|
| 115 |
+
\mbox{\scriptsize$\tilde \theta_{6}$}
|
| 116 |
+
&\frac{1}{\sqrt{2}} & 0 & 0 & \frac{1}{\sqrt{2}} \\
|
| 117 |
+
\mbox{\scriptsize$\tilde \theta_{7}$}
|
| 118 |
+
&-\frac{1}{\sqrt{2}} & 0 & 0 & \frac{1}{\sqrt{2}} \\
|
| 119 |
+
\end{block}
|
| 120 |
+
\end{blockarray};
|
| 121 |
+
\end{equation}$$ however, this time we let $\frac{\partial}{\partial \theta_0} =$"the derivative of the first half of the output nodes with respect to the first half of the dataset", $\frac{\partial}{\partial \theta_1} =$"the derivative of the second half of the output nodes with respect to the first half of the dataset", $\frac{\partial}{\partial \theta_2} =$"the derivative of the first half of the output nodes with respect to the second half of the dataset", and $\frac{\partial}{\partial \theta_3} =$"the derivative of the second half of the output nodes with respect to the second half of the dataset". We can see that "lossy" part of the lossy compression is that the 4 workers don't necessarily return the gradient perfectly, but we will later prove that they return a pretty good approximation of it. However, we had a second further lossy compression step to our code; we give workers 5 and 6 the data partitions $D_2, D_3$ and give workers 7 and 8 the data partitions $D_1, D_4$ instead of giving these workers all of the partitions as the code in Eq. [\[eq:basecode\]](#eq:basecode){reference-type="ref" reference="eq:basecode"} suggests.
|
| 122 |
+
|
| 123 |
+
![The different ways to partition the backpropagation gradient. The first partition shows how to partition the gradient for a simple neural network with no hidden nodes. The two other partition corresponds to a more general deep-neural network with hidden nodes. The last partition shows how to apply the recursive step in Alg. [\[alg:grad_part\]](#alg:grad_part){reference-type="ref" reference="alg:grad_part"}.](ICML_fig_same_color.png){#fig:weight_partition width=".41\\textwidth"}
|
| 124 |
+
|
| 125 |
+
We first show how to construct what we will denote as a *$[n,k,t]$-projective derivative code*, or $[n,k,t]$-code, for $n=2^m$, $k = 2^l$ and $t=2^p$, *i.e.,* $n,k,t$ are all powers of 2, and then show how to use "cyclic" and "toroidal" permutations to construct the code for more general $k,n$; however, the $t$ is always chosen to be a power of two for reasons that will soon become clear. The parameter $n$ is the number of workers, $k$ the number of derivatives/features, and $t$ is the number of sub-tasks that each worker will perform; *i.e.,* the number of derivatives per worker.
|
| 126 |
+
|
| 127 |
+
To construct the code we first construct *the characteristic vectors* from the following family of functions $\chi_{\alpha} : \mathbb{F}^p_2 \longrightarrow \mathbb{C},$ defined by the lambda expression $$\begin{equation*}
|
| 128 |
+
\chi_{\alpha} : \beta \mapsto \frac{1}{\sqrt{2^p}}e^{ \left\langle \alpha , \beta \right\rangle \pi i} = \frac{(-1)^{ \left\langle \alpha , \beta \right\rangle }}{\sqrt{2^p}},
|
| 129 |
+
\end{equation*}$$ where $\alpha, \beta \in \mathbb{F}^p_2$ are defined as binary strings of length $t$ and $\left\langle \alpha , \beta \right\rangle$ is the *dot product* on $\alpha, \beta \in \mathbb{F}^p_2$, which is equivalent to taking the *bit-wise* AND[^6] of $\alpha$ and $\beta$ and then taking the XOR[^7] of the result. In particular, $\left\langle \alpha , \beta \right\rangle$ is defined as $\left\langle \alpha , \beta \right\rangle = (\alpha _0 \land \beta _0) \oplus (\alpha_1 \land \beta _1) \oplus \dots\oplus (\alpha _{p-1} \land \beta _{p-1}),$ where $\oplus$ is as defined in fn. [7](#fn:2){reference-type="ref" reference="fn:2"}.
|
| 130 |
+
|
| 131 |
+
It is an elementary fact of representation theory that the vectors, $\sqrt{2^p}\chi_\alpha$ correspond to the *irreducible representations* of $\mathbb{F}^p_2$ in $\mathbb{C}$ and are therefore an orthogonal basis see Thm. 6 of [@scott2012linear] or Thm. 2.12 of [@fulton1991representation]. One can also prove this fact by direct computation using discrete Fourier analysis see ch. 4 of [@tao2006additive]. These functions are well-studied in discrete mathematics and usually referred to as the (additive) characters of $\mathbb{F}^p_2$.
|
| 132 |
+
|
| 133 |
+
Let us construct these vectors for $p= 2$ and verify the veracity of these statements for that case. The binary strings of length $2$ are $\alpha \in \{00,01,10,11\}$ and this corresponds to the functions
|
| 134 |
+
|
| 135 |
+
$$\begin{equation*}
|
| 136 |
+
\resizebox{\hsize}{!}{$
|
| 137 |
+
X^{4}
|
| 138 |
+
= \!\!\! \begin{blockarray}{ccccc}
|
| 139 |
+
\ & \mbox{\scriptsize$\theta_0$} & \mbox{\scriptsize$\theta_1$} & \mbox{\scriptsize$\theta_2$} & \mbox{\scriptsize$\theta_3$} \\
|
| 140 |
+
\begin{block}{c@{\hspace{5.6pt}}[c@{\hspace{5.6pt}}c@{\hspace{5.6pt}}c@{\hspace{5.6pt}}c]}
|
| 141 |
+
\mbox{\scriptsize$\chi_{00}$}
|
| 142 |
+
& (-2)^{\frac{-p}{2} \left\langle 00 , 00 \right\rangle }
|
| 143 |
+
& (-2)^{\frac{-p}{2} \left\langle 00 , 01 \right\rangle }
|
| 144 |
+
& (-2)^{\frac{-p}{2} \left\langle 00 , 10 \right\rangle }
|
| 145 |
+
& (-2)^{\frac{-p}{2} \left\langle 00 , 11 \right\rangle }\\
|
| 146 |
+
\mbox{\scriptsize$\chi_{01}$}
|
| 147 |
+
& (-2)^{\frac{-p}{2} \left\langle 01 , 00 \right\rangle }
|
| 148 |
+
& (-2)^{\frac{-p}{2} \left\langle 01 , 01 \right\rangle }
|
| 149 |
+
& (-2)^{\frac{-p}{2} \left\langle 01 , 10 \right\rangle }
|
| 150 |
+
& (-2)^{\frac{-p}{2} \left\langle 01 , 11 \right\rangle }\\
|
| 151 |
+
\mbox{\scriptsize$\chi_{10}$}
|
| 152 |
+
& (-2)^{\frac{-p}{2} \left\langle 10 , 00 \right\rangle }
|
| 153 |
+
& (-2)^{\frac{-p}{2} \left\langle 10 , 01 \right\rangle }
|
| 154 |
+
& (-2)^{\frac{-p}{2} \left\langle 10 , 10 \right\rangle }
|
| 155 |
+
& (-2)^{\frac{-p}{2} \left\langle 10 , 11 \right\rangle }\\
|
| 156 |
+
\mbox{\scriptsize$\chi_{11}$}
|
| 157 |
+
& (-2)^{\frac{-p}{2} \left\langle 11 , 00 \right\rangle }
|
| 158 |
+
& (-2)^{\frac{-p}{2} \left\langle 11 , 01 \right\rangle }
|
| 159 |
+
& (-2)^{\frac{-p}{2} \left\langle 11 , 10 \right\rangle }
|
| 160 |
+
& (-2)^{\frac{-p}{2} \left\langle 11 , 11 \right\rangle }\\
|
| 161 |
+
\end{block}
|
| 162 |
+
\end{blockarray}\!
|
| 163 |
+
$}
|
| 164 |
+
\end{equation*}$$ $$\begin{equation*}
|
| 165 |
+
=\!\!\! \begin{blockarray}{ccccc}
|
| 166 |
+
\ & \mbox{\scriptsize$\theta_0$} & \mbox{\scriptsize$\theta_1$} & \mbox{\scriptsize$\theta_2$} & \mbox{\scriptsize$\theta_3$} \\
|
| 167 |
+
\begin{block}{c@{\hspace{5.6pt}}[c@{\hspace{5.6pt}}c@{\hspace{5.6pt}}c@{\hspace{5.6pt}}c]}
|
| 168 |
+
\mbox{\scriptsize$\chi_{00}$}
|
| 169 |
+
& 1/2 & 1/2 & 1/2 &1/2\\
|
| 170 |
+
\mbox{\scriptsize$\chi_{01}$}
|
| 171 |
+
& 1/2 & -1/2 & 1/2 & -1/2\\
|
| 172 |
+
\mbox{\scriptsize$\chi_{10}$}
|
| 173 |
+
& 1/2 & 1/2 & -1/2 & -1/2 \\
|
| 174 |
+
\mbox{\scriptsize$\chi_{11}$}
|
| 175 |
+
& 1/2 & -1/2 & -1/2 & 1/2 \\
|
| 176 |
+
\end{block}
|
| 177 |
+
\end{blockarray},
|
| 178 |
+
\end{equation*}$$
|
| 179 |
+
|
| 180 |
+
and it is straightforward to see that all of the vectors $\chi_{00}$, $\chi_{01}$, $\chi_{10}$, and $\chi_{11}$ are orthonormal.
|
| 181 |
+
|
| 182 |
+
If we identify the binary strings $\alpha,\beta$ with the integers that they represent and let $X^{(t)}$ be the matrix defined coordinate-wise by the equation $$\begin{equation*}
|
| 183 |
+
X^{(t)}_{\alpha,\beta} = \chi_\alpha(\beta) = \frac{1}{\sqrt{t}}e^{ \left\langle \alpha , \beta \right\rangle \pi i},
|
| 184 |
+
\end{equation*}$$ where $p = \log(t)$ and $\chi_{\alpha} : \mathbb{F}^p_2 \longrightarrow \mathbb{C}$ and let $L^{(t)}$ and $R^{(t)}$ be the matrices defined by the equation $$\begin{equation*}
|
| 185 |
+
% \resizebox{\hsize}{!}{$
|
| 186 |
+
L^{(2t)} = \begin{bmatrix}
|
| 187 |
+
0^{\left(t \right)} & \frac{1}{\sqrt{2}}X^{\left( t \right)} \\
|
| 188 |
+
0^{\left(t \right)} & \frac{1}{\sqrt{2}}X^{\left( t \right)} \\
|
| 189 |
+
\end{bmatrix}, \ \ \
|
| 190 |
+
R^{(2t)} =
|
| 191 |
+
\begin{bmatrix}
|
| 192 |
+
\frac{1}{\sqrt{2}}X^{\left( t \right)} & 0^{\left(t \right)} \\
|
| 193 |
+
- \frac{1}{\sqrt{2}}X^{\left( t \right)} & 0^{\left(t \right)} \\
|
| 194 |
+
\end{bmatrix},
|
| 195 |
+
% $}
|
| 196 |
+
\end{equation*}$$ then we can define $\mathcal{C}^{(2k,k,t)}$, the generator for the $[2k,k,t]$-code, as $$\begin{equation}
|
| 197 |
+
\label{eq:code}
|
| 198 |
+
\begin{blockarray}{ccccccc}
|
| 199 |
+
\ & \mbox{\scriptsize$ \theta^{(t)}_0$}
|
| 200 |
+
& \mbox{\scriptsize$\theta^{(t)}_1$}
|
| 201 |
+
& \mbox{\scriptsize$\theta^{(t)}_2$}
|
| 202 |
+
& \dots
|
| 203 |
+
& \mbox{\scriptsize$\theta^{(t)}_{s-2}$}
|
| 204 |
+
& \mbox{\scriptsize$\theta^{(t)}_{s-1}$}\\
|
| 205 |
+
\begin{block}{c[cccccc]}
|
| 206 |
+
\mbox{\scriptsize$\tilde \theta^{(t)}_0$}
|
| 207 |
+
& X^{(t)} & 0 & 0 & \hdots & 0 & 0 \\
|
| 208 |
+
\mbox{\scriptsize$\tilde \theta^{(t)}_1$}
|
| 209 |
+
& 0 & X^{(t)} & 0 & \hdots & 0 & 0 \\
|
| 210 |
+
\mbox{\scriptsize$\tilde \theta^{(t)}_2$}
|
| 211 |
+
& 0 & 0 & X^{(t)} & \hdots &\hdots &0 \\
|
| 212 |
+
\mbox{\scriptsize$ \vdots$}
|
| 213 |
+
& \vdots & \vdots &\vdots &\ddots & \vdots&\vdots \\
|
| 214 |
+
\mbox{\scriptsize$ \tilde \theta^{(t)}_{s-2} $}
|
| 215 |
+
& 0 & 0 & 0 &\hdots &X^{(t)}&0 \\
|
| 216 |
+
\mbox{\scriptsize$ \tilde \theta^{(t)}_{s-1}$}
|
| 217 |
+
& 0 & 0 & 0 &\hdots &0 &X^{(t)} \\
|
| 218 |
+
\mbox{\scriptsize$\tilde \theta^{(t)}_{s}$}
|
| 219 |
+
& L^{(t)} & R^{(t)} & 0 & \hdots & 0 &0 \\
|
| 220 |
+
\mbox{\scriptsize$\tilde \theta^{(t)}_{s+1}$}
|
| 221 |
+
& 0 & L^{(t)} & R^{(t)} & \hdots &0 &0 \\
|
| 222 |
+
\mbox{\scriptsize$ \vdots$}
|
| 223 |
+
& \vdots & \vdots &\vdots &\ddots & \vdots& \vdots \\
|
| 224 |
+
\mbox{\scriptsize$ \tilde\theta^{(t)}_{2s-2} $}
|
| 225 |
+
& 0 & 0 & 0 &\hdots &L^{(t)}&R^{(t)} \\
|
| 226 |
+
\mbox{\scriptsize$\tilde \theta^{(t)}_{2s-1}$}
|
| 227 |
+
& R^{(t)} & 0 & 0 &\hdots &0 &L^{(t)} \\
|
| 228 |
+
\end{block}
|
| 229 |
+
\end{blockarray},
|
| 230 |
+
\end{equation}$$ where[^8] $s$ is the ratio of tasks to sub-tasks, $\theta^{(t)}_{i}$ is the sequence of sub-tasks $\theta_{it}$ through $\theta_{(i+1)t-1}$, and $\tilde \theta^{(t)}_{i}$ is similarly defined as a sequence of $t$ consecutive workers. Equivalently if we define the "'rectangles'' $\mathcal{R}^{(t)}_{u,v}$ as $$\begin{equation*}
|
| 231 |
+
\mathcal{R}_{u,v}^{(t)} = \{(i,j) \in \mathbb{N}^2 \ | \ ut \leq i < (u+1)t , \ vt \leq j < (v+1)t \},
|
| 232 |
+
\end{equation*}$$ then we can define $\mathcal{C}^{(2k,k,t)}$ coordinate -wise as $$\begin{equation*}
|
| 233 |
+
\resizebox{\hsize}{!}{$
|
| 234 |
+
\mathcal{C}^{(2k,k,t)}_{\theta _i , \tilde \theta _j} =
|
| 235 |
+
\begin{cases}
|
| 236 |
+
X^{(t)}_{i \% t,j\% t } & \text{if } i < k \text{ and } ( i,j) \in \mathcal{R}_{\left\lfloor\frac{i}{t} \right\rfloor,\left\lfloor\frac{i}{t} \right\rfloor}^{(t)} \\
|
| 237 |
+
L^{(t)}_{i \% t,j\% t } & \text{if } k\leq i \text{ and }( i,j) \in \mathcal{R}_{\left\lfloor\frac{i}{t} \right\rfloor+\frac{k}{t},\left\lfloor\frac{i}{t} \right\rfloor}^{(t)} \\
|
| 238 |
+
R^{(t)}_{i \% t,j\% t } & \text{if }( i,j) \in \mathcal{R}_{\frac{2k}{t}-1,0}^{(t)} \text{ or } k\leq i \\
|
| 239 |
+
& \text{ and }t\leq j \text{ and }( i,j) \in \mathcal{R}_{\left\lfloor\frac{i}{t} \right\rfloor+\frac{k}{t},\left\lfloor\frac{i}{t} \right\rfloor}^{(t)} \\
|
| 240 |
+
|
| 241 |
+
0 & \text{otherwise}. \\
|
| 242 |
+
\end{cases}
|
| 243 |
+
$}
|
| 244 |
+
\end{equation*}$$
|
| 245 |
+
|
| 246 |
+
It is straightforward to prove the following beautiful property
|
| 247 |
+
|
| 248 |
+
::: {#lem:tensor .lemma}
|
| 249 |
+
**Lemma 1**. *The matrices $X^{(t)}$ satisfy the following recursion relation $X^{(2t)} = X^{(2)} \otimes X^{(t)}$.*
|
| 250 |
+
:::
|
| 251 |
+
|
| 252 |
+
::: proof
|
| 253 |
+
*Proof.* This is a direct consequence of Thm. 10 in [@scott2012linear]. ◻
|
| 254 |
+
:::
|
| 255 |
+
|
| 256 |
+
An alternative is the weaker statement[^9] "$X^{(2)}$ is a Hadamard matrix and the tensor product of two Hadamard matrices is a Hadamard matrix" whose proof can be found in [@2003fundamentals].
|
| 257 |
+
|
| 258 |
+
Similar to the example given in Sec. [2.2](#subsec:mot_ex){reference-type="ref" reference="subsec:mot_ex"} we give the workers $\tilde \theta_i$ the data partition given by Alg. [\[alg:data_part\]](#alg:data_part){reference-type="ref" reference="alg:data_part"}.
|
| 259 |
+
|
| 260 |
+
:::: algorithm
|
| 261 |
+
::: algorithmic
|
| 262 |
+
`data `$D$, `code_parameters` $(n,k,t)$ Partition the data $D$ into $D_0, .., D_{k-1}$ Set $\mathcal{C} := \mathcal{C} ^{(n,k,t)}$ `Data`$[ \tilde \theta_i] :=\emptyset$ Set `Data`$[ \tilde \theta_i]:=\texttt{Data}[\tilde \theta_i]\cup D_j$
|
| 263 |
+
:::
|
| 264 |
+
::::
|
| 265 |
+
|
| 266 |
+
The idea behind Alg. [\[alg:data_part\]](#alg:data_part){reference-type="ref" reference="alg:data_part"} is simple; we give the first worker `Data`$[ \tilde \theta_0]= D_0,...,D_{t-1}$, and the second worker `Data`$[ \tilde \theta_1]= D_t,...,D_{2t-1}$, and so on up to worker $k$, at which point we give the workers $k,...,n$ a cyclic shift of the previous assignment, *e.g,* worker $k$ gets `Data`$[ \tilde \theta_k]= D_{\frac{t}{2}},...,D_{t+\frac{t}{2}}$.
|
| 267 |
+
|
| 268 |
+
The procedure for partitioning and encoding the gradients, Alg. [\[alg:grad_part\]](#alg:grad_part){reference-type="ref" reference="alg:grad_part"}, is slightly more involved; however, the main idea is illustrated in Fig. [2](#fig:weight_partition){reference-type="ref" reference="fig:weight_partition"}.
|
| 269 |
+
|
| 270 |
+
:::: algorithm
|
| 271 |
+
::: algorithmic
|
| 272 |
+
`network` $x,z^0,...,z^m,y$, `code_parameters` $(n,k,t)$ Set $\mathcal{C} := \mathcal{C} ^{(n,k,t)}$ Partition $y$ into $t$ groups $y^{(i)}$ where $y^{(0)}=(y_0,...,y_{t-1});\dots ; y^{(t)}=(y_{v-t},...,y_{v})$ Encode `grad`$[\tilde \theta_i]$ according to row $i$ in $\mathcal{C}$ as in Fig. [2](#fig:weight_partition){reference-type="ref" reference="fig:weight_partition"} End Procedure Recursively call "Gradient_Partition_Assignment" on the network $x,z^0,...,z^m$ parameters $(n,k,t)$ as in Fig. [2](#fig:weight_partition){reference-type="ref" reference="fig:weight_partition"} to encode `grad`$[\tilde \theta_i]$ according to row $i$ in $\mathcal{C}$ by repeatedly splitting the (non-zero-)row by $t$
|
| 273 |
+
:::
|
| 274 |
+
::::
|
| 275 |
+
|
| 276 |
+
The main intuition behind Alg. [\[alg:grad_part\]](#alg:grad_part){reference-type="ref" reference="alg:grad_part"} is to encode the gradient in the *manner in which backpropagation occurs;* this allows for the iterative decoding/gradient update at the master node, which in turn allows for asynchronous gradient updating.
|
| 277 |
+
|
| 278 |
+
Given some general $(n,k,t)$ we construct the matrix $\mathcal{C}^{(n',k',t)}$, where $n'$ and $k'$ are the next nearest powers of 2 (repeating rows if necessary) and use a "2-D" permutation algorithm similar to [@9213028] to distribute the sub-tasks in each round; however our algorithm uses more general (prime number) step-sizes chosen in each round and the permutations now occur in "higher dimensions[^10]." In particular; we now use a similar procedure to permute tasks amongst workers if $n$ and $k$ are not powers of 2. For example if we have $n=6$ workers and $k=3$ tasks we can add extra virtual tasks $\theta_3 = \theta_0,$ $\theta_4 = \theta_1,$ $\dots,$ $\theta_{x} = \theta_{x\%3}$ and perform the following toroidal permutations on $\mathcal{C}^{(5,3,2)}$
|
| 279 |
+
|
| 280 |
+
<figure>
|
| 281 |
+
<p><span class="math display">$$\begin{equation}
|
| 282 |
+
\label{eq:spin}\begin{gathered}
|
| 283 |
+
\begin{blockarray}{cccccccccc}
|
| 284 |
+
\ & & \mbox{\scriptsize$\tilde \vartheta_{0}$} & \mbox{\scriptsize$ \tilde \vartheta_{1}$} & \mbox{\scriptsize$\tilde \vartheta_{2}$} & \mbox{\scriptsize$\tilde \vartheta_{3}$} & \mbox{\scriptsize$\tilde \vartheta_{4}$} & & & \\
|
| 285 |
+
\ & & \mbox{\scriptsize$\tilde \theta_{0}$} & \mbox{\scriptsize$ \tilde \theta_{1}$} & \mbox{\scriptsize$\tilde \theta_{2}$} & \mbox{\scriptsize$\tilde \theta_{3}$} & \mbox{\scriptsize$\tilde \theta_{4}$} & \mbox{\scriptsize$ \tilde \theta_{5}$} & \mbox{\scriptsize$\tilde \theta_{6}$} & \mbox{\scriptsize$\tilde \theta_{7}$}\\
|
| 286 |
+
\begin{block}{c@{\hspace{3pt}}c[@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c]}
|
| 287 |
+
\mbox{\scriptsize$t_{0}$}& \mbox{\scriptsize$\theta_{0}$}
|
| 288 |
+
& \color{blue}1 & \color{blue} 1 & \color{blue}0& \color{blue}0&\color{blue}0&0&1& -1\\
|
| 289 |
+
\mbox{\scriptsize$t_{1}$}& \mbox{\scriptsize$\theta_{1}$}
|
| 290 |
+
& \color{blue}1 & \color{blue}-1 & \color{blue}0 &\color{blue}0&\color{blue}1&1&0&0 \\
|
| 291 |
+
\mbox{\scriptsize$t_{2}$}& \mbox{\scriptsize$\theta_{2}$}
|
| 292 |
+
& \color{blue}0 & \color{blue}0 & \color{blue}1 &\color{blue}1&\color{blue}1&-1&0&0 \\
|
| 293 |
+
& \mbox{\scriptsize$\theta_{3}$}
|
| 294 |
+
& 0 & 0 & 1 &-1&0&0&1&1 \\
|
| 295 |
+
\end{block}
|
| 296 |
+
\end{blockarray} \Rightarrow \begin{blockarray}{cccccccccc}
|
| 297 |
+
\ & &\mbox{\scriptsize$\tilde \vartheta_{3}$} &\mbox{\scriptsize$\tilde \vartheta_{4}$} & & & &\mbox{\scriptsize$\tilde \vartheta_{0}$} & \mbox{\scriptsize$ \tilde \vartheta_{1}$}& \mbox{\scriptsize$\tilde \vartheta_{2}$} \\
|
| 298 |
+
\ & & \mbox{\scriptsize$\tilde \theta_{0}$} & \mbox{\scriptsize$ \tilde \theta_{1}$} & \mbox{\scriptsize$\tilde \theta_{2}$} & \mbox{\scriptsize$\tilde \theta_{3}$} & \mbox{\scriptsize$\tilde \theta_{4}$} & \mbox{\scriptsize$ \tilde \theta_{5}$} & \mbox{\scriptsize$\tilde \theta_{6}$} & \mbox{\scriptsize$\tilde \theta_{7}$}\\
|
| 299 |
+
\begin{block}{c@{\hspace{3pt}}c[@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c]}
|
| 300 |
+
& \mbox{\scriptsize$\theta_{0}$}
|
| 301 |
+
& 1 &1 &0& 0&0& 0&1& -1\\
|
| 302 |
+
\mbox{\scriptsize$t_{0}$}& \mbox{\scriptsize$\theta_{1}$}
|
| 303 |
+
& \color{blue}1 & \color{blue}-1 &0 &0&1& \color{blue}1&\color{blue}0&\color{blue}0 \\
|
| 304 |
+
\mbox{\scriptsize$t_{1}$}& \mbox{\scriptsize$\theta_{2}$}
|
| 305 |
+
& \color{blue} 0 & \color{blue}0 & 1 &1&1&\color{blue}-1&\color{blue}0&\color{blue}0 \\
|
| 306 |
+
\mbox{\scriptsize$t_{2}$} & \mbox{\scriptsize$\theta_{3}$}
|
| 307 |
+
& \color{blue} 0 &\color{blue}0 &1 &-1&0& \color{blue}0&\color{blue}1&\color{blue}1 \\
|
| 308 |
+
\end{block}
|
| 309 |
+
\end{blockarray} .
|
| 310 |
+
\\
|
| 311 |
+
\begin{blockarray}{cccccccccc}
|
| 312 |
+
\ & & & & \mbox{\scriptsize$\tilde \vartheta_{0}$} &\mbox{\scriptsize$\tilde \vartheta_{1}$} & \mbox{\scriptsize$\tilde \vartheta_{2}$} & \mbox{\scriptsize$ \tilde \vartheta_{3}$} & \mbox{\scriptsize$ \tilde \vartheta_{4}$}& \\
|
| 313 |
+
\ & & \mbox{\scriptsize$\tilde \theta_{0}$} & \mbox{\scriptsize$ \tilde \theta_{1}$} & \mbox{\scriptsize$\tilde \theta_{2}$} & \mbox{\scriptsize$\tilde \theta_{3}$} & \mbox{\scriptsize$\tilde \theta_{4}$} & \mbox{\scriptsize$ \tilde \theta_{5}$} & \mbox{\scriptsize$\tilde \theta_{6}$} & \mbox{\scriptsize$\tilde \theta_{7}$}\\
|
| 314 |
+
\begin{block}{c@{\hspace{3pt}}c[@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c]}
|
| 315 |
+
\mbox{\scriptsize$t_{2}$}& \mbox{\scriptsize$\theta_{4}$}
|
| 316 |
+
& 1 & 1 & \color{blue} 0& 0&\color{blue}0&\color{blue}0&\color{blue}1& -1\\
|
| 317 |
+
& \mbox{\scriptsize$\theta_{1}$}
|
| 318 |
+
& 1 & -1 & 0 &0&1&1&0&0 \\
|
| 319 |
+
\mbox{\scriptsize$t_{0}$}& \mbox{\scriptsize$\theta_{2}$}
|
| 320 |
+
& 0 & 0 & \color{blue} 1 &1&\color{blue}1&\color{blue}-1&\color{blue}0&0 \\
|
| 321 |
+
\mbox{\scriptsize$t_{1}$}& \mbox{\scriptsize$\theta_{3}$}
|
| 322 |
+
& 0 & 0 & \color{blue}1 &-1&\color{blue}0&\color{blue}0&\color{blue}1&1 \\
|
| 323 |
+
\end{block}
|
| 324 |
+
\end{blockarray} \Rightarrow \begin{blockarray}{cccccccccc}
|
| 325 |
+
\ & & \mbox{\scriptsize$\tilde \vartheta_{1}$} & \mbox{\scriptsize$\tilde \vartheta_{2}$} & \mbox{\scriptsize$\tilde \vartheta_{3}$} & \mbox{\scriptsize$\tilde \vartheta_{4}$} & & & & \mbox{\scriptsize$ \tilde \vartheta_{0}$} \\
|
| 326 |
+
\ & & \mbox{\scriptsize$\tilde \theta_{0}$} & \mbox{\scriptsize$ \tilde \theta_{1}$} & \mbox{\scriptsize$\tilde \theta_{2}$} & \mbox{\scriptsize$\tilde \theta_{3}$} & \mbox{\scriptsize$\tilde \theta_{4}$} & \mbox{\scriptsize$ \tilde \theta_{5}$} & \mbox{\scriptsize$\tilde \theta_{6}$} & \mbox{\scriptsize$\tilde \theta_{7}$}\\
|
| 327 |
+
\begin{block}{c@{\hspace{3pt}}c[@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c@{\hspace{3pt}}c]}
|
| 328 |
+
\mbox{\scriptsize$t_{1}$}& \mbox{\scriptsize$\theta_{4}$}
|
| 329 |
+
& \color{blue}1 &\color{blue}1 & \color{blue}0& \color{blue}0&0&0&1& \color{blue} -1\\
|
| 330 |
+
\mbox{\scriptsize$t_{2}$}& \mbox{\scriptsize$\theta_{5}$}
|
| 331 |
+
& \color{blue} 1 &\color{blue} -1 & \color{blue}0 &\color{blue}0&1&1&0&\color{blue}0 \\
|
| 332 |
+
& \mbox{\scriptsize$\theta_{2}$}
|
| 333 |
+
&0 & 0 & 1 &1&1&-1&0&0 \\
|
| 334 |
+
\mbox{\scriptsize$t_{0}$}& \mbox{\scriptsize$\theta_{3}$}
|
| 335 |
+
& \color{blue} 0 & \color{blue} 0 & \color{blue}1 &\color{blue}-1&0& 0& \color{blue} 1& \color{blue} 1 \\
|
| 336 |
+
\end{block}
|
| 337 |
+
\end{blockarray} .\end{gathered}
|
| 338 |
+
\end{equation}$$</span></p>
|
| 339 |
+
</figure>
|
| 340 |
+
|
| 341 |
+
so that at round $r$ worker $i$ performs task $\theta_{i + 5r \% n'}$ and similarly at round $r$ we have $t_i = \theta_{i+r \% k}$.
|
| 342 |
+
|
| 343 |
+
More generally we find a displacement $d$ equal to an (odd) prime number that is co-prime[^11] to $k$ and we let worker $i$ performs task $\theta_{i + dr \% n'}$ at round $r$ and let $t_i = \theta_{i+r \% k}$ at round $r$. This allows gives the following statistical uniformity lemma:
|
| 344 |
+
|
| 345 |
+
::: {#lem:spin .lemma}
|
| 346 |
+
**Lemma 2**. *If the displacement, $d$, is equal to an (odd) prime number that is co-prime to $k$ then the blue rectangle in (the general form of) Eq. [\[eq:spin\]](#eq:spin){reference-type="ref" reference="eq:spin"} will visit every entry in the matrix with every possible pattern of $X^{(t)}$ and every cyclic permutation of the $t_i$ contained inside of the blue rectangle.*
|
| 347 |
+
:::
|
| 348 |
+
|
| 349 |
+
::: proof
|
| 350 |
+
*Proof.* The leftmost point of the blue rectangle is equal to $(i,i)+r(1,d) \equiv (i + r ,i+dr ) \text{ mod }\mathbb{Z }/n'\mathbb{Z} \times \mathbb{Z }/k\mathbb{Z}$. By the Chinese remainder theorem (see [@ireland1982classical] or [@dummit2003abstract]) $(1,d)$ is a generator of $\text{ mod }\mathbb{Z }/n'\mathbb{Z} \times \mathbb{Z }/k\mathbb{Z}$ since $d$ is coprime to $1,$ $k,$ and $n'.$ ◻
|
| 351 |
+
:::
|
| 352 |
+
|
| 353 |
+
:::::: table*
|
| 354 |
+
::::: center
|
| 355 |
+
:::: small
|
| 356 |
+
::: sc
|
| 357 |
+
-------- ------------------- ---------------------------- ------------------------------------------------------- ------------------------- -------------- --------------
|
| 358 |
+
Code Encoding Communication Decoding Weight Asynchronous Parameter
|
| 359 |
+
Scheme Complexity Complexity Complexity Range ? Compression?
|
| 360 |
+
LWPD 0 $\mathcal{O}(\frac{k}{t})$ 0 $t \in [2,\frac{n}{4}]$ $\surd$ $\surd$
|
| 361 |
+
GC $\mathcal{O}(nk)$ $\mathcal{O}(k)$ $\mathcal{O}(k^{\omega}) \leq \mathcal{O}(k^{2.38})$ $t = n-k+1$ $\times$ $\times$
|
| 362 |
+
$K$-AC 0 $\mathcal{O}(k)$ 0 $t\in [1,n]$ $\surd$ $\times$
|
| 363 |
+
-------- ------------------- ---------------------------- ------------------------------------------------------- ------------------------- -------------- --------------
|
| 364 |
+
:::
|
| 365 |
+
::::
|
| 366 |
+
:::::
|
| 367 |
+
::::::
|
| 368 |
+
|
| 369 |
+
<figure id="fig:exp_fig1">
|
| 370 |
+
<figure>
|
| 371 |
+
<img src="ICML_testingloss_n9_lr_001_nout_4.png" />
|
| 372 |
+
</figure>
|
| 373 |
+
<figure>
|
| 374 |
+
<img src="ICML_testingloss_n17_lr_001_nout_4.png" />
|
| 375 |
+
</figure>
|
| 376 |
+
<figure>
|
| 377 |
+
<img src="ICML_testingloss_n33_lr_001_nout_4_npart_4.png" />
|
| 378 |
+
</figure>
|
| 379 |
+
<figcaption>Experiments with 8, 16, and 32 workers.</figcaption>
|
| 380 |
+
</figure>
|
| 381 |
+
|
| 382 |
+
In this section we give a theoretical comparison of the algorithms, see Table [\[tab:compare\]](#tab:compare){reference-type="ref" reference="tab:compare"}, and we prove theorems regarding the existence and non-existence of codes with certain properties. The following theorem, *i.e.,* Thm. [3](#thm:no_mds){reference-type="ref" reference="thm:no_mds"}, shows that Hamming-distance MDS coding schemes must have the workers do an arbitrarily large amount of work. We then later show that our codes are approximately MDS with respects to the projective geometry metric which maximize the amount of information sent back by the workers[^12] while keeping the amount of work done by the workers as low as possible; *i.e.,* there are approximately projective-MDS that have weights $t=2,...,n$.
|
| 383 |
+
|
| 384 |
+
::: {#thm:no_mds .theorem}
|
| 385 |
+
**Theorem 3**. *If the parameters $(n,k,t)$ satisfy $t \leq n-k$ then there is no Hamming-distance MDS $(n,k)$-code for the derivatives.*
|
| 386 |
+
:::
|
| 387 |
+
|
| 388 |
+
::: proof
|
| 389 |
+
*Proof.* If $A(\mathcal{C})_i=$"number of rows of weight $i$", then Theorem 7.4.1 in [@2003fundamentals] gives us that an MDS will have $A(\mathcal{C})_i=0$ for $i \leq n-k$. ◻
|
| 390 |
+
:::
|
| 391 |
+
|
| 392 |
+
In particular; the proof of Thm. [3](#thm:no_mds){reference-type="ref" reference="thm:no_mds"} can be strengthened to say that:
|
| 393 |
+
|
| 394 |
+
::: {#cor:must_work .corollary}
|
| 395 |
+
**Corollary 4**. *In an MDS $(n,k)$-coding scheme $A(\mathcal{C})_i=0$, for $i \leq n-k$, where $A(\mathcal{C})_i=0$ is the weight distribution of a code.*
|
| 396 |
+
:::
|
| 397 |
+
|
| 398 |
+
The importance of Cor. [4](#cor:must_work){reference-type="ref" reference="cor:must_work"} is made clear through the following interpretation:
|
| 399 |
+
|
| 400 |
+
::: corollary
|
| 401 |
+
**Corollary 5**. *In an MDS $(n,k)$-coding scheme all of the workers must do at least $n-k$ amount of work.*
|
| 402 |
+
:::
|
| 403 |
+
|
| 404 |
+
However a simple observation of the construction given in Sec. [3.2](#subsec:pow_two_con){reference-type="ref" reference="subsec:pow_two_con"} gives us that:
|
| 405 |
+
|
| 406 |
+
::: theorem
|
| 407 |
+
**Theorem 6**. *There exists $(n,k,t)$-LWPD codes for any $t \geq 2$.*
|
| 408 |
+
:::
|
| 409 |
+
|
| 410 |
+
The next theorem proves that under the projective distance we have that our code achieves approximately maximal distance.
|
| 411 |
+
|
| 412 |
+
::: {#thm:opt .theorem}
|
| 413 |
+
**Theorem 7**. *The family $(n,k,t)$-code are approximately MDS $(n,k)$-code for the derivatives in the projective-distance for $n \leq 2k$.*
|
| 414 |
+
:::
|
| 415 |
+
|
| 416 |
+
::: proof
|
| 417 |
+
*Proof.* By Lem. [2](#lem:spin){reference-type="ref" reference="lem:spin"} it suffices to prove this for powers of two. The distance between any vectors is $\arccos \frac{1}{2}= \frac{\pi}{3}$ and this only happens for $t$ out of $n$ choices for any vector; the distance is equal to the maximum $\frac{\pi}{2}$ for all other vectors. ◻
|
| 418 |
+
:::
|
| 419 |
+
|
| 420 |
+
The proof of Thm. [7](#thm:opt){reference-type="ref" reference="thm:opt"} gives us that the distance between any two codewords $\tilde \theta, \tilde \theta$ is bounded above by $% \begin{equation*}
|
| 421 |
+
d(\mathcal{C}) = \min_{\tilde \theta, \tilde \theta \in \mathcal{C}} d( \tilde \theta, \tilde \theta) = \frac{\pi}{3}
|
| 422 |
+
% \end{equation*}$ and thus in term of percentages of the optimal $\frac{\pi}{2}$ we have $$\begin{equation}
|
| 423 |
+
\label{eq:per_mds}
|
| 424 |
+
\frac{ \frac{\pi}{2} - d(\mathcal{C}) }{\frac{\pi}{2} } = \frac{1}{6} \approx 16 \%
|
| 425 |
+
\end{equation}$$ of the "theoretical" optimal distance; however there can be no code that achieves the "theoretical" optimal distance:
|
| 426 |
+
|
| 427 |
+
::: theorem
|
| 428 |
+
**Theorem 8**. *The percentage in Eq. [\[eq:per_mds\]](#eq:per_mds){reference-type="ref" reference="eq:per_mds"} cannot be made 100%; *i.e.,* there are no projective MDS codes for $n>k$ which achieve distance $\frac{\pi}{2}$.*
|
| 429 |
+
:::
|
| 430 |
+
|
| 431 |
+
::: proof
|
| 432 |
+
*Proof.* If this statement was false there would be $n+1$ linearly independent vectors in $n$-dimensional space, $\mathbb{F}^n$. ◻
|
| 433 |
+
:::
|
| 434 |
+
|
| 435 |
+
An interesting fact about the bound given by Thm. [7](#thm:opt){reference-type="ref" reference="thm:opt"} is that is a constant independent of the dimension of the code and thus it scales well for larger and larger number of workers.
|
2202.01085/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:874afa75defafc5b2c537fdeff2c5bbcfea77fbdc0c4d1f3acf0bbaaa2a08f38
|
| 3 |
+
size 3132499
|
2203.06345/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2203.06345/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,84 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Transformer [@vaswani2017attention], as the *de facto* neural architecture in natural language processing (NLP) [@devlin2018bert; @brown2020language], recently revolutionizes modern computer vision applications such as image classification [@dosovitskiy2020image; @touvron2020training; @han2020survey], object detection [@zheng2020end; @carion2020end; @dai2020up; @zhu2021deformable], and image generation [@parmar2018image; @pmlr-v119-chen20s; @jiang2021transgan]. Rather than relying on convolution-like inductive bias, vision transformers [@dosovitskiy2020image] (ViTs) leverage the self-attention [@vaswani2017attention] to aggregate image patches across all spatial positions and model their global-range relationships, which are believed to improve model expressiveness and representation flexibility.
|
| 4 |
+
|
| 5 |
+
<figure id="fig:teaser" data-latex-placement="t">
|
| 6 |
+
<embed src="Figs/Diversity.pdf" style="width:90.0%" />
|
| 7 |
+
<figcaption>Relative similarity comparisons in embedding, attention, and weight spaces of DeiT-Small on ImageNet. The larger number indicates severer correlation/redundancy. B1<span class="math inline">∼</span>B5 donate the blocks in the DeiT-Small model. <em>Cosine</em>, (normalized) <em>MSE</em>, <span class="math inline">1</span> - (normalized) <em>reconstruction loss</em> are adopted to measure embedding, attention, and weight similarity. The former two are computed with <span class="math inline">10, 000</span> images from the ImageNet training set without data augmentation, following the standard in <span class="citation" data-cites="gong2021vision"></span>. </figcaption>
|
| 8 |
+
</figure>
|
| 9 |
+
|
| 10 |
+
Despite their promising potentials, the ViT training still suffers from considerable instability, especially when going deeper [@touvron2021going; @gong2021vision]. One of the major reasons [@gong2021vision] is that the global information aggregation among all patches encourages their representations to become overly similar, causing substantially degraded discrimination ability. This phenomenon, known as over-smoothening, suggests a high degree of "redundancy\" or ineffective usage of the ViT expressiveness and flexibility, and has been studied by a few prior arts [@touvron2021going; @zhou2021deepvit; @zhou2021refiner; @gong2021vision]. Several initial attempts strive to fill the gap from different aspects. For example [@gong2021vision] proposes contrastive-based regularization to diversity patch embeddings, and [@zhou2021refiner] directly refines the self-attention maps via convolution-like aggregation to augment local patterns.
|
| 11 |
+
|
| 12 |
+
This paper aims to comprehensively study and mitigate the ViT redundancy issue. We first systematically demonstrate the ubiquitous existence of redundancy **at all three levels**: *patch embedding, attention map, and weight space*, for current state-of-the-art (SOTA) ViTs. That is even the case for those equipped with strong data augmentations (i.e., DeiT [@touvron2021training]) or sophisticated attention mechanisms (i.e., Swin [@liu2021swin]), e.g, as shown in Figure [1](#fig:teaser){reference-type="ref" reference="fig:teaser"}. In view of such collapse, we advocate a **principle of diversity** for training ViTs, by proposing corresponding regularizers that encourage the representation diversity and coverage at each of those levels, that unleashes the true discriminative power and representation flexibility of ViTs. We find each level's regularizers to provide generalization gains, and applying them altogether consistently yields superior performance. Our contributions lie in the following aspects:
|
| 13 |
+
|
| 14 |
+
- We provide the first comprehensive investigation of redundancy in ViTs by demonstrating its ubiquitous existence in all three levels of patch embeddings, attentions, and weights, across SOTA ViT models.
|
| 15 |
+
|
| 16 |
+
- For each of the three levels, we present diversity regularizers for training ViTs, which demonstrate complementary effects in eliminating redundancy, encouraging diversity, and enhancing generalization.
|
| 17 |
+
|
| 18 |
+
- We conduct extensive experiments with vanilla ViT, DeiT, and Swin transformer backbones on the ImageNet datasets, showing consistent and significant performance boost gains by addressing the tri-level redundancy issues with our proposed regularizers. Specifically, our proposals improve DeiT and Swin, by $0.70\%\sim 1.76\%$ and $0.15\% \sim 0.32\%$ accuracy.
|
| 19 |
+
|
| 20 |
+
# Method
|
| 21 |
+
|
| 22 |
+
<figure id="fig:DViT" data-latex-placement="t">
|
| 23 |
+
<embed src="Figs/D_ViT.pdf" style="width:100.0%" />
|
| 24 |
+
<figcaption>(<em>Left</em>) A overall pipeline of vision transformers <span class="citation" data-cites="dosovitskiy2020image touvron2021training"></span>. Each image is divided into patches and transformed into embeddings via a linear projection layer. Then, embeddings are fed to the transformer encoder consists of MHA and FFN modules. Other operations like softmax and normalization are omitted here. (<em>Right</em>) An illustration of the redundancy of embedding, attention, and weight.</figcaption>
|
| 25 |
+
</figure>
|
| 26 |
+
|
| 27 |
+
Revisit that transformer architectures [@vaswani2017attention; @dosovitskiy2020image] usually contain the multi-head self-attention modules (MHA) and feed-forward networks (FFN). In MHA, keys, queries, and values are linearly transformed for computing the attention heads, and then all the heads are aggregated by another linear transformation. FFNs are also built on two linear transformations with activations, as shown in Fig. [2](#fig:DViT){reference-type="ref" reference="fig:DViT"}.
|
| 28 |
+
|
| 29 |
+
Here, we use $\boldsymbol{W}^{\mathrm{MHA}}$ and $\boldsymbol{W}^{\mathrm{FFN}}$ to denote the weights in MHA and FFN modules, respectively. $\boldsymbol{A}$ represents the attention map (or the affinity matrix). It is calculated by $\boldsymbol{A}=\mathrm{softmax}(\alpha\boldsymbol{Q}\boldsymbol{K}^{\top})$, where $\boldsymbol{Q}$ is the query matrix, $\boldsymbol{K}$ is the key matrix, and $\alpha$ is a scale (typically $\frac{1}{\sqrt{d}}$ and $d$ is the dimension of the keys and the queries). Let $\boldsymbol{e}^{l}=[\boldsymbol{e}^{l}_{\mathrm{class}},\boldsymbol{e}^{l}_1,\cdots,\boldsymbol{e}^{l}_n]$ be the feature embedding of layer $l$ ($1\le l\le \mathrm{L}$), where $n$ is the total number of image patches. Without loss of generality, we take image recognition as an example. Then, the vision transformer is optimized by minimizing a classification loss $\mathcal{L}(\mathcal{C}(\boldsymbol{e}_{\mathrm{class}}^{\mathrm{L}}),y)$, where $\mathcal{C}$ is the classification head and $y$ is the label of input samples.
|
| 30 |
+
|
| 31 |
+
We investigate the redundancy of the feature embedding by calculating the token-wise cosine similarity. It is depicted as follows: $$\begin{equation}
|
| 32 |
+
\mathcal{R}_{\mathrm{cosine}}^{s}(\boldsymbol{h}):=\frac{1}{n(n-1)}\sum_{i\not = j}\frac{|h_i^{^\top}h_j|}{\|h_i\|_2\|h_j\|_2},
|
| 33 |
+
\label{eq:cosine_same}
|
| 34 |
+
\end{equation}$$ $$\begin{equation}
|
| 35 |
+
\mathcal{R}_{\mathrm{cosine}}^{d}(\boldsymbol{h}^{l_1},\boldsymbol{h}^{l_2}):=\frac{1}{n}\sum_{i}\frac{|h_i^{l_1^\top}h_i^{l_2}|}{\|h_i^{l_1}\|_2\|h_i^{l_2}\|_2},
|
| 36 |
+
\label{eq:cosine_different}
|
| 37 |
+
\end{equation}$$ where $\boldsymbol{h}$ is the feature embedding $\boldsymbol{e}=[\boldsymbol{e}_{\mathrm{class}},\boldsymbol{e}_1,\cdots,\boldsymbol{e}_n]$ (superscript $l$ is omitted for simplicity), and $n$ is the total number of tokens.
|
| 38 |
+
|
| 39 |
+
Notably, $\mathcal{R}_{\mathrm{cosine}}^{s}(\boldsymbol{h})$ and $\mathcal{R}_{\mathrm{cosine}}^{d}(\boldsymbol{h}^{l_1},\boldsymbol{h}^{l_2})$ denote the cosine similarity of the feature embedding within the [s]{.underline}ame layer and across two [d]{.underline}ifferent layers $l_1$, $l_2$, respectively. The larger cosine similarity suggests more redundancy. Intuitively, the within-layer redundancy hinders ViT from capturing different tokens' features; and the cross-layer redundancy hurts the learning capacity of ViTs since highly correlated representations actually collapse the effective depth of ViT to fewer or even single transformer layer.
|
| 40 |
+
|
| 41 |
+
We consider $\mathcal{R}_{\mathrm{cosine}}(\boldsymbol{A})$ to measure the cosine similarity of attention maps within the same layer. Similarly, $\mathcal{R}_{\mathrm{MSE}}(\boldsymbol{A}):=\frac{1}{n(n-1)}\sum_{i\not = j}\|A_i-A_j\|_2^2$ can also be used for the redundancy quantification. In contrast to these two metrics which show the similarity across attention heads, we further use the standard deviation statistics to indicate the element-wise variance within an attention head.
|
| 42 |
+
|
| 43 |
+
If the parameter space is highly redundant, then the weight matrix will fall approximately into a low-rank parameters subspace. Thus, we use the reconstruction error to depict the weight redundancy: $$\begin{equation}
|
| 44 |
+
\mathcal{R}_{\mathrm{PCA}}(\boldsymbol{W}):=\|\boldsymbol{W} - \Tilde{\boldsymbol{W}}\|_2^2
|
| 45 |
+
\end{equation}$$ where $\Tilde{\boldsymbol{W}}$ is the reconstructed weight matrix by the principal component analysis (PCA) with the top-$k$ principal components. Given a fixed reconstruction error, the larger $k$ implies better diversity. In other words, given $k$, the larger reconstruction error means less weight redundancy. [@Chen_2019_ICCV; @liu2021learning] also dissect the weight redundancy from the view of rank.
|
| 46 |
+
|
| 47 |
+
To mitigate the observed redundancy, we introduce three groups of regularization to encourage the diversity of $i)$ learned feature embeddings; $ii)$ attention maps; $iii)$ model weights in the training of vision transformers.
|
| 48 |
+
|
| 49 |
+
To diversify patch feature embeddings, we use the cosine angle regularization $\mathcal{R}_{\mathrm{cosine}}^{s}(\boldsymbol{e})$ and $\mathcal{R}_{\mathrm{cosine}}^{d}(\boldsymbol{e}^{l_1},\boldsymbol{e}^{l_2})$ to constrain within-layer and cross-layer embedding, respectively. Similar methods are leveraged to obtain diversified representations in vision [@gong2021vision], language [@gao2019representation], and graph [@chen2020measuring] scenarios. Meanwhile, we adopt the contrastive regularization $\mathcal{R}_{\mathrm{contrastive}}^{d}(\boldsymbol{e}^{l_1},\boldsymbol{e}^{l_2})$ to boost cross-layer embedding diversity, which is presented as follows: $$\begin{equation}
|
| 50 |
+
\begin{aligned}
|
| 51 |
+
&\mathcal{R}_{\mathrm{contrastive}}(\boldsymbol{e}^{l_1}, \boldsymbol{e}^{l_2}) := \\
|
| 52 |
+
&-\frac{1}{n}\sum_{i=1}^n \mathrm{log} \frac{\mathrm{exp}(e_i^{l_1^\top} e_i^{l_2})}{\mathrm{exp}(e_i^{l_1^\top} e_i^{l_2}) + \mathrm{exp}(e_i^{l_1^\top}(\frac{1}{n-1}\sum_{j\not=i}e_j^{l_2}))},
|
| 53 |
+
\label{eq:contrastive}
|
| 54 |
+
\end{aligned}
|
| 55 |
+
\end{equation}$$ where $l_1$ and $l_2$ are two different layer indexes. Note that the contrasitve regularizer is not applicable for the within-layer embedding diversification since the lack of positive pairs.
|
| 56 |
+
|
| 57 |
+
$\rhd$ [*Rationale.*]{.underline} As pointed out by [@merikoski; @gong2021vision], the cosine angle regularization can function like minimizing the upper bound of the largest eigenvalue of patch embedding $\boldsymbol{e}$, hence bringing improvements of expressiveness [@gong2021vision] and diversity to learned representations. For contrastive regularization, it pulls embeddings corresponding to the same patch together and simultaneously pushes apart embeddings belonging to different patches, reducing the feature correlation between different layers. As a result, it enables to learn separable patch embedding and maintain tolerance to semantically similar patches [@wang2020understanding; @wang2021understanding], improving the representation qualities and the ViT performance.
|
| 58 |
+
|
| 59 |
+
In the same way, the cosine regularization $\mathcal{R}^s_{\mathrm{cosine}}(\boldsymbol{A})$ can be applied to remove the redundancy of attention, where $\boldsymbol{A}=[\boldsymbol{A}_1,\boldsymbol{A}_2,\cdots,\boldsymbol{A}_{\mathrm{H}}]$ and $\mathrm{H}$ is the number of attention heads within one layer. Inspired by the orthogonality regularization's empirical effectiveness in vision [@Chen_2019_ICCV; @lezama2018ole; @ranasinghe2021orthogonal] and language tasks [@zhang-etal-2021-orthogonality], we investigate it under the context of ViTs. We adopt the canonical soft orthogonal regularization (SO) [@bansal2018can] as follows: $$\begin{equation}
|
| 60 |
+
\mathcal{R}_{\mathrm{SO}}(\boldsymbol{A}):=\|\boldsymbol{A}^\top \boldsymbol{A} - \boldsymbol{I}\|_\mathrm{F}^2,
|
| 61 |
+
\end{equation}$$ where $\|\cdot\|_\mathrm{F}$ is the Frobenius norm and $\boldsymbol{I}$ is the identity matrix sharing the same size as $\boldsymbol{A}^\top \boldsymbol{A}$.
|
| 62 |
+
|
| 63 |
+
We also try an alternative Conditional number orthogonal regularization (CNO) [@Chen_2019_ICCV] as follows: $$\begin{equation}
|
| 64 |
+
\mathcal{R}_{\mathrm{CNO}}(\boldsymbol{A}) = \| \lambda_1(\boldsymbol{A}^\top\boldsymbol{A})-\lambda_2(\boldsymbol{A}^\top\boldsymbol{A})\|^2.
|
| 65 |
+
\end{equation}$$ It enforces the orthogonality via directly regularizing the conditional number $\kappa=\frac{\lambda_1}{\lambda_2}$ to 1, where $\lambda_1$ and $\lambda_2$ are the largest and smallest eigenvalues of the target matrix $\boldsymbol{A}^\top\boldsymbol{A}$. To make it computationally more tractable and stable, we alternatively constrain the difference between $\lambda_1$ and $\lambda_2$.
|
| 66 |
+
|
| 67 |
+
$\rhd$ [*Rationale.*]{.underline} These regularizations (i.e., SO and CNO) encourage diverse attention maps by constraining them to be orthogonal with each other, which actually upper-bounds the Lipschitz constant of learned function mappings [@zhang-etal-2021-orthogonality], leading to robust and informative representations. As illustrated in [@zhang-etal-2021-orthogonality], introducing an orthogonal diversity regularizer to the attention map also stabilizes the transformer training and boosts its generalization on NLP tasks.
|
| 68 |
+
|
| 69 |
+
Similarly, the orthogonality regularization, e.g., $\mathcal{R}_{\mathrm{CNO}}(\boldsymbol{W})$, can be easily plugged in and promote the diversity in ViT's weight space. Compared to orthogonality, hyperspherical uniformity is another more general diversity regularization demonstrated in [@liu2021learning]. Although it has been explored in CNNs, its study in ViTs has been absent so far. We study the minimum hyperspherical separation (MHS) regularizer, which maximizes the separation distance (or the smallest pairwise distance) as follows: $$\begin{equation}
|
| 70 |
+
\mathrm{max}_{\{\hat{\boldsymbol{w}}_1,\cdots,\hat{\boldsymbol{w}}_m\}\in\mathbb{S}^{t-1}}\{\mathcal{R}_{\mathrm{MHS}}(\hat{\boldsymbol{W}}):=\mathrm{min}_{i\not= j}\rho(\hat{\boldsymbol{w}}_i,\hat{\boldsymbol{w}}_j)\},
|
| 71 |
+
\label{eq:mhs}
|
| 72 |
+
\end{equation}$$ where $\boldsymbol{W}=[\boldsymbol{w}_1,\boldsymbol{w}_2\cdots,\boldsymbol{w}_m]$, $\hat{\boldsymbol{w}_i}:=\frac{\boldsymbol{w}_i}{\|\boldsymbol{w}_i\|}$ is the $i$th weight vector projected onto a unit hypersphere $\mathbb{S}^{t-1}:=\{\hat{\boldsymbol{w}}\in\mathbb{R}^t|\|\hat{\boldsymbol{w}}\|=1\}$, $\rho(\cdot,\cdot)$ is the geodesic distance on the unit hypersphere. As indicated in Equation [\[eq:mhs\]](#eq:mhs){reference-type="ref" reference="eq:mhs"}, it is formulated as a max-min optimization and we solve it with alternative gradient ascent/descent.
|
| 73 |
+
|
| 74 |
+
Furthermore, we examine another maximum gram determinant (MGD) regularizer $\mathcal{R}_{\mathrm{MGD}}(\hat{\boldsymbol{W}})$ as follows: $$\begin{equation}
|
| 75 |
+
\mathrm{max}_{\{\hat{\boldsymbol{w}}_1,\cdots,\hat{\boldsymbol{w}}_m\}\in\mathbb{S}^{t-1}}\mathrm{logdet}\big(\boldsymbol{G}:=(\mathcal{K}(\hat{\boldsymbol{w}}_i,\hat{\boldsymbol{w}}_j))^m_{i,j=1}\big),
|
| 76 |
+
\label{eq:mgd}
|
| 77 |
+
\end{equation}$$ where $\mathrm{det}(\boldsymbol{G})$ is the determinant of the kernel gram matrix $\boldsymbol{G}\in\mathbb{R}^{m\times m}$ and $\mathcal{K}(\boldsymbol{u},\boldsymbol{v}):=\mathrm{exp}(-\sum_{i=1}^t\epsilon^2(u_i-v_i)^2)$ denotes the kernel function with a scale $\epsilon>0$. By maximizing the $\mathrm{det}(\boldsymbol{G})$ of weights $\hat{\boldsymbol{W}}$, MGD forces weight vectors to uniformly dispersed over the hypersphere.
|
| 78 |
+
|
| 79 |
+
$\rhd$ [*Rationale.*]{.underline} As demonstrated in [@liu2018learning; @lin2020regularizing; @liu2021learning], the hyperspherical uniformity regularizations (i.e., MHS and MGD) characterizes the diversity of vectors on a unit hypersphere, which encodes a strong inductive bias with relational information. We believe it to benefit ViT training from two perspectives [@liu2021learning]: ($i$) eliminating weight redundancy and improving the representative capacity; ($ii$) learning better optimization and generalization by reducing the spurious local minima, evidenced in [@liu2021learning; @xie2017diverse; @lin2020regularizing; @liu2018learning].
|
| 80 |
+
|
| 81 |
+
<figure id="fig:current_vit" data-latex-placement="t">
|
| 82 |
+
<embed src="Figs/Current_ViT.pdf" style="width:100.0%" />
|
| 83 |
+
<figcaption>(<em>Left</em>) Redundancy comparisons in embedding, attention, and weight spaces of ViT <span class="citation" data-cites="dosovitskiy2020image"></span>, DeiT <span class="citation" data-cites="touvron2020training"></span>, Swin <span class="citation" data-cites="dosovitskiy2020image"></span>, SAM-ViT <span class="citation" data-cites="chen2021vision"></span>, Refiner <span class="citation" data-cites="zhou2021refiner"></span>, Pyramid-ViT <span class="citation" data-cites="wang2021pyramid"></span>, Cross-ViT <span class="citation" data-cites="chen2021crossvit"></span>, T2T <span class="citation" data-cites="yuan2021tokens"></span>, TNT <span class="citation" data-cites="han2021transformer"></span>, VOLO <span class="citation" data-cites="yuan2021volo"></span>, CvT <span class="citation" data-cites="wu2021cvt"></span>, and our diverse DeiT on ImageNet. We use publicly available pre-trained models for benchmarking their all levels of redundancy. For a fair comparison, most of selected pre-trained transformers share similar parameter counts, i.e., <span class="math inline">19M ∼ 27M</span>, while even the smallest released SAM-ViT and Refiner have <span class="math inline">83</span>M and <span class="math inline">78</span>M. <span class="math inline">↑</span>/<span class="math inline">↓</span> denote that the larger/smaller number indicates better diversity. <em>Cosine</em>, <em>cosine</em>, (normalized) <em>reconstruction error</em> are adopted to measure embedding, attention, and weight similarity. The former two are computed with <span class="math inline">10, 000</span> sub-sampled images from the ImageNet training set without data augmentation, following the standard in <span class="citation" data-cites="gong2021vision"></span>. (<em>Right</em>) The layer-wise similarity/reconstruction error of ViT, DeiT, and Swin, which B1<span class="math inline">∼</span>B8 is the corresponding transformer blocks (or layers).</figcaption>
|
| 84 |
+
</figure>
|
2203.11894/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a95089e551ef80fe1f66250c2ecffda03a735a5dd02f18190689f99589c34475
|
| 3 |
+
size 4807818
|
2204.07258/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
<mxfile host="app.diagrams.net" modified="2022-06-02T10:47:54.121Z" agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36" version="17.4.5" etag="bWhqn0Ytcjsz3lhr-s1h" type="google"><diagram id="zl9mmlc9yP7l4bN0E-G8">7V1bc+M2lv41rtp9MIu4g4/tdmeytcluprI1O3maoiXa1rQseik53T2/fkHxDoAXUQBISXQqbYsiKQrnOxd85+DgDn1++/6XJHx//TVeR9s76K+/36HHOwgR9LH4lR75kR0BAPDsyEuyWWfH/OrA75t/RfmJxdGPzTra58eyQ4c43h42782Dq3i3i1aHxrEwSeJvzdOe4+26ceA9fIkaj5Ee+H0VbiPltP/drA+v2VEOWXX852jz8lp8MqBB9s5bWJyc32L/Gq7jb7XPQl/u0Ockjg/ZX2/fP0fbdPSa4/JTy7vlgyXR7jDkApg/xuFH8d2itfiq+ctdvBO/HpL4Y7eO0it88SpODq/xS7wLt7/E8bs4CMTBf0aHw49cUOHHIRaHXg9v2/zd6Pvm8Pf0co/kr/7Ib5b+/fi9/uJH8WJ3SH7ULkpf/lHcL31RXXZ8VVy3P4TJ4VMq5OoLHI/9tNluyyvWxRnxe7T7n9fNLjtaO2d/SOKv0ed4GyfHgUH+8ad8pxA8OD7e8RPy74+ye+WvsHiVDXI6sk3hxx/JKj9ESI7jMHmJctERpooTlCAR6hXFb5H47uKUJNqGh82fzfuHOcxfyvPKS3+LN+KO0M918h7SwIOY+YRziAgMUHaHQkEh581bZo+e36XCl/ij9ljVoSPq9AhECwJnjMDgBhCIrw6Bz/HuIOEmCKygrkIZakdZHVGBO0RhLBBFmjBCkHkQBeXPMFANvDeAAsF++QOb985GwQRg88/9M9x+5IOjIPhF4PU9R0KB5DyW+TNKDtF3XZwUPhWX+4OFVAwIAeqAYJoOSHbsWxUnoSLae63FSAh4jLaLuDFaHUND+4em0uRUjb69bg7R7+/h0Qp+E/FqU2u34VO0fQhXX1+Ol9XU4qfjj6IWYowewmSVK0WqIutw/1p+3LNQtNpNvjym/w2WU10eRC+P3nH2zx9kpg4y+fxv4n/xvIc79vBltxIxfyJOedrGq6/i9x17/AdIf5PP/64I5HhVY9TD7eZFWKbHlfi24j7oIR2LjYi/P+VvvG3W6/Ryrfiaprox/O3iLCWRxAdhdeJdYUprAstdgTFZ5RcEXoBYU3WQRwkDGPLiX6JoEQy4BwqNq0sYQo/y+tXsfInzfrWq+ZZ8nBojX5Nuevy38CAkuzsegT7SqlGbF9OMv+JyBliu2lgSXx3G4pgJL4SDmlfAkpEk0MPnuKHOm0Oi3NycHwoWVJwT7cre8hwg6O5nVfYFFbMI/3Thc4sWoe/edkEB+kGRRaY2I1GoRqLU9xUHKg6qIsYGwiMA+0fhpCD01BCmTUuUqR09/rRM7VSd1ASwZRTcKtGG5Pr1zp5UkEYqdHvIx6shHvp/H3Hxxv3+GMp/Sr+///69elP89XL8ncW+KbX79Cyi37+nwa74fbgH4q8s4s0+Rzxi9lH5hYq93G437/u2sLYeH+/fM1r7efM9BYYslWe+ilYrncSfOMFEnczDcyWYX8GxdyRvMACMYubThhpqgljePJ9oZixt55wFBuwIDH9MD4Y1ifga68DA4RPSqL8pMGDm+dRHAQSEcipmI3MFg45PsQGGTyUY4GRgCCP+rLUMdMUj8ZS2wCDCEiACExBQn3GMikhhfmDQMEiSGP9WCs+a/Sbpfyd4bE1snP0Ykh6A0EMkFx1ikOC5ik/DTTmPQJGGHHYbgQ7ga240AmUTRqA6vsRqBHqF8WeL/C4s/oQ69sRq/HmF0Wc3FC4l+oQ6zsRu9DndVMRe9NkNhkuJPgs/fWvRZ7f0Lib6dFTMNFV5h1JiNKKkJRugzkIkcAJjb6TqSE6cBLCrRqSFLtcm55o3JnLy3VwNk6MiprkgrwVFFw4Y3b1JkWW1gBmyYMYJZrg9yHCHJoZeHVzGVVd2w8xQKtoA5kgtDwzkWkXqD80DD8Mz6MSzQRSyBYWXg8LUo2FYg6FBFGo9sTMY8quD4QXASVt1LTBkKFqfEE6BEzhd2nqCC4LhOKgMs2onmMkxC7B8N9i73fUvE01LTa110d7b1VoXBBZ0Xjw6NRMGU+DU3brbFBvEpoatl8FqvdgADC02gJbS2khXWCmNwm0WG2T46C42sCYVyxWOr2G60ktONZfZZvhwhfWObfJsyTgXbS6KNJXHWVD7IfPJWyFXFZCh83KEUzLNVpRdm4GGjDWxMR8o3Gb9Y4/0OPKCoJaAlopJZqzZA+oh63GnOK+YdxfHVttwv9+shhFTJ8enjdi26pEBzopXHa3jIhSpoRfhHpFmQ4ODWcDBoBsaDGF1laI0fEsVdve0fz+KwZcPiThpEx2f/HWzP8THMc+txFOYSHHBz3lccMgNft/NVXwKBPyShoF3IxeJJ5HwU3mMnQLtPR2441AS8aSPXTZiSICOuqtX7n3P93mznQk+D4DNuxQXxM/P++hsQCyLesfag8D3iF95AdYsPj1rWWffra2u6sRuCMK5kMzjYXY+fpidMoFAyrtZrBLAbvi6JT9rCHG4Zla4wSIBHZhdZdOwxmUtnPFlccYa/BiijHV3dsUY4wFcqW3GGKnf3ylfjHXM5MIXl+iYhi/GlinAXr54uiXR1vjiNnlePF+MdW327PLFLhMKzjjjHoBcCmeMWxsCXjVn3CO9i+WMsY4TnIwCGh+BOmOBNQl4EHiU1MQ7LlS9J0hzbyi5e4Ph6azIv4uQPJeoFaGKHq0JfmxdSyp59d7EK4IEC12FZ9XRb+7CB5Qr0ygYMC+A4wQOAt39hHWnDorsyIC+fYvoS1ExzQzajMGHvubW9uw9UcU8CStmdscEA2R/czkyGQzAuWQAxmBBXbt+NhbsyvWqeXmForzUBXtFs4YWMr5C1Zfq6DTcvCm0ygjrwm4duUehd4N3gLnSbeMSDCeJz42JAVNLWIgfDAKuGNnwR+20vE6j/cN0KxMAC7qVQfeEoy7Ceb6s0o3sC4zWFFdVmOLv3143U3eGisCaRExH2ASUofCczlB6aJeo4chDrNpqiEtRN88t1gyoGOJmzf317441KKvZMKNMNaPUghkdjAQ9x/qXf3zS0qonJdBUNXtJwnVa4VfnV48/5zLefZ0zhRKhJs0NIK6Km2oqiTUTPHmmNGqg3azsXlROVblAo3IWYu7BSNCwoUeV++OaVA4S3wPyZntuVY7qmEeruedP9dzz9TVD7DOyaSGqCMp5nnyWYuFAK/ypQiCq4yaXhWzntU7tMQkYexBXuUsfzRshGrJ68dhTbCFLkerBbWwhOxgZrvZ8ueJ1jYNsReVMyIxn09RNk8z
FUPQaigIUjVDf3W7nPPAArPifgMqYlbbMNMeGUzc9NxcIjoOgBV81PwgunOJEBAfVcIpsQk6R3gSnCPDknCJdOMWpVE7DKbIJOUV6E5wim5xSZNNSilOyRlPRigD30Ip0TqQRm5pWvML1Tj0AoX2s4rwAsrCKM5mpMQ2raGOmNhgZ7lnFq1z9NsRcXAaxyBZicSa2gmuqH5kzYpEzj7MatKTNWSBBnty0yRyvwxZqcc4gdEYtTgvChVyciOlgGnKRT0guslsgF6Fm0R7GTomOhVqcSuE01CKfkFpkt0AtAp95AFRLYuUdpRxrH5+aZpyORZqIZkRYWNyqKUrR6m2mpWl8apYR3RrLCAD0gqDqqhTweQNkYRnnMmnTsIzuJm0Apn4NVpZNjiqtztr4FIymO9c1E0YzM0013yWJeEaEZvEorpeiTpg3nWw5KvCRh2htOeqccTHREuUJA92pcMGoR2viA0qB4IxQoWsHKctj2WRkVGAAfZViOm+PEeJ6jxGu24Omf4+RJP6nUMxNvEsHNk42/0r/Ui8bOBESbx3Cj/RYes1DtVNJbk+ekpopasQrDeeEPlU3u898VWGWpCe78E1NMp1u91rmNzUpAdq46b2NTU74be3ePWb212IXm42xnDVEuseFOS17ygzcwkK/ezOpdT6SS++dddziA7ptZs3gtX2pbG0nqml0Q2E5+ax5PVTEJHW3h4BX5ADPCSkCHbF6DoPd3he+nOKpYVyYrHI1Kjq+lx8nRYmN5vL9gmpY2qA3vkC+JsCQFWDUMGv4ycyhiidOPepjtIrXwhFB/2kbr76mzlO4Q5A5XjWpcLxqpEPTCbBpgk9t9J/Eh/AYQGSX10WWm3hz0io6t3lcitaQRwkDWEyd8n81jZ4DWupcI1aHHuX1i5kBicMZGh0CVaODYKBaHN3uB6YsjultjOdrcTIMdFsc7UibsDgaZqlhcb7sdBbn4XYtTpu0yozoqeZG602s2BodWzS1rdEFOFpbo/W5hmzNAMLkWmwNmTC6ad2XojO6uWVb0yKtMrpBI6Kb2qa21i1O98R+aTZbazYLDTSbLfqcT9VsVrNfMgGWms2mvIHSN5b29Y3VPOGoi3C+otFUs9lARz7Yz+Tc315+D9O+/B7Q2MxpMjnAV3mXXza7KEzOi09mUz94jxlU8inpPu4AKkIotrUyXTQIfJV1ua5BFnOWwYNMvcIUmR9nNaqeaTQwda1VfxQxIBgopFsPBkqDMkVFNPDVPS0WAFgEANcBYMKSeOCrbNN1Wdq0pky1tNTzqUN31r3wcNEys1qGNIWqJdCn0TKVzroqLaPMKyaVvTpWxjjm1YwtauZQzYry3Kaa0SnV7PqqVvTMVO82SOfv4dUgrCwItY3ZOSYfaqvXJC4AsTN2+dLtUYQG1gmOYJBAsavTFeFxOKzM86qTQRKqqEHDONRh1CVC1nqZgiL7aRmDJ3qm/Iz+TQxVLjLbPLSGLtBAV59hzL114bmN1PkVYGx4QuRsU0ztTrsyyT94g1VMPYYhI5xCn/KAyG0KIfKYZDLNVfwBAK4VrhrggU7gtZjPcbZz5nBV7OlguALNzegw4zwKn92M6QXjcxTaTkf1AHyCopS54fCRM4ePAfPktgl4LECPO7alC/Klja9LsPLA833lbRvQ7eZ6rw26vdscW4Eu1thWh9AlmrVLmI2FLgG6WNUiQg0sqsW6lPsv0YsA2PEMZRXSoCS7WtjUhGNRT6SWGJksgLo7n6kTAvTk/DoB2IO+sp7DQXcYAAYUQJ67rN1wuQIMAjGC1SJ1JA+m7xXz/jnUKwCVev6P3ftHWsjjp3eB/n9/HLLXvZg3ufYudRyZ2rarkwlpUeTJ65cI7sM70BYAGwG8bqmn2RyAgVHDPvR8HshByQkGA3oY2BpCrgzhf0WHb3GSlqduwx9pueo1Qhkx7snGphfJgTUgq+Vxn+PdKjxEu6zo9jplAJnH6t3VJFdKAfUAVhTHhUCgWph2nmU5tdx6cDMCevxpCfPTT/0tPAhA7I4fLUK6O6u5S4Sgx6TZHmGBzolzXRk+LhOa54nPciO0stz0Vao0/eK77DFku3Y0kEs9oF8uSJlDOKaLb684JeQyOX43YHrua5glp6v0UeDBGlrlmRhKc7utK+uHr+BH3Sv4iZ+uV6+8mGTADE7poavGaaVZuwKjBrkwaqwDJnMzam52d1iMWotR0+THS8VzZdRkv8uQEhedZL7qVlKe9RDuBUEJYmqt/wiAOnrKrvUqGvxdtgWTW/bN3oLp1hvbFjS8fEFLPfh8Oc6Ym5i7K0QXR2XZUekqRqGzwoN7SFJH1ZqKzXyW0jn/ZPeVfUxl++Bk7uv66mEvCu66OhvornCR6cKyqir/ZFwzae4qk2bucO1gaXbyVl+aPZWLPn2JtgGnDgLf80HleyX7hX3qFWmsOTh1pCO+baDh5yxeE389TAgJAwJGgHic1+XQlDAoiO5ZiNdNRegY53G+4+tybzZdE9LshVYqkn3XJECiuCY41i+h4rmr2mcpn2bQ9SCoMTYSPGfZmMqA2SjLgMqBRqqdAEyX4JQlMs4SuClgHKt3dq1Bi1Y3FdhZbIkI8WpZcB8rDXGHNR8epYLtVYL793DX7u+fw7fN9kfm8ctCwPyE12j7Z5QWH8hvpOce/5VCBq6NGKj/n3dQfAX/r9mvv92lA1EGCdnztQUJ9hueZ8YmTtZRUrcRNP2vy0YM6iqYqWc7J+h7AHFpzpC9OndBQeOe90Wds9GG6OX6iBNb5R1/X1SrvPvAv9NWshrwIEBuUo5A4BX779aDTaCrtPA9jtqxMdyNGOB6gU77fw33X8WgK/sl7KPt831a0rLLCqOGzRmmrwIsW9xWC2uQV6wLo4QAldnVOf9UbtV6MnEdMSFE7ZYZA5xAnxB/iqL1/XOcfAuT9VDLPb2o0j2tlQ3R0h0ogI8wPg6/WhvIdT0nRdBDAINYjHCAxNUGmrkKs+8mcHPI/03FebaEgc3JnabnZKkwJ2BKVv/OlrMtcSLm6kQPiy9G/MKI2Fv6jAzQhFob8TmJ93vnFr3o7fK4SfJtf9BjFO4PpiyILO2UBfRz4yFst8r4aavkrVj7YuLpzICwugUZuky3aUF6l06OWAPXnf9orLyrWR959d14GzJ8fpmjikCmWY+GPUIrL4PbAaI3KhgidVkaAx4vTcrAjUBVc0VVXir9NFALeoalNdqb555j0YqUyxlRT9u89a83NFPNzIk5UiSHio2NuECxf7fL7IbDHsSWtkCXfFlJRzWa6ms7kdUOn+eyXJGVY8nF0mvBkwJfKUeha6dSht9nbJymdT/BqUC4V5s/Utwxo9F7BXkHEaGTI7dWY1j1XBhIhsZQO3YkZ/WKpzbVIB3gdiJ2qGnqYlEl7rTbAF24R+o2Zz7xGKaQBUBMyJHEdwKEWVp34SMRbYlZv9xBYpz1clD4em1e554y2U5wldoU/qUIoyW3Y4RqwZe6cY4JATDZUCOgCqBMc9eHvySmzxt8N+Wll5ugJCNkynxdf5wq199h7lrq2CgRE9BAqfNUtN
YCJVW0rlxyl/Pyv5nmziB3KbGhhuaQbvqQ3lYJVdE3tmnghncZt2rgmIfqyz+lW0KJkx1cAUyJzOXLttJgqVXhkk/cCHPJuDeDUl23RSbwoW5XLaygp0kJmsq7k3bycsm7N2XGA4+0t99BHM4lDV8YwSUNn0suYF2NkxDiIvKscvJqTs1ZTp444ieXnHzD+DWjBXh6tNBtGiAeES2wbmvjB7o8sIXJETmfT7yxhRy5d68typLgwNS8h6OFHOJlEqdSqNAgBun1VxGxpWf8Pw==</diagram></mxfile>
|
2204.07258/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,177 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
<figure id="fig:multi-input-transformer" data-latex-placement="tbp">
|
| 4 |
+
<div class="center">
|
| 5 |
+
<img src="figures/multi-input-causal-transformer" style="width:90.0%" />
|
| 6 |
+
</div>
|
| 7 |
+
<figcaption>Overview of our CT. We distinguish two timelines: time steps <span class="math inline">1, …, <em>t</em></span> refer to observational data (patient trajectories) and thus input; time steps <span class="math inline"><em>t</em> + 1, …, <em>t</em> + <em>τ</em></span> is the projection horizon and thus output. Three separate transformers are used in parallel for encoding observational data as input: treatments <span class="math inline"><strong>A</strong><sub><em>t</em></sub></span> / treatment interventions <span class="math inline"><strong>a</strong><sub><em>t</em></sub></span> (blue), outcomes <span class="math inline"><strong>Y</strong><sub><em>t</em></sub></span> / outcome predictions <span class="math inline">$\hat{\mathbf{Y}}_{t}$</span> (green), and time-varying covariates <span class="math inline"><strong>X</strong><sub><em>t</em></sub></span> (red). These are fused via <span class="math inline"><em>B</em></span> stacked multi-input blocks. Additional static covariates <span class="math inline"><strong>V</strong></span> (gray) are fed into all multi-input blocks. Each multi-input block further makes use of cross-attentions. Afterward, the three respective representation for treatments, outcomes, and time-varying covariates are averaged, giving the (balanced) representation <span class="math inline"><strong>Φ</strong><sub><em>t</em></sub></span> (purple). On top of that are two additional networks <span class="math inline"><em>G</em><sub><em>Y</em></sub></span> (outcome prediction network) and <span class="math inline"><em>G</em><sub><em>A</em></sub></span> (treatment classifier network) for learning balanced representations in our CDC loss. Layer normalizations and residual connections are omitted for clarity. </figcaption>
|
| 8 |
+
</figure>
|
| 9 |
+
|
| 10 |
+
Decision-making in medicine requires precise knowledge of individualized health outcomes over time after applying different treatments [@huang2012analysis; @hill2013assessing]. This then informs the choice of treatment plans and thus ensures effective care personalized to individual patients. Traditionally, the gold standard for estimating the effects of treatments are randomized controlled trials (RCTs). However, RCTs are costly, often impractical, or even unethical. To address this, there is a growing interest in estimating health outcomes over time from observational data, such as, e. g., electronic health records.
|
| 11 |
+
|
| 12 |
+
Numerous methods have been proposed for estimating (counterfactual) outcomes from observational data in the static setting [@van2006targeted; @chipman2010bart; @johansson2016learning; @curth2021nonparametric; @kuzmanovic2022estimating]. Different from that, we focus on longitudinal settings, that is, *over time*. In fact, longitudinal data are nowadays paramount in medical practice. For example, almost all electronic health records (EHRs) nowadays store sequences of medical events over time [@allam2021analyzing]. However, estimating counterfactual outcomes over time is challenging. One reason is that counterfactual outcomes are generally never observed. On top of that, directly estimating counterfactual outcomes with traditional machine learning methods in the presence of (time-varying) confounding has a larger generalization error of estimation [@alaa2018limits], or is even biased (in case of multiple-step-ahead prediction) [@robins2009estimation; @frauen2022estimating]. Instead, tailored methods are needed.
|
| 13 |
+
|
| 14 |
+
To estimate counterfactual outcomes over time, state-of-the-art methods make nowadays use of machine learning. Prominent examples are: recurrent marginal structural networks (RMSNs) [@lim2018forecasting], counterfactual recurrent network (CRN) [@bica2020estimating], and G-Net [@li2021g]. However, these methods build upon simple long short-term memory (LSTM) networks, because of which their ability to model complex, long-range dependencies in observational data is limited. Long-range dependencies are omnipresent in medical data; e. g., long-term treatment effects have been observed for obesity [@latner2000effective], multiple sclerosis [@sormani2015can], or diabetes [@jacobson2013long]. To address this, we develop a *Causal Transformer* (CT) for estimating counterfactual outcomes over time. It is carefully designed to capture complex, long-range dependencies in medical data that are nowadays common in EHRs.
|
| 15 |
+
|
| 16 |
+
In this paper, we aim at estimating counterfactual outcomes over time, that is, for one- and multi-step-ahead predictions. For this, we develop a novel *Causal Transformer* (CT). It combines two innovations: (1) a tailored transformer-based architecture to capture complex, long-range dependencies in the observational data; and (2) a novel counterfactual domain confusion (CDC) loss for end-to-end training.
|
| 17 |
+
|
| 18 |
+
For (1), we combine three separate transformer subnetworks for processing time-varying covariates, past treatments, and past outcomes, respectively, into a joint network with in-between cross-attentions. Here, each transformer subnetwork is further extended by (i) masked multi-head self-attention, (ii) shared trainable relative positional encoding, and (iii) attentional dropout.
|
| 19 |
+
|
| 20 |
+
For (2), we develop a custom end-to-end training procedure based on our CDC loss. This allows us to solve an adversarial balancing objective in which we balance representations to be (a) predictive of outcomes and (b) non-predictive of the current treatment assignment. The latter is crucial to address confounding bias and thus reduces the generalization error of counterfactual prediction. Importantly, this objective is different from previously proposed gradient reversal balancing [@ganin2015unsupervised; @bica2020estimating], as it aims to minimize a reversed KL-divergence to build balanced representations.
|
| 21 |
+
|
| 22 |
+
We demonstrate the effectiveness of our CT over state-of-the-art methods using an extensive series of experiments with synthetic and real-world data. Our ablation study (e.g., against a single-subnetwork architecture) shows that neither (1) nor (2) alone are sufficient for learning. Rather, it is crucial to combine our transformer-based architecture based on three subnetworks *and* our novel CDC loss.
|
| 23 |
+
|
| 24 |
+
Overall, our **main contributions** are as follows:[^1]
|
| 25 |
+
|
| 26 |
+
1. We propose a new end-to-end model for estimating counterfactual outcomes over time: the *Causal Transformer* (CT). To the best of our knowledge, this is the first transformer tailored to causal inference.
|
| 27 |
+
|
| 28 |
+
2. We develop a custom training procedure for our CT based on a novel counterfactual domain confusion (CDC) loss.
|
| 29 |
+
|
| 30 |
+
3. We use synthetic and real-world data to demonstrate that our CT achieves state-of-the-art performance. We further achieve this both for one- and multi-step-ahead predictions.
|
| 31 |
+
|
| 32 |
+
# Method
|
| 33 |
+
|
| 34 |
+
We build upon the standard setting for estimating counterfactual outcomes over time as in [@robins2009estimation; @lim2018forecasting; @bica2020estimating; @li2021g]. Let $i$ refer to some patient and with health trajectories that span time steps $t = 1, \dots, T^{(i)}$. For each time step $t$ and each patient $i$, we have the following: $d_x$ time-varying covariates $\mathbf{X}_{t}^{(i)} \in \mathbb{R}^{d_x}$; $d_a$ categorical treatments $\mathbf{A}_{t}^{(i)} \in \{a_1, \dots, a_{d_a}\}$; and $d_y$ outcomes $\mathbf{Y}_{t}^{(i)} \in \mathbb{R}^{d_y}$. For example, data from critical care units of COVID-19 patients would involve blood pressure and heart rate as time-varying covariates, ventilation as treatment, and respiratory frequency as outcome. Treatments are modeled as categorical variables as this relates to the question of whether to apply a treatment or not, and is thus consistent with prior works [@lim2018forecasting; @bica2020estimating; @li2021g]. Further, we record static covariates describing a patient $\mathbf{V}^{(i)}$ (e. g., gender, age, or other risk factors). For notation, we omit patient index $(i)$ unless needed.
|
| 35 |
+
|
| 36 |
+
For learning, we have access to i.i.d. observational data $\mathcal{D} = \big\{ \{\mathbf{x}_{t}^{(i)}, \mathbf{a}_{t}^{(i)}, \mathbf{y}_{t}^{(i)}\}_{t=1}^{T^{(i)}} \cup \mathbf{v}^{(i)} \big\}_{i=1}^N$. In clinical settings, such data are nowadays widely available in form of EHRs [@allam2021analyzing]. Here, we summarize the patient trajectory by $\bar{\mathbf{H}}_{t} = \{ \bar{\mathbf{X}}_{t}, \bar{\mathbf{A}}_{t-1}, \bar{\mathbf{Y}}_{t}, \mathbf{V} \}$, where $\bar{\mathbf{X}}_{t} = (\mathbf{X}_1, \dots, \mathbf{X}_t)$, $\bar{\mathbf{Y}}_{t} = (\mathbf{Y}_1, \dots, \mathbf{Y}_t)$, and $\bar{\mathbf{A}}_{t-1} = (\mathbf{A}_1, \dots, \mathbf{A}_{t-1})$.
|
| 37 |
+
|
| 38 |
+
We build upon the potential outcomes framework [@neyman1923application; @rubin1978bayesian] and its extension to time-varying treatments and outcomes [@robins2009estimation]. Let $\tau \geq 1$ denote projection horizon for a $\tau$-step-ahead prediction. Further, let $\bar{\mathbf{a}} _{t: t+\tau-1} = (\mathbf{a}_t, \ldots, \mathbf{a}_{t + \tau - 1})$ denote a given (non-random) treatment intervention. Then, we are interested in the potential outcomes, $\mathbf{Y}_{t + \tau}[\bar{\mathbf{a}}_{t: t+\tau-1}]$, under the treatment intervention. However, the potential outcomes for a specific treatment intervention are typically never observed for a patient but must be estimated. Formally, the potential counterfactual outcomes over time are identifiable from factual observational data $\mathcal{D}$ under three standard assumptions: (1) consistency, (2) sequential ignorability, and (3) sequential overlap (see Appendix [7](#app:assumptions){reference-type="ref" reference="app:assumptions"} for details).
|
| 39 |
+
|
| 40 |
+
Our task is thus to estimate future counterfactual outcomes $\mathbf{Y}_{t + \tau}$, after applying a treatment intervention $\bar{\mathbf{a}}_{t: t+\tau-1}$ for a given patient history $\bar{\mathbf{H}}_{t}$. Formally, we aim to estimate: $$\begin{equation}
|
| 41 |
+
\label{eq:estimand}
|
| 42 |
+
\mathbb{E} \big( \mathbf{Y}_{t + \tau}[\bar{\mathbf{a}}_{t: t+\tau-1}] \;\mid\; \bar{\mathbf{H}}_{t} \big) .
|
| 43 |
+
\end{equation}$$ To do so, we learn a function $g(\tau, \bar{\mathbf{a}}_{t: t+\tau-1}, \bar{\mathbf{H}}_{t})$. Simply estimating $g(\cdot)$ with traditional machine learning is biased [@robins2009estimation]. For example, one reason is that treatment interventions not only influence outcomes but also future covariates. To address this, we develop a tailored model for estimation.
|
| 44 |
+
|
| 45 |
+
Our *Causal Transformer* (CT) is a single multi-input architecture, which combines three separate transformer subnetworks. Each subnetwork processes a different sequence as input: (i) past time-varying covariates $\bar{\mathbf{X}}_{t}$; (ii) past outcomes $\bar{\mathbf{Y}}_{t}$; and (iii) past treatments before intervention $\bar{\mathbf{A}}_{t-1}$. Since we aim at estimating the counterfactual outcome after treatment intervention, we further input the future treatment assignment that a medical practitioners wants to intervene on. Also, we autoregressively feed predictions of outcomes $\bar{\hat{\mathbf{Y}}}_{t+1:t+\tau-1}$, starting at the intervention time step (prediction origin). Thus, we concatenate two treatment sequences $\bar{\mathbf{A}}_{t-1} \cup \bar{\mathbf{a}}_{t: t+\tau-1}$, and two outcome sequences $\bar{\mathbf{Y}}_{t} \cup \bar{\hat{\mathbf{Y}}}_{t+1:t+\tau-1}$ for input. Additionally, (iv) the static covariates $\mathbf{V}$ are fed into all subnetworks.
|
| 46 |
+
|
| 47 |
+
Our CT yields a sequence of treatment-invariant (balanced) *representations* $\bar{\mathbf{\Phi}}_{t+\tau-1} = (\mathbf{\Phi}_1, \dots, \mathbf{\Phi}_{t+\tau-1})$. To do so, we stack $B$ identical *transformer blocks*. The first transformer block receives the three input sequences. The $B$-th transformer block outputs a sequence of representations $\bar{\mathbf{\Phi}}_{t+\tau-1}$. The architecture is shown in Fig. [1](#fig:multi-input-transformer){reference-type="ref" reference="fig:multi-input-transformer"}.
|
| 48 |
+
|
| 49 |
+
Let $b = 1, \ldots, B$ index the different transformer blocks. Each transformer block receives three parallel sequences of hidden states as input (for each of the input sequences). For time step $t$, we denote the respective hidden state by $\mathbf{A}^b_t$ or $\mathbf{a}^b_t$; $\mathbf{Y}^b_t$ or $\hat{\mathbf{Y}}^b_t$; and $\mathbf{X}^b_t$. We denote size of the hidden states by $d_h$. Further, each transformer block receives a representation vector of the static covariates $\tilde{\mathbf{V}}$ as additional input.
|
| 50 |
+
|
| 51 |
+
For the first transformer block ($b=1$), we use linearly-transformed time-series as input: $$\begin{align}
|
| 52 |
+
\begin{split}
|
| 53 |
+
& \mathbf{A}_t^0, \mathbf{a}_t^0 = \operatorname{Linear}_A(\mathbf{A}_{t}, \mathbf{a}_{t}), \quad \,\, \mathbf{X}_t^0 = \operatorname{Linear}_X(\mathbf{X}_{t}), \\
|
| 54 |
+
& \mathbf{Y}_t^0, \hat{\mathbf{Y}}_t^0 = \operatorname{Linear}_Y(\mathbf{Y}_{t}, \hat{\mathbf{Y}}_t), \quad \,\, \tilde{\mathbf{V}} = \operatorname{Linear}_V(\mathbf{V}),
|
| 55 |
+
\end{split}
|
| 56 |
+
\end{align}$$ where parameters of fully-connected linear layers are shared for all time steps. All blocks $\ge 2$ use the output sequence of the previous block $b-1$ as inputs. For notation, we denote sequences of hidden states after block $b$ by three tensors $\mathrm{A}^b = \big(\bar{\mathbf{A}}^b_{t-1} \cup \bar{\mathbf{a}}^b_{t: t+\tau-1} \big)^\top, \mathrm{X}^b = \big(\bar{\mathbf{X}}_{t}^b\big)^\top$, and $\mathrm{Y}^b = \big(\bar{\mathbf{Y}}_{t}^b \cup \bar{\hat{\mathbf{Y}}}_{t+1:t+\tau-1}^b \big), ^\top$.
|
| 57 |
+
|
| 58 |
+
Following [@dong2021attention; @lu2021pretrained], each transformer block combines a (i) multi-head self-/cross-attention, (ii) feed-forward layer, and (iii) layer normalization. Details are in Appendix [9](#app:CT-block){reference-type="ref" reference="app:CT-block"}.
|
| 59 |
+
|
| 60 |
+
[(i) Multi-head self-/cross-attention]{.underline} uses a scaled dot-product attention with several parallel attention heads. Each attention head requires a 3-tuple of keys, queries, and values, i. e., $K, Q, V \in \mathbb{R}^{T \times d_{qkv}}$, respectively. These are obtained from a sequence of hidden states $\mathrm{H}^b = \big(\mathbf{h}_1^b, \dots, \mathbf{h}_t^b\big)^\top \in \mathbb{R}^{T \times d_h}$ ($\mathrm{H}^b$ is one of $\mathrm{A}^b$, $\mathrm{X}^b$ or $\mathrm{Y}^b$, depending on the subnetwork). Formally, we compute $$\begin{align}
|
| 61 |
+
% \begin{split}
|
| 62 |
+
& \operatorname{Attn}^{(i)}(Q^{(i)}, K^{(i)}, V^{(i)}) = \operatorname{softmax}\Big(\frac{Q^{(i)}K^{(i)}{}^\top}{\sqrt{d_{qkv}}}\Big) V^{(i)} , \label{eq:attention}
|
| 63 |
+
% \end{split}
|
| 64 |
+
\\
|
| 65 |
+
% \begin{split}
|
| 66 |
+
& Q^{(i)} = Q^{(i)}(\mathrm{H}^b) = \mathrm{H}^b \, W_Q^{(i)} + \mathbf{1} b_Q^{(i)}{}^\top , \\
|
| 67 |
+
& K^{(i)} = K^{(i)}(\mathrm{H}^b) = \mathrm{H}^b \, W_K^{(i)} + \mathbf{1} b^{(i)}_K{}^\top , \\
|
| 68 |
+
& V^{(i)} = V^{(i)}(\mathrm{H}^b) = \mathrm{H}^b \, W_V^{(i)} + \mathbf{1} b_V^{(i)}{}^\top ,
|
| 69 |
+
% \end{split}
|
| 70 |
+
\end{align}$$ where $W_Q^{(i)}, W_K^{(i)}, W_V^{(i)} \in \mathbb{R}^{d_h \times d_{qkv}}$ and $b_Q^{(i)}$, $b_Q^{(i)}$, $b_V^{(i)} \in \mathbb{R}^{d_{qkv}}$ are parameters of a single attention head $i$, where $\operatorname{softmax}(\cdot)$ operates separately on each row, and where $\mathbf{1} \in \mathbb{R}^{d_{qkv}}$ is a vector of ones. We set the dimensionality of keys and queries to $d_{qkv} = d_{h} / n_h$, where $n_h$ is the number of heads.
|
| 71 |
+
|
| 72 |
+
The output of a multi-head attention is a concatenation of the different heads, i. e., $$\begin{equation}
|
| 73 |
+
\operatorname{MHA}(Q, K, V) = \operatorname{Concat}(\operatorname{Attn}^{(1)}, \dots, \operatorname{Attn}^{(n_h)}) .
|
| 74 |
+
\end{equation}$$ Here, we simplified the original multi-head attention in [@vaswani2017attention] by omitting the final output projection layer after concatenation to reduce risk of overfitting.
|
| 75 |
+
|
| 76 |
+
In our CT, self-attention uses the sequence of hidden states from the same transformer subnetwork to infer keys, queries, and values, while cross-attention uses the sequence of hidden states of the other two transformer subnetworks as keys and values. We use multiple cross-attentions to exchange the information between parallel hidden states.[^2] These are placed on top of the self-attention layers (see subdiagram in Fig. [1](#fig:multi-input-transformer){reference-type="ref" reference="fig:multi-input-transformer"}). We add the representation vector of static covariates, $\tilde{\mathbf{V}}$ when pooling different cross-attention outputs. We mask hidden states for self- and cross-attentions by setting the attention logits in Eq. [\[eq:attention\]](#eq:attention){reference-type="eqref" reference="eq:attention"} to $-\infty$. This ensures that information flows only from the current input to future hidden states (and not the other way around).
|
| 77 |
+
|
| 78 |
+
[(ii) Feed-forward layer]{.underline} ($\operatorname{FF}$) with ReLU activation is applied time-step-wise to the sequence of hidden states, i. e., $$\begin{equation*}
|
| 79 |
+
%\begin{split}
|
| 80 |
+
\operatorname{FF}(\mathbf{h}_t) = \operatorname{Linear} \big( \operatorname{ReLU}
|
| 81 |
+
(\operatorname{Linear}(\mathbf{h}_t))\big),
|
| 82 |
+
%\end{split}
|
| 83 |
+
\end{equation*}$$ where fully-connected linear layers are followed by dropout.
|
| 84 |
+
|
| 85 |
+
[(iii) Layer normalization]{.underline} ($\operatorname{LN}$) [@lei2016layer] and residual connections are added after each self- and cross-attention. We compute the layer normalization via $$\begin{equation}
|
| 86 |
+
\operatorname{LN}(\mathbf{h}_t) = \frac{\gamma}{\sigma} \odot (\mathbf{h}_t - \mu) + \beta ,
|
| 87 |
+
\end{equation}$$ $$\begin{equation}
|
| 88 |
+
\mu = \frac{1}{d_h} \sum_{j=1}^{d_h} (\mathbf{h}_t)_j, \quad \sigma = \sqrt{\frac{1}{d_h} \sum_{j=1}^{d_h} \big((\mathbf{h}_t)_j - \mu \big)^2} ,
|
| 89 |
+
\end{equation}$$ where $\gamma, \beta \in \mathbb{R}^{d_h}$ are scale and shift parameters and where $\odot$ is an element-wise product.
|
| 90 |
+
|
| 91 |
+
**Balanced representations.** The (balanced) representations are then constructed via average pooling over three (or two) parallel hidden states of the $B$-th transformer block. Thereby, we use a fully-connected linear layer and an exponential linear unit (ELU) non-linearity; i. e., $$\begin{align}
|
| 92 |
+
\nonumber
|
| 93 |
+
& \mathbf{\tilde{\Phi}}_i =
|
| 94 |
+
\begin{cases}
|
| 95 |
+
\frac{1}{3}(\mathbf{A}_{i-1}^{B} + \mathbf{X}_i^{B} + \mathbf{Y}_i^{B}), & i \in \{1, \dots, t\} , \\
|
| 96 |
+
\frac{1}{2}(\mathbf{a}_{i-1}^{B} + \hat{\mathbf{Y}}_i^{B}), & i \in \{t+1, \dots, t + \tau - 1\} ,
|
| 97 |
+
\end{cases} \\
|
| 98 |
+
& \mathbf{\Phi}_t = \operatorname{ELU}(\operatorname{Linear}(\mathbf{\tilde{\Phi}}_t)) \label{eq:output-repr}
|
| 99 |
+
\end{align}$$ where fully-connected linear layer is followed by dropout, $\mathbf{\Phi}_t \in \mathbb{R}^{d_r}$ and $d_r$ is the dimensionality of the balanced representation.
|
| 100 |
+
|
| 101 |
+
In order to preserve information about the order of hidden states, we make use of position encoding (PE). This is especially relevant for clinical practice as it allows us to distinguish sequences such as, e. g., $\langle$treatment A $\mapsto$ side-effect S $\mapsto$ treatment B$\rangle$ from $\langle$treatment A $\mapsto$ treatment B $\mapsto$ side-effect S$\rangle$.
|
| 102 |
+
|
| 103 |
+
We model information about relative positions in the input at time steps $j$ and $i$ with $0 \le j \le i \le t$ by a set of vectors $a^V_{ij}, a^K_{ij} \in \mathbb{R}^{d_{qkv}}$ [@shaw2018self]. Specifically, they are shaped in the form of Toeplitz matrices $$\begin{align}
|
| 104 |
+
& a^V_{ij} = w^V_{\operatorname{clip}(j-i, l_{\text{max}})}, \qquad a^K_{ij} = w^K_{\operatorname{clip}(j-i, l_{\text{max}})}, \\
|
| 105 |
+
& \operatorname{clip}(x, l_{\text{max}}) = \max\{ -l_{\text{max}}, \min\{ l_{\text{max}}, x \}\}
|
| 106 |
+
\end{align}$$ with trainable weights $w^K_l, w^V_l \in \mathbb{R}^{d_{qkv}}$, for $l \in \{-l_{\text{max}}, \dots, 0\}$, and where $l_{\text{max}}$ is the maximum distinguishable distance in the relative PE. The above formalization ensures that we obtain *relative* encodings, that is, our CT considers the distance between past or current position $j$ and current position $i$, but not the actual location. Furthermore, the current position $i$ attends only to past information or itself, and, thus, we never use $a^V_{ij}$ and $a^K_{ij}$ where $i < j$. As a result, there are only $(l_{\text{max}} + 1) \times d_{qkv}$ parameters to estimate.
|
| 107 |
+
|
| 108 |
+
We then use the relative PE to modify the self-attention operation (Eq. [\[eq:attention\]](#eq:attention){reference-type="eqref" reference="eq:attention"}). Formally, we compute the attention scores via (indices of heads are dropped for clarity) $$\begin{align}
|
| 109 |
+
& (\operatorname{Attn}(Q, K, V))_i = \sum_{j=1}^t \alpha_{ij}(V_j + a_{ij}^V) , \\
|
| 110 |
+
& \alpha_{ij} = \operatorname{softmax}_j \left(\frac{Q_i^\top (K_j + a_{ij}^K)}{\sqrt{d_{qkv}}} \right) , \label{eq:attn-relative-enc}
|
| 111 |
+
\end{align}$$ with attention scores $\alpha_{ij}$ and where $K_j$, $V_j$, and $Q_i$ are columns of corresponding matrices and where $\operatorname{softmax}_j$ operates with respect to index $j$. Cross-attention with PE is defined in an analogous way. In our CT, the attention scores are shared across all the heads and blocks, as well as the three different subnetworks.
|
| 112 |
+
|
| 113 |
+
In our CT, we use relative positional encodings [@shaw2018self] that are incorporated in every self- and cross-attention. This is different from the original transformer [@vaswani2017attention], which used absolute positional encodings with fixed weights for the initial hidden states of the first transformer block (see Appendix [10](#app:abs-pe){reference-type="ref" reference="app:abs-pe"} for details). However, relative PE is regarded as more robust and, further, suited for patient trajectories where the order of treatments and diagnoses is informative [@allam2021analyzing], but not the absolute time step. Additionally, it allows for better generalization to unseen sequence lengths: for the ranges beyond the maximal distinguishable distance $l_{\text{max}}$, CT stops to distinguish the precise relative location of states and considers everything as distant past information. In line with this, our experiments later also confirm relative PE to be superior over absolute PE.
|
| 114 |
+
|
| 115 |
+
<figure id="fig:results-tg-sim" data-latex-placement="tbp">
|
| 116 |
+
<p><span id="fig:results-tg-sim-one-step" data-label="fig:results-tg-sim-one-step"></span> <span id="fig:results-tg-sim-six-step-timing" data-label="fig:results-tg-sim-six-step-timing"></span> <span id="fig:results-tg-sim-six-step-rand" data-label="fig:results-tg-sim-six-step-rand"></span></p>
|
| 117 |
+
<figcaption>Results for fully-synthetic data based on tumor growth simulator (lower values are better). Shown is the mean performance averaged over five runs with different seeds. Here: <span class="math inline"><em>τ</em> = 6</span>.</figcaption>
|
| 118 |
+
</figure>
|
| 119 |
+
|
| 120 |
+
:::: table*
|
| 121 |
+
::: center
|
| 122 |
+
:::
|
| 123 |
+
::::
|
| 124 |
+
|
| 125 |
+
In our CT, we aim at two simultaneous objectives to address confounding bias: we aim at learning representations that are (a) predictive of the next outcome and (b) are non-predictive of the current treatment assignment. This thus naturally yields an adversarial objective. For this purpose, we make use of balanced representations, which we train via a novel *counterfactual domain confusion (CDC) loss*.
|
| 126 |
+
|
| 127 |
+
As in [@bica2020estimating], we build *balanced* representations that allow us to achieve the adversarial objectives (a) and (b). For this, we put two fully-connected networks on top of the representation $\mathbf{\Phi}_t$, corresponding to the respective objectives: (a) an outcome prediction network $G_Y$ and (b) a treatment classifier network $G_A$. Both receive the representation $\mathbf{\Phi}_t$ as input; the outcome prediction network additionally receives the current treatment $\mathbf{A}_t$. We implement both as single hidden layer fully-connected networks with number of units $n_{\text{FC}}$ and ELU activation. For notation, let $\theta_{Y}$ and $\theta_{A}$ denote the trainable parameters in $G_Y$ and $G_A$, respectively. Further, let $\theta_{R}$ denote all trainable parameters in CT for generating the representation $\mathbf{\Phi}_t$.
|
| 128 |
+
|
| 129 |
+
For objective (a), we fit the outcome prediction network $G_Y$, and thus $\mathbf{\Phi}_t$, by minimizing the factual loss of the next outcome. This can be done, e. g., via the mean squared error (MSE). We then yield $$\begin{align}
|
| 130 |
+
& \mathcal{L}_{G_Y} (\theta_Y, \theta_R) = \left\Vert \mathbf{Y}_{t+1} - G_Y\big(\mathbf{\Phi}_t(\theta_R), \mathbf{A}_t; \theta_Y \big) \right\Vert^2 .
|
| 131 |
+
\end{align}$$
|
| 132 |
+
|
| 133 |
+
For objective (b), we want to fit the treatment classifier network $G_A$, and thus the representation $\mathbf{\Phi}_t$, in way that it is non-predictive of the current treatment $\mathbf{A}_t$. To achieve this, we develop a novel CDC loss tailored for counterfactual inference. Our idea builds upon the domain confusion loss [@tzeng2015simultaneous] for handling adversarial objectives, which was previously used for unsupervised domain adaptation, whereas we adapt it specifically for counterfactual inference.
|
| 134 |
+
|
| 135 |
+
Then, we fit $G_A$ so that it can predict the current treatment, i. e., via $$\begin{equation}
|
| 136 |
+
\label{eq:loss-ga}
|
| 137 |
+
\hspace{-0.3cm}
|
| 138 |
+
\mathcal{L}_{G_A} (\theta_A, \theta_R) = - \sum_{j=1}^{d_a} \mathbbm{1}_{[\mathbf{A}_t = a_j]} \log G_A (\mathbf{\Phi}_t(\theta_R); \theta_A) ,
|
| 139 |
+
\end{equation}$$ where $\mathbbm{1}_{[\cdot]}$ is the indicator function. This thus minimizes a classification loss of the current treatment assignment given $\mathbf{\Phi}_t$. However, while $G_A$ can predict the current treatment, the actual representation $\mathbf{\Phi}_t$ should not, and should rather be non-predictive. For this, we propose to minimize the cross-entropy between a uniform distribution over treatment categorical space and predictions of $G_A$ via $$\begin{equation}
|
| 140 |
+
\label{eq:loss-conf}
|
| 141 |
+
\mathcal{L}_{\text{conf}} (\theta_A, \theta_R) = - \sum_{j=1}^{d_a} \frac{1}{d_a} \log G_A (\mathbf{\Phi}_t(\theta_R); \theta_A) ,
|
| 142 |
+
\end{equation}$$ thus achieving domain confusion.
|
| 143 |
+
|
| 144 |
+
Using the above, CT is trained via $$\begin{align}
|
| 145 |
+
\hspace{-0.3cm}
|
| 146 |
+
(\hat{\theta}_Y, \hat{\theta}_R) & = \mathop{\mathrm{arg\,min}}_{\theta_Y, \theta_R} \mathcal{L}_{G_Y} (\theta_Y, \theta_R) + \alpha \mathcal{L}_{\text{conf}} (\hat{\theta}_A, \theta_R) , \label{eq:loss-yr}\\
|
| 147 |
+
\hat{\theta}_A & = \mathop{\mathrm{arg\,min}}_{\theta_A} \alpha \mathcal{L}_{G_A} (\theta_A, \hat{\theta}_R) , \label{eq:loss-a}
|
| 148 |
+
\end{align}$$ where $\alpha$ is a hyperparameter for domain confusion. Thereby, optimal values of $\hat{\theta}_Y$, $\hat{\theta}_R$ and $\hat{\theta}_A$ achieve an equilibrium between factual outcome prediction and domain confusion. In CT, we implement this by performing iterative updates of the parameters (rather than optimizing globally). Details are in Appendix [11](#app:adv-training){reference-type="ref" reference="app:adv-training"}.
|
| 149 |
+
|
| 150 |
+
Previous work [@bica2020estimating] has addressed the above adversarial objective through gradient reversal [@ganin2015unsupervised]. However, this has two shortcomings: (i) If the parameter $\lambda$ of gradient reversal becomes too large, the representation may be predictive of opposite treatment [@atan2018counterfactual]. (ii) If the treatment classifier network learns too fast, gradients vanish and are not passed to representations, leading to poor fit [@tzeng2017adversarial]. Different from that, we propose a novel CDC loss. As we see later, our loss is highly effective: it even improves CRN [@bica2020estimating], when replacing gradient reversal with our loss.
|
| 151 |
+
|
| 152 |
+
We further stabilize the above adversarial training by employing exponential moving average (EMA) of model parameters during training [@yaz2018unusual]. EMA helps to limit cycles of model parameters around the equilibrium with vanishing amplitude and thus accelerates overall convergence. We apply EMA to all trainable parameters (i. e., $\theta_Y$, $\theta_R$, $\theta_A$). Formally, we update parameters during training via $$\begin{equation}
|
| 153 |
+
\theta^{(i)}_{\text{EMA}} = \beta \, \theta^{(i - 1)}_{\text{EMA}} + (1 - \beta) \, \theta^{(i)} ,
|
| 154 |
+
\end{equation}$$ where superscripts $(i)$ refers to the different steps of the optimization algorithm, where $\beta$ is a exponential smoothing parameter, and where we initialize $\theta^{(0)}_{\text{EMA}} = \theta^{(0)}$. We provide pseudocode for an iterative gradient update in CT via EMA in Appendix [11](#app:adv-training){reference-type="ref" reference="app:adv-training"}.
|
| 155 |
+
|
| 156 |
+
To reduce the risk of overfitting between time steps, we implement attentional dropout via DropAttention [@zehui2019dropattention]. During training, attention scores $\alpha_{ij}$ in Eq. [\[eq:attn-relative-enc\]](#eq:attn-relative-enc){reference-type="eqref" reference="eq:attn-relative-enc"} are element-wise randomly set to zero with probability $p$ (i. e., the dropout rate). However, we make a small simplification. We do not perform normalized rescaling [@zehui2019dropattention] of attention scores but opt for traditional dropout rescaling [@srivastava2014dropout], as this resulted in more stable training for short-length sequences.
|
| 157 |
+
|
| 158 |
+
For training data $\mathcal{D}$, we always have access to the full time-series, that is, including all time-varying covariates $\mathbf{x}_{1}^{(i)}, \dots, \mathbf{x}_{T^{(i)}}^{(i)}$. However, upon deployment, these are no longer observable for $\tau$-step-ahead predictions with $\tau \ge 2$. To reflect this during training, we perform data augmentation at the mini-batch level. For this, we duplicate the training samples: We uniformly sample the length $1 \leq t_s \leq T^{(i)}$ of the masking window, and then create a duplicate data sample where the last $t_s$ time-varying covariates $\mathbf{x}_{t_s}^{(i)}, \dots, \mathbf{x}_{T^{(i)}}^{(i)}$ are masked by setting the corresponding attention logits of $\mathrm{H}^b = \mathrm{X}^b$ in Eq. [\[eq:attention\]](#eq:attention){reference-type="eqref" reference="eq:attention"} to $-\infty$.
|
| 159 |
+
|
| 160 |
+
Mini-batch augmentation with masking allows us to train a single model for both one- and multiple-step-ahead prediction in end-to-end fashion. This distinguishes our CT from RMSNs and CRN, which are built on top of encoder-decoder architectures and trained in a multiple-stage procedure. Later, we also experiment with an encoder-decoder version of CT(i.e., a single-subnetwork variant) but find that it is inferior performance to our end-to-end model.
|
| 161 |
+
|
| 162 |
+
The following result provides a theoretical justification that our CDC loss indeed leads to balanced representations, and, thus, removes the bias induced by time-varying confounders.[^3]
|
| 163 |
+
|
| 164 |
+
::: {#thrm:domain_conf_loss_short .theorem}
|
| 165 |
+
**Theorem 1**. *We fix $t \in \mathbb{N}$ and define $P$ as the distribution of $\bar{\mathbf{H}}_t$, $P_j$ as the distribution of $\bar{\mathbf{H}}_t$ given $\mathbf{A}_t = a_j$, and $P^\Phi_j$ as the distribution of $\mathbf{\Phi}_t = \Phi(\bar{\mathbf{H}}_t)$ given $\mathbf{A}_t = a_j$ for all $j \in \{1, \dots, d_a\}$. Here, $\Phi(\cdot) = \Phi(\cdot; \theta_R)$ denotes any network that generates representations. Let $G^j_A$ denote the output of $G_A$ corresponding to treatment $a_j$. Then, there exists an optimal pair $(\Phi^\ast, G^\ast_A)$ such that $$\begin{align}
|
| 166 |
+
\
|
| 167 |
+
\Phi^\ast &= \mathop{\mathrm{arg\,max}}_{\Phi} \sum_{j=1}^{d_a} \mathbb{E}_{\bar{\mathbf{H}}_t \sim P}\left[ \log{G^\ast}^j_A(\Phi(\bar{\mathbf{H}}_t) \right] \label{eq:phi-star}\\
|
| 168 |
+
G^\ast_A &= \mathop{\mathrm{arg\,max}}_{G_A} \sum_{j=1}^{d_a} \mathbb{E}_{\bar{\mathbf{H}}_t \sim P_j}\left[\log {G}^j_A(\Phi^\ast(\bar{\mathbf{H}}_t) \right] \mathbb{P}(\mathbf{A}_t = a_j) \label{eq:ga-star}\\
|
| 169 |
+
& \text{subject to } \sum_{i=1}^{d_a} {G}^i_A(\Phi^\ast(\bar{\mathbf{H}}_t)) = 1.
|
| 170 |
+
\end{align}$$ Furthermore, $\Phi^\ast$ satisfies Eq. [\[eq:phi-star\]](#eq:phi-star){reference-type="eqref" reference="eq:phi-star"} if and only if it induces balanced representations across treatments, i.e., $P^{\Phi^\ast}_1 = \ldots = P^{\Phi^\ast}_{d_a}$.*
|
| 171 |
+
:::
|
| 172 |
+
|
| 173 |
+
::: proof
|
| 174 |
+
*Proof.* See Appendix [12](#app:proof){reference-type="ref" reference="app:proof"}. ◻
|
| 175 |
+
:::
|
| 176 |
+
|
| 177 |
+
Further, it can be easily shown that objectives [\[eq:loss-ga\]](#eq:loss-ga){reference-type="eqref" reference="eq:loss-ga"} and [\[eq:loss-conf\]](#eq:loss-conf){reference-type="eqref" reference="eq:loss-conf"} are exactly finite sample versions of [\[eq:ga-star\]](#eq:ga-star){reference-type="eqref" reference="eq:ga-star"} and [\[eq:phi-star\]](#eq:phi-star){reference-type="eqref" reference="eq:phi-star"} from Theorem [1](#thrm:domain_conf_loss_short){reference-type="ref" reference="thrm:domain_conf_loss_short"}, respectively.
|
2204.09263/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:128d07f5fe7e739ccad65f7ba6ad98e414c1fe771db22ccf439918a68bfae767
|
| 3 |
+
size 1860644
|
2205.05871/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:cc2057cbc2ad24da50ed15d0e5e78fa811cade28fe8968e5e6ae97991c8a9c5e
|
| 3 |
+
size 1243600
|
2205.06688/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f24da4dd6af5994d0e60e4cfdc4b6da353935f29f481de4fdad6cc1babdce81f
|
| 3 |
+
size 37440550
|
2205.13346/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4749496aeffb0cfa7245e96f8d3cc5a55952b2e3d6669f736ecca06321027ece
|
| 3 |
+
size 681197
|
2205.15674/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2205.15674/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,39 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Implicit neural representations (INRs) are a class of techniques to parametrise signals using neural networks [@stanley2007compositional; @park2019deepsdf; @sitzmann2020implicit; @lipman2021phase]. INRs are trained to map each point in a given domain to the corresponding value of a signal at that point. For example, INRs for images learn to map the 2D coordinates of pixels to their corresponding RGB values. INRs can also be conditioned on a latent vector, typically learned end-to-end as part of the model, that allows them to represent different signals on the same domain. INRs have been successfully applied to model complex signals like images [@stanley2007compositional; @sitzmann2020implicit], signed distance functions [@park2019deepsdf], and radiance fields [@mildenhall2020nerf].
|
| 4 |
+
|
| 5 |
+
By expressing a signal as an INR, we obtain a continuous approximation of the signal on the whole domain. This allows us to compute a higher-resolution approximation of the signal by sampling more points on the domain (, a finer grid of pixels). Additionally, since INRs are fully differentiable, the latent vector of a conditional INR can be optimised with backpropagation and gradient descent to obtain a signal with some desired characteristics. For example, a conditional INR for signed distance functions can be used to design surfaces with a target aerodynamic profile [@remelli2020meshsdf].
|
| 6 |
+
|
| 7 |
+
So far, the literature on INRs has only focused on signals on Euclidean domains. However, signals defined on non-Euclidean domains are ubiquitous in nature and are especially relevant in artificial intelligence, as demonstrated by the recent rise of geometric deep learning [@bronstein2017geometric]. In this paper, we propose an extension of the INR setting to signals on arbitrary non-Euclidean domains.
|
| 8 |
+
|
| 9 |
+
We formulate the *generalised* INR problem as the task of learning an implicit representation for a signal on an arbitrary topological space $\cT$ where, instead of observing the signal sampled on a regular lattice, we observe a graph (, a discretisation of $\cT$) and the corresponding graph signal. In most practical cases, $\cT$ is unknown and we cannot represent the sampled vertices in a coordinate system. Even when $\cT$ is known (as in the Euclidean case), training an INR on a fixed coordinate system means that the model will depend on this choice. We solve both these issues by identifying the sampled nodes with an intrinsic spectral embedding obtained from the eigenvectors of the graph Laplacian. We then train a neural network to map the spectral embeddings to the corresponding signal values. Figure [1](#fig:scheme){reference-type="ref" reference="fig:scheme"} shows a schematic view of the method.
|
| 10 |
+
|
| 11 |
+
Since the eigenvectors of the graph Laplacian are a discrete approximation of the continuous eigenfunctions of the Laplace-Beltrami operator on $\cT$ (when appropriately rescaled) [@belkin2001laplacian; @bengio2003spectral; @belkin2006convergence], at inference time we can map arbitrary points on $\cT$ to the corresponding approximation of the signal, as long as the discrete graph signal is sampled consistently. This allows us to compute higher-resolution signals or to estimate the signal on different graph realisations of the same phenomenon (, different social networks with a similar structure).
|
| 12 |
+
|
| 13 |
+
# Method
|
| 14 |
+
|
| 15 |
+
In the standard INR setting, we consider a signal $f: \cX \rightarrow \cY$ with $\cX \subseteq \bR^d$ and $\cY \subseteq \bR^p$. We observe a discrete realisation of the signal $f(\x_i)$ for $i=1, \dots, n$, where points $\x_i$ are sampled on a regular lattice on $\cX$. Then, we train a neural network $f_\theta: \cX \rightarrow \cY$, with parameters $\theta$, on input-output pairs $(\x_i, f(\x_i))$. Since we know that the signal domain is a subset of $\bR^d$, at inference time we can sample points anywhere on $\cX$ to compute the approximated value of the signal at those points. For example, an INR for images maps equispaced points in the unit square (pixel coordinates) to points in the unit cube (RGB values normalised between 0 and 1). The specific image on which we train the INR is a realisation of one such signal at a given resolution, and at inference time we can sample a finer lattice to super-resolve the image [@stanley2007compositional; @sitzmann2020implicit].
|
| 16 |
+
|
| 17 |
+
In the generalised setting, we consider a continuous signal $f: \cT \rightarrow \cY$, with $\cT$ an arbitrary topological space. We observe a discrete graph signal $f(v_i)$ on an undirected graph $G = (V, E)$, with node set $V=\{v_i\}$ for $i=1,\dots,n$ and edge set $E \subseteq V \times V$. Note that the graph can be weighted if a metric is available on $\cT$. The meaning of sampling $G$ from $\cT$ is intuitive in the case of geometric meshes or other physical structures like proteins (in which case we also know the coordinates of $v_i$), but the same reasoning also applies to more complex domains with an abstract meaning (, the space of all possible papers and their citations). We generally assume that the graph describes some measure of closeness between uniformly sampled points on $\cT$, be it a function of the coordinates (, a kernel) or some logical or functional relation that is given as part of the data. In general, we don't assume to know the true $\cT$ from which $G$ is sampled or the coordinates of $v_i$. We only observe $G$ and the associated graph signal. For an in-depth discussion on the relation between graphs and topological spaces, see references [@belkin2001laplacian], [@boguna2021network], and [@levie2021transferability].
|
| 18 |
+
|
| 19 |
+
We approach the generalised INR problem by mapping $v_i$ to $f(v_i)$ through a spectral embedding obtained from the graph Laplacian. Let $\A \in \bR^{n \times n}$ be the weighted adjacency matrix of graph $G$, $\D$ the diagonal degree matrix, and $\L = \D - \A$ the combinatorial Laplacian. The eigendecomposition of the Laplacian yields an orthonormal basis of eigenvectors $\{\u_k \in \bR^n\}$ for $k=1, \dots, n$, with a canonical ordering given by their associated eigenvalues $\lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_n$. We define a generalised spectral embedding of size $k$ for node $v_i$ as $$\begin{equation}
|
| 20 |
+
\e_i = \sqrt{n} \left[\u_{1, i}, \dots, \u_{k, i}\right]^\top \in \bR^k.
|
| 21 |
+
\end{equation}$$
|
| 22 |
+
|
| 23 |
+
::: wrapfigure
|
| 24 |
+
r0.45 {width="45%"}
|
| 25 |
+
|
| 26 |
+
[]{#fig:comparison_eigv_embs_path_graph label="fig:comparison_eigv_embs_path_graph"}
|
| 27 |
+
:::
|
| 28 |
+
|
| 29 |
+
The generalised spectral embeddings are a discrete approximation of the eigenfunctions of the continuous Laplace-Beltrami operator on $\cT$ and converge to them for $n \rightarrow \infty$. See the discussion in references [@belkin2001laplacian], [@bengio2003spectral], [@bengio2003out], and [@belkin2006convergence] for a formal analysis. In practice, rescaling the eigenvectors by $\sqrt{n}$ ensures that the embeddings for graphs of different sizes will have the same range component-wise and that similar nodes will have similar embeddings regardless of the graph size. As an example, we show a comparison between the eigenvectors and the generalised spectral embeddings of a path graph in Figure [\[fig:comparison_eigv_embs_path_graph\]](#fig:comparison_eigv_embs_path_graph){reference-type="ref" reference="fig:comparison_eigv_embs_path_graph"}.
|
| 30 |
+
|
| 31 |
+
We train a neural network $f_\theta: \e_{i} \mapsto f(v_i)$ on the observed graph signal. Since the generalised spectral embeddings are an intrinsic property of the graph, the INR will not depend on any choice of coordinate system but only on the topology of the underlying continuous domain.
|
| 32 |
+
|
| 33 |
+
At inference time, we can compute the approximated value of the signal at an arbitrary location on $\cT$ by computing the associated spectral embedding. If we know $\cT$, or if we estimate it, we can sample new vertices directly from the domain and apply the same procedure used to construct the training graph. If $\cT$ is unknown (, in the case of natural graphs that describe some abstract concept like citations, social interactions, or biological relations), then we just assume to observe a similar graph sampled from $\cT$. An important difference between generalised INRs and Euclidean INRs is that generally we must observe the full graph in order to compute the inputs. This is necessary because, while in the Euclidean case we completely know the domain of the signal , in the generalised case we need to estimate the topology of the domain by sampling.
|
| 34 |
+
|
| 35 |
+
Although computing the full eigendecomposition of the Laplacian has a complexity of $O(n^3)$, here we are only interested in the first $k$ eigenvectors. Also, the Laplacian of most graphs is very sparse, with nodes having an average degree $\bar d \ll n$. We can therefore use the implicitly restarted Lanczos method for eigendecomposition implemented by the ARPACK software, which has a complexity of $O(\bar d n^2)$ and can be easily parallelised [@lehoucq1998arpack]. In practice, all computations for this paper were easily managed on a commercial laptop with 10 CPU cores, scaling up to graphs in the order of $10^5$ nodes.
|
| 36 |
+
|
| 37 |
+
We also note that at inference time, depending on how the edges are constructed, it could be possible to add new nodes and edges to the training graph instead of sampling an entirely new graph from $\cT$. In this case, the spectral embeddings for the new nodes can be estimated using the Nyström method without needing to compute the full eigendecomposition [@baker1977numerical; @bengio2003out].
|
| 38 |
+
|
| 39 |
+
The generalised Laplacian embeddings used in this paper are only one of many possibilities to represent nodes sampled from $\cT$. To name a few, locally linear embeddings [@roweis2000nonlinear], Isomap [@tenenbaum2000global], Laplacian eigenmaps [@belkin2003laplacian] and diffusion maps [@coifman2005geometric] are all based on the idea of embedding data using the first few principal eigenvectors of a similarity matrix. Here we focus on the Laplacian since it is well known, sparse, and easy to compute, and its eigendecomposition is stable to graph perturbations. One disadvantage of Laplacian eigenvectors is that they are only unique up to sign, leading to $2^k$ possible eigenbases for a given $\cT$ and the consequent ambiguity when transferring an INR to a different graph realisation. Additionally, eigenvalues of multiplicity greater than 1 also introduce ambiguity in the eigenvectors, since all rotations and reflections of the associated eigenspace are valid choices. However, simple heuristics can be used to eliminate sign ambiguity, and we did not encounter any other practical issue in this regard. Alternative ways to resolve the ambiguity is to use PEs based on random walks [@dwivedi2022graph; @mialon2021graphit; @li2020distance], heat kernels [@sun2009concise; @feldman2022weisfeiler], or the more recent sign-and-basis invariant neural networks [@lim2022sign]. We leave the exploration of these alternatives to future work, since they do not significantly impact our main contributions.
|