Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes. See raw diff.
- .gitattributes +164 -0
- 2003.12753/paper.pdf +3 -0
- 2003.13085/main_diagram/main_diagram.drawio +1 -0
- 2003.13085/main_diagram/main_diagram.pdf +0 -0
- 2003.13085/paper.pdf +3 -0
- 2003.13085/paper_text/intro_method.md +106 -0
- 2004.04312/paper.pdf +3 -0
- 2004.12935/paper.pdf +3 -0
- 2005.09812/paper.pdf +3 -0
- 2006.02425/main_diagram/main_diagram.drawio +1 -0
- 2006.02425/main_diagram/main_diagram.pdf +0 -0
- 2006.02425/paper.pdf +3 -0
- 2006.02425/paper_text/intro_method.md +17 -0
- 2007.13040/main_diagram/main_diagram.drawio +1 -0
- 2007.13040/main_diagram/main_diagram.pdf +0 -0
- 2007.13040/paper.pdf +3 -0
- 2007.13040/paper_text/intro_method.md +99 -0
- 2008.09641/main_diagram/main_diagram.drawio +1 -0
- 2008.09641/main_diagram/main_diagram.pdf +0 -0
- 2008.09641/paper.pdf +3 -0
- 2008.09641/paper_text/intro_method.md +139 -0
- 2010.13685/paper.pdf +3 -0
- 2011.02426/main_diagram/main_diagram.drawio +1 -0
- 2011.02426/main_diagram/main_diagram.pdf +0 -0
- 2011.02426/paper.pdf +3 -0
- 2011.02426/paper_text/intro_method.md +164 -0
- 2102.04152/main_diagram/main_diagram.drawio +1 -0
- 2102.04152/main_diagram/main_diagram.pdf +0 -0
- 2102.04152/paper.pdf +3 -0
- 2102.04152/paper_text/intro_method.md +37 -0
- 2102.08201/paper.pdf +3 -0
- 2103.17229/main_diagram/main_diagram.drawio +0 -0
- 2103.17229/paper.pdf +3 -0
- 2103.17229/paper_text/intro_method.md +21 -0
- 2104.08701/main_diagram/main_diagram.drawio +1 -0
- 2104.08701/main_diagram/main_diagram.pdf +0 -0
- 2104.08701/paper.pdf +3 -0
- 2104.08701/paper_text/intro_method.md +70 -0
- 2104.09667/paper.pdf +3 -0
- 2104.12280/main_diagram/main_diagram.drawio +1 -0
- 2104.12280/main_diagram/main_diagram.pdf +0 -0
- 2104.12280/paper_text/intro_method.md +166 -0
- 2107.08929/paper.pdf +3 -0
- 2108.01499/main_diagram/main_diagram.drawio +0 -0
- 2108.01499/paper.pdf +3 -0
- 2108.01499/paper_text/intro_method.md +138 -0
- 2108.05997/paper.pdf +3 -0
- 2110.03618/main_diagram/main_diagram.pdf +0 -0
- 2110.03618/paper.pdf +3 -0
- 2110.08421/paper.pdf +3 -0
.gitattributes
CHANGED
@@ -2198,3 +2198,167 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2411.01494/paper.pdf filter=lfs diff=lfs merge=lfs -text
 2305.05356/paper.pdf filter=lfs diff=lfs merge=lfs -text
 2404.03349/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2305.18738/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2501.14653/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2312.09391/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2508.05402/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2211.08402/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2406.18847/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2304.11582/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2112.04728/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2308.05219/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2509.17847/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2311.04072/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2407.08268/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2205.08536/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2503.06965/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2406.13415/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2312.05741/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2412.10704/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2502.09507/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2102.04152/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2404.19508/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2405.13205/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2203.04723/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2412.00306/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2007.13040/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2407.04022/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2401.06643/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2305.10309/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2301.13348/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2312.10945/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2303.03391/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2303.03052/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2301.01795/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2411.05783/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2401.07803/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2108.01499/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2408.11680/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2503.00357/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2404.08817/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2502.15685/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2306.04597/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2008.09641/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2305.14282/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2406.02925/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2212.07086/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2310.15758/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2311.16465/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2402.12243/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2205.13724/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2406.00197/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2104.08701/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2203.12000/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2312.02439/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2501.15043/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2210.03526/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2304.09691/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2503.18389/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2403.16002/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2112.05261/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2311.12996/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2206.00240/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2309.11132/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2206.00746/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2208.02459/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2111.14447/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2409.01449/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2112.01983/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2505.17830/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2203.06419/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2402.05382/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2306.02955/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2402.17226/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2304.01279/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2003.13085/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2303.06147/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2306.00928/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2510.21361/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2312.16830/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2410.12595/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2103.17229/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2410.22517/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2503.06514/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2006.02425/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2306.02747/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2305.18500/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2011.02426/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2503.08344/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2403.08505/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2503.17990/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2503.17784/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2306.07650/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2110.03618/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2501.18564/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2010.13685/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2209.06941/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2203.06063/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2407.06937/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2411.05331/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2310.10640/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2406.04264/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2303.15493/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2307.05209/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2407.06567/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2307.04963/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2207.11761/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2112.02889/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2403.06375/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2303.09032/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2004.12935/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2502.01523/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2412.01293/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2507.00355/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2207.10883/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2405.20574/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2410.05898/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2306.08238/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2406.14517/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2210.16834/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2207.04174/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2402.02750/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2308.03151/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2004.04312/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2411.17786/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2403.19588/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2301.02311/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2404.02790/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2307.08417/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2304.06306/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2308.12682/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2203.16910/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2302.09170/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2306.02583/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2403.00957/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2401.11288/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2405.01848/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2111.14792/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2305.20062/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2401.05975/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2403.14719/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2305.19926/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2005.09812/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2405.18972/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2003.12753/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2504.00999/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2210.13542/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2303.02404/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2505.15957/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2404.06903/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2108.05997/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2312.04362/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2104.09667/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2110.08421/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2504.13915/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2102.08201/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2204.01188/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2112.04386/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2503.14259/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2312.02503/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2510.18556/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2107.08929/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2412.04120/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2310.12318/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2405.06642/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2306.02558/paper.pdf filter=lfs diff=lfs merge=lfs -text
+2302.03780/paper.pdf filter=lfs diff=lfs merge=lfs -text
2003.12753/paper.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71a799f328afa8d32578607bc16984f103001c3ebbeda3d9af191d5187b03b56
+size 4501097
2003.13085/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile modified="2019-09-13T23:52:45.156Z" host="www.draw.io" agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" etag="NA5_RQm7znnFnLuzfj3d" pages="8" version="11.2.8" type="google"><diagram id="4t5_txGhnGwHJOXWD0b2" name="Overview">7H1Zk6NIsu6v6cdzjX15ZBEgCQGSkAR6Y1+EALHDr7/hZGYtXTU9PXPvmI3ZUVlXV6ZAQUS4++efewThf5DSc1Ibr04PVRgVfxBYOP1Byn8QBM5h6P/wwfz5AcZ+fJA0Wfj50fcPztkSfd33+WmfhVH7041dVRVdVv/8YVCVZRR0P33mNU01/nxbXBU/P7X2kuiXD86BV/z66S0Lu/TjU45gv3+uRVmSfj0ZZ/iPK0/v6+bPkbSpF1bjDx+Rmz9Iqamq7uOn5yRFBczd17wYu+39HKvCZSA67va8nPi7+z8fjSn/yle+DaGJyu7fbrpuifJ5OW/N5KkQZ/wWPtv0fyjuo+3BK/rPCfscbDd/zSAadw0/oud7RREVVdJ4zz9IsY6a7Bl1UfPna9b3C+KYZl10rr0AWhiRhqHP0u5ZoN9w9GPbNdUjkqqiataHkdL6B12Js6L4+rysyghu/lAu7P9wBPyhGZzEGIogGXTtb07R51QOUdNF0w8K8jllalShjjczuuXrKvsp/U/1Zyj64/fxuzJ905D0B0UiqM8PvU8FTr61/V1I6IdPOf0rMuPfMvtLmX3Dnv8emdHYW2Z/LTP6v09m+Ftmfy0z/r9PZsRbZn8pM5L875MZ+ZbZX8vsv4+D0NRbZn8pM+q/kIPQb5n9tcz+CzkI85bZX8vsv5CDsP9cZlEZCpDc+D55f2vS/7VZjMKfUiO/zuEPc0T/Zoq+Pmuiwuuy4eeEyu+m7fMJVpWh7n2nidjPIuK+RPTVRFv1TRB9fuvHDMafGqKZf9JQ5zVJ1P3S0CrFb8P+fxDs30iW/K8S7J+4JIf9u4Ll/0lD/2nB/o2Myv8qwf7J0FiG+j//pmgZ4p829R8W7tdY3sL947dRO0v8u6Kl/0lD/2nB/o38zP8mwZJ/MjWG+3cFy/2Thv7Tgv0XkjjZc13/Edd/hbb+WELC0Cfe1y9xNkXocSKQyizwCt3zo8Kq2qzLqhJd96uuq54/3CAUWQIXuqr+VynwZ3fk0Ou8P0jh41dCyeso+YOQsqtonkZsryaVgP4Y50u6uSToJ/grbAVJcNG/MhYZNiMIouI8is3xeqKIngyle/LwrmTJRay5HF5tI539Li2e0cud2XZuqz8IpLbKq667lHgx02nnF9xWETfmdVQ2pzBQN+LldipPx9PtdL0LG+fi0Jbi5snmVLSG4lxnU7jiiRre841dBHc20dxTot7Llxp71jP2hrhpmX0kHZbtsjf9Bbes7ezy5/KZ2zGMcuZyssxjxx8oyg3ieMjTdEKuVXwMNIdIsRL17ORrWk+6uRtpKYX+4kGrwx1xA9qrcJY8rn+3TbG7YGHobLBuf0B3+EMjjJ+XZ57yIQMhajEpmuZU2eir7ZBQn43eo8zguMinnml9Ii0pnh5f3+RkHZ/Q3WVOoekSI7xanHQYx89v0lFaU7Ntc+6S0W1GGKLJ6dBzRP1EN56Sr/4NEVJIkSzlMwXNna2URL+P1YTpG/Q7q1PmR4vUzWlex+eGOyrCKiLRCNkohSGBddhj8K3JC/KWvHESQsE8awTfoMuJTs7WHNMFrplO/jl+fTpY8k0MNpQu6H1udcvZz3HMu327LkX9q0s8TNjsggvdBHzESBMXfzwo62945wfqmIjHnOxUzNhKo7A3vmRyXVILd4SrINanDsbSHDK1Egzp2cbaxyOEpmRunLFsgl7NjqVonzF0Pfvheh36FwHbvnx+H2jJ5nnbowHbjy8JLeHdFjwhvAXoAYXO37BwPNYgl49Jiw4O3j/uC/qSfhQu+5haxb1eVEJqicjOQp2+xMvn3O1rCs27SFUn1co/mthcyb0xfWnU3qIw+bNxrb3FU+t/XGDQBf7rQvKtRaQre2v6/LigogwUeCJIDjM+b3ayB9fFZ9Da807UTtVng6gbA3e4oc/99Ch0FrHplyPxpV+V26MrRM+VZiJvZTYuYI65Pnc/b8Fjl0IfiRe4DyO2dYx+KSa+n6FJ9J92wwf3p6fdFFrXilbcSlS9qK1YbYwc5mtzxZZupnW5NDk6j8MBfRaiGRdnsne8JQt5AO/nnmCtfmPxdEEMfke+CG5+kjQYtHLn+4GTEOoEm+t41ruUCh3pxXUM+9gL1o26j8cOCahdJ+XgzHmKsW7C8yxe8DiINq7aFP17pydZ3F4smURmovTnwNKcI62RKZXGp4t8DcRKVHCG5foNtd0LKuap68hMg0mPzCAnEZoF5dSLDMzWNWSXIVfQcJRR0vhYa2LSinkyZ6hWXh5xpGUKF9phNF2EZ6Epw+zb430hhq2sTu2hSfRwCkoyxsAiI5odegO307LQwZQp9klG9/vS8PHt+rxT5IJwxYkYokJ9vJDMhbOEDZlf3R0Swcw9nZdWVHcNtVSzJHYjFeT9lA9JTentxOm62iscpWoF0VmiKxJ3mXAG3LbJVH2gb5lWqgSizyeRtUGQzNCPa0w+t2fXigOJbNSW0rSC7gePPWky1QnGl51MsXSLTfRznAk7Xu+HrJlwQCHaZ68H+BbVW75J8uxROFqaNs69gI+gbCVWu80+DNHP+lUT0kgVDtSgFYw4aBvdoggDKSpvjKOa0+xkrIMRdTY5V9OoBs4yc1RZIWuioyWLuSXa32N+PC3JwzUPtLhkHLKwC6DNkF5j/lCTXcmflnKLWolNxyp29yn+si8yPfNuhAShGGysOFMlA94mxfdW+OHcsaeKitmqk6NnH2c6xaPedqHqAJ5ysVB9IodijWLDklyyvcA4cXkRQmoXBY6MzEHJABgMRgEVtNLTeSan5zp9oEgl+sssT1IeqQPbWvQUWLqPPtuQ8dBbgBZPGiMlC1TkSufZQDouwJPNUb1WYIUB3Z7aYdOdU8DBe2/R+RClbnoF53S/juhu5bFdG6ZjZSCoi6DbfKqD8aDp7liPyRUy4BBlXb8THk+OuO3X+XZE+KQLeSsUpKNGsGvf+FBNW0sn+8Ene0t/6lNBHsSyldeH2Oghc4esUFRpBXrHtRYAyxnGTzFEt0iA+U4VxXiJ/EQ7ZNprNLzlyc/xoI0I23Fa31wFc38em/EmyNVHT2uqqi3UUJcdHJi5REwGN14yKiAD5h9gg75O3I7FeezSop+GADkJMD9QyFIZ6b6uiRclJZLxWg07vD+OyAVVKzbr1nISNf6YqTCOhdR80CB9vkdR0w/HSli8PEN0pnY3NOs8xBX9NAMTDYXftYoeMz6gUOoQBxLNjelnYXwVHkLjd/tYSyRverUb
LERP2hc05dXMCSZJpBztMR82HkynP3uRlMjdrFV9zDyfw9nYckEsz9SmzPrUyYqGG7FAKBWOG3YF7XgHJOVKeCkWx8Ft2NUQ2onmAM7N28FKWvpzDLf7wBIPMTH3VkoHuc1Oe56hpVUlJ+iGlg80zG/kBADYwfjg1xGqBUPhnrRPthMIYuEsMqbgxpOjVSTyxZoOCOOyy3OzvcXkphVjogw0duKkHKAPeRyXTMWNMmZS34Wn87COWmnDmwPCQYYDppUjw1ENxn8SHFhJpsQfTIYhmQ2o/wm8Enqc6Y+iZWE1f2rM7WAdOt7sf7I9SiGp1p7KLMO6yStASD3BjaTeDkigSOf2Fg1e2I4SK47wRzoMZdyO5HhewFIjLaf1gVX9D3g9gccEZbqamjJNk64LDoeznBOyeBdLmfthO6RWvxiYn/zkbhlDdiPyOfJcQDIApAYwkMMhb038Y2a0HeCi2aFYT/ZtNtVd1E0LMZbFR7BDANE1PPjOLTvZ0RZsCcMBrZwnow98M+yWSphh9kjaB6+lVwhPF+Sy4EvczDodMN7rmKH/i75Y+0VQrs+FYe26ueN5lTY+DPYABgtUUwF+j3U2sJsAZzJC9oei4qAjPOFUrw+MNvg55bPF3VqrttwYrsiZLxaI+NhenVZdHG83MD0TurBj9dGGfgY0Dc51lpE5ckOH0z5eHpD119Mypt4tEd1K11A/99TlLnYWqbOlKx3AVpEgldt4Pp1I4GCcJc4M80Ax4k4AyG9uMY49RfEiEDniQQUN8+loQG+P0GmYl5pbn81NMPdY1NHXUkB2ZEwM0zM8sjvjZe5iUOYiscsuOq4s78o3o+wM5904OGFpdEPcdlRxHSp2tEoULfQ4zl+ko9T1WM1Fwq5RX6O9ak0fyxLrhEoV00HWgnLcO37WQJno8XBo0n0gHeV+Ltsufj3LQVJ0ZMXCnlFwmrVQJ6ID57GGazMgf3Ieb1unQYjeDk4kcpS5uU596y2RZgsPKaRAeNqVnAb/RnTUtbkZ3g4RC3zHnSKIXUTKvtWjZemL9BMa4EHmM6EjXnAwTKBkYej7JlimLO/ZDen3army7p64I/pXSQ6/E0YZDR7YE6JLRoIY2zbjeuFw5s06eQiasMpo9Zkf8jCHfSK4+q5P1IdAaYjvXDUOoolBnOjHTDm+uBVMkaciESx99TlEHL2QJ7ofHKEXZOGjR2uTNsyvGZ8RkEiPebxq4lE2ReTppzt8eUJffnkoJtA0QuqS+qs/0t6JZwNcE5PDiLL+WQuy3J8eai4kXyPCdrQHnoenHp5gAme+LuMgC5LNinvKOoqjNcijBArJMZgrHo0OUdvukw0vkwW40Y2PfrRfgisviRs6gMf7gR/6nZzI80iQNOvzn1avADr4yEOc/YR5fO/HB79HXWb9yxGI+hmQkdzpYEj54JHtdvWViT3ywnZeTAKF9nKyMlMeb2ZCqITgUqRO0gpr5MDwVJlUAjWBOnB6zrSNsaw2sNmx/rW0rkepmhYs99SH6G61lcfWaaAmmqrHZNqWyhKvga2AeF77F74URVa2Hsjk80lTW3xV4PaTidsX0mmvwq4+TT73xDg14JtZAO6zZwXSR/EiN6Kg8iCvNlJ0/8CH6iU+IUUDY23lQ/Ihk6yPahQjmo0ZzwXReCa1RbHWzG3KdCdwMmEcrYcmgKbQEefPPXYa1+BPRsEf8gmmMz2Ss3A5t2r65JCysWuU+fQJvh4vQpYj/K0eHDZKI0bqYhC2YrCs9/T5daBWbdYmdpdsEg+BpSjUgioAK7/iM2J3mrynN44yypmR1PtNgswAzYiCzEBFX9zh9AXp4n0XCtiv9sOYww6TEvOBINAVE8sUUyyeahawUn/miJ3UrRPvhI95gDgUNENuyE44izvO5XRhh2wf+PDKvANbR+4Y4soXEYdnvNnGVDFYeVXlsXFjJ4j5FKcDdnDuwllLn/cqAyq6sln0dDzmKPiR41PwCcDxZFDPZbAGQosidp3Znoi9IXvoq7NDHxDAeIb78oqDcrh15zNSgdOS+eTiI/4Qs+UQya7WDM8yQTe+OJevzLIZR02eXJE6fngNnv6zfRY0QfZNPXvyw7U0pEN8tNsOA+uvWJtIudBh89xCVPpY0f/Yc05jIs+aWFuPA8ywuBNZgr+KbrSzWxm/OAPBX5pTogiQOBKpM4E8gufkhH/nb6G4aJNLLUkr4WRPsmwdL6/URY2JHt0NRrGwerSL6NuwkYOjk1H360NjJIRwuweyaSe2yaPwEFHMTXxQDQdslOBnnZ4uCp8TnqNBPuAZhjP4v/xyymEeARQUM+usdEf4083cdCPPf7RQZptgiyK/ZCvDV65/M8ZH94r3Dm9huKqVPWH0y2THcUd1JaNwg9to5th17ZNmF3nQKEqBe+4nzPH9SAPigHjXdBt2AGidIEqOYno4ssQ1SlnkaTrz3OkEHShPssJeyLzpqQWNwhr2zBYQWZx8tjDI5yOEMI2z5AxmxGtrphEUyYow/4gluiAAz2GjJYSoaDveXyyzY4NYp7vbCWiAp6Oe6gf0XbYoTZ199hHM4bgkDQ+xIl4ccEe4CafuvKad6oMkVUKnFshjzPpw7gdLI+Paocf8xvsDrOsoExOuUjV3+EQPUr4M+KWXIYAEwKcpYFr5NiQfXtyTr1bAtiNQpfx2l3sPo8cLskUm4k36Y9rBOax5i30Sc6TA4dxw74ElJsQ+RME+p4yWFlMGRcdCSpX2lTkgnVGGhS+Ogne4GjDHB/loGFRxFjwVwPVBlkhnFc8hk4jhzpD0E7nktg8Dhi/ji9o6Q7pKOYPZrSbnfuGURLs/rDtwzathRRvPpijc8Ri8v9HIUYUOibiieLSVzpOie4zHh1uc9C/Z4VjgSJzUVKFLPjnkK6oNO01M/1JYJtdp/7Z6NLy0OXtCoRiO7+h4ol32ZV3oxz3QyOTKHSC9dpCEwyVuyF0Uk9ok5RrMqlWmk7zfg4k/+Sd3uCoU12o4CzyylWWCvibXGcbNceSTPkk6NThbfMAyfNl7jMIfGOU1Cb08RpajfCSsvVWZgB/kruEED67FOmvWzKCmUvBduzlTo13bbIyGsCjb16qQB+G2BkQwOws8u3RdRBP1T2RxBQN/3j8l+Cdyq5R8pNaw2axfya2MIvZ8qfIrwznNXFIOohKo2VMAgPGWvhsvHIokOorGdPbaKaXTN2wMWN/ILsQxNsIZJZgWTQ1c9vEYpIELlY3Jtb47KiWupVvN4Z84+4y+ergyiYIUWjI/PjQ2wmzIwKFRiQYVUmX3JAEiXKPjmzzgsnzF7HuwQDhgXK22Y4rgQcYVGkuRaQzP3nmWjE1Sf2BS4+g4e/ohN1nQkQUSXtis6/mYK9sb2bKn4zBsFkJrpNAfMY5lI2kr5wLvds5xw4G1QTzAnRVJQ95ef7ARgzDbsY0TU10zjp7ai86O98iPdpxVZJvjmXF0nhLPDWQUvYNQmVhJ7m8outATx3GujNPr52thZeUrhAHtlYJ5OENOV/2FOzuOa+EzHy3VP8qFYqRHcLUqs1vWfsSVk6d0qC0RIip
T9DrfWQgdOWfn/b2I6R4xXfOiAgb5sNucEJHgTA3SCCU+9RozBf0Jp3Xtmkjb89gst2RNdFA3py3tdGQPlswBoTx1LOnt5StzsqwNMTfxRttdHM2P2mjZRNSBdD2uYyDqsSLqfry0CBJbJBH2RiKCHWNkzNm03gabplNDtsQxjwcDUKKtQG4BzK4BipHUQl1TpO289RIUsXkT5+tYnlL8JplAsuQL+nAfdNEC07nI8cGVB2HkwziMRFd8Xi11mJ3jWC/YABHbYdAk/fp0YsD42m1kbt9e1IaxIz3edUlIHEQyuUcK1gm6ne0GshY8sR421RHkUPnR3KJJCXmSvNyW8dWLdDHcHLeM97sAPXl35JWj7D3i65rL3xkRFh3XiMfpe8SPSzIRA06/4l5umuJ1Dvtgh+Lhsjk0bo8JUxpxkk/8ie/dnKxnyJfIsLNFVaLjBx8BLpCeqkmocYgUC1sMZE3jVphO3PChw3BPPiDiMUhsEe2FlA1pAGW5cZNY14grN977nUGwKfCzxzBvjfAwBGey1E9HzdLH0zGQZjLfBnLGF/FB9YtLnFN7J8sCrdtGGiD6eudchUcz0GdKS7hgRqEWhEbWZs8yRNfYziZ7JkNt5wl1vroGJ5PxswMzDyby4LOwzDBJfUkPHWcfaHyoZLjTISezlzTyGEzEHVFti6ImZ88RC+uwPcERFzVAjDnzJz+pWfZwcB1rC9DkoBi5ZQcTb2+NHfPIjaOn0HlOXUhlw0dAJc9D2gVXKr3E08T1BczJguurDXiEO06coU9OF69Y37rNoD4ua6IKdJFI20HP4qE3g01iXZbj8oy0pI5ysgwopSsJjxvrQbhgIAR15s445Bkcn5HOCxncdklfk6ImcEN85oNTkQY+xifDYbtSwNzJj3uhq/mx4HxVhRaWE78AR/ocr+pkJkfvmyaveZ9DfRNsBdCdJDcTJtDb2Oc6wA34fg85FXDmpEjpuMaR3D3Jq4A0iU9OoPEM19vetj9cy4eGwpwmjNP+dP5w/jrCoEf3oUZm1wQjYrDki+DvZZkrALLc+DQhF0QhLoN+RXyedvJlF1GXK6U7RSu6ElXL0iC6K/stdyiqzIHliZMp6YjbxyxN4evsswecTfNY/i2Pgs7QbdOHg8kGkAM00ZdDnd1VrUq5SGBslCBbV/H9ChaPcbtPJMy7IYqyiVpWiwAFTSDPO+oAmf1+cCBtQNvMjjvyVk/2A4rdlteQO3f55WI05QDm9dzpabV8zh0ceU9RZdOVjWpxL5a8AWfcQ1hxLlFk/+TOtWho3n2wiXXJYT6Pp8BqKMRHfHzvHA74BBmefZQhDidZTY/Cvufgmds1cqY6dj+3r0DdnHvopayM8zBWD/XGcaG4kY/w4dzUP7EY1uj6+XpTy5ZNZhTvDRu+9cixp3x0/zA15D4iwo69JRJIWomDGz4FpRgNsCKLk2KBomlxRyMn4OwwKjA5mUKinOVW6g8OX7HWjlqZGB8etM501rzMtteumG8BdiLeTzNMMmwsfwarM7Q7belPjCXbu27B8sqA2o7F2rsV0QIJcmAZnL2GMEB7fQCPW6BOScvF0osywKvcr+XNaoFafNgyuQZquysFKp5K8u5ER6E9T6ElaYCQshVta43CT9RYQ253G4+R+Gi5YVMz7NKN1EbyubudAckwLovS+Tei38LqARmNN2mHosNCOZqdNZIwQhQE13Fum4Yzec8dqP4SzDekbZT7GIQ16Tq5L7FYM8NrgAkRS5gVXOnptIPHC7RJhWT8YIC/hvXoozYb91wWA0U62bk6EqYzToclga+q7vappWSMui+vHJZUMay6rHz/JBgyARlE9kwcHJIwLuvSLpPmFFm1dxvrwpUSAZ/iN+JRY6jPZRCibN2y7+8xNibzJUL6HoAB2GRM6VO3s5VhLiCQIfCNPmCDxc8fiyAs0bGSobniEIjT0lQ8wtGDv65BqxG4I7Dxm1lpMnHh9LvjPNrwJmxTXRoMiG+PqJ/YdICM5hlYjjRUjigfOWvh88Zcl2wAeNyZIh71oHstPVcCbprRxj9jgi4K7hRjTxhDSV7DA2s9LsnUov7T2cqsOKJ0wvNUWBJ0vlZflHSUrBde8uEuOV4Fo8LCXOWXG853MzmB8koqLE0vugXG92TwIxoPqTUvtsH2woGoKKBONnMTzQin4fkXgmWPMnXwNRS83RTWisJD319cgSLNdNhTN0GppwcICBMMAXOy23Zq6ZyL+buI+KqlL3UU3xf1zuk080ARFtMCAO8dr6vYJIV8VlJCyoVapq2ji8j/5bBUZ2AnI88GYlB36JvOzaAHEZiHycFU9jgbtDY0NC2fowrIE8jkVhnx+jSKvJ11K/TBahIHH9uSqdJdPPF5gCjbiMTcEhznFyXplmP/sMOe8qKJW6NF2UFoLmaHjQEa5p/5kbSDyWoM5bAlwg3Gw/6MsmU+pXEjc3mMaXZ6RojPQxMMKGHEbYj9nVxalwRN1XeBXmb0wNoAtZtlqlmlpPd40HZbiIBJyBEI60zTEV7TjUlzgVzFqH8zBO2G5QJMiBvO4bpk2kRah8Db5pN478Mehbuyru4OMD/y5kRFZDbyC6lGs5UeOBRpz8IQ81MMogedMt3tGbZaNDY/7wbjxo59u1hpBKol9h+mfAlzmgv9G9416W7gDmK+7neYTtYi2AzXNtt5ItNY+iVKYDRlyU192brrvoaAXHQzUnIrUvYWjVejlLDnOKh9pWRHPKDJNQWbEGu/nY7iuHOe2KVCBhU5WV38Dzy6FzVOzPD74kmuawvLU9qAfjvcGj2KB2e8jFoWg+brcaEPvJHtAdfqIPbbAgjJhHc+sfEi7B6BzRvxpH60EWIcV5ejSJHIGe5cNOU7wVH2Lj7uhBQ9ufQjbtBAf1lYxxFVd90aMjPWswdFREgfbe7TMASRit8X/CK8DgG/JdJauEniVDNhRypzDCiQRCKfHw66jJzEYTfRTgMGEfdHIEDACkf+GhhXQU03ZPwaZuqcCA/JR210sPpu88oirPOsBOsykhjMJJCny4WMh+tRrE4pDPr5GLWAbxYBsdu0Y0945Mhx2LNlsNl+7BQJkH33NlnxzqKCLhW0g4CoM7cr0iM/qyLHnkpGeW8cnkUsMs3XnTR439I+EXEl/SwRb2DbmtUdpq7ZMQXqpMatHPfy9NXLkG3MI18OkKzb4cvNG5lASeQGFg4VP2LCQwnQeuFZ/nDQAG+3Dhm1TKlMgUNP5QB2EDDnu8824EPBnIzBlBVZhPXBvOCiKb7ri/X6tOr6Qx3qzOE+2lOdaTKZgpwqfHuXmfHki9GucnqD9VV5leOe5ovBZWcI/E2royTYmiICR9eoraqleHiz5/HIG8SBn4DcD6IaIS8NNPA4RrB2eo8vHBDVKqEWUtGACPm7xT9GwlmGjIbTPfnjurchH2qRCyLbefHk6WJy+8Egv8bC05C3J+QMHyDVxBxjTdt8cFDeXwrrPsjCyJlNhnAN5jo8WsQd7HtLwnS+zAmCx4OpyfmRfy7lLnEhm8zNZUbAcMq8KC8D04YHS3dWjr7Jji
fMMyfSQg4pduOP9ssnNKdZ13zX/rx7ySZ4HR+jC4z1ThLieLOYssxyqu16UMencOe7QYkPwPg6CWdD7jOPSQwyDncMVGoUBRX3JuuLNEsmseRLJu6cIz7oN+fuuQP71zpFI0sR8IsV412i0w24KyVuSoolaVBxuAabfQJIR2V9RBsj+/L5uu934UCfIyG6gB4oNXeNQVKXoyZYZdoNC0Eyty892QEn4yEzo0juVDQ5Dpi4Sa9UAPYedAz3KmYYbgzqceLEjXYMT3H4oNnc2lzchkT3UH2Q09uriijKjlT5IfBIUgpQQIQfSst4PvqBFILIcnSK0TTacrXO2sBWWgWFyHm55SubnvU7DvpxgD0oou2Xvj8b87paTVN3n+MBBk986j1bL3403twuKmaekzFySTQqKh6fdOUeN4xOv2CjjhgvfViQZGsloI37fbikWlmNOYtdoSmdY1OQsnAb3EOdC2Gqsc9Y2sJQvXIbBgu9Eh+Nq5lTmlvVVHTRecJdIdu0ew6vBFhAB31zJH/eHujeoPhmkNmL4+Cd66l3Miaj3eBkGOBLHu0HknPo2YrLJg1jJqQG6/AEjdDNe55ELArIKE7NHJYGga4hyrqKcVpEo+Mn65LjZ757LeU+kS8UTZ3WLSUbMbd5oslu8aUkjiNgpN8rlbUpKTvu1Tsdt07sTE+KhRUQ9eT6Iqz8iTA0M2CFjmeXqHnNfdVOIwRyON4a9HSutgeZNmJ542znpefY2wv8dh3tX+prlI+Z3uAOHeyOxxtQLz4G3hochikVy8Nij+dCpgZ8hOkpmHjum4yJN+IB9giMkRxdKG4hh3L0nCtFPjx51DUrA/0KeRIWlexrsO0yIqZs5uBcLdLHHcET7td4ag4pefACTvfM8WJpsJtHvLgqx+ZkuiOG9GT2Y9c3EcRiClX4i0wxS5pYZbi8IkIul4D7yoorRWhPLtAf5UVCrAs5KKLM7DvMIoWoFZk4FLADcYJo/Jq/kJOQRqnrN2CLKO6s1Bd3PqkIw2O2kU3/5PQCAx4Sp8ui33BzF9Fo9MGus8XBPFgBz86sksNUjlkuRJo88Z4CHvAxVDE4IsV6+S3PbxH2vGpqL5jkNsrAUl6ex2+xfLK5h32OQ8Mxx1MJ3lehBru+cIRUhvxY9cJC3mlcpAy6L1iqWdc9eJoPR0tJcjLJKfeW75JijpsgEtgZQohcuTZEi6UYpwvabR+TL9AO5d74Z1oYW7gFeZnu4AtWbPG79EiT1BMTN0fYyToNhTGfTBIrIjWhAsCcnojLwaiCcAEtPpE16VyxMH6M/M14OVm8CxoOX8XNlEUq0IdrwIohSxVl7wcDAHVm6RiApnS6+ZsPoIrilQKb+GTwtHlHqG7Y91NiDTuavF5Px0bbre43j2bmEavOkCsl5RH8vPh+e4XdP4o+OewJZMv7VyqSc1vHC0Z2OO6Sras+Frf0BegN04e8mdaupc35McpEcdwWCxKRwT7i+PqYI4ukY+Zw6q3pUe424JTJpr9mA5NTMdnX2jK3uYUVschDts4awuOgxrwOSVqOc8YuaOccUpZp2zkZcLENMVVG96DEjuXKi0yB++vP9PpKAd1kA6fgBamPkbC7RxQKRSCmFlbXAhEXtkaNKu/PLFkgMIYlzuTMG7PWBvojvdMUZN63OQCsiOT73EN0S8biKgQiUyN+tRdpwAsvhHWZogdTLbM4qmBmTvtqHJIIlheBnsPXfY+z2lOg8a6VOWONNAP86d0FxwfoAh1T+OdpO4bNNXoM7lAmV9ziHuRDXki+KALX4eA+drdyCvbuCIjtnJkkVoZxibSSwM0tLFGtoe5o6jN2Rv0zxRmbXBSFunCNWreA5mWcP23DcYp9wZ2vAb7J24OzsSeHb+6BbLF3cGo63HrZrnM7AWekNDNm9xZbrmvChFawoHP9tu+rePMcFfIyzPngCHy2u13pY3gJnHHnUDeIGwFDx4Mhya9LXCvrnirwKnzFbopnssDPZ0txphB2/Xu0w2Z4smNfF5AUkQ6M9RiaWKeq1tXZ9LyLjtKNt5vX5fyKbuhKtFMTuSJCUjTJvueuQ3+XuXVHkF8egolBZGUs2QRPdKrm6pX78AEajjjBknhQPq8TDYKj+u2VqrXUYFO7sx4upXlT1NfRutHzsqpeuBdEyFGPBelFy8Ni2dzZDFNB0664bubdptTuFnrWJZ60CAOc5AzzoNO9nQIMCuV5TwuvM+WsCzaXGcijBxMeM4KQWPx8W/eysIZ2n5sPY6ZZxqaAuAH4DtkTmLNG1jrbEY1HdEZw7hagQbCahtRa7poFRaRTUGIxxQ3P10sjyefzOh4avIDV3gq0Dt+uAoasCUFQKayjFzD7vfRJwZwIG9xr7W+sYX6amj6ZfsEK28W8pwMFsXVVkjLigeUlHodue484dVGLceiFYZxDqpFOrtr6DNE3PQM6ANS0BVKgHOXsgUiBkiBSoLmzPyWbnIkpf6LJnVMSzprC2vSUvjlTtm1pcduA3UaWcoQpVJQIP2EnX6IwjkmvUTu4D7Xa4ToHmKb/Li8akmbHkn0LQ64HreSN+NRhSqvQyqons9M6KGTe9qPRU5f5eEsdnTdQi8Cw77/dZ26yTzK41XzMZo0SXOPhMDTmZvVN1BiQBx5itN21XDfET/fwvLnE4clC7GdMkEo33hrIAdKs3HRTNjXboAjywDQUThyZp4HCYl52XTJdXyTRSDCGkcz57qYXZY0pvBqxbBl7zTz5fBlfXWy+sWoBGwQUOR0pTPEGUXF5XVoBELQ20TKH6IvqN2vO6Ld5Z5FrmOK7n/sCUQxxA++WDdiR2w06TzIl5866Hp2dfOZgg6Y7qG1HqqVnWKwe1Sf38btVaMgqSrsb/YpdubQOgkBZGPiWzK9tRZ57roz2UcvK6pUn+Ao6w2Mz4oCJP2qQ7ns4rwth/rAzDoznbrCCqrSIsEcoVPEAiVlzDE1Hdy/lIF5ZMBRShxM1xMhh17bbHb+5J3kAiCYS2Hn3fae7GmOiFXs7WuDWzBpEZXz1ypGzpuFxbteS1wZGRez4JN6WxFLPIN2FPV2bYK6JdAkYsWHOXB2Zh1ucNiyaDQ4jRwNh65rZhTnFitVn22o1eOocYKcWoIc4gJiRSFGM4z7YEz1sNH4XYtapLo9g2QEeTKr2MkjuBR4lN5NBCu8owAIK0N4/YhxR3Y41l87Og5zk0cyDyeLTaB/f1nCTd+b9h2BzsYypmUitIYKh9chseYcbqA0sOexm37YoVsssN3RPjglm9dCnhbmUj+Fl72Pab8iYN9FIyNQQPZ+A1oPrpLU25Q5nl1Fe+4EfQ0yHbmUDLXErvtB8Q3hr4HyDTvfaumaSxfHm0YrI72FDxVPnCgR+iF5LZe2WFXodBdYFlUpfd7k5WpM5IaBUFWeNeZQhHYLMjruzTV+ZL76Ou460an0Zgptvd6/1iQrL0Uf4AZi9GasmudT1sZtY8uR4l9swaurHRYU6s/gTQbgERuBdh9uJzTNOybdtzzx6mIy4x7YQi1gEVnBPfaUT3/YWW2rU73BaOa0h4M0oTC0BN
3N2og0ArKQz3SseH+Vir4xQ6yYXa5UBn1pM4Z5cZ6+vBnm6qF0rVk0tywJTNdO4jC7cJtnER8BH6MfYOcarq2Oh7bwqTvxYf46NQHzIWClySnCsiuUPV+vnHU8G8FwqETirC0oz4umbPIrWS78cBerSQetYGQX7dd64O+2Clzh42cDTOimHBq/EE2wd+Ad+y8dyOoOAmQyh9fDjLY1knVm4Bey+Zzovnh5qgoUgUgGCTkfOzcucrw4PH1UUd894/tcZeMvbIicUsz1mCxLEHqmHyPSKNm4qCLQ8RNdoHM4UDNhe+dH5kmLOI58GXvRuS0+ckZXRuvNoTxeBqvXPJTibWHKeJxZ2P/ml0dKLg11XVeXOmzoNL53PHdZ3aAoFnL7cU0NGOjHyf60FmrEz4X7xMPh/zt3LwnR6LJky9DIpTv4QtznyRlRDCnxI3/CYOvcWw+5jShUuYnCmxkEh6wzzL3HVcG7k/SaiTKMSOzqMI7GmtOBTEjJ0ryeUDfOGkApS+iI2gQ+mhi6+s9MurrVB8j3YsKe58IQma4N0NTHwRVH/m7cjWlnxTkehBPHl1HBSlxjCSX9o9KVjp2Tu9GvEDQLXzeqVyzTArL1H5ZnF6yxoQK03xm93foIq1TQpTHGZ6U6kRwFkydvBJBsBkg5KXgxcobBCuo1HiPIdPXdWfrXz8W2M2v3NrusNmWQ1qPIpI61NnAg8PSLAojF+c5rW5Cpc3CJ+wExFHA7+Zk9wcW1l4I3NsUPY6/4U5y36QB+e+OnMmhq4bXZsnB5hvQMOcmhgNxmVE4Cft9cQOJyzsDAMZfFbz8lYXE5xV8Zud/LgMwc0p/1Ajh+GenbOB7Jj7bEVVgzfkC1y47dHv1RJ58xkWa/4ppDdygRWX93CVgpRQ8QyYIV1I1BzAHaTQlS8xWHFsG74w3jqSc6wXIbsnTVg1wJHUdm47OMdWEnQbMqsY3F+F3WCJd6mOtKHQJ6shz/lSxLQmIYprLstXw9S3u4axGJzfO33CVj8CZablNI5030M0wkg2OzwUjQ6Nwc8La0Icv9Y5wB7mXK57WjLQvab8/bCxkFNGpNZajFn0EK8lEMWBJnO7FeUP5eMdDvoU3jYaNiCQnlCkGf9wxfBfo4rQMpj2y9DOkSHQT/PMagvCgqnvb2Nb3q+Osqzw94NsbG7G397NK+XpTN9M88hcDWaALg5BCEP0HB0VvSBZOmFgwivtJzdqJZDqh5J7o5UZJ1/h0sOLIU4KUlZzTqu2Tj2Bvh9+IVGMeXx+NQf1OITFypSpetqebmp36sHsfaIvHsGpzN4rV7ri3HVCv4mVtag3FC8zjloGLy7IF92Wv3d03n1rD/HGW8dOJwcwCRC5rqcXwQwiU0Tc7BrQyOzgrnjDmrMLEG2lE1xT20fM8gi2hPTadS6T5Zi7c+glGOiGwqYtvgc0DVDiTHbZh2Fwp2z3N+fJG3B7fW2yc+ExC6MT+3GJwo6+HPOrYGrpocAfoaS5OtKLCKkZxMY88jd7u4uFIL74IQhKV+jaqBury1VlkUx5Yfcx/208cMXr1PUMERYLHuVlfnAZ1duhHv7JXOWYC8Y9Rpsffo0MYk3R9iw93Db5XkO1+WUxi8u3c1MHeTxnQn461LK5n3HTqc7uzxZhph5qbd0KrjKZCyhJyUlRy7O4BZMoecUiB7edqsSE2BKE/14GIbLMJGyAMmv5XChizLiPZ0NUZQzUMGEDP4wGSNiFrvoFjsjHfSnxLbHKMqsfDylS5Yu5RlN1Olv7PfnHxfRtc6dc/SXSOKDzT6+oEgYodmUWL16wxqf3xCXyMGTi1eOA7nJ9tHT/iWWUiwuFDipQzprN4JYEhg3uOADTusqrHm4pxa7w3TdCk/1GSI4K4rjoGQ1HjsK/dZg7uYZv9tr9jDiqTmpkXcRXYCVkyAcy7ukB5huBTXvvZ6fAUIUp/Vz4DF8JXqx4KnsNXd3GmztVBBt2ln5YPJWzCISS74OZxTNPVuTrH7drysB24A9jgrTLSOyapVHnCy60utafcU5YO59uPqfOnqqflSxyyTvDzqxi2btQA+8NdlStL82NvViB3KfCFi15i9sz5PXZevAiQ+COAYXVuRY8ms17cKvr4N2PWcvpNnSA9O263aFLRV4zBbF0DXGRB71YFWWXJeLIKC4i1th3xCWjR93wp5gz+VECc/Wa0TlxrXBQRiR3lhgF20SM2k1D/qIbTuuN+gIJ/UoGUpjtqwANnqL8HabU2tqQYphMp+btNshhzNTgiJPJ3gdWU01vLTo8+J02kXdD8zGYbtGmewCZ+ch3elVxosRSdPsFPH30tS5e0Sxi40NfH0MHLfexBP5fOwtOqaHySSpdi/sp6RfUU5iXEC2zn5ogrSPx90QjXJC1UsGBiYBZh19vwOR6i7J7ZIX8uEDmOq93u+c3RK/nI7MeIxypmeULbOJjM1sjvcU8MHn6j7uFGF986HutB1XzMMCmdOdhXhnUBoWOZYhP0XdecGLALVnIbzdifRigGND7ikNqGEpp7E0UtgHWjPMDdiKVsRD6PfY1DJOFyLfRsNqSMFUsLcPUC793LFniaPQL92Zf3IWWSd8HPWlpFuN3t4MDRzUpg3lzY49mUHOTgyfxVLAnomu89zJycfENDR9DQ1Gx1qPaWCtwGZmznc1Yd19cxtPJVNdxGTbLsCWuVsQDeuO8Bcx3BfDD2cZ2P72OkWRfkMxYWBT+7bVH3TznGNMOSzNeVYF/nat7+t2fYVrJkAfgmXhi0F+w8aDwLs5ZCEVbKNx1pVscvdrZcoY771mrZm7iQ7Umr5/5uq7AzOsHi9Qk5hfXGibUbn7vdk6y3PqrFIe2CFSw+20UYtXQGn90vcc641WtL4IcBeXNm4t8SK8NsGHh6Oo0RX0wa/9ccyc1mEfpHRSwnAGeVfUUogH+dmpZRiP11I7Aht5isxCU4wADegsqziQxuwPB5/e7wOn7FAABujLU1q5bi+kchF5iJysg60UskZrrpkSOLwFDerGn+5TvNOWx5EbEEpfthQ4pEWfJiERnIjwI/K2lwh9nXJ2TeQe/QHR1j8+9+tRyzQUkG6eh8w89AQ4gZksSJLf3kabT8g5Z2cugIAJcruN+ZI7oVPAVoc9oqZLp1meTi3cpKuJRBEWnGAhPSmCCByD/7i2HmMDPhSwADkoeJN1wQeKt5m4b06EfOUNCluqILo1JFzNFs4/CkcZ31+svOhhhV/NxhO+t5jcuv/QColbYiK/Lr7+Wy+ZxfBu3eQy63ETeRaXkA9btr/fNT3ZPC2YYrYeeYFCLInnpEqk7h8vNohP0iTIjV+uK8DTeCZZliVSVsItTj4eBGUO1vMp+oINwnYV70z85q0R0NEWwQttrpuzrDNfC2pSrSvGetTTUiLjvRpsdAiSFJlbfmjbKJgzQsIT9eJfvVAJuZeCTyBv941yFDzk+H68O6zpWtinFXcNSRa/CN7GBbpP1weRtDxu3O8HYiNQpr75OBPGXaSjcHXa6Pkab8K1mx/WggnGFnckr6ZcKgCrttcDUsh20BPxEosb9jd7
l8Gyqiyg6bz6fv8+kcakJ58WbDVx9SG1H+RB2FvUbjCx7y1P3VCPosCFHGc+ngBJDA4WsjPYKEFY/HnbkrZXYX0HW/LxSQInbsQ2JQSaln2cYNN1Mpxvgjj70h11YRScHZxGUrBEmVJfFsyNcJiXSAh9mK9ir/u+FjYCXw25NYktKWjS9HlyyoV8pkdViB72VNIP0ka9cVs4pMH0SaEjBeYYoVispLOLIShLux5xYliT8WB6ljl8Xdv9cG1B1zqWCaL8Stohf6A25OspuJKuHVFfIBXAlKkLfd1JDR/wXLi+FTatiYn7tBV646vXMEPQ73Ph//De5jABbmG2Mn5ZpnzmomMgmK9T9LFlCDjfRhxlmvI/VstFiI+lwPrWT7OgqWsr4JKV/413rtaMqXv8kpSh49Tgy3W+rs/f/nweAuyR/kCF3bptimUcQxAvnZWRv/IvaDk6fteB6dGeBamG5TEFBQ+jmmwcpxlmyk6SSvB/uX/JD2vjgLe/O/VARkz2S01SinCqp+DJEsOnWHeUj1k9YPUYCfWweR1P14GVhR+0DfogutJeJ/a/298jYDxml1bgfN3PK+JRvLUGydGjY4sO/woS6sNXix2yuu8tW3/FGNYTb/J75K1vG4HZfe7nWcZAQJHXemKTr5Od4MrZMvni06lTwZQzOAEJDmH65kvZouxTbPwQOhuFMnZF2KGtcfg9Mg8qQixi/c7hySHueMy/xYiWKh3FgF37ktnrInCfGMdV5Mph2B/BukMQgb800dSWeBx+uyr9cJVEV18lkIXwnLe3gfHTujy2HPaTFTrwxgJ/ThEyx+Ds0mQ9M+TTQRG2sBW+RsHL2B1GMa/cnTwngme1u3DN51WQPsvAS5orLlBMWY/yUcbW0Jp+DZDRrgRJWOOU9SCCZn2LEH33e5/1BUNkvxei5wkQ8J+9S2NtUCz1TW51VGq+cLW4jp1vlu5KR2U6P41AOm7jzEcsupfzT2cUncVK2sPLyCLRmMFLMOy8TFWsE6TxCK3+qA2bh3Tc3PSBadKV+z9ZHieuEiLPiR3/6V7PFau9rjg8wYMFqVeH85ZtuKiIDa8SThNV+FKXG8c9BOVEafe+VxLRC+6gHrwgHw2FKrKDr36doPVkfvf+/9f5WR2eR/7oHVF/Bu3zyKtDMxjhru2mIY/21scRUaGzBWz5dl7HxwFPRWDmwWdbH+dwGMLXuVisUnJfvTDXS9bXSWnwNvK349HgfeevE6W09S0qLgRggNMtPk+sqn/e7fU1NFA5+U+nz3yeZ/VS//H5FZ/jIcy/3mX57Sl4iqJa2hfOQr8PpkbwJHvdY3Xwl6GcLDXcbeKYP1PMI60/B58t/IHtzWKwqt/gZ2qfbdvYCkczltaHzL0MeFas2eBXv3kJ8jnzB9K5X3aITEgh/yQOO0EVlur2ca7ZdLcQxl3WrdVMxHBIwiZEjkb4MOvcLtMIAuxkkQ2kEf2nhOb+rAhnWTGPp1A/ShiV+TyDJO/50x7bHm11M0oCisbXuxm+JbubU95xPTju05/PizEpkxnuGbkRkwOjfJz1FcJ6g2jBS5sNwoaJrdSWRDi2E6xgyLQpEBjl8xAy1cFXf7u+9ooHivnoISHl4Q+QBJP8ODNLjdwT4WH+QWXTknNjDYWeP7RE3gWpYlNfJBB+Ioj6kNn4ikrBOC4R0E/r+H0OXv1e8Ufl0g15g5DYTLYf2rAD7WYvrXTdDxSyaeH+IX3ptbKI/QVE02tDFrjUiYg/zu9jeq+Yj0KxZ6k1Y/zLs6S7WAnYmpGcCJKv2TJqUdz32X06utqzLezW0/Ou12+nLp12HltIiM1V8qe9XGM9Ex7ifvL5XhEkmDf3zHpn3FEFVzCTw4fWdzcOTrK4DL/jE7wBrZKkwLpy5XxYFnOH9/2Pm+Yv1zQukIqaPGP8US6zMVdCLdtlk92sBoUIxWQ/wTufbz09lpsfNAlAlxxc4aHok//Le4Vd3RS34Gc9lWvhLBX/gFvvI/5yUc/CRi7H9gOvMsA4NIrfYieux3WQwKl+9Cfo7OntRdwernjY1J6g5oodN7aAyXa0uyXSORifwlEw0+BzVp9cIMdCJ/p30GaT3UczzMjLi7jtvdA5+7j9rnnU8uFXb4+uzQBzokDBxvV1/VIwj7dYtaRRxPT84wzHtqz8mvfJmnW25aD5EY4QS7ep03j53oP6wetHEWvpdDH4Z3+USo67/mR7hD16ArEmwG837jTImIDp5QeC7Ztkth8CvubXfrKSGUJXUbwL9/UkPy35WeO65NCHiJv+aCcGBElSeuWOkAexF1s17c8vFewVOESl2ev2gYVnjz8g1uhYhiA9Lp8no9B64JHZ4FRCpX26mKujHMaHsA9P8fezCEv/fNvjRxir8HnKZgFJqFqxv7Pr3XALqmsq7IXl46xK5bq47R497vq7ExBIaBMSSEY3ms/gB29W+vsj+pI1+0ETx6fe/vYynR6X3cBYnW4vOGwwkI/fZbgt+xysE1qiPa5L9oKm7NwrpQs7AiuyNtiN37U762mIh+r15KlfGMTDvHvT5SF/tx7uAFHSB9S/Kq9Berzth2tcjIUIZ/ARKP7aIaIbStTH+ZH6jkIRHhrFcv31DJQx4a/E3hZ+0AJsilRBPDtqVP4Sbd7rUaiMnbAXl0/AvS5HFGFhU/Ow4S3bY86wJVffGyJ/Cs8pRKzkBztjVMg9wEIqnu1YruYQwY65V/Cghd0DImD2J6xvOlJ5xQ4PL9KLIxweVKIRHjKSv/zYAyI8IH8X6aYT2ezxdRlTLga0dD519x6HD2GXhuuJG7jSImY3SqH07YxJqWYCAZcNquH4772VXlR1HYTrGaI9s/1Z4wiKpZStE4PvET59j+Isj8QTLuv7CpwVblx6zuFlm4+zOa8Brm8ScQ/DXGSS56JvBo0uchukEmKzviOXd8NkO3sEjG/f8PYNb9/w9g1v3/D2DW/f8PYNb9/w9g1v3/Cf8g3c5S5+ZSQj0fG26VdNBEhkfZx0+pkt1Lhvac41l2l+6l6krGflfCZOrW+nd3+cwO882a7WvmoimDXl9bA9isJuEhy48JX1pNzux5w4UbRfFR6OKazV/v6s7C+MFEckpXq9rztJLK/DZs/g29EvfBONPzyr9X/Jb8M5pp/wtjjFcA3Q3DGkbEpF6grKM/3Wm7F8vtL15LmM4SFh50iXjZLI3k/1Fn77rt+3bPSSRdffnc701cGAtJEuhc8jX7D8BeOpU23l36ogAHz++SzaTyRb3/9YOuuMCRdrmb77bRzSkur3OQ3K5E/VE7Rv1RPW14A03sakvjBcPf1eI+MInj+wGHbYhNJ9LPUfNKdg4K2DZerWNyi4w/ZSJMPXiOWZh7M3xDYkPbLdLGrvwwkYy7fre8orF2l9K3tfdqh1L9CL7w82CsoIDSUp70iQ56patFT7St/Dcu6r9cWj3I0j2b1gH0gK6vmlnZox5Ye9oDvdxFC7ePj6mjhTEWGex47tVItEDQbfGpzX+idjqJWK9ryovMGMUNFF3J0u9KZ57JIkgdox8J/YVJ33WXkG+08WZya
pn6szsb8WRuSJX4sRfX32/70sIvMvlPl9V/V5V/V5V/V5V/V5V/V5V/V5V/V5V/V5V/V5V/V5V/V5V/V5V/V5V/V5V/V5V/V5V/X5sy99V/V5V/X5413V513V513V513V513V513V513V513V513V513V513V513V513V5493VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ93VZ8/3lV93lV93lV93lV93lV93lV93lV93lV93lV93lV93lV93lV93lV93lV93lV93lV93lV93qdzv0/nfp/O/T6d+1254V254e0b3r7h7RvevuH/tndvvW0iUQDHPw1S96ErLjOAH72K8rZSpa7U7SMG4qCkcYpxku6n3+EyNgzQOk6oHefflwRqLj5z4AxDND9qA7WB2kBtoDZQG063NqD6oPqg+py+6iM9u6P6+OLYqo9A9UH1QfWxUH0sVB9UH1QfVB9Un9EnFVQfVB9UH1QfC9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1cdC9bFQfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH2YnZvZuZmdm9m5T3Z2buQGagO1gdpAbaA2UBuoDdQGagO1gdqA6oPqg+rz9lUfzwu7qo88tuojUX1QfVB9LFQfC9UH1QfVB9UH1Wf0SQXVB9UH1QfVx0L1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfWxUH0sVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfVB9UH1QfZudmdm5m52Z27pOdnRu5gdpAbaA2UBuoDdQGagO1gdpAbaA2oPqg+qD6vH3VRwi7o/oEwbFVH39A9fFv1WHLPzf1l0UVjHrF1UoFo+39+N83K/0fH9fZf2rtXH3AEfdP6ke1pb2I4ptlvtrcJR/jWuUpP5IvFx/s8jOuOm279dsfu72qxXjr++xWelfVv/aq+jTny7Q6P0+frwpIfcrdr2FV83DodYZgpNqxXH9dfFNhvHB60lAjCg0gQ1HjE8XqLNJ8AC76liVJeZi/Hq+zIv18H8XlMR/z6L7KPxWhEkSaNP1cITvpF4p++gV2P/0ce6r0C84u/VzSb/TuJ+0TS7/w7NLPIf1GST0x6xZfLdgdLf1mZ5d+gvQbSz8pnG76OUdOP32wVvp9LjZJ3ZBNgxottIuUMxLL0cZTyRO6C091OLstqNYnMg0T8RuvfN/Zr/CIyULv9EL/TxrF1Sxwk4R+5gdeNBD61FHBD6bscrrGE4977NC77yX00iRk7WOH3nsvoTd7uoG3371+utCLXujn65usVEPsefKQqYi+bugTf+HLgdCr3oMbx5PCyaITeqmbohV6xx6IvTdZ7PeQk+NN/rCNdXqXzPN89Vh2aG6j9TqLfxHsRSjLjOsHO4zTKtj1Fl+aCLjPDn+aLNOfBr8VXDkQW70uT2+jInto72s44M0RPq2yqjeii4nsjl8Euo7rXaxXmzxOm612zdbbkWdUJRkaOyqifJkWvR2pZol+tD52X35gvf8J+81xxs7LDbvnpTsqu/Srz2CXjNs2eEF+Do0Bkp8H5Kcwmi+wgz8PzFBzpFiGvV29Uo6KwMg5Vx9p2qwbGvoj6w7IOukZDWgfmnNGr0UG02ScdIbvvtPm29BYn5Fvz06x7RNt+pQV/6p1trp06qWvVj2cUP5+8dRe+KEX7tQ3am1ULn7dJrta2G1WLentxp6eX5rOdZr8LIRNs9VpsEdv80QuEMe4w83kgReI+TA7M6+0kQvk1XK4P2C43o7YRHH1AtPM6fMeVPNmo2WylVtiILfcqbr7ug8wXWG7vLyYDY2jXV2l/jkVNs94Xaka97Dr1jNfu+953T63sHmO0d1vKvOkhS3sjyUW0frmvd4RXOOO4HvHvh+4/fbRo17Ny55cvxI56Rb7XX8jYz6P6N5iqwH1m7MXjt+oxXxVvsfaXY/qi1//vUrS8hP/Aw==</diagram></mxfile>
2003.13085/main_diagram/main_diagram.pdf
ADDED
Binary file (37.7 kB).
2003.13085/paper.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b044f28a164e302820c8be1cfa25433e0128bc5ba0780990c06dff9d6731f5f5
+size 1866508
2003.13085/paper_text/intro_method.md
ADDED
@@ -0,0 +1,106 @@
+# Introduction
+
+::: center
+<figure id="fig:Overview" data-latex-placement="t">
+<img src="image/Overview.png" style="width:50.0%" />
+<figcaption>Overview</figcaption>
+</figure>
+:::
+
+Our work targets knowledge reuse among agents in cooperative MARL, where none of the agents in the environment is yet good enough. In this section we give a story-level overview of our main idea; the motivating scenario is illustrated in Fig. [1](#fig:Overview){reference-type="ref" reference="fig:Overview"}.
+
+In our setting, all agents act in the environment and update their own policy parameters with local rewards from the environment; the actions an agent executes are dictated by its own policy parameters. On this basis, we explore a novel knowledge transfer framework, PAT. In our framework, each agent has two acting modes, student mode and self-learning mode. Before an agent takes an action, its student actor-critic module decides its acting mode from its hidden state. This is not an ad-hoc design for a specific domain: ideally, agents should learn when to learn from other agents.
+
+In self-learning mode, agents take actions based on their independently learned behavioral knowledge, which is represented by their behavior policy parameters. In this mode, an agent's actions are independent of other agents' behavioral knowledge. All agents in self-learning mode are trained individually, end to end, with the Deep Deterministic Policy Gradient [@lillicrap2015continuous] algorithm, using an actor network and a critic network.
+
+In student mode, if there are more than two agents in the system, a student agent receives advice from multiple other agents. This raises a new problem, teacher selection, because not all teachers' knowledge is useful to the student agent. Existing frameworks tend to dodge this problem and have a key limitation when the number of agents is large, which also makes model transfer difficult. In our work we apply a soft attention mechanism to select teachers' knowledge: the Attention Teacher Selector solves the problem by selecting contextual information from the teachers' learning information and computing weights over the teachers' knowledge. Seen from a different angle, our attentional module selectively transforms the learning information from teachers toward solving the student's problem. This attentional selection approach is effective in both multi-task and joint-task scenarios.
+
+We make a few assumptions on agents' identities to support our framework. When an agent chooses student mode at time step $t$, the other agents in the environment automatically become its teachers and provide their behavior policies and learning knowledge to the student agent.
+
+Our parallel setting means that an agent in student mode can simultaneously be a teacher for other agents. Agent $i$ may be unfamiliar with its own observation $m^i_t$ while, thanks to its past trajectory, being familiar with agent $j$'s observation $m^j_t$; in that case agent $i$ can be agent $j$'s teacher. At such a time step, agent $i$ is in student mode yet also acts as a teacher, transferring its knowledge to another agent. The core idea is agents' different learning experience: a student agent may still be confident in other states, and its behavioral knowledge can help other student agents. Our teacher selector module is designed to determine the appropriate teachers and transform the teachers' local behavioral knowledge into the student's advising action. Moreover, because our attention mechanism quantifies the reliability of teachers, our scenarios do not require good-enough agents (experts).
+
+In a high-level summary: because of their different learning experiences, agents in a cooperative team are good at different tasks or at different parts of a joint task. Knowledge transfer is a framework that helps agents solve unfamiliar tasks with experienced agents' learning knowledge.
+
+PAT's training and architecture details are presented in the next section.
+
+# Method
+
+This section presents our knowledge transfer approach with the design details of the whole structure and all training protocols in our framework. Each agent has two actor-critic models and an attention mechanism to support its two acting modes.
+
+Unlike ordinary individual agent learning, agents in our framework must choose their acting mode after receiving an observation from the environment and before taking an action.
+
+At time step $t$, agent $i$ reprocesses the observation from the environment with a hidden LSTM (or RNN) unit, which integrates the observations from $i$'s history. The LSTM unit $l^i$ outputs the agent's observation encoding $m^i_t$, which represents the agent's hidden state. Here, $k$ is a scaling variable that sets the time period covered by the hidden state; we adjust $k$ for different types of games. $$\begin{equation}
+l^i : (o^i_{t-k}, a^i_{t-k}, ..., o^i_t) \rightarrow m^i_t
+\end{equation}$$ Next, agent $i$'s student actor network takes this step's memorized observation $m^i_t$ as input and outputs agent $i$'s acting mode. To keep information exchange efficient and communication cost low, the student actor decides agent $i$'s confidence at time step $t$: if $i$ is confident enough about $m^i_t$, the student actor chooses self-learning mode; otherwise it chooses student mode and sends an advice request to the other agents.
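To make the mode-choosing loop concrete, here is a minimal sketch, assuming hypothetical dimensions and a standard PyTorch LSTM; it is not the authors' implementation, and the sigmoid head and threshold are our illustrative stand-ins for the student actor's mode decision.

```python
# Sketch only: encode the last k (observation, action) pairs into m^i_t
# and choose the acting mode by thresholding the student-mode probability.
import torch
import torch.nn as nn

class ModeChooser(nn.Module):
    def __init__(self, obs_dim=16, act_dim=4, hidden_dim=64, threshold=0.5):
        super().__init__()
        # l^i: LSTM over the (o, a) history, producing the hidden state m^i_t
        self.encoder = nn.LSTM(obs_dim + act_dim, hidden_dim, batch_first=True)
        # Student actor head: probability of choosing student mode
        self.student_actor = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())
        self.threshold = threshold  # game-dependent, as described in the text

    def forward(self, history):  # history: (batch, k, obs_dim + act_dim)
        _, (h, _) = self.encoder(history)
        m_t = h[-1]                              # m^i_t
        p_student = self.student_actor(m_t)      # P(student mode)
        return m_t, p_student, p_student > self.threshold
```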
+
+The student actor and student critic form a deep deterministic policy gradient model. The student actor network outputs the probability of choosing student mode; when this probability exceeds a threshold, the agent deterministically chooses student mode. The threshold is a variable that depends on the type of game.
+
+The student actor and student critic represent the mode-choosing model, which determines whether agent $i$ becomes a student and asks teacher agents for advice. We train the student actor-critic with a student reward ${\widetilde{r}}^i_t$: $$\begin{equation}
+\widetilde{r}_{t}^{i}=V\left(m^i_t; \theta'^{i}_{t}\right)-V\left(m^i_t; \theta^i_{t}\right)
+\end{equation}$$ where $\theta'^{i}_{t}$ and $\theta^i_{t}$ are agent $i$'s policy parameters in student mode and self-learning mode, respectively. The student reward measures the gain in the agent's learning performance from student mode. Sharing the student actor-critic network parameters allows this module to learn effectively in the environment and to extend easily to other settings.
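Read literally, the student reward is just the difference between two value estimates of the same hidden state; a one-line sketch (names are ours, not the paper's):

```python
# Illustrative: r~^i_t = V(m^i_t; theta') - V(m^i_t; theta), the value gain
# expected from acting in student mode instead of self-learning mode.
def student_reward(value_fn, m_t, theta_student, theta_self):
    return value_fn(m_t, theta_student) - value_fn(m_t, theta_self)
```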
+
+In our experiments, the student actor-critic is trained with the already-trained Attention Teacher Selector. Agent $i$'s student critic is updated to minimize the student loss function:
+
+Here, $\mathcal{\widetilde{R}}$ is agent $i$'s student policy transition set. $$\begin{equation}
+\begin{split}
+\mathcal{L}\left(\theta^{\tilde{Q}}\right)=\mathbb{E}_{m_t, w_t, \widetilde{r}_{t}, m_{t+1}\sim\mathcal{\widetilde{R}}}\left[\left(\tilde{y}_t-\widetilde{Q}\left(m_t, w_t | \theta^{\tilde{Q}}\right)\right)^{2}\right],\\
+\tilde{y}_t=\tilde{r}_t+\gamma \widetilde{Q}(m_{t+1}, w^{\prime} | \theta^{\widetilde{Q}^{\prime}})|_{w^{\prime}=\widetilde{\mu}^{\prime}\left(m_{t+1} | \theta^{\widetilde{\mu}^{\prime}}\right)}
+\end{split}
+\end{equation}$$ The student policy network is updated by gradient ascent with the following gradient:
+
+$$\begin{equation}
+{\nabla}_{\theta^{\widetilde{\mu}}}J=\mathbb{E}_{m_t, w_t \sim\mathcal{\widetilde{R}}}\left[\nabla_{w} \widetilde{Q}\left(m_t, w_t | {\theta}^{\widetilde{Q}}\right)|_{w_t=\widetilde{\mu}(m_t)} {\nabla}_{\theta^{\widetilde{\mu}}} \widetilde{\mu}\left(m_t | {\theta}^{\widetilde{\mu}}\right)\right]
+\end{equation}$$
+
+Here, $\widetilde{\mu}$ is agent $i$'s student policy, parameterized by $\theta^{\widetilde{\mu}}$.
+
+::: center
+<figure id="fig:Architecture" data-latex-placement="t">
+<img src="image/PAT.png" style="width:50.0%" />
+<figcaption>PAT Architecture</figcaption>
+</figure>
+:::
+
+Inspired by the similarity between source and target tasks in transfer learning, we use an attention mechanism to evaluate both the task similarity between the student and the teachers and the teachers' confidence in the student's state. Each agent's Attention Teacher Selector is therefore used in student mode to select advice from teachers based on their similarity and confidence. The main idea behind our knowledge transfer approach is to learn the student mode by selectively paying attention to policy advice from other agents in the cooperative team. Fig. [3](#fig:Attention){reference-type="ref" reference="fig:Attention"} illustrates the main components of our attention mechanism.
+
+We now describe the Attention Teacher Selector mechanism in student mode. The Attention Teacher Selector (ATS) is a soft attention mechanism implemented as a differentiable query-key-value model [@graves2014neural; @oh2016control]. After the student actor of student agent $i \in \mathcal{G}$ computes the memorized observation at time step $t$ and chooses student mode, ATS receives the encoded hidden state $m^i_t$. Then, from the other agents in the team acting as teachers, ATS receives each teacher's encoded learning history $h^j_t = l^j(o^j_1, a^j_1, ..., o^j_t)$ and encoded policy parameters $\theta^j$.
+
+ATS then computes a query $Q^i_t = W_Q m^i_t$ (the student query vector), a key $K^j_t = W_K h^j_t$ (a teacher key vector), and a value $V^j_t = W_V \theta^j$ (a teacher policy value vector), where $W_K$, $W_Q$ and $W_V$ are learned attention parameters. After ATS receives the key-value pairs $(K^j, V^j)$ from all teachers $j \in \mathcal{G}$, the attention weight $\alpha^{ij}$ is assigned by passing the teacher key vector and the student query vector through a softmax: $$\begin{equation}
+\alpha^{ij} = \mathrm{softmax} \left(\frac{Q^{i}K^{j}}{\sqrt{D_K}}\right)
+\end{equation}$$ Here, $D_K$ is the dimension of teacher $j$'s key vector, used to counteract vanishing gradients (Vaswani et al. 2017). The final policy advice is a weighted sum followed by a linear transformation: $$\begin{equation}
+v^i = W_T\sum_{j \neq i} \alpha^{ij} V^{j}
+\end{equation}$$ Here, $W_T$ is a learned parameter for policy parameter decoding.
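A compact sketch of this query-key-value selection over teachers; shapes and names are illustrative assumptions rather than the paper's code:

```python
# Sketch of the Attention Teacher Selector: scaled dot-product attention
# over teacher keys/values, with weights normalized across teachers.
import torch
import torch.nn.functional as F

def select_advice(m_i, h_teachers, theta_teachers, W_Q, W_K, W_V, W_T):
    """m_i: (d_m,); h_teachers: (n, d_h); theta_teachers: (n, d_p)."""
    q = W_Q @ m_i                    # student query Q^i, (d_k,)
    K = h_teachers @ W_K.T           # teacher keys K^j, (n, d_k)
    V = theta_teachers @ W_V.T       # teacher policy values V^j, (n, d_v)
    alpha = F.softmax(K @ q / q.shape[0] ** 0.5, dim=0)  # weights alpha^{ij}
    return W_T @ (alpha @ V)         # weighted sum + linear decoding, v^i
```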
+
|
| 74 |
+
Behind the single attention head, we use a simple multi-attention head with a set of learning parameters $(W_K, W_Q, W_V)$ to aggregate all advice from different representation subplaces. Besides, attention head dropout is applied to improve the effectivity of our attention mechanism.
|
| 75 |
+
|
| 76 |
+
Finally, student agent $i$ obtains its action at this time with policy parameters from Attention Teacher Selector: $$\begin{equation}
|
| 77 |
+
\widetilde{a}^i_t = v^i(m^i_t)
|
| 78 |
+
\end{equation}$$ In our experiments, the attention parameters $(W_K, W_Q, W_V)$ are shared across all agents, because knowledge transfer process is similar in all pairs of student-teacher, but different observations introduce different teacher weight vector. This setting encourages our approach to learn more efficient and make our model easy to be extended in different settings, such as larger number of agents or a different environment.
|
| 79 |
+
|
| 80 |
+
In this work, we consider scenarios where other agents' learning experience is useful to a student agent. Feeding student's observation information and teacher's learning experience into our attention mechanism helps to select action with other agents' behavioral policy for the student agent. This module is an end-to-end knowledge transfer method without any decentralized learning parameter sharing.
|
| 81 |
+
|
| 82 |
+
::: center
|
| 83 |
+
<figure id="fig:Attention" data-latex-placement="t">
|
| 84 |
+
<img src="image/Attention.png" style="width:50.0%" />
|
| 85 |
+
<figcaption>Attention based Knowledge Selection</figcaption>
|
| 86 |
+
</figure>
|
| 87 |
+
:::
|
+
+If agent $i$'s student actor chooses self-learning mode, it sends $i$'s encoded hidden state $m^i_t$ to the actor network. In self-learning mode, agents learn as ordinary individual agents: each agent's self-learning policy is trained independently with the DDPG [@lillicrap2015continuous] algorithm.
+
+Agent $i$'s critic network is updated by the TD error, where $\mathcal{R}$ is agent $i$'s transition set: $$\begin{equation}
+\begin{split}
+\mathcal{L}\left(\theta^{Q}\right)=\mathbb{E}_{m_t, a_t, r_t, m_{t+1} \sim \mathcal{R}}\left[\left(y_t-Q\left(m_t, a_t | \theta^{Q}\right)\right)^{2}\right],\\
+y_t=r_t+\left.\gamma Q\left(m_{t+1}, a^{\prime} | \theta^{Q^{\prime}}\right)\right|_{a^{\prime}=\mu^{\prime}\left(m_{t+1} | \theta^{\mu^{\prime}}\right)}
+\end{split}
+\end{equation}$$
+
+The policy gradient of agent $i$'s actor network can be derived as: $$\begin{equation}
+\nabla_{\theta^{\mu}} J=\mathbb{E}_{m_t, a_t \sim \mathcal{R}}\left[\nabla_{a} Q\left(m_t, a_t | \theta^{Q}\right)|_{a_t=\mu(m_t)} \nabla_{\theta^{\mu}} \mu\left(m_t | \theta^{\mu}\right)\right].
+\end{equation}$$
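Both the student-mode and self-learning updates above follow the standard DDPG recipe; a hedged PyTorch sketch (our variable names, target networks assumed):

```python
# Sketch of the DDPG updates: the critic regresses onto the TD target y_t,
# the actor follows the deterministic policy gradient (minimizing -J is
# gradient ascent on J).
import torch

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99):
    m, a, r, m_next = batch  # encoded states m_t, actions, rewards, m_{t+1}
    with torch.no_grad():
        y = r + gamma * target_critic(m_next, target_actor(m_next))  # y_t
    critic_loss = ((y - critic(m, a)) ** 2).mean()                   # L(theta^Q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(m, actor(m)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```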
+
+In games with a discrete action space, we use the modified discrete version of DDPG suggested by [@lowe2017multi] for the agents' actor-critic networks in self-learning mode. Agents in self-learning mode then update their actor networks using $$\begin{equation}
+\nabla_{\theta^{\mu}} J=\mathbb{E}_{m_t, a_t \sim \mathcal{R}}\left[\nabla_{a} Q(m_t, a_t) \nabla_{\theta^{\mu}} a_t\right]
+\end{equation}$$
+
+Our framework thus accommodates both continuous and discrete action spaces.
2004.04312/paper.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:55d77ed479ef0e45d05ac9342fc7cfe0e51c6251af812b347bfea50b616d0b42
+size 7082794
2004.12935/paper.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:961be6fec38cf341e98f79b847a2ac1e92d9a61873966075ad7227ead72d1ec7
+size 8191669
2005.09812/paper.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b88519776fd614b6a6483c3d41adb0857187f0082351551f35e5e1b53a53740
+size 6324925
2006.02425/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2020-06-26T11:30:32.220Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36" version="13.3.5" etag="OXb0dq27SZ7C0oal43m-" type="google"><diagram id="wlj3x4cBi2SbiT8ih7f6">7V1tb9s4Ev41BroHWOCrRH5s3OYO6C6waA/Y7adAjZXErWP5bLVN99cfqTeLFGXRMmXJjm00tWiJluaZGc4MZ8gJnj2//HsTrp/+iOfRcoLA/GWC300QooyKv7LhV9YAOSBZy+NmMc/bdg2fFv9EeSPIW78v5tFWOTGJ42WyWKuN9/FqFd0nSlu42cQ/1dMe4qX6q+vwMao1fLoPl/XWvxbz5ClrZSjYtf8nWjw+Fb8MfZ598xwWJ+ddbJ/Cefwza0ofDr+f4NkmjpPs0/PLLFpK4hV0yShw2/BteWObaJXYXICyC36Ey+/5s+X3lfwqHnYTf1/NI3k+mOCbn0+LJPq0Du/ltz8FvKLtKXleiiMoPs7D7VN6rjxYhl+i5U14/+0x7WMWL+ON+GoVr8TFN9tkE3+LtMaHxXJZNE0Qvn0r36L9cRPOF+KZ9NPjVZKzB0T5ceVyAHwfyLvOnzLaJNFLI6VgSX/BuFH8HCWbX+KU4oLAz7kv59ppcfxzxwIFucFTBX5xGznr5Wz3WHa+Q0Z8yMExA4XHApQg60367g+t7AcLycKO8MOMKfjhOnyYGuAT2ul4+MhY4Bu/nGGuyhkidaACbgAKMQdA0TpQdPbm4x16k/wmPv1Wg008aaJis4m2i3/CL+kJkijreLFK0luiNxP6TrSE35N4m5NTHi4XjyvxeRk9yK4k+RZisHmbNyexBH8reGGxevyvPHg3JWYQZrNWdqhCSR1BxpgGGapDxkyIOQDMbwAMXgHbAxgeDrCgXRVGq/lbaabtiKDouwp0+0mXdRzNa9ZcK6EqdDCNCUXbJlqGyeKH2r2JOPkv/Ck5q4oDCDyqIAEpV3vZxt8391F+YdWSa+9rqt1QEm4eo6TWVQpY+exWGDKj0Mm7cSdyuTDdC6iijUHKnhfzubzeQnwUoZMvVawQ0MUV7xrk98Xp+RPBRhHMHY78wSalvFQ5LtgrmsCjyMdOuGuKdCFXe4gfHrbRsZzAzZxwN05GqALeK4icQKLKtRNIkSbftJB4p5gWbnYNVPiaQZ0KVBHHKgKOJFXtFQZ9gAqNoIp/38L1OnSJbR/GknQwfH8QYwlroGMMa8YSBD1ZS9AiQlM3ly7IQoKB5/PdS/XhOxtLAdeNJb2rBmNJEDr8VTktZ/99D2C85R38WY+dxdoQGJL+z/l4P1UVTi0Z9thInq+BjwgrWip8bQoyYBcybYgGCcw+vHm5W0zQTN7p3deRwzebATAUfFC1a0lQh64v5xWa40Oi5S+nzo8zyHJoCKhDyDmZpa5Q/5CxQJU3QuuQmUYRJ5AZIkQ6RtuncC0/Piyjl3wovamMqvfLcLtd3KsomuOqmj1aJXl+yn5qr+Lf5ff5b2hh2ln6ktHhnGwEen52q/mP+h7G6BzGdV+LQGE9ym49lPu8pSd3YQ9oEbs6W1YiHkMBZT4CPuM+pVRhrMBDyrdnwWTiTvXwWtCVzep91bSTQ0YzB9j+ZRxgxKWL9VYiHm7X2XT3w+JFRksdzAyVfNI+M1Tj0mq4rcLtpqktJ4MMQzpCDBvMOtyXWWeOhV1B2wsaUCHDBmOuL8CQIc51MeocaOpbWM2YkELFB+egvrmWcVHO4B+qvBFALT258vY5w8YfcuXuo8Yo3vpp8WZ+NwluFl8nwbvUjfSX0lv4shGfHpMUcD98loK7+rKV/xUhArW1JgKj8TmNczcn8TkFA+ljC/ZZTVFBYpAG34WmMkQBs1nuEbqcI5niJqrITw35Pn5PLqcpWyuFC1/haoCLDwmXOR738Q5d4WqSLj4gXOYY3Ic7WBkARx4+HXA6y+faQFb0cYrprIZ0rQ936IpdF+xoYHBw+0PPEP9K0cNX9Lqgx/0TYmcOKX24I1fsOmBHMD8hdubIUpm7MXI7ZUy5G/4J1SU2xJeqsI3cGxgRbASgU45zuDHKkgFHrsDZAmco3ugPNkOURAdpF3dvqbBpCrAag7v9z6BTyj1IOcjfUKEygx4yhc09H8HKmxnIHugZpJ0Ib1Gddq6E9xHwCIC0eCuEp9gzzlecjPAWdWWdCN9U+sc5AEQqroaZpQPq0pyAAwKPN4EDARsYHUOg4mLEAu6jfEAGVkgWOT/nSnnMmYch8Iu3QvkgGJjlbaq7zpTwRAzBlGJQvtWhAAzM8mb/XqaU7s8/OAwEzm9vOW8Coaku2FERx0Fg4T1gQTC0nPA6IvPHqKgwywmjloTHm+QpfoxX4fL3OLX9JTxfoyT5lRNPOhMqeNHLIvm78vlz5fO7l7zf9OBXcbAST/J39eBz9WB3UXpUXNWWshJuEq1CIW27XUgC5f0ZcipEY35KPa2iaow0KYqWsvXyesVqyYCppUVkuQbq8JJlDajyZ8Gj1nkR1qsLgJNyE/BoB37KLrpylJmjgpFxFByQo0AnjoJXjqpyFMEj4yhDcEbYJz92KUtjry1tQqbRXLGpOSX77Zgp8LBWOzHNCXl0zamsZ1V7xp6vduOk8pSYokNZbtp2Ha4UwP3/fZdLX2VG6XSbWaVvxSmrePMcLlNqAwnENIuayq8EQV/SL4qLy3y31P79mie5LbOvbrPfLDLiRuCb9GP+Mg80m79D++fEIm7VJfPW8bJItkq6TExsiIoVNPaZhzBFnEAY+CQAau2OoGzxBWVMGplnkKNLAfYqwQgtmxsiob8YogRn2cdYq2y3rr6ghHsAByx/89qvCI4nAYE44BhCTYu5q8sg5tSj1+pk+4x4gu7lW48CmpJTTqhk/JPasPY+9vgt0b1VIYqS7RYvsPGISN1+LcTPpf3aoHACHHh8D2uT3lSMOccqs5Nfr5nst5nJCBaLyLq1kxH1qLqCw1ROfWj4uzGU6yHccGxpBjYKwm6trAOnPLT0K8PinT1lFZB68tWXKyrp8K45psZk1L6K96ghuao23DtaR9B6BdwjCy/LxJ36Cri7SBkASvRVHLdEYMXBn9FmISgs1bweQ5NL6lSjaKI/3jJ6p0d6lxZDumlEt+a6E/kzSKtD8YHWh3XNYWtP7mwGashXewWyMAUV5v+cCwc/QhgUE1ZIhv+KBQHqyzBTTWlb++7lZMIJBMEQZL5kQSg4F1Y5t20ixMi1Ji9WiFMahDpMnC6D/2vqewr1To4YCXpbOITaZGJeoABIzY8nivrOh4bOguABRFVhKA0vp8KA6sKAzkAYuo4GAeUtXTmUBpv02AuShp2PAInmI7BjfISKS5CLQjoL0odpdAbSIFg4UE0j2F0Y1J7qg4xDYbDJRr4cYWjgtfFwUeM+J4dzkd/Sk0Mmskislhyx
njQFmGyiva1Le+gV7KYiI1qHDjoJPFmkOB+0l4++F8/tbNY8+ecgzZnuj6Vj2ygeCjzsMyJJjXwMKXNAW0MSs1vaco7xcLS1XNysD8oaSojdUva932tyfhNlT0/KQtteqgLQwvqYeCX1AsaL+ckB6G4R5Txn5YD1JenHQ3mLsNpZUn50hHa9zeO4VYvPPFIl/GCjo+96f8Ya3W9wxU8ZXrWMh/IWXuk5c3yN8hAgr9QzKTUHI72FL3fOWn1ElO7dZRzI+B6dNundfxzV+DkiDrdwL08QF6otAG/c18e4TK8eM+tChcDCMzyeChmt93rCEBqe0EXwK+jZBwMguL297VHAMoj2hxGg0GaMBjD/W2QiO6dlz14V57PZILQslJWQvZKI8m9dOZ2K0j27Vb5/g4akNOZelYzi73CktvCkeh8I9AkRDA0EMc0P6DlOnSjQu0eDcTOzNcyjueBB0q45DTQVjFexT4TJ4aCyJ+jZcyk3t+tNnhtoWS4M6+3olZpoVuzbC6lPMNvVr+rcT2oxSHGF1sNR+gS+y5CUhlDzEuuLH5+M1OPwVoDmrRQVS4ohjwzDlIstRZiFr3KcDn1P+2W3BjfoRX0+1Qbiysvvi7Y9e0lD09ZUfXUy2vbsNQ1G20GI2SlduHGrM31bZ1P70fuRW9Cvx7Qtqq63KHQ4UJDS6kJtc7i0bmu9Fksbu8/oYp1yZK8soK4+4QT1NmZytEldEXcqFo8ubt/VHnWsU6Lp6+YoT61dryn3jhxl4E2HmsNmkd0rzsoyNtq4ri4sICURVF6oGw+0/Ir+I8hyLYtD1UyTEbNf2TlTQjYLEV+Z055tOquk45jeobqyiK9cOaKKnCaxrCMDcN227aZ/ukBuEee5Qr4HKyH0ToakGid1kfmDN2MuYmc1Y8jVIMMtYmgd+evgjTRGXOnEtN23i1qEg0cSTvd35E5zcIsIXu8RYqJtq4EhN2SzINN2Sk5mMovlYc92RoK3872RerU5Bwe0dB5pK2sfzatAVFbH7rQgdlOlpYW2KYSnWsSb8dJQKoipekOYpAq8xa60hyqkQO2W+R5TZrj6Gda0nSjz1U6djWnO44EHbEd3FMOdnLkcc9OBTOqInfK5HGfsc4J8/kHGDPvUjJMNKc7jbxcrqeVSVJr9SD2k5E93E9wScLtuHVqpzoNcl84BpBcGOJCvHGlubbxo0EzOFPu5pye1KPb2RLCT6XXnUatLl2pNEjjxFEHoaI9B/6BuXZn3mi4x86UrqS73Yr5Usa5lHQ5nr0FgEVS6CvY+YYB6pb5WutbVYmvr15VsaxqlgTndCTe6Mtxhvrwjg/AwM9OdQwBBf2HGC0XckbFwmAniEvFrvO7QQeUwu67rmHJCFnCecHexLODYjjjUPnEJuvPoX1n1dklzz4RRDzYmH2HA9Z3GrKEnTJ2HIcVsUB9gWwT6TjEd7SHlkREzLa9g9Kdc7MQDQacUrqYlbHf7pYGJtu3ablO16s5rbsQhw3IogUC7HIJyEZGOAtDWkUv+N4TE6OzNfLd5sOCvM9sYzbgxUxGN6SjAmYA0ijDwdoHNI/ko8Fg1g6sIbRSVjPpSAk72RytXNdHYYPzQ94wqJ1CrH3CC8VQofK1CFfYC67FxsXNXyrrl0DH/pL+NFMpUyleyePwuparYSqG6r0hbYlXbviKg2Kwh65CzwzdnaODpM9tJQe7drnJwV2MEAZmp7O/GI3XD8yllqheA+pOUV7sDD+e0Kinp7mzkGFEBXkBYRVR2Pb7GDakA1mINvLPhzrEHtD1tdwvl9SEThhilsNs+3uE3yW/ujDdn26keXthRESBIbacuW/xtrgFeju8VhmNuXG1xuInjpIrwJlw//RHPI3nG/wE=</diagram></mxfile>
2006.02425/main_diagram/main_diagram.pdf
ADDED
Binary file (52.4 kB)
2006.02425/paper.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:de9a62ee90ba965df22cf0365925da5ff02919c704df4afef73735b18e2b6c33
size 3057273
2006.02425/paper_text/intro_method.md
ADDED
@@ -0,0 +1,17 @@
# Introduction

Generative learning using exact-likelihood methods based on invertible transformations has had remarkable success in accurately representing distributions of images [@kingma2018glow], audio [@oord2017parallel] and 3D point cloud data [@LiuQiGuibas_FlowNet3D; @noe2019boltzmann].

Recently, *Boltzmann Generators* (BG) [@noe2019boltzmann] have been introduced for sampling Boltzmann-type distributions $\rho'(x) \propto \exp(-u(x))$ of high-dimensional many-body problems, such as valid conformations of proteins.

This approach is widely applicable in the physical sciences, and has also been employed in the sampling of spin lattice states [@Nicoli_PRE_UnbiasedSampling; @LiWang_PRL18_NeuralRenormalizationGroup] and nuclear physics models [@Albergo_PRD19_FlowLattice]. In contrast to typical generative learning problems, the target density $\rho'(x)$ is specified by the definition of the many-body energy function $u(x)$, and the difficulty lies in learning to sample it efficiently. BGs achieve this by combining an exact-likelihood method that is trained to approximate the Boltzmann density $\rho'(x)$ with a statistical mechanics algorithm that reweights the generated density to the target density $\rho'(x)$.
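
To make the reweighting step concrete, here is a minimal sketch (ours, not the authors' code) of self-normalized importance reweighting of flow samples toward a Boltzmann target; the energy function `u` and the flow's `sample_with_logq` API are illustrative assumptions.

```python
import numpy as np

def boltzmann_weights(x, log_q, u):
    """Self-normalized importance weights for samples x ~ q (the flow).

    log_q : exact log-density of the flow at x (available for invertible flows).
    u     : many-body energy defining the target rho'(x) ∝ exp(-u(x)).
    """
    log_w = -u(x) - log_q          # unnormalized log importance weights
    log_w -= log_w.max()           # stabilize before exponentiating
    w = np.exp(log_w)
    return w / w.sum()

# Hypothetical usage: estimate an observable O under the Boltzmann density.
# x, log_q = flow.sample_with_logq(10_000)   # assumed flow API
# estimate = (boltzmann_weights(x, log_q, u) * O(x)).sum()
```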

Physical systems of interest usually comprise symmetries, such as invariance with respect to global rotations or permutations of identical elements. As we show in experiments, ignoring such symmetries in flow-based approaches to density estimation and enhanced sampling, e.g. using BGs, can lead to inferior results, which can be a barrier to further progress in this domain. In our work we thus provide the following contributions:

- We show how symmetry-preserving generative models, satisfying the exact-likelihood requirements of Boltzmann generators, can be obtained via *equivariant flows* (see the sketch after this list).

- We show that symmetry preservation can be critical for success through experiments on highly symmetric many-body particle systems. Concretely, equivariant flows are able to approximate the systems' densities and generalize beyond biased data, whereas approaches based on non-equivariant normalizing flows cannot.

- We provide a numerically tractable and efficient implementation of the framework for many-body particle systems, utilizing gradient flows derived from a simple mixture potential.
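
As a minimal sanity check of the equivariance idea behind these gradient flows, the sketch below verifies numerically that the gradient field of a permutation-invariant pairwise potential is permutation-equivariant; the toy potential is our illustrative stand-in, not the paper's mixture potential.

```python
import numpy as np

def potential(x):
    # Toy permutation-invariant energy: sum of pairwise Gaussian wells.
    diff = x[:, None, :] - x[None, :, :]
    return -0.5 * np.exp(-(diff ** 2).sum(-1)).sum()

def num_grad(f, x, eps=1e-5):
    # Central finite differences, one coordinate at a time.
    g = np.zeros_like(x)
    for idx in np.ndindex(*x.shape):
        e = np.zeros_like(x)
        e[idx] = eps
        g[idx] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))        # 5 identical particles in 3D
perm = rng.permutation(5)
# Gradient of an invariant energy is equivariant: grad(Px) = P grad(x).
assert np.allclose(num_grad(potential, x[perm]), num_grad(potential, x)[perm], atol=1e-6)
```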

While this work focuses mostly on applications in the physical sciences, the results offer a takeaway for a broader ML audience: studying the symmetries of a target distribution and building them into the architecture of a density estimation / sampling mechanism can lead to better generalization and can even be critical for successful learning.
2007.13040/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2021-06-07T22:21:14.177Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36" etag="NTSDLOlut8hS5U3UXEcz" version="14.7.6" type="google"><diagram name="Page-2" id="em4tKHelqe3XuSxN6gk8">7V3Zkps4FP0aP7ZLEvtjr5NUZWYySWommZcUsdU2E4wcLKfd+fqRQLIRi4G22dq4Uh0QICOdo3uvjhZPtNvV7rfQXS9/J3PsTxCY7yba3QQhA+nsL094jhNsgOKERejN4yR4SPjo/cIiEYjUrTfHG+VGSohPvbWaOCNBgGdUSXPDkDyptz0SX/3WtbsQ3wgOCR9nro8zt/3jzelSlAJZh/Q32Fss5TdD04mvrFx5s8his3Tn5CnxXdr9RLsNCaHx0Wp3i31ed7Je6Ls/6Rsye+vCvwh6+4TCf398v4rf8qHOI/sihDig583aENXw0/W3osJEYemzrMGQbIM55rmAiXbztPQo/rh2Z/zqE6MMS1vSlc/OIDtc+O5mI27d0JB8x7fEJyFLCUjAHrl59HxfJk2Q9qBfIzN6MHTnHitf6nY3nAlOaexs7m6W0ZvwJwh7CY9yWiL+baIgOKR4l4K+pN7gHkzWCDBZYRo+s+dELo5mxvmIBoAMQZCnA5325FsmqGSKNFcweLHP+oASOxBA1QHNLgeNZcNaGC4HzN2s42b36O141aYBurfMBwMWoVmAmky+80KWuUeC6FLIqysBm8lheyQBFQg3CqNtAwVGqKMMjEYOikZjKDodNz1k6qahDanpOVa1lgd1rSHQZJt+FU2vKdggkEGDbGu20W1bM2EObKZPhf1R8DN/bIm8cLWJGsA1u4HVyi6qM3mdByMxsoe0v3E4dwNXuWm2B/SQqMWfZJK5iP/X+T/jdr30vk6sG3bE8WHZvJWH1p24SRSAVUhcBplDio4MXKryrtQ4SAPgewtOnxkjA2bpN/z7PRZaXYsLK28+94uIrtqujqw+o6Kjem8AMlTUc6iIGqMiqmP2YbkVqWTqr507+6HYaKTQkZ4EKl4AWk0CBQGcIkM1GwBNqxkO3WkKLW1ohoMuMXW/gtFG1IgMU7QzsmFGuyZCHxrpeF+ZYc4c1h3zT9xxQe6njPsfIw/rmEBrCpHKRahPdSfxqUhNoylqGmP8+4L4V2pmncW/5ghbfdhYB3mabHtZ3addEGtpdZcdOdpG55FjBZHuAtFydJTxcT0I82tpcRcNVvcty6qgwbUOVj/QQaahhosdI5Unu10gUnkuqm9Q5clSve5zcoUUjt3Ll0uhEHasc1iDE9cKdY7NSMQafqv3Mof1ChQ4Oipw51DgkGXlBLztsjFPdOs1G7lvpiPxThimtJwS6aldT52nH/aagYX2cPTUJ3nqPpjDPB10YGTMn9cxeuuTx8uc7umZJ/y+EnqOxvO0bk4P2OlkUMLzBf4oTkX1pyowpEuyIIHrvyNkLZD8D1P6LOrR3VKi4ox3Hv2cOP7Cs2Jlj8/udiLn6ORZngSsgJ+TJ4mn+OnhsehMPqfMTOUn713KSBNEKawxZWiXnPsaX5ErA1CU+fyarzvg7ONSoTf7tPSC+MKD58sSxtXI6+4FpGH1T7bhDB+5T460Uzdc4GMZxhPKsjQMse9S76f6emfnk5xVXThp+QTyHAjzJcmXs5JngMyQ0//LmWF3ygzYFjPgyIy6zHBOZIZ49D3xoohGujyQHoY31SziEoinUvzav8YJlMsOLyWd24Fw94fUm9k2/Ll3H8Nm6H4x1fAZaoETGXqa7dJ6Q6Q6EVTfTNbZDYydksVUc6OlV9wVmJus3ZIjYUUZxXRtzm6VDjdken+fvBVmvTHwB35ifz+QlRsoPcFqfcwX9x29IMDhlc/JPOS+YGN9P2CrhDKyq04gQjn9vqYUXDnH/RVbtAr9wNdo9JxUhi82eumMmjZ6pWMKfTN6ZEtHo3fE6MH9PE5JKTvH7EGtTbNnHTV7Xcpdr1S2AlW7oFanAb6d4UU//NvRLtzLuol9YYZZtevXjBuFwNFU6wRSkymbdnjHpff+qxMHpu2fujxZ3qrIYqNT8dXJyvJ98XtDBL2yrmp0CnpWcS83MB2P1qg2BbRrUxTj2heuoWEYmONS+3AMzLBjKqeqYepUTneOy+msLlj39EP0KiRoxDi1MQ9hgMMxVuXemtkpf/L08RShFowr60mRtCF2l3S/ydtBnZo8Mm9XNxVtTpV9oRzuSs72Ue8X4aQihuhNVWOjmzck19VVWacnVLbCdXqFWJZiFtOlEDSQi0ASoiwmjUHS6MYMA4dkvz68U4Qq7Lpw6QihbhGqsNNCJ97BMdXtKzWYXcYBzTbXcch1JMeqarN01/yQFZ56rv+BV0ywKBxqSLCbRkEZD4W+EUrJSpzkjG6ocV3xxqWM8SD6TLLzdvl5Xss4oQkU9Kg6gwvKzQqPGp+iULJw1ClZW2poCmvU83ki1Qp1fmyTnVM1YbWBwrrzRwqea1hIhqDCvgojLU6ghZ7ev6smL/b0AkfzaZwnKIcnvR5iX+AAh67v/YqkgAsdZi8iv8hGV0l1pWWCCrtdJ5W3jUNu/MVrjLpidzfnaCU11WHfv+0+dNUrxmRNrciBoIK6MfSgrDLu5fjGABYifAWh2jyyC65y8dWaax9VZJdBe+tCTNpx11IF3yOe6mlWddfpfKy23XUVMWgkytmIYp2HJ07bNCld/963qI4V2WXPvKLgrj1/pme1tZbDu9L17H2k2xUNXS/wgsWgaXYCrY6HwSD1a0jdL1CAoII2eeG/h8RshZneUyAPOV2bygmuCnZw2lgnBlZQKi/8h+Ry0cuZIV+EXmMtD+bJiQOw8HhDRwNfxDXLLCVaywa+yv71UueYhYS3/jIjkQjDdCNrE7ABdZDTq3mMPpwBsUUAU2TrOjI0oOvAgLZmFrGjKbxsx56CxEfVMJDuTGVlJfDT8uJAm5WlNoLs9PA7pnFH5fBjsNr9/w==</diagram></mxfile>
2007.13040/main_diagram/main_diagram.pdf
ADDED
Binary file (34.2 kB)
2007.13040/paper.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ad50d1915f208eab6ae8fe6dfeae1e42c8e671b7ca2a03256468dbcefff848a7
size 1012062
2007.13040/paper_text/intro_method.md
ADDED
@@ -0,0 +1,99 @@
# Introduction

Meta-learning, or learning to learn (Thrun & Pratt, 1998), empowers agents with the core aspect of intelligence: quickly learning a new task from just a few examples by drawing upon the knowledge learned from prior tasks. The recent resurgence of meta-learning has pushed ahead with more effective algorithms that have been deployed in areas such as computer vision (Kang et al., 2019; Liu et al., 2019; Sung et al., 2018), natural language processing (Dou et al., 2019; Gu et al., 2018; Madotto et al., 2019), and robotics (Xie et al., 2018; Yu et al., 2018). Some of the dominant algorithms learn a transferable metric space from previous tasks (Snell et al., 2017; Vinyals et al., 2016), but are unfortunately only applicable to classification problems. Instead, gradient-based algorithms (Finn et al., 2017; 2018), which frame meta-learning as a bi-level optimization problem, are flexible and general enough to be independent of problem types; we focus on these in this work.

The bi-level optimization procedure of gradient-based algorithms is illustrated in Figure 1a. In the inner-loop, the initialization of a base model (a.k.a. base learner) globally shared across tasks (i.e., $\theta_0$) is adapted to each task (e.g., $\phi_1$ for the first task) via gradient descent over the support set of the task. To reach the desired goal that optimizing from this initialization leads to fast adaptation and generalization, a meta-training objective evaluating the generalization capability of the initialization on all meta-training tasks is optimized in the outer-loop. Specifically, the generalization capability on each task is measured by the performance of the adapted model on a set distinct from the support set, namely the query set.

The learned initialization, however, is at high risk of two forms of overfitting: (1) memorization overfitting (Yin et al., 2020) (Figure 1b), where it solves meta-training tasks via rote memorization and does not rely on support sets for inner-loop adaptation, and (2) learner overfitting (Rajendran et al., 2020) (Figure 1c), where it overfits to the meta-training tasks and fails to generalize to the meta-testing tasks even though support sets come into play during inner-loop adaptation. Both types of overfitting hurt the generalization from meta-training to meta-testing tasks, which we call meta-generalization in Figure 1a. Improving meta-generalization is especially challenging – standard regularizers like weight decay lose their power as they limit the flexibility of fast adaptation in the inner-loop.

<sup>†</sup>Part of the work was done when H.Y. was a student at Penn State University. H.Y. and L.K.H. contributed equally. <sup>1</sup>Stanford University, CA, USA <sup>2</sup>Tencent AI Lab, Shenzhen, China <sup>3</sup>Rutgers University, NJ, USA <sup>4</sup>City University of Hong Kong, Hong Kong <sup>5</sup>Pennsylvania State University, PA, USA. Correspondence to: Ying Wei <yingwei@cityu.edu.hk>.



Figure 1. (a) Illustration of the gradient-based meta-learning process and two types of generalization; (b)&(c) two forms of overfitting in gradient-based meta-learning. The red cross represents where the learned knowledge cannot be well generalized.

To this end, the few existing solutions attempt to regularize the search space of the initialization (Yin et al., 2020) or enforce a fair performance of the initialization across all meta-training tasks (Jamal & Qi, 2019) while preserving the expressive power for adaptation. Rather than passively imposing regularization on the initialization, Rajendran et al. (2020) recently turned to active data augmentation, aiming to generate more data to meta-train the initialization by injecting the same noise into the labels of both support and query sets (i.e., label shift). Though label shift with a random constant increases the dependence of the base learner on the support set, learning the constant is as easy as modifying a bias. Therefore, little extra knowledge is introduced to meta-train the initialization.

This paper sets out to investigate more flexible and powerful ways to produce "more" data via task augmentation. The goal of task augmentation is to increase the dependence of target predictions on the support set and provide additional knowledge to optimize the model initialization. To meet this goal, we propose two task augmentation strategies - MetaMix and Channel Shuffle. MetaMix linearly combines either original features or hidden representations of the support and query sets, and performs the same linear interpolation between their corresponding labels. For classification problems, MetaMix is further enhanced by the Channel Shuffle strategy, which we name MMCF. For samples of each class, Channel Shuffle randomly selects a subset of channels to replace with the corresponding ones of samples from a different class. These additional signals for the meta-training objective improve the meta-generalization of the learned initialization.

We highlight the primary contributions of this work. (1) We identify and formalize effective task augmentation that is sufficient for alleviating both memorization overfitting and learner overfitting, thereby improving meta-generalization, resulting in two task augmentation methods. (2) Both task augmentation strategies are theoretically proven to improve meta-generalization. (3) Through comprehensive experiments, we demonstrate two significant benefits of the two augmentation strategies. First, on various real-world datasets, the performances are substantially improved over state-of-the-art meta-learning algorithms and other strategies for overcoming overfitting (Jamal & Qi, 2019; Yin et al., 2020). Second, both MetaMix and MMCF are compatible with existing and advanced meta-learning algorithms and readily boost their performances.

# Method

Gradient-based meta-learning algorithms assume a set of tasks to be sampled from a distribution $p(\mathcal{T})$. Each task $\mathcal{T}_i$ consists of a support sample set $\mathcal{D}_i^s = \{(\mathbf{x}_{i,j}^s, \mathbf{y}_{i,j}^s)\}_{j=1}^{K^s}$ and a query sample set $\mathcal{D}_i^q = \{(\mathbf{x}_{i,j}^q, \mathbf{y}_{i,j}^q)\}_{j=1}^{K^q}$, where $K^s$ and $K^q$ denote the number of support and query samples, respectively. The objective of meta-learning is to master new tasks quickly by adapting a well-generalized model learned over the task distribution $p(\mathcal{T})$. Specifically, the model $f$ parameterized by $\theta$ is trained on massive tasks sampled from $p(\mathcal{T})$ during meta-training. When it comes to meta-testing, $f$ is adapted to a new task $\mathcal{T}_t$ with the help of the support set $\mathcal{D}_t^s$ and evaluated on the query set $\mathcal{D}_t^q$.

Take model-agnostic meta-learning (MAML) (Finn et al., 2017) as an example. The well-generalized model is grounded to an initialization for $f$, i.e., $\theta_0$, which is adapted to each $i$-th task in a few gradient steps using its support set $\mathcal{D}_i^s$. The generalization performance of the adapted model, i.e., $\phi_i$, is measured on the query set $\mathcal{D}_i^q$, and in turn used to optimize the initialization $\theta_0$ during meta-training. Let $\mathcal{L}$ and $\mu$ denote the loss function and the inner-loop learning rate, respectively. The above interleaved process is formulated as a bi-level optimization problem,

$$\theta_0^* := \min_{\theta_0} \mathbb{E}_{\mathcal{T}_i \sim p(\mathcal{T})} \left[ \mathcal{L}(f_{\phi_i}(\mathbf{X}_i^q), \mathbf{Y}_i^q) \right], \quad \text{s.t. } \phi_i = \theta_0 - \mu \nabla_{\theta_0} \mathcal{L}(f_{\theta_0}(\mathbf{X}_i^s), \mathbf{Y}_i^s), \tag{1}$$

where $\mathbf{X}_i^{s(q)}$ and $\mathbf{Y}_i^{s(q)}$ represent the collection of samples and their corresponding labels for the support (query) set, respectively. The predicted value $f_{\phi_i}(\mathbf{X}_i^{s(q)})$ is denoted as $\hat{\mathbf{Y}}_i^{s(q)}$. In the meta-testing phase, to solve the new task $\mathcal{T}_t$, the optimal initialization $\theta_0^*$ is fine-tuned on its support set $\mathcal{D}_t^s$ to obtain the task-specific parameters $\phi_t$.
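
As a sketch of this bi-level procedure (Eq. (1)), the following PyTorch-style snippet performs one outer-loop update; the functional `model(params, X)` interface and the single inner gradient step are simplifying assumptions, not the paper's exact setup.

```python
import torch

def maml_outer_step(theta0, tasks, model, loss_fn, mu=0.01, meta_lr=1e-3):
    """One outer-loop update of the shared initialization theta0 (Eq. (1)).

    theta0 : list of tensors with requires_grad=True.
    tasks  : list of (X_s, Y_s, X_q, Y_q) support/query batches.
    model  : functional model, model(params, X) -> predictions.
    """
    meta_grads = [torch.zeros_like(p) for p in theta0]
    for X_s, Y_s, X_q, Y_q in tasks:
        # Inner loop: adapt theta0 to the task on its support set.
        inner_loss = loss_fn(model(theta0, X_s), Y_s)
        grads = torch.autograd.grad(inner_loss, theta0, create_graph=True)
        phi = [p - mu * g for p, g in zip(theta0, grads)]
        # Outer objective: adapted parameters evaluated on the query set.
        outer_loss = loss_fn(model(phi, X_q), Y_q)
        task_grads = torch.autograd.grad(outer_loss, theta0)
        meta_grads = [m + g / len(tasks) for m, g in zip(meta_grads, task_grads)]
    with torch.no_grad():
        for p, g in zip(theta0, meta_grads):
            p -= meta_lr * g
```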

In practical situations, the distribution $p(\mathcal{T})$ is unknown, so the expected performance in Eqn. (1) cannot be estimated directly. Instead, the common practice is to approximate it with the empirical performance, i.e.,

$$\theta_0^* := \min_{\theta_0} \frac{1}{n_T} \sum_{i=1}^{n_T} \left[ \mathcal{L}(f_{\phi_i}(\mathbf{X}_i^q), \mathbf{Y}_i^q) \right], \quad \text{s.t. } \phi_i = \theta_0 - \mu \nabla_{\theta_0} \mathcal{L}(f_{\theta_0}(\mathbf{X}_i^s), \mathbf{Y}_i^s). \tag{2}$$

Unfortunately, this empirical risk observes the generalization ability of the initialization $\theta_0$ only on a finite set of $n_T$ tasks. When the function $f$ is sufficiently powerful, a trivial solution for $\theta_0$ is to overfit all tasks. Compared to standard supervised learning, the overfitting is more complicated, with two cases: *memorization overfitting* and *learner overfitting*, which differ primarily in whether the support set contributes to inner-loop adaptation. In memorization overfitting, $\theta_0^*$ memorizes all tasks, so that the adaptation to each task via its support set is futile (Yin et al., 2020). In learner overfitting, $\theta_0^*$ fails to generalize to new tasks, though it adapts to solve each meta-training task sufficiently with the corresponding support set (Rajendran et al., 2020). Both types of overfitting lead to poor meta-generalization (see Figure 1a).

Inspired by data augmentation (Cubuk et al., 2019; Zhang et al., 2018; Zhong et al., 2020; Zhang et al., 2021), which is used to mitigate the overfitting of training samples in conventional supervised learning, we propose to alleviate the problem of task overfitting via task augmentation. Before proceeding to our solutions, we first formally define two criteria for an effective task augmentation:

**Definition 1** An effective task augmentation for meta-learning is an augmentation function $g(\cdot)$ that transforms a task $\mathcal{T}_i = \{\mathcal{D}_i^s, \mathcal{D}_i^q\}$ into an augmented task $\mathcal{T}_i^{'} = \{g(\mathcal{D}_i^s), g(\mathcal{D}_i^q)\}$, so that the following two criteria are met:

(1) $$I(g(\hat{\mathbf{Y}}_{i}^{q}); g(\mathcal{D}_{i}^{s})|\theta_{0}, g(\mathbf{X}_{i}^{q})) - I(\hat{\mathbf{Y}}_{i}^{q}; \mathcal{D}_{i}^{s}|\theta_{0}, \mathbf{X}_{i}^{q}) > 0,$$

(2) $$I(\theta_{0}; g(\mathcal{D}_{i}^{q})|\mathcal{D}_{i}^{q}) > 0.$$

An augmented task satisfying the first criterion is expected to alleviate memorization overfitting, as the model relies more heavily on the support set $\mathcal{D}_i^s$ to make predictions, i.e., the mutual information between $g(\hat{\mathbf{Y}}_i^q)$ and $g(\mathcal{D}_i^s)$ increases. The second criterion guarantees that the augmented task contributes additional knowledge to update the initialization in the outer-loop. With more augmented meta-training tasks satisfying this criterion, the meta-generalization ability of the initialization to meta-testing tasks improves. Building on this, we introduce the proposed task augmentation strategies.

**MetaMix.** One of the most immediate choices for task augmentation is to directly incorporate support sets in the outer-loop, but this alone is far from sufficient. The support sets contribute little to the value and gradients of the meta-training objective, as the meta-training objective is formulated as the performance of the adapted model, which is exactly optimized via the support sets. Thus, we are motivated to produce "more" data out of the accessible support and query sets, resulting in MetaMix, which meta-trains $\theta_0$ by mixing samples from both the query set and the support set.

In detail, the mixing strategy follows Manifold Mixup (Verma et al., 2019), where not only inputs but also hidden representations are mixed up. Assume that the model $f$ consists of $L$ layers. The hidden representation of a sample set $\mathbf{X}$ at the $l$-th layer is denoted as $f_{\theta^l}(\mathbf{X})$ ($0 \le l \le L-1$), where $f_{\theta^0}(\mathbf{X}) = \mathbf{X}$. For a pair of support and query sets with their corresponding labels in the $i$-th task $\mathcal{T}_i$, i.e., $(\mathbf{X}_i^s, \mathbf{Y}_i^s)$ and $(\mathbf{X}_i^q, \mathbf{Y}_i^q)$, we randomly sample a value of $l \in \mathcal{C} = \{0, 1, \dots, L-1\}$ and compute the mixed batch of data for meta-training as,

$$\mathbf{X}_{i,l}^{mix} = \lambda f_{\phi_i^l}(\mathbf{X}_i^s) + (\mathbf{I} - \lambda) f_{\phi_i^l}(\mathbf{X}_i^q), \qquad \mathbf{Y}_i^{mix} = \lambda \mathbf{Y}_i^s + (\mathbf{I} - \lambda) \mathbf{Y}_i^q, \tag{3}$$

where $\lambda = \operatorname{diag}(\{\lambda_j\}_{j=1}^{K^q})$ and each coefficient $\lambda_j \sim \operatorname{Beta}(\alpha,\beta)$. Here, we assume that the size of the support set and that of the query set are equal, i.e., $K^s = K^q$. If $K^s < K^q$, for each sample in the query set we randomly select one sample from the support set for mixup; similar sampling applies when $K^s > K^q$. In Appendix B.1, we illustrate the Beta distribution in both symmetric (i.e., $\alpha = \beta$) and skewed (i.e., $\alpha \neq \beta$) shapes. Using the mixed batch produced by MetaMix, we reformulate the outer-loop optimization problem as,

$$\theta_0^* := \min_{\theta_0} \frac{1}{n_T} \sum_{i=1}^{n_T} \mathbb{E}_{\boldsymbol{\lambda} \sim \text{Beta}(\alpha, \beta)} \mathbb{E}_{l \sim \mathcal{C}} [\mathcal{L}(f_{\phi_i^{L-l}}(\mathbf{X}_{i, l}^{mix}), \mathbf{Y}_i^{mix})], \tag{4}$$

where $f_{\phi_i^{L-l}}$ represents the remaining layers after the mixed layer $l$. MetaMix is flexible enough to be compatible with off-the-shelf gradient-based meta-learning algorithms, by replacing the query set with the mixed batch for meta-training (a minimal sketch of the mixing step follows). Further, to verify the effectiveness of MetaMix, we examine whether the criteria in Definition 1 are met in what follows.
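
A minimal sketch of the mixing step in Eq. (3) (our illustration; the shapes and the one-hot label format are assumptions):

```python
import numpy as np

def metamix(h_s, y_s, h_q, y_q, alpha=2.0, beta=2.0, rng=None):
    """Mix support/query representations at a layer l and their labels (Eq. (3)).

    h_s, h_q : layer-l features, shape (K, ...), assuming K^s == K^q == K.
    y_s, y_q : one-hot labels, shape (K, C).
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, beta, size=len(h_q))            # lambda_j ~ Beta(alpha, beta)
    lam_h = lam.reshape((-1,) + (1,) * (h_q.ndim - 1))    # broadcast over feature dims
    x_mix = lam_h * h_s + (1.0 - lam_h) * h_q
    y_mix = lam[:, None] * y_s + (1.0 - lam[:, None]) * y_q
    return x_mix, y_mix
```

The mixed batch is then passed through the remaining layers $f_{\phi_i^{L-l}}$ and scored against $\mathbf{Y}_i^{mix}$, as in Eq. (4).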

**Corollary 1** Assume that the support set is sampled independently from the query set. Then the following two equations hold:

$$I(\hat{\mathbf{Y}}_{i}^{mix}; (\mathbf{X}_{i}^{s}, \mathbf{Y}_{i}^{s})|\theta_{0}, \mathbf{X}_{i}^{mix}) - I(\hat{\mathbf{Y}}_{i}^{q}; (\mathbf{X}_{i}^{s}, \mathbf{Y}_{i}^{s})|\theta_{0}, \mathbf{X}_{i}^{q}) = H(\hat{\mathbf{Y}}_{i}^{s}|\theta_{0}, \mathbf{X}_{i}^{s}) \geq 0;$$

$$I(\theta_{0}; \mathbf{X}_{i}^{mix}, \mathbf{Y}_{i}^{mix}|\mathbf{X}_{i}^{q}, \mathbf{Y}_{i}^{q}) = H(\theta_{0}) - H(\theta_{0}|\mathbf{X}_{i}^{s}, \mathbf{Y}_{i}^{s}). \tag{5}$$

The first criterion is easily satisfied – $H(\hat{\mathbf{Y}}_i^s|\theta_0,\mathbf{X}_i^s)$ hardly equals zero, as $\theta_0$ is unlikely to fit the support set in meta-learning. The second criterion indicates that MetaMix contributes a novel task as long as the support set of the task being augmented is capable of reducing the uncertainty of the initialization $\theta_0$, which is often the case. We provide the detailed proof of Corollary 1 in Appendix A.1.

**MetaMix enhanced with Channel Shuffle.** In classification, the proposed MetaMix can be further enhanced by another task augmentation strategy named Channel Shuffle (CF). Channel Shuffle randomly replaces a subset of channels across the samples of each class with the corresponding ones of a different class. Assume that the hidden representation $f_{\phi_i^l}(\mathbf{x}_{i,j}^{s(q)})$ of each sample consists of $p$ channels, i.e., $f_{\phi_i^l}(\mathbf{x}_{i,j}^{s(q)}) = [f_{\phi_i^l}^{(1)}(\mathbf{x}_{i,j}^{s(q)}); \dots; f_{\phi_i^l}^{(p)}(\mathbf{x}_{i,j}^{s(q)})]$. Provided with 1) a pair of classes $c$ and $c'$ with corresponding sample sets $(\mathbf{X}_{i;c}^{s(q)}, \mathbf{Y}_{i;c}^{s(q)}), (\mathbf{X}_{i;c'}^{s(q)}, \mathbf{Y}_{i;c'}^{s(q)})$, and 2) a random variable $\mathbf{R}_{c,c'} = \mathrm{diag}(r_1, \dots, r_p)$ with $r_t \sim \mathrm{Bernoulli}(\delta)$ and $\delta > 0.5$ for $t \in [p]$, the channel shuffle process is formulated as:

$$\begin{aligned} \mathbf{X}_{i;c}^{s(q),cf} &= \mathbf{R}_{c,c'} f_{\phi_i^l}(\mathbf{X}_{i;c}^{s(q)}) + (\mathbf{I} - \mathbf{R}_{c,c'}) f_{\phi_i^l}(\mathbf{X}_{i;c'}^{s(q)}), \\ \mathbf{Y}_{i;c}^{s(q),cf} &= \mathbf{Y}_{i;c}^{s(q)}. \end{aligned} \tag{6}$$

The channel shuffle strategy is then applied to both support and query sets, with $\mathbf{R}_{c,c'}$ shared between the two. We denote the shuffled support and query sets as $(\mathbf{X}_i^{s,cf}, \mathbf{Y}_i^{s,cf})$ and $(\mathbf{X}_i^{q,cf}, \mathbf{Y}_i^{q,cf})$, respectively. Then, in the outer-loop, the channel-shuffled samples are integrated into MetaMix and Eqn. (4) is reformulated with:

$$\mathbf{X}_{i,l}^{mmcf} = \lambda \mathbf{X}_{i}^{s,cf} + (\mathbf{I} - \lambda) \mathbf{X}_{i}^{q,cf}, \qquad \mathbf{Y}_{i}^{mmcf} = \lambda \mathbf{Y}_{i}^{s,cf} + (\mathbf{I} - \lambda) \mathbf{Y}_{i}^{q,cf}. \tag{7}$$

We name MetaMix enhanced with Channel Shuffle MMCF. In Appendix A.2, we prove that MMCF not only meets the first criterion in Definition 1 but also outperforms MetaMix with respect to the second criterion. Taking MAML as an example, we show MetaMix and MMCF in Alg. 1 and Appendix B.2, respectively. A sketch of the channel-replacement step follows.
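
A small sketch of the channel-replacement step in Eq. (6) (illustrative shapes; as stated above, the mask is shared across support and query sets):

```python
import numpy as np

def channel_shuffle(h_c, h_cprime, delta=0.8, rng=None):
    """Replace a random channel subset of class-c features with class-c' ones (Eq. (6)).

    h_c, h_cprime : features of shape (num_samples, p, ...), with p channels.
    Returns the shuffled features; labels remain those of class c.
    """
    rng = rng or np.random.default_rng()
    p = h_c.shape[1]
    keep = rng.random(p) < delta                     # r_t ~ Bernoulli(delta), delta > 0.5
    mask = keep.reshape((1, p) + (1,) * (h_c.ndim - 2))
    return np.where(mask, h_c, h_cprime)

# The same mask (R_{c,c'}) would be reused for the query set of the task.
```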
2008.09641/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="www.draw.io" modified="2019-11-22T17:43:56.687Z" agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" version="12.2.9" etag="i3oJ5DdW8X9Q1zR81yZE" type="google" pages="1"><diagram id="TwUG6dy3D9RqXALDdvrD">3VpNc9sgEP01vmYE6POYOml7aGcyzXTaHonAlqay8GAc2/31RRayBMhNqiCP4lysXWCBt2/RssoMzVf7Txyvs6+M0GIGPbKfobsZhABGsfypNIdaEyBQK5Y8J6pTq3jM/1Cl9JR2mxO60ToKxgqRr3VlysqSpkLTYc7ZTu+2YIU+6xovqaV4THFha3/kRGS1NoZRq/9M82XWzAzCpG5Z4aazMrHJMGG7WnXcHLqfoTlnTNRPq/2cFhV4DS41Ah/PtJ4WxmkpXjMA1gOecbFVe1PrEodms5xtS0Kr/mCGPuyyXNDHNU6r1p10r9RlYlWoZsxT5a5ISou8KOasYPxoCS0WNExTqd8Izn7TTguJkifPky1qOZQLuj+7JXACSjKMshUV/CC7qAGJ2rKnyBUrcdd6CkVKl3W8FDTjsGLH8mS6BVA+KAz78UQWnt/ol+9vw9RAkQIS0KgPxSSMEA7doAiAr8MIQWLhGPfA6DtA0X+ZlbQkt1UkSykt8GaTpzpocpv88FMK3k3QiL8qsRHu9pp0aKR9LjrDpPSrsSif20GV0IypF0eJcWhs2JanVOPFK/DvwBv0sVTpOC2wyJ/1GfswVzM8sFxO3LoXGe6NfF+3Ua9eDeueIKYlEBtEiQ1LAvMlFZalIwlOG38VLwIHvBjg36FcOsOL6ZDAdB2CrkiA/GQsEoRXQYIaDxUx3tSJ4QeuiOFHoxEjejsx3kPE3vidP6RHXZwMdhPQ3QRGO8Tjq4jf6VDCg64OccvSeId44oAE4x3IYHIHsuWawZHuwYtFegNjx8u3hFiOnv5lKDFeYkGTLF/gLgTA5EJlumFhAj746BvxEgPg5BzaPfsmd1M1XQPMOs3gXBSYWe0ZJ0tX4EOn27rqsPnHkq2J9CqcfKhNDqaQXWYaTiGgEajl00sFkk7m1eZh/1UgmfYrFxi8C8LBd6BItxQmBoMdHi4uSmfjlMAm52GzUBxGrt4eoXlEOXSwXQMDx5VAr/qds/LZcvhbEiyZXsXE70uwYviEQlcJVnjm9dthwikenWdYdkXpVu6m+gYEvQfGirxcOsV0Eae0/zvIUxz4gavvIAamqAmbi2BqF2OQoil6vzS13gmX5aldObmWj0unSsMlLlR27eEaqenDC4Z7Y/fKjlA7B/THA1WK7Xf/OjNo/3sC3f8F</diagram></mxfile>
2008.09641/main_diagram/main_diagram.pdf
ADDED
Binary file (14.7 kB)
2008.09641/paper.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:865c7f8598bc819c13eba14f62ee570c244646c00fd15afacfbfbc1c0d190047
size 1646723
2008.09641/paper_text/intro_method.md
ADDED
@@ -0,0 +1,139 @@
# Introduction

Clustering is a fundamental unsupervised learning problem that aims to group the input data based on a similarity criterion. Traditionally, clustering models are trained on a transformed low-dimensional version of the original data obtained via feature engineering or dimensionality reduction, *e.g.* PCA. Hence the performance of clustering relies heavily on the quality of the feature space representation. In recent years deep generative models have been successful in learning low-dimensional representations from complex data distributions, and two particular models have gained wide attention: the Variational Autoencoder (VAE) [@kingma2013vae], [@rezende2014vae2] and the Generative Adversarial Network (GAN) [@goodfellow2014gan].

In a VAE, an encoder and a decoder network pair is trained to map the data to a low-dimensional latent space, and to reconstruct it back from the latent space, respectively. The encoder is used for inference while the decoder is used for generation. The main limitations of the standard VAE are the restrictive assumptions associated with the explicit distributions of the encoder and decoder outputs. For the latter this translates empirically into loss of detail in the generator output. In a GAN, a generator network that samples from latent space is trained to mimic the underlying data distribution while a discriminator is trained to detect whether the generated samples are true or synthetic (fake). This adversarial training strategy avoids explicit assumptions on the distribution of the generator, allowing GANs to produce the most realistic synthetic outputs to date [@brock2018large], [@karras2018progressive], [@styleGAN]. The weaknesses of the standard GAN are the lack of inference capabilities and the difficulties associated with training (*e.g.* mode collapse).

One would like to combine the strengths of these two models, *i.e.* to be able to infer the latent variables directly from data and to have a flexible decoder that learns faithful data distributions. Additionally, we would like to train simultaneously for feature extraction and clustering, as this performs better according to [@DEC], [@yang2016jule], [@yang2017towards]. Extensions of the standard VAE that modify the prior distribution to make it suitable for clustering have been proposed in [@SVAE], [@dilokthanakul2016deep], [@jiang2017vade], although they still suffer from overly restrictive generator models. On the other hand, the standard GAN has been extended to infer categories [@springenberg2015catgan], [@tripleGAN], [@badGAN]. Other works have extended GAN to infer the posterior distribution of the latent variables, reporting good results in both reconstruction and generation [@chen2016infogan], [@dumoulin2016ali], [@donahue2016bigan], [@srivastava2017veegan], [@li2017alice], [@li2019aim]. These models do inference and have flexible generators but were not designed for clustering.

In this paper we propose a model able to learn good representations for clustering in latent space. The model is called Matching Priors and Conditionals for Clustering (MPCC). This is a GAN-based model with (a) a learnable mixture of distributions as prior for the generator, (b) an encoder to infer the latent variables from the data and (c) a clustering network to infer the cluster membership from the latent variables. Code is available at [`github.com/jumpynitro/MPCC`](http://github.com/jumpynitro/MPCC).

MPCC is based on a matching joint distribution optimization framework. Let us denote by $q(x)$ the true distribution and by $p(z)$ the prior, where $x \in \mathcal{X}$ is the observed variable and $z \in \mathcal{Z}$ is the latent variable. $q(x)$ and $p(z)$ are the marginals of the inference model $q(x,z)$ and the generative model $p(x,z)$, respectively. If the joint distributions $q(x,z)$ and $p(x,z)$ match, then it is guaranteed that all the conditionals and marginals also match. Intuitively this means that we can reach one domain starting from the other, *i.e.*, we have an encoder that allows us to reach the latent variables, $p(z) \approx q(z) = \mathbb{E}_{q(x)}[q(z|x)]$, and a generator that approximates the real distribution, $q(x) \approx p(x) = \mathbb{E}_{p(z)}[p(x|z)]$. Notice that the latter approximation corresponds to a GAN optimization problem. In the case of the vanilla GAN the Jensen-Shannon divergence $D_{JS}(q(x)||p(x))$ is minimized, but other distances can be used [@WGAN], [@IWAN], [@fGAN], [@LeastGAN].

Although other classifications can be done [@lagrange], we recognize that the joint distribution matching problem can be divided into three general categories: i) matching the joints directly, ii) matching conditionals in $\mathcal{Z}$ and marginals in $\mathcal{X}$, and iii) matching conditionals in $\mathcal{X}$ and marginals in $\mathcal{Z}$. The straightforward approach is to minimize the distance between the joint distributions using a fully adversarial optimization such as [@dumoulin2016ali], [@donahue2016bigan], [@bigbigan], which yields competitive results but still shows difficulties in reconstruction tasks, likely affecting unsupervised representation learning. According to [@li2017alice] these issues are related to the lack of an explicit optimization of the conditional distributions.

Recent works [@distributionMatching], [@li2019aim], [@lagrange] have shown that the VAE [@kingma2013vae] loss function (ELBO) is related to matching the inference and generative joint distributions. This can be demonstrated for the Kullback--Leibler (KL) divergence of $p$ from $q$, which we refer to as the forward KL, as follows: $$\begin{align}
& D_{KL}(q(z, x) ||p(z, x)) \nonumber \\
~ = ~ &\mathbb{E}_{q(x)} [D_{KL}(q(z|x) ||p(z|x))] + D_{KL}(q(x) ||p(x)) \nonumber \\
~ = ~ &\mathbb{E}_{q(x)} \mathbb{E}_{q(z|x)}[ - \log p(x|z) ] + \mathbb{E}_{q(x)} [D_{KL}(q(z|x)|| p(z)) ] + \mathbb{E}_{q(x)} [\log q(x)] \nonumber \\
~ = ~ &\mathbb{E}_{q(x)} [-\text{ELBO}] + \mathbb{E}_{q(x)} [\log q(x)] ,
\label{vae_match}
\end{align}$$ hence maximizing the ELBO can be seen as matching the conditionals in latent space $\mathcal{Z}$ and the marginals in data space $\mathcal{X}$ (see the second line in Eq. [\[vae_match\]](#vae_match){reference-type="ref" reference="vae_match"}). The proof of the first equivalence in Eq. [\[vae_match\]](#vae_match){reference-type="ref" reference="vae_match"} can be found in Appendix A.

In order to avoid latent collapse and the parametric assumptions of the VAE, AIM [@li2019aim] proposed the opposite, *i.e.* to match the conditionals in data space and the marginals in latent space. Starting from the KL divergence of $q$ from $p$, which we refer to as the reverse KL, they obtained the following: $$\begin{align}
& D_{KL}(p(z, x) ||q(z, x)) \nonumber \\
~ = ~ &\mathbb{E}_{p(z)}[D_{KL}(p(x|z) ||q(x|z))] + D_{KL}(p(z) ||q(z)) \nonumber \\
~ = ~ &\mathbb{E}_{p(z)} \mathbb{E}_{p(x|z)}[ - \log q(z|x) ] + \mathbb{E}_{p(z)} [D_{KL}(p(x|z)|| q(x)) ] + \mathbb{E}_{p(z)} [ \log p(z)],
\label{aim_match}
\end{align}$$ where $p(z)$ is a fixed parametric distribution, hence $\mathbb{E}_{p(z)} [ \log p(z)]$ is constant. Therefore [@li2019aim] achieves the matching of the joint distributions by minimizing $D_{KL}(p(x|z)|| q(x))$ to learn the real domain, and maximizing the likelihood of the encoder $\mathbb{E}_{p(x|z)}[\log q(z|x)]$. This allows an overall better performance than [@dumoulin2016ali], [@donahue2016bigan], [@li2017alice] in terms of reconstruction and generation scores. This method matches the conditional distribution explicitly, uses a flexible generator [@goodfellow2014gan] and avoids latent collapse problems [@Lucas2019UnderstandingPC].

A lot of research has been done in unsupervised and semi-supervised learning using straightforward joint distribution optimization [@donahue2016bigan], [@dumoulin2016ali], [@bigbigan], [@amm], and even more for the conditional-in-latent-space decomposition [@kingma2013vae], [@kingma2016dsemi], [@maaloe16], [@maale2019biva]. In this work we explore the representation capabilities of the decomposition proposed in [@li2019aim]. Our main contributions are:

- A mathematical derivation that allows us to have a varied mixture of distributions in latent space, enforcing its clustering capabilities. Based on this derivation we develop a new generative model for clustering called MPCC, trained by matching prior and conditional distributions jointly.

- A comparison with the state of the art showing that MPCC outperforms generative and discriminative models in terms of clustering accuracy and generation quality.

- An ablation study of the most relevant parameters of MPCC and a comparison with the AIM baseline [@li2019aim] using state-of-the-art architectures [@bigbigan].

# Method

MPCC extends the usual joint distribution of variables $x \in \mathcal{X}$ and $z \in \mathcal{Z}$ by incorporating an additional latent variable, $y \in \mathcal{Y}$, which represents a given cluster. We specify the graphical models for *generation* and *inference* as

:::: center
::: minipage
- $p(x,z,y) = p(y) p (z|y) p(x|z,y)$,

- $q(x,z,y) = q (y|z) q(z|x) q(x)$,
:::
::::

respectively. The only assumption in the graphical model is $q(y|z)=q(y|z,x)$, *i.e.* $z$ contains all the information from $x$ that is necessary to estimate $y$.

For generation, we seek to match the decoder $p(x|z, y)$ to the real data distribution $q(x)$. The latent variable is defined by the conditional distributions $p(z|y)$, which in general can be any distribution under certain restrictions (Section [3.3](#optimizing_mpcc){reference-type="ref" reference="optimizing_mpcc"}). The marginal distribution $p(y)$ is defined as multinomial with weight probabilities $\phi$. Note that under this graphical model the latent space becomes multimodal, defined by a mixture of distributions $p(z) = \sum_y p(y)p(z|y)$.

In the inference procedure the latent variables are obtained via the conditional posterior $q(z|x)$ using the empirical data distribution $q(x)$. The distribution $q(y|z)$ is a posterior approximation of the cluster membership of the data.

We call our model Matching Priors and Conditionals for Clustering (MPCC) and we optimize it by minimizing the reverse Kullback-Leibler divergence of the conditionals and priors between the inference and generative networks as follows: $$\begin{align}
\begin{split}
&D_{KL} \left ( p(x,z,y) || q(x, z, y) \right) \\
~ = ~ &\mathbb{E}_{p(z,y)}[D_{KL}(p(x|z,y) || q(x|z,y))] \\
+ ~&\mathbb{E}_{p(y)}[D_{KL}(p(z|y) || q(z|y))] + D_{KL}(p(y) || q(y)) .
\label{loss}
\end{split}
\end{align}$$ The proof of Eq. [\[loss\]](#loss){reference-type="eqref" reference="loss"} can be found in Appendix A. In the following sections we derive a tractable expression for Eq. [\[loss\]](#loss){reference-type="eqref" reference="loss"} and present the MPCC algorithm.

Because $q(y)$, $q(z|y)$ and $q(x|z,y)$ are impossible to sample from, we derive a closed-form solution for Eq. [\[loss\]](#loss){reference-type="eqref" reference="loss"}. In particular, for any fixed $y$ and $z$ we can decompose $D_{KL}(p(x|z,y)||q(x|z,y))$ as follows: $$\begin{align}
& D_{KL}(p(x|z,y) || q(x|z,y)) \nonumber \\ ~ = ~
& \mathbb{E}_{p(x | z,y)}\left[ \log \frac{p(x|z,y)}{q(x)} \frac{q(z,y)}{q(z,y|x)} \right] \nonumber\\ ~ = ~
& \mathbb{E}_{p(x | z,y)} \bigg[ \log \frac{p(x|z,y)}{q(x)} - \log q(y|z) - \log q(z|x) + \log q(z|y) + \log q(y) \bigg].
\label{decompose}
\end{align}$$ Adding $\log p(z|y) + \log p(y) - \log q(z|y) - \log q(y)$ to both sides of Eq. [\[decompose\]](#decompose){reference-type="eqref" reference="decompose"} and taking the expectation with respect to $p(z,y)$, Eq. [\[loss\]](#loss){reference-type="eqref" reference="loss"} is recovered. After adding these terms and taking the expectation, we can collect the resulting right-hand side of Eq. [\[decompose\]](#decompose){reference-type="eqref" reference="decompose"} as follows: $$\begin{align}
&\mathbb{E}_{p(z,y)}[D_{KL}(p(x|z,y) || q(x|z,y)) + D_{KL}(p(z|y) || q(z|y)) + D_{KL}(p(y) || q(y)) ] \nonumber \\
= ~ & \underbrace{\mathbb{E}_{p(y)p(z|y)} [D_{KL}(p(x|z,y)||q(x))]}_{\textbf {Loss I} } + \underbrace{\mathbb{E}_{p(y)p(z|y)p(x|z,y)} [- \log q(z|x) - \log q(y|z) ]}_{\textbf{Loss II} } \nonumber \\
+~& \underbrace{\mathbb{E}_{p(z|y)p(y)}[\log p(y)+ \log p(z|y)]}_{\textbf{Loss III}}, \label{finalloss}
\end{align}$$

where **Loss I** seeks to match the true distribution $q(x)$, **Loss II** is related to the variational approximation of the latent variables, and **Loss III** is associated with the distribution of the cluster parameters. The right-hand side of Eq. [\[finalloss\]](#finalloss){reference-type="eqref" reference="finalloss"} is a loss function composed of three terms with distributions that we can sample from. In the next section we explain the strategy to optimize each of the terms of the proposed loss function.

MPCC follows the idea that the data space $\mathcal{X}$ is compressed into the latent space $\mathcal{Z}$, and that a separation in this space will likely partition the data into the most representative clusters $p(z|y)$. The separability of these conditional distributions is enforced by $q(y|z)$, which also backpropagates through the parameters of $p(z|y)$. The connection with the data space is through the decoder $p(x|z,y)$ for generation and the encoder $q(z|x)$ for inference.

In what follows we describe the assumptions made about the distributions of the graphical model and how to optimize Eq. [\[finalloss\]](#finalloss){reference-type="ref" reference="finalloss"}. For simplicity we assume the conditional $p(z|y)$ to be a Gaussian distribution, but other distributions could be used, with the only restriction being that their entropy has a closed form or at least a bound (second term in **Loss III**). In our experiments the latent variable $z|y \sim \mathcal{N}(\mu_y, \sigma^2 _y)$ is sampled using the reparameterization trick [@kingma2013vae], *i.e.* $z = \mu_{y} + \sigma_{y} \odot \epsilon$ where $\epsilon \sim \mathcal{N}(0,I)$ and $\odot$ is the Hadamard product. The parameters $\mu_{y}$, $\sigma^2_{y}$ are learnable and conditioned on $y$. Under a Gaussian conditional distribution the latent space becomes a GMM, as we can see mathematically: $p(z) = \sum_y p(y)p(z|y) = \sum_y p(y)\mathcal{N}(\mu_y, \sigma^2 _y)$.
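
A small sketch of sampling from this mixture prior with the reparameterization trick (our illustration; the values of $K$, $J$ and the batch size are assumptions):

```python
import numpy as np

def sample_gmm_latents(phi, mu, log_sigma2, n, rng=None):
    """Sample (y, z) from p(y)p(z|y) with z = mu_y + sigma_y * eps (reparameterized).

    phi        : mixture weights, shape (K,); fixed to 1/K in the paper's experiments.
    mu         : component means, shape (K, J).
    log_sigma2 : component log-variances, shape (K, J) (learnable in practice).
    """
    rng = rng or np.random.default_rng()
    y = rng.choice(len(phi), size=n, p=phi)            # y_i ~ p(y)
    eps = rng.standard_normal((n, mu.shape[1]))        # eps ~ N(0, I)
    z = mu[y] + np.exp(0.5 * log_sigma2[y]) * eps      # z_i ~ p(z|y = y_i)
    return y, z

# Example: K = 10 clusters, J = 64 latent dimensions.
K, J = 10, 64
phi = np.full(K, 1.0 / K)
y, z = sample_gmm_latents(phi, np.zeros((K, J)), np.zeros((K, J)), n=128)
```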

The distribution $p(x|z,y)$ is modeled by a neural network and trained via adversarial learning, *i.e.* it does not require parametric assumptions. The inferential distribution $q(z|x)$ is also modeled by a neural network, and its distribution is assumed Gaussian for simplicity. The categorical distribution $q(y|z)$ could also be modeled by a neural network, but we propose a simpler approach based on the membership of the latent variable $z$ to the Gaussian components. A diagram of the proposed model considering these assumptions is shown in Fig. [1](#fig-MPCC){reference-type="ref" reference="fig-MPCC"}. We now expand on this for each of the losses in Eq. [\[finalloss\]](#finalloss){reference-type="ref" reference="finalloss"}.

**Loss I**: Instead of minimizing the Kullback-Leibler divergence shown in the first term on the right-hand side of Eq. [\[finalloss\]](#finalloss){reference-type="eqref" reference="finalloss"}, we choose to match the conditional decoder $p(x|z,y)$ with the empirical data distribution $q(x)$ using a generative adversarial approach. The GAN loss function can be formulated as [@Dong2019TowardsAD] $$\begin{align}
\begin{split}
& \max_{D}~ \mathbb{E}_{x\sim q(x)} [ f( D (x) )] + \mathbb{E}_{\tilde{x} \sim p(x,z,y)} [ g( D (\tilde{x})) ],\\
& \min_{G}~ \mathbb{E}_{\tilde{x} \sim p(x,z,y)} [ h( D (\tilde{x})) ],
\label{GAN}
\end{split}
\end{align}$$ where $D$ and $G$ are the discriminator and generator networks, respectively, and the tilde denotes sampled variables. For all our experiments we use the hinge loss [@lim2017geometric], [@tran2017hierarchical], *i.e.* $f = -\min(0, o -1)$, $g = \min(0, - o - 1)$ and $h = -o$, with $o$ the output of the discriminator. The parameters and distributions associated with **Loss I** are colored blue in Fig. [1](#fig-MPCC){reference-type="ref" reference="fig-MPCC"}.
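
As a sanity check on these objectives, here is a minimal sketch of the hinge objectives written as minimized losses (our illustration; `d_real` and `d_fake` stand for discriminator outputs on real and generated batches, and the sign convention follows the standard hinge-GAN form):

```python
import numpy as np

def discriminator_hinge_loss(d_real, d_fake):
    # Standard hinge form: minimizing this corresponds to the max_D objective above.
    return np.mean(np.maximum(0.0, 1.0 - d_real)) + np.mean(np.maximum(0.0, 1.0 + d_fake))

def generator_hinge_loss(d_fake):
    # h(o) = -o: the generator pushes the discriminator output up on generated samples.
    return -np.mean(d_fake)
```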

<figure id="fig-MPCC" data-latex-placement="t">
<div class="center">
<embed src="images/MPCC-diagram5-big3.pdf" style="width:95.0%" />
</div>
<figcaption><span id="fig-MPCC" data-label="fig-MPCC"></span> Diagram of the MPCC model. The blue colored elements are associated with <strong>Loss I</strong> (Eq. <a href="#GAN" data-reference-type="ref" data-reference="GAN">[GAN]</a>). The green colored elements are associated with <strong>Loss II</strong> (Equations <a href="#gaussian_gaussian" data-reference-type="ref" data-reference="gaussian_gaussian">[gaussian_gaussian]</a> and <a href="#classifier_loss" data-reference-type="ref" data-reference="classifier_loss">[classifier_loss]</a>). The red colored elements are associated with <strong>Loss III</strong> (Eq. <a href="#prior_eq" data-reference-type="ref" data-reference="prior_eq">[prior_eq]</a>). The dashed line corresponds to the generator (GMM plus decoder).</figcaption>
</figure>

**Loss II**: The first term of this loss is estimated through Monte Carlo sampling as $$\begin{align}
\begin{split}
&\mathbb{E}_{p(y)p(z|y)p(x|z,y)}[- \log q(z|x) ]\\ = ~
&\mathbb{E}_{ y_i \sim p(y), z_i \sim p(z|y = y_i), \tilde{x}_i \sim p(x|z = z_i,y = y_i)} \underbrace{ \left [\sum_{j=1}^J \frac{1}{2}\log (2\pi \tilde{\sigma}_{ij}^2) + \frac{(z_{ij} - \tilde{\mu}_{ij} )^2}{2 \tilde{\sigma} _{ij}^2} \right]}_{L_q(\tilde{\mu}_i, \tilde{\sigma}^2_i, z_i)} ,
\label{gaussian_gaussian}
\end{split}
\end{align}$$ where $J$ is the dimensionality of the latent variable $z$. By minimizing Eq. [\[gaussian_gaussian\]](#gaussian_gaussian){reference-type="eqref" reference="gaussian_gaussian"} we are maximizing the log-likelihood of the encoder $q(z|x)$ with respect to the Gaussian prior $p(z|y)$. This reconstruction error is estimated by matching the samples $z_i \sim p(z|y=y_i)$ with the Gaussian distribution $(\tilde{\mu}_i, \tilde{\sigma}^2_i )\sim q(z|x=\tilde{x}_i)$, where $\tilde{x}_i$ is the decoded representation of $z_i$.
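
The bracketed term $L_q$ is just a diagonal-Gaussian negative log-likelihood; a quick sketch (our parameterization using log-variances is an assumption):

```python
import numpy as np

def encoder_nll(mu_tilde, log_sigma2_tilde, z):
    """L_q from Eq. (gaussian_gaussian): Gaussian NLL of z under q(z | x_tilde).

    mu_tilde, log_sigma2_tilde, z : arrays of shape (n, J).
    """
    return 0.5 * (np.log(2 * np.pi) + log_sigma2_tilde
                  + (z - mu_tilde) ** 2 / np.exp(log_sigma2_tilde)).sum(-1)
```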

The second term of **Loss II** is equivalent to the cross-entropy between the sampled label $y_i \sim p(y)$ and the estimated cluster membership $\tilde{y}_i$:

$$\begin{align}
L_c (y_i, \tilde{y}_i) = -\sum_{k=1}^K y_{ik} \log \tilde{y}_{ik},
\label{classifier_loss}
\end{align}$$ where $K$ is the number of clusters and $$\begin{align}
\tilde{y}_{im} = q(y = m|z = z_i) = \frac{ \mathcal{N}(z_i| \mu_{m}, \sigma_m^2) } {\sum_{k=1}^K \mathcal{N}(z_i| \mu_{k}, \sigma_k^2) },
\label{gaussian_membressy}
\end{align}$$ is the membership of $z_i$ to the $m$-th cluster. The parameters $\mu_m$ and $\sigma^2_m$ are learnable, and $m \in \{1, \dots, K\}$ is the index corresponding to each cluster. The parameters and distributions associated with **Loss II** are colored green in Fig. [1](#fig-MPCC){reference-type="ref" reference="fig-MPCC"}. In practice, Eq. [\[gaussian_membressy\]](#gaussian_membressy){reference-type="ref" reference="gaussian_membressy"} is evaluated using the log-sum-exp trick (a stable sketch follows).
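
A numerically stable sketch of the membership in Eq. (gaussian_membressy) via the log-sum-exp trick (our illustration, with diagonal Gaussians parameterized by log-variances):

```python
import numpy as np
from scipy.special import logsumexp

def gaussian_membership(z, mu, log_sigma2):
    """q(y = m | z) for diagonal Gaussians, computed in log space for stability.

    z : latents, shape (n, J); mu, log_sigma2 : component parameters, shape (K, J).
    """
    # log N(z | mu_k, sigma_k^2), summed over the J dimensions, for every component k.
    d2 = (z[:, None, :] - mu[None, :, :]) ** 2 / np.exp(log_sigma2)[None]
    log_pdf = -0.5 * (d2 + log_sigma2[None] + np.log(2 * np.pi)).sum(-1)   # (n, K)
    return np.exp(log_pdf - logsumexp(log_pdf, axis=1, keepdims=True))
```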

**Loss III**: This loss is associated with the regularization of the Gaussian mixture model parameters $\phi$, $\mu$ and $\sigma^2$, and has a closed form: $$\begin{align}
\begin{split}
\label{prior_eq}
& \mathbb{E}_{p(y)p(z|y)}[ \log p(y) + \log p(z|y) ] \\ = & \underbrace{\sum_{k=1}^K \phi_{k} \left[ \log \phi_{k} - \sum_{j=1}^J \left(\frac{1}{2} + \frac{1}{2}\log (2\pi \sigma_{kj}^2) \right)\right] }_{L_p(\phi, \sigma^2)},
\end{split}
\end{align}$$

where the first term corresponds to the entropy maximization of the mixture weights, *i.e.* in general the Gaussians will not collapse to fewer than $K$ modes of the data distribution, which would be a solution with lower entropy. In our experiments we fix $\phi_k = 1/K$, *i.e.* $\phi$ is not learnable. The second term is a regularization of the variance (entropy) of each Gaussian, which avoids the collapse of $p(z|y)$. The parameters associated with **Loss III** are shown in red in Fig. [1](#fig-MPCC){reference-type="ref" reference="fig-MPCC"}.
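
The closed form $L_p(\phi, \sigma^2)$ is cheap to evaluate; a direct transcription (our sketch, same shapes as in the sampling example above):

```python
import numpy as np

def prior_loss(phi, log_sigma2):
    """L_p(phi, sigma^2) from Eq. (prior_eq): weight-entropy and Gaussian-entropy terms.

    phi : mixture weights, shape (K,); log_sigma2 : log-variances, shape (K, J).
    """
    # sum_j (1/2 + 1/2 log(2*pi*sigma_kj^2)) is the entropy of each diagonal Gaussian.
    gauss_entropy = 0.5 * (1.0 + np.log(2 * np.pi) + log_sigma2).sum(axis=1)   # (K,)
    return float((phi * (np.log(phi) - gauss_entropy)).sum())
```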

The scale of **Loss I** differs from that of the terms associated with the latent variables. To balance all terms we multiply Eq. [\[gaussian_gaussian\]](#gaussian_gaussian){reference-type="eqref" reference="gaussian_gaussian"} by one over the dimensionality of $x$[^1] and the second term of Eq. [\[prior_eq\]](#prior_eq){reference-type="eqref" reference="prior_eq"} by one over the dimensionality of the latent variables. During training, **Loss III** is weighted by a constant factor $\lambda_p$; we explain how this constant is set in Section [5.3](#empirical_setup){reference-type="ref" reference="empirical_setup"}. The full procedure to train the MPCC model is summarized in Algorithm [\[mpcc_alg\]](#mpcc_alg){reference-type="ref" reference="mpcc_alg"}. Note that MPCC is scalable in the number of clusters, since Eq. [\[gaussian_gaussian\]](#gaussian_gaussian){reference-type="ref" reference="gaussian_gaussian"} is a Monte Carlo approximation in $y$ and the cost of Eq. [\[prior_eq\]](#prior_eq){reference-type="ref" reference="prior_eq"} is low because $J$ is small in comparison to the data dimensionality.

:::: algorithm
::: algorithmic
$K$, $J \gets$ set number of clusters and latent dimensionality
$\eta$, $\eta_p \gets$ set learning rates
$\theta_g, \theta_d, \theta_e \gets$ initialize network parameters
$\phi, \mu, \sigma^2 \gets$ initialize GMM parameters; $\theta_c \gets [\phi, \mu, \sigma^2]$
*Discriminator update:* $x_1, \dots, x_n \sim q(x)$; $y_1, \dots, y_n \sim p(y)$; $z_i \sim p(z|y = y_i)$; $\tilde{x}_i \sim p(x|z = z_i, y = y_i)$, $i = 1, \dots, n$
$\theta_d \gets \theta_d + \eta \nabla_{\theta_d}\left[\frac{1}{n}\sum_{j=1}^n f( D(x_j) ) + \frac{1}{n}\sum_{i=1}^n g(D(\tilde{x}_i) ) \right]$
*Generator update:* $y_1, \dots, y_n \sim p(y)$; $z_i \sim p(z|y = y_i)$; $\tilde{x}_i \sim p(x|z = z_i, y = y_i)$, $i = 1, \dots, n$
$(\theta_g, \theta_c ) \gets (\theta_g, \theta_c ) - \eta \nabla_{(\theta_g, \theta_c )} \frac{1}{n}\sum_{i=1}^n h( D(\tilde{x}_i) )$
*Encoder update:* $y_1, \dots, y_n \sim p(y)$; $z_i \sim p(z|y = y_i)$; $\tilde{x}_i \sim p(x|z = z_i, y = y_i)$; $(\tilde{\mu}_i, \tilde{\sigma}^2_i) \sim q(z| x = \tilde{x}_i)$, $i = 1, \dots, n$
$\theta_e \gets \theta_e - \eta \nabla_{\theta_e}\frac{1}{n}\sum_{i=1}^n L_q(\tilde{\mu}_i,\tilde{\sigma}^2_i, z_i)$
*Cluster-parameter update:* $\tilde{y}_i \sim q(y|z = z_i)$, $i = 1, \dots, n$
$\theta_c \gets \theta_c - \eta_p \nabla_{\theta_c} \left[ \frac{1}{n}\sum_{i=1}^n L_c (y_i, \tilde{y}_i) + \lambda_p \cdot L_p(\phi, \sigma^2) \right]$
:::
::::

In Section [3](#method_section){reference-type="ref" reference="method_section"} we showed that the latent space of MPCC reduces to a GMM under a Gaussian conditional distribution. Because all the experiments are performed under this assumption, in this section we summarize the literature on generative and autoencoding models that consider GMMs.

The combination of generative models and GMMs is not new. Several methods have applied GMMs in autoencoding [@AGGMM], [@AnomalyGMM] or GAN [@DeliGAN], [@OGANSGMM] applications without clustering purposes. Other approaches have performed clustering but are not directly comparable, since they use mixtures of various generators and discriminators [@MGANC] or fixed priors with ad-hoc parameters [@GAN_ICLR].

Among the related works on generative models for clustering, the closest approaches to our proposal are ClusterGAN [@Mukherjee2019ClusterGANL] and Variational Deep Embedding [@jiang2017vade]. ClusterGAN differs from our model in that it sets the dimensions of the latent space as either continuous or categorical, while MPCC uses a continuous latent space conditioned on the categorical variable $y$. On the other hand, Variational Deep Embedding (VADE) differs greatly in the training procedure, despite its similar theoretical basis. VADE, as a variational autoencoder model, matches the joint distributions in the forward KL sense, $D_{KL}(q(x,z,y)||p(x,z,y))$, by matching the posteriors and the marginals in data space, as demonstrated in Appendix B. MPCC optimizes the reverse KL, *i.e.* matching the priors in latent space and the conditionals in data space. Optimizing different KLs yields notably different decompositions and thus training procedures. For the forward KL [@jiang2017vade], in addition to the challenges in scaling to larger dimensions (Section [2](#background){reference-type="ref" reference="background"}), it is more difficult to generalize the latent space to an arbitrary multi-modal distribution; we briefly discuss the reasons for this in Appendix B.
2010.13685/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:0ecb38aa43df392d26cef6fca3075c629d37923dd0520208aea7f70580728454
|
| 3 |
+
size 12307990
|
2011.02426/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
| 1 |
+
<mxfile host="app.diagrams.net" modified="2020-09-18T16:05:58.500Z" agent="5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36" etag="SbOD_6QlxGQRKQq19Mm8" version="13.7.3" type="google"><diagram id="sQlMRDt6SKoZMHT5O8Xq" name="Page-1">7VvbcuI4EP0aqnYfkpIl28BjICEzVTOV2aFqNzMvW8ZWjDbGYmURYL5+ZVu+ySIYYm7Z8JBYLbmtPt3qPrKgg4az1T1z5tOv1MNBBwJv1UG3HQgNE9riXyxZp5KuZaQCnxFPDioEY/ILSyGQ0gXxcFQZyCkNOJlXhS4NQ+zyisxhjC6rw55oUH3q3PFxTTB2naAu/Yt4fJpKexYo5J8w8afZkw0ge2ZONlgKoqnj0WVJhO46aMgo5enVbDXEQQxehkt632hDbz4xhkPe5IZo8vcXMjEeun/cfHoeutAef767MlCq5sUJFtLiDrQDoXDAnYkAAYIJZR5mSZ/97yKe7cAoLsWAZY5L3h+DYFUGRXwtIc2E8rYb0ZmNB+nYqQRU05VO5sqlQeDMI5yOyVulB9q+/J/aMqEiAFUhq0m8OGoC4oeVmboCYcy2mDNx3Gef0UXoxbOjLJ0a8ye/QSuZPxyKv+Vrw0K/b57ypDY74dp4gnVh3Y4jWSYs0FoGgXnhlinWnM6yiwy0/9ekLyU8zma1nvekY6GuXMTytCRmclixBXK8iuVTPguEwBCXEWf0GQ9Ti25DGoqRgycSBIqIvmD2FCTkIO7NlZeLu6z3YqR4Tkkki/09pjPMmZg1WOVUJL1FMi9bNotyLeiZlE1LFAZm3MaR1MnPVRfsQlxIgrEL2TA1ZEMBEYfeTUzbRMsNnCgibhXSjcBgr0Lk6rCU7LY0ZmcyhgOHk5cq/dNBIZ/wjRIxkxx1BKuoG0hBM6IL5mJ5V5muqYr6WxRxh/mY1xQlnsnNfoOz7A9meCpmaBkXTp+AvqTD3oXbVbEGwPL1RzV6tRqZ9n7VKN9Pt1+NuvXiI2rIWDYp41Pq09AJ7grpIIkKHKuN004x5gulcwnzP5jztXyR4Sw4rToBrwh/jG+/tmTrR6nndiU1J421bGyEP60lDZJ4WisaDNxaRRuXx7f5pvf+mYIF+yIGymsif+m1K1eoq0KqqkOzhX4DhwUBSSrzYDklHI/nThK7S+bMq55zonn6Ku+JrOK1pmSuDkQg+SjpS8hH4oNQOxlLJWA5syuFR1cTHt1DJazs+SfGGIDRqPdqWtofY8s6NcbGWWA8Gtni0w7GtrIPNM1TY2xtx9hdsJekzCaVcUumz2tq44oq0GTrx3LjR7lR3JS0WqjD/YZ1OFvjZ1Kjcj3Z+rT3rFAIKIoa1ifhd2ddGjaPB0SbJ4y6SrD3K4ci4iLV2Grxg7qtchsBbZQCuiCM20I6HZgFtXGwoM5idXtQnxnxssC1CEBoQ9vqmV3QU9Jj/7rf73VRF5mGIFZZttqZk736FBsJN4Fab8urwVKWrw2OsRq6ba+GLLKNXZK1rihsW0NvWAvZ2fSFJXhbjX5bicTG4a6yDPUtclshrRQSs3eMkG6wHd2PsVTeAoBmCb66DIps336CbxrUGb05k6A2lV2Fufe+2qwqQir9aSuo1a2mfYygbrBlfzNr2YuGHzCkmxPx8+IspnKshHp7hrSpEGTU8FjpzXnaOkJIZ7a859eGyNYXwJ0jAapZUlV04FeGSPeq5eOAsd3zKsPo6E7i0MV/8wzoT04hBBduWeWI0ex1Po4Y9/3Ciwnl901Od8aINPXnvZ0xoqa7BNS0kB7njBHVv/o8IgJmIRoGiyhZw+BuNsGeR0I/ahD9WS6QGQAN4tglrhPcyI4Z8bzUxTgiv5K1laIv2ZTQaw061m2sS3g1Sh38KivZYXmA6hGh7qzF0BEX2MLq+D5kARj2v3KwoLejKTOW3+lVnQB8DgkniQv+JB6m78wBSn7SHSge1wGw5oB3GfnqnsfoaoCHBwL+wR0+Xrk/gy4cu4/frn72H+6/aSI/+flMnHpoKIrowuWEhrrifnpX1HDXeGezK5RtIdQloaO6or4GbnyfYd9JPXDheNc2eRq8e8eEu150ZcoRmebi0a59+RfU0dYdte+BtmgWPx1Lt+TFD/DQ3X8=</diagram></mxfile>
|
2011.02426/main_diagram/main_diagram.pdf
ADDED
|
Binary file (13.1 kB). View file
|
|
|
2011.02426/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b13933fdc953517976d12fdefce29d0da9d9202ff0ee449e626aa0ca799f457a
|
| 3 |
+
size 364995
|
2011.02426/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,164 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Video Retrieval is one of the most prominent and challenging problems in the digital world today. It is the task of ranking videos in a database based on their relevance to user input queries. While most practical applications use video meta-data to convert the problem into a straightforward page-ranking problem, there are vast databases of videos with no labeled meta-data.
|
| 4 |
+
|
| 5 |
+
User queries can take multiple forms, the most common being text. One of the most widely used publicly available applications of text-to-video retrieval is Google's video search; its performance, however, is driven by the metadata of the videos in its search space. Research on video retrieval through text queries has been published in recent years, and performance on this task has steadily improved.
|
| 6 |
+
|
| 7 |
+
In this paper, we focus our efforts on video retrieval through image queries, where even less research has been done. Most methods proposed so far also have drawbacks, and there are many challenges that any proposed method needs to overcome. The biggest of those challenges is finding an efficient method of storing the videos in the search database.
|
| 8 |
+
|
| 9 |
+
The next big challenge, which has not yet been solved, is tapping into the temporal information contained in video data. While image classification and object detection in images have been shown to work very well in recent years, simply searching for objects in the frames of a video fails to make use of the temporal information contained in videos.
|
| 10 |
+
|
| 11 |
+
In this paper, we propose a novel approach to solve this problem. We first pre-process a video database to extract key features from video frames. These features are clustered, such that similar frames end up in the same cluster. We generate the embeddings for these clusters by aggregating the embeddings of their constituent frames. To account for temporal information in videos, we model these clusters as nodes in a graph. Then, the cluster embeddings are augmented by including neighboring cluster information. These augmented cluster embeddings are stored and used in our video ranking process.
|
| 12 |
+
|
| 13 |
+
The proposed method was tested on the MSR-VTT dataset.
|
| 14 |
+
|
| 15 |
+
# Method
|
| 16 |
+
|
| 17 |
+
The database of videos is first pre-processed. To create a memory-efficient and useful means of representing the videos, compact video embeddings are created.
|
| 18 |
+
|
| 19 |
+
In our proposed pipeline, the only input to our model is a database of videos. Although videos are rich in information when observed by a human being, computers require alternate methods to process videos. The first step is to analyze videos at the level of their component frames. Once we have extracted frames from the video, we can use image embedding generation techniques to represent them.
|
| 20 |
+
|
| 21 |
+
The input videos are sampled at 2 frames per second. Each frame is passed through an image embedding generation model. In our method, we use pre-trained Residual Networks, trained on the ImageNet dataset, to generate frame embeddings. These networks produce embeddings of length 2048. Their residual connections allow features at lower layers to be preserved in deeper layers. We experimented with two variants of the residual network, ResNet50 and ResNet152, which are 50 and 152 layers deep respectively.
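
As a sketch of this step, the classifier head of a pre-trained ResNet can be replaced with an identity to expose the 2048-d pooled features; the weights enum follows recent torchvision, and the frame tensors here are random stand-ins for decoded, resized, and normalized frames.

```python
# Sketch: 2048-d frame embeddings from an ImageNet-pretrained ResNet-50.
import torch
from torchvision import models

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()        # drop the classifier head
resnet.eval()

frames = torch.rand(8, 3, 224, 224)    # e.g. frames sampled at 2 fps over 4 s
with torch.no_grad():
    frame_emb = resnet(frames)         # shape: (8, 2048)
```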
|
| 22 |
+
|
| 23 |
+
{#fig}
|
| 24 |
+
|
| 25 |
+
Videos are generally represented as a concatenation of their component frame embeddings. In our approach, we instead represent the entire dataset of videos together, rather than generating individual video representations. This allows us to improve the retrieval speed of the model.
|
| 26 |
+
|
| 27 |
+
Once we have generated embeddings for all sampled frames in the dataset, we cluster them. We use the K-Nearest Neighbors algorithm as a light-weight clustering algorithm suitable for large-scale clustering applications. Frames across videos in the dataset are assigned to 175 clusters. These clusters are represented by the mean of their component frame embeddings. In this clustering process, we lose the important temporal information that is inherently present in videos. To preserve this information, we use a graph-based aggregation technique.
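
A sketch of the clustering step follows. The text names K-Nearest Neighbors; a K-Means stand-in is shown here, since frames are assigned to 175 clusters that are represented by the mean of their member frame embeddings, and the embeddings are random stand-ins.

```python
# Sketch: cluster all sampled frame embeddings into 175 clusters.
import numpy as np
from sklearn.cluster import KMeans

frame_emb = np.random.rand(5000, 2048).astype(np.float32)  # all sampled frames
km = KMeans(n_clusters=175, n_init=10, random_state=0).fit(frame_emb)
cluster_emb = km.cluster_centers_    # (175, 2048): per-cluster mean embedding
frame_cluster = km.labels_           # cluster id of each frame
```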
|
| 28 |
+
|
| 29 |
+
An undirected graph is created over these clusters, where each cluster is treated as a node. If frame Y, which belongs to cluster 1, follows frame X, which belongs to cluster 2, in a video, then there is an edge connecting clusters 1 and 2. The edge weights in the graph are directly proportional to the number of such frame-to-frame (and hence cluster-to-cluster) transitions. To add temporal information to the embeddings, each cluster embedding is aggregated with the embeddings of its first-order neighbor clusters. This intermediate representation retains the temporal information in the videos and is used in retrieval.
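
A minimal sketch of this graph-based aggregation is shown below; the 0.5 mixing weight is an illustrative choice, not a value from the paper.

```python
# Sketch: augment cluster embeddings with weighted first-order neighbors.
import numpy as np

def augment_clusters(cluster_emb, frame_cluster_seqs, alpha=0.5):
    """cluster_emb: (C, d) mean embeddings; frame_cluster_seqs: one sequence
    of cluster ids per video, ordered by frame time."""
    C = cluster_emb.shape[0]
    W = np.zeros((C, C))
    for seq in frame_cluster_seqs:
        for a, b in zip(seq, seq[1:]):        # consecutive frames define an edge
            if a != b:
                W[a, b] += 1.0                # weight ~ number of transitions
                W[b, a] += 1.0                # undirected graph
    row_sums = W.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0             # isolated clusters stay unchanged
    neighbor_avg = (W / row_sums) @ cluster_emb
    return cluster_emb + alpha * neighbor_avg  # augmented cluster embeddings
```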
|
| 30 |
+
|
| 31 |
+
{#fig}
|
| 32 |
+
|
| 33 |
+
{#fig}
|
| 34 |
+
|
| 35 |
+
Any query image is first processed by the image embedding generation model. We use the augmented cluster embeddings to reduce the search space for every query. The query image embedding is first compared with the cluster embeddings, using cosine similarity as the similarity metric. The clusters are ranked by similarity, and the top 'c' clusters are chosen for further comparison. All frame embeddings in these top clusters are then compared with the query image and ranked by similarity. The 'k' videos corresponding to the top-matching frames are retrieved for each query image, and Precision@k is calculated as:
|
| 36 |
+
$$P@k = \frac{|R \cap k|}{k}$$
|
| 37 |
+
where $R$ is the set of videos in the same category as the query image (the relevant videos), $k$ is the total number of videos retrieved, and $|R \cap k|$ is the number of retrieved videos that are relevant.
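
A sketch of this two-stage lookup, ranking clusters against the query first and then only the frames inside the top-c clusters; all variable names are illustrative, and `frame_video` is assumed to map each frame to its source video id.

```python
# Sketch: cluster-then-frame retrieval with cosine similarity.
import numpy as np

def cosine_sim(query, mat):
    return (mat @ query) / (np.linalg.norm(mat, axis=1) * np.linalg.norm(query) + 1e-8)

def retrieve(query, aug_cluster_emb, frame_emb, frame_cluster, frame_video, c=5, k=10):
    top_c = np.argsort(-cosine_sim(query, aug_cluster_emb))[:c]     # stage 1: clusters
    cand = np.flatnonzero(np.isin(frame_cluster, top_c))            # frames in top clusters
    ranked = cand[np.argsort(-cosine_sim(query, frame_emb[cand]))]  # stage 2: frames
    videos = []
    for f in ranked:                     # keep each video once, best frame first
        v = frame_video[f]
        if v not in videos:
            videos.append(v)
        if len(videos) == k:
            break
    return videos                        # P@k = |relevant among these| / k
```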
|
| 38 |
+
|
| 39 |
+
After this, mAP@k is calculated for each category: it is the mean of the P@k values over all query images belonging to that category. It is given as:
|
| 40 |
+
|
| 41 |
+
$$mAP@k = \frac{1}{N}\sum_{n=1}^{N} P@k_n$$
|
| 42 |
+
where $N$ is the number of query images in the category and $P@k_n$ is the precision for the $n$-th query image.
|
| 44 |
+
|
| 45 |
+
To evaluate this technique, experiments were performed on the MSR-VTT dataset. The dataset contains 2990 videos, each around 20-60 seconds long, belonging to 20 different categories. This dataset is used extensively for video retrieval through text.
|
| 46 |
+
|
| 47 |
+
However, rather than using the sentences associated with the videos, this technique uses only the video information, making it possible to retrieve previously unseen videos. The categories of videos in the dataset are:
|
| 48 |
+
|
| 49 |
+
1\. Music 2. People 3. Gaming 4. Sports, Actions 5. News, Events, Politics 6. Education 7. TV Shows 8. Movie, Comedy 9. Animation 10. Vehicles, Autos 11. How-to 12. Travel 13. Science, Technology 14. Animals, Pets 15. Kids, Family 16. Documentary 17. Food, Drink 18. Cooking 19. Beauty, Fashion 20. Advertisement
|
| 50 |
+
|
| 51 |
+
For our testing, we merged some similar categories, like Food and Cooking, and removed others, like Movies, Documentary, and Advertisement, due to the arbitrary nature of those classes. For example, it is difficult to tell the difference between a movie clip and a clip from a TV show or documentary without any context.
|
| 52 |
+
|
| 53 |
+
Another reason for excluding classes like Science and Technology or Education is that there is often no clear, visually discernible factor that puts a video in the category. For example, a video of a teacher explaining a concept might be classified as Education, but there is no visual way to tell that the person in the video is a teacher or that something is being taught.
|
| 54 |
+
|
| 55 |
+
{#fig}
|
| 56 |
+
|
| 57 |
+
In the end, after filtering, we were left with 11 relevant categories:
|
| 58 |
+
|
| 59 |
+
1\. Music 2. Gaming 3. Sports, Actions 4. News, Events, Politics 5. Vehicles, Autos 6. How-to 7. Travel 8. Animals, Pets 9. Kids, Family 10. Food, Drink, Cooking 11. Beauty, Fashion
|
| 60 |
+
|
| 61 |
+
{#fig}
|
| 62 |
+
|
| 63 |
+
{#fig}
|
| 64 |
+
|
| 65 |
+
:::: center
|
| 66 |
+
::: {#tab1}
|
| 67 |
+
**Category** **mAP@5** **mAP@10** **mAP@20**
|
| 68 |
+
------------------------ ----------- ------------ ------------
|
| 69 |
+
Music 60% 42.5% 37.5%
|
| 70 |
+
Gaming 50% 40% 37.5%
|
| 71 |
+
Sports, Actions 100% 97.5% 90%
|
| 72 |
+
News, Events, Politics 45% 40% 36.25%
|
| 73 |
+
Vehicles, Auto 85% 65% 56.25%
|
| 74 |
+
How-to 30% 40% 28.75%
|
| 75 |
+
Travel 55% 42.5% 32.5%
|
| 76 |
+
Animals, Pets 85% 72.5% 61.25%
|
| 77 |
+
Kids, Family 40% 40% 32.5%
|
| 78 |
+
Food, Drink, Cooking 40% 40% 50%
|
| 79 |
+
Beauty, Fashion 60% 52.5% 38.75%
|
| 80 |
+
|
| 81 |
+
: Model Using ResNet152
|
| 82 |
+
:::
|
| 83 |
+
|
| 84 |
+
[]{#tab1 label="tab1"}
|
| 85 |
+
::::
|
| 86 |
+
|
| 87 |
+
:::: center
|
| 88 |
+
::: {#tab1}
|
| 89 |
+
**Category** **mAP@5** **mAP@10** **mAP@20**
|
| 90 |
+
------------------------ ----------- ------------ ------------
|
| 91 |
+
Music 30% 35% 31.25%
|
| 92 |
+
Gaming 55% 47.5% 36.25%
|
| 93 |
+
Sports, Actions 100% 95% 88.75%
|
| 94 |
+
News, Events, Politics 50% 42.5% 36.25%
|
| 95 |
+
Vehicles, Auto 60% 57.5% 51.25%
|
| 96 |
+
How-to 40% 40% 31.25%
|
| 97 |
+
Travel 50% 47.5% 35%
|
| 98 |
+
Animals, Pets 90% 70% 50%
|
| 99 |
+
Kids, Family 45% 40% 33.75%
|
| 100 |
+
Food, Drink, Cooking 35% 37.5% 45%
|
| 101 |
+
Beauty, Fashion 45% 45% 33.75%
|
| 102 |
+
|
| 103 |
+
: Model Using ResNet50
|
| 104 |
+
:::
|
| 105 |
+
|
| 106 |
+
[]{#tab1 label="tab1"}
|
| 107 |
+
::::
|
| 108 |
+
|
| 109 |
+
:::: center
|
| 110 |
+
::: {#tab1}
|
| 111 |
+
**Category** **mAP@5** **mAP@10** **mAP@20**
|
| 112 |
+
------------------------ ----------- ------------ ------------
|
| 113 |
+
Music 45% 37.5% 33.75%
|
| 114 |
+
Gaming 40% 35% 38.75%
|
| 115 |
+
Sports, Actions 100% 97.5% 91.25%
|
| 116 |
+
News, Events, Politics 45% 47.5% 37.5%
|
| 117 |
+
Vehicles, Auto 80% 57.5% 50.25%
|
| 118 |
+
How-to 35% 40% 31.25%
|
| 119 |
+
Travel 55% 37.5% 28.75%
|
| 120 |
+
Animals, Pets 90% 80% 61.25%
|
| 121 |
+
Kids, Family 35% 32.5% 30%
|
| 122 |
+
Food, Drink, Cooking 40% 42.5% 45%
|
| 123 |
+
Beauty, Fashion 50% 42.5% 37.5%
|
| 124 |
+
|
| 125 |
+
: Model Using ResNet152 without creating graph
|
| 126 |
+
:::
|
| 127 |
+
|
| 128 |
+
[]{#tab1 label="tab1"}
|
| 129 |
+
::::
|
| 130 |
+
|
| 131 |
+
:::: center
|
| 132 |
+
::: {#tab1}
|
| 133 |
+
**Category** **mAP@5** **mAP@10** **mAP@20**
|
| 134 |
+
------------------------ ----------- ------------ ------------
|
| 135 |
+
Music 35% 32.5% 27.5%
|
| 136 |
+
Gaming 65% 55% 46.25%
|
| 137 |
+
Sports, Actions 100% 92.5% 83.75%
|
| 138 |
+
News, Events, Politics 50% 45% 32.5%
|
| 139 |
+
Vehicles, Auto 68% 58% 60%
|
| 140 |
+
How-to 40% 41.5% 31.25%
|
| 141 |
+
Travel 40% 37.5% 30%
|
| 142 |
+
Animals, Pets 90% 72.5% 62.5%
|
| 143 |
+
Kids, Family 35% 30% 28.75%
|
| 144 |
+
Food, Drink, Cooking 35% 37.5% 43.5%
|
| 145 |
+
Beauty, Fashion 75% 52.5% 43.75%
|
| 146 |
+
|
| 147 |
+
: Model Using ResNet50 without creating graph
|
| 148 |
+
:::
|
| 149 |
+
|
| 150 |
+
[]{#tab1 label="tab1"}
|
| 151 |
+
::::
|
| 152 |
+
|
| 153 |
+
:::: center
|
| 154 |
+
::: {#tab1}
|
| 155 |
+
**Model** **Effective Search Speed (Video Frames / Second)**
|
| 156 |
+
----------- ----------------------------------------------------
|
| 157 |
+
ResNet152 15000
|
| 158 |
+
ResNet50 18000
|
| 159 |
+
|
| 160 |
+
: Speed Comparison of Models
|
| 161 |
+
:::
|
| 162 |
+
|
| 163 |
+
[]{#tab1 label="tab1"}
|
| 164 |
+
::::
|
2102.04152/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
| 1 |
+
<mxfile host="drawio-internal.googleplex.com" modified="2021-01-28T21:15:39.301Z" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36" version="12.7.4" etag="9Ld05uHWzWS1vbA088NV" type="google"><diagram id="cv9pQctzb70gFnKMWqBP">7ZpRr5owFMc/jY83sYCAj1d3tyXLnky250IrkFupq3XqPv1OoSgUuDGbQlB4UPj3cGh7fp6WEyf2cnP8IvA2/s4JZRNrSo4T+9PEsmaeA59KOOWC5WshEgnJJXQRVskfqsWpVvcJobuKoeScyWRbFUOepjSUFQ0LwQ9VszVn1aducURrwirErK7+TIiMc9Vx/VLDV5pEsX60bU91zze4sNbCLsaEH0qS/Taxl4JzmZ9tjkvK1OQVE5Pf97ml9dwBQVN5zQ1WfsNvzPZ6cLpf8lSMVvB9Sqiyn07sxSFOJF1tcahaDxBe0GK5YXCF4PQ8HmW7k4K/0yVnXGSuYBrUAS3rhLFCT3kKvhaRwCSBXpfM19mhzHkqNQZIX5bM5tkBOmZJlIIWghsKjQs9OCokPbZOEDpPO/BK+YZKcQITfYNleTpUGlaEtI/DJfSoCGdcijpCc42cxi06O78EBE50TJrjYzfEx2VSTe4Wp5VAub/2ippsdl522Wy9goG3PV7a4CzS35mToBB+FAp0KDCtdvvgn5+ESo4zP1XXSs0GUsgGfBA4aRBmQuWGPg3WBlSgE0z9ddiEhQIigZ/zq27YJISoBzbCXcW/RKIHlwwHlC24IFQYOOctOHyPMgdG603A9A0wCwhLYDZxWWj/g6XTimWPRI3oXIvOeT3qAZ3ZiM6Q0allHac7dNwRnSGjU8s6HaLjDX0fZbXxd7N9lG8Ftus2YDmjPnEeGEsH9bYY+j1mtFaiRnSuRsftD535iM6g0TGzToeLYVFHG9kZKDtm2umSHdQjO99ebrgND/yZMzOLkarq6Ic0fOBteO0Nzpt3R09TvXlQ+/APELzZTvxZyay9IHZJZnulfcxrg6CnltfmHdLTZ0F8pOceuadLetpr4kNZFe++Js5dz8YNe32KYLfvPTCX5ntil2tie8F95PLZuTTfQbvkcvDV/JHLzvJll+t4ezl/5PLZuTTz5f24hMvLHwizttLfMO23vw==</diagram></mxfile>
|
2102.04152/main_diagram/main_diagram.pdf
ADDED
|
Binary file (9.62 kB). View file
|
|
|
2102.04152/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:3c9372c1da156781400e459ce0c3c04509b53452a047e2169fa61e7517f550a0
|
| 3 |
+
size 3447171
|
2102.04152/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,37 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Large, high-dimensional datasets containing billions of samples are commonplace. Dimensionality reduction to extract the most informative features is an important step in the data processing pipeline which enables faster learning of classifiers and regressors (Dhillon *et al.*, 2013), clustering (Kannan and Vempala, 2009), and interpretable visualizations. Many dimensionality reduction and clustering techniques rely on eigendecomposition at their core including principal component analysis (Jolliffe, 2002), locally linear embedding (Roweis and Saul, 2000), multidimensional scaling (Mead, 1992), Isomap (Tenenbaum *et al.*, 2000), and graph spectral clustering (Von Luxburg, 2007).
|
| 4 |
+
|
| 5 |
+
Numerical solutions to the eigenvalue problem have been approached from a variety of angles for centuries: Jacobi's method, Rayleigh quotient, power (von Mises) iteration (Golub and Van der Vorst, 2000). For large datasets that do not fit in memory, approaches that access only subsets—or *minibatches*—of the data at a time have been proposed.
|
| 6 |
+
|
| 7 |
+
Recently, EigenGame (Gemp *et al.*, 2021) was introduced with the novel perspective of viewing the set of eigenvectors as the Nash strategy of a suitably defined game. While this work demonstrated an algorithm that was empirically competitive given access to only subsets of the data, its performance degraded with smaller minibatch sizes, which are required to fit high dimensional data onto devices.
|
| 8 |
+
|
| 9 |
+
One path towards circumventing EigenGame's need for large minibatch sizes is parallelization. In a data parallel approach, updates are computed in parallel on partitions of the data and then combined such that the aggregate update is equivalent to a single large-batch update. The technical obstacle preventing such an approach for EigenGame lies in the bias of its updates, i.e., the divide-and-conquer EigenGame update is not equivalent to the large-batch update. Biased updates are not just a theoretical nuisance; they can slow and even prevent convergence to the solution (made obvious in Figure 4).
|
| 10 |
+
|
| 11 |
+
In this work we introduce a formulation of EigenGame that admits unbiased updates, which we term µ-EigenGame. We will refer to the original formulation of EigenGame as α-EigenGame.<sup>1</sup>
|
| 12 |
+
|
| 13 |
+
µ-EigenGame and α-EigenGame are contrasted in Figure 1. Unbiased updates allow us to increase the effective batch size using data parallelism. Lower variance updates mean that µ-EigenGame should converge faster and to more accurate solutions than α-EigenGame regardless of batch size. In Figure 1a (top), the density of the shaded region shows the distribution of steps taken by the
|
| 14 |
+
|
| 16 |
+
|
| 17 |
+
<sup>1</sup>µ signifies unbiased or *un*loaded and α denotes original.
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
|
| 21 |
+
Figure 1: (a) Comparing $\alpha$ -EigenGame (Gemp et al., 2021) and $\mu$ -EigenGame (this work) over 1000 trials with a batch size of 1. (top) The expected trajectory<sup>2</sup> of each algorithm from initialization ( $\square$ ) to the true value of the third eigenvector ( $\star$ ). (bottom) The distribution of distances between stochastic update trajectories and the expected trajectory of each algorithm as a function of iteration count (**bolder** lines are later iterations and modes further left are more desirable).
|
| 22 |
+
|
| 23 |
+
(b) Empirical support for Lemma 2. In the top row, player 3's utility is given for parents mis-specified by an angular distance along the sphere of $\angle(\hat{v}_{j< i}, v_{j< i}) \in [-20^{\circ}, -10^{\circ}, 10^{\circ}, 20^{\circ}]$ moving from light to dark. Player 3's mis-specification, $\angle(\hat{v}_i, v_i)$ , is given by the x-axis (optimum is at 0 radians). $\alpha$ -EigenGame (i) exhibits slightly lower sensitivity than $\mu$ -EigenGame (ii) to mis-specified parents (see equation (8)). However, when the utilities are estimated using samples $X_t \sim p(X)$ (faint lines), $\mu$ -EigenGame remains accurate (iv), while $\alpha$ -EigenGame (iii) returns a utility (dotted line) with an optimum that is shifted to the left and down. The downward shift occurs because of the random variable in the denominator of the penalty terms (see equation (3)).<sup>3</sup>
|
| 24 |
+
|
| 25 |
+
stochastic variant of each algorithm after 100 burn-in steps. Although the expected path of $\alpha$ -EG is slightly more direct, its stochastic variant has much larger variance. Figure 1a (bottom) shows that with increasing iterations, the $\mu$ -EG trajectory approaches its expected value whereas $\alpha$ -EG exhibits larger bias. Figure 1b further supports $\mu$ -EigenGame's reduced bias with details in Sections 3 and 4.
|
| 26 |
+
|
| 27 |
+
Our contributions: In the rest of the paper, we present our new formulation of EigenGame, analyze its bias and propose a **novel unbiased parallel variant**, $\mu$ -EigenGame with *stochastic* convergence guarantees. $\mu$ -EigenGame's utilities are distinct from $\alpha$ -EigenGame and offer an alternative perspective. We demonstrate its performance with extensive experiments including dimensionality reduction of massive data sets and clustering a large social network graph. We conclude with discussions of the algorithm's design and context within optimization, game theory, and neuroscience.
|
| 28 |
+
|
| 29 |
+
# Method
|
| 30 |
+
|
| 31 |
+
The Generalized Hebbian Algorithm (GHA) (Sanger, 1989; Gang et al., 2019; Chen et al., 2019) update direction for $\hat{v}_i$ with inexact parents $\hat{v}_{j<i}$ is similar to $\mu$-EigenGame's:
|
| 32 |
+
|
| 33 |
+
$$\Delta_i^{gha} = C\hat{v}_i - \sum_{j \le i} (\hat{v}_i^\top C\hat{v}_j)\hat{v}_j. \tag{9}$$
|
| 34 |
+
|
| 35 |
+
C appears linearly in this update so GHA can also be parallelized. In contrast to $\mu$ -EigenGame, GHA additionally penalizes the alignment of $\hat{v}_i$ to itself and removes the unit norm constraint on $\hat{v}_i$ (not shown). Without any constraints, GHA overflows in experiments. We take the approach of Gemp et al. (2021) and constrain $\hat{v}_i$ to the unit-ball ( $||\hat{v}_i|| \leq 1$ ) rather than the unit-sphere ( $||\hat{v}_i|| = 1$ ).
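
As a small illustration, here is a numpy sketch of the GHA-style update in equation (9), with the unit-ball projection described above in place of GHA's unconstrained iterate; the data, step size, and iteration count are illustrative choices.

```python
# Sketch: equation (9) with unit-ball projection, recovering top eigenvectors.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 8))
C = X.T @ X / len(X)                   # full covariance; a minibatch estimate in practice
V = rng.standard_normal((8, 4))        # columns approximate v_1, ..., v_4
V /= np.linalg.norm(V, axis=0)

for _ in range(500):
    for i in range(V.shape[1]):
        v = V[:, i]
        # penalty term: sum over j <= i of (v_i^T C v_j) v_j
        penalty = sum((v @ C @ V[:, j]) * V[:, j] for j in range(i + 1))
        v = v + 0.1 * (C @ v - penalty)            # equation (9) step
        V[:, i] = v / max(1.0, np.linalg.norm(v))  # project onto the unit ball
# Columns of V now approximate the top eigenvectors of C.
```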
|
| 36 |
+
|
| 37 |
+
The connection between GHA and $\mu$ -EigenGame is interesting because unlike $\mu$ -EigenGame, GHA is a Hebbian learning algorithm inspired by neuroscience and its update rule is not motivated from the perspective of maximizing of a utility function. Game formulations of classical machine learning problems may provide a bridge between statistical and biologically inspired viewpoints.
|
2102.08201/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8b5dd58918620e81c93697031c519550f28ca67acac3cbb9038142c9f388b6c7
|
| 3 |
+
size 1065802
|
2103.17229/main_diagram/main_diagram.drawio
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2103.17229/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b8f025c7e8707529329a1056cc53b155ab91113b05fabf38dfb88f1241666251
|
| 3 |
+
size 6512423
|
2103.17229/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,21 @@
| 1 |
+
# Method
|
| 2 |
+
|
| 3 |
+
As the nodes are the key points in images, we need to construct the edges for each graph. Each edge $(k,l) \in \mathcal{E}_j$ requires two features $w_{kl}$ and $\theta_{kl}$ , where $w_{kl}$ is the pairwise distance between the connected nodes $v_k$ and $v_l$ , and $\theta_{kl}$ is the absolute angle between the edge and the horizontal line with $0 \le \theta_{kl} \le \pi/2$ . The edge affinity between edges (k,l) in $\mathcal{G}_1$ and (a,b) in $\mathcal{G}_2$ is computed as $e_{(k,a),(l,b)} = \exp(-(|w_{kl}-w_{ab}|+|\theta_{kl}-\theta_{ab}|)/2)$ . The edge affinity can overcome the ambiguity of orientation because objects in real-world datasets typically have a natural up direction (e.g. people/animals stand on their feet, cars/bikes on their tyres).
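
For illustration, a small sketch of these edge features and the affinity, assuming keypoints are 2-D image coordinates:

```python
# Sketch: edge features (w, theta) and the pairwise edge affinity.
import numpy as np

def edge_features(p, q):
    d = q - p
    w = np.linalg.norm(d)                      # pairwise distance w_kl
    theta = np.arctan2(abs(d[1]), abs(d[0]))   # absolute angle in [0, pi/2]
    return w, theta

def edge_affinity(edge1, edge2):
    (w1, t1), (w2, t2) = edge1, edge2
    return np.exp(-(abs(w1 - w2) + abs(t1 - t2)) / 2.0)

# Example: affinity between edge (k,l) in G1 and edge (a,b) in G2.
e1 = edge_features(np.array([0.0, 0.0]), np.array([3.0, 4.0]))
e2 = edge_features(np.array([1.0, 1.0]), np.array([4.0, 5.5]))
print(edge_affinity(e1, e2))
```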
|
| 4 |
+
|
| 5 |
+
We further provide quantitative evaluations of the cycle consistency on the Pascal VOC dataset, as shown in Table 1. We quantify in terms of the cycle consistency score, which is computed as follows:
|
| 6 |
+
|
| 8 |
+
|
| 9 |
+
- 1. Given three graphs $\{\mathcal{G}_j\}$ , $\{\mathcal{G}_k\}$ and $\{\mathcal{G}_l\}$ , we use the trained network to predict $X_{jk}$ , $X_{jl}$ and $X_{kl}$ .
|
| 10 |
+
- 2. We compute the composed pairwise matching between $\{\mathcal{G}_k\}$ and $\{\mathcal{G}_l\}$ by $X'_{kl} = X^T_{jk}X_{jl}$ .
|
| 11 |
+
- 3. We denote the number of points at which $X'_{kl}$ agrees with $X_{kl}$ as $m_{\rm cycle}$ and the number of points in $X_{kl}$ as $m_{kl}$ . The cycle consistency score is then computed as
|
| 12 |
+
|
| 13 |
+
cycle consistency score =
|
| 14 |
+
$$100 \times \frac{m_{\text{cycle}}}{m_{kl}}$$
|
| 15 |
+
%. (1)
|
| 16 |
+
|
| 17 |
+
Note that in this case, we only consider the common points that are observed in $\{G_j\}$ , $\{G_k\}$ and $\{G_l\}$ .
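
A minimal sketch of this computation, assuming 0/1 matching matrices restricted to the points common to the three graphs:

```python
# Sketch: cycle-consistency score of equation (1).
import numpy as np

def cycle_consistency_score(X_jk, X_jl, X_kl):
    X_kl_composed = X_jk.T @ X_jl              # compose j->k with j->l to get k->l
    agree = (X_kl_composed == X_kl) & (X_kl == 1)
    m_cycle = int(agree.sum())                 # matches on which both agree
    m_kl = int((X_kl == 1).sum())              # matches predicted directly
    return 100.0 * m_cycle / m_kl              # equation (1)
```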
|
| 18 |
+
|
| 19 |
+
In Fig. 1, we show the average matching accuracy and cycle consistency score of our method and compare it with PCA (?) and CSGM (?). It is clear that our method can achieve comparable accuracy and the best cycle consistency at the same time.
|
| 20 |
+
|
| 21 |
+
We show the architecture of the deformation module in Fig. 2. Each linear layer is followed by a Rectified Linear Unit (ReLU). Additionally, we introduce a linear layer depending on the category of the input object. Its purpose is to assist the neural network in distinguishing between different deformations among categories. For detailed information on the Graph Matching Network, readers are referred to (?).
|
2104.08701/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-01-19T05:27:04.205Z" agent="5.0 (X11)" etag="DWzyxixyw4WKDbLP8jVq" version="14.2.4" type="device"><diagram id="1bU7FUfBltCtCoZ4MBlI" name="Page-1">7Vxbb6M6EP41eTwrwFzC4/a60ulqK1Xato8OOIlbgllw0qS/fg2YcDFJSAuJC9tKkT2Age+bGc/4wghcLta3IQzmP4mLvJGmuOsRuBppmq2P2W8s2KQCYNqpYBZiNxWpueABvyMuVLh0iV0UlU6khHgUB2WhQ3wfObQkg2FI3sqnTYlXvmsAZ0gQPDjQE6WP2KXzVDo2lFz+A+HZPLuzqvAjC5idzAXRHLrkrSAC1yNwGRJC09JifYm8GLsMl/S6mx1Htw8WIp82uQB7wQ1++YN/u8/B9erXPSaL///TOD0R3WRvTEI6JzPiQ++OkIDJ1BG4eEGUbjg7cEkJE83pwuNH0RrTJ1ZWePm5UL5aFyubrOLTcPNUrDxnbcWV/KKkll0V0ZC8okvikTB5VqAkf+yICAbHJyLL0EF7EMiUCoYzRPecZ6XnIbekLxzqW0QWiD0pOyFEHqR4VVYfyLVwtj0vJ4oVOFdH8MbbXUFvye8k8BiSpe8il0P3NscUPQQwweKNmWqZwSn2vAKuLkTjqVOHuOmM0WS6D/EVCila78WIHwWZaXDfoI95/S23NMBF84KRmUpHqGq9QNWSDVVDQDG2ogdezf3NdS69KOMssU9yluEqeU71U24INHRDqi2VHwLdWsx07CCn1mImY0M39nr+5hajy+aH9F6gKp0fMj/gh0rm3ZpTUo93SuoxTqmLQMlo6KGyBiXxUEYfbMmQzUNp1gEUB2YcZtPuW5PKOMw+BLyGbB2N1QdUVbuMKtDPjOq4D6hqumSo2n3oHjXZdDUbvfzasALZlFXteNjrRLBKp629GPfSpdPWXvhWU5EMVq0XvtU0ZIO1F77Vkk1bs5vtGHTyiY/aG9/O5MeMb+f58bHZMvLd7/EMa/4aTHKDY3yuPpdNa9xzfrFJOaB2zLV6PNfqMVwfYrSLkZPGXEs28bF/UqtdrpVvxgG2t7w5Howi7JSoU+uoGyvxfyvUtUcJv/SeYPYseWRX6StZuvdN0S1TtbPfcoupKvFGKjxvn+oT1Itjn4fnET7j4bfsf3QOk7fQzPIba9DW+FuaAj2ZOmUT0pk6acpJ9UcT84J7QjzszwS9iuYwiIuURVfoncTNXgQoxOwRUFiU3+fCw3HZGmULreqSNQONXb3WY2gTYJotjdhUJjQ0S4zTrJo4Te8sTgNfPE47R98NtIZ9t2yrFvR/gVp3ZEs2AQz2T/sPIlLLuOu6a7Wqbv3MkZq4uvXIFR99DtvkVyfdPm+klr3nkCM1E8gWqYnh8w8s8MHejpbRLKPE3X4RUi6CHp75sUExzBKWYqywA73v/MACu27iKeroKw+RTolPj/LkR8wiqWXbMAyRF72Gl6oJtbdoURyXXy4GyIspGy91A/umF7Pg4hUrzuKiA30n3rmTHmA3KhwbHom6dMYl9kSLzQB5kc64xAXwJHQZgoOjxpDOZMRV9IGHYIQacNMc8xBF+B1OkqZiJIM4Fk1exbgYGVdxWyxDiApBXIcUmMpBCoyTUiAuvn4abfdDDsg4NOswM6c1jgYrf7OUhiXE3uYihM5rPOxzKGnJMU14TBU/zWR3j2x0Cb192C9pJ4VeXB68gNErC7bgIsbSn0RB8upDtJOqB6tZyn1aO2mw6rindlID/WntRMzzB2gRQpp/bovI3OfAaakmImenRczyB0iLIR0tYt4uLhLrPS3CcMrZaRHT9iHSIp21iCn7AGkRRlJOSMud+cf5bSmvt7b78x5sLh9fpo/ZZ1HO8/2OD+wGLsybNtkPnL5dZS7zg3vkeW5XXCNRC+l5t8SLGeitRybQE3g+asH/7t09u/YDtRGIVTZTbbetHpieBJ35MDFjvCNO69huN03s2mbRSv9wPmx3W00F2hmm0eewrSCIVNdAVh3mtmkBmE+rC0DWwN04JFKBiK3dDrasmn9lLl0akX+qD1z/BQ==</diagram></mxfile>
|
2104.08701/main_diagram/main_diagram.pdf
ADDED
|
Binary file (14.8 kB). View file
|
|
|
2104.08701/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:3a6e8219879c3120be71a8b7aa2af52deae1e3b9a6b2f459cb39d1f57d33ff0d
|
| 3 |
+
size 220917
|
2104.08701/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,70 @@
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
While generic dialog systems, or chatbots, such as Amazon Alexa or Google Assistant, are increasingly popular, to date, most industrial dialog systems are built for specific clients and use cases. Typically, these systems have the following: 1. A natural language understanding (NLU) module to analyze the user utterance, 2. A dialog manager module to reason over the analyzed utterance and decide on an action, and 3. A natural language generation module to generate an appropriate response based on the action.
|
| 4 |
+
|
| 5 |
+
Typically, an NLU module has two purposes: understanding the intent or goal of an utterance (classification) and identifying the entities in the utterance (slot filling). As dialog managers have evolved from simple flow-based systems to information state update systems [@traumInformationStateApproach2003], NLU modules have progressed past simple single intent detection and flat slot filling to multiple intents and nested entities [@8639605]. As these dialog systems need to be rebuilt for each client, the NLU module faces a significant data bottleneck; it is time-consuming and expensive to collect data, develop a domain-specific annotation scheme, and annotate data. Therefore, it is imperative that the data is shared across clients as much as possible.
|
| 6 |
+
|
| 7 |
+
In a production dialogue system, there are often similar situations that require drastically different responses. For example, "I want to cancel my subscription." and "I am thinking about canceling my subscription." are very similar. They are both about canceling a subscription. However, they differ in the user's conviction: the latter user is much more likely not to cancel if offered a discount. Making this distinction is critical for creating sophisticated and nuanced dialogue systems. A common approach to solve this problem would be to split the intent space so the dialogue manager can differentiate between these examples, creating a `cancel` and a `think-cancel` intent. Using intents to recognize specific situations leads to data sparsity, as each intent is broken into many sub-categories like present vs. past tense, how certain a user is in their actions, and whether the user has tried an action or not. There would be very few examples of each intent. Additionally, the combinations of different sub-categories would cause a combinatorial explosion of intents. Another shortcoming of fine-grained intents is the loss of compositionality. Fundamentally, the `cancel` and `think-cancel` intents are very similar, but because they are modeled as independent output classes, there is no shared representation of these labels for the model to lean on.
|
| 8 |
+
|
| 9 |
+
In order to avoid these shortcomings, and to allow for many examples per intent, we factor out these small differences in situations into what we call intent features. Intent features are a set of domain-independent properties of intents that can primarily be understood from the syntax of the utterance. These intent features capture specifics of the situation, such as tense, without requiring a massive intent space. By decoupling these small differences, we can keep the intent categories general while still providing the dialogue manager with the information it needs for nuanced, human-like responses.
|
| 10 |
+
|
| 11 |
+
In a multi-intent setting where each clause in the utterance has an intent, intent features reduce to the problem of classification of a span embedded within a larger utterance. We propose a new model, the Global-Local model, for this problem which shows significant improvement over strong baselines.
|
| 12 |
+
|
| 13 |
+
::: table*
|
| 14 |
+
------------------------ -------------- --------- --------- -------- ----------- ---------- ---------
|
| 15 |
+
text topic/intent attr-cf attr-ev cf modality negation tense
|
| 16 |
+
I am trying to install install self self inform modal-try positive present
|
| 17 |
+
and \- \- \- \- \- \-
|
| 18 |
+
I see a problem general self self issue other positive present
|
| 19 |
+
------------------------ -------------- --------- --------- -------- ----------- ---------- ---------
|
| 20 |
+
:::
|
| 21 |
+
|
| 22 |
+
Table [\[tab:sample-utterance\]](#tab:sample-utterance){reference-type="ref" reference="tab:sample-utterance"} shows a sample utterance with its intents and features. This is a multi-intent setting where non-overlapping spans of an utterance have different intents. Each intent span has the following features:
|
| 23 |
+
|
| 24 |
+
**Communicative functions**: The communicative function (cf) captures what kind of response (or action) the user is trying to elicit from the system. We define five such functions:
|
| 25 |
+
|
| 26 |
+
- `inform`: The user is informing the system about something. Typically, these intents are a response to a question or they represent background information surrounding the main purpose of the utterance. For example, in the utterance, "I am installing X but it keeps saying I have an error", the first clause has a communicative function of `inform`. The user provides background information about installing something on a device and then presents a problem with the install procedure, which would have a communicative function of `issue`.
|
| 27 |
+
|
| 28 |
+
- `issue`: The user is saying that something has gone against their expectations (see above for an example).
|
| 29 |
+
|
| 30 |
+
- `request-action`: The user requests for some action to be undertaken in response to the request, or requests help with something. For example, "I would like to install X."
|
| 31 |
+
|
| 32 |
+
- `request-confirm`: The user is requesting confirmation, or disconfirmation, of their belief. Often this warrants a yes/no answer. For example, one expects a yes or no from, "Was my installation successful?"
|
| 33 |
+
|
| 34 |
+
- `request-info`: The user is requesting some information about something. These are typically expressed as "wh/how" questions, such as: "How can I install X?"
|
| 35 |
+
|
| 36 |
+
All of our running examples above share the intent of installing software; however, differences in phrasing warrant different responses. An `inform` does not typically require a targeted reply from the system, whereas for an `issue`, the system should start the response with "I am sorry you are having trouble."
|
| 37 |
+
|
| 38 |
+
**Attribution**: Attribution is concerned with *agency*. There are two types of attribution. The first type is the attribution of the communicative function (**attr-cf**), which deals with who is the primary source of the content of the topic. The second type is the attribution of the event/action (**attr-ev**) of a topic, which describes who is the agent of the event or action. This is perhaps best elucidated by an example. In Table [\[tab:attribution\]](#tab:attribution){reference-type="ref" reference="tab:attribution"}, we see multiple utterances that all have the intent `payment`, but we can see how the attribution features change as both the payer and the informer of the payment change. Both **attr-cf** and **attr-ev** take values `self` (when the agent is the user) and `other`.
|
| 39 |
+
|
| 40 |
+
::: table*
|
| 41 |
+
--------------------------------------------------- ---------------- ----------------
|
| 42 |
+
Utterance Attribution CF Attribution Ev
|
| 43 |
+
I have paid \$\$ self self
|
| 44 |
+
I got an email confirming I paid \$\$ other self
|
| 45 |
+
I was charged \$\$ self other
|
| 46 |
+
I got an email confirming that I was charged \$\$ other other
|
| 47 |
+
--------------------------------------------------- ---------------- ----------------
|
| 48 |
+
:::
|
| 49 |
+
|
| 50 |
+
**Negation**: Topics of many intents are represented in their negated versions, as well. For example, in the software domain, the `compatibility` intent models whether a piece of software is compatible with some device. A negation feature would denote incompatibility. The negation feature takes values `positive` and `negative`.
|
| 51 |
+
|
| 52 |
+
**Tense**: Events and actions can occur in the past, present, or future, which is modeled by the tense feature using values of `past`, `present`, or `future`. The steps to solve a problem as it occurs are often quick-fixes, whereas the first step when fixing a problem that occurred in the past is often information gathering. The tense feature allows the dialogue manager to distinguish between these two possibilities. Tense information is common in the annotation of event extraction, such as in ACE 2005 dataset [@ace].
|
| 53 |
+
|
| 54 |
+
**Modality**: The real-world actions and events represented by an intent can also be viewed in terms of a modality of certainty, that is, whether or not the event or action actually occurred, and to what degree. We consider two types of modality. The first is *possibility*---the expression of the event as hypothetical, or being possible, rather than certain, as in, "I am *planning/going* to install X on my laptop." We also consider *attempts at action*. An expression can imply that it is unclear whether the action was completed or is in the attempted stage. This is expressed with modifying verbs, such as, "try", as in, "I am *trying* to install X." This feature takes the values `modal-poss`, `modal-try`, and `other`. A version of Modality is present in event extraction datasets like ACE 2005 [@ace], but instead of just marking an event as "Asserted" or "Other", our version of Modality distinguishes between different aspects of hypothetical events.
|
| 55 |
+
|
| 56 |
+
There are four different model types we explored for intent features that we detail below. However, before we can annotate an intent with a feature, we need to have an intent span. First, we describe our intent span extraction model whose predictions are used as intent spans.
|
| 57 |
+
|
| 58 |
+
The intents in our system are often conditionally dependent. Some intents even appear sequentially, for example, the `cancel` intent is often followed by the `refund` intent, as users tend to request a cancellation first and then ask for a refund. Therefore, we modeled our multi-intent system as a sequence tagging problem, where intent spans are encoded as token level annotations with the IOBES tagging scheme [@ratinovDesignChallengesMisconceptions2009]. We used a standard BiLSTM-CRF architecture following @maEndtoendSequenceLabeling2016. Each input token is represented both as a character composition, by running a small convolutional neural network with a filter size of $3$ over the characters and doing max-over-time pooling as in @DosSantos:2014:LCR:3044805.3045095:14, and as a word embedding. We use the concatenation of multiple word embeddings, GloVe embeddings [@penningtonGloVeGlobalVectors2014], as well as $100$ dimensional, in-domain embeddings trained in-house, following @lester-2020-multiple. The token sequence is then fed into a bidirectional LSTM [@10.5555/1986079.1986220], where the LSTM [@10.1162/neco.1997.9.8.1735] in each direction has a size of $200$, and projected to the final label space. Finally, a Conditional Random Field (CRF) [@10.5555/645530.655813] with constrained decoding [@lester-etal-2020-constrained] is used to produce the final sequence of intents. This model was trained using SGD with momentum using $0.0015$ as the learning rate, $0.9$ for momentum, and a batch size of $10$. Model results were satisfactory, but not the focus of this paper. Instead, intent spans are the atomic unit of text that can be annotated with intent features and can be used as features for a downstream intent feature model.
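
For illustration, a small sketch of the IOBES encoding that turns multi-intent spans into the token-level tags consumed by the BiLSTM-CRF; the utterance and spans are hypothetical.

```python
# Sketch: IOBES tags for non-overlapping intent spans.
def iobes_encode(n_tokens, spans):
    """spans: list of (start, end_exclusive, intent) over token indices."""
    tags = ["O"] * n_tokens
    for start, end, intent in spans:
        if end - start == 1:
            tags[start] = f"S-{intent}"          # single-token span
        else:
            tags[start] = f"B-{intent}"          # begin
            for i in range(start + 1, end - 1):
                tags[i] = f"I-{intent}"          # inside
            tags[end - 1] = f"E-{intent}"        # end
    return tags

# "I want to cancel , and I want a refund" -> cancel span, then refund span
print(iobes_encode(10, [(0, 4, "cancel"), (5, 10, "refund")]))
```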
|
| 59 |
+
|
| 60 |
+
The first approach was to assume that the feature labels for an intent are local to that intent span, and, therefore, each intent span can be fed into a classifier independently of the other intent spans. Under this assumption, we used a convolutional neural network with parallel filters [@kimConvolutionalNeuralNetworks2014], as it is a strong baseline used in several of our production systems. We used parallel filters of size $3$, $4$, and $5$ with $100$ filters each. Max-over-time pooling was used to produce a final span representation, which is projected into the label space. This model was trained using Adadelta [@Zeiler2012ADADELTAAA] with an initial learning rate of $1.0$ and a batch size of $50$. However, this approach misses possible dependencies across spans. Some features (such as "tense") are naturally co-dependent among spans; the use of a past tense verb in one span dictates that all spans in the utterance are past tense, even when there is no explicit signal from the span itself. While less intuitive, the "communicative function" features are conditional as well: an utterance such as, "I would like to order a pizza, but I am having a problem" (a `request-action` followed by an `issue`) is far more common than an utterance like "I am having a problem, I would like to order a pizza" (an `issue` followed by a `request-action`). It follows that the "independence of intent spans" assumption will become problematic and a contextual model that takes other spans into account will be needed.
|
| 61 |
+
|
| 62 |
+
This motivated us to reuse the BiLSTM-CRF architecture we used for intents for the intent features, as well. This model takes the utterance as input, just like the intent model. This approach has a potential pitfall: the intent model and the feature model may produce different boundaries, which then need to be heuristically merged. A small modification to this approach is to use a cascading tagger where the output of the intent tagger is used in the input to the feature tagger. This is done by creating an embedding that represents the span each token is within and concatenating it to the token representation. This gives the feature tagger information about the span boundaries and should keep the spans synced between the intent and feature models. However, the actual intent labels need to be masked. Instead of seeing `intent=issue` as a feature, the feature model will just see `intent`. This is required because we want the feature labels to be reusable and therefore unconditioned on the exact intent label. Intent features are applied to intent spans within an utterance, meaning our BiLSTM-CRF tagger is a natural baseline that considers the global context of an utterance.
|
| 63 |
+
|
| 64 |
+
Our fourth approach is a new model architecture we call the Global-Local model. This model aims to create a targeted representation for a subsection of an utterance while also infusing information derived from the whole utterance. An utterance $U$ of $n$ tokens and a subsequence of $k$ tokens from $U$ are first encoded into matrices of dimension $n \times e$ and $k \times e$, respectively, where $e$ is the dimension of some shared embedding space. This encoding can be as simple as word embeddings or more complex, like a BiLSTM encoder. A "global" pooling function $g: \mathbb{R}^{n \times e} \mapsto \mathbb{R}^{e}$ then collapses the global sentence matrix to a sentence vector and another "local" pooling function $l: \mathbb{R}^{k \times e} \mapsto \mathbb{R}^{e}$ reduces the span matrix to a span vector (both with dimension $e$). The local vector is a representation based solely on the span, while the global vector is a representation of the span that takes the whole utterance into account. These vectors are concatenated to create the final representation for the span $S$. This representation is then projected into the output space. The pooling functions can be as simple as max or mean pooling, or as complicated as self-attention [@vaswaniAttentionAllYou2017]. Each example is represented as a sequence of tokens and a mask. The mask is a sequence of zeros and ones, aligned to the tokens, that marks a token as part of the local span (a one) or not (a zero). A diagram of the model architecture can be found in Figure [1](#fig:arch){reference-type="ref" reference="fig:arch"}.
|
| 65 |
+
|
| 66 |
+
{#fig:arch width="45%"}
|
| 67 |
+
|
| 68 |
+
Our implementation uses lookup-table based word embeddings, the same embeddings used in our convolutional baseline, to create a sequence of vectors representing the input. Then a convolutional neural network with multiple parallel filters, followed by max-over-time pooling, is used as both the local and global pooling functions. We found that when $g$ and $l$ share parameters, results were a bit worse compared to when they are learned separately. Like our convolutional baseline, we use filter sizes of $3$, $4$, and $5$ with $100$ filters each. This model was trained with a cross-entropy loss using the Adadelta optimizer with an initial learning rate of $1.0$ and a batch size of $50$.
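
As a concrete illustration, here is a minimal PyTorch sketch of this implementation. Sizes follow the text (filter sizes 3, 4, and 5 with 100 filters each, separately parameterized global and local pooling), while the vocabulary size, label count, and the mask-then-pool treatment of the local span are illustrative simplifications.

```python
# Sketch: Global-Local span classifier with parallel-filter CNN pooling.
import torch
import torch.nn as nn

class GlobalLocal(nn.Module):
    def __init__(self, vocab, emb=100, n_labels=5, sizes=(3, 4, 5), filters=100):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.global_convs = nn.ModuleList(
            nn.Conv1d(emb, filters, s, padding=s - 1) for s in sizes)
        self.local_convs = nn.ModuleList(
            nn.Conv1d(emb, filters, s, padding=s - 1) for s in sizes)
        self.out = nn.Linear(2 * filters * len(sizes), n_labels)

    @staticmethod
    def pool(convs, x):                        # max-over-time after each filter
        return torch.cat([c(x).amax(dim=2) for c in convs], dim=1)

    def forward(self, tokens, mask):           # mask: 1 inside the intent span
        e = self.emb(tokens).transpose(1, 2)   # (batch, emb, seq)
        g = self.pool(self.global_convs, e)    # global: whole utterance
        l = self.pool(self.local_convs, e * mask.unsqueeze(1))  # local: span only
        return self.out(torch.cat([g, l], dim=1))

model = GlobalLocal(vocab=10000)
logits = model(torch.randint(0, 10000, (2, 20)), torch.ones(2, 20))
```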
|
| 69 |
+
|
| 70 |
+
The data consists of customer utterances. They were collected from the first customer turn in web-chat conversations between customers and agents from a software company after filtering out low content first turns such as "Hi", "Hello", and "Hey". Our training, validation, and testing datasets have $36{,}725$; $9{,}256$; and $4{,}993$ examples respectively. The data was annotated by a team of six (non-overlapping) commercial annotators over a period of a month and then corrected by an expert annotator. A small subset of the data was annotated (before the error correction) by two expert annotators. The agreement was $53\%$ between two expert annotators and $42\%$ between one expert and the other non-expert annotators.
|
2104.09667/paper.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:394d9a7e76603a7861e01637dbb0b9110c4643140b235fa92d85f5ebacb798ed
|
| 3 |
+
size 3801698
|
2104.12280/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1 @@
| 1 |
+
<mxfile host="app.diagrams.net" modified="2021-05-30T18:48:22.838Z" agent="5.0 (X11)" version="14.7.3" etag="-49_JrD5Tm4EZ9T7mHDZ" type="device"><diagram id="RWUHDood0dBW9x0UN83F">5XzXtuS4ke3X6HG06M0jvfcuyTd6b5JJ//WXPFXVrVa1NEY90szcXFXnECAAAoGIHTsCPPknmOkPYY6nShuzvPsTBGTHn2D2T9D9AfD711NzfqshflSUc519qwJ/rXDqK/9eCXyvXess//ym4TKO3VJPv61Mx2HI0+U3dfE8j/tvmxVj99unTnGZ/1ThpHH3c21QZ0v1fRUo8Gu9mNdl9ePJIPD9Th//aPy94lPF2bj/RRXM/Qlm5nFcvl31B5N3j/B+yOVbP/5v3P1lYnM+LP+RDtC3Dlvcrd/X9n1ey/ljseU8rtP3Zvm85MfviThOfjQHfp4C+MvCbo3Ixz5f5vNu8n2gfwPx733OH7v8vbz/KlsQJL9XVn8hWPRHZfx9Q8tfRv91zffF92X/vgjgf18EtwSGLH/ag3+C6b2ql9yZ4vS5u98aftdVS999v/03RfWXIoH+rkgg4K8kAv8sERz7HYEg2B8gEOSn9efZrf7fi+O8VGM5DnHH/VpL/yoh4C792kYdx+m7XJp8Wc7vthyvy/hbqd2Smc/X0//P6I9i+H24rwJ7/KZ0fi99lnlsf7HCr65HvfzFQHcp/PGM+/rXYZ7Cj1G+rfhZ5t/fsVsq4zqnP1oR37Ennst8+Qtt+nlj57yLl3r77fC/t0nfu5pjfT/4F4VAid8qxF9v87dZfe/0Vzv9yyz+Q5uP/o41YN29ODqrt/uyXL7E9a2qGL8m+aumYO91/HHj3z5fe03dDSB0On69+dejJD8qnCVe8ru5OedZnS71ODwr+d7qvkz+uudd920KP1X/zmT/++dvzuMN/fG3if/Ds/5HMKiou44Zu3H+6gtncU4U6S/W8hd3sJTIk+IPQi0C/TOO/xa4iJ+BC4J/B7hA9A8ALuxfAVw/A9B/DUzwn7EE/X1x/7VT+EdEhv9Nc/9hGz/I1w/tB/+uIfxXbUnSvP+UyfyBxlEUOZb+rnFkOJkAwD9oHOcP0gv9mQRQAoRJFEYIAvmZ4uDEnwEIJHHi+0/kZzPBkD+DBAYgGAqCGISTf8eR/Ec1gPibGvDTLvxjcKnWWTz/+xiY/C/eZAT+2uRfd/l/yiaT/0PMnFNufw54U/bl6P819v7PcYYY8i92hj9i5X8Vjf8Nif+V0/+Hafx/zYuiP3tR8r/fi4Lg/xD7YvNPm+/3bSeNf+Ggyfz3Yfb/isn9Dv+EyX+qyf2cOfhfFTn/YSYH/o2Mzx9qc/D/j/j2OxmHf4qw/0flhMB/krDBf5E3+ds5mD+Ykuus+4uvALR4Sat6KP9VFP1fxsv+yU7if3WW4gfT+Q0A/RPyFCD+v1lqfwM0/sGkMEn+NikMoehvh/i2SX9EVhj8vSzBX+3Hp4qn57Luv07P6MdM6/SWdpzknTl+6q+8KMwm47KM/d2ge27QcdqWXxv1m2j9+fzFGFRXl0/f5dk4Ov5M3071ivp4tpf+eiT1oxb4UXNf34FmfCPgtyLETw+8MbVPG/YOKEI5UvdHd7yK88r7ynyKTEhT2nMxwmIfPBcaT2s+9/pKRj//COof+GxwTOPPBQvbjttp1HqqG6dsWZ2CUNRapy8PpJ5r3NdcXrQUvJ7Z4E/ZOCjBV3Ykua+VmOs4y7cRKgfjF1K29tFvfQRn9/9o9QF5Ok9Jg+mUk13t5YZD2ZcwrMgoJ2Q0T9sH33jVvZiyJxkt7RgNeXMDeIJjbolSiecFS0/gcASviHoRrpRstEw0VNIOtzuhHTGEQyJS07641eS2Cnp60tdgAV/9AeOhHrKk7r9An9yw+Y0/rsT3SZ89mXPipTtA4cnM2iQWhpNnW8jAnfrEXwejgaV3f/Z33R200E/HOBYh7v6dFkW0TyOC6gpC4W8NxVThrbDOEtbhhzLpPB0CqP9EzVnAg0IDAwjFcx+nrgnybE+q9xByF/WzUTnrBI1h60wKoueTQfATMlAHertC2sC5MyACCh+gPKndiWGazFVmVX7f3e27AWZ59xWT0Y7Va3e5riv1bSIhu7eoKGAhmLPXZ9Xxs8IwGlU2y1LX0qW2AdSE2H8dLex53LBz5eq5jVRD6vqiVs4U7kHzhY0ue2Vk0kZE05TmtdFiHQfqsVn8bX2rxybvu+RFQVMG+dzXt03SrKYT02y8p4MV6o8WE2HWxOAerLVxVDPbD7oQnmssvWwh3M277P8o83eZGQafD6817jasOIZxEY3i2UuoSUu5EVz5294+K74Xyy8D1ty/JPPdl+2bmPRHKBGXAWozBhE7ZafhZq3ZAB1h3d1gV5rmm1nQC0hWWGJUhrs2qZ2XeuV7syXyAzbWkXwtx7xSoJ5Jh42kTphj7lNXAbeyxdTFfYLocqFxLFZ7KNvEywbkVKmKehROB4TzHWVld6MGrfeF9N4lx3SFwBKMTKAvu99NjeIuFECOyTIDDdaZkMO7l1Z5PkUbqBoJKZCUotJ6buWidbOV5GGvGj1ylgFbgPWK9Cb1puaFqXt0Ky5NuZgrySHUVBCaYnPapeym6F1vE3Y/aG666rludwIQUcELNiRAHlEdO4jMXj59LkTkIczmUiuotCXs2l9KgZL3nHC7OXJDnVjBsDPYf9WOvkcnMOOoJDQpFegw/WIYPchr9NkU+AYu6ezTWwZXHHPHxQubrsWV92yfw+AYf8wfatME7wuxaIMPLmAE6AV+TMq+LNfwDqEnaK7ZPnQ0AqzCW7vP7q/+Su2poJxnqYg7ZiYxuMIGB7klOHfV7FHop/68TGksUsIztNGkzMYCKYhxMu0uTzCll3D2PJQnRtTOpgTc+wOlRbZIqahXKNxdTUsxjLYQltf6aasm9Y/ogVV7o9p5q3wMMSCm/USi8GBukYWhB0gIpJMmaE3phwG2Ws+i5JtydptUEETXK20SJ/tA+4EJPLgmlas+0AhNViG/PWVFu93cvdZkmMWmlSPdDSImM0JOP5UKVx3UNTABMvdPDjieMQvdMWICoqnnCx66gO01PEx2YIG4XohGudr28DRdg6NoIYwIKMDI+kEQZE8esOr1OJzdTBVeHFPwg7i7ei3FVoZs3pAUIWVfBc7uTPmSxvGtQvv4zcxYQcK24cIh/y4njWu2m8voTNbypU45jOGRuwQYCqWvnRZzGa1Q1usOzXgKZIAYdYVKLClT2qESlu+hp/Dd37INai0i1f3ZvvRNXNh89I1mDY1KmA+EvfGoLKpPz06onPiu/qDcqegGluxqJUO3TpwC3GjJZnQGQbmL+QGs6JE8E79ARE3lJsavyYUb5iOCA8ftBhn7EkVcOMDG6WVWtjoLjAXRMiPqjfjs9HSYkygixe0mbq9n02aXG46ggylQCdcuJbx+Wbk+o/1c9f5VDOsYo7ctgydaK8rdHxGWEm0KoTvRRjVIl7p
u0yvVdn6jn3W9pwbFx+fwRVyI+BWfmMiwh8AWarQ/Y256tIOotRj5THHUAdc0CDGrCuj54giA5Up7TVj5m6dHuauhYoYT7Zx+F8L4gVDhNQyPKGVt+9aGiyiFB8gseJQVuwawVGWmireeMZU2jZ/tyVvA16oRrDNLKV+J0ybno5F4Ar5rS4DelvqcdC/F0eDBGIlrcjwvYfD9UAwq2kPkPWP68xaHbUXLkL/VjP+ErxuRhMfcW6kMxH6tPknOzrichEEYaAXqLQ1rJwr4yPuQO0J1MjZo4azZL5fd/Go6k1SajpBXku2AMUBksR7Dp8edO9x7KOLEFiTlBiaLVIMJgydHAN8jIIzaoS1oYhzSCbAYus7xC2Aqe1DDhIKZeBsFYj7dVhL92y8xk24gRJIDTWUargUOmeMDMeTMwCUVwrBUpPWeXQYfLEHZdzwCQxullRD6mPXBaLsj6zzhjy8KFg+jJ+0cIpb1y0G1RmNI8OC+k+VNbnoQksK6YE5N6khBvBycOuwTnhgnB7wJ5sXUccAQq86Q/YwVP3NMw8UdFMIcfvn9q4/Cxs51zaghtnRpHf+8cL3WZRQ3AR6tq2o0uXNauoSP2ROTl5VczMhuXzAXEH6Ku6/teb+Nvj1PUGGA4V+IKexRZN20nxfdAJLoDj9hQlxmM6kOA6ULeVzJBnbzIwrBBpmBnr5EXSPCJl5OwHvvt98hBJaEB+QoUcLLxvPhL8PgXtpBdWzJMchV51GpGMCrMuxp9ykB3jWQhplTfdj8qo+Z95qU18R6K9O/+KkwMcpE6M3Vad1PAgkw+2R+MCfAP4ZF6uPoiAz0wrzY3CzDfKzMbzlLFZgF8PI2iWL/9Jcj7WiqDCeUL5PXmwWwS2TxatE0jqdfd5h+lNaUCIqsjbvsUtSgSwx3s1Z7tMikNB6n0twO+BIbydRXVk39IkY/7BaPSkPWi0BDFDF4pZZnr+r2qfi+sCJjotiDqU1GhaS5EgRUJ6UZ7BIRlSXA7qyzUmegXXOvI1BOi495JPDLD+ETIKz13T8Y1z6gMNhg+phuwgsDSEDXNVbEgI4ikbs8axKk+gkdNOfe51i+ENopSe60t7pD7DkwZYLWu5ovqJWgBk0et7cJSmradGGLPZsvlvEokcL54KKKlbZur+Vhhq5WhMg2fZDxfGz1RnjXfn+K6Yw819SOe6q8gd93+NpnihUxJ3BMSVWpKkvMHprsQZXtgL0dEZ8AldFU/DQV/OkVNpSKF12PZnmBIszjPFNGc/6By9fkvDXhJLdgS3HDskrLJQ+V0FBBYl5aI9NkkajeQ7opNJZ3zKT09nXiN3Ee+14ezBXaLrfz4CIOjrKD0F58x9j8auADG/xr0h/5vfhQTa2EUjHde72ilxfNY7o3WXxBN6Mo9E55YMll4TZ5J86mthhwmeWGZQqbT4hoezIqBj4qWHpPzlHfyO+RzXDm1hH+tKvpUfWwM/m3tSW9QabVqj1G7/mcyTfQuIQKcu3qmlwOCwPhZ9Pks4d3u1atkAqZSx/c3oDCPKDPG0TphyVfJY66McId4FIqNW+AqR5eMo0AKlLvJGSAzwb0BHmI8KLiRFPo7pJ+InjmRkVmrY2t/N1pdrl0EJfzKyhnjqohPppf72qfynqO0ksQ7lmCTlSQMtAMi5nKgLjAWi3r6vtJniyye972gbajL09VA4xLUFUmKQvqcRudBl8KlrCRLo4s1NfM+hkglBgZ8DSEodQ3GqNK8O1XDRUEeHBW805ds8o4s5bd3SPTkyfx3T7u/A78Li/z6aLXQxsj/EjY4iLNJbdAxzRgefBhZrfSTjQlvh4V23N7ERCGNZ/OmKaNLNAfMSt/So3CA+xKmMcXjaQ5yIDRx36fg1qzGgRp9u1xe4ZPRrwe4qN1IDk4APYmmme3Hmr6xiz85U34h3n5KgsCL22QwED2wsd1Zo89pC5BY8ZnPgUrlDz3K0LmOt5tndXqGeaPyfmhEPxnEv0pxweC0J8B8uc0Hw78mST/gJTV7x26/9/LmbRfeQrre87E4dOW/e/PmfSOdNbJ9pU3uV5Rb/NOK7TY/O/kTHzol5yJn72Coz394FHdozfN3Cysm2OLayXUgsAuqtLRNE8hVBRI/e5BnrIHo1PrIx+1Hc8JuBiBMWYlkuVxDAPafsOoT3T8qj+VUJ4KSAFtrt2i4yHmrOkKjuuPRAIMNMBlSGP6p0MO3IzUBHUf/C+8OZ6ThzHEAHCgm1k0wexOILacmzvB2LtHYQPoBOcyp5sQPF2YvO3hx5Dy+TZ162WHrWf6BlSfupwcUXSHHmBACqz/8MW8Mcmybky8H+zHnFlcIMr19g4WGZL+q3xgkj/1LrXUuNgvriHjkeuAnR3fCpP7vmdFMZCXxgUQfpKen4URtWweolEZWwzXnnm4/HQPokB4uRzCdgTvvC/fjYu/r9tVqfVruYTSplm+YZPl7pYUWhnde+QTvgDqTMwppKiQkMK85XXHdVJ/OHI+cMN2M3+tymrdn0ef99ys6gt8rrOghTJegtS2oena8JWXgiFl3y/o1BNbvZBGOw2GcHBN7YiwBAcoUr7rNYqelMSWxKkNQ6adsYglUUpNPVRI5961KffR9eAi6EvAaptRwyM28r4ZLWjdshkOVSpJ5QBGJw7t+sbdb2EoaJTXW5CuhBUwRIGVmIlvU7n6JCKlVtZU84k6vGRJPIyFPjHdjnYHBUyfwFpMCq/G4rozDM0qnCV/5QD+pmGkyO9zKn+UcU0pwdQkPql02hceLbOPVIGZhHmrA0U3FyOQuNA7VM0D33rMYi53D8IiNnCCzpJoTBsvSqnBb/JDSKXejbb2RoNkOozacptuou1anIJi9E7Ws4T3LgDvWo4fxsZx9JtS4LdDY3LJvzuVVXhsoBLnLT2ShDgHQwkVy0IJuthWl75Wj8m0kRQeS+ks9OTvaHZatxv96DDr5XKxWjCUux4hiJRbrrvaenq1sanDm4fVtK3xscRXIS4NMPokPBY3WQGMjsIns1QSdy153G2UqQsN8bURAOq50yrM1nrImGkrdjuWLD1NfSEH16qYCXJpj3W1+TPchQsgNaeTfI3DNm15mozjOrWAxfUMETiba+fxagLAgpfMcrYDVPnihYwZ3Pp2sUmUaDH2Tlzc23ooBvaWUqfyA4SNO5suuAU1IpJdooYI0NMyPbqlwai2SPl1Wxnvj2LS0XImPaSjeFKb6+fE1A9K8W/UJVLwZdnQgWysL21KkGKRSxoaLFBTyYjfdE0YJ1A4+x1+h+pZsL68rZExRw/xdfS7JVoKooG+gToIq7GJAqFxXGzI3SsghxBkvP6qqA/CNlmaMUTd213sGCb06kp/l4F4FSQAn0ESoHUPtw9KZKoTR6sb3TxZuT0DRAdI8zp77S0/uQJUOV8Bdj9aewmUP7LCp9F7J7wNyde9ZlkkzFbqD/FN490nr1FhhEVjpaJv/RJxwK2D79JbQJGRPXcw+w2QP60Z6T0DaxYdl4auZgXFjtPaY7IDc6HsHxQIjVkh76j1gAR5hz9vaut9DSawi+LN6nWuj26OYC
NHNxtWMlppq4Q95jbw2I8TKW3ZV6mToD1QsqjO1PYzzr2mxAmiXej3mQw400tuM+/ZiDbLQ44bVXGJxW6ohBfHzN90cETjT9BS1mxJ/FtTKeK9QzDwAVNrDtlgoviYsht1niGuL8DV3bs1BKcXx/q60+0D61Ov+AYg4+DnDw6GTIA2wY2/ugcqLybVPXoZE7WD9I5vl537hgpJwe7zIlOke8YyTYAvvm14nL2DMZnFu4Cjobr2M1NQbl8ZSk3E3qozrA8Zu5k/7XPGrrJERXEKoFAspVrNVeA9T+LFHsLIyS6baKFoUym1C78+Cp5v2+t+0gNdldDCYEWJZ5zRvCTKr6AuUI4SOJTGJiE3lyW0RjZ8IRw51H465zdPXZhLc+lPPVhZ9nhCCQFyn0s3jHWegEzamINgYFEZY7PIHxJ+L4szzuGgv04I7ibmourTSzmw/TZEisJEugM5wOzWT9o2oI+1SadiyEZ9ZZx5i5LJB0xG4D24NPWyWA8l4gpRXIMhYu/WrBmxhKXmQBBuYGm7EYx09arURyqWtohs2JUkBxJS5cx4UHZiEhGDTcH2pamIeSaSfFATGFpTVOzJQ+3Nnge1dMS6PxHLuCARCmb8ptsMoBPr2NcHpZZQnFtB8uGJTwj7XW+9UN8FKFI9NQCqzTmAbzkrBvlpGVtYt+IDExdjFC4uImQFz8cQXgr8OJh7R6sjOQw3zWA37uDBueNqSQ+NDVPVJxNMEbb1OSP/KFUcs3thedGM8qTJHf8m78e6rU5MN80l+yPHLU+UTMxisUKkW88Pg09DQz+cm6mP7TCcdLEGSrHGCk14bW2pwJNvlfNgQ11diRvEDOxvzA4mBizAHYsp+Zi9AFZ7TCk7Xu0TUOCh+Tl4PYEcrndFkoSgdIUPup0mnAgt1QSMR7ngLgMl45wvfTcnCc9YShOlRBWyZB+O83U1WgbBwtuSxPyyfCcBimxrW8R7wghNGk7tauQDJ/evyNH7zBxlvKW8US91IB1DglU4c8bo2E/6OscmHD84T3PYVLiBz7+dGq0XDcA/NWBC6qcLspI2ux5sww+VSUOwC5shFQ37nHZcgd1uFWmRz5Nt6gPtBF0xYqfhyM1FshjiBSLtkBxJcbrhjn6IwfFTOU0U5YjL7IwnPTwrtXhJx4D2mjnmCsAbLyJy3kMq7VtrdU158JR48xaFyJcsnf2XmDK9jkKrt2HIUoeMR6WbpzUE66xScwXoABvnTchP3R6xApk7FzFHTRS2Sp0Vafyct8I7H4LIwqLOLNSVHJTbLblezr3TIU+fT4xqCdzScyViV8AqJGrE0g0AP/1gxPwTVA7rGiOXOT7Rz01uudXWOOYhSTLDmN3Gr1sf+h9/ZGLE5BhoXcHDbGzNfzVC8GgmXyzQk8Q0WK4hOqlldcZpqAeWUlwphCRhYKed1cm//d9HZNgxCkLt3VCSOgITPGiGj00feSlablq7xTx52pqEl9QLFDjsOE9K5ifB3lEhTDu5VooXIaTav8Vyfm8UjaaNkaTDy0+6lQHs6BTo2p9ycG8pgF00nC3iqAHpy3g1EUMJvCRYEEOWnNIMCnmwXeoObKh60NtlBzJc0zVBngTHEDMLRkiUnpPNkSsZtp2OSb2uCWzUyeMBVM4uJYbKh/zYexlhSt1XTuK3NKTZox5O82r5rIOE9K1gaPo0E3SKKIVOeRmzbQNzYbaCb+Gl681KxLD1xxwBYa9j2PZksze5aWZLiTu/HR0UYp9ga3lMYkujQm3BDiD0zA3/fYLbnxrRyxbWHupr0f28lBiHMZTEOTCN67VIzU6oE+Lq2pIhv5vFj1+hy1buqnZvBDmoPHwYPu3MV89c1YdjlNXg8Of8s4jij0dfLnEyJv0Z4GIXMs5CXi2zv7FNKi20vIIRLNTybbqvEm4AWLU7/sXnG6DfJIThzPfRtmfbk7nrEgcCgOQarAl7LXpy5potVJbAUFoS6+mEBS1Pwed01AIoQe7Cdx2qA+/m6Dbam7pgOn1gc3OEgCBiv8lgeRia1cB5buIX84lRyuCklz9JxayrwfqOUAd/k8BIL0IMJanewTQcGY+IhNCePL1m82Zg11iyiYbuRQIilCxkVlj0dq/EHkIvO5dYmR+loe4dtzAP74WRtNyH1FnljhODFc1e06obX9gKRVhs3mX20WusKkcduybFO7Mhe9GUMpIN8vZ+teYBR8u9jXgtuYOYjBPbm4/JUdBU0EvTvSILaW3jZWsX3ISfFuDLKw8nNVDwjQ88x7qZ1WoL+tB3pC2eVOQF4vycABTODDfRVh9PciYJB53EgU0cLh3NfA84Ta2sJ9MouiecYc+5VDqxgQNMgBVwHzDRBU8zl7SxNSUmvlm3u0MNohVcLe3gsn29EuD7hCM/yhixpxcB+eMeDlPUYe0ji35ziq0FxLoBWUhkYJyk6cFIJNcOPIr1ukCRG+Jq6rihp2LV1qBFyF6V82S9lSM6BrW6lrgmVebVwMTkuMjarEvs8g+bD+YjcFGcwczrDszOw0hhg0KsavHuYEWGEOFT5ZCr6swNYTY3t1B1EWS2azUKCLSP44VpBmpxEfsrWyW7TeYb4F7qix6TTHy8Hos9WV3aKYnjbQw66k7nZ5xZOEz3AgAvVxKeuDf0qkxobfqMGNd/xOi9D6exwtV8XcSpfYxNzSMapaI2QmMoD12lLjVqtNV1+SydN5VHfHRp8M7gVc1dfUA5URj0IKoTa0CbpMAN5xN67JN3MnOQuoFEfnW1TRmPHj65u1RRIbQ5FpeaTto9RQWKEnRknrMs+rqQxh0ZyqRmMdHiad88ByNk5x3LwoiPPIIh6uOZVr27+mDtpwAALvdc1NspA25HpYq1NraUfIIgcgu3YDQUMXg/8L5OycKLg2RfrVT09vvv1Nc2SM+jovnIBKvIiq3eNBEcqwxREhB6RToFs+WrvikIfXZojQ/ioPqTWatF9FoF3YFsCqmYz7xuLSdgUhasdyAEnDhhIyR/XYsfCSepxliI4kgIr5b1yavRG3ppLgf+62TNPkQcXHq0Prewx/umPB6HfWpNSdpSZaazQp4OcYosbldGSI9w2qC5tfBs3Jc3EtZeNQtUbkrRCNhNB1RFnIMzNB/N82aK2IWfq44okQxP2GF8N60QjTWv7A5x8D7wKEfM1DFtNVZhTg7HJtHxRI/OzYhwwMX1PNV/Aqc2MrwbxJ2RvrigE8tPKQf5m5a7T3GhYuE9PJYPQovANIGV+KnIR25LWmMzlBXXypNS7SulwnAM+7KAVw7WFbmzzpLFgKIgQMi0pNR+lr3c9L6dZMAniIqdhjWN7ZrkSpZTV4DZXNRId/HZC2ys1WS9DLfxT48uA3ahfFafiY9IjkTkqZcgmmHV5RqCiSs9CO8xMajEIwcq2mKztygrojqr2zvm7XAYcGITsI1co+d030NbIOvU0kKKHnkYaYzUkwOmIx6h+NYDqjgK+9G/efC8IBap+QR2WIKhGsfCLZdpbD1ed9QRsVbllA2s9K5CAVLcsQyLMfQdcEaZXsqaMn6i502QNyavoHJij09VkQ+u3
qH7N7KqQkthVKwGVm13VS9nL8vnnIVGt5EInrwIVFBO4hDHNC+Cmz4eruyqKtojpcEtLhrld19ywLc4XB6GrfxKdq/P6Q4/z/q3p9ixpvgvoza/vX1BF3zUnqgvNvxFPmc+cfmxRBrLzUkTjuIwwEVYEsCAZDaxplamBHeoZJXg3l2i5ZsrirDfpEDHqu8oCe7JP28jBZ+LgSIhIuiJdkRzxr/RAIu64d8QdkIfeEbg5fEsK8T8bNSa5gY7jcdNUmPhwyo+a1HcKymutGo1XzWeaMtKA93QIEV9LUYd9xz5Cmyl1SxtqEMMck04w5vjY66JlFead4e1dhudfcLhcglP+0VTbM0RdQgNSq9oiecLdA7ETkM/sVLksV285heWQwzn7HzLq8wcXees7IAFTrLUMvxiTgaSaCFG0Wx9WRhMvfpLygoe5yg9xll6nYIlQlk4lhk4SGGQSzawCXLHA/x1z63RXhr0IhYf7lPHdM2AuqNqV7+tR+PpDDdwWYMJCdbgikFtEJAkGwo5lQJi3mGLXEjksZegmXEc3ItZln3tfeimZkf7FMNQcJy+u+7SPowcPqej7KzWC30LDjI47RmCKgwjCqNUQUvAlTB1ZlXelcE0i/H108/cIVOstTN46rfwS8llu9BgIUboeraCks3gEHjTNhXOjmTMZfwRCMCKR4f4GBNvfBI8jvB7gtd7JySLPP1zn+ZHD/3FnbSVhbKbiBGWKgZkfXwyDqufN/IMycrGCx5sHTyD6zFygS/dyx5FrHzRqd4b0ImpSiaVnKlNm+rSX2eOUCYqceTvoNNnk1+cqQSqQSVQarM8qndKlDgIAgECdzx2rnCVAxgTWgxQjFac2I1FtQ8D4j9QW/e7oEgkkLdZV7JP8ugI2ltaujWhVbe5SjYHiGmegIq+G2tM7HIxWzr05x5vGZbZVXkqhQ9cA9Vn9dSpqjgFiG8pefOFZP5p6COYoO1aY4QfZzSRlc1I7ZYRorWOvVy9FXh/FxMccpRA1GsU1Zk+0RRD//iE0FuoKEcDCTQOZCwFUrioF2SMeOvgoICT2TIPufBsLJVoqQOLmZRzQz5BnR5sUVEHp+UcrLmDKNwv3Iqo5xSV+YLXAV2iCOQwVSSheVaFRNnaTibHMhH2eqQR5L7Z2fZ4ElSWeOpAOwOhz1zq+rhSUTmnOR8p7SbdizJ3ZPbk1ipQ4pTXW4qjEhZaJifyUQrMlwfrHMnOKxCELBAVuhZOWIVZhneTdKsChBZ82UJCbuCcH+IXOkAwVbOL8sJfnXIlCbi5iBHobcorAkGQLYTxl/JJNpMXUo6Znhem0CHW6fjodaagSlOHqveue1FJdAQ6imF7wIvF7O2EqgAFnZMNR37ktBR3wngOaUzH5g93cptU6xS9MgAr4tR9jK/bzXDhM77pSQ1TgQlDYgyYKNOoZJXExglYCek6vYtGajlRyXXaDQefpbz1irq3x7J3YHqDPQIJPkRx5HIDhuuA6ugiAQu8g5c+eWyGtwuds7qHui7sh4JvdgE5TLQd4rG5K60gFjwgh5S7Sm5mjq9DZiLtaqzIuoYKwd7Jm3bMftDO2FsJDDok7EZPApQ6i8UNF7BpCmcOliQD/0K3x18/35NDt5SgHQ9jbFIBYAucBkSnp2NsFBn70/qV8GYS1r5D22MF3k8aZlqOVdLPmKyVPU2TPIbsqxZHxrcxZsg++9a5s2wuxitw5+4K8ExQX5szcSNpPs6mfTPmalv4JEeL7L0neotYkcdWas6UEuJaKmcDKcXfK7W1zh3XH3bxuCm8cgpSDOVH6wSU+ryvywOJ3hDIIR4g6lPrjIR7PbtDnQUydj4cUe751sB3XkA+R7es4cvEuidm2N12S4KMLie7FGEi1ZbNHh5snemr9mS8PTzNpycP4H1xvrrBzTTfyG0xGmEvX90dse6Veg1q278CkAyWgYp15g4m9zFDmUIGsLk6/KTRhsFqpCvKNQpzBIslHY1Gda48VqEkcMQ12dPyG4O8x0OkalqH1X6NRjo2PM0wajSwvMlxWBY7zywYpvO+2L24sKEzgH4Hcoe5jb1t9Y9peoGRKWHOF5WInt70QSrc4ElUoIuvzOB+0pzvjck8jBoUoUnZdmDb7qDr9DHuv+nMcvu56+DnBNXafB8hKUteUrdx/fFdHOVDLV5T6Ei7ALxStprpYOiolq0EkJGeg4rqIF4gKCMB0t8MlJq1+rVmOUBz07vvcxYhzfZ537BsmhVjfXCqu/U9G+/WomKD+yjrEzo9LoDEWo2jkYYxHi4If71GBMs33OH+hpEgQrL5azg/mQ9nZhag8FLw9XOoUZiMcz1nFEAC2O8E0FnncmrBf1IISvccPz7vvvmQ/Al9d1FsKvPnO+h6x0Og+NEYDBCn+WDopfP4rol0CGy9zRCzwRrh80ZF9ojeWQTcs5HiIrpgNruNLk1vKUg2H6KPNTX9bXEVrCBcMwsqIsvb9Sje7OiTasi7dLapcxr7oECHwr5FZ8BPVKa+to6IELcjDnPBZLM0stJ5bTH8LpyWdBHowAjORuQ5NDzC1FN02GNEZJXyhuVGfBdFimnKSkPEGGxxhqgd+hE58mTHhRCflGaQ5rfQvl6cWGLDRCzLN1B+tx6y91E0YtcP2QS78f1ovNxk6jdiphZT/XnCNHx5PU1h4wncRrtwDWB6XsqolLF+tkeEnuxSrhOfUWnK/543In7605Hf+YKpX76E6jffuPWf/1uehxj/8n1mX/f+4lvhYO7/AQ==</diagram></mxfile>

2104.12280/main_diagram/main_diagram.pdf
ADDED
Binary file (27.8 kB).
2104.12280/paper_text/intro_method.md
ADDED
@@ -0,0 +1,166 @@

# Introduction

3D-Lidars and IMUs are ubiquitous on autonomous robots. 3D-Lidars provide a 3D point cloud of the area in which the robot operates and are not affected by illumination, which has made them beneficial in several robotic applications. However, owing to the spinning nature of the sensor and the sequential manner in which it produces measurements, Lidars suffer from significant motion distortion when a robot performs dynamic maneuvers. This motion distortion arises at highway operating speeds for self-driving cars and also in other robotic applications such as autonomous flight and off-road robotics. Motion-distorted scans deteriorate the results of Lidar Odometry/Simultaneous Localization $\&$ Mapping (SLAM) algorithms.

Inertial Measurement Units (IMUs) can be used to mitigate the effect of motion on spinning Lidars. IMUs measure linear acceleration and angular velocity at frequencies higher than the spinning rate of a Lidar. State-of-the-art Lidar Odometry/SLAM algorithms use IMUs to correct motion distortion (also called deskewing) in Lidar scans and produce better estimates of the robot's position and the surrounding map. In order to use the IMU's measurements, these algorithms require that the spatial separation, or extrinsic calibration, between the IMU and Lidar be known *a priori*, so that data from both sensors can be expressed in a common frame of reference. In robotics labs, where researchers generally assemble sensor suites from products procured from different sources, the extrinsic calibration between a Lidar and an IMU is usually unknown. Therefore, it is important to estimate the extrinsic calibration before using any Lidar-Inertial Odometry or SLAM algorithm. Although IMUs operate at frequencies much higher than the spinning rate of Lidars, the number of times a Lidar fires a beam to acquire measurements during a 360$^{\circ}$ scan (*viz.*, the firing rate) is significantly higher than the IMU frequency. This requires interpolation/extrapolation techniques to match IMU rates to Lidar firing rates so that motion compensation can be done using IMU measurements. [@lincalib1] uses Gaussian process regression, [@lincalib2] uses continuous time splines, while we use a discrete time IMU state propagation model under an EKF framework to compensate for the effect of motion during the calibration process.

{#fig:LidarIMUSystem width="45%"}

# Method

Our goal is to determine the spatial 6 DoF separation, *viz.* the extrinsic calibration $T^{I}_{L} \in SE(3)$, between a 3D-Lidar and an IMU (Figure [1](#fig:LidarIMUSystem){reference-type="ref" reference="fig:LidarIMUSystem"}). We divide the calibration process into three steps, *viz.* data collection (Section [5.1](#sec: datacollection){reference-type="ref" reference="sec: datacollection"}), inter-sensor rotation initialization (Section [5.2](#sec: rotationestimation){reference-type="ref" reference="sec: rotationestimation"}, Figure [\[fig:rotationHEC\]](#fig:rotationHEC){reference-type="ref" reference="fig:rotationHEC"}) and full extrinsic calibration (Section [5.3](#sec: fullstateestimationusingEKF){reference-type="ref" reference="sec: fullstateestimationusingEKF"}, Figure [\[fig:KFBlockDiagran\]](#fig:KFBlockDiagran){reference-type="ref" reference="fig:KFBlockDiagran"}).

Data collection is an important step in any extrinsic calibration algorithm. Since our sensor suite involves an IMU, a proprioceptive sensor which can sense only motion, we sufficiently excite [^4] all degrees of rotation and translation so that all components of the extrinsic calibration parameter are completely observable. However, as remarked in [@KalmanFilterBasedCamCalib], motion excitation along at least two degrees of rotational freedom is essential for observability. This is demonstrated in Section [7.5](#sec: motionandconvergence){reference-type="ref" reference="sec: motionandconvergence"}. Although [@KalmanFilterBasedCamCalib] deals with Camera-IMU calibration, the remark about observability also extends to the 3D-Lidar IMU calibration problem, because in both cases the exteroceptive sensor (camera in [@KalmanFilterBasedCamCalib] and Lidar in our case) is used as a pose sensor. The data collection XYZ trajectory is shown in Figure [3](#fig: Lidarimutraj){reference-type="ref" reference="fig: Lidarimutraj"}. For notation, let $M$ denote the number of Lidar scans collected during data collection.

<figure id="fig: Lidarimutraj" data-latex-placement="!ht">
<img src="figures/lidaimudatacollectiontrajectory.png" style="width:35.0%" />
<figcaption>Trajectory of the Lidar-IMU system</figcaption>
</figure>

We estimate the rotation between the IMU and 3D-Lidar by using the rotation component of the motion based calibration constraint (Equation [\[eqn: fullmotionconstraint\]](#eqn: fullmotionconstraint){reference-type="ref" reference="eqn: fullmotionconstraint"}). The rotation component is given in Equation [\[eqn: rotationmotionconstraint\]](#eqn: rotationmotionconstraint){reference-type="ref" reference="eqn: rotationmotionconstraint"}. $$\begin{align}
R^{I_{k-1}}_{I_{k}} R^{I}_{L} &= R^{I}_{L} R^{L_{k-1}}_{L_{k}} \label{eqn: rotationmotionconstraint}
\end{align}$$

<figure id="fig:InitBlockKFBlock" data-latex-placement="!ht">
<figcaption>Figure <a href="#fig:rotationHEC" data-reference-type="ref" data-reference="fig:rotationHEC">[fig:rotationHEC]</a> presents the process of determination of an initial estimate of <span class="math inline"><em>R</em><sub><em>L</em></sub><sup><em>I</em></sup></span>, Figure <a href="#fig:KFBlockDiagran" data-reference-type="ref" data-reference="fig:KFBlockDiagran">[fig:KFBlockDiagran]</a> presents the Extended Kalman Filter which utilizes the initial estimate of <span class="math inline"><em>R</em><sub><em>L</em></sub><sup><em>I</em></sup></span> obtained in Figure <a href="#fig:rotationHEC" data-reference-type="ref" data-reference="fig:rotationHEC">[fig:rotationHEC]</a> to generate both <span class="math inline"><em>R</em><sub><em>L</em></sub><sup><em>I</em></sup></span> and <span class="math inline"><sup><em>I</em></sup><em>p</em><sub><em>L</em></sub></span> together.</figcaption>
</figure>

In our implementation we use an axis-angle representation of the sensor rotations. Using the axis-angle representation, Equation [\[eqn: rotationmotionconstraint\]](#eqn: rotationmotionconstraint){reference-type="ref" reference="eqn: rotationmotionconstraint"} can be reformulated as $$\begin{align}
^{I_{k-1}}r_{I_{k}} &= R^{I}_{L} {}^{L_{k-1}}r_{L_{k}} \label{eqn: rotationmotionconstraintAxisAngle}
\end{align}$$

Here $^{I_{k-1}}r_{I_{k}}$ $\&$ $^{L_{k-1}}r_{L_{k}} \in R^{3}$ are axis-angle representations of $R^{I_{k-1}}_{I_{k}}$ $\&$ $R^{L_{k-1}}_{L_{k}}$ respectively. As shown in Figure [\[fig:rotationHEC\]](#fig:rotationHEC){reference-type="ref" reference="fig:rotationHEC"}, we use NDT scan matching [@NDT] to estimate the Lidar rotation $R^{L_{k-1}}_{L_{k}}$ between consecutive Lidar scans. We integrate gyroscope measurements between two scan instants to estimate the IMU rotation $R^{I_{k-1}}_{I_{k}}$. An objective function (Equation [\[eqn: objectivefunction\]](#eqn: objectivefunction){reference-type="ref" reference="eqn: objectivefunction"}), with unknown $R^{I}_{L}$, is formed by squaring and summing the constraint given in Equation [\[eqn: rotationmotionconstraintAxisAngle\]](#eqn: rotationmotionconstraintAxisAngle){reference-type="ref" reference="eqn: rotationmotionconstraintAxisAngle"} for each ${}^{L_{k-1}}r_{L_{k}}$ and the corresponding ${}^{I_{k-1}}r_{I_{k}}$. $$\begin{align}
\label{eqn: objectivefunction}
P = \sum_{k=1}^{M-1} \left\lVert{}^{I_{k-1}}r_{I_{k}} - R^{I}_{L} {}^{L_{k-1}}r_{L_{k}}\right\rVert^{2}
\end{align}$$ In order to obtain an estimate $\hat{R}^{I}_L$, Equation [\[eqn: objectivefunction\]](#eqn: objectivefunction){reference-type="ref" reference="eqn: objectivefunction"} needs to be minimized with respect to $R^{I}_L$, as shown in Equation [\[eqn: CostFnMinimization\]](#eqn: CostFnMinimization){reference-type="ref" reference="eqn: CostFnMinimization"}. We use the Ceres [@ceres-solver] non-linear least-squares solver to solve the optimization problem in Equation [\[eqn: CostFnMinimization\]](#eqn: CostFnMinimization){reference-type="ref" reference="eqn: CostFnMinimization"}. $$\begin{equation}
\label{eqn: CostFnMinimization}
\hat{R}^{I}_L = \mathop{\mathrm{argmin}}_{R^{I}_L} P
\end{equation}$$

In this step, the sensor rotations $R^{I_{k-1}}_{I_{k}}$ and $R^{L_{k-1}}_{L_{k}}$ used to estimate $R^{I}_{L}$ have certain shortcomings. First, $R^{I_{k-1}}_{I_{k}}$ is calculated by integrating the gyroscope measurements without taking the gyroscope bias into account; second, $R^{L_{k-1}}_{L_{k}}$ is obtained from NDT scan matching of Lidar scans which may have significant motion distortion due to the motion undertaken by the sensor suite during the data collection step (Section [5.1](#sec: datacollection){reference-type="ref" reference="sec: datacollection"}). This step therefore only provides an initial estimate of $R^{I}_{L}$, which is used to initialize the EKF based algorithm described in Section [5.3](#sec: fullstateestimationusingEKF){reference-type="ref" reference="sec: fullstateestimationusingEKF"}.
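
The paper solves Equation [\[eqn: CostFnMinimization\]](#eqn: CostFnMinimization){reference-type="ref" reference="eqn: CostFnMinimization"} with Ceres; as a point of reference, this rotation-only least-squares problem also admits a closed-form solution (Wahba's problem / orthogonal Procrustes). A minimal NumPy sketch, with hypothetical array inputs, is:

```python
import numpy as np

def init_rotation(r_imu, r_lidar):
    """Closed-form estimate of R^I_L from paired axis-angle increments.

    r_imu, r_lidar: (N, 3) stacks of {}^{I_{k-1}}r_{I_k} and {}^{L_{k-1}}r_{L_k}.
    Minimizes sum_k ||r_imu_k - R @ r_lidar_k||^2 over R in SO(3).
    """
    H = r_lidar.T @ r_imu                        # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # reflection guard: det(R) = +1
```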

The estimation of the inter-sensor translation ${}^{I}p_{L}$ depends on the IMU translation ${}^{I_{k-1}}p_{I_{k}}$ (Equation [\[eqn: fullmotionconstraint\]](#eqn: fullmotionconstraint){reference-type="ref" reference="eqn: fullmotionconstraint"}), which involves double integration of IMU accelerometer measurements; performing double integration without knowledge of the biases would introduce significant errors. We therefore use an Extended Kalman Filter (EKF) which, in addition to estimating $R^{I}_{L}$ $\&$ $^{I}p_{L}$, also estimates the accelerometer $\&$ gyroscope biases and the pose $\&$ velocity of the IMU at the Lidar scan instants. The block diagram for the EKF approach is shown in Figure [\[fig:KFBlockDiagran\]](#fig:KFBlockDiagran){reference-type="ref" reference="fig:KFBlockDiagran"}. The states we estimate are: $$\begin{equation}
\mathcal{X} = \{ {X^{G}_{I_{k=0:M-1}}}, T^{I}_{L} \}
\end{equation}$$ Here $M$ is the number of scans and $X^{G}_{I_k}$ is the IMU state at scan timestamp $k$. Our EKF state vector has an evolving component and a static component. The evolving component is the IMU state at scan timestamp $k$: $$\begin{equation}
\hat{X}^{G}_{I_k} = \{^{I_{k}} _{G}\hat{\Bar{q}}, ^{G}\hat{\textbf{v}}_{I_{k}}, ^{G}\hat{\textbf{p}}_{I_{k}}, \hat{\textbf{b}}_{g, k}, \hat{\textbf{b}}_{a, k}\}
\label{eqn: evolvingstatevector}
\end{equation}$$ $^{I_{k}} _{G}\hat{\Bar{q}}$ is the unit quaternion encoding the IMU orientation, such that the rotation matrix $R^{\top}(^{I_{k}} _{G}\hat{\Bar{q}})$ is the IMU orientation with respect to the global frame $G$. $^{G}\hat{\textbf{p}}_{I_{k}}$ $\&$ $^{G}\hat{\textbf{v}}_{I_{k}}$ are the IMU position and velocity vectors ($\in R^{3\times1}$) respectively in frame $G$. $\hat{\textbf{b}}_{a,k}$ $\&$ $\hat{\textbf{b}}_{g,k}$ are the accelerometer and gyroscope bias vectors ($\in R^{3 \times 1}$) respectively. The static component of the EKF state vector is the extrinsic calibration, parameterized as $T^{I}_{L}$. $T^{I}_{L}$ is formed by the rotation matrix $R^{I}_{L}$ and translation vector $^{I}\textbf{p}_{L}$; however, in the EKF formulation we parameterize the rotation $R^{I}_{L}$ as a unit quaternion $^{I}_{L}\bar{q}$.

We use the discrete time implementation given in [@KalmanFilterBasedCamCalib], [@OpenVins] to propagate the EKF state (Equations [\[eqn: propeqn1\]](#eqn: propeqn1){reference-type="ref" reference="eqn: propeqn1"} - [\[eqn: propeqn7\]](#eqn: propeqn7){reference-type="ref" reference="eqn: propeqn7"}) from IMU timestamp $i$ to $i+1$. The gyroscope and accelerometer measurements, $\bm{\omega}_{m,i} \in R^{3\times1}$ $\&$ $\textbf{a}_{m,i} \in R^{3\times1}$ respectively, are assumed to be constant during the IMU sampling period $\Delta t$. In the following equations ${}^{G}\textbf{g}$ is the acceleration due to gravity in the global frame $G$. $$\begin{align}
^{I_{i+1}} _{G}\hat{\Bar{q}} &= \exp \bigg(\frac{1}{2}\Omega(\bm{\omega}_{m,i} - \hat{\textbf{b}}_{g,i})\Delta t\bigg) {}^{I_{i}} _{G}\hat{\Bar{q}} \label{eqn: propeqn1}\\
^{G}\hat{\textbf{v}}_{I_{i+1}} &= ^{G}\hat{\textbf{v}}_{I_{i}} - {}^{G} \textbf{g} \Delta t + \hat{\textbf{R}}^{G} _{I_{i}} (\textbf{a}_{m, i} - \hat{\textbf{b}}_{a, i}) \Delta t \label{eqn: propeqn2}\\
^{G}\hat{\textbf{p}}_{I_{i+1}} &= ^{G}\hat{\textbf{p}}_{I_{i}} + ^{G}\hat{\textbf{v}}_{I_{i}} \Delta t - \frac{1}{2} {}^{G}\textbf{g} \Delta t ^{2} \nonumber \\&+ \frac{1}{2} \hat{\textbf{R}}^{G} _{I_{i}} (\textbf{a}_{m, i} - \hat{\textbf{b}}_{a, i}) \Delta t^{2} \label{eqn: propeqn3} \\
\hat{\textbf{b}}_{g, i+1} &= \hat{\textbf{b}}_{g, i} \label{eqn: propeqn4} \\
\hat{\textbf{b}}_{a, i+1} &= \hat{\textbf{b}}_{a, i} \label{eqn: propeqn5} \\
^{I}_{L}\hat{\Bar{q}}_{i+1} &= ^{I}_{L}\hat{\Bar{q}}_{i} \label{eqn: propeqn6} \\
^{I}\hat{\textbf{p}}_{L, i+1} &= ^{I}\hat{\textbf{p}}_{L, i} \label{eqn: propeqn7}
\end{align}$$ In Equation [\[eqn: propeqn1\]](#eqn: propeqn1){reference-type="ref" reference="eqn: propeqn1"}, $\exp()$ is the matrix exponential (Equation 96 in [@Trawny2005IndirectKF]), $\Omega(\bm{\omega}) = \begin{bmatrix}
-[\bm{\omega}]_{\times} & \bm{\omega} \\
-\bm{\omega}^{\top} & 0
\end{bmatrix}$ $\&$ $[\bm{\omega}]_{\times} = \begin{bmatrix}
0 & -\omega_{z} & \omega_{y} \\
\omega_{z} & 0 & -\omega_{x} \\
-\omega_{y} & \omega_{x} & 0 \\
\end{bmatrix}$. The gyroscope and accelerometer measurements $\bm{\omega}_{m,i}$ and $\textbf{a}_{m,i}$, used to propagate the evolving state (Equations [\[eqn: propeqn1\]](#eqn: propeqn1){reference-type="ref" reference="eqn: propeqn1"}-[\[eqn: propeqn7\]](#eqn: propeqn7){reference-type="ref" reference="eqn: propeqn7"}), are modelled as: $$\begin{align}
\bm{\omega}_{m} &= \bm{\omega} + \textbf{b}_{g} + \textbf{n}_{g} \nonumber\\
\textbf{a}_{m} &= \textbf{a} + \textbf{R}^{I}_{G} {}^{G}\textbf{g} + \textbf{b}_{a} + \textbf{n}_{a}
\label{eqn: imumodelcontinous}
\end{align}$$ Here $\textbf{n}_{g}$ $\&$ $\textbf{n}_{a}$ are white Gaussian noise. Discretizing and taking the expected value, Equation [\[eqn: imumodelcontinous\]](#eqn: imumodelcontinous){reference-type="ref" reference="eqn: imumodelcontinous"} can be written as: $$\begin{align}
\bm{\omega}_{m, i} &= \hat{\bm{\omega}}_{i} + \hat{\textbf{b}}_{g, i} \nonumber \\
\textbf{a}_{m, i} &= \hat{\textbf{a}}_{i} + \hat{\textbf{R}}^{I_{i}}_{G} {}^{G}\textbf{g} + \hat{\textbf{b}}_{a,i}
\label{eqn: imumodeldiscrete}
\end{align}$$
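
As a concrete illustration of Equations propeqn1-propeqn7 above, the following NumPy sketch performs one propagation step. Orientation is kept here as a rotation matrix $R^{G}_{I}$ (the quaternion form in Equation propeqn1 is equivalent); all function and variable names are illustrative:

```python
import numpy as np

def so3_exp(phi):
    """Rotation matrix from an axis-angle vector (SO(3) exponential map)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def propagate_imu_state(R_ItoG, v_G, p_G, b_g, b_a, w_m, a_m, g_G, dt):
    """One discrete IMU propagation step (sketch of Eqns. propeqn1-propeqn7).

    Biases and the extrinsic calibration stay constant across the step.
    """
    w = w_m - b_g                                    # bias-corrected gyro rate
    a = a_m - b_a                                    # bias-corrected specific force
    R_next = R_ItoG @ so3_exp(w * dt)                # propeqn1 (matrix form)
    v_next = v_G - g_G * dt + R_ItoG @ a * dt        # propeqn2
    p_next = (p_G + v_G * dt - 0.5 * g_G * dt**2
              + 0.5 * R_ItoG @ a * dt**2)            # propeqn3
    return R_next, v_next, p_next, b_g, b_a          # propeqn4-propeqn7
```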

In addition to propagating the state variables, we also need to propagate the EKF state covariance $\textbf{P}$ from IMU timestamp $i$ to $i+1$ using Equation [\[eqn: propcov\]](#eqn: propcov){reference-type="ref" reference="eqn: propcov"}. $$\begin{align}
\textbf{P}_{i+1} = \Phi(t_{i+1}, t_i)\textbf{P}_{i}\Phi(t_{i+1}, t_i)^{T} + \textbf{G}_i \textbf{Q}_d \textbf{G}^{T}_i \label{eqn: propcov}
\end{align}$$ Here, $$\begin{multline*}
\Phi (t_{i+1}, t_{i})
= \\
\setlength\arraycolsep{0.5pt}
\begin{bmatrix}
\hat{\textbf{R}}^{I_{i+1}} _{I_i} & 0_{3} & 0_{3} & -\hat{\textbf{R}}^{I_{i+1}} _{I_i} \textbf{J}_r(^{I_{i+1}} _{I_i} \hat{\bm{\theta}}) \Delta t & 0_{3}\\
-\frac{1}{2} \hat{\textbf{R}}^{G}_{I_i}[\hat{\textbf{a}}_i \Delta t ^2]_{\times} & I_3 & I_3 \Delta t & 0_3 & -\frac{1}{2} \hat{\textbf{R}}^{G}_{I_i} \Delta t ^{2} \\
-\hat{\textbf{R}}^{G}_{I_i} [\hat{\textbf{a}}_{i} \Delta t]_{\times} & 0_{3} & I_{3} & 0_{3} & -\hat{\textbf{R}}^{G}_{I_i} \Delta t \\
0_{3} & 0_{3} & 0_{3} & I_{3} & 0_{3} \\
0_{3} & 0_{3} & 0_{3} & 0_{3} & I_{3}
\end{bmatrix}
\end{multline*}$$ $$\begin{equation*}
\textbf{G}_{i}
= \begin{bmatrix}
- \hat{\textbf{R}}^{I_{i+1}} _{I_{i}} \textbf{J}_{r} (^{I_{i+1}} _{I_{i}} \bm{\theta}) \Delta t & 0_{3} & 0_{3} & 0_{3} \\
0_{3} & -\frac{1}{2}\hat{\textbf{R}}^{G}_{I_i} \Delta t^{2} & 0_{3} & 0_{3} \\
0_{3} & -\hat{\textbf{R}}^{G}_{I_i} \Delta t & 0_{3} & 0_3 \\
0_{3} & 0_{3} & I_{3} & 0_{3} \\
0_{3} & 0_{3} & 0_{3} & I_{3}
\end{bmatrix}
\end{equation*}$$

Here, $\hat{\textbf{R}}^{I_{i+1}} _{I_i} = \exp(-\hat{\bm{\omega}}_{i} \Delta t)$, $^{I_{i+1}} _{I_i} \hat{\bm{\theta}} = -\hat{\bm{\omega}}_{i} \Delta t$, and $\textbf{J}_r(\bm{\theta})$ is the right Jacobian of $SO(3)$, which maps a variation of the rotation angle in the parameter vector space into a variation in the tangent space of the manifold [@barfoot_2017]. $\textbf{Q}_{d}$ is the IMU noise covariance matrix, which can be computed as in [@Trawny2005IndirectKF] (Equations 129-130 $\&$ Equations 187-192). Computation of $\textbf{Q}_d$ requires knowledge of the IMU intrinsic calibration parameters, *viz.* the gyroscope/accelerometer noise densities and random walks (in-run biases), which can be looked up in the IMU data-sheet or determined using tools available online[^5].
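
A sketch of Equation propcov using the $\Phi$ and $\textbf{G}$ blocks above, assuming a 15-dimensional IMU error state ordered as $[\delta\theta, \delta p, \delta v, \delta b_g, \delta b_a]$ and a $12\times12$ noise covariance $\textbf{Q}_d$ (the static extrinsic part of the full state, whose transition blocks are identity, is omitted for brevity):

```python
import numpy as np

def skew(v):
    """[v]_x skew-symmetric matrix."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def propagate_covariance(P, R_Ii_to_Iip1, R_G_Ii, a_hat, Jr, Qd, dt):
    """Covariance propagation P <- Phi P Phi^T + G Qd G^T (Eqn. propcov sketch).

    R_Ii_to_Iip1 = R^{I_{i+1}}_{I_i}, R_G_Ii = R^G_{I_i}, Jr = right Jacobian
    evaluated at -w_hat * dt; block layout assumed as stated in the lead-in.
    """
    I3, Z3 = np.eye(3), np.zeros((3, 3))
    Phi = np.block([
        [R_Ii_to_Iip1, Z3, Z3, -R_Ii_to_Iip1 @ Jr * dt, Z3],
        [-0.5 * R_G_Ii @ skew(a_hat) * dt**2, I3, I3 * dt, Z3, -0.5 * R_G_Ii * dt**2],
        [-R_G_Ii @ skew(a_hat) * dt, Z3, I3, Z3, -R_G_Ii * dt],
        [Z3, Z3, Z3, I3, Z3],
        [Z3, Z3, Z3, Z3, I3],
    ])
    G = np.block([
        [-R_Ii_to_Iip1 @ Jr * dt, Z3, Z3, Z3],
        [Z3, -0.5 * R_G_Ii * dt**2, Z3, Z3],
        [Z3, -R_G_Ii * dt, Z3, Z3],
        [Z3, Z3, I3, Z3],
        [Z3, Z3, Z3, I3],
    ])
    return Phi @ P @ Phi.T + G @ Qd @ G.T
```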

The 3D Lidar sequentially produces point measurements using a rotating mechanism. When the Lidar moves, the raw scan it produces suffers from motion distortion. The calibration data collection process (Section [5.1](#sec: datacollection){reference-type="ref" reference="sec: datacollection"}) requires the sensor suite to exhibit motion excitation, which moves the points in a raw scan away from their true positions. In a Lidar scan, each 3D point is measured from a temporally unique frame and comes with a timestamp (which lies between two adjacent scan timestamps). In order to address the problem of motion distortion, we need to predict the IMU pose at the point timestamp. The IMU propagation model (Equations [\[eqn: propeqn1\]](#eqn: propeqn1){reference-type="ref" reference="eqn: propeqn1"} - [\[eqn: propeqn7\]](#eqn: propeqn7){reference-type="ref" reference="eqn: propeqn7"}) is used for IMU pose prediction at the point timestamp. Once we have an estimate of the IMU pose at the point timestamp, we use the best known estimate of the extrinsic calibration parameter $T^{I}_{L}$ to infer the corresponding Lidar pose, which can be done by exploiting the motion constraint given in Equation [\[eqn: fullmotionconstraint\]](#eqn: fullmotionconstraint){reference-type="ref" reference="eqn: fullmotionconstraint"}. For example, consider a point $x^{L}_{k_i}$ in the $k^{th}$ scan bearing timestamp $k_i$. In order to deskew this point, we manipulate Equation [\[eqn: fullmotionconstraint\]](#eqn: fullmotionconstraint){reference-type="ref" reference="eqn: fullmotionconstraint"} and use it to estimate the Lidar motion $T^{L_k}_{L_{k_i}}$ between the scan timestamp $k$ and the point timestamp $k_{i}$ (Equation [\[eqn: deskewing\]](#eqn: deskewing){reference-type="ref" reference="eqn: deskewing"}) $$\begin{align}
\label{eqn: deskewing}
T^{L_k}_{L_{k_i}} = (\hat{T}^{I}_{L})^{-1}(\hat{T}^{G}_{I_k})^{-1} \hat{T}^{G}_{I_{k_i}}\hat{T}^{I}_{L}
\end{align}$$ $T^{L_k}_{L_{k_i}}$ calculated in Equation [\[eqn: deskewing\]](#eqn: deskewing){reference-type="ref" reference="eqn: deskewing"} is used to transform the point $x^{L}_{k_i}$ to obtain a deskewed Lidar scan. Here, $\hat{T}^{I}_{L}$ is the best known estimate of the extrinsic calibration $T^{I}_{L}$ at that instant, $\hat{T}^{G}_{I_k}$ is an estimate of the IMU pose at scan timestamp $k$, and $\hat{T}^{G}_{I_{k_i}}$ is the pose of the IMU at point timestamp $k_i$, obtained using the IMU state propagation model (Equations [\[eqn: propeqn1\]](#eqn: propeqn1){reference-type="ref" reference="eqn: propeqn1"} - [\[eqn: propeqn7\]](#eqn: propeqn7){reference-type="ref" reference="eqn: propeqn7"})[^6].
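
A minimal sketch of the deskewing operation in Equation deskewing, with $4\times4$ homogeneous-transform inputs (names illustrative):

```python
import numpy as np

def deskew_point(x_L, T_IL_hat, T_GIk_hat, T_GIki_hat):
    """Map a skewed point into the scan frame L_k (Eqn. deskewing sketch).

    T_GIki_hat is the IMU pose predicted at the point's own timestamp
    via the propagation model; all transforms are 4x4 matrices.
    """
    T_Lk_Lki = (np.linalg.inv(T_IL_hat) @ np.linalg.inv(T_GIk_hat)
                @ T_GIki_hat @ T_IL_hat)
    x_h = np.append(x_L, 1.0)          # homogeneous coordinates
    return (T_Lk_Lki @ x_h)[:3]        # deskewed point expressed in frame L_k
```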

After we deskew the scan, we use NDT scan matching [@NDT] to generate Lidar motion estimates $T^{L_{k-1}}_{L_{k}}$ between consecutive deskewed Lidar scans $k-1$ and $k$. We use these Lidar motion estimates as the measurement for the EKF state update.

The State Update module requires knowledge of a measurement model, a measurement residual, and the measurement Jacobians with respect to the state variables. In this section we present the measurement model and the measurement residual, but omit the derivation of the measurement Jacobians in the interest of space. As described in the previous section, we use the result of NDT scan matching as the measurement, which is parameterized as $T^{L_{k-1}}_{L_{k}}$. We use the motion constraint in Equation [\[eqn: fullmotionconstraint\]](#eqn: fullmotionconstraint){reference-type="ref" reference="eqn: fullmotionconstraint"} to derive our measurement model. Manipulating Equation [\[eqn: fullmotionconstraint\]](#eqn: fullmotionconstraint){reference-type="ref" reference="eqn: fullmotionconstraint"} gives us the measurement model (Equation [\[eqn: measurementmodel\]](#eqn: measurementmodel){reference-type="ref" reference="eqn: measurementmodel"}):

$$\begin{align}
T^{L_{k-1}}_{L_{k}} &= (T^{I}_{L})^{-1} (T^{G}_{I_{k-1}})^{-1} T^{G}_{I_{k}}T^{I}_{L}\label{eqn: measurementmodel}
\end{align}$$ The LHS of Equation [\[eqn: measurementmodel\]](#eqn: measurementmodel){reference-type="ref" reference="eqn: measurementmodel"} is the measurement and the RHS is a function of the state variables $T^{I}_{L}$, $T^{G}_{I_{k-1}}$, $T^{G}_{I_{k}}$. So the measurement model (Equation [\[eqn: measurementmodel\]](#eqn: measurementmodel){reference-type="ref" reference="eqn: measurementmodel"}) agrees with the standard form $z = h(x)$ used in the EKF, where $z$ is the measurement and $h()$ is the measurement model, a function of the state $x$. In our case, the measurement is $z=T^{L_{k-1}}_{L_{k}}$, the measurement model is $h(x)=(T^{I}_{L})^{-1} (T^{G}_{I_{k-1}})^{-1} T^{G}_{I_{k}}T^{I}_{L}$, and the state is $x = \{ T^{G}_{I_{k-1}}, T^{G}_{I_{k}}, T^{I}_{L}\}$. Here,

$$\begin{align*}
T^{I}_{L} =
\begin{bmatrix}
\textbf{R}(^{I}_{L}\Bar{q}) & ^{I}\textbf{p}_{L}\\
0 & 1
\end{bmatrix},
T^{G}_{I_{k-1}} =
\begin{bmatrix}
\textbf{R}^{T}(^{I_{k-1}} _{G}\Bar{q}) & ^{G}\textbf{p}_{I_{k-1}}\\
0 & 1
\end{bmatrix}
\end{align*}$$ $$\begin{align*}
T^{G}_{I_{k}} =
\begin{bmatrix}
\textbf{R}^{T}(^{I_{k}} _{G}\Bar{q}) & ^{G}\textbf{p}_{I_{k}}\\
0 & 1
\end{bmatrix}
\end{align*}$$ Clearly $T^{I}_{L}$, $T^{G}_{I_{k-1}}$, $T^{G}_{I_{k}}$ depend on the state variables.

Separating the rotation and translation components of Equation [\[eqn: measurementmodel\]](#eqn: measurementmodel){reference-type="ref" reference="eqn: measurementmodel"}, we obtain:

$$\begin{align}
R^{L_{k-1}}_{L_{k}} &= \textbf{R}^{\top}(^{I}_{L}\Bar{q})\textbf{R}(^{I_{k-1}} _{G}\Bar{q})\textbf{R}^{T}(^{I_{k}} _{G}\Bar{q})\textbf{R}(^{I}_{L}\Bar{q}) \nonumber \\
{}^{L_{k-1}}p_{L_{k}} &=\textbf{R}^{\top}(^{I}_{L}\Bar{q}) \bigg[\bigg( \textbf{R}(^{I_{k-1}} _{G}\Bar{q})\textbf{R}^{T}(^{I_{k}} _{G}\Bar{q})- \mathbf{I}\bigg) {}^{I}\textbf{p}_{L} \nonumber \\&+ \textbf{R}(^{I_{k-1}} _{G}\Bar{q}) \bigg( {}^{G}\textbf{p}_{I_{k}} - ^{G}\textbf{p}_{I_{k-1}}\bigg) \bigg]
\label{eqn: measurementmodelseparated}
\end{align}$$

The measurement model evaluated at the state estimates gives us the predicted rotation and translation measurements, *viz.* $\hat{R}^{L_{k-1}}_{L_{k}}$ and ${}^{L_{k-1}}\hat{p}_{L_{k}}$ respectively. The difference between the true measurements and the predicted measurements gives us the measurement residual $\textbf{r}_k$ required for the state update (Equation [\[eqn: measurementresidual\]](#eqn: measurementresidual){reference-type="ref" reference="eqn: measurementresidual"}).

$$\begin{align}
\textbf{r}_k =
\begin{bmatrix}
\texttt{Log}(R^{L_{k-1}}_{L_{k}} (\hat{R}^{L_{k-1}}_{L_{k}})^{\top}) \\
{}^{L_{k-1}}p_{L_{k}} - {}^{L_{k-1}}\hat{p}_{L_{k}}
\end{bmatrix}
\label{eqn: measurementresidual}
\end{align}$$ Here, $\texttt{Log()}$ associates a matrix $R \in SO(3)$ to a vector $\in R^{3\times1}$ (via a skew symmetric matrix).
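
A sketch of the residual computation in Equation measurementresidual; `so3_log` implements the standard $SO(3)$ logarithm assumed for $\texttt{Log}()$:

```python
import numpy as np

def so3_log(R):
    """Axis-angle vector of R in SO(3) (inverse of the exponential map)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def measurement_residual(R_meas, p_meas, R_pred, p_pred):
    """6-vector residual: [Log(R R_hat^T); p - p_hat] (Eqn. measurementresidual)."""
    return np.concatenate([so3_log(R_meas @ R_pred.T), p_meas - p_pred])
```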

In addition to the measurement residual $\textbf{r}_{k}$, we also require the Jacobians of the measurement model with respect to the state variables in order to perform the state and covariance update. The Jacobians (Equation [\[eqn: Jacobian\]](#eqn: Jacobian){reference-type="ref" reference="eqn: Jacobian"}) are evaluated at the best available estimate of the state variables $x = \{ T^{G}_{I_{k-1}}, T^{G}_{I_{k}}, T^{I}_{L}\}$. $$\begin{align}
\textbf{H}^{T^{L_{k-1}}_{L_{k}}}_{T^{I}_{L}} &= \frac{\partial T^{L_{k-1}}_{L_{k}}}{\partial T^{I}_{L}}\bigg|_{\hat{x} = \{ \hat{T}^{G}_{I_{k-1}}, \hat{T}^{G}_{I_{k}}, \hat{T}^{I}_{L}\}} \nonumber\\
\textbf{H}^{T^{L_{k-1}}_{L_{k}}}_{T^{G}_{I_{k-1}}} &= \frac{\partial T^{L_{k-1}}_{L_{k}}}{\partial T^{G}_{I_{k-1}}}\bigg|_{\hat{x} = \{ \hat{T}^{G}_{I_{k-1}}, \hat{T}^{G}_{I_{k}}, \hat{T}^{I}_{L}\}} \nonumber\\
\textbf{H}^{T^{L_{k-1}}_{L_{k}}}_{T^{G}_{I_{k}}} &= \frac{\partial T^{L_{k-1}}_{L_{k}}}{\partial T^{G}_{I_{k}}}\bigg|_{\hat{x} = \{ \hat{T}^{G}_{I_{k-1}}, \hat{T}^{G}_{I_{k}}, \hat{T}^{I}_{L}\}}
\label{eqn: Jacobian}
\end{align}$$

These individual Jacobians are stacked together to form a consolidated Jacobian $\textbf{H}_{k}$, used for the state update when a measurement update is available. The state update equations are presented in Equations [\[eqn: updateeqn1\]](#eqn: updateeqn1){reference-type="ref" reference="eqn: updateeqn1"}-[\[eqn: updateeqn3\]](#eqn: updateeqn3){reference-type="ref" reference="eqn: updateeqn3"}. $$\begin{align}
\textbf{K}_{k} &= \textbf{P}_{k_{-}}\textbf{H}^{\top}_{k}(\textbf{H}_{k}\textbf{P}_{k_{-}}\textbf{H}^{\top}_{k} + \textbf{R})^{-1} \label{eqn: updateeqn1}\\
\begin{bmatrix}
\hat{X}^{G}_{I_{k_{+}}} \\
\hat{T}^{I}_{L+}
\end{bmatrix} &=
\begin{bmatrix}
\hat{X}^{G}_{I_{k_{-}}} \\
\hat{T}^{I}_{L-}
\end{bmatrix} \oplus \textbf{K}_{k}\textbf{r}_{k} \label{eqn: updateeqn2} \\
\textbf{P}_{k_{+}} &= \textbf{P}_{k_{-}} - \textbf{K}_{k}\textbf{H}_{k} \textbf{P}_{k_{-}} \label{eqn: updateeqn3}
\end{align}$$ Here $\textbf{K}_{k}$ is the Kalman gain, used in Equations [\[eqn: updateeqn2\]](#eqn: updateeqn2){reference-type="ref" reference="eqn: updateeqn2"} and [\[eqn: updateeqn3\]](#eqn: updateeqn3){reference-type="ref" reference="eqn: updateeqn3"} for the state and state covariance update respectively. $\textbf{R}$ is the tunable measurement covariance matrix. '$-$' denotes the estimate before the update while '$+$' denotes the estimate after the update. $\oplus$ in Equation [\[eqn: updateeqn2\]](#eqn: updateeqn2){reference-type="ref" reference="eqn: updateeqn2"} refers to generic composition, which can be algebraic addition for variables on a vector space or rotation composition for variables on $SO(3)$.
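
A generic sketch of the update step in Equations updateeqn1-updateeqn3; the composition operator $\oplus$ is passed in as a callable, since it acts differently on the vector-space and $SO(3)$ parts of the state:

```python
import numpy as np

def ekf_update(x_minus, P_minus, H, r, R_meas, oplus):
    """Generic EKF update (sketch of Eqns. updateeqn1-updateeqn3).

    x_minus: prior state (opaque object), P_minus: prior covariance,
    H: stacked measurement Jacobian, r: residual, R_meas: measurement
    covariance, oplus: state-specific composition operator.
    """
    S = H @ P_minus @ H.T + R_meas            # innovation covariance
    K = P_minus @ H.T @ np.linalg.inv(S)      # Kalman gain (updateeqn1)
    x_plus = oplus(x_minus, K @ r)            # state correction (updateeqn2)
    P_plus = P_minus - K @ H @ P_minus        # covariance update (updateeqn3)
    return x_plus, P_plus
```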

Our system (Figure [1](#fig:LidarIMUSystem){reference-type="ref" reference="fig:LidarIMUSystem"}) consists of an Ouster 128-channel Lidar and a Vectornav VN-300 IMU. The Lidar outputs scans at 10 Hz and the IMU outputs gyroscope and accelerometer measurements at 400 Hz.

2107.08929/paper.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4019e4ab072e8addfe1cfd982da966114fbf3d2e6928eef418d77cb292f29b46
+size 898576

2108.01499/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render. See raw diff

2108.01499/paper.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f36e972ad32a1da91fe9a0c10d25bc3d28c05bb2594a82c594e114938a95836e
+size 8165788

2108.01499/paper_text/intro_method.md
ADDED
@@ -0,0 +1,138 @@

# Introduction

Object detection [@girshick2014rcnn; @girshickICCV15fastrcnn; @renNIPS15fasterrcnn; @Lin_2017_CVPR] has attracted considerable attention in the computer vision community and benefits a wide range of applications. With the development of powerful convolutional neural networks (CNNs) and large-scale well-annotated datasets, the performance of object detection networks has improved remarkably. Nevertheless, the success of object detection networks depends heavily on precise but costly instance-level bounding box annotations of abundant images. To alleviate this issue, weakly supervised object detection (WSOD), which aims to learn effective detection models with only image-level supervision, has emerged as an inspiring research topic.

Existing WSOD methods [@bilen2016weakly; @tang2018pcl; @Zeng_2019_ICCV; @ren-wetectron2020] usually adopt the multiple instance learning (MIL) framework based on precomputed proposals, and most efforts have been devoted to improving proposal classification. However, the bounding boxes of most existing methods are mainly determined by the precomputed proposals, which limits precise object localization. For single-phase WSOD methods [@bilen2016weakly; @tang2017multiple; @tang2018pcl; @Shen_2019_CVPR; @li2019weakly], the precomputed proposals classified to a specific class are directly taken as the detection results. Bounding box regression branches are introduced in [@yang2019towards; @ren-wetectron2020; @Zeng_2019_ICCV] and multi-phase training is adopted in [@zhang2018w2f; @Arun_2019], but these branches are usually supervised with pseudo ground-truths obtained by selecting the precomputed proposals with the highest scores. In terms of localization performance, there remains a huge gap between WSOD methods and their fully-supervised counterparts.

Transfer learning has also been investigated to improve the localization performance of WSOD. Lee [@ubbr2018] presented a universal bounding box regressor (UBBR) trained on a well-annotated auxiliary dataset for refining the bounding boxes generated in WSOD. Instead, Uijlings [@uijlings2018revisiting] trained a universal detector on a well-annotated source dataset, which is then transferred to WSOD as a generic proposal generator. However, [@ubbr2018] and [@uijlings2018revisiting] adopt a single-stage transfer strategy, which is not tailored to WSOD [@bilen2016weakly; @tang2017multiple; @ubbr2018; @uijlings2018revisiting] and suffers from imperfect annotations in the source domain [@lin2014microsoft; @everingham2010pascal; @uijlings2018revisiting]. Going beyond [@uijlings2018revisiting], Zhong [@zhong2020boosting] trained and exploited the one-class universal detector (OCUD) in a progressive manner. In contrast to our setting, both the well-annotated source and weakly annotated target datasets are required throughout the training process of OCUD [@zhong2020boosting]. When the source dataset is private and of large scale [@sun2017revisiting; @mahajan2018exploring], it is preferable to avoid the direct joint use of the source and target datasets for WSOD with transfer learning. Instead, the owner of the source dataset can first extract knowledge from the data and then distribute that knowledge, rather than the source dataset itself, to the user for boosting WSOD.

In this paper, we follow the problem setting in [@ubbr2018; @uijlings2018revisiting] and propose a learnable bounding box adjuster (LBBA) for boosting WSOD performance. Specifically, we consider a well-annotated auxiliary dataset and a weakly annotated dataset. Our method involves two subtasks, i.e., learning a class-agnostic bounding box adjuster and training an LBBA-boosted WSOD model. In comparison to [@ubbr2018; @uijlings2018revisiting], the LBBAs are specifically designed for improving WSOD performance through a multi-stage scheme. Different from [@zhong2020boosting], only the LBBAs and the weakly-annotated dataset are used for boosting WSOD, and thus our approach is practically convenient and economical for WSOD training while avoiding leakage of the auxiliary dataset.

To better learn LBBAs from the well-annotated auxiliary dataset and exploit them to improve the performance of WSOD, we formulate the learning of LBBAs as a bi-level optimization problem and present an EM-like multi-stage training algorithm. In particular, the lower subproblem is formulated to learn a deep detection model by incorporating WSOD with LBBA-based regularization, while the upper subproblem is formulated to learn the bounding box adjuster for regressing the selected region proposals generated by WSOD towards the ground-truth bounding boxes. With such a formulation, the LBBAs can be learned for optimizing WSOD performance. To solve the bi-level optimization problem, we adopt an EM-like multi-stage training algorithm that alternates between training the LBBA and WSOD models. Given the class-agnostic and multi-stage LBBAs, the training of LBBA-boosted WSOD also involves several stages. In each stage, the learned LBBA is used to predict bounding boxes from the selected region proposals generated by WSOD, which are then used to train the WSOD models.

Nevertheless, our LBBAs improve localization performance but are limited in improving proposal classification. As a remedy, we introduce a masking strategy to improve the classification performance of the detector. Specifically, a multi-label classifier is introduced to predict category confidence at the image level, which can further suppress the scores of false-positive proposals of the WSOD network.

Extensive experiments have been conducted to evaluate our proposed method. Benefiting from the class-agnostic setting, LBBAs generalize well to new classes of objects and improve the localization performance of WSOD. Our method performs favorably against state-of-the-art WSOD methods as well as knowledge transfer models with a similar problem setting, e.g., UBBR [@ubbr2018]. The contributions of this work can be summarized as follows:

- Multi-stage learnable bounding box adjusters are presented for improving the localization performance of WSOD, and they are the core component of our proposed framework. In particular, LBBAs make it feasible to use the source and target datasets separately for training WSOD models, which is practically more convenient and economical.

- A bi-level optimization formulation, as well as an EM-like multi-stage training algorithm, is suggested to learn LBBAs specified for optimizing WSOD.

- An effective masking strategy is introduced to improve the accuracy of the proposal classification branch.

- Experimental results show that our proposed method performs favorably against state-of-the-art WSOD methods and knowledge transfer models with a similar problem setting.

<figure id="fig:pipeline">
<div class="center">
<embed src="pipeline-v3-6.pdf" style="width:6in" />
</div>
<figcaption>Illustration of our proposed method, which includes two subtasks, i.e., <strong>learning bounding box adjusters</strong> (left) and <strong>LBBA-boosted WSOD</strong> (right). For learning bounding box adjusters, we adopt an EM-like algorithm. In the <strong>E-step</strong>, the adjuster <span class="math inline"><em>g</em></span> predicts bounding boxes from proposals of <span class="math inline"><em>f</em><sup>aux</sup></span> and is supervised by the ground-truths of <span class="math inline">𝕏<sup>aux</sup></span>; in the <strong>M-step</strong>, the WSOD network <span class="math inline"><em>f</em><sup>aux</sup></span> is supervised by image labels as well as adjusted boxes from <span class="math inline"><em>g</em></span> on <span class="math inline">𝕏<sup>aux</sup></span>. For <span>LBBA-boosted WSOD</span>, the WSOD network <span class="math inline"><em>f</em></span> is supervised by image labels and adjusted boxes from <span class="math inline"><em>g</em></span> on <span class="math inline">𝕏</span>. Finally, the learned <span class="math inline"><em>f</em></span> is used for evaluation.</figcaption>
</figure>

# Method

In this work, we follow the problem setting in [@wslat2015; @msd2018; @uijlings2018revisiting; @ubbr2018] for WSOD by using a well-annotated auxiliary dataset $\mathbb{X}^{\text{aux}}$ and a weakly annotated dataset $\mathbb{X}$. In particular, $\mathbb{X}^{\text{aux}}$ is first used to train class-agnostic learnable bounding box adjusters (LBBAs). Then, we utilize both the LBBAs and any weakly annotated dataset $\mathbb{X}$ to learn a better WSOD model. For the image-level weakly annotated dataset $\mathbb{X} = \{\mathbf{I}, \mathbb{P},\mathbf{y}\}$, $\mathbf{I}$ denotes an image from $\mathbb{X}$, and $\mathbf{y}$ denotes the corresponding image-level labels. For WSOD, MCG [@APBMM2014] and selective search [@uijlings2013selective] are used to extract a set of precomputed proposals $\mathbb{P} = \{\mathbf{p}\}$ for each image $\mathbf{I}$. Besides $\mathbb{X}$, we also introduce a well-annotated auxiliary dataset $\mathbb{X}^{\text{aux}} = \{(\mathbf{I}^{\text{aux}},\mathbb{P}^{\text{aux}}, \{\mathbf{b}^{\text{aux}}\}, \mathbf{y}^{\text{aux}})\}$. For an image $\mathbf{I}^{\text{aux}}$ from $\mathbb{X}^{\text{aux}}$, $\mathbf{y}^{\text{aux}}$ denotes the image-level labels, and $\{\mathbf{b}^{\text{aux}}\}$ denotes the annotated bounding boxes. To aid WSOD, we also provide the precomputed proposals $\mathbb{P}^{\text{aux}} = \{\mathbf{p}^{\text{aux}}\}$ of $\mathbf{I}^{\text{aux}}$. To show the generalization ability of LBBA, we assume the object classes in $\mathbb{X}$ do not overlap with those in $\mathbb{X}^{\text{aux}}$.

We argue that the above problem setting is both practically valuable and convenient to implement. Although weakly-supervised learning is preferred for object detection, several well-annotated datasets, e.g., COCO [@lin2014microsoft], are already publicly available. Our problem setting allows the learned bounding box adjusters to be deployed in training detectors for new object classes, and is thus expected to be advantageous over conventional WSOD relying solely on $\mathbb{X}$. In OCUD [@zhong2020boosting], the well-annotated dataset $\mathbb{X}^{\text{aux}}$ is directly incorporated with the weakly-annotated dataset $\mathbb{X}$ for WSOD. In our problem setting, the well-annotated dataset $\mathbb{X}^{\text{aux}}$ can be safely abandoned after learning the bounding box adjusters. Then, the LBBAs can be incorporated with any weakly annotated dataset $\mathbb{X}$ for WSOD. We note that LBBAs avoid the direct leakage of the well-annotated dataset $\mathbb{X}^{\text{aux}}$ to users with a weakly annotated dataset $\mathbb{X}$, thereby being more convenient, economical, and secure in practice.

In general, our method involves two subtasks, i.e., (i) learning bounding box adjusters and (ii) LBBA-boosted WSOD. The overall training procedure is shown in Fig. [1](#fig:pipeline){reference-type="ref" reference="fig:pipeline"}. To better draw the LBBAs from the well-annotated auxiliary dataset, we formulate the learning of bounding box adjusters as a bi-level optimization problem. In the lower subproblem, we use a WSOD method and the current LBBA $g_{{t}}$ to update the object detection model $f_{{t+1}}$ from $\{(\mathbf{I}^{\text{aux}},\mathbb{P}^{\text{aux}}, \mathbf{y}^{\text{aux}})\}$, so the learned $f_{{t+1}}$ can also be represented as a function of the LBBA. The upper subproblem is then formulated to learn $g_{{t+1}}$ specified for optimizing the performance of the weakly-supervised object detector, using the well-annotated data $\{(\mathbf{I}^{\text{aux}},\{\mathbf{b}^{\text{aux}}\}, \mathbf{y}^{\text{aux}})\}$. In each stage, we first update the bounding box adjuster $g_{{t+1}}$ by fixing $f_{{t}}$, and then update the weakly-supervised object detector $f_{{t+1}}$ by fixing LBBA $g_{{t+1}}$. After several stages (${T}=3$) of training, we obtain a set of LBBA models $\{g_{{0}}, ..., g_{{T}}\}$, one for each stage.

For LBBA-boosted WSOD, the well-annotated dataset $\mathbb{X}^{\text{aux}}$ can be abandoned, and only the LBBA models $\{g_{{0}}, ..., g_{{T}}\}$ and the weakly annotated dataset $\mathbb{X}$ are required. LBBA-boosted WSOD also involves several stages (i.e., ${T}$). In each stage ${t}$, we use the current object detector $f_{{t}}$ to obtain a set of selected proposals and exploit the stage-wise LBBA $g_{{t}}$ for bounding box adjustment. Then, the adjusted bounding boxes are introduced into the WSOD model for updating $f_{{t+1}}$; a compact sketch of this stage-wise loop is given below. In the following, after introducing the baseline WSOD model used in this work, we present our solutions to the subtasks of both learning bounding box adjusters and LBBA-boosted WSOD in detail.
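
A minimal sketch of the stage-wise loop referenced above; all callables (`f_update`, `select_proposals`, the adjusters) are injected placeholders under our assumptions, not interfaces defined by the paper:

```python
from typing import Any, Callable, Iterable, Sequence, Tuple

def lbba_boosted_wsod(
    f_update: Callable[[Any, Any, Any, Any], None],    # one WSOD training step
    select_proposals: Callable[[Any, Any], Any],       # top-scoring boxes from f
    lbba_models: Sequence[Callable[[Any, Any], Any]],  # frozen adjusters g_0..g_T
    dataset: Iterable[Tuple[Any, Any, Any]],           # (image, proposals, labels)
) -> None:
    """Stage-wise LBBA-boosted WSOD training loop (sketch)."""
    for g_t in lbba_models:                  # one pass per stage t = 0..T
        for image, proposals, labels in dataset:
            selected = select_proposals(image, proposals)
            adjusted = g_t(image, selected)  # LBBA refines the selected boxes
            # f is supervised by image labels and the adjusted boxes
            f_update(image, proposals, labels, adjusted)
```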

To learn both bounding box regression and proposal classification from the weakly-annotated dataset, we adopt the method proposed in [@ijcai2018-135; @yang2019towards] as our baseline network $f(\mathbf{I},\mathbb{P};\theta_{{f}})$. Here, $\theta_{{f}}$ denotes the model parameters of the object detector. Specifically, the network $f(\mathbf{I},\mathbb{P};\theta_{{f}})$ involves a basic multi-instance-learning (MIL) branch as well as an independent bounding box regression (BBR) branch. Given an input image $\mathbf{I}$ with image-level label $\mathbf{y}=\left\{\mathbf{y}_{{1}},...,\mathbf{y}_{{C}}\right\}$ as well as $R$ precomputed proposals $\mathbb{P}_{\text{mil}}=\left\{\mathbf{p}_{\text{mil},{1}},...,\mathbf{p}_{\text{mil},{R}}\right\}$, the MIL branch generates two $R \times C$ logits $\mathbf{x}^{\text{cls}}$ and $\mathbf{x}^{\text{det}}$, which are passed through softmax layers. Then, a fusion score $\textbf{s} = \sigma_{\text{cls}}(\mathbf{x}^{\text{cls}}) \cdot \sigma_{\text{det}}(\mathbf{x}^{\text{det}})$ is computed as the element-wise product of the classification and localization scores. Finally, the image-level score of class $c$ is attained by $$\begin{equation}
\label{eqn:fusion_to_image}
\textbf{q}_{{c}}=\sum\nolimits_{i=1}^{{R}} \textbf{s}_{{i,c}}.
\end{equation}$$ The MIL branch can then be optimized by $$\begin{equation}
\mathcal{L}_{\text{wsddn}} = \text{BCE}(\mathbf{q}, \mathbf{y};{\theta}_{\text{f}}),
\end{equation}$$ where $\text{BCE}(\cdot, \cdot)$ denotes the binary cross-entropy loss. To improve detection quality, we also introduce a pseudo label mining strategy and construct instance refinement branches optimized by a set of weighted instance refinement losses $\mathcal{L}_{\text{r}}$ [@tang2017multiple; @tang2018pcl; @ren-wetectron2020].
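
For concreteness, a short PyTorch sketch of the fusion score and Equation eqn:fusion_to_image; applying $\sigma_{\text{cls}}$ over classes and $\sigma_{\text{det}}$ over proposals follows the usual WSDDN-style convention, and the clamp on $\textbf{q}$ is our assumption to keep the BCE input in range:

```python
import torch

def wsddn_image_scores(x_cls, x_det):
    """Fusion score s and image-level scores q of the MIL branch (sketch).

    x_cls, x_det: (R, C) logits of the classification and detection streams.
    """
    s = torch.softmax(x_cls, dim=1) * torch.softmax(x_det, dim=0)  # fusion score
    q = s.sum(dim=0).clamp(0.0, 1.0)  # per-class image score (Eqn. fusion_to_image)
    return s, q

# L_wsddn usage: y is a {0,1}^C image-label vector
#   loss = torch.nn.functional.binary_cross_entropy(q, y)
```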

In typical single-phase WSOD, the precomputed proposals classified to a specific class are taken as the detection results. To improve object localization performance, we follow [@ijcai2018-135] and introduce an RPN module into our WSOD network for generating region proposals $\mathbb{P}_{\text{rpn}} = \{\mathbf{p}_{\text{rpn}}\}$. Then, all proposals from $\mathbb{P} = \mathbb{P}_{\text{mil}} \cup \mathbb{P}_{\text{rpn}}$ are sent into the bounding box regression branch to generate the corresponding localization outputs. Following standard Faster R-CNN [@renNIPS15fasterrcnn], both the RPN module and the bounding box regression branch are trained by the losses $\mathcal{L}_{\text{rpn-cls}}$, $\mathcal{L}_{\text{rpn-det}}$ and $\mathcal{L}_{\text{det}}$ defined on pseudo ground-truth instances selected by refinement scores. Thus, the learning objective of our baseline WSOD model can be written as $$\begin{equation}
\mathcal{L}_{\text{wsod}}=\mathcal{L}_{\text{wsddn}}+\mathcal{L}_{\text{r}}+\mathcal{L}_{\text{rpn-cls}}+\mathcal{L}_{\text{rpn-det}}+\mathcal{L}_{\text{det}},
\end{equation}$$ where $\mathcal{L}_{\text{r}}$ and $\mathcal{L}_{\text{rpn-cls}}$ are cross-entropy losses supervised by pseudo class labels on the selected proposals, while $\mathcal{L}_{\text{rpn-det}}$ and $\mathcal{L}_{\text{det}}$ are smooth-L1 losses [@girshickICCV15fastrcnn] supervised by the proposal boxes of pseudo ground-truths. Note that we follow the same strategy as OICR [@tang2017multiple] to generate pseudo ground-truths.

We note that the bounding box regression branch in the baseline WSOD model is learned with supervision from the precomputed proposals, which are naturally not precise enough. In the subsequent subsections, we learn a set of bounding box adjusters to provide better ground-truths for supervising the bounding box regression branch, thereby benefiting detection performance. Moreover, we use the above baseline WSOD model as an example to show the effectiveness of the learned bounding box adjusters. In fact, our proposed method is independent of most existing WSOD methods and can be incorporated with them to further boost detection performance, as we illustrate in the experiments.

:::: algorithm
::: algorithmic
**Input:** auxiliary dataset $\mathbb{X}^{\text{aux}}$, adjuster network $g$, WSOD network $f^{\text{aux}}$, stage num ${T}$
**Output:** adjuster parameters $\{{\theta}_{g}^{{0}}\dots{\theta}_{g}^{{T}}\}$
Initialize ${\theta}_{g}^{{0}}$ on $\mathbb{X}^{\text{aux}}$
$\theta^{{0}}_{f^{\text{aux}}}\leftarrow \mathop{\mathrm{arg\,min}}\limits_{\theta_{f^{\text{aux}}}} \mathcal{L}_{\text{wsod}}+\mathcal{L}_{\text{bbr}}$
**for** ${t}=0$ **to** ${T}-1$:
&nbsp;&nbsp;**E-Step:** $\theta^{{t+1}}_{\text{g}} \leftarrow \mathop{\mathrm{arg\,min}}\limits_{\theta_{g}} \mathcal{L}_{\text{bba}}$
&nbsp;&nbsp;**M-Step:** $\theta^{{t+1}}_{f^{\text{aux}}}\leftarrow \mathop{\mathrm{arg\,min}}\limits_{\theta_{f^{\text{aux}}}} \mathcal{L}_{\text{wsod}}+\mathcal{L}_{\text{bbr}}$
**return** $\{{\theta}_{g}^{{0}}\dots{\theta}_{g}^{{T}}\}$
:::
::::
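
A compact Python rendering of the algorithm above; `train_wsod`, `train_adjuster`, and `init_adjuster` are injected training routines assumed for the sketch, not names from the paper:

```python
from typing import Any, Callable, List

def learn_lbba(
    train_wsod: Callable[[Any], Any],      # M-step: fit f^aux given a frozen g
    train_adjuster: Callable[[Any], Any],  # E-step: fit g on frozen f^aux proposals
    init_adjuster: Callable[[], Any],      # class-agnostic detector on X^aux
    num_stages: int,
) -> List[Any]:
    """EM-like multi-stage learning of LBBAs (sketch)."""
    adjusters = [init_adjuster()]          # g_0 initialized on X^aux
    f_aux = train_wsod(adjusters[0])       # f^aux_0: image labels + g_0 boxes
    for _ in range(num_stages):
        g_next = train_adjuster(f_aux)     # E-step: regress proposals to GT
        f_aux = train_wsod(g_next)         # M-step: retrain WSOD with new g
        adjusters.append(g_next)
    return adjusters                       # {g_0, ..., g_T}, one per stage
```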

To formulate our weakly supervised object detection problem elegantly, we first revisit the traditional EM algorithm for weakly supervised learning. In particular, the E-step is used to update the latent variable $\hat{\text{b}}$, $$\begin{equation}
\label{eq:pre_e}
\hat{\text{b}} = \arg\max\limits_{\text{b}_{\text{latent}}}\log P(\mathbf{y}|\text{b}_{\text{latent}}) - \mathcal{L}(\text{b}_{\text{latent}},f(\mathbf{I},\mathbb{P};\theta_{{f}})).
\end{equation}$$ For WSOD with box regression, $\mathbf{y}$ is the image class labels, $\mathcal{L}$ is defined as a box regression loss (*e.g.*, the smooth L1 loss [@girshickICCV15fastrcnn] for bounding box regression), $\hat{\text{b}}$ denotes the latent bounding box variables, and $P(\mathbf{y}|\text{b}_{\text{latent}})$ is the probability of $\mathbf{y}$ given $\text{b}_{\text{latent}}$ in WSOD training. $f(\mathbf{I},\mathbb{P};\theta_{{f}})$ is the bounding box output of the WSOD network $f$ with parameters $\theta_{{f}}$. We mainly discuss $\mathcal{L}$ in the next paragraphs. The M-step is then deployed to update the model parameters $\theta_{{f}}$. $$\begin{equation}
\theta_{{f}} = \arg\min\limits_{\theta_{f}}\mathcal{L}(\hat{\text{b}},f(\mathbf{I},\mathbb{P};\theta_{{f}})),
\end{equation}$$ where $\mathcal{L}$ is a combination of the weakly supervised object detection loss $\mathcal{L}_{\text{wsod}}$ and the bounding box regression loss $\mathcal{L}_{\text{bbr}}$.

As mentioned above, previous methods utilize precomputed proposals as well as pseudo ground-truth mining in the E-step, and then update the box regression branch of the WSOD network in the M-step. However, optimizing $P(\mathbf{y}|\text{b}_{\text{latent}})$ in the E-step with only image-level supervision to improve the quality of $\hat{\text{b}}$ is difficult. Besides, when optimizing $\mathcal{L}$ in the E-step, the precomputed proposals are designed for generating region proposals for the box regression of object detection and are not suitable for final object localization. To tackle this problem, we want to use extra well-annotated data to supervise a learnable model, making it generate a more precise $\hat{\text{b}}$ in the E-step. Therefore, we first introduce a fully-annotated auxiliary dataset $\mathbb{X}^{\text{aux}}$ to provide class-agnostic localization supervision. We then introduce a class-agnostic Learnable Bounding Box Adjuster (LBBA) $g(\mathbf{I}^{\text{aux}}, \mathbb{P}^{\text{aux}}; \theta_g)$ trained on $\mathbb{X}^{\text{aux}}$, which takes the selected proposals from $\mathbb{P}^{\text{aux}} = \mathbb{P}^{\text{aux}}_{\text{mil}} \cup \mathbb{P}^{\text{aux}}_{\text{rpn}}$ as input. For each $\mathbf{p}^{\text{aux}} \in \mathbb{P}^{\text{aux}}$, $g(\mathbf{I}^{\text{aux}}, \mathbb{P}^{\text{aux}}; \theta_g)$ aims to predict a more precise estimate of the bounding box $\hat{\mathbf{b}}^{\text{aux}}$, which is then used to supervise the bounding box regression branch in WSOD. Denote by $\tilde{\mathbf{b}}^{\text{aux}}$ the output of the bounding box regression branch. We apply the smooth L1 loss [@girshickICCV15fastrcnn] $\mathcal{L}_{\text{bbr}}$ for supervising the bounding box regression branch of $f$, $$\begin{equation}
\mathcal{L}_{\text{bbr}} = \sum\nolimits_{\mathbf{p}^{\text{aux}} \in \mathbb{P}^{\text{aux}}}
{\text{Smooth}}_{L1}(\hat{\mathbf{b}}^{\text{aux}}, \tilde{\mathbf{b}}^{\text{aux}};{\theta}_{{f}}).
\end{equation}$$ Using the ground-truth bounding box $\mathbf{b}^{\text{aux}}$ from $\mathbb{X}^{\text{aux}}$, we further introduce a loss $\mathcal{L}_{\text{bba}}$ for supervising the learning of the bounding box adjusters, $$\begin{equation}
\mathcal{L}_{\text{bba}} = \sum\nolimits_{\mathbf{p}^{\text{aux}} \in \mathbb{P}^{\text{aux}}}
{\text{Smooth}}_{L1}({\mathbf{b}}^{\text{aux}}, \tilde{\mathbf{b}}^{\text{aux}};{\theta}_{{g}}).
\end{equation}$$ To this end, we suggest utilizing LBBA $g$ to generate the latent variable $\hat{\text{b}}_{\text{aux}}$ on $\mathbb{X}^{\text{aux}}$.
$$\begin{equation}
|
| 87 |
+
\vspace{-1em}
|
| 88 |
+
\begin{split}
|
| 89 |
+
&\hat{\text{b}}_{\text{aux}} = g(\mathbf{I}^{\text{aux}}, \mathbb{P}^{\text{aux}}; \theta_g) \\
|
| 90 |
+
\theta_g = \arg&\min\limits_{\theta_g}\mathcal{L}_{\text{bba}}(\{\text{b}^{\text{aux}}\},g(\mathbf{I}^{\text{aux}}, \mathbb{P}^{\text{aux}}; \theta_g))
|
| 91 |
+
\end{split}
|
| 92 |
+
\vspace{-0.7em}
|
| 93 |
+
\end{equation}$$

After introducing the LBBA $g$ into WSOD, the WSOD problem can be cast as a **bi-level optimization problem**. We now describe how this bi-level optimization is built.

**Lower subproblem.** During the M-step, the WSOD network $f$ is supervised by both the image class label $\mathbf{y}$ and the latent variable $\hat{\text{b}}^{\text{aux}}$, which is the output of the LBBA network $g(\mathbf{I}^{\text{aux}}, \mathbb{P}^{\text{aux}}; \theta_g)$. Therefore, we update the WSOD network parameters $\theta_{f^{\text{aux}}}$ by minimizing $\mathcal{L}_{\text{wsod}}+\mathcal{L}_{\text{bbr}}$, $$\begin{equation}
\label{eq:m_lbba}
\theta_{f^{\text{aux}}}=\arg\min\limits_{\theta_{f^{\text{aux}}}} (\mathcal{L}_{\text{wsod}}+\mathcal{L}_{\text{bbr}}) (\hat{\text{b}}^{\text{aux}},f^{\text{aux}}(\mathbf{I}^{\text{aux}},\mathbb{P}^{\text{aux}};\theta_{f^{\text{aux}}})).
\end{equation}$$

**Upper subproblem.** Taking the above equations into consideration, the WSOD parameters $\theta_{f^{\text{aux}}}$ can be seen as a function of the LBBA parameters $\theta_g$ (*i.e.*, $\theta_{f^{\text{aux}}}(\theta_g)$). Thus, in the E-step the upper subproblem on $\theta_g$ is defined by optimizing $\mathcal{L}_{\text{bba}}$ on the WSOD network $f^{\text{aux}}(\mathbf{I}^{\text{aux}},\mathbb{P}^{\text{aux}};\theta_{f^{\text{aux}}}(\theta_g))$, $$\begin{equation}
\label{eq:e_lbba}
\theta_g=\arg\min\limits_{\theta_g} \mathcal{L}_{\text{bba}}(\{\text{b}^{\text{aux}}\},f^{\text{aux}}(\mathbf{I}^{\text{aux}},\mathbb{P}^{\text{aux}};\theta_{f^{\text{aux}}}(\theta_g))),
\end{equation}$$ where $g$ generates adjusted bounding box regressions for the proposals given by the WSOD network $f^{\text{aux}}$. The upper subproblem is thus transferred into a fully-supervised setting.
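
To make the coupling explicit, expanding the hypergradient of the upper objective by the chain rule (our notation) gives $$\frac{\partial \mathcal{L}_{\text{bba}}}{\partial \theta_g} = \frac{\partial \mathcal{L}_{\text{bba}}}{\partial \theta_{f^{\text{aux}}}} \cdot \frac{\partial \theta_{f^{\text{aux}}}(\theta_g)}{\partial \theta_g},$$ where the second factor requires differentiating through the entire lower optimization of Eqn. ([\[eq:m_lbba\]](#eq:m_lbba){reference-type="ref" reference="eq:m_lbba"}); this is the cumbersome computation that the EM-like strategy below avoids.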

From Eqns. ([\[eq:m_lbba\]](#eq:m_lbba){reference-type="ref" reference="eq:m_lbba"}, [\[eq:e_lbba\]](#eq:e_lbba){reference-type="ref" reference="eq:e_lbba"}), directly optimizing $\theta_g$ involves the cumbersome computation of the partial gradient $(\partial{\mathcal{L}_{\text{bba}}}/\partial{\theta_f})(\partial{\theta_f}/\partial{\theta_g})$. Moreover, directly training the two networks jointly to solve this bi-level optimization problem harms the generalization ability of the LBBA, whereas an EM-like training strategy preserves it. Therefore, to avoid this issue, we suggest an EM-like multi-stage training algorithm. Suppose that $f_{{t}}(\mathbf{I}^{\text{aux}}, \mathbb{P}^{\text{aux}}_{\text{mil}}; \theta_f^{t})$ and $g_{{t}}(\mathbf{I}^{\text{aux}}, \mathbb{P}^{\text{aux}}; \theta_g^{{t}})$ are the models learned at stage ${t}$. In the E-step, we use $f_{{t}}(\mathbf{I}^{\text{aux}}, \mathbb{P}^{\text{aux}}_{\text{mil}}; \theta_f^{{t}})$ to generate and select the proposals $\mathbb{P}^{\text{aux}}$, which are then deployed to learn $g_{{t+1}}(\mathbf{I}^{\text{aux}}, \mathbb{P}^{\text{aux}}; \theta_g^{{t+1}})$. In the M-step, we substitute $\theta_g^{{t+1}}$ for $\theta_g$ in $\mathcal{L}_{\text{bbr}}$, and obtain $f_{{t+1}}(\mathbf{I}^{\text{aux}}, \mathbb{P}^{\text{aux}}_{\text{mil}}; \theta_f^{{t+1}})$ by solving the lower subproblem, thereby resulting in our EM-like multi-stage training algorithm. In the following, we explain the initialization, E-step, and M-step in more detail.
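
Before the details, a minimal sketch of this alternation is given below; `train_adjuster` and `train_wsod` are hypothetical helpers (our names, not the released code) standing in for one round of E-step and M-step optimization, respectively.

```python
def em_training(aux_data, f, g, num_stages, train_adjuster, train_wsod):
    """EM-like multi-stage training on the auxiliary set X^aux (a sketch).

    train_adjuster: one E-step, minimizing L_bba for g given proposals
                    selected by the current f.
    train_wsod:     one M-step, minimizing L_wsod + L_bbr for f given
                    boxes adjusted by the current g.
    """
    adjusters = [g]                          # g_0 from the initialization
    for t in range(num_stages):
        g = train_adjuster(g, aux_data, f)   # E-step: learn g_{t+1}
        adjusters.append(g)
        f = train_wsod(f, aux_data, g)       # M-step: learn f_{t+1}
    return f, adjusters                      # keep {g_0, ..., g_T} for WSOD training
```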

**Initialization.** To begin with, we utilize $\mathbb{X}^{\text{aux}}$ to train a two-stage detector with a class-agnostic bounding box regression branch, which is then used as the bounding box adjuster $g_{{0}}$ at stage $t = 0$. The selected proposals from $\mathbb{P}^{\text{aux}}_{\text{mil}}$ are then fed into $g_{{0}}$ to generate the adjusted bounding boxes for supervising the learning of the WSOD model $f_{{0}}$.

**E-step.** Given the learned model parameters $\theta_f^{{t}}$ of $f_{{t}}$ at stage ${t}$, the E-step aims at learning the bounding box adjuster $g_{{t+1}}$ with the model parameters $\theta_g^{{t+1}}$. For an image $\mathbf{I}^{\text{aux}}$ from $\mathbb{X}^{\text{aux}}$, we utilize the RPN module of $f_{{t}}$ to generate a set of region proposals $\mathbb{P}_{\text{rpn}}^{\text{aux}}$. We empirically find that it is better to take the region proposals instead of the bounding boxes predicted by $f_{{t}}$ as the input to $g_{{t+1}}$. Moreover, both the precomputed and the generated proposals $\mathbb{P}_{\text{mil}}^{\text{aux}} \cup \mathbb{P}_{\text{rpn}}^{\text{aux}}$ are beneficial to the training of $g_{{t+1}}$. Thus, we use $f_{t}$ with the parameters $\theta_f^{{t}}$ to predict the bounding boxes, and decode them to generate the corresponding selected proposals $\mathbb{P}^{\text{aux}}_{\text{wsod}}$ from $\mathbb{P}_{\text{mil}}^{\text{aux}} \cup \mathbb{P}_{\text{rpn}}^{\text{aux}}$. The model $g_{{t+1}}$ takes $\mathbb{P}^{\text{aux}}_{\text{wsod}}$ as the input to predict a set of adjusted bounding boxes $\{ \hat{\mathbf{b}}^{\text{aux}} \}$. With the ground-truth bounding boxes from $\mathbb{X}^{\text{aux}}$, we train the bounding box adjuster $g_{{t+1}}$ with the parameters $\theta_g^{{t+1}}$ at stage ${t+1}$ by minimizing the loss ${\mathcal{L}_{\text{bba}}}$.
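
The E-step data flow can be sketched roughly as follows; `rpn_proposals` and `select_proposals` are hypothetical helper names for operations the text describes only at a high level.

```python
def e_step_inputs(f_t, image, p_mil):
    """Prepare the selected proposals P_wsod^aux used to train g_{t+1} (a sketch)."""
    p_rpn = f_t.rpn_proposals(image)       # P_rpn^aux from the RPN module of f_t
    candidates = p_mil + p_rpn             # P_mil^aux ∪ P_rpn^aux
    # Decode the boxes predicted by f_t to pick the selected proposals.
    return f_t.select_proposals(image, candidates)
```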

**M-step.** With the learned model parameters $\theta_g^{{t+1}}$ of $g_{{t+1}}$, the M-step learns the WSOD model $f_{{t+1}}$ with the model parameters $\theta_f^{{t+1}}$. In the forward propagation, an image $\mathbf{I}^{\text{aux}}$ from $\mathbb{X}^{\text{aux}}$ is fed into the current WSOD model to generate a number of region proposals $\mathbb{P}_{\text{rpn}}^{\text{aux}}$ and bounding boxes. Then, we decode the predicted bounding boxes to obtain the selected proposals $\mathbb{P}^{\text{aux}}_{\text{wsod}}$ from $\mathbb{P}_{\text{mil}}^{\text{aux}} \cup \mathbb{P}_{\text{rpn}}^{\text{aux}}$. Taking $\mathbb{P}^{\text{aux}}_{\text{wsod}}$ as the input, the adjusted bounding boxes predicted by the LBBA $g_{{t+1}}$ are then used to define the loss $\mathcal{L}_{\text{bbr}}$. Finally, the WSOD model $f_{{t+1}}$ with the model parameters $\theta_f^{{t+1}}$ is trained by minimizing the combined loss $\mathcal{L}_{\text{wsod}} + \mathcal{L}_{\text{bbr}}$.

To sum up, after the initialization, our training algorithm alternates between the E-step and the M-step for $T$ times. Hence, it is a multi-stage training scheme, where we run the E-step and the M-step once in each stage. The training process of LBBA is given in Algorithm [\[alg:lbba\]](#alg:lbba){reference-type="ref" reference="alg:lbba"}.

After learning the bounding box adjusters, the well-annotated auxiliary dataset can be discarded. For the LBBA-boosted WSOD task, we only require the weakly-annotated dataset $\mathbb{X}$ together with the set of learned bounding box adjusters $\{g_{{0}}, ..., g_{{T}}\}$. The multi-stage scheme is also adopted to train WSOD, and we take stage ${t}$ as an example to illustrate the training process. In particular, an image $\mathbf{I}$ from $\mathbb{X}$ is fed into the current WSOD model to generate a number of region proposals $\mathbb{P}_{\text{rpn}}$ and bounding boxes. Then, we decode the predicted bounding boxes to obtain the selected proposals $\mathbb{P}_{\text{wsod}}$ from $\mathbb{P}_{\text{mil}} \cup \mathbb{P}_{\text{rpn}}$. Taking $\mathbb{P}_{\text{wsod}}$ as the input, the adjusted bounding boxes predicted by the LBBA $g_{{t}}$ are then used to define the loss $\mathcal{L}_{\text{bbr}}$. Finally, the WSOD model $f_{{t}}$ with the model parameters $\theta_f^{{t}}$ is trained by minimizing the combined loss $\mathcal{L}_{\text{wsod}} + \mathcal{L}_{\text{bbr}}$. After ${T}$ stages of training, the WSOD model at stage ${T}$, *i.e.*, $f_{{T}}$ with parameters $\theta_f^{{T}}$, is kept and applied to the test images. The training process of LBBA-boosted WSOD is given in Algorithm [\[alg:lbba-wsod\]](#alg:lbba-wsod){reference-type="ref" reference="alg:lbba-wsod"}.

Nonetheless, we empirically find that updating the WSOD network with only the last adjuster $g_{T}$ attains similar performance. Hence, a lighter pipeline can be built by using only $g_{T}$, as in the sketch below.
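
Under the same hypothetical `train_wsod` helper as in the earlier sketch, the final LBBA-boosted WSOD training can be sketched as:

```python
def lbba_boosted_wsod(weak_data, f, adjusters, train_wsod, last_only=False):
    """Train WSOD on the weakly-annotated set X with the frozen adjusters
    {g_0, ..., g_T}; the auxiliary images are no longer needed (a sketch)."""
    stages = adjusters[-1:] if last_only else adjusters  # lighter pipeline: g_T only
    for g_t in stages:
        f = train_wsod(f, weak_data, g_t)   # minimizes L_wsod + L_bbr per stage
    return f                                # f_T is applied to the test images
```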

<figure id="fig:vis-2007">
<div class="center">
<embed src="vis-voc2007.pdf" style="width:6in" />
</div>
<figcaption>Visualization results of our method on PASCAL VOC 2007, showing its ability to generate precise bounding boxes.</figcaption>
</figure>

:::: algorithm
::: algorithmic
**Input:** weakly-annotated dataset $\mathbb{X}$, number of stages ${T}$, adjuster network $g$ with parameters $\{{\theta}_{g}^{{0}},\dots,{\theta}_{g}^{{T}}\}$, WSOD network $f$. **Output:** WSOD network parameters ${\theta}_{{f}}^{{T}}$.
**for** $t = 0,\dots,T$ **do**: $\theta_{g} \leftarrow \theta_{g}^{t}$; $\theta^{{t}}_{f}\leftarrow \mathop{\mathrm{arg\,min}}\limits_{\theta_{f}} \mathcal{L}_{\text{wsod}}+\mathcal{L}_{\text{bbr}}$ **end for**; $\textbf{return}$ ${\theta}_{{f}}^{{T}}$
:::
::::

The above training algorithm improves the localization ability of the WSOD network but not its proposal classification ability. To further improve detection performance, we introduce an additional multi-label image classifier $h(\mathbf{I};\theta_{h})$ and present a classification score masking strategy. During training, we utilize the images and corresponding image labels of the dataset $\mathbb{X}$ to train $h$; during testing, given an input image $\mathbf{I}$, we obtain the image classification scores $\hat{{\textbf{s}}} = h(\mathbf{I};\theta_{h})$, where $\hat{{\textbf{s}}} \in \mathbb{R}^{1 \times C}$ contains the per-class prediction scores of $\mathbf{I}$. We can thus judge which categories are unlikely to be present in $\mathbf{I}$ and suppress the corresponding WSOD outputs. Specifically, we select a threshold $\tau$ (*e.g.*, $\tau = -3.0$): if $\hat{\text{s}}_{c}<\tau$, we assert that category $c$ does not appear in the image. Therefore, for each category $c$ with $\hat{s}_{c} < \tau$, the score of the $i$-th proposal box $\hat{\mathbf{b}}_{i,c}$ is set to 0 to eliminate wrong predictions.
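
A minimal sketch of this masking at test time, assuming `det_scores` of shape `(N, C)` from the WSOD head and `cls_logits` of shape `(C,)` from the image classifier $h$ (names and shapes are our assumptions):

```python
import torch

def mask_detection_scores(det_scores: torch.Tensor,
                          cls_logits: torch.Tensor,
                          tau: float = -3.0) -> torch.Tensor:
    """Zero out proposal scores of categories whose image-level logit
    falls below the threshold tau."""
    absent = cls_logits < tau                        # (C,) mask of suppressed classes
    return det_scores.masked_fill(absent.unsqueeze(0), 0.0)
```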

2108.05997/paper.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f757ff1244388be35c65bbbc8fd795233282b9d1f1d90ec8bf64f47d5d4aae55
+size 5576166
2110.03618/main_diagram/main_diagram.pdf ADDED
Binary file (35.3 kB)
2110.03618/paper.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:95006395a58b9ba8fd9115697a4f888b97ea82bf9dd66236615353d225cae518
+size 652549
2110.08421/paper.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0fafbdfbb6d03f72d33c3ff5e14d7ed2080aa1cecffb2875240b57572abc958
+size 1459133