Add Batch 51fac285-a28e-498b-b588-b59e29528b02
This view is limited to 50 files because it contains too many changes. See raw diff
- .gitattributes +64 -0
- 2301.10xxx/2301.10896/444d37ed-dd24-4b60-8f47-a7481377dd62_content_list.json +0 -0
- 2301.10xxx/2301.10896/444d37ed-dd24-4b60-8f47-a7481377dd62_model.json +0 -0
- 2301.10xxx/2301.10896/444d37ed-dd24-4b60-8f47-a7481377dd62_origin.pdf +3 -0
- 2301.10xxx/2301.10896/full.md +582 -0
- 2301.10xxx/2301.10896/images.zip +3 -0
- 2301.10xxx/2301.10896/layout.json +0 -0
- 2301.10xxx/2301.10921/120373ed-b1dd-408d-b46f-157085370948_content_list.json +0 -0
- 2301.10xxx/2301.10921/120373ed-b1dd-408d-b46f-157085370948_model.json +0 -0
- 2301.10xxx/2301.10921/120373ed-b1dd-408d-b46f-157085370948_origin.pdf +3 -0
- 2301.10xxx/2301.10921/full.md +579 -0
- 2301.10xxx/2301.10921/images.zip +3 -0
- 2301.10xxx/2301.10921/layout.json +0 -0
- 2301.10xxx/2301.10931/afdf8b0d-7ab1-41e7-80f8-fbf01ffd0d6c_content_list.json +1768 -0
- 2301.10xxx/2301.10931/afdf8b0d-7ab1-41e7-80f8-fbf01ffd0d6c_model.json +0 -0
- 2301.10xxx/2301.10931/afdf8b0d-7ab1-41e7-80f8-fbf01ffd0d6c_origin.pdf +3 -0
- 2301.10xxx/2301.10931/full.md +354 -0
- 2301.10xxx/2301.10931/images.zip +3 -0
- 2301.10xxx/2301.10931/layout.json +0 -0
- 2301.10xxx/2301.10937/0902a8dd-f087-4ad6-b1fd-650f398952de_content_list.json +0 -0
- 2301.10xxx/2301.10937/0902a8dd-f087-4ad6-b1fd-650f398952de_model.json +0 -0
- 2301.10xxx/2301.10937/0902a8dd-f087-4ad6-b1fd-650f398952de_origin.pdf +3 -0
- 2301.10xxx/2301.10937/full.md +0 -0
- 2301.10xxx/2301.10937/images.zip +3 -0
- 2301.10xxx/2301.10937/layout.json +0 -0
- 2301.10xxx/2301.10938/e2b2cbfc-a0df-462f-9845-caeaa831fe88_content_list.json +1532 -0
- 2301.10xxx/2301.10938/e2b2cbfc-a0df-462f-9845-caeaa831fe88_model.json +2220 -0
- 2301.10xxx/2301.10938/e2b2cbfc-a0df-462f-9845-caeaa831fe88_origin.pdf +3 -0
- 2301.10xxx/2301.10938/full.md +339 -0
- 2301.10xxx/2301.10938/images.zip +3 -0
- 2301.10xxx/2301.10938/layout.json +0 -0
- 2301.10xxx/2301.10941/c35dbdab-14b3-4d81-9ce3-5fce0461d6c8_content_list.json +1761 -0
- 2301.10xxx/2301.10941/c35dbdab-14b3-4d81-9ce3-5fce0461d6c8_model.json +2376 -0
- 2301.10xxx/2301.10941/c35dbdab-14b3-4d81-9ce3-5fce0461d6c8_origin.pdf +3 -0
- 2301.10xxx/2301.10941/full.md +355 -0
- 2301.10xxx/2301.10941/images.zip +3 -0
- 2301.10xxx/2301.10941/layout.json +0 -0
- 2301.10xxx/2301.10945/5954f0e5-fced-4027-b87d-404a02fb2c9d_content_list.json +0 -0
- 2301.10xxx/2301.10945/5954f0e5-fced-4027-b87d-404a02fb2c9d_model.json +0 -0
- 2301.10xxx/2301.10945/5954f0e5-fced-4027-b87d-404a02fb2c9d_origin.pdf +3 -0
- 2301.10xxx/2301.10945/full.md +0 -0
- 2301.10xxx/2301.10945/images.zip +3 -0
- 2301.10xxx/2301.10945/layout.json +0 -0
- 2301.10xxx/2301.10964/6e1dc718-2719-4720-8a29-9ffcfae23cc6_content_list.json +0 -0
- 2301.10xxx/2301.10964/6e1dc718-2719-4720-8a29-9ffcfae23cc6_model.json +0 -0
- 2301.10xxx/2301.10964/6e1dc718-2719-4720-8a29-9ffcfae23cc6_origin.pdf +3 -0
- 2301.10xxx/2301.10964/full.md +452 -0
- 2301.10xxx/2301.10964/images.zip +3 -0
- 2301.10xxx/2301.10964/layout.json +0 -0
- 2301.10xxx/2301.10972/e2f593df-4e15-4fa3-8e75-388ebabfb5aa_content_list.json +1087 -0
.gitattributes
CHANGED
@@ -11413,3 +11413,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2301.13xxx/2301.13298/aba0a1af-0e4b-4bf2-8340-fbc73be62058_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2301.13xxx/2301.13341/a05c74fe-bc7e-4a7d-92e8-c64fbcdf9e61_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2301.13xxx/2301.13852/68a007d3-f8e7-42bc-ad29-17e511409b23_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.10xxx/2301.10896/444d37ed-dd24-4b60-8f47-a7481377dd62_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.10xxx/2301.10921/120373ed-b1dd-408d-b46f-157085370948_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.10xxx/2301.10931/afdf8b0d-7ab1-41e7-80f8-fbf01ffd0d6c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.10xxx/2301.10937/0902a8dd-f087-4ad6-b1fd-650f398952de_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.10xxx/2301.10938/e2b2cbfc-a0df-462f-9845-caeaa831fe88_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.10xxx/2301.10941/c35dbdab-14b3-4d81-9ce3-5fce0461d6c8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.10xxx/2301.10945/5954f0e5-fced-4027-b87d-404a02fb2c9d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.10xxx/2301.10964/6e1dc718-2719-4720-8a29-9ffcfae23cc6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.10xxx/2301.10972/e2f593df-4e15-4fa3-8e75-388ebabfb5aa_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11045/0934e082-13b3-4d85-b2aa-8b83b37767e6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11047/e5dff33b-14ef-4f94-8d15-010da79e44f8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11093/c3a10789-0c09-4da4-bcee-d6ca90e9f2fc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11104/e5ac0b55-2e6b-4d04-aee6-dd66c93a4fff_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11111/ed7a27bc-0a0e-4258-b26f-8810a4266e65_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11116/6c91b042-5fcf-4cdf-b49d-64aa7cd5febe_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11167/1f4bcff8-96ca-4576-a022-c1d3fa23110f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11178/aeb66569-b702-4c03-942f-8c4bd843ec05_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11189/00da18c0-1451-41d4-9868-0ce983c0bf25_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11198/3cb8ca4b-d1f7-47a3-98b1-5f6cf37a0ba0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11226/5764fafd-ed3e-4eee-aaad-89783ea81520_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11233/822fcb41-8533-4196-a990-f53afb50ae35_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11270/6850d92e-1f5c-4f93-a5bf-e6cfaf0afd12_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11280/804caf3f-b4c1-42da-b238-362a6aca1540_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11283/a83e8761-5840-4310-b7f5-4e0360f294c2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11300/e1a2f0e9-ea38-4501-b9f8-5141df6de58c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11305/309da9d6-538f-453f-8a47-5b7b1d1e2fe3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11320/910882b8-a635-4121-9077-f07a0c6e5899_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11325/49202da1-9e0f-4dc3-b575-181c2248b3f9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11328/97d562ac-2034-4642-8e2b-95222d350c3a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11355/03c5d63a-9036-420f-be00-81c2b8a1e5c9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11387/7a5396ed-c348-4bb9-b12f-0a2fc561f52b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11429/0f1c8025-1531-459e-9a27-caad0174ff63_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11441/3085610e-65ed-449f-9737-af010fed0af3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11445/5df4abea-cf6b-4326-8d28-168e0ae57e44_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11467/77887bde-4c1e-43bd-a7d2-fa9865ceec1f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11497/a6638a46-d347-4a28-83c5-91133b359cea_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11500/15dcb5d7-49bc-41f9-bf7b-8a7987876c60_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11514/2ce06811-e0ce-47ab-b1cd-9a14e21e9d46_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11526/6099b132-bacc-4773-af35-159e89ce9a2a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11529/60bfbd94-3091-4eb1-b571-86d8c26d17f1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11575/7fd2c910-5894-4a7b-bf10-8f4de12641c4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11578/76d44da1-8011-415d-a1f4-ff813a345757_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11596/2db545a9-222b-43fb-a486-730fb32647db_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11661/3002f53d-d5d1-4c99-b0c6-28f62ce5180c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11699/de9d98ae-2112-4b30-b1ee-93036508b9fb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11706/55c986ea-152f-4532-a253-2993acb1c667_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11733/158cdce6-3282-4d45-ad54-bc70b1f6b4cd_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11757/93fbd8b9-5f46-4083-a28e-94f0710cb66a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11760/53d5189a-141f-45e1-8875-0e3597e2a265_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11796/0f259a7b-29be-4156-9284-0b8695946804_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11842/dcf01226-8f70-4a8b-8115-5e9f55f90ee2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11847/90b9f55c-0fb3-4896-918e-8092b13fb8ed_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11911/b8306ee5-dbd6-473e-9566-ec806d97e163_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11913/b14c60c9-1b93-4dab-952b-44053ca4e9b6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11916/17127768-2cdf-4256-a984-2a6b70540051_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11929/fbd8bcde-ea84-442f-ab33-f789bec28e21_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11935/b023f2ae-f962-4730-b9dd-19736c57f3cc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11956/29ada079-0156-44ba-8ef0-679de86e1a2e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.11xxx/2301.11982/3d936fe8-bbb8-46ec-bddf-6e1b843790f6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.12xxx/2301.12003/ec2a1e98-dd1b-495a-b8ec-0570dd86a8b4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.12xxx/2301.12031/35ecf27a-094c-4768-b735-df0bb2065b0e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2301.12xxx/2301.12040/98f4284f-f84f-4a95-b243-6b28cc2edcab_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.02xxx/2302.02805/6b92c4b6-d0b8-4411-ba54-c496fa3a9ced_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2302.12xxx/2302.12014/c58ccd70-0fb1-4d6e-b214-07bd6b02999a_origin.pdf filter=lfs diff=lfs merge=lfs -text
2301.10xxx/2301.10896/444d37ed-dd24-4b60-8f47-a7481377dd62_content_list.json
ADDED
The diff for this file is too large to render. See raw diff
2301.10xxx/2301.10896/444d37ed-dd24-4b60-8f47-a7481377dd62_model.json
ADDED
The diff for this file is too large to render. See raw diff
2301.10xxx/2301.10896/444d37ed-dd24-4b60-8f47-a7481377dd62_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26c8b5bfa7ac6d7700aed771229f46e5feadc9040ea6762f65dc22dd609ba665
+size 473523
2301.10xxx/2301.10896/full.md
ADDED
@@ -0,0 +1,582 @@
| 1 |
+
# Causal Reasoning About Entities and Events in Procedural Texts
|
| 2 |
+
|
| 3 |
+
Li Zhang\*, Hainiu Xu\*, Yue Yang\*, Shuyan Zhou\*
|
| 4 |
+
|
| 5 |
+
Weiqiu You*, Manni Arora*, Chris Callison-Burch*
|
| 6 |
+
|
| 7 |
+
$\clubsuit$ University of Pennsylvania, Carnegie Mellon University
|
| 8 |
+
|
| 9 |
+
{zharry,seacow,yueyang1,weiqiuy,manni,ccb}@seas.upenn.edu
|
| 10 |
+
|
| 11 |
+
{shuyanzh}@cs.cmu.edu
|
| 12 |
+
|
| 13 |
+
# Abstract
|
| 14 |
+
|
| 15 |
+
Entities and events are crucial to natural language reasoning and common in procedural texts. Existing work has focused either exclusively on entity state tracking (e.g., whether a pan is hot) or on event reasoning (e.g., whether one would burn themselves by touching the pan), while these two tasks are often causally related. We propose CREPE, the first benchmark on causal reasoning of event plausibility and entity states. We show that most language models, including GPT-3, perform close to chance at .35 F1, lagging far behind human at .87 F1. We boost model performance to .59 F1 by creatively representing events as programming languages while prompting language models pretrained on code. By injecting the causal relations between entities and events as intermediate reasoning steps in our representation, we further boost the performance to .67 F1. Our findings indicate not only the challenge that CREPE brings for language models, but also the efficacy of code-like prompting combined with chain-of-thought prompting for multihop event reasoning. $^{1}$
|
| 16 |
+
|
| 17 |
+
# 1 Introduction
|
| 18 |
+
|
| 19 |
+
Event-centric natural language processing (Chen et al., 2021b) is one of the leading paradigms in machine understanding of texts. This line of work focuses on first extracting entities and events from texts (Yang et al., 2019; Du and Cardie, 2020) and then making inferences about them (Li et al., 2020; Du et al., 2021). Even with the recent advances of large language models (LLMs), reasoning about events remains challenging as it requires highly contextual information and ample common-sense knowledge. For example, the event "adding water to a pan containing hot oil" causes the event "there is a sizzling sound" to happen, while "heat up an
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
Figure 1: Example of our task CREPE. A procedure including a goal and some steps are provided. A model needs to predict the change in the likelihood of an event throughout the procedure. We show that predicting entity states as an intermediate step improves performance.
|
| 23 |
+
|
| 24 |
+
empty pan" does not. Any model that can draw the correct conclusion given these contexts is expected to have access to some implicit knowledge about these entities and events.
|
| 25 |
+
|
| 26 |
+
One type of text which demonstrates these challenges is procedural text, namely sequences of events, such as how-to instructions, recipes, natural processes, scientific protocols, etc. Procedural texts describe an environment that changes dynamically through a sequence of steps. Therefore, the exact environment configuration is often implicit. In the previous cooking example, whether "there is a sizzling sound" depends on what steps have taken place. With these interesting challenges coupled with the added benefit of application to robotics (Brohan et al., 2022) and household smart assistants such as Alexa (Panagopoulou et al., 2022), reasoning about procedures attracts great attention from the NLP community (Zhang, 2022).
|
| 27 |
+
|
| 28 |
+
Most work on reasoning about procedural texts
|
| 29 |
+
|
| 30 |
+
has focused solely on either predicting the properties of events (e.g., which event is more likely to happen) (Zhang et al., 2020c; Yang et al., 2021b; Tandon et al., 2019) or tracking entity states (e.g., what is some property of an entity after some step) (Dalvi et al., 2018; Tandon et al., 2020), while the causal relation between events and entities is largely underexplored – for example, whether “there is a sizzling sound” is determined by the state of “water” and “oil.” Therefore, we claim that many event prediction tasks are multihop reasoning tasks that require the knowledge of intermediate entity states. Causal reasoning about events and entities differs from existing multihop reasoning tasks, such as Yang et al. (2018); Dua et al. (2019) whose reasoning process is explicitly formulated by a direct question (e.g., how old is the previous US president); and Geva et al. (2021) whose supporting evidence is factual and static. In contrast, causal reasoning in procedures requires models to first figure out the relevant entity attributes, then infer their states based on the current context, and finally predict the event.
|
| 31 |
+
|
| 32 |
+
To this end, we propose the task of Causal Reasoning of Entities and Events in Procedural Texts (CREPE), with an overview in Figure 1. Given a procedure consisting of a goal ("stir fry vegetables") and some steps ("rinse vegetable"...), a model is to predict the likelihood of some unobserved events ("there is a sizzling sound") after the execution of each step. We provide a handcrafted, high-quality benchmark containing 183 procedures, 1219 steps, and 324 changes in the likelihood of events along with the corresponding underlying entity state changes. In an in-context learning setting, we show that most LLMs including GPT-3 (Brown et al., 2020) perform no better (.350 F1) than chance (.297 F1), greatly underperforming the human performance of .868 F1, on the development set. Providing ground-truth entity state changes to the prompt of GPT-3 shows no performance gain, indicating that it cannot leverage this causal signal. Instead, we draw inspiration from Madaan et al. (2022) who represented texts as programming languages as the prompt to code language model Codex (Chen et al., 2021a) to perform event reasoning. We propose a novel Python code representation of procedures that achieves .585 F1. Furthermore, our code-like representation allows us to effectively encode and leverage predicted or labeled entity state changes by generating them as
|
| 33 |
+
|
| 34 |
+
an intermediate reasoning step (namely, chain-of-thought), boosting the performance to .667 using predicted entity state changes and .715 F1 using labeled entity state changes.
|
| 35 |
+
|
| 36 |
+
Our contributions are summarized as follows:
|
| 37 |
+
|
| 38 |
+
- We propose a novel task, a dataset, and several strong baselines for causal reasoning about events and entities in procedural texts.
|
| 39 |
+
- We devise an effective code-like representation of procedures, leading to superior performance and allowing the injection of structured knowledge for reasoning.
|
| 40 |
+
- We are among the first to show that code language models can apply chain-of-thought to tackle multihop reasoning.
|
| 41 |
+
|
| 42 |
+
# 2 Task and Hypothesis
|
| 43 |
+
|
| 44 |
+
A procedure $P$ of length $n$ consists of a goal $G$ and some steps $s_1 \ldots s_n \in S$ , each represented as a short sentence. Each procedure is associated with a set of hypothetical events $e_1 \ldots e_m \in E$ whose likelihood of happening changes throughout the procedure. The task is to predict the change of likelihood of a hypothetical event $e_j$ from step $s_{i-1}$ (the previous step) to step $s_i$ (the current step):
|
| 45 |
+
|
| 46 |
+
$$
|
| 47 |
+
\delta_ {i} = p \left(e _ {j} | s _ {i}, \dots , s _ {1}, G\right) - p \left(e _ {j} | s _ {i - 1}, \dots , s _ {1}, G\right)
|
| 48 |
+
$$
|
| 49 |
+
|
| 50 |
+
The likelihood change $\delta_{i}$ is positive if the label is "more likely", negative if "less likely", or zero if "equally likely".
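For concreteness, the mapping from the sign of $\delta_i$ to the three labels can be sketched as follows (a hypothetical helper written for this description, not part of any released benchmark code):

```python
def label_from_delta(delta: float, eps: float = 0.0) -> str:
    """Map a change in event likelihood to a CREPE label."""
    if delta > eps:
        return "more likely"
    if delta < -eps:
        return "less likely"
    return "equally likely"
```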
|
| 51 |
+
|
| 52 |
+
Predicting the likelihood of hypothetical events, also known as counterfactual reasoning, is extremely important for machine reasoning (Pearl and Mackenzie, 2018) (see more in Section 7). In our work, we hypothesize that the causal relation between entity changes and events can be leveraged by LLMs to better perform counterfactual reasoning. In other words, any change of the likelihood of a hypothetical event is given rise to by changes of some entity attributes $a_1 \ldots a_m \in A$ .
|
| 53 |
+
|
| 54 |
+
$$
|
| 55 |
+
\delta_ {i} = p (a _ {j} | s _ {i}, \ldots , s _ {1}, G) - p (a _ {j} | s _ {i - 1}, \ldots , s _ {1}, G)
|
| 56 |
+
$$
|
| 57 |
+
|
| 58 |
+
# 3 Dataset
|
| 59 |
+
|
| 60 |
+
Our CREPE benchmark dataset has two portions. The first is handcrafted and cross-validated by six authors of this paper. The annotation happens in 3 phases: (1) we first write down or acquire a procedure from the web; (2) we then annotate some hypothetical events whose likelihood of happening changes throughout the procedure, and how
|
| 61 |
+
|
| 62 |
+
<table><tr><td colspan="4">Data Statistics</td></tr><tr><td></td><td>Dev</td><td>Test</td><td>Total</td></tr><tr><td>Num. procedures</td><td>42</td><td>141</td><td>183</td></tr><tr><td>Num. steps</td><td>295</td><td>924</td><td>1219</td></tr><tr><td>Num. event changes</td><td>144</td><td>180</td><td>324</td></tr><tr><td>Avg. step per procedure</td><td>7.0</td><td>6.6</td><td>6.7</td></tr><tr><td>Avg. token per step</td><td>6.8</td><td>6.8</td><td>6.8</td></tr></table>
|
| 63 |
+
|
| 64 |
+
<table><tr><td colspan="4">Procedure Topics</td></tr><tr><td></td><td>Dev</td><td>Test</td><td>Total</td></tr><tr><td>Recipe</td><td>10</td><td>33</td><td>43</td></tr><tr><td>Household</td><td>12</td><td>40</td><td>52</td></tr><tr><td>Craft</td><td>4</td><td>17</td><td>21</td></tr><tr><td>Technology</td><td>5</td><td>19</td><td>24</td></tr><tr><td>Travel</td><td>4</td><td>4</td><td>8</td></tr><tr><td>Sports</td><td>2</td><td>13</td><td>15</td></tr><tr><td>Others</td><td>5</td><td>15</td><td>20</td></tr></table>
|
| 65 |
+
|
| 66 |
+
Table 1: Statistics of the CREPE dataset.
|
| 67 |
+
|
| 68 |
+
their likelihood change after each step; (3) for each event, we annotate a tuple of entity, attribute, and change that causes the event likelihood change. To obtain interesting and challenging data, we require annotators to write procedures covering a diverse range of topics and to prioritize events that undergo multiple likelihood changes, and those that involve information implicit from the steps. In our work, we strictly use this portion as the development set to inform all our experimental designs.
|
| 69 |
+
|
| 70 |
+
The second portion, designed to be drawn from a different distribution to minimize bias, was annotated by students in an Artificial Intelligence class at the University of Pennsylvania who participated in an extra-credit assignment. The students were given an overview of the project and some guidelines to annotate data with the aforementioned criteria. We carefully validated all resulting annotations by discarding or editing erroneous and inappropriate examples. In our work, we strictly use this portion as the test set to evaluate the generalization ability of our final models. The complete dataset and annotation instructions can be found in our public repository containing no personally identifiable information of any annotator.
|
| 71 |
+
|
| 72 |
+
The statistics of CREPE are in Table 1. In this work, we consciously focus on few-shot and in-context settings because our data annotation inevitably contains biases and limitations, and thus cannot be truly representative of counterfactual reasoning in every scenario. In such cases, we believe having a sizeable training set aggravates such biases and induces spurious artifacts.
|
| 73 |
+
|
| 74 |
+
# 4 Event Likelihood Prediction
|
| 75 |
+
|
| 76 |
+
The task of CREPE is essentially ternary classification, where the likelihood change of each event after each step is labeled as one of "more likely", "less likely", or "equally likely". In this section, all models have no access to the annotated entity state changes until later sections.
|
| 77 |
+
|
| 78 |
+
# 4.1 Baselines
|
| 79 |
+
|
| 80 |
+
To show the challenge CREPE brings to existing models, we first introduce some naive baselines.
|
| 81 |
+
|
| 82 |
+
- The chance baseline assigns random labels.
|
| 83 |
+
- The majority baseline always assigns the majority label "equally likely".
|
| 84 |
+
|
| 85 |
+
Next, we consider the following state-of-the-art LLMs as strong baselines, where all models are given exactly three examples in their prompt:
|
| 86 |
+
|
| 87 |
+
- T5 (Raffel et al., 2020) is one of the state-of-the-art LLMs. Given the goal, steps, and question formatted by a prompt template, we compare the probability of generating "the answer is no" versus "the answer is yes" (see the scoring sketch after this list). We use T5-3B$^2$ with 3 billion parameters.
|
| 88 |
+
|
| 89 |
+
- T0 (Sanh et al., 2022) is a variant of T5, finetuned on a large set of downstream tasks with natural language prompts. We adopt the same inference process as T5 described above. We use $\text{T0pp}^3$ with 11 billion parameters.
|
| 90 |
+
|
| 91 |
+
- GPT-3 (Brown et al., 2020) is a series of LLMs that excels at few-shot learning using the prompting mechanism. We consider text-curie-001 (7B parameters), text-davinci-002, text-davinci-003, and ChatGPT (all 175B parameters). We use default parameters with a temperature of 0 for deterministic predictions. An example of the prompt is shown in Figure 2.
|
| 92 |
+
|
| 93 |
+
- GPT-3 finetuned on StrategyQA is a GPT-3 curie model finetuned with StrategyQA (Geva et al., 2021), a dataset of factual multihop questions and their decomposition. StrategyQA is similar to our task in that estimating the change of event likelihood can also be decomposed into sub-tasks of estimating the change of state of related entities (Section 5.1).
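To make the T5/T0 answer-scoring procedure above concrete, here is a minimal sketch of comparing the probabilities of the two candidate answers with Hugging Face Transformers; the checkpoint name, prompt wording, and helper function are our own assumptions rather than the authors' released code.

```python
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tok = T5TokenizerFast.from_pretrained("t5-3b")
model = T5ForConditionalGeneration.from_pretrained("t5-3b").eval()

def answer_log_prob(prompt: str, answer: str) -> float:
    """Total log-probability of T5 generating `answer` given `prompt`."""
    enc = tok(prompt, return_tensors="pt")
    labels = tok(answer, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**enc, labels=labels)
    # out.loss is the mean cross-entropy per label token; negate and scale to get a sum.
    return -out.loss.item() * labels.shape[1]

prompt = ("Goal: Wash sneakers. Context: I remove shoelaces. I rinse. "
          "Question: Will my feet get wet by wearing the sneakers?")
yes_score = answer_log_prob(prompt, "the answer is yes.")
no_score = answer_log_prob(prompt, "the answer is no.")
prediction = "yes" if yes_score > no_score else "no"
```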
|
| 94 |
+
|
| 95 |
+
Table 2 shows that all state-of-the-art LLMs we have attempted achieve close-to-chance performance on CREPE around 0.350 F1, whereas text-davinci-003 and ChatGPT which are
|
| 96 |
+
|
| 97 |
+
```txt
Goal: Wash sneakers
Context: I remove shoelaces. I rinse.
Question: What is the likelihood that my feet get wet by wearing the sneakers?
Answer: likely
```
|
| 103 |
+
|
| 104 |
+
Figure 2: Our GPT-3 prompt, which is typical for a QA task. Each likelihood label is compared with the previous one to get the label for the change.
|
| 105 |
+
|
| 106 |
+
known to be stronger at reasoning perform better. Details about prompt formulation and experimental results on prompt sensitivity are shown in Appendix B and A.
|
| 107 |
+
|
| 108 |
+
# 4.2 Representing Procedures as Python Code
|
| 109 |
+
|
| 110 |
+
Codex (Chen et al., 2021a) is a variation of GPT-3 that was designed to be prompted with and to generate code, in addition to natural language texts. Recently, Madaan et al. (2022) found that prompting Codex with structured representations such as Python code improves performance on event reasoning tasks. Inspired by this observation, we propose novel code representations of procedures and hypothetical events. Among the many possibilities we experimented with, the representation with the best empirical performance is described below and later shown to greatly outperform all baseline models. The representation is exemplified in Figure 3.
|
| 111 |
+
|
| 112 |
+
The procedure is represented as a class where the goal $G$ is the class name, followed by the steps $s_i$ as comments. Then, each step is defined as a member function, in which the hypothetical events $e_j$ are represented as objects with comments. Each event object has an attribute "change" whose value describes the change of the likelihood. During inference, Codex is provided with the prompt including three in-context examples and the current procedure up to the definition of the "init" function and predicts the definition of all step functions. Finally, we extract the assigned value of the "change" attribute as the event likelihood change $\delta_i$ .
|
| 113 |
+
|
| 114 |
+
This prompt design effectively leverages the semantic similarity between procedures with entity states and functions with variables, by representing texts as function identifiers and comments. We use code-davinci-002<sup>4</sup> with 175B parameters and default hyperparameters with a temperature of 0.
|
| 115 |
+
|
| 116 |
+
```python
class Wash_Sneakers:
    # Init
    # Remove shoelaces
    # Rinse
    def __init__(self, event0):
        self.event0 = event0  # My feet get wet by wearing the sneakers.

    def remove_shoelaces(self):
        self.event0.change = "equally likely"  # My feet get wet by wearing the sneakers.

    def rinse(self):
        self.event0.change = "more likely"  # My feet get wet by wearing the sneakers.
```
|
| 119 |
+
|
| 120 |
+
Figure 3: Our best-performing Python code representation of a procedure and hypothetical events, for Codex.
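Since the label is read off the value assigned to the change attribute in the generated step functions, a simple post-processing pass over the Codex completion suffices. The following is our own sketch of that extraction step, not the paper's published code:

```python
import re

# Matches assignments such as: self.event0.change = "more likely"
CHANGE_RE = re.compile(
    r'self\.(\w+)\.change\s*=\s*"(more likely|less likely|equally likely)"'
)

def extract_changes(completion: str) -> list[tuple[str, str]]:
    """Return (event_variable, predicted_change) pairs in order of appearance."""
    return CHANGE_RE.findall(completion)

completion = 'def rinse(self):\n    self.event0.change = "more likely"  # My feet get wet.'
print(extract_changes(completion))  # [('event0', 'more likely')]
```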
|
| 121 |
+
|
| 122 |
+
# 4.3 Results
|
| 123 |
+
|
| 124 |
+
As CREPE is a ternary classification task, we report the macro F1 score across the three classes. As shown in Table 2, T5 and T0 perform only slightly better (.343 and .336 F1) than chance (.297 F1). GPT-3, one of the most dominant models across a variety of NLP tasks, is no better (.336 F1), whereas finetuning it on another multihop reasoning dataset StrategyQA does not bring about any improvement (.341 F1). The latest GPT-3 models, text-davinci-003 (.424 F1) and ChatGPT (.470 F1) which were released contemporarily with this paper, greatly outperform their predecessors.
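For reference, the macro F1 used throughout can be computed as in this sketch, assuming gold and predicted labels are stored as lists of strings (this is not the authors' evaluation script):

```python
from sklearn.metrics import f1_score

LABELS = ["more likely", "less likely", "equally likely"]

def crepe_macro_f1(gold: list[str], pred: list[str]) -> float:
    # Macro F1 averages the per-class F1 of the three labels with equal weight.
    return f1_score(gold, pred, labels=LABELS, average="macro")
```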
|
| 125 |
+
|
| 126 |
+
On the other hand, our code-representation of events as the prompt to Codex greatly outperforms all other models with .585 F1. As Codex is trained on public Github code in addition to the internet texts that GPT-3 is trained on, it is noteworthy that Codex can effectively reason about texts with code-like structures, for a procedure has many analogies to a class in object-oriented programming.
|
| 127 |
+
|
| 128 |
+
# 4.4 Ablation Studies
|
| 129 |
+
|
| 130 |
+
To understand why the representation in our Codex prompt is effective, we perform an ablation study with various changes of the format to the representation, including:
|
| 131 |
+
|
| 132 |
+
- Remove steps comments in the beginning
|
| 133 |
+
- Remove event comments in step functions
|
| 134 |
+
- Use nested functions instead of a class
|
| 135 |
+
- Use flat variables to encode goals, steps, and events (no hierarchical class functions)
|
| 136 |
+
|
| 137 |
+
Examples of these empirically inferior representations are shown in Appendix B. As seen in Table 3, the hierarchical representation of procedures, steps,
|
| 138 |
+
|
| 139 |
+
<table><tr><td rowspan="3">Params</td><td colspan="2">Naive</td><td colspan="8">Large Language Models</td><td>Human</td></tr><tr><td>Cha.</td><td>Maj.</td><td>T5</td><td>T0</td><td>GPT3C</td><td>GPT3C+S</td><td>GPT3D2</td><td>GPT3D3</td><td>ChatGPT</td><td>Codex (ours)</td><td></td></tr><tr><td>-</td><td>-</td><td>3B</td><td>11B</td><td>13B</td><td>13B</td><td>175B</td><td>175B</td><td>175B</td><td>175B</td><td>-</td></tr><tr><td>Dev</td><td>.262</td><td>.297</td><td>.343</td><td>.336</td><td>.346</td><td>.341</td><td>.350</td><td>.424</td><td>.470</td><td>.585</td><td>.868</td></tr><tr><td>Test</td><td>.251</td><td>.296</td><td>.343</td><td>.337</td><td>.356</td><td>.346</td><td>.533</td><td>.423</td><td>.462</td><td>.591</td><td>-</td></tr></table>
|
| 140 |
+
|
| 141 |
+
Table 2: Macro F1 of baseline models on the CREPE dataset. Human performance is not benchmarked on the test set as we strictly hold out its labels during all experiments. GPT3C represents the text-curie-001 model. GPT3D2 represents the text-davinci-002 model with an abnormal performance on the test set that we have confirmed but regrettably cannot explain. GPT3D3 represents the text-davinci-003 model. GPT3C+S represents the GPT-3 curie model finetuned on StrategyQA. All of the above models work with textual prompts. Codex represents the code-davinci-002 model and works with our proposed code-like prompts.
|
| 142 |
+
|
| 143 |
+
<table><tr><td></td><td>Dev</td><td>Test</td></tr><tr><td>Codex</td><td>.585</td><td>.591</td></tr><tr><td>no step comments</td><td>.377</td><td>.352</td></tr><tr><td>no event comments</td><td>.576</td><td>.555</td></tr><tr><td>nested function</td><td>.568</td><td>.572</td></tr><tr><td>flat variables</td><td>.338</td><td>.341</td></tr></table>
|
| 144 |
+
|
| 145 |
+
Table 3: Macro F1 of the ablations of our Codex prompt.
|
| 146 |
+
|
| 147 |
+
and events as classes or nested functions is critical. Besides, listing all the steps as comments helps, mimicking a programmer's textual explanation of a class or a function.
|
| 148 |
+
|
| 149 |
+
# 5 Causal Reasoning with Entities
|
| 150 |
+
|
| 151 |
+
When a human tries to predict whether the event "one would get burnt by touching a pan" is likely, their reasoning process would first focus on some entities in the question (e.g., "the pan"), then attend to some attributes and states of that entity (e.g., the temperature of the pan is hot), and finally draw a logical conclusion (e.g., "the pan being hot means one would get burnt by touching it.") CREPE is constructed precisely with this thought process in mind. An entity-attribute-change tuple is annotated along with each event likelihood change. In this section, we study how to explicitly leverage the intermediate information to assist the prediction of event likelihood prediction.
|
| 152 |
+
|
| 153 |
+
# 5.1 Predicted Entity States as CoT
|
| 154 |
+
|
| 155 |
+
In CREPE, the task of predicting event likelihood change can be seen as a case of multihop reasoning, where a model first decomposes the question into some open-ended sub-questions, answers these sub-questions, and aggregates them into a final answer. LLMs can be prompted to perform chain-of-thought (CoT) style reasoning (Nye et al., 2021; Wei et al., 2022). Thus, we ask the question:
|
| 156 |
+
|
| 157 |
+
Q1. Can LLMs benefit from first predicting entity state changes, as a CoT, before predicting event likelihood changes?
|
| 158 |
+
|
| 159 |
+
CoT with GPT-3. First, we prompt GPT-3 with Wei et al. (2022)'s CoT paradigm and Press et al. (2022)'s self-ask paradigm, both of which are shown in Figure 4. While self-ask relies on search engines for fact retrieval, we use LM generation instead as most of our entity state tracking questions are heavily context-dependent and unanswerable by any search engine. When writing demonstrations for few-shot learning, we impose the following logic progression for the follow-up questions: (1) initial followups shall ask questions on the state of entities that are directly related to the event; (2) followups following the entity state questions shall ask for the logical relationship between the entity states and the original event.
|
| 160 |
+
|
| 161 |
+
CoT Codex with Soft Entity Representation. We modify our Codex prompt in Figure 3, so that a sub-event is represented as a string variable whose declaration and value assignments are right before those of the hypothetical event. We refer to this as a soft representation of entities (Figure 5). During inference, Codex is provided with the code up to the step function header and predicts the entity and event changes for every step function. Our Codex model achieves the new best performance of .624 F1, outperforming the same model without predicted entities as CoT by .039 F1.
|
| 162 |
+
|
| 163 |
+
<table><tr><td></td><td>Naive</td><td colspan="2">LLMs</td><td colspan="4">CoT Large Language Models</td><td>Human</td></tr><tr><td></td><td>Majority</td><td>GPT-3</td><td>Codex</td><td>GPT-3 + CoT</td><td>GPT-3+self-ask</td><td>Codex soft (ours)</td><td>Codex hard (ours)</td><td></td></tr><tr><td>Dev</td><td>.297</td><td>.346</td><td>.585</td><td>0.359</td><td>.342</td><td>.624</td><td>.667</td><td>.868</td></tr><tr><td>Test</td><td>.296</td><td>.356</td><td>.591</td><td>0.379</td><td>.345</td><td>.626</td><td>.609</td><td>-</td></tr></table>
|
| 164 |
+
|
| 165 |
+
Table 4: Macro F1 of chain-of-thought models on the CREPE dataset. GPT-3 + CoT|self-ask represents the text-davinci-002 model prompted with the CoT or self-ask style prompt.
|
| 166 |
+
|
| 167 |
+
Goal: Wash sneakers

Context: I remove shoelaces. I rinse.

Question: What is the likelihood that my feet get wet by wearing the sneakers?

Answer: To get feet wet by wearing the sneakers, the sneakers must be wet. In the given context, the sneakers are wet.

Therefore, comparing to the previous step, the likelihood change is "more likely".

Goal: Wash sneakers

Context: I remove shoelaces. I rinse.

Question: What is the likelihood that my feet get wet by wearing the sneakers?

Follow up: Are the sneakers wet?

Intermediate answer: Yes

Follow up: Will my feet get wet by wearing wet sneakers?

Intermediate answer: Yes

Answer: likely
|
| 192 |
+
|
| 193 |
+
Figure 4: Our GPT-3 prompt with intermediate questions, mimicking the CoT prompt (top) and the Self-Ask prompt (bottom).
|
| 194 |
+
|
| 195 |
+
```python
class Wash_Sneakers:
    # Init
    # Remove shoelaces
    # Rinse
    def __init__(self, event0, subevent0):
        self.event0 = event0  # My feet get wet by wearing the sneakers.
        self.event0.subevent = subevent0  # The sneakers are wet.

    def remove_shoelaces(self):
        self.event0.subevent.change = "equally likely"  # The sneakers are wet.
        self.event0.change = "equally likely"  # My feet get wet by wearing the sneakers.

    def rinse(self):
        self.event0.subevent.change = "more likely"  # The sneakers are wet.
        self.event0.change = "more likely"  # My feet get wet by wearing the sneakers.
```
|
| 198 |
+
|
| 199 |
+
Figure 5: Our Codex prompt with a soft representation of entity state changes as strings.
|
| 200 |
+
|
| 201 |
+
# CoT Codex with Hard Entity Representation.
|
| 202 |
+
|
| 203 |
+
The two approaches above both softly represent
|
| 204 |
+
|
| 205 |
+
the intermediate entity state changes as texts, either questions or statements. Here, LLMs are not enforced to generate intermediate reasoning steps that contain entities and attributes. To answer Q1 more precisely, we experiment with a hard entity representation where the entity-attribute-change tuple is explicitly baked into the Codex prompt as shown in Figure 6. Here, each entity is represented as an object with an attribute and assigned value. The hard entity representation leads to a far superior performance of .667 F1 on the development set but generalizes worse on the test set with .609 F1.
|
| 206 |
+
|
| 207 |
+
```python
class Wash_Sneakers:
    # Init
    # Remove shoelaces
    # Rinse
    def __init__(self, event0):
        self.sneakers = Sneakers()
        self.event0 = event0  # My feet get wet by wearing the sneakers.

    def remove_shoelaces(self):
        self.event0.change = "equally likely"  # My feet get wet by wearing the sneakers.

    def rinse(self):
        self.sneakers.wet = True
        self.event0.change = "more likely"  # My feet get wet by wearing the sneakers.
```
|
| 210 |
+
|
| 211 |
+
Figure 6: Our Codex prompt with a hard representation of entity states as variables, attributes, and values.
|
| 212 |
+
|
| 213 |
+
To recap, we have shown that LLMs can be prompted to exhibit a CoT that first predicts entity state changes and then event likelihood changes. Hence, our answer to Q1 raised at the beginning of this subsection is 'yes.'
|
| 214 |
+
|
| 215 |
+
# 5.2 Annotated Entity States as CoT
|
| 216 |
+
|
| 217 |
+
In the above section, we have shown how event likelihood prediction can be improved by first having the LLMs predict entity states as a CoT. These experiments mimic a realistic setting where information about entities is unavailable. However, in some scenarios, the entity states may be provided. For example, an embodied agent or a robot might
|
| 218 |
+
|
| 219 |
+
<table><tr><td></td><td>Dev</td><td>Test</td></tr><tr><td>Majority</td><td>.297</td><td>.296</td></tr><tr><td>GPT-3 CoT</td><td>.342</td><td>.345</td></tr><tr><td>w/ gold entity changes</td><td>.351</td><td>.380</td></tr><tr><td>Codex CoT</td><td>.667</td><td>.609</td></tr><tr><td>w/ gold entity changes</td><td>.715</td><td>.722</td></tr><tr><td>Human</td><td>.868</td><td>-</td></tr></table>
|
| 220 |
+
|
| 221 |
+
Table 5: Macro F1 of GPT-3 and Codex with chain-of-thought provided with gold entity state changes.
|
| 222 |
+
|
| 223 |
+
have a reliable component that tracks entities; some practitioners might care about a small set of procedures in a narrow domain with annotated entity changes; or, some event schemata containing entity information could be used to predict unseen events. Here, we try to answer the following question:
|
| 224 |
+
|
| 225 |
+
# Q2. Can LLMs effectively leverage annotated entity state changes to better predict event likelihood changes?
|
| 226 |
+
|
| 227 |
+
Instead of having LLMs predict entity state changes, we provide the annotated entity state changes in the CREPE dataset to GPT-3 and Codex. Doing so has the additional benefit of verifying that entity state changes indeed causally benefit LLMs in predicting events.
|
| 228 |
+
|
| 229 |
+
As shown in Table 5, our Codex representation with access to gold entity changes leads to improved performance of .715 F1 on the development set. In contrast, GPT-3 does not see any gain. Hence, the answer to Q2 is 'yes' for the code-trained LLMs but 'no' for standard LLMs.
|
| 230 |
+
|
| 231 |
+
# 5.3 Externally Predicted Entity States
|
| 232 |
+
|
| 233 |
+
As we will discuss further in Section 7, entity state tracking is an established task in NLP with existing datasets and models. We have now predicted entity state changes using LLMs in a few-shot learning setting. It is then natural to pose the question:
|
| 234 |
+
|
| 235 |
+
# Q3. Do existing entity state tracking models make predictions that lead to better performance on CREPE?
|
| 236 |
+
|
| 237 |
+
Our definition of causal reasoning of events is directional since we consider entity state changes as the cause of the change in event likelihoods. To this extent, we incorporate OpenPI (Tandon et al., 2020), the only open-domain entity state tracking dataset in procedural texts, as a part of the pipeline. In OpenPI, the input is a goal, a step, and the output is tuples of an entity, a feature, and two attributes
|
| 238 |
+
|
| 239 |
+
before and after the execution of the step. For example, after "heat the pan [step]", "the temperature [feature] of the pan [entity] is cool [attribute] before and hot [attribute] afterward." While the original paper proposed a GPT2 model (Radford et al., 2019), we opt to finetune the superior GPT-3 Curie model on its data. After the model makes a prediction, we post-process it into the format of CREPE by discarding the feature and producing two entity-attribute-change pairs (e.g., pan-hot-"more likely" and pan-cold-"less likely"). We provide Codex with only the entity changes when the entity is mentioned in the event. Further, to fit our prompt in the context window of Codex, we provide Codex with 5 entity state changes uniformly drawn from a pool of candidate choices at every step. The resulting OpenPI-prompted Codex gives a degraded macro F1 score of 0.553 on the development set and 0.496 on the testing set. Hence, our answer to Q3 is 'no,' suggesting that existing entity state tracking datasets may be insufficient for our causal reasoning task.
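A minimal sketch of the post-processing described above, turning one OpenPI-style sentence into two entity-attribute-change pairs, could look like the following; the exact OpenPI output phrasing and this parsing pattern are our assumptions.

```python
import re

# Parses sentences like: "the temperature of the pan is cool before and hot afterwards"
OPENPI_RE = re.compile(
    r"(?:the )?(?P<feature>[\w ]+) of (?:the )?(?P<entity>[\w ]+) is "
    r"(?P<before>[\w ]+) before and (?P<after>[\w ]+) afterwards?"
)

def to_crepe_pairs(sentence: str) -> list[tuple[str, str, str]]:
    """Discard the feature and emit (entity, attribute, change) tuples."""
    m = OPENPI_RE.search(sentence)
    if m is None:
        return []
    entity = m.group("entity").strip()
    return [
        (entity, m.group("after").strip(), "more likely"),
        (entity, m.group("before").strip(), "less likely"),
    ]

print(to_crepe_pairs("the temperature of the pan is cool before and hot afterwards"))
# [('pan', 'hot', 'more likely'), ('pan', 'cool', 'less likely')]
```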
|
| 240 |
+
|
| 241 |
+
# 6 Performance Analysis
|
| 242 |
+
|
| 243 |
+
In this section, we analyze potential factors that play a role in our Codex model's performance. We investigate three factors: (1) the number of steps in a procedure; (2) explicit mentions of event-related entity-of-interest (EoI) in a given step; and (3) the logical relation (entailment or contradiction) between the event likelihood change and its related entity state change. To study factor (1), we dichotomize procedures from the development set by the average length of the procedure. To investigate factors (2) and (3), we manually labeled the ground truth EoI mentioning and logical relation for the development dataset. Intuitively, estimating event likelihood in lengthy procedures and in steps where EoI is not explicitly mentioned would be difficult. Rather surprisingly, Codex shows no significant performance discrepancy under factors (2) and (3), and only a slight performance difference in factor (1) (see Appendix C).
|
| 244 |
+
|
| 245 |
+
Further, the task of CREPE can be divided into two sub-tasks, first to identify whether an event likelihood change occurred at all, and then to classify the change as either more or less likely. We observe that CoT Codex outperforms Codex on both sub-tasks. For the classification task, in particular, CoT Codex obtained a .149 increase in macro F1 score from .805 to .954. This shows not only that
|
| 246 |
+
|
| 247 |
+
CoT Codex is effective, but also that its bottleneck is identifying event likelihood change.
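One plausible way to realize this two-way split in evaluation code is sketched below; the paper does not publish this exact procedure, so the decomposition details are our assumption.

```python
from sklearn.metrics import f1_score

def _binarize(labels: list[str]) -> list[str]:
    return ["changed" if y != "equally likely" else "unchanged" for y in labels]

def detection_f1(gold: list[str], pred: list[str]) -> float:
    """Sub-task 1: did an event likelihood change occur at all?"""
    return f1_score(_binarize(gold), _binarize(pred), average="macro")

def direction_f1(gold: list[str], pred: list[str]) -> float:
    """Sub-task 2: among detected changes, is the event more or less likely?"""
    pairs = [(g, p) for g, p in zip(gold, pred)
             if g != "equally likely" and p != "equally likely"]
    if not pairs:
        return 0.0
    g, p = zip(*pairs)
    return f1_score(list(g), list(p),
                    labels=["more likely", "less likely"], average="macro")
```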
|
| 248 |
+
|
| 249 |
+
# 7 Related Work
|
| 250 |
+
|
| 251 |
+
# Event & Entity Extraction and Representation
|
| 252 |
+
|
| 253 |
+
Event-centric NLP has been a dominant strand of approaches to machine reasoning. Myriad work has focused on extracting events from the news or web data (Liu et al., 2018; Yang et al., 2019; Du and Cardie, 2020). The effort of structurally representing scripts, groups of events in certain scenarios including procedures, started decades ago (Abelson and Schank, 1977) and is receiving revived attention in present years (Li et al., 2020; Wang et al., 2022a). While this line of work mostly focuses on the representation as relations (e.g., temporal, hierarchical) among events, we recognize entities as a cause of event relations and thus propose a more granular representation. Furthermore, structured representations of events typically cannot take advantage of the power of textual LLMs for challenging downstream tasks. In contrast, we advance towards the best of two worlds by working with code language models.
|
| 254 |
+
|
| 255 |
+
Besides, existing work on jointly extracting and representing events and entities (Lee et al., 2012; Wadden et al., 2019; Barhom et al., 2019) neglects the causal relation therein and treats entities and events simply as two related tasks to be tackled simultaneously. We causally bridge the two.
|
| 256 |
+
|
| 257 |
+
Entity State Tracking Prior work on entity state tracking spans various disciplines of AI. For instance, object tracking, a sub-task of entity state tracking, has led to much work in both robotics (Wang et al., 2007) and computer vision (Comaniciu et al., 2003). In NLP, early efforts focus on synthetic, closed-domain data (Weston et al., 2015; Long et al., 2016), and more recent ones shift attention to real-world procedures (Bosselut et al., 2017; Dalvi et al., 2018; Gupta and Durrett, 2019; Du et al., 2019; Mysore et al., 2019) with either a closed set of entities and attributes or an open-ended set (Tandon et al., 2020). In all prior work, entity state tracking is treated as an end-task, whereas we treat it as a critical intermediate step for event reasoning, a more practical application.
|
| 258 |
+
|
| 259 |
+
Counterfactual Reasoning In this work, we hope to provide evidence that signals of entities effectively help models reason about events. We specifically focus on hypothetical event reasoning because it is a high-level cognitive ability beyond
|
| 260 |
+
|
| 261 |
+
pattern recognition and a manifestation of complex reasoning ability (Pearl and Mackenzie, 2018; Pearl, 2019). Counterfactual reasoning has a long history with formal methods (Forbus, 1984; Lewis, 2013). More recent work exists in commonsense (Feng et al., 2021), procedural texts (Tandon et al., 2019), and even computer vision (Yue et al., 2021).
|
| 262 |
+
|
| 263 |
+
Multihop Reasoning Prior studies on multihop reasoning mainly focus on question answering from a passage (Welbl et al., 2018; Talmor and Berant, 2018; Yang et al., 2018; Kočisky et al., 2018; Mihaylov et al., 2018; Khot et al., 2020) and representing and utilizing multihop information in the form of structured data (De Cao et al., 2019; Ding et al., 2019; Qiu et al., 2019; Cao et al., 2019; Fang et al., 2020; Thayaparan et al., 2019; Zhang et al., 2020d, 2021; Huang and Yang, 2021).
|
| 264 |
+
|
| 265 |
+
There are also efforts such as DecompRC, StrategyQA, and CGDe-FGIN that attempt to conduct multihop reasoning by decomposing the original task to a series of logically related sub-tasks (Min et al., 2019; Geva et al., 2021; Cao and Liu, 2022). Such an approach has recently seen great success with the Chain-of-Thought (CoT) prompting of GPT-3, which significantly improves numerous multihop reasoning tasks (Nye et al., 2021; Kojima et al., 2022; Wei et al., 2022; Wang et al., 2022c). Following CoT prompting, Self-Ask further elicits CoT by demanding GPT-3 to explicitly generate the reasoning questions raised during its chain-of-thought process (Press et al., 2022).
|
| 266 |
+
|
| 267 |
+
Code-Based Language Models and Prompts Recent work has shown that LLMs trained on programs or code (PLMs) have an augmented ability of reasoning over natural language texts. Notably, Suzgun et al. (2022); Liang et al. (2022) showed that PLMs outperforms only-text-trained LMs on certain reasoning tasks even though the prompts are purely natural language and contain no code. Moreover, there has been speculation that multihop reasoning is an emergent ability exclusive to PLMs and absent in their only-text-trained predecessors (Fu and Khot, 2022).
|
| 268 |
+
|
| 269 |
+
Even more interestingly, a line of contemporary work found that, for some reasoning tasks, prompting PLMs with certain structured programs (e.g., Python code, JSON, PDDL) that represent the originally textual data outperforms doing so simply with natural language prompts. These tasks include math questions (Chen et al., 2022; Lyu et al., 2023; Mishra et al., 2022) and event reasoning (Madaan
|
| 270 |
+
|
| 271 |
+
et al., 2022; Wang et al., 2022b) like our work.
|
| 272 |
+
|
| 273 |
+
Procedural Texts Procedural texts are an attractive data source to reason about events and entities which undergo frequent changes. There has been steady efforts in computer vision (Miech et al., 2019), robotics (Ahn et al., 2022), and language (Mujtaba and Mahapatra, 2019; Zhang, 2022). In NLP specifically, work on procedures includes extracting them from instructional texts (Paris et al., 2002; Delpech and Saint-Dizier, 2008; Zhang et al., 2012), reasoning about events (Takechi et al., 2003; Tandon et al., 2019; Rajagopal et al., 2020; Zhang et al., 2020c), knowledge-base construction (Jung et al., 2010; Chu et al., 2017; Park and Motahari Nezhad, 2018), or applying them to downstream applications (Yang et al., 2021b,a; Zhang et al., 2020a; Lyu et al., 2021; Dalvi et al., 2019; Zhang et al., 2020b; Chen et al., 2020). Our work is scoped in procedural texts due to the outstanding causal relations between entities and events in a dynamic environment.
|
| 274 |
+
|
| 275 |
+
# 8 Conclusion and Future Work
|
| 276 |
+
|
| 277 |
+
We present CREPE, a benchmark for causal reasoning about events and entities in procedural texts. We show that mainstream LLMs such as GPT-3 perform close to chance on CREPE, while using code-like event representation as a prompt to code language model Codex greatly improves the performance. Further, we experiment with various ways to encode entity information into this representation and find that eliciting chain-of-thought reasoning from Codex further improves performance while existing CoT approaches with GPT-3 are ineffective. We clearly show that LLMs benefit from lower-level entity information when making predictions about higher-level events. Future work should explore related tasks such as next-event prediction, event temporal ordering, etc., by injecting relevant information about entities into our representation. Our code-representation of events allows more powerful expressions than simply entailment and negation considered in this work. Future work may explore other forms of code chain-of-thought such as first-order logic. These expressions generated by LLMs can be computed objectively, thus ameliorating LLMs' hallucinations and improving the interpretability and faithfulness of predictions.
|
| 278 |
+
|
| 279 |
+
# 9 Limitations
|
| 280 |
+
|
| 281 |
+
Despite our best efforts, our CREPE dataset has inherent limitations. First, the choice of studying procedure texts, despite many discussed advantages, limits the domain, writing style, and other semantic features of the texts. As a result, porting our methods and findings to other text styles such as stories or news might require domain adaptation. Second, we prioritize quality over quantity when creating this benchmark, which suffers from small size and contains biases from the annotators, even though we address the latter by having different annotators label a test set.
|
| 282 |
+
|
| 283 |
+
When annotating the hypothetical events, our intention is that they represent a wide variety that doers of the procedures, humans or machines, would care about. However, we also have to ensure that these events are unambiguously bound to some entities in order to challenge models' causal reasoning ability. While we do our utmost to balance these two conflicting objectives, the issue might still persist.
|
| 284 |
+
|
| 285 |
+
In CREPE, each event likelihood change is caused by exactly one entity state change. This is an over-simplification made to facilitate evaluation. In real life, many complex events require many entity states to be reasoned about, which in turn may have complex logical relations among them. We leave this for future work.
|
| 286 |
+
|
| 287 |
+
While we intend our representation of events and entities to be a general and effective one, we have only shown that it works well empirically using Codex, which is one of the few code language models available at present. Whether the idea of our structured representation applies to other models remains to be explored.
|
| 288 |
+
|
| 289 |
+
# 10 Acknowledgements
|
| 290 |
+
|
| 291 |
+
This research is based upon work supported in part by the DARPA KAIROS Program (contract FA8750-19-2-1004), the DARPA LwLL Program (contract FA8750-19-2-0201), the IARPA BETTER Program (contract 2019-19051600004), and the NSF (Award 1928631). Approved for Public Release, Distribution Unlimited. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, IARPA, NSF, or the U.S. Government.
|
| 292 |
+
|
| 293 |
+
We thank the students in the Artificial Intelligence class at the University of Pennsylvania in Fall 2022 who participated in the annotation of the test set of CREPE. We thank Niket Tandon and Qing Lyu for valuable discussions about this work.
|
| 294 |
+
|
| 295 |
+
# References
|
| 296 |
+
|
| 297 |
+
Robert Abelson and Roger C. Schank. 1977. Scripts, plans, goals and understanding: An inquiry into human knowledge structures. New Jersey, 10.
|
| 298 |
+
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, et al. 2022. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691.
|
| 299 |
+
Shany Barhom, Vered Shwartz, Alon Eirew, Michael Bugert, Nils Reimers, and Ido Dagan. 2019. Revisiting joint modeling of cross-document entity and event coreference resolution. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4179-4189, Florence, Italy. Association for Computational Linguistics.
|
| 300 |
+
Antoine Bosselut, Omer Levy, Ari Holtzman, Corin Ennis, Dieter Fox, and Yejin Choi. 2017. Simulating action dynamics with neural process networks. arXiv preprint arXiv:1711.05313.
|
| 301 |
+
Anthony Brohan, Yevgen Chebotar, Chelsea Finn, Karol Hausman, Alexander Herzog, Daniel Ho, Julian Ibarz, Alex Irpan, Eric Jang, Ryan Julian, et al. 2022. Do as i can, not as i say: Grounding language in robotic affordances. In 6th Annual Conference on Robot Learning.
|
| 302 |
+
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.
|
| 303 |
+
Xing Cao and Yun Liu. 2022. Coarse-grained decomposition and fine-grained interaction for multi-hop question answering. Journal of Intelligent Information Systems, 58(1):21-41.
|
| 304 |
+
Yu Cao, Meng Fang, and Dacheng Tao. 2019. BAG: Bi-directional attention entity graph convolutional network for multi-hop reasoning question answering. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 357-362, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 305 |
+
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021a. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
|
| 308 |
+
Muhao Chen, Hongming Zhang, Qiang Ning, Manling Li, Heng Ji, Kathleen McKeown, and Dan Roth. 2021b. Event-centric natural language processing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Tutorial Abstracts, pages 6-14, Online. Association for Computational Linguistics.
|
| 309 |
+
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. 2022. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588.
|
| 310 |
+
Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. 2020. HybridQA: A dataset of multi-hop question answering over tabular and textual data. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1026-1036, Online. Association for Computational Linguistics.
|
| 311 |
+
Cuong Xuan Chu, Niket Tandon, and Gerhard Weikum. 2017. Distilling task knowledge from how-to communities. In Proceedings of the 26th International Conference on World Wide Web, WWW 2017, Perth, Australia, April 3-7, 2017, pages 805-814. ACM.
|
| 312 |
+
Dorin Comaniciu, Visvanathan Ramesh, and Peter Meer. 2003. Kernel-based object tracking. IEEE Transactions on pattern analysis and machine intelligence, 25(5):564-577.
|
| 313 |
+
Bhavana Dalvi, Lifu Huang, Niket Tandon, Wen-tau Yih, and Peter Clark. 2018. Tracking state changes in procedural text: a challenge dataset and models for process paragraph comprehension. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1595-1604, New Orleans, Louisiana. Association for Computational Linguistics.
|
| 314 |
+
Bhavana Dalvi, Niket Tandon, Antoine Bosselut, Wentau Yih, and Peter Clark. 2019. Everything happens for a reason: Discovering the purpose of actions in procedural text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4496-4505, Hong Kong, China. Association for Computational Linguistics.
|
| 315 |
+
Nicola De Cao, Wilker Aziz, and Ivan Titov. 2019. Question answering by reasoning across documents with graph convolutional networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2306-2317, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 318 |
+
Estelle Delpech and Patrick Saint-Dizier. 2008. Investigating the structure of procedural texts for answering how-to questions. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA).
|
| 319 |
+
Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. 2019. Cognitive graph for multi-hop reading comprehension at scale. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2694-2703, Florence, Italy. Association for Computational Linguistics.
|
| 320 |
+
Li Du, Xiao Ding, Ting Liu, and Bing Qin. 2021. Learning event graph knowledge for abductive reasoning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5181-5190, Online. Association for Computational Linguistics.
|
| 321 |
+
Xinya Du and Claire Cardie. 2020. Event extraction by answering (almost) natural questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 671-683, Online. Association for Computational Linguistics.
|
| 322 |
+
Xinya Du, Bhavana Dalvi, Niket Tandon, Antoine Bosselut, Wen-tau Yih, Peter Clark, and Claire Cardie. 2019. Be consistent! improving procedural text comprehension using label consistency. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2347-2356, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 323 |
+
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368-2378, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 324 |
+
Yuwei Fang, Siqi Sun, Zhe Gan, Rohit Pillai, Shuo-hang Wang, and Jingjing Liu. 2020. Hierarchical graph network for multi-hop question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8823-8838, Online. Association for Computational Linguistics.
|
| 325 |
+
Fuli Feng, Jizhi Zhang, Xiangnan He, Hanwang Zhang, and Tat-Seng Chua. 2021. Empowering language understanding with counterfactual reasoning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2226-2236, Online. Association for Computational Linguistics.
|
| 328 |
+
Kenneth D Forbus. 1984. Qualitative process theory. Artificial intelligence, 24(1-3):85-168.
|
| 329 |
+
Yao Fu, Hao Peng, and Tushar Khot. 2022. How does GPT obtain its ability? Tracing emergent abilities of language models to their sources. Yao Fu's Notion.
|
| 330 |
+
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346-361.
|
| 331 |
+
Aditya Gupta and Greg Durrett. 2019. Tracking discrete and continuous entity state for process understanding. In Proceedings of the Third Workshop on Structured Prediction for NLP, pages 7-12, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 332 |
+
Yongjie Huang and Meng Yang. 2021. Breadth first reasoning graph for multi-hop question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5810-5821.
|
| 333 |
+
Yuchul Jung, Jihee Ryu, Kyung-min Kim, and Sung-Hyon Myaeng. 2010. Automatic construction of a large-scale situation ontology by mining how-to instructions from the web. Web Semantics: Science, Services and Agents on the World Wide Web, 8(2-3):110-124.
|
| 334 |
+
Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. Qasc: A dataset for question answering via sentence composition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8082-8090.
|
| 335 |
+
Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328.
|
| 336 |
+
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916.
|
| 337 |
+
Heeyoung Lee, Marta Recasens, Angel Chang, Mihai Surdeanu, and Dan Jurafsky. 2012. Joint entity and event coreference resolution across documents. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 489-500, Jeju Island, Korea. Association for Computational Linguistics.
|
| 338 |
+
|
| 339 |
+
David Lewis. 2013. Counterfactuals. John Wiley & Sons.
|
| 340 |
+
Manling Li, Qi Zeng, Ying Lin, Kyunghyun Cho, Heng Ji, Jonathan May, Nathanael Chambers, and Clare Voss. 2020. Connecting the dots: Event graph schema induction with path language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 684-695, Online. Association for Computational Linguistics.
|
| 341 |
+
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110.
|
| 342 |
+
Xiao Liu, Zhunchen Luo, and Heyan Huang. 2018. Jointly multiple events extraction via attention-based graph information aggregation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1247-1256, Brussels, Belgium. Association for Computational Linguistics.
|
| 343 |
+
Reginald Long, Panupong Pasupat, and Percy Liang. 2016. Simpler context-dependent logical forms via model projections. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1456-1465, Berlin, Germany. Association for Computational Linguistics.
|
| 344 |
+
Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, and Chris Callison-Burch. 2023. Faithful chain-of-thought reasoning.
|
| 345 |
+
Qing Lyu, Li Zhang, and Chris Callison-Burch. 2021. Goal-oriented script construction. In Proceedings of the 14th International Conference on Natural Language Generation, pages 184-200, Aberdeen, Scotland, UK. Association for Computational Linguistics.
|
| 346 |
+
Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, and Graham Neubig. 2022. Language models of code are few-shot commonsense learners. In _Conference on Empirical Methods in Natural Language Processing (EMNLP), Abu Dhabi, UAE_.
|
| 347 |
+
Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. 2019. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2630-2640.
|
| 348 |
+
Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381-2391.
|
| 349 |
+
|
| 350 |
+
Sewon Min, Victor Zhong, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2019. Multi-hop reading comprehension through question decomposition and rescoring. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6097-6109, Florence, Italy. Association for Computational Linguistics.
|
| 351 |
+
Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, et al. 2022. Lila: A unified benchmark for mathematical reasoning. arXiv preprint arXiv:2210.17517.
|
| 352 |
+
Dena Mujtaba and Nihar Mahapatra. 2019. Recent trends in natural language understanding for procedural knowledge. In 2019 International Conference on Computational Science and Computational Intelligence (CSCI), pages 420-424.
|
| 353 |
+
Sheshera Mysore, Zachary Jensen, Edward Kim, Kevin Huang, Haw-Shiuan Chang, Emma Strubell, Jeffrey Flanigan, Andrew McCallum, and Elsa Olivetti. 2019. The materials science procedural text corpus: Annotating materials synthesis procedures with shallow semantic structures. In Proceedings of the 13th Linguistic Annotation Workshop, pages 56-64, Florence, Italy. Association for Computational Linguistics.
|
| 354 |
+
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. 2021. Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114.
|
| 355 |
+
Artemis Panagopoulou, Manni Arora, Li Zhang, Dimitri Cugini, Weiqiu You, Yue Yang, Liyang Zhou, Yuxuan Wang, Zhaoyi Hou, Alyssa Hwang, Lara Martin, Sherry Shi, Chris Callison-Burch, and Mark Yatskar. 2022. Quakerbot: A household dialog system powered by large language models. In *Alexa Prize TaskBot Challenge Proceedings*.
|
| 356 |
+
Cécile Paris, Keith Vander Linden, and Shijian Lu. 2002. Automated knowledge acquisition for instructional text generation. In Proceedings of the 20th Annual International Conference on Computer Documentation, SIGDOC '02, page 142-151, New York, NY, USA. Association for Computing Machinery.
|
| 357 |
+
Hogun Park and Hamid Reza Motahari Nezhad. 2018. Learning procedures from text: Codifying how-to procedures in deep neural networks. In *Companion Proceedings of the The Web Conference* 2018, pages 351-358.
|
| 358 |
+
Judea Pearl. 2019. The seven tools of causal inference, with reflections on machine learning. Communications of the ACM, 62(3):54-60.
|
| 359 |
+
Judea Pearl and Dana Mackenzie. 2018. The Book of Why: The New Science of Cause and Effect, 1st edition. Basic Books, Inc., USA.
|
| 360 |
+
|
| 361 |
+
Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2022. Measuring and narrowing the compositionality gap in language models.
|
| 362 |
+
Lin Qiu, Yunxuan Xiao, Yanru Qu, Hao Zhou, Lei Li, Weinan Zhang, and Yong Yu. 2019. Dynamically fused graph network for multi-hop reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6140-6150.
|
| 363 |
+
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
|
| 364 |
+
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
|
| 365 |
+
Dheeraj Rajagopal, Niket Tandon, Peter Clark, Bhavana Dalvi, and Eduard Hovy. 2020. What-if I ask you to explain: Explaining the effects of perturbations in procedural text. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 3345–3355, Online. Association for Computational Linguistics.
|
| 366 |
+
Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations.
|
| 367 |
+
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261.
|
| 368 |
+
Mineki Takechi, Takenobu Tokunaga, Yuji Matsumoto, and Hozumi Tanaka. 2003. Feature selection in categorizing procedural expressions. In Proceedings of the Sixth International Workshop on Information Retrieval with Asian Languages, pages 49-56, Sapporo, Japan. Association for Computational Linguistics.
|
| 369 |
+
|
| 370 |
+
Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 641-651, New Orleans, Louisiana. Association for Computational Linguistics.
|
| 371 |
+
Niket Tandon, Bhavana Dalvi, Keisuke Sakaguchi, Peter Clark, and Antoine Bosselut. 2019. WIQA: A dataset for "what if..." reasoning over procedural text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6076-6085, Hong Kong, China. Association for Computational Linguistics.
|
| 372 |
+
Niket Tandon, Keisuke Sakaguchi, Bhavana Dalvi, Dheeraj Rajagopal, Peter Clark, Michal Guerquin, Kyle Richardson, and Eduard Hovy. 2020. A dataset for tracking entities in open domain procedural text. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6408-6417, Online. Association for Computational Linguistics.
|
| 373 |
+
Mokanarangan Thayaparan, Marco Valentino, Viktor Schlegel, and André Freitas. 2019. Identifying supporting facts for multi-hop question answering with document graph networks. In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13), pages 42-51, Hong Kong. Association for Computational Linguistics.
|
| 374 |
+
David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784-5789, Hong Kong, China. Association for Computational Linguistics.
|
| 375 |
+
Chieh-Chih Wang, Charles Thorpe, Sebastian Thrun, Martial Hebert, and Hugh Durrant-Whyte. 2007. Simultaneous localization, mapping and moving object tracking. The International Journal of Robotics Research, 26(9):889-916.
|
| 376 |
+
Hongwei Wang, Zixuan Zhang, Sha Li, Jiawei Han, Yizhou Sun, Hanghang Tong, Joseph P Olive, and Heng Ji. 2022a. Schema-guided event graph completion. arXiv preprint arXiv:2206.02921.
|
| 377 |
+
Xingyao Wang, Sha Li, and Heng Ji. 2022b. Code4struct: Code generation for few-shot structured prediction from natural language. arXiv preprint arXiv:2210.12810.
|
| 378 |
+
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022c. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.
|
| 381 |
+
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903.
|
| 382 |
+
Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association for Computational Linguistics, 6:287-302.
|
| 383 |
+
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart Van Merrienboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.
|
| 384 |
+
Sen Yang, Dawei Feng, Linbo Qiao, Zhigang Kan, and Dongsheng Li. 2019. Exploring pre-trained language models for event extraction and generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5284-5294, Florence, Italy. Association for Computational Linguistics.
|
| 385 |
+
Yue Yang, Joongwon Kim, Artemis Panagopoulou, Mark Yatskar, and Chris Callison-Burch. 2021a. Induce, edit, retrieve: Language grounded multimodal schema for instructional video retrieval. ArXiv preprint, abs/2111.09276.
|
| 386 |
+
Yue Yang, Artemis Panagopoulou, Qing Lyu, Li Zhang, Mark Yatskar, and Chris Callison-Burch. 2021b. Visual goal-step inference using wikiHow. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2167-2179, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 387 |
+
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics.
|
| 388 |
+
Zhongqi Yue, Tan Wang, Qianru Sun, Xian-Sheng Hua, and Hanwang Zhang. 2021. Counterfactual zero-shot and open-set visual recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15404-15414.
|
| 389 |
+
Hongming Zhang, Muhao Chen, Haoyu Wang, Yangqiu Song, and Dan Roth. 2020a. Analogous process structure induction for sub-event sequence prediction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1541-1550, Online. Association for Computational Linguistics.
|
| 390 |
+
|
| 391 |
+
Li Zhang. 2022. Reasoning about procedures with natural language processing: A tutorial.
|
| 392 |
+
Li Zhang, Qing Lyu, and Chris Callison-Burch. 2020b. Intent detection with WikiHow. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 328-333, Suzhou, China. Association for Computational Linguistics.
|
| 393 |
+
Li Zhang, Qing Lyu, and Chris Callison-Burch. 2020c. Reasoning about goals, steps, and temporal ordering with WikiHow. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4630-4639, Online. Association for Computational Linguistics.
|
| 394 |
+
Min Zhang, Feng Li, Yang Wang, Zequn Zhang, Yanhai Zhou, and Xiaoyu Li. 2020d. Coarse and fine granularity graph reasoning for interpretable multi-hop question answering. IEEE Access, 8:56755-56765.
|
| 395 |
+
Yuyu Zhang, Ping Nie, Arun Ramamurthy, and Le Song. 2021. Answering any-hop open-domain questions with iterative document reranking. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 481-490.
|
| 396 |
+
Ziqi Zhang, Philip Webster, Victoria Uren, Andrea Varga, and Fabio Ciravegna. 2012. Automatically extracting procedural knowledge from instructional texts using natural language processing. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 520-527, Istanbul, Turkey. European Language Resources Association (ELRA).
|
| 397 |
+
|
| 398 |
+
# A Prompt Sensitivity
|
| 399 |
+
|
| 400 |
+
In addition to the results reported in Table 2, we also investigated the effect of the number and choice of in-context examples.
|
| 401 |
+
|
| 402 |
+
Number of in-context examples The context window of text-davinci-002 fits at most 3 shots. We experiment with 1-shot (0.245 F1), 2-shot (0.348 F1), and 3-shot (0.359 F1) learning using text-davinci-002 with CoT prompting. We see that adding more in-context examples provides limited improvements in model performance.
|
| 403 |
+
|
| 404 |
+
Prompt sensitivity with random examples We tested the text-davinci-002 model with the CoT prompt on the dev set using randomly chosen examples from our example bank. The F1 scores for 5 runs with randomly chosen in-context examples are 0.333, 0.327, 0.359, 0.336, and 0.331. The mean score is 0.337, and the standard deviation is 0.011, implying low sensitivity to the selection of in-context examples.
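As a quick check (our own snippet, not from the paper's code), the reported statistics can be reproduced from the five F1 scores; the population standard deviation matches the reported 0.011:

```python
import statistics

# F1 scores of the 5 dev-set runs with randomly chosen in-context examples.
f1_scores = [0.333, 0.327, 0.359, 0.336, 0.331]

mean_f1 = statistics.mean(f1_scores)    # 0.3372 -> reported as 0.337
std_f1 = statistics.pstdev(f1_scores)   # ~0.0113 -> reported as 0.011

print(f"mean={mean_f1:.3f}, std={std_f1:.3f}")
```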
|
| 405 |
+
|
| 406 |
+
# B Prompt Engineering
|
| 407 |
+
|
| 408 |
+
# B.1 Code Prompts for Codex
|
| 409 |
+
|
| 410 |
+
In Sections 4 and 5, we discussed our best-performing prompts for GPT-3 and Codex. Here, we elaborate on inferior Codex prompts and shed light on why they do not work well empirically.
|
| 411 |
+
|
| 412 |
+
Best prompt As discussed, our best-performing prompt represents procedures as classes and steps as functions.
|
| 413 |
+
|
| 414 |
+
```python
|
| 415 |
+
class Wash_Sneakers:
    # Init
    # Remove shoelaces
    # Rinse
    def __init__(self, event0):
        self.event0 = event0  # My feet get wet by wearing the sneakers.

    def remove_shoelaces(self):
        self.event0.change = "equally likely"  # My feet get wet by wearing the sneakers.

    def rinse(self):
        self.event0.change = "more likely"  # My feet get wet by wearing the sneakers.
|
| 416 |
+
```
|
| 417 |
+
|
| 418 |
+
Nested functions Instead of representing procedures as classes as in our best-performing prompt, we can also represent them as nested functions.
|
| 419 |
+
|
| 420 |
+
```python
|
| 421 |
+
def wash_sneakers(event0):
    # Init
    # Remove shoelaces
    # Rinse
    event0 = event0  # My feet get wet by wearing the sneakers.

    def remove_shoelaces(self):
        event0.change = "equally likely"  # My feet get wet by wearing the sneakers.

    def rinse(self):
        event0.change = "more likely"  # My feet get wet by wearing the sneakers.
|
| 430 |
+
```
|
| 431 |
+
|
| 432 |
+
No step comments The comments displaying the steps immediately after the class declaration are removed.
|
| 433 |
+
|
| 434 |
+
```python
|
| 435 |
+
class Wash_Sneakers:
    def __init__(self, event0):
        self.event0 = event0  # My feet get wet by wearing the sneakers.

    def remove_shoelaces(self):
        self.event0.change = "equally likely"  # My feet get wet by wearing the sneakers.

    def rinse(self):
        self.event0.change = "more likely"  # My feet get wet by wearing the sneakers.
|
| 436 |
+
```
|
| 437 |
+
|
| 438 |
+
No event comments The comments displaying the events in step functions other than `__init__` are removed.
|
| 439 |
+
|
| 440 |
+
```python
|
| 441 |
+
class Wash_Sneakers:
    def __init__(self, event0):
        self.event0 = event0  # My feet get wet by wearing the sneakers.

    def remove_shoelaces(self):
        self.event0.change = "equally likely"

    def rinse(self):
        self.event0.change = "more likely"
|
| 442 |
+
```
|
| 443 |
+
|
| 444 |
+
Two-step In this approach, we hypothesize that providing entity state changes at every step is helpful. To do this, we first prompt Codex to generate entity states corresponding to a specific event:
|
| 445 |
+
|
| 446 |
+
```python
|
| 447 |
+
class Wash_Sneakers:
    def remove_shoelaces(self):
        event = "My feet get wet by wearing the sneakers."
        event.precondition = ("sneakers", "wet")

    def rinse(self):
        event = "My feet get wet by wearing the sneakers."
        event.precondition = ("sneakers", "wet")
|
| 448 |
+
```
|
| 449 |
+
|
| 450 |
+
We select event-related entities by majority vote. The resulting entity state bank is used to prompt Codex to first deduce entity state at every step and then answer the likelihood of the event.
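A minimal sketch of how such a majority vote over generated entity states could be implemented; the parsing of Codex outputs into (entity, state) tuples is assumed and the helper name is our own, not from the paper:

```python
from collections import Counter

def select_entity_state(generations):
    """Pick the most frequent (entity, state) tuple among several Codex generations.

    `generations` is assumed to be a list of (entity, state) tuples already parsed
    from lines such as `event.precondition = ("sneakers", "wet")`.
    """
    counts = Counter(generations)
    (entity, state), _ = counts.most_common(1)[0]
    return entity, state

# Example: three sampled generations for the same event.
samples = [("sneakers", "wet"), ("sneakers", "wet"), ("feet", "wet")]
print(select_entity_state(samples))  # ('sneakers', 'wet')
```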
|
| 451 |
+
|
| 452 |
+
Flat variables Instead of defining functions with def or creating classes with class, we use only variables to encode the relevant information.
|
| 453 |
+
|
| 454 |
+
```txt
|
| 455 |
+
Goal = "Wash Sneakers"
Context = "Remove shoelaces. After this, the shoelaces are removed."
Question = "What is the likelihood that my feet get wet by wearing the sneakers?"
Options = [ "more likely", "less likely", "equally likely", ]
Answer = Options[2]
Context = "Rinse the sneakers. After this, the sneakers are damp."
Question = "What is the likelihood that my feet get wet by wearing the sneakers?"
Options = [ "more likely", "less likely", "equally likely", ]
Answer = Options[0]
|
| 464 |
+
```
|
| 465 |
+
|
| 466 |
+
# B.2 Textual Prompts for GPT-3
|
| 467 |
+
|
| 468 |
+
For GPT-3, we attempted a dozen prompt formulations in our preliminary experiments, which we found to differ minimally in performance. Here, we show one example:
|
| 469 |
+
|
| 470 |
+
"Wash hands" involves the followings steps:
|
| 471 |
+
|
| 472 |
+
1. Turn on the tap water.
|
| 473 |
+
2. Put hands under running water.
|
| 474 |
+
3. Apply soap and rub hands.
|
| 475 |
+
4. Turn off the tap water
|
| 476 |
+
5. Dry my hands using a towel.
|
| 477 |
+
|
| 478 |
+
For every step, find out how likely it is that water streaming sound can be heard. Answer as (A) very likely (B) likely (C) not very likely (D) unlikely.
|
| 479 |
+
|
| 480 |
+
Step 1: (A) very likely
|
| 481 |
+
|
| 482 |
+
Step 2: (A) very likely
|
| 483 |
+
Step 3: (A) very likely
|
| 484 |
+
Step 4: (D) unlikely
|
| 485 |
+
Step 5: (D) unlikely
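A small helper that assembles prompts in this format is sketched below; it is our own illustration of the template above, not the authors' code, and the function name is hypothetical:

```python
def build_gpt3_prompt(goal, steps, event):
    """Assemble the natural-language prompt shown above for a procedure and an event."""
    lines = [f'"{goal}" involves the following steps:', ""]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    lines += [
        "",
        f"For every step, find out how likely it is that {event}. "
        "Answer as (A) very likely (B) likely (C) not very likely (D) unlikely.",
    ]
    return "\n".join(lines)

steps = [
    "Turn on the tap water.",
    "Put hands under running water.",
    "Apply soap and rub hands.",
    "Turn off the tap water.",
    "Dry my hands using a towel.",
]
print(build_gpt3_prompt("Wash hands", steps, "water streaming sound can be heard"))
```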
|
| 486 |
+
|
| 487 |
+
For GPT-3 finetuned with StrategyQA, we ask two questions regarding the likelihood of the event, namely whether it is more/less likely that the event occurs. After obtaining the results, we conduct a consistency check. For consistent likelihood estimates, where only one of the two questions gives a positive answer or both questions give negative answers, we assign the corresponding label to the event likelihood change. For inconsistent estimates, where both questions give positive answers, we assign the event change likelihood to the majority label, which is "equally likely". An example of a finetuning prompt-completion pair is shown as follows:
|
| 490 |
+
|
| 491 |
+
Prompt:
|
| 492 |
+
|
| 493 |
+
Context: Julius Caesar had three children.
|
| 494 |
+
|
| 495 |
+
Genghis Khan had sixteen children.
|
| 496 |
+
|
| 497 |
+
Modern geneticists have determined
|
| 498 |
+
|
| 499 |
+
that out of every 200 men today
|
| 500 |
+
|
| 501 |
+
has DNA that can be traced to
|
| 502 |
+
|
| 503 |
+
Genghis Khan.
|
| 504 |
+
|
| 505 |
+
Question: Are more people today
|
| 506 |
+
|
| 507 |
+
related to Genghis Khan than Julius Caesar?
|
| 508 |
+
|
| 509 |
+
Take it step by step:
|
| 510 |
+
|
| 511 |
+
Completion:
|
| 512 |
+
|
| 513 |
+
1 How many kids did Julius Caesar have? two
|
| 514 |
+
|
| 515 |
+
#2 How many kids did Genghis Khan have?
|
| 516 |
+
|
| 517 |
+
fourth
|
| 518 |
+
|
| 519 |
+
#3 Is fourth greater than two?
|
| 520 |
+
|
| 521 |
+
no
|
| 522 |
+
|
| 523 |
+
Therefore, the answer to the original
|
| 524 |
+
|
| 525 |
+
question is True
|
| 526 |
+
|
| 527 |
+
An example of our StrategyQA GPT-3 prompt on the CREPE task is as follows:
|
| 528 |
+
|
| 529 |
+
Context: Remove shoelaces. Rinse. Scrub the shoes with cleaning solution. Rinse the shoes again. Air dry the shoes and put the shoelaces back on.
|
| 530 |
+
|
| 531 |
+
Question: Is it more likely that my feet get wet by wearing the sneakers?
|
| 532 |
+
|
| 533 |
+
Take it step by step:
|
| 534 |
+
|
| 535 |
+
Completion:
|
| 536 |
+
|
| 537 |
+
1 Is the sneaker wet?
|
| 538 |
+
|
| 539 |
+
Yes
|
| 540 |
+
|
| 541 |
+
#2 Will my feet get wet by wearing wet shoes?
|
| 542 |
+
|
| 543 |
+
Yes
|
| 544 |
+
|
| 545 |
+
Therefore, the answer to the original question
|
| 546 |
+
|
| 547 |
+
is True.
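The consistency check described earlier in this subsection can be sketched as follows; the boolean inputs are assumed to be parsed from the two model completions, and the function is our own illustration:

```python
def combine_answers(more_likely: bool, less_likely: bool) -> str:
    """Combine answers to 'Is it more likely ...?' and 'Is it less likely ...?'.

    Consistent cases: exactly one positive answer, or both negative.
    Inconsistent case (both positive): fall back to the majority label.
    """
    if more_likely and less_likely:  # inconsistent estimates
        return "equally likely"
    if more_likely:
        return "more likely"
    if less_likely:
        return "less likely"
    return "equally likely"

print(combine_answers(more_likely=True, less_likely=False))  # more likely
```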
|
| 548 |
+
|
| 549 |
+
# B.3 Textual Prompts for ChatGPT
|
| 550 |
+
|
| 551 |
+
As of the time of the camera-ready submission of this paper (February 1, 2023), OpenAI had not released an API for ChatGPT. Thus, we use an unofficial $\mathrm{API}^5$ which is believed to behave the same as the official web playground. Because ChatGPT is designed to work only in a zero-shot, multi-turn dialog setting, we tweak our prompt as follows:
|
| 552 |
+
|
| 553 |
+
```txt
|
| 554 |
+
I'm trying to wash hands.
|
| 555 |
+
First, I turn on the tap water.
|
| 556 |
+
At this point, is it likely that water streaming sound can be heard?
|
| 557 |
+
Answer with yes or no.
|
| 558 |
+
[answer]
|
| 559 |
+
Then, I put hands under running water.
|
| 560 |
+
At this point, is it likely that water streaming sound can be heard?
|
| 561 |
+
Answer with yes or no.
|
| 562 |
+
[answer]
|
| 563 |
+
```
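A sketch of how the per-step turns could be assembled before being sent to such a dialog interface; the API call itself is omitted since only an unofficial client was available, and the helper below is our own assumption about the bookkeeping, not the authors' code:

```python
def build_chatgpt_turns(goal, steps, event):
    """Build one user turn per step for the zero-shot, multi-turn ChatGPT setting."""
    turns = []
    for i, step in enumerate(steps):
        prefix = f"I'm trying to {goal.lower()}.\nFirst, I {step}" if i == 0 else f"Then, I {step}"
        turns.append(
            f"{prefix}\nAt this point, is it likely that {event}?\nAnswer with yes or no."
        )
    return turns

turns = build_chatgpt_turns(
    "Wash hands",
    ["turn on the tap water.", "put hands under running water."],
    "water streaming sound can be heard",
)
print(turns[0])
```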
|
| 564 |
+
|
| 565 |
+
# B.4 Textual Prompts for T5/T0
|
| 566 |
+
|
| 567 |
+
We design the following prompt for T5 and T0 to perform our task:
|
| 568 |
+
|
| 569 |
+
```txt
|
| 570 |
+
Goal: [The name of the goal]
|
| 571 |
+
Step: [The list of steps]
|
| 572 |
+
Question: Is that okay that [question]?
|
| 573 |
+
Answer: [yes or no, generated by the model]
|
| 574 |
+
```
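As a hedged illustration of how such a prompt might be run with an off-the-shelf checkpoint via Hugging Face Transformers (not the authors' exact setup; the checkpoint name and the filled-in example are our own choices):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "bigscience/T0_3B"  # one possible T0 checkpoint; T5 checkpoints load the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = (
    "Goal: Wash sneakers\n"
    "Step: Remove shoelaces. Rinse the sneakers.\n"
    "Question: Is that okay that my feet get wet by wearing the sneakers?\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # expected: "yes" or "no"
```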
|
| 575 |
+
|
| 576 |
+
# C Error Analysis
|
| 577 |
+
|
| 578 |
+
In Section 6, we conclude that the performance of Codex is not influenced by (1) the number of steps in a procedure; (2) explicit mentions of event-related entity-of-interest (EoI) in a given step; and (3) the logical relation (entailment or contradiction) between the event likelihood change and its related entity state change.
|
| 579 |
+
|
| 580 |
+
<table><tr><td>Factors</td><td>Dev</td></tr><tr><td>Procedure Length > 7</td><td>.629</td></tr><tr><td>Procedure Length ≤ 7</td><td>.700</td></tr><tr><td>EoI Mentioned</td><td>.481</td></tr><tr><td>EoI NOT Mentioned</td><td>.496</td></tr><tr><td>Entailment</td><td>.482</td></tr><tr><td>Contradiction</td><td>.461</td></tr></table>
|
| 581 |
+
|
| 582 |
+
Table 6: Macro F1 Score of error analysis. The scores for EoI and Logical relation are lower since we do not consider the majority label, "equally likely", in the error analysis.
|
2301.10xxx/2301.10896/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:668c8f90459e861ff0d91288e109d13d84c3db9af574c4d236d9d348a4a1dbbf
|
| 3 |
+
size 191610
|
2301.10xxx/2301.10896/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10921/120373ed-b1dd-408d-b46f-157085370948_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10921/120373ed-b1dd-408d-b46f-157085370948_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10921/120373ed-b1dd-408d-b46f-157085370948_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f5e45d4faa9b06ce120591cf593cec12ac25463f46cb231f2ac7369c23c8a51f
|
| 3 |
+
size 904273
|
2301.10xxx/2301.10921/full.md
ADDED
|
@@ -0,0 +1,579 @@
| 1 |
+
# SOFTMATCH: ADDRESSING THE QUANTITY-QUALITY TRADE-OFF IN SEMI-SUPERVISED LEARNING
|
| 2 |
+
|
| 3 |
+
Hao Chen $^{1*}$ , Ran Tao $^{1*}$ , Yue Fan $^{2}$ , Yidong Wang $^{3}$
|
| 4 |
+
Jindong Wang $^{3\dagger}$ , Bernt Schiele $^{2}$ , Xing Xie $^{3}$ , Bhiksha Raj $^{1,4}$ , Marios Savvides $^{1\dagger}$
|
| 5 |
+
|
| 6 |
+
$^{1}$ Carnegie Mellon University, $^{2}$ Max Planck Institute for Informatics, Saarland Informatics Campus, $^{3}$ Microsoft Research Asia, $^{4}$ Mohamed bin Zayed University of AI
|
| 7 |
+
|
| 8 |
+
# ABSTRACT
|
| 9 |
+
|
| 10 |
+
The critical challenge of Semi-Supervised Learning (SSL) is how to effectively leverage the limited labeled data and massive unlabeled data to improve the model's generalization performance. In this paper, we first revisit the popular pseudo-labeling methods via a unified sample weighting formulation and demonstrate the inherent quantity-quality trade-off problem of pseudo-labeling with thresholding, which may prohibit learning. To this end, we propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training, effectively exploiting the unlabeled data. We derive a truncated Gaussian function to weight samples based on their confidence, which can be viewed as a soft version of the confidence threshold. We further enhance the utilization of weakly-learned classes by proposing a uniform alignment approach. In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
|
| 11 |
+
|
| 12 |
+
# 1 INTRODUCTION
|
| 13 |
+
|
| 14 |
+
Semi-Supervised Learning (SSL), concerned with learning from a few labeled data and a large amount of unlabeled data, has shown great potential in practical applications for significantly reduced requirements on laborious annotations (Fan et al., 2021; Xie et al., 2020; Sohn et al., 2020; Pham et al., 2021; Zhang et al., 2021; Xu et al., 2021b;a; Chen et al., 2021; Oliver et al., 2018). The main challenge of SSL lies in how to effectively exploit the information of unlabeled data to improve the model's generalization performance (Chapelle et al., 2006). Among the efforts, pseudo-labeling (Lee et al., 2013; Arazo et al., 2020) with confidence thresholding (Xie et al., 2020; Sohn et al., 2020; Xu et al., 2021b; Zhang et al., 2021) is highly-successful and widely-adopted.
|
| 15 |
+
|
| 16 |
+
The core idea of threshold-based pseudo-labeling (Xie et al., 2020; Sohn et al., 2020; Xu et al., 2021b; Zhang et al., 2021) is to train the model with pseudo-labels whose prediction confidence is above a hard threshold, with the others simply ignored. However, such a mechanism inherently exhibits a quantity-quality trade-off, which undermines the learning process. On the one hand, a high confidence threshold as exploited in FixMatch (Sohn et al., 2020) ensures the quality of the pseudo-labels, but it discards a considerable number of unconfident yet correct pseudo-labels. In the example shown in Fig. 1(a), around $71\%$ of correct pseudo-labels are excluded from training. On the other hand, a dynamically growing threshold (Xu et al., 2021b; Berthelot et al., 2021) or class-wise thresholds (Zhang et al., 2021) encourage the utilization of more pseudo-labels but inevitably fully enroll erroneous pseudo-labels that may mislead training. In the example shown by FlexMatch (Zhang et al., 2021) in Fig. 1(a), about $16\%$ of the utilized pseudo-labels are incorrect. In summary, the quantity-quality trade-off under a confidence threshold limits unlabeled data utilization, which may hinder the model's generalization performance.
|
| 17 |
+
|
| 18 |
+
In this work, we formally define the quantity and quality of pseudo-labels in SSL and summarize the inherent trade-off present in previous methods from a perspective of unified sample weighting for-
|
| 19 |
+
|
| 20 |
+

|
| 21 |
+
Figure 1: Illustration on the Two-Moon dataset with only 4 labeled samples (triangle purple/pink points) and all other points as unlabeled samples, training a 3-layer MLP classifier. Training details are in the Appendix. (a) Confidence distribution, including all predictions and wrong predictions; the red line denotes the correct percentage of samples used by SoftMatch, and the part of the line above the scatter points denotes the correct percentage for FixMatch (blue) and FlexMatch (green). (b) Quantity of pseudo-labels. (c) Quality of pseudo-labels. (d) Decision boundary. SoftMatch exploits almost all samples during training with the lowest error rate and the best decision boundary.
|
| 32 |
+
|
| 33 |
+
mulation. We first identify that the fundamental reason behind the quantity-quality trade-off is the lack of a sophisticated assumption imposed by the weighting function on the distribution of pseudo-labels. In particular, confidence thresholding can be regarded as a step function assigning binary weights according to samples' confidence, which assumes that pseudo-labels with confidence above the threshold are equally correct while the others are wrong. Based on this analysis, we propose SoftMatch to overcome the trade-off by maintaining high quantity and high quality of pseudo-labels during training. A truncated Gaussian function, derived from our assumption on the marginal distribution, is fitted to the confidence distribution; it assigns lower weights to possibly correct pseudo-labels according to the deviation of their confidence from the mean of the Gaussian. The parameters of the Gaussian function are estimated using the model's historical predictions during training. Furthermore, we propose Uniform Alignment to resolve the imbalance issue in pseudo-labels resulting from the different learning difficulties of different classes. It further consolidates the quantity of pseudo-labels while maintaining their quality. On the two-moon example, as shown in Fig. 1(b) and Fig. 1(c), SoftMatch achieves distinctly better accuracy of pseudo-labels while retaining a consistently higher utilization ratio during training, therefore leading to a better-learned decision boundary, as shown in Fig. 1(d). We demonstrate that SoftMatch achieves a new state of the art on a wide range of image and text classification tasks. We further validate the robustness of SoftMatch to long-tailed distributions by evaluating on imbalanced classification tasks.
|
| 34 |
+
|
| 35 |
+
# Our contributions can be summarized as:
|
| 36 |
+
|
| 37 |
+
- We demonstrate the importance of the unified weighting function by formally defining the quantity and quality of pseudo-labels, and the trade-off between them. We identify that the inherent trade-off in previous methods mainly stems from the lack of careful design on the distribution of pseudo-labels, which is imposed directly by the weighting function.
|
| 38 |
+
- We propose SoftMatch to effectively leverage the unconfident yet correct pseudo-labels by fitting a truncated Gaussian function to the distribution of confidence, which overcomes the trade-off. We further propose Uniform Alignment to resolve the imbalance issue of pseudo-labels while maintaining their high quantity and quality.
|
| 39 |
+
- We demonstrate that SoftMatch outperforms previous methods on various image and text evaluation settings. We also empirically verify the importance of maintaining the high accuracy of pseudo-labels while pursuing better unlabeled data utilization in SSL.
|
| 40 |
+
|
| 41 |
+
# 2 REVISIT QUANTITY-QUALITY TRADE-OFF OF SSL
|
| 42 |
+
|
| 43 |
+
In this section, we formulate the quantity and quality of pseudo-labels from a unified sample weighting perspective, by demonstrating the connection between sample weighting function and the quantity/quality of pseudo-labels. SoftMatch is naturally inspired by revisiting the inherent limitation in quantity-quality trade-off of the existing methods.
|
| 44 |
+
|
| 45 |
+
# 2.1 PROBLEM STATEMENT
|
| 46 |
+
|
| 47 |
+
We first formulate the framework of SSL in a $C$ -class classification problem. Denote the labeled and unlabeled datasets as $\mathcal{D}_L = \{\mathbf{x}_i^l, \mathbf{y}_i^l\}_{i=1}^{N_L}$ and $\mathcal{D}_U = \{\mathbf{x}_i^u\}_{i=1}^{N_U}$ , respectively, where $\mathbf{x}_i^l, \mathbf{x}_i^u \in \mathbb{R}^d$ is the $d$ -dimensional labeled and unlabeled training sample, and $\mathbf{y}_i^l$ is the one-hot ground-truth label for labeled data. We use $N_L$ and $N_U$ to represent the number of training samples in $\mathcal{D}_L$ and $\mathcal{D}_U$ , respectively. Let $\mathbf{p}(\mathbf{y}|\mathbf{x}) \in \mathbb{R}^C$ denote the model's prediction. During training, given a batch of labeled data and unlabeled data, the model is optimized using a joint objective $\mathcal{L} = \mathcal{L}_s + \mathcal{L}_u$ , where $\mathcal{L}_s$ is the supervised objective of the cross-entropy loss ( $\mathcal{H}$ ) on the $B_L$ --sized labeled batch:
|
| 48 |
+
|
| 49 |
+
$$
|
| 50 |
+
\mathcal {L} _ {s} = \frac {1}{B _ {L}} \sum_ {i = 1} ^ {B _ {L}} \mathcal {H} \left(\mathbf {y} _ {i}, \mathbf {p} \left(\mathbf {y} \mid \mathbf {x} _ {i} ^ {l}\right)\right). \tag {1}
|
| 51 |
+
$$
|
| 52 |
+
|
| 53 |
+
For the unsupervised loss, most existing methods with pseudo-labeling (Lee et al., 2013; Arazo et al., 2020; Xie et al., 2020; Sohn et al., 2020; Xu et al., 2021b; Zhang et al., 2021) exploit a confidence thresholding mechanism to mask out the unconfident and possibly incorrect pseudo-labels from training. In this paper, we take a step further and present a unified formulation of the confidence thresholding scheme (and other schemes) from the sample weighting perspective. Specifically, we formulate the unsupervised loss $\mathcal{L}_u$ as the weighted cross-entropy between the model's prediction of the strongly-augmented data $\Omega (\mathbf{x}^u)$ and pseudo-labels from the weakly-augmented data $\omega (\mathbf{x}^u)$ :
|
| 54 |
+
|
| 55 |
+
$$
|
| 56 |
+
\mathcal {L} _ {u} = \frac {1}{B _ {U}} \sum_ {i = 1} ^ {B _ {U}} \lambda (\mathbf {p} _ {i}) \mathcal {H} \left(\hat {\mathbf {p}} _ {i}, \mathbf {p} \left(\mathbf {y} \mid \Omega \left(\mathbf {x} _ {i} ^ {u}\right)\right)\right), \tag {2}
|
| 57 |
+
$$
|
| 58 |
+
|
| 59 |
+
where $\mathbf{p}$ is the abbreviation of $\mathbf{p}(\mathbf{y}|\omega (\mathbf{x}^u))$ , and $\hat{\mathbf{p}}$ is the one-hot pseudo-label $\mathrm{argmax}(\mathbf{p})$ ; $\lambda (\mathbf{p})$ is the sample weighting function with range $[0,\lambda_{\mathrm{max}}]$ ; and $B_U$ is the batch size for unlabeled data.
|
| 60 |
+
|
| 61 |
+
# 2.2 QUANTITY-QUALITY TRADE-OFF FROM SAMPLE WEIGHTING PERSPECTIVE
|
| 62 |
+
|
| 63 |
+
In this section, we demonstrate the importance of the unified weighting function $\lambda(\mathbf{p})$ , by showing its different instantiations in previous methods and its essential connection with model predictions. We start by formulating the quantity and quality of pseudo-labels.
|
| 64 |
+
|
| 65 |
+
Definition 2.1 (Quantity of pseudo-labels). The quantity $f(\mathbf{p})$ of pseudo-labels enrolled in training is defined as the expectation of the sample weight $\lambda (\mathbf{p})$ over the unlabeled data:
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
f (\mathbf {p}) = \mathbb {E} _ {\mathcal {D} _ {U}} [ \lambda (\mathbf {p}) ] \in [ 0, \lambda_ {\max } ]. \tag {3}
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
Definition 2.2 (Quality of pseudo labels). The quality $g(\mathbf{p})$ is the expectation of the weighted $0/1$ error of pseudo-labels, assuming the label $\mathbf{y}^u$ is given for $\mathbf{x}^u$ for only theoretical analysis purpose:
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
g (\mathbf {p}) = \sum_ {i} ^ {N _ {U}} \mathbb {1} \left(\hat {\mathbf {p}} _ {i} = \mathbf {y} _ {i} ^ {u}\right) \frac {\lambda \left(\mathbf {p} _ {i}\right)}{\sum_ {j} ^ {N _ {U}} \lambda \left(\mathbf {p} _ {j}\right)} = \mathbb {E} _ {\bar {\lambda} (\mathbf {p})} [ \mathbb {1} \left(\hat {\mathbf {p}} = \mathbf {y} ^ {u}\right) ] \in [ 0, 1 ], \tag {4}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
where $\bar{\lambda} (\mathbf{p}) = \lambda (\mathbf{p}) / \sum \lambda (\mathbf{p})$ is the probability mass function (PMF) of $\mathbf{p}$ being close to $\mathbf{y}^u$ .
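A small numerical illustration of Definitions 2.1 and 2.2 (ours, not from the paper): quantity is the mean sample weight over the unlabeled data, and quality is the weight-normalized accuracy of the pseudo-labels.

```python
import numpy as np

weights = np.array([1.0, 1.0, 0.2, 0.0])  # lambda(p) for four unlabeled samples
correct = np.array([1, 0, 1, 1])          # 1(pseudo-label == true label), for analysis only

quantity = weights.mean()                            # Eq. 3: expectation of lambda(p)
quality = (weights * correct).sum() / weights.sum()  # Eq. 4: weighted 0/1 accuracy
print(quantity, quality)                             # 0.55 and ~0.545
```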
|
| 78 |
+
|
| 79 |
+
Based on the definitions of quality and quantity, we present the quantity-quality trade-off of SSL.
|
| 80 |
+
|
| 81 |
+
Definition 2.3 (The quantity-quality trade-off). Due to the implicit assumptions of PMF $\bar{\lambda}(\mathbf{p})$ on the marginal distribution of model predictions, the lack of sophisticated design on it usually results in a trade-off in quantity and quality - when one of them increases, the other must decrease. Ideally, a well-defined $\lambda(\mathbf{p})$ should reflect the true distribution and lead to both high quantity and quality.
|
| 82 |
+
|
| 83 |
+
Despite its importance, $\lambda (\mathbf{p})$ has hardly been defined explicitly or properly in previous methods. In this paper, we first summarize $\lambda (\mathbf{p})$ , $\bar{\lambda} (\mathbf{p})$ , $f(\mathbf{p})$ , and $g(\mathbf{p})$ of relevant methods, as shown in Table 1, with the detailed derivation present in Appendix A.1. For example, naive pseudo-labeling (Lee et al., 2013) and loss weight ramp-up scheme (Samuli & Timo, 2017; Tarvainen & Valpola, 2017; Berthelot et al., 2019b;a) exploit the fixed sample weight to fully enroll all pseudo-labels into training. It is equivalent to set $\lambda = \lambda_{\mathrm{max}}$ and $\bar{\lambda} = 1 / N_U$ , regardless of $\mathbf{p}$ , which means each pseudolabel is assumed equally correct. We can verify the quantity of pseudo-labels is maximized to $\lambda_{\mathrm{max}}$ .
|
| 84 |
+
|
| 85 |
+
Table 1: Summary of the different sample weighting functions $\lambda (\mathbf{p})$, the probability mass function $\bar{\lambda} (\mathbf{p})$ of $\mathbf{p}$, and the quantity $f(\mathbf{p})$ and quality $g(\mathbf{p})$ of pseudo-labels used in previous methods and SoftMatch.
|
| 86 |
+
|
| 87 |
+
<table><tr><td>Scheme</td><td>Pseudo-Label</td><td>FixMatch</td><td>SoftMatch</td></tr><tr><td>λ(p)</td><td>λmax</td><td>{λmax, if max(p) ≥ τ, 0.0, otherwise.</td><td>{λmax exp(−(max(p)−μt)/2σt2), if max(p) < μt, λmax, otherwise.</td></tr><tr><td>λ(p)</td><td>1/NU</td><td>{1/ˆNUT, if max(p) ≥ τ, 0.0, otherwise.</td><td>{exp(−(max(pi)−μt)/2σt2)/NUT+ΣiNUT exp(−(max(pi)−μt)/2σt2), max(p) < μt 1/NUT+ΣiNUT exp(−(max(pi)−μt)/2σt2), max(p) ≥ μt</td></tr><tr><td>f(p)</td><td>λmax</td><td>λmaxˆNUT/NU</td><td>λmax/2 + λmax/NU ΣiNUT exp(−(max(pi)−μt)/2σt2)</td></tr><tr><td>g(p)</td><td>∑iNU1(ˆp=yu)/NU</td><td>∑iNUT1(ˆp=yu)/NUT</td><td>∑jNUTμt1(ˆp=yu)/2NUT+ ∑iNUT−NUTμt1(ˆp=yu) exp(−(max(pi)−μt)/σt2)/2(NUT−NUTμt)</td></tr><tr><td>Note</td><td>High Quantity Low Quality</td><td>Low Quantity High Quality</td><td>High Quantity High Quality</td></tr></table>

However, maximizing quantity also fully involves the erroneous pseudo-labels, resulting in deficient quality, especially in early training. This failure is due to the implicit uniform assumption on the PMF $\bar{\lambda}(\mathbf{p})$, which is far from the realistic situation.


In confidence thresholding (Arazo et al., 2020; Sohn et al., 2020; Xie et al., 2020), the sample weights can be viewed as being computed from a step function with the confidence $\max(\mathbf{p})$ as input and a pre-defined threshold $\tau$ as the breakpoint. It sets $\lambda(\mathbf{p})$ to $\lambda_{\mathrm{max}}$ when the confidence is above $\tau$ and to 0 otherwise. Denoting $\hat{N}_U^\tau = \sum_i^{N_U}\mathbb{1}(\max(\mathbf{p})\geq \tau)$ as the total number of samples whose predicted confidence is above the threshold, $\bar{\lambda}$ is a uniform PMF over the $\hat{N}_U^\tau$ samples whose confidence falls in the fixed range $[\tau, 1]$. This is equivalent to constraining the unlabeled data to $\hat{\mathcal{D}}_U^\tau = \{\mathbf{x}^u;\max(\mathbf{p}(\mathbf{y}|\mathbf{x}^u))\geq \tau\}$, with the others simply discarded. We can derive the quantity and quality as shown in Table 1. A trade-off, controlled by $\tau$, exists between the quality and quantity of pseudo-labels in confidence thresholding. On the one hand, while a high threshold ensures quality, it limits the quantity of enrolled samples. On the other hand, a low threshold sacrifices quality by fully involving more, but possibly erroneous, pseudo-labels in training. The trade-off still results from the over-simplification of the PMF relative to actual cases. Adaptive confidence thresholding (Zhang et al., 2021; Xu et al., 2021b) adopts dynamic and class-wise thresholds, which alleviates the trade-off by evolving the (class-wise) threshold during learning. This imposes a further relaxation on the assumed distribution, but the uniform nature of the assumed PMF remains unchanged.

While some methods indeed consider the definition of $\lambda (\mathbf{p})$ (Ren et al., 2020; Hu et al., 2021; Kim et al., 2022), interestingly, they all neglect the assumption induced on the PMF. The lack of sophisticated modeling of $\bar{\lambda} (\mathbf{p})$ usually leads to a quantity-quality trade-off in the unsupervised loss of SSL, which motivates us to propose SoftMatch to overcome this challenge.
# 3 SOFTMATCH
# 3.1 GAUSSIAN FUNCTION FOR SAMPLE WEIGHTING

Inherently different from previous methods, we assume the underlying PMF $\bar{\lambda}(\mathbf{p})$ of the marginal distribution follows a dynamic, truncated Gaussian distribution with mean $\mu_t$ and variance $\sigma_t^2$ at the $t$-th training iteration. We choose the Gaussian for its maximum-entropy property and its empirically verified better generalization. Note that this is equivalent to treating the deviation of the confidence $\max(\mathbf{p})$ from the Gaussian mean $\mu_t$ as a proxy measure of the correctness of the model's prediction, where samples with higher confidence are less prone to be erroneous than those with lower confidence, consistent with the observation shown in Fig. 1(a). To this end, we can derive $\lambda(\mathbf{p})$ as:


$$
\lambda(\mathbf{p}) = \begin{cases} \lambda_{\max} \exp\left(-\frac{\left(\max(\mathbf{p}) - \mu_t\right)^2}{2\sigma_t^2}\right), & \text{if } \max(\mathbf{p}) < \mu_t, \\ \lambda_{\max}, & \text{otherwise,} \end{cases} \tag{5}
$$


which is also a truncated Gaussian function of the confidence $\max(\mathbf{p})$, taking values in the range $[0, \lambda_{\max}]$.

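As an illustration, a minimal NumPy sketch of this weighting function is given below; the array shapes and names are assumptions for illustration, not the authors' released code.

```python
import numpy as np

def gaussian_weight(probs, mu_t, sigma2_t, lambda_max=1.0):
    """Truncated Gaussian sample weights (Eq. 5), a minimal sketch.

    probs:    array of shape (B, C), predicted class probabilities
    mu_t:     current estimate of the mean confidence
    sigma2_t: current estimate of the confidence variance
    """
    conf = probs.max(axis=-1)                                   # max(p) per sample
    weights = lambda_max * np.exp(-(conf - mu_t) ** 2 / (2.0 * sigma2_t))
    # Above the mean, the weight saturates at lambda_max (the truncation).
    return np.where(conf >= mu_t, lambda_max, weights)
```
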

However, the underlying true Gaussian parameters $\mu_t$ and $\sigma_t$ are still unknown. Although we can set the parameters to fixed values as in FixMatch (Sohn et al., 2020) or linearly interpolate them within some pre-defined range as in Ramp-up (Tarvainen & Valpola, 2017), this might again over-simplify the PMF assumption as discussed before. Since the PMF $\bar{\lambda}(\mathbf{p})$ is defined over $\max(\mathbf{p})$, we can instead fit the truncated Gaussian function directly to the confidence distribution for better generalization. Specifically, we can estimate $\mu$ and $\sigma^2$ from the historical predictions of the model. At the $t$-th iteration, we compute the empirical mean and variance as:


$$
\hat{\mu}_b = \hat{\mathbb{E}}_{B_U}[\max(\mathbf{p})] = \frac{1}{B_U} \sum_{i=1}^{B_U} \max(\mathbf{p}_i), \tag{6}
$$

$$
\hat{\sigma}_b^2 = \hat{\mathrm{Var}}_{B_U}[\max(\mathbf{p})] = \frac{1}{B_U} \sum_{i=1}^{B_U} \left(\max(\mathbf{p}_i) - \hat{\mu}_b\right)^2.
$$

We then aggregate the batch statistics for a more stable estimation, using Exponential Moving Average (EMA) with a momentum $m$ over previous batches:

$$
\hat{\mu}_t = m \hat{\mu}_{t-1} + (1 - m) \hat{\mu}_b,
$$

$$
\hat{\sigma}_t^2 = m \hat{\sigma}_{t-1}^2 + (1 - m) \frac{B_U}{B_U - 1} \hat{\sigma}_b^2, \tag{7}
$$

where we use unbiased variance for EMA and initialize $\hat{\mu}_0$ as $\frac{1}{C}$ and $\hat{\sigma}_0^2$ as 1.0. The estimated mean $\hat{\mu}_t$ and variance $\hat{\sigma}_t^2$ are plugged back into Eq. (5) to compute sample weights.
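
A small sketch of this estimation procedure, assuming NumPy arrays of per-sample probabilities, could look as follows; the class name `GaussianEstimator` is hypothetical.

```python
import numpy as np

class GaussianEstimator:
    """EMA estimation of the Gaussian parameters (Eqs. 6-7), a minimal sketch."""

    def __init__(self, num_classes, momentum=0.999):
        self.m = momentum
        self.mu = 1.0 / num_classes   # initial mean mu_0 = 1/C
        self.var = 1.0                # initial variance sigma_0^2 = 1.0

    def update(self, probs):
        conf = probs.max(axis=-1)                 # max(p) over the unlabeled batch
        b = conf.shape[0]
        mu_b = conf.mean()
        var_b = ((conf - mu_b) ** 2).mean()
        self.mu = self.m * self.mu + (1 - self.m) * mu_b
        # Bessel correction B_U / (B_U - 1) gives the unbiased batch variance.
        self.var = self.m * self.var + (1 - self.m) * var_b * b / (b - 1)
        return self.mu, self.var
```
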

Estimating the Gaussian parameters adaptively from the confidence distribution during training not only improves the generalization but also better resolves the quantity-quality trade-off. We can verify this by computing the quantity and quality of pseudo-labels as shown in Table 1. The derived quantity $f(\mathbf{p})$ is bounded by $\left[\frac{\lambda_{\mathrm{max}}}{2}\left(1 + \exp\left(-\frac{\left(\frac{1}{C} - \hat{\mu}_t\right)^2}{2\hat{\sigma}_t^2}\right)\right), \lambda_{\mathrm{max}}\right]$, indicating that SoftMatch guarantees a quantity of at least $\lambda_{\mathrm{max}} / 2$ during training. As the model learns better and becomes more confident, i.e., $\hat{\mu}_t$ increases and $\hat{\sigma}_t$ decreases, the lower bound on the quantity becomes much tighter. While the quantity remains high, the quality of pseudo-labels also improves. As the tail of the Gaussian grows exponentially tighter during training, the erroneous pseudo-labels on which the model is highly unconfident are assigned lower weights, and those whose confidence is around $\hat{\mu}_t$ are utilized more efficiently. The truncated Gaussian weighting function behaves as a soft and adaptive version of confidence thresholding, hence we term the proposed method SoftMatch.

# 3.2 UNIFORM ALIGNMENT FOR FAIR QUANTITY

As different classes exhibit different learning difficulties, the generated pseudo-labels can have a potentially imbalanced distribution, which may limit the generalization of the PMF assumption (Oliver et al., 2018; Zhang et al., 2021). To overcome this problem, we propose Uniform Alignment (UA), encouraging a more uniform pseudo-label distribution across classes. Specifically, we define the distribution of pseudo-labels as the expectation of the model predictions on unlabeled data: $\mathbb{E}_{\mathcal{D}_U}[\mathbf{p}(\mathbf{y}|\mathbf{x}^u)]$. During training, it is estimated as $\hat{\mathbb{E}}_{B_U}[\mathbf{p}(\mathbf{y}|\mathbf{x}^u)]$ using the EMA of batch predictions on unlabeled data. We use the ratio between a uniform distribution $\mathbf{u}(C) \in \mathbb{R}^C$ and $\hat{\mathbb{E}}_{B_U}[\mathbf{p}(\mathbf{y}|\mathbf{x}^u)]$ to normalize each prediction $\mathbf{p}$ on unlabeled data and use the normalized probability to calculate the per-sample loss weight. We formulate the UA operation as:


$$
\mathrm{UA}(\mathbf{p}) = \mathrm{Normalize}\left(\mathbf{p} \cdot \frac{\mathbf{u}(C)}{\hat{\mathbb{E}}_{B_U}[\mathbf{p}]}\right), \tag{8}
$$


where $\mathrm{Normalize}(\cdot) = (\cdot) / \sum(\cdot)$ ensures the normalized probability sums to 1.0. With UA plugged in, the final sample weighting function in SoftMatch becomes:


$$
\lambda(\mathbf{p}) = \begin{cases} \lambda_{\max} \exp\left(-\frac{\left(\max(\mathrm{UA}(\mathbf{p})) - \hat{\mu}_t\right)^2}{2\hat{\sigma}_t^2}\right), & \text{if } \max(\mathrm{UA}(\mathbf{p})) < \hat{\mu}_t, \\ \lambda_{\max}, & \text{otherwise.} \end{cases} \tag{9}
$$

When computing the sample weights, UA encourages larger weights to be assigned to less-predicted pseudo-labels and smaller weights to more-predicted pseudo-labels, alleviating the imbalance issue.
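
The two operations can be sketched together as follows; the helper names and the small `eps` guard against division by zero are assumptions for illustration, not the reference implementation.

```python
import numpy as np

def uniform_alignment(probs, ema_mean_probs, eps=1e-8):
    """Uniform Alignment (Eq. 8): rescale predictions toward a uniform class marginal.

    probs:          (B, C) predicted probabilities for the unlabeled batch
    ema_mean_probs: (C,) EMA estimate of E_{B_U}[p(y|x^u)]
    """
    num_classes = probs.shape[1]
    target = np.full(num_classes, 1.0 / num_classes)        # u(C), the uniform target
    aligned = probs * target / (ema_mean_probs + eps)
    return aligned / aligned.sum(axis=1, keepdims=True)      # Normalize(.) so rows sum to 1

def softmatch_weights(probs, ema_mean_probs, mu_t, sigma2_t, lambda_max=1.0):
    """Final SoftMatch sample weights (Eq. 9): Gaussian weighting on aligned confidence.

    Pseudo-labels themselves are still taken as the argmax of the original `probs`;
    only the sample weights use the aligned predictions.
    """
    conf = uniform_alignment(probs, ema_mean_probs).max(axis=-1)
    w = lambda_max * np.exp(-(conf - mu_t) ** 2 / (2.0 * sigma2_t))
    return np.where(conf >= mu_t, lambda_max, w)
```
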
Table 2: Top-1 error rate (%) on CIFAR-10, CIFAR-100, STL-10, and SVHN of 3 different random seeds. Numbers with * are taken from the original papers. The best number is in bold.
<table><tr><td>Dataset</td><td colspan="3">CIFAR-10</td><td colspan="3">CIFAR-100</td><td colspan="2">SVHN</td><td colspan="2">STL-10</td></tr><tr><td># Label</td><td>40</td><td>250</td><td>4,000</td><td>400</td><td>2,500</td><td>10,000</td><td>40</td><td>1,000</td><td>40</td><td>1,000</td></tr><tr><td>PseudoLabel</td><td>74.61±0.26</td><td>46.49±2.20</td><td>15.08±0.19</td><td>87.45±0.85</td><td>57.74±0.28</td><td>36.55±0.24</td><td>64.61±5.60</td><td>9.40±0.32</td><td>74.68±0.99</td><td>32.64±0.71</td></tr><tr><td>MeanTeacher</td><td>70.09±1.60</td><td>37.46±3.30</td><td>8.10±0.21</td><td>81.11±1.44</td><td>45.17±1.06</td><td>31.75±0.23</td><td>36.09±3.98</td><td>3.27±0.05</td><td>71.72±1.45</td><td>33.90±1.37</td></tr><tr><td>MixMatch</td><td>36.19±6.48</td><td>13.63±0.59</td><td>6.66±0.26</td><td>67.59±0.66</td><td>39.76±0.48</td><td>27.78±0.29</td><td>30.60±8.39</td><td>3.69±0.37</td><td>54.93±0.96</td><td>21.70±0.68</td></tr><tr><td>ReMixMatch</td><td>9.88±1.03</td><td>6.30±0.05</td><td>4.84±0.01</td><td>42.75±1.05</td><td>26.03±0.35</td><td>20.02±0.27</td><td>24.04±9.13</td><td>5.16±0.31</td><td>32.12±6.24</td><td>6.74±0.14</td></tr><tr><td>UDA</td><td>10.62±3.75</td><td>5.16±0.06</td><td>4.29±0.07</td><td>46.39±1.59</td><td>27.73±0.21</td><td>22.49±0.23</td><td>5.12±4.27</td><td>1.89±0.01</td><td>37.42±8.44</td><td>6.64±0.17</td></tr><tr><td>FixMatch</td><td>7.47±0.28</td><td>4.86±0.05</td><td>4.21±0.08</td><td>46.42±0.82</td><td>28.03±0.16</td><td>22.20±0.12</td><td>3.81±1.18</td><td>1.96±0.03</td><td>35.97±4.14</td><td>6.25±0.33</td></tr><tr><td>Influence</td><td>-</td><td>5.05±0.12*</td><td>4.35±0.06*</td><td>-</td><td>-</td><td>-</td><td>2.63±0.23*</td><td>2.34±0.15*</td><td>-</td><td>-</td></tr><tr><td>FlexMatch</td><td>4.97±0.06</td><td>4.98±0.09</td><td>4.19±0.01</td><td>39.94±1.62</td><td>26.49±0.20</td><td>21.90±0.15</td><td>8.19±3.20</td><td>6.72±0.30</td><td>29.15±4.16</td><td>5.77±0.18</td></tr><tr><td>SoftMatch</td><td>4.91±0.12</td><td>4.82±0.09</td><td>4.04±0.02</td><td>37.10±0.77</td><td>26.66±0.25</td><td>22.03±0.03</td><td>2.33±0.25</td><td>2.01±0.01</td><td>21.42±3.48</td><td>5.73±0.24</td></tr></table>

An essential difference between UA and the earlier Distribution Alignment (DA) (Berthelot et al., 2019a) lies in how the unsupervised loss is computed. The normalization operation biases the predicted probability towards the less-predicted classes. In DA, this might not be an issue, as the normalized prediction is used as the soft target in the cross-entropy loss. However, with pseudo-labeling, more erroneous pseudo-labels are likely created after normalization, which damages the quality. UA avoids this issue by using the original predictions to compute pseudo-labels and the normalized predictions to compute sample weights, maintaining both the quantity and quality of pseudo-labels in SoftMatch. The complete training algorithm is shown in Appendix A.2.

# 4 EXPERIMENTS

While most SSL literature evaluates only on image tasks, we extensively evaluate SoftMatch on various image and text datasets with both classic and long-tailed settings. Moreover, we provide an ablation study and a qualitative comparison to analyze the effectiveness of SoftMatch.

# 4.1 CLASSIC IMAGE CLASSIFICATION

Setup. For the classic image classification setting, we evaluate on CIFAR-10/100 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), STL-10 (Coates et al., 2011), and ImageNet (Deng et al., 2009), with various numbers of labeled data, where the class distribution of the labeled data is balanced. We use WRN-28-2 (Zagoruyko & Komodakis, 2016) for CIFAR-10 and SVHN, WRN-28-8 for CIFAR-100, WRN-37-2 (Zhou et al., 2020) for STL-10, and ResNet-50 (He et al., 2016) for ImageNet. For all experiments, we use the SGD optimizer with a momentum of 0.9, where the initial learning rate $\eta_0$ is set to 0.03. We adopt the cosine learning rate annealing scheme to adjust the learning rate over a total of $2^{20}$ training steps. The labeled batch size $B_L$ is set to 64 and the unlabeled batch size $B_U$ is set to 7 times $B_L$ for all datasets. We set $m$ to 0.999 and divide the estimated variance $\hat{\sigma}_t^2$ by 4, corresponding to $2\sigma$ of the Gaussian function. We record the EMA of model parameters for evaluation with a momentum of 0.999. Each experiment is run with three random seeds for the labeled data split, and we report the top-1 error rate. More details on the hyper-parameters are shown in Appendix A.3.1.

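For concreteness, a sketch of the cosine annealing schedule is given below. The exact functional form is an assumption, following the schedule popularized by FixMatch (eta = eta_0 * cos(7*pi*k / (16*K))), since the paper only states that cosine annealing is used.

```python
import math

def cosine_lr(step, total_steps=2 ** 20, eta_0=0.03):
    """Cosine learning-rate annealing over the full training run (assumed form)."""
    return eta_0 * math.cos(7.0 * math.pi * step / (16.0 * total_steps))
```
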

Results. SoftMatch obtains state-of-the-art results in almost all settings in Table 2 and Table 3, except CIFAR-100 with 2,500 and 10,000 labels and SVHN with 1,000 labels, where the results of SoftMatch are comparable to previous methods. Notably, FlexMatch exhibits a performance drop compared to FixMatch on SVHN, since it enrolls too many erroneous pseudo-labels at the beginning of training, which hinders learning afterward. In contrast, SoftMatch surpasses FixMatch by $1.48\%$ on SVHN with 40 labels, demonstrating its superior utilization of the pseudo-labels. On more realistic datasets, CIFAR-100 with 400 labels, STL-10 with 40 labels, and ImageNet with $10\%$ labels, SoftMatch exceeds FlexMatch by margins of $7.73\%$, $2.84\%$, and $1.33\%$, respectively. SoftMatch shows comparable results to FlexMatch on CIFAR-100 with 2,500 and 10,000 labels, whereas ReMixMatch (Berthelot et al., 2019a) demonstrates the best results there due to its Mixup (Zhang et al., 2017) and rotation losses.

Table 3: Top1 error rate $(\%)$ on ImageNet. The best number is in bold.
<table><tr><td># Label</td><td>100k</td><td>400k</td></tr><tr><td>FixMatch</td><td>43.66</td><td>32.28</td></tr><tr><td>FlexMatch</td><td>41.85</td><td>31.31</td></tr><tr><td>SoftMatch</td><td>40.52</td><td>29.49</td></tr></table>
Table 4: Top1 error rate $(\%)$ on CIFAR-10-LT and CIFAR-100-LT of 5 different random seeds. The best number is in bold.
<table><tr><td>Dataset</td><td colspan="3">CIFAR-10-LT</td><td colspan="3">CIFAR-100-LT</td></tr><tr><td>Imbalance γ</td><td>50</td><td>100</td><td>150</td><td>20</td><td>50</td><td>100</td></tr><tr><td>FixMatch</td><td>18.46±0.30</td><td>25.11±1.20</td><td>29.62±0.88</td><td>50.42±0.78</td><td>57.89±0.33</td><td>62.40±0.48</td></tr><tr><td>FlexMatch</td><td>18.13±0.19</td><td>25.51±0.92</td><td>29.80±0.36</td><td>49.11±0.60</td><td>57.20±0.39</td><td>62.70±0.47</td></tr><tr><td>SoftMatch</td><td>16.55±0.29</td><td>22.93±0.37</td><td>27.40±0.46</td><td>48.09±0.55</td><td>56.24±0.51</td><td>61.08±0.81</td></tr></table>
Table 5: Top1 error rate (%) on text datasets of 3 different random seeds. Best numbers are in bold.
<table><tr><td>Datasets</td><td colspan="2">AG News</td><td colspan="2">DBpedia</td><td>IMDb</td><td>Amazon-5</td><td>Yelp-5</td></tr><tr><td># Labels</td><td>40</td><td>200</td><td>70</td><td>280</td><td>100</td><td>1000</td><td>1000</td></tr><tr><td>UDA</td><td>16.83±1.68</td><td>14.34±1.9</td><td>4.11±1.44</td><td>6.93±3.85</td><td>18.33±0.61</td><td>50.29±4.6</td><td>47.49±6.83</td></tr><tr><td>FixMatch</td><td>17.10±3.13</td><td>11.24±1.43</td><td>2.18±0.92</td><td>1.42±0.18</td><td>7.59±0.28</td><td>42.70±0.53</td><td>39.56±0.7</td></tr><tr><td>FlexMatch</td><td>15.49±1.97</td><td>10.95±0.56</td><td>2.69±0.34</td><td>1.69±0.02</td><td>7.80±0.23</td><td>42.34±0.62</td><td>39.01±0.17</td></tr><tr><td>SoftMatch</td><td>12.68±0.23</td><td>10.41±0.13</td><td>1.68±0.34</td><td>1.27±0.1</td><td>7.48±0.12</td><td>42.14±0.92</td><td>39.31±0.45</td></tr></table>
# 4.2 LONG-TAILED IMAGE CLASSIFICATION
Setup. We evaluate SoftMatch on a more realistic and challenging setting of imbalanced SSL (Kim et al., 2020; Wei et al., 2021; Lee et al., 2021; Fan et al., 2022), where both the labeled and the unlabeled data exhibit long-tailed distributions. Following (Fan et al., 2022), the imbalance ratio $\gamma$ ranges from 50 to 150 and 20 to 100 for CIFAR-10-LT and CIFAR-100-LT, respectively. Here, $\gamma$ is used to exponentially decrease the number of samples from class 0 to class $C$ (Fan et al., 2022). We compare SoftMatch with two strong baselines: FixMatch (Sohn et al., 2020) and FlexMatch (Zhang et al., 2021). All experiments use the same WRN-28-2 (Zagoruyko & Komodakis, 2016) as the backbone and the same set of common hyper-parameters. Each experiment is repeated five times with different data splits, and we report the average test accuracy and the standard deviation. More details are in Appendix A.3.2.

Results. As shown in Table 4, SoftMatch achieves the best test error rate across all long-tailed settings. The performance improvement over the previous state of the art remains significant even at large imbalance ratios. For example, SoftMatch outperforms the second-best method by $2.4\%$ at $\gamma = 150$ on CIFAR-10-LT, which suggests the superior robustness of our method against data imbalance.


Discussion. Here we study the design choice of uniform alignment, as it plays a key role in SoftMatch's performance on imbalanced SSL. We conduct experiments with different target distributions for alignment. Specifically, the default uniform target distribution $\mathbf{u}(C)$ can be replaced by the ground-truth class distribution or the empirical class distribution estimated from the labeled data seen during training. The results in Fig. 3(a) show a clear advantage of using the uniform distribution. The uniform target pushes the class marginal towards uniform, which has a strong regularization effect of balancing the head and tail classes in imbalanced classification settings.

# 4.3 TEXT CLASSIFICATION

Setup. In addition to image classification, we further evaluate SoftMatch on the topic classification tasks AG News and DBpedia and the sentiment analysis tasks IMDb, Amazon-5, and Yelp-5 (Maas et al., 2011; Zhang et al., 2015). We split a validation set from the training data to evaluate the algorithms. For Amazon-5 and Yelp-5, we randomly sample 50,000 samples per class from the training data to reduce the training time. We fine-tune the pre-trained BERT-Base (Devlin et al., 2018) model for all datasets using UDA (Xie et al., 2020), FixMatch (Sohn et al., 2020), FlexMatch (Zhang et al., 2021), and SoftMatch. We use the AdamW (Kingma & Ba, 2014; Loshchilov & Hutter, 2017) optimizer with an initial learning rate of $1e-5$ and the same cosine scheduler as in the image classification tasks. All algorithms are trained for a total of $2^{18}$ iterations. The fine-tuned model is used directly for evaluation rather than its EMA version. To reduce GPU memory usage, we set both $B_L$ and $B_U$ to 16. Other algorithmic hyper-parameters stay the same as in the image classification tasks. Details of the data splitting and the hyper-parameters used are in Appendix A.3.3.


Results. The results on text datasets are shown in Table 5. SoftMatch consistently outperforms other methods, especially on the topic classification tasks. For instance, SoftMatch achieves an error rate of $12.68\%$ on AG News with only 40 labels and $1.68\%$ on DBpedia with 70 labels, surpassing the second best by margins of $2.81\%$ and $0.5\%$, respectively. On sentiment tasks, SoftMatch also shows the best results on Amazon-5 and IMDb, and comparable results to its counterparts on Yelp-5.

Figure 2: Qualitative analysis of FixMatch, FlexMatch, and SoftMatch on CIFAR-10 with 250 labels. (a) Evaluation error; (b) Quantity of pseudo-labels; (c) Quality of pseudo-labels; (d) Quality of pseudo-labels for the best and worst learned class. Quality is computed according to the underlying ground-truth labels. SoftMatch achieves significantly better performance. (Panels: (a) Eval. Error, (b) Quantity, (c) Quality, (d) Cls. Quality; figure images omitted.)

# 4.4 QUALITATIVE ANALYSIS

In this section, we provide a qualitative comparison of FixMatch (Sohn et al., 2020), FlexMatch (Zhang et al., 2021), and SoftMatch on CIFAR-10 with 250 labels from different aspects, as shown in Fig. 2. We compute the error rate and the quantity and quality of pseudo-labels to analyze the proposed method, using the ground truth of the unlabeled data, which is unseen during training.


SoftMatch utilizes the unlabeled data better. From Fig. 2(b) and Fig. 2(c), one can observe that SoftMatch obtains the highest quantity and quality of pseudo-labels throughout training. The quality of FixMatch and FlexMatch exhibits larger error and more fluctuation due to the nature of confidence thresholding, where significantly more wrong pseudo-labels are enrolled into training, leading to larger variance in quality and thus unstable training. While attaining high quality, SoftMatch also substantially improves the unlabeled data utilization ratio, i.e., the quantity, as shown in Fig. 2(b), demonstrating that the truncated Gaussian function can address the quantity-quality trade-off of the pseudo-labels. We also present the quality of the best and worst learned classes in Fig. 2(d), where both remain the highest throughout training for SoftMatch. The well-resolved quantity-quality trade-off allows SoftMatch to achieve better convergence and error rate, especially over the first 50k iterations, as shown in Fig. 2(a).

# 4.5 ABLATION STUDY

Sample Weighting Functions. We validate different instantiations of $\lambda(\mathbf{p})$ to verify the effectiveness of the truncated Gaussian assumption on the PMF $\bar{\lambda}(\mathbf{p})$, as shown in Fig. 3(b). Both the linear and quadratic functions fail to generalize and show a large performance gap compared with the Gaussian, due to their naive assumption on the PMF as discussed before. A truncated Laplacian assumption also works well across settings, but the truncated Gaussian demonstrates the most robust performance.


Gaussian Parameter Estimation. SoftMatch estimates the Gaussian parameters $\mu$ and $\sigma^2$ directly from the confidence produced on all unlabeled data during training. Here we compare this choice (All-Class) with two alternatives: (1) Fixed, which uses pre-defined $\mu$ and $\sigma^2$ of 0.95 and 0.01; (2) Per-Class, where a Gaussian weighting function is estimated for each class instead of a single global one. As shown in Fig. 3(c), the inferior performance of Fixed justifies the importance of adaptive weight adjustment in SoftMatch. Moreover, Per-Class achieves comparable performance with SoftMatch at 250 labels, but a significantly higher error rate at 40 labels. This is because accurate parameter estimation requires many predictions for each class, which are not available for Per-Class.


Uniform Alignment on Gaussian. To verify the impact of UA, we compare the performance of SoftMatch with and without UA, denoted as all-class with UA and all-class without UA in Fig. 3(d). Since per-class estimation alone can also be viewed as a way to achieve fair class utilization (Zhang et al., 2021), we include it in the comparison as well. Removing UA from SoftMatch causes a slight performance drop. Besides, per-class estimation produces significantly inferior results on SVHN.


Figure 3: Ablation study of SoftMatch. (a) Target distributions for Uniform Alignment (UA) in the long-tailed setting; (b) Error rate of different sample weighting functions; (c) Error rate of different Gaussian parameter estimation schemes, with UA enabled; (d) Ablation on UA with Gaussian parameter estimation. (Panels: (a) L.T. UA, (b) Weight. Func., (c) Gau. Param., (d) UA; figure images omitted.)

We further include the detailed ablation of sample weighting functions and several additional ablation studies in Appendix A.5 due to the space limit. These studies demonstrate that SoftMatch stays robust to different EMA momenta, variance ranges, and UA target distributions in balanced-distribution settings.

# 5 RELATED WORK

Pseudo-labeling (Lee et al., 2013) generates artificial labels for unlabeled data and trains the model in a self-training manner. Consistency regularization (Samuli & Timo, 2017) is proposed to achieve the goal of producing consistent predictions for similar data points. A variety of works focus on improving pseudo-labeling and consistency regularization from different aspects, such as loss weighting (Samuli & Timo, 2017; Tarvainen & Valpola, 2017; Iscen et al., 2019; Ren et al., 2020), data augmentation (Grandvalet et al., 2005; Sajjadi et al., 2016; Miyato et al., 2018; Berthelot et al., 2019b;a; Xie et al., 2020; Cubuk et al., 2020), label allocation (Tai et al., 2021), feature consistency (Li et al., 2021; Zheng et al., 2022; Fan et al., 2021), and confidence thresholding (Sohn et al., 2020; Zhang et al., 2021; Xu et al., 2021b).


The loss weight ramp-up strategy is proposed to balance learning on labeled and unlabeled data (Samuli & Timo, 2017; Tarvainen & Valpola, 2017; Berthelot et al., 2019b;a). By progressively increasing the loss weight on the unlabeled data, which prevents the model from involving too much ambiguous unlabeled data at the early stage of training, the model learns in a curriculum fashion. Per-sample loss weights have been utilized to better exploit the unlabeled data (Iscen et al., 2019; Ren et al., 2020). The previous work "Influence" (Ren et al., 2020) shares a similar goal with us, aiming to calculate a loss weight for each sample, motivated by the observation that not all unlabeled data are equal. SAW (Lai et al., 2022) utilizes effective weights (Cui et al., 2019) to overcome class-imbalance issues in SSL. Modeling of loss weights has also been explored in semi-supervised segmentation (Hu et al., 2021). De-biased self-training (Chen et al., 2022; Wang et al., 2022a) studies the data bias and training bias introduced by involving pseudo-labels in training, which is a similar exploration of quantity and quality to that of SoftMatch. Kim et al. (2022) proposed to use a small network to predict the loss weight, which is orthogonal to our work.

Confidence thresholding methods (Sohn et al., 2020; Xie et al., 2020; Zhang et al., 2021; Xu et al., 2021b) adopt a threshold to enroll the unlabeled samples with high confidence into training. FixMatch (Sohn et al., 2020) uses a fixed threshold to select pseudo-labels with high quality, which limits the data utilization ratio and leads to imbalanced pseudo-label distribution. Dash (Xu et al., 2021b) gradually increases the threshold during training to improve the utilization of unlabeled data. FlexMatch (Zhang et al., 2021) designs class-wise thresholds and lowers the thresholds for classes that are more difficult to learn, which alleviates class imbalance.
# 6 CONCLUSION

In this paper, we revisit the quantity-quality trade-off of pseudo-labeling and identify the core reason behind this trade-off from a unified sample weighting perspective. We propose SoftMatch with a truncated Gaussian weighting function and Uniform Alignment, which overcomes the trade-off and yields both high quantity and high quality of pseudo-labels during training. Extensive experiments demonstrate the effectiveness of our method on various tasks. We hope more works can be inspired in this direction, such as designing better weighting functions that can better discriminate correct pseudo-labels.

# REFERENCES
|
| 246 |
+
|
| 247 |
+
Eric Arazo, Diego Ortego, Paul Albert, Noel E O'Connor, and Kevin McGuinness. Pseudo-labeling and confirmation bias in deep semi-supervised learning. In 2020 International Joint Conference on Neural Networks (IJCNN), pp. 1-8. IEEE, 2020.
|
| 248 |
+
David Berthelot, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring. In International Conference on Learning Representations, 2019a.
|
| 249 |
+
David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. Advances in Neural Information Processing Systems, 32, 2019b.
|
| 250 |
+
David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, and Alex Kurakin. Adamatch: A unified approach to semi-supervised learning and domain adaptation. *ICLR*, 2021.
|
| 251 |
+
Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien (eds.). Semi-Supervised Learning. The MIT Press, 2006.
|
| 252 |
+
Baixu Chen, Junguang Jiang, Ximei Wang, Jianmin Wang, and Mingsheng Long. Debiased pseudo labeling in self-training. arXiv preprint arXiv:2202.07136, 2022.
|
| 253 |
+
Xiaokang Chen, Yuhui Yuan, Gang Zeng, and Jingdong Wang. Semi-supervised semantic segmentation with cross pseudo supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2613-2622, 2021.
|
| 254 |
+
Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 215-223. JMLR Workshop and Conference Proceedings, 2011.
|
| 255 |
+
Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702-703, 2020.
|
| 256 |
+
Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-balanced loss based on effective number of samples. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019.
|
| 257 |
+
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009.
|
| 258 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
|
| 259 |
+
Yue Fan, Anna Kukleva, and Bernt Schiele. Revisiting consistency regularization for semi-supervised learning. In DAGM German Conference on Pattern Recognition, pp. 63-78. Springer, 2021.
|
| 260 |
+
Yue Fan, Dengxin Dai, and Bernt Schiele. Cossl: Co-learning of representation and classifier for imbalanced semi-supervised learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
|
| 261 |
+
Yves Grandvalet, Yoshua Bengio, et al. Semi-supervised learning by entropy minimization. volume 367, pp. 281-296, 2005.
|
| 262 |
+
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International Conference on Machine Learning, pp. 1321-1330. PMLR, 2017.
|
| 263 |
+
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
|
| 264 |
+
|
| 265 |
+
Hanzhe Hu, Fangyun Wei, Han Hu, Qiwei Ye, Jinshi Cui, and Liwei Wang. Semi-supervised semantic segmentation via adaptive equalization learning. Advances in Neural Information Processing Systems, 34:22106-22118, 2021.
|
| 266 |
+
Ahmet Iscen, Giorgos Tolias, Yannis Avrithis, and Ondrej Chum. Label propagation for deep semi-supervised learning. In CVPR, 2019.
|
| 267 |
+
Jaehyung Kim, Youngbum Hur, Sejun Park, Eunho Yang, Sung Ju Hwang, and Jinwoo Shin. Distribution aligning refinery of pseudo-label for imbalanced semi-supervised learning. Advances in Neural Information Processing Systems, 33:14567-14579, 2020.
|
| 268 |
+
Jiwon Kim, Youngjo Min, Daehwan Kim, Gyuseong Lee, Junyoung Seo, Kwangrok Ryoo, and Seungryong Kim. Conmatch: Semi-supervised learning with confidence-guided consistency regularization. In European Conference on Computer Vision, 2022.
|
| 269 |
+
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
|
| 270 |
+
Alex Krizhevsky et al. Learning multiple layers of features from tiny images. 2009.
|
| 271 |
+
Zhengfeng Lai, Chao Wang, Henry Gunawan, Sen-Ching S Cheung, and Chen-nee Chuah. Smoothed adaptive weighting for imbalanced semi-supervised learning: Improve reliability against unknown distribution data. In International Conference on Machine Learning, pp. 11828-11843. PMLR, 2022.
|
| 272 |
+
Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in representation learning, ICML, volume 3, pp. 896, 2013.
|
| 273 |
+
Hyuck Lee, Seungjae Shin, and Heeyoung Kim. Abc: Auxiliary balanced classifier for class-imbalanced semi-supervised learning. Advances in Neural Information Processing Systems, 34, 2021.
|
| 274 |
+
Junnan Li, Caiming Xiong, and Steven CH Hoi. Comatch: Semi-supervised learning with contrastive graph regularization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9475-9484, 2021.
|
| 275 |
+
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
|
| 276 |
+
Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies, pp. 142-150, 2011.
|
| 277 |
+
Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979-1993, 2018.
|
| 278 |
+
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.
|
| 279 |
+
Avital Oliver, Augustus Odena, Colin A Raffel, Ekin Dogus Cubuk, and Ian Goodfellow. Realistic evaluation of deep semi-supervised learning algorithms. Advances in neural information processing systems, 31, 2018.
|
| 280 |
+
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.
|
| 281 |
+
Hieu Pham, Zihang Dai, Qizhe Xie, and Quoc V Le. Meta pseudo labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11557-11568, 2021.
|
| 282 |
+
Zhongzheng Ren, Raymond A. Yeh, and Alexander G. Schwing. Not all unlabeled data are equal: Learning to weight data in semi-supervised learning. In Neural Information Processing Systems (NeurIPS), 2020.
|
| 283 |
+
|
| 284 |
+
Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. Advances in neural information processing systems, 29:1163-1171, 2016.
|
| 285 |
+
Laine Samuli and Aila Timo. Temporal ensembling for semi-supervised learning. In International Conference on Learning Representations (ICLR), volume 4, pp. 6, 2017.
|
| 286 |
+
Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Advances in Neural Information Processing Systems, 33, 2020.
|
| 287 |
+
Kai Sheng Tai, Peter Bailis, and Gregory Valiant. Sinkhorn label allocation: Semi-supervised classification via annealed self-training, 2021.
|
| 288 |
+
Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 1195–1204, 2017.
|
| 289 |
+
Xudong Wang, Zhirong Wu, Long Lian, and Stella X Yu. Debiased learning from naturally imbalanced pseudo-labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14647-14657, 2022a.
|
| 290 |
+
Yidong Wang, Hao Chen, Yue Fan, Wang Sun, Ran Tao, Wenxin Hou, Renjie Wang, Linyi Yang, Zhi Zhou, Lan-Zhe Guo, Heli Qi, Zhen Wu, Yu-Feng Li, Satoshi Nakamura, Wei Ye, Marios Savvides, Bhiksha Raj, Takahiro Shinozaki, Bernt Schiele, Jindong Wang, Xing Xie, and Yue Zhang. Usb: A unified semi-supervised learning benchmark. In Neural Information Processing Systems (NeurIPS), 2022b.
|
| 291 |
+
Chen Wei, Kihyuk Sohn, Clayton Mellina, Alan Yuille, and Fan Yang. Crest: A classrebalancing self-training framework for imbalanced semi-supervised learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10857-10866, 2021.
|
| 292 |
+
Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. Unsupervised data augmentation for consistency training. Advances in Neural Information Processing Systems, 33, 2020.
|
| 293 |
+
Mengde Xu, Zheng Zhang, Han Hu, Jianfeng Wang, Lijuan Wang, Fangyun Wei, Xiang Bai, and Zicheng Liu. End-to-end semi-supervised object detection with soft teacher. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3060-3069, 2021a.
|
| 294 |
+
Yi Xu, Lei Shang, Jinxing Ye, Qi Qian, Yu-Feng Li, Baigui Sun, Hao Li, and Rong Jin. Dash: Semi-supervised learning with dynamic thresholding. In International Conference on Machine Learning, pp. 11525-11536. PMLR, 2021b.
|
| 295 |
+
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In *British Machine Vision Conference* 2016. British Machine Vision Association, 2016.
|
| 296 |
+
Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. Advances in Neural Information Processing Systems, 34, 2021.
|
| 297 |
+
Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
|
| 298 |
+
Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. Advances in neural information processing systems, 28:649-657, 2015.
|
| 299 |
+
Mingkai Zheng, Shan You, Lang Huang, Fei Wang, Chen Qian, and Chang Xu. Simmatch: Semi-supervised learning with similarity matching. arXiv preprint arXiv:2203.06915, 2022.
|
| 300 |
+
Tianyi Zhou, Shengjie Wang, and Jeff Bilmes. Time-consistent self-supervision for semi-supervised learning. In International Conference on Machine Learning, pp. 11523-11533. PMLR, 2020.
# A APPENDIX
# A.1 QUANTITY-QUALITY TRADE-OFF

In this section, we present the detailed definitions and derivations of the quantity and quality formulations. Importantly, we identify that the sample weighting function $\lambda(\mathbf{p}) \in [0, \lambda_{\mathrm{max}}]$ is directly related to the (implicit) assumption of a probability mass function (PMF) over $\mathbf{p}$ for $\mathbf{p} \in \{\mathbf{p}(\mathbf{y}|\mathbf{x}^u); \mathbf{x}^u \in \mathcal{D}_U\}$, i.e., the distribution of $\mathbf{p}$. From this unified sample weighting perspective, we analyze the quantity and quality of the related methods and of SoftMatch.

# A.1.1 QUANTITY AND QUALITY

# Derivation of Definition 2.1

The definition and derivation of quantity $f(\mathbf{p})$ of pseudo-labels is rather straightforward. We define the quantity as the percentage/ratio of unlabeled data enrolled in the weighted unsupervised loss. In other words, the quantity is the average sample weights on unlabeled data:

$$
f(\mathbf{p}) = \sum_{i}^{N_U} \frac{\lambda(\mathbf{p}_i)}{N_U} = \mathbb{E}_{\mathcal{D}_U}[\lambda(\mathbf{p}_i)], \tag{10}
$$

where each unlabeled sample is drawn uniformly from $\mathcal{D}_U$ and $f(\mathbf{p}) \in [0, \lambda_{\mathrm{max}}]$.


# Derivation of Definition 2.2

We define the quality $g(\mathbf{p})$ of pseudo-labels as the percentage/ratio of correct pseudo-labels enrolled in the weighted unsupervised loss, assuming the ground truth label $\mathbf{y}^u$ of unlabeled data is known. With the 0/1 correct indicator function $\gamma (\mathbf{p})$ being defined as:

$$
\gamma(\mathbf{p}) = \mathbb{1}(\hat{\mathbf{p}} = \mathbf{y}^u) \in \{0, 1\}, \tag{11}
$$

where $\hat{\mathbf{p}}$ is the one-hot vector of the pseudo-label $\mathrm{argmax}(\mathbf{p})$. We can formulate quality as:


$$
\begin{aligned} g(\mathbf{p}) &= \sum_{i}^{N_U} \gamma(\mathbf{p}_i) \frac{\lambda(\mathbf{p}_i)}{\sum_{j}^{N_U} \lambda(\mathbf{p}_j)} \\ &= \sum_{i}^{N_U} \gamma(\mathbf{p}_i) \bar{\lambda}(\mathbf{p}_i) \\ &= \mathbb{E}_{\bar{\lambda}(\mathbf{p})}[\gamma(\mathbf{p})] \\ &= \mathbb{E}_{\bar{\lambda}(\mathbf{p})}[\mathbb{1}(\hat{\mathbf{p}} = \mathbf{y}^u)] \in [0, 1]. \end{aligned} \tag{12}
$$

We denote $\bar{\lambda}(\mathbf{p})$ as the probability mass function (PMF) of $\mathbf{p}$, with $\bar{\lambda}(\mathbf{p}) \geq 0$ and $\sum \bar{\lambda}(\mathbf{p}) = 1.0$.


This indicates that, once $\lambda(\mathbf{p})$ is set to a function, an assumption on the PMF of $\mathbf{p}$ is made. In most of the previous methods (Tarvainen & Valpola, 2017; Berthelot et al., 2019b;a; Sohn et al., 2020; Zhang et al., 2021; Xu et al., 2021b), although they do not explicitly set $\lambda(\mathbf{p})$, the introduction of loss weight schemes implicitly relates to the PMF of $\mathbf{p}$. While the ground truth label $\mathbf{y}^u$ is actually unknown in practice, we can still use it for theoretical analysis.


In the following sections, we explicitly derive the sample weighting function $\lambda(\mathbf{p})$, the probability mass function $\bar{\lambda}(\mathbf{p})$, the quantity $f(\mathbf{p})$, and the quality $g(\mathbf{p})$ for each relevant method.

# A.1.2 NAIVE PSEUDO-LABELING

In naive pseudo-labeling (Lee et al., 2013), the pseudo-labels are directly used to train the model itself. This is equivalent to setting $\lambda(\mathbf{p})$ to a fixed value $\lambda_{\mathrm{max}}$, which is a hyper-parameter. We can write:


$$
\lambda(\mathbf{p}) = \lambda_{\max}, \tag{13}
$$

$$
\bar{\lambda}(\mathbf{p}) = \frac{\lambda_{\max}}{N_U \lambda_{\max}} = \frac{1}{N_U}, \tag{14}
$$

$$
f(\mathbf{p}) = \sum_{i}^{N_U} \frac{\lambda_{\max}}{N_U} = \lambda_{\max}, \tag{15}
$$

$$
g(\mathbf{p}) = \sum_{i}^{N_U} \frac{\mathbb{1}\left(\hat{\mathbf{p}}_i = \mathbf{y}_i^u\right)}{N_U}. \tag{16}
$$


We can observe that naive self-training maximizes the quantity of pseudo-labels by fully enrolling them into training. However, full enrollment results in pseudo-labels of low quality. At the beginning of training, a large portion of the pseudo-labels would be wrong, i.e., $\gamma(\mathbf{p}) = 0$, since the model is not yet well learned. The wrong pseudo-labels usually lead to confirmation bias (Guo et al., 2017; Arazo et al., 2020) as training progresses, where the model memorizes the wrong pseudo-labels and becomes very confident about them. We can also notice that, by setting $\lambda(\mathbf{p})$ to a fixed value $\lambda_{\mathrm{max}}$, we implicitly assume the PMF of the model's prediction $\mathbf{p}$ is uniform, which is far from the realistic distribution.

# A.1.3 LOSS WEIGHT RAMP UP

In earlier attempts at semi-supervised learning, a number of works (Tarvainen & Valpola, 2017; Berthelot et al., 2019b;a) exploit the loss weight ramp-up technique to avoid involving too many erroneous pseudo-labels in early training and to let the model focus on learning from labeled data first. In this case, the sample weighting function is formulated as a function of the training iteration $t$, which is linearly increased during training and reaches its maximum $\lambda_{\mathrm{max}}$ after $T$ warm-up iterations. Thus we have:


$$
\lambda(\mathbf{p}) = \lambda_{\max} \min\left(\frac{t}{T}, 1\right), \tag{17}
$$

$$
\bar{\lambda}(\mathbf{p}) = \frac{\lambda_{\max} \min\left(\frac{t}{T}, 1\right)}{N_U \lambda_{\max} \min\left(\frac{t}{T}, 1\right)} = \frac{1}{N_U}, \tag{18}
$$

$$
f(\mathbf{p}) = \lambda_{\max} \min\left(\frac{t}{T}, 1\right), \tag{19}
$$

$$
g(\mathbf{p}) = \sum_{i}^{N_U} \frac{\mathbb{1}\left(\hat{\mathbf{p}}_i = \mathbf{y}_i^u\right)}{N_U}, \tag{20}
$$


which demonstrates the same uniform assumption on the PMF and the same quality function as naive self-training. It also indicates that, as long as the same sample weight is used for all unlabeled data, a uniform assumption on the PMF over $\mathbf{p}$ is made.

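A one-line sketch of this ramp-up weight (with a hypothetical `warmup_steps` argument) makes the uniformity explicit: the weight depends only on the iteration, never on the individual prediction.

```python
def rampup_weight(step, warmup_steps, lambda_max=1.0):
    """Loss weight ramp-up (Eq. 17), a minimal sketch: one shared weight per step."""
    return lambda_max * min(step / warmup_steps, 1.0)
```
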
# A.1.4 FIXED CONFIDENCE THRESHOLDING

Confidence thresholding introduces a filtering mechanism, where unlabeled data whose prediction confidence $\max(\mathbf{p})$ is above the pre-defined threshold $\tau$ are fully enrolled during training, while the others are ignored (Xie et al., 2020; Sohn et al., 2020). The confidence thresholding mechanism can be formulated by setting $\lambda(\mathbf{p})$ as a step function: when the confidence is above the threshold, the
sample weight is set to $\lambda_{\mathrm{max}}$ , and otherwise 0. We can derive:

$$
\lambda(\mathbf{p}) = \begin{cases} \lambda_{\max}, & \text{if } \max(\mathbf{p}) \geq \tau, \\ 0.0, & \text{otherwise.} \end{cases} \tag{21}
$$

$$
\bar{\lambda}(\mathbf{p}) = \frac{\mathbb{1}(\max(\mathbf{p}) \geq \tau)}{\sum_{i}^{N_U} \mathbb{1}(\max(\mathbf{p}_i) \geq \tau)} = \begin{cases} \frac{1}{\hat{N}_U}, & \text{if } \max(\mathbf{p}) \geq \tau, \\ 0.0, & \text{otherwise.} \end{cases} \tag{22}
$$

$$
f(\mathbf{p}) = \sum_{i}^{N_U} \frac{\mathbb{1}\left(\max(\mathbf{p}_i) \geq \tau\right) \lambda_{\max}}{N_U} = \lambda_{\max} \frac{\hat{N}_U}{N_U}, \tag{23}
$$

$$
g(\mathbf{p}) = \sum_{i}^{\hat{N}_U} \frac{\mathbb{1}\left(\hat{\mathbf{p}}_i = \mathbf{y}_i^u\right)}{\hat{N}_U}, \tag{25}
$$


where we set $\hat{N}_U = \sum_i^{N_U}\mathbb{1}(\max(\mathbf{p}_i)\geq \tau)$, i.e., the number of unlabeled samples whose prediction confidence $\max(\mathbf{p})$ is above the threshold $\tau$.


Interestingly, one can find that confidence thresholding directly models the PMF over the prediction confidence $\max(\mathbf{p})$. Although it still makes the uniform assumption, as shown in Eq. (22), it constrains the probability mass to concentrate in the range $[\tau, 1]$. As the model is more confident about the retained pseudo-labels, and the unconfident ones are excluded from training, it is more likely that $\hat{\mathbf{p}}$ is close to $\mathbf{y}^u$, thus keeping the quality of the pseudo-labels high if a high threshold is used. However, a higher threshold corresponds to a smaller $\hat{N}_U$, directly reducing the quantity of pseudo-labels. We can clearly observe a trade-off between the quantity and quality of pseudo-labels when using fixed confidence thresholding. In addition, assuming the PMF of $\max(\mathbf{p})$ is uniform within the range $[\tau, 1]$ still does not reflect the actual distribution of confidence during training.

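For comparison with the Gaussian weighting in the main text, the step-function weight of Eq. (21) can be sketched as follows; the default value of `tau` is only an assumed example.

```python
import numpy as np

def threshold_weight(probs, tau=0.95, lambda_max=1.0):
    """Fixed confidence thresholding (Eq. 21): a step function on max(p)."""
    conf = probs.max(axis=-1)
    return np.where(conf >= tau, lambda_max, 0.0)
```
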
# A.1.5 SOFTMATCH

In this paper, we propose SoftMatch to overcome the trade-off between the quantity and quality of pseudo-labels. Different from previous methods, which implicitly make over-simplified assumptions on the distribution of $\mathbf{p}$, we directly model the PMF of $\max(\mathbf{p})$, from which we derive the sample weighting function $\lambda(\mathbf{p})$ used in SoftMatch.


We assume the confidence of model predictions $\max(\mathbf{p})$ generally follows the Gaussian distribution $\mathcal{N}(\max(\mathbf{p}); \hat{\mu}_t, \hat{\sigma}_t)$ when $\max(\mathbf{p}) < \mu_t$ and a uniform distribution when $\max(\mathbf{p}) \geq \mu_t$. Note that $\mu_t$ and $\sigma_t$ change along training as the model learns better. One can see that the uniform part of the PMF is similar to that of confidence thresholding; it is the Gaussian part that makes SoftMatch distinct from previous methods. In SoftMatch, we directly estimate the Gaussian parameters on $\max(\mathbf{p})$ using Maximum Likelihood Estimation (MLE), rather than setting them to fixed values, which is more consistent with the actual distribution of prediction confidence. Using the definition of the PMF $\bar{\lambda}(\mathbf{p})$, we can directly write the sample weighting function $\lambda(\mathbf{p})$ of SoftMatch as:


$$
\lambda(\mathbf{p}) = \begin{cases} \lambda_{\max} \sqrt{2\pi} \sigma_t \, \phi\left(\max(\mathbf{p}); \mu_t, \sigma_t\right), & \max(\mathbf{p}) < \mu_t, \\ \lambda_{\max}, & \max(\mathbf{p}) \geq \mu_t, \end{cases} \tag{26}
$$

where $\phi(x; \mu, \sigma) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)$. Without loss of generality, we can assume $\max(\mathbf{p}_i) < \mu_t$ for $i \in [0, \frac{N_U}{2}]$, as $\mu_t = \frac{1}{N_U}\sum_{i}^{N_U}\max(\mathbf{p}_i)$ (shown in Eq. (6)) and thus $\mathcal{P}(\max(\mathbf{p}) < \mu_t) = 0.5$.

Therefore, $\sum \lambda (\mathbf{p})$ is computed as follows:

$$
\begin{aligned} \sum_{i}^{N_U} \lambda(\mathbf{p}_i) &= \sum_{i=1}^{\frac{N_U}{2}} \lambda(\mathbf{p}_i) + \sum_{j=\frac{N_U}{2}+1}^{N_U} \lambda(\mathbf{p}_j) \\ &= \sum_{i}^{\frac{N_U}{2}} \lambda_{\max} \sqrt{2\pi} \sigma_t \, \phi\left(\max(\mathbf{p}_i); \mu_t, \sigma_t\right) + \sum_{j=\frac{N_U}{2}+1}^{N_U} \lambda_{\max} \\ &= \lambda_{\max} \left(\frac{N_U}{2} + \sum_{i}^{\frac{N_U}{2}} \exp\left(-\frac{(\max(\mathbf{p}_i) - \mu_t)^2}{2\sigma_t^2}\right)\right). \end{aligned} \tag{27}
$$


Further,

$$
\begin{aligned} f(\mathbf{p}) &= \frac{1}{N_U} \sum_{i}^{N_U} \lambda(\mathbf{p}_i) \\ &= \frac{1}{N_U} \left(\sum_{i=1}^{\frac{N_U}{2}} \lambda(\mathbf{p}_i) + \sum_{j=\frac{N_U}{2}+1}^{N_U} \lambda(\mathbf{p}_j)\right) \\ &= \frac{\lambda_{\max}}{N_U} \left(\frac{N_U}{2} + \sum_{j}^{\frac{N_U}{2}} \exp\left(-\frac{\left(\max(\mathbf{p}_j) - \mu_t\right)^2}{2\sigma_t^2}\right)\right) \\ &= \frac{\lambda_{\max}}{2} + \frac{\lambda_{\max}}{N_U} \sum_{j}^{\frac{N_U}{2}} \exp\left(-\frac{(\max(\mathbf{p}_j) - \mu_t)^2}{2\sigma_t^2}\right). \end{aligned} \tag{28}
$$


Since $\max(\mathbf{p}_i) < \mu_t$ for $i \in [0, \frac{N_U}{2}]$, and the confidence of a $C$-class prediction satisfies $\max(\mathbf{p}_i) \geq \frac{1}{C}$, we have

$$
\exp\left(-\frac{\left(\frac{1}{C} - \mu_t\right)^2}{2\sigma_t^2}\right) \leq \exp\left(-\frac{\left(\max(\mathbf{p}_i) - \mu_t\right)^2}{2\sigma_t^2}\right) < 1,
$$

$$
\frac{N_U}{2} \exp\left(-\frac{\left(\frac{1}{C} - \mu_t\right)^2}{2\sigma_t^2}\right) \leq \sum_{i}^{\frac{N_U}{2}} \exp\left(-\frac{(\max(\mathbf{p}_i) - \mu_t)^2}{2\sigma_t^2}\right) < \frac{N_U}{2},
$$

$$
\frac{\lambda_{\max}}{2} < \frac{\lambda_{\max}}{2} \left(1 + \exp\left(-\frac{\left(\frac{1}{C} - \mu_t\right)^2}{2\sigma_t^2}\right)\right) \leq f(\mathbf{p}) < \lambda_{\max}.
$$

Therefore, SoftMatch can guarantee at least half of the possible contribution to the final loss, improving the utilization of unlabeled data. Besides, as $\sigma_{t}$ is also estimated from $\max (\mathbf{p})$ , the lower bound of $f(\mathbf{p})$ would become tighter during training with a better and more confident model.
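
A quick numerical sanity check of this bound, using synthetic confidences (all parameter values below are assumptions for illustration only), could look like:

```python
import numpy as np

# Synthetic confidences in [1/C, 1]; the distribution parameters are assumed.
rng = np.random.default_rng(0)
num_classes, n_u, lambda_max = 10, 10_000, 1.0
conf = np.clip(rng.normal(loc=0.7, scale=0.15, size=n_u), 1.0 / num_classes, 1.0)

mu_t, sigma2_t = conf.mean(), conf.var()
weights = np.where(conf >= mu_t, lambda_max,
                   lambda_max * np.exp(-(conf - mu_t) ** 2 / (2 * sigma2_t)))

f_p = weights.mean()  # empirical quantity f(p)
lower = 0.5 * lambda_max * (1 + np.exp(-(1 / num_classes - mu_t) ** 2 / (2 * sigma2_t)))
print(f"f(p) = {f_p:.3f}, derived lower bound ~ {lower:.3f}, upper bound = {lambda_max}")
# In this example f(p) comfortably exceeds lambda_max / 2 and stays below lambda_max.
```
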

With the derived $\sum \lambda(\mathbf{p})$, we can write the PMF $\bar{\lambda}(\mathbf{p})$ in SoftMatch as:


$$
\bar{\lambda}(\mathbf{p}) = \begin{cases} \dfrac{\sqrt{2\pi} \sigma_t \, \phi(\max(\mathbf{p}); \mu_t, \sigma_t)}{\frac{N_U}{2} + \sum_{i}^{\frac{N_U}{2}} \sqrt{2\pi} \sigma_t \, \phi(\max(\mathbf{p}_i); \mu_t, \sigma_t)}, & \max(\mathbf{p}) < \mu_t, \\[2ex] \dfrac{1}{\frac{N_U}{2} + \sum_{i}^{\frac{N_U}{2}} \sqrt{2\pi} \sigma_t \, \phi(\max(\mathbf{p}_i); \mu_t, \sigma_t)}, & \max(\mathbf{p}) \geq \mu_t, \end{cases} \tag{29}
$$

and further derive the quality of pseudo-labels in SoftMatch as:
|
| 457 |
+
|
| 458 |
+
$$
|
| 459 |
+
\begin{array}{l} g(\mathbf{p}) = \sum_{i}^{N_{U}} \mathbb{1}(\hat{\mathbf{p}}_{i} = \mathbf{y}_{i}^{u}) \bar{\lambda}(\mathbf{p}) \\ = \frac{1}{\sum_{k}^{N_{U}} \lambda(\mathbf{p}_{k})} \sum_{i}^{N_{U}} \gamma(\mathbf{p}_{i}) \lambda(\mathbf{p}_{i}) \\ = \frac{1}{\sum_{k}^{N_{U}} \lambda\left(\mathbf{p}_{k}\right)} \left(\sum_{i}^{\frac{N_{U}}{2}} \gamma\left(\mathbf{p}_{i}\right) \lambda\left(\mathbf{p}_{i}\right) + \sum_{j=\frac{N_{U}}{2}+1}^{N_{U}} \gamma\left(\mathbf{p}_{j}\right) \lambda\left(\mathbf{p}_{j}\right)\right) \tag{30} \\ = \sum_{i}^{\frac{N_{U}}{2}} \gamma(\mathbf{p}_{i}) \frac{\lambda_{\max} \sqrt{2\pi} \sigma_{t} \phi(\max(\mathbf{p}_{i}); \mu_{t}, \sigma_{t})}{\sum_{k}^{N_{U}} \lambda(\mathbf{p}_{k})} + \sum_{j}^{\frac{N_{U}}{2}} \gamma(\mathbf{p}_{j}) \frac{\lambda_{\max}}{\sum_{k}^{N_{U}} \lambda(\mathbf{p}_{k})} \\ = \sum_{i}^{N_{U} - \hat{N}_{U}} \frac{\mathbb{1}(\hat{\mathbf{p}}_{i} = \mathbf{y}_{i}^{u}) \exp\left(-\frac{(\max(\mathbf{p}_{i}) - \mu_{t})^{2}}{2\sigma_{t}^{2}}\right)}{2(N_{U} - \hat{N}_{U})} + \sum_{j}^{\hat{N}_{U}} \frac{\mathbb{1}(\hat{\mathbf{p}}_{j} = \mathbf{y}_{j}^{u})}{2\hat{N}_{U}} \\ \end{array}
|
| 460 |
+
$$
|
| 461 |
+
|
| 462 |
+
where $\hat{N}_U = \sum_i^{N_U}\mathbb{1}(\max (\mathbf{p}_i)\geq \mu_t)$. From the above equation, we can see that for pseudo-labels whose confidence is above $\mu_{t}$, the quality is as high as in confidence thresholding; for pseudo-labels whose confidence is lower, and thus more likely to be erroneous, the quality is weighted by the deviation from $\mu_t$.
|
| 463 |
+
|
| 464 |
+
At the beginning of training, when the model is unconfident about most of the pseudo-labels, SoftMatch guarantees a quantity of at least $\frac{\lambda_{\mathrm{max}}}{2}$ and a quality of at least $\sum_{j}^{\hat{N}_{U}} \frac{\mathbb{1}(\hat{\mathbf{p}}_{j} = \mathbf{y}_{j}^{u})}{2 \hat{N}_{U}}$. As the model learns better and becomes more confident, i.e., $\mu_{t}$ increases and $\sigma_{t}$ decreases, the lower bound on quantity becomes tighter. The increase in $\hat{N}_{U}$ leads to better quality, since pseudo-labels whose confidence is below $\mu_{t}$ are further down-weighted. Therefore, SoftMatch overcomes the quantity-quality trade-off.
|
| 465 |
+
|
| 466 |
+
# A.2 ALGORITHM
|
| 467 |
+
|
| 468 |
+
We present the pseudo-code of SoftMatch in this section. SoftMatch adopts the truncated Gaussian function with parameters estimated from the EMA of the confidence distribution at each training step, which introduces only negligible extra computation.
|
| 469 |
+
|
| 470 |
+
# Algorithm 1 SoftMatch algorithm.
|
| 471 |
+
|
| 472 |
+
1: Input: Number of classes $C$ , labeled batch $\{\mathbf{x}_i, \mathbf{y}_i\}_{i \in [B_L]}$ , unlabeled batch $\{\mathbf{u}_i\}_{i \in [B_U]}$ , and EMA momentum $m$ .
|
| 473 |
+
|
| 474 |
+
2: Define: $\mathbf{p}_i = \mathbf{p}(\mathbf{y}|\omega (\mathbf{u}_i))$
|
| 475 |
+
|
| 476 |
+
3: $\mathcal{L}_s = \frac{1}{B_L}\sum_{i = 1}^{B_L}\mathcal{H}(\mathbf{y}_i,\mathbf{p}(\mathbf{y}|\omega (\mathbf{x}_i)))$ $\triangleright$ Compute $\mathcal{L}_s$ on labeled batch
|
| 477 |
+
|
| 478 |
+
4: $\hat{\mu}_b = \frac{1}{B_U}\sum_{i = 1}^{B_U}\max (\mathbf{p}_i)$ $\triangleright$ Compute the mean of confidence
|
| 479 |
+
|
| 480 |
+
5: $\hat{\sigma}_b^2 = \frac{1}{B_U}\sum_{i = 1}^{B_U}\left(\max (\mathbf{p}_i) - \hat{\mu}_b\right)^2$ $\triangleright$ Compute the variance of confidence
|
| 481 |
+
|
| 482 |
+
6: $\hat{\mu}_t = m\hat{\mu}_{t - 1} + (1 - m)\hat{\mu}_b$ $\triangleright$ Update EMA of mean
|
| 483 |
+
|
| 484 |
+
7: $\hat{\sigma}_t^2 = m\hat{\sigma}_{t - 1}^2 +(1 - m)\frac{B_U}{B_U - 1}\hat{\sigma}_b^2$ $\triangleright$ Update EMA of variance
|
| 485 |
+
|
| 486 |
+
8: for $i = 1$ to $B_U$ do
|
| 487 |
+
|
| 488 |
+
9: $\lambda (\mathbf{p}_i) = \left\{ \begin{array}{ll}\exp \left(-\frac{(\max(\mathrm{UA}(\mathbf{p}_i)) - \hat{\mu}_t)^2}{2\hat{\sigma}_t^2}\right), & \text{if}\max (\mathrm{UA}(\mathbf{p}_i)) < \hat{\mu}_t,\\ 1.0, & \text{otherwise.} \end{array} \right.$ $\triangleright$ Compute loss weight
|
| 489 |
+
|
| 490 |
+
10: end for
|
| 491 |
+
|
| 492 |
+
11: $\mathcal{L}_u = \frac{1}{B_U}\sum_{i = 1}^{B_U}\lambda (\mathbf{p}_i)\mathcal{H}(\hat{\mathbf{p}}_i,\mathbf{p}(\mathbf{y}|\Omega (\mathbf{u}_i)))$ $\triangleright$ Compute $\mathcal{L}_u$ on unlabeled batch
|
| 493 |
+
12: Return: $\mathcal{L}_s + \mathcal{L}_u$
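For reference, a minimal PyTorch-style sketch of steps 4-9 of Algorithm 1 is given below. It is illustrative only: it omits Uniform Alignment on the predictions, and all function and variable names are our own rather than the released implementation.

```python
import torch

def softmatch_weights(probs, mu_t, var_t, m=0.999):
    """Sketch of Algorithm 1, steps 4-9 (illustrative, not the official code).

    probs: (B_U, C) predictions on weakly-augmented unlabeled samples.
    mu_t, var_t: running EMA estimates of the mean/variance of max-confidence.
    Returns per-sample loss weights and the updated EMA statistics.
    """
    conf = probs.max(dim=-1).values                       # max(p_i)
    mu_b = conf.mean()                                    # step 4: batch mean
    var_b = conf.var(unbiased=True)                       # steps 5/7: batch variance with Bessel correction
    mu_t = m * mu_t + (1 - m) * mu_b                      # step 6: EMA of mean
    var_t = m * var_t + (1 - m) * var_b                   # step 7: EMA of variance
    # Step 9: truncated Gaussian weight below the mean, 1.0 above it.
    weights = torch.exp(-(conf - mu_t) ** 2 / (2 * var_t))
    weights = torch.where(conf < mu_t, weights, torch.ones_like(weights))
    return weights, mu_t, var_t
```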
|
| 494 |
+
|
| 495 |
+
# A.3 EXPERIMENT DETAILS
|
| 496 |
+
|
| 497 |
+
# A.3.1 CLASSIC IMAGE CLASSIFICATION
|
| 498 |
+
|
| 499 |
+
We present the detailed hyper-parameters used for the classic image classification setting in Table 6 for reproducibility. We use NVIDIA V100 GPUs for training on the classic image classification tasks. The training time for CIFAR-10 and SVHN on a single GPU is around 3 days, whereas the training time for CIFAR-100 and STL-10 is around 7 days.
|
| 500 |
+
|
| 501 |
+
Table 6: Hyper-parameters of classic image classification tasks.
|
| 502 |
+
|
| 503 |
+
<table><tr><td>Dataset</td><td>CIFAR-10</td><td>CIFAR-100</td><td>STL-10</td><td>SVHN</td><td>ImageNet</td></tr><tr><td>Model</td><td>WRN-28-2</td><td>WRN-28-8</td><td>WRN-37-2</td><td>WRN-28-2</td><td>ResNet-50</td></tr><tr><td>Weight Decay</td><td>5e-4</td><td>1e-3</td><td>5e-4</td><td>5e-4</td><td>3e-4</td></tr><tr><td>Labeled Batch size</td><td colspan="4">64</td><td>128</td></tr><tr><td>Unlabeled Batch size</td><td colspan="4">448</td><td>128</td></tr><tr><td>Learning Rate</td><td colspan="4">0.03</td><td></td></tr><tr><td>Scheduler</td><td colspan="4">η = η0 cos(7πk/16K)</td><td></td></tr><tr><td>SGD Momentum</td><td colspan="4">0.9</td><td></td></tr><tr><td>Model EMA Momentum</td><td colspan="4">0.999</td><td></td></tr><tr><td>Prediction EMA Momentum</td><td colspan="4">0.999</td><td></td></tr><tr><td>Weak Augmentation</td><td colspan="4">Random Crop, Random Horizontal Flip</td><td></td></tr><tr><td>Strong Augmentation</td><td colspan="4">RandAugment (Cubuk et al., 2020)</td><td></td></tr></table>
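The scheduler row in Table 6 is the usual cosine decay used in FixMatch-style training. A minimal sketch of it is given below (our own helper, not taken from any released code):

```python
import math

def cosine_lr(eta0: float, k: int, K: int) -> float:
    """Scheduler from Table 6: eta = eta0 * cos(7*pi*k / (16*K)),
    where k is the current training step and K is the total number of steps."""
    return eta0 * math.cos(7.0 * math.pi * k / (16.0 * K))

# Example: with eta0 = 0.03 the rate decays smoothly from eta0 at k = 0
# to roughly 0.195 * eta0 at the final step k = K.
```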
|
| 504 |
+
|
| 505 |
+
# A.3.2 LONG-TAILED IMAGE CLASSIFICATION
|
| 506 |
+
|
| 507 |
+
The hyper-parameters for the long-tailed image classification evaluation are shown in Table 7. We use the Adam optimizer instead of SGD. For faster training, WRN-28-2 is used for both CIFAR-10 and CIFAR-100. NVIDIA V100 GPUs are used to train the long-tailed image classification models, and the training time is around 1 day.
|
| 508 |
+
|
| 509 |
+
Table 7: Hyper-parameters of long-tailed image classification tasks.
|
| 510 |
+
|
| 511 |
+
<table><tr><td>Dataset</td><td>CIFAR-10</td><td>CIFAR-100</td></tr><tr><td>Model</td><td colspan="2">WRN-28-2</td></tr><tr><td>Weight Decay</td><td colspan="2">4e-5</td></tr><tr><td>Labeled Batch size</td><td colspan="2">64</td></tr><tr><td>Unlabeled Batch size</td><td colspan="2">128</td></tr><tr><td>Learning Rate</td><td colspan="2">0.002</td></tr><tr><td>Scheduler</td><td colspan="2">η = η0 cos(7πk/16K)</td></tr><tr><td>Optimizer</td><td colspan="2">Adam</td></tr><tr><td>Model EMA Momentum</td><td colspan="2">0.999</td></tr><tr><td>Prediction EMA Momentum</td><td colspan="2">0.999</td></tr><tr><td>Weak Augmentation</td><td colspan="2">Random Crop, Random Horizontal Flip</td></tr><tr><td>Strong Augmentation</td><td colspan="2">RandAugment (Cubuk et al., 2020)</td></tr></table>
|
| 512 |
+
|
| 513 |
+
# A.3.3 TEXT CLASSIFICATION
|
| 514 |
+
|
| 515 |
+
For the text classification tasks, we randomly split a validation set from the training set of each dataset used. For IMDb and AG News, we randomly sample 1,000 and 2,500 samples per class, respectively, as the validation set, and the remaining data is used as the training set. For Amazon-5 and Yelp-5, we randomly sample 5,000 and 50,000 samples per class as the validation set and training set, respectively. For DBpedia, the validation and training sets consist of 1,000 and 10,000 samples per class, respectively.
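As an illustration of this per-class splitting procedure (our own sketch; the inputs are placeholder lists, not a specific dataset loader), the validation set can be drawn as:

```python
import random
from collections import defaultdict

def split_per_class(samples, labels, n_val_per_class, seed=0):
    """Hold out n_val_per_class samples of every class for validation;
    the rest form the training set. `samples`/`labels` are parallel lists."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    val_idx = set()
    for idxs in by_class.values():
        rng.shuffle(idxs)
        val_idx.update(idxs[:n_val_per_class])
    train = [samples[i] for i in range(len(samples)) if i not in val_idx]
    val = [samples[i] for i in sorted(val_idx)]
    return train, val
```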
|
| 516 |
+
|
| 517 |
+
The training parameters used are shown in Table 8. Note that for strong augmentation, we use back-translation similar to (Xie et al., 2020). We conduct back-translation offline before training, using EN-DE and EN-RU translation models provided in fairseq (Ott et al., 2019). We use NVIDIA V100 GPUs to train all text classification models; the total training time is around 20 hours.
|
| 518 |
+
|
| 519 |
+
Table 8: Hyper-parameters of text classification tasks.
|
| 520 |
+
|
| 521 |
+
<table><tr><td>Dataset</td><td>AG News</td><td>DBpedia</td><td>IMDb</td><td>Amazon-5</td><td>Yelp-5</td></tr><tr><td>Model</td><td colspan="5">Bert-Base</td></tr><tr><td>Weight Decay</td><td colspan="5">1e-4</td></tr><tr><td>Labeled Batch size</td><td colspan="5">16</td></tr><tr><td>Unlabeled Batch size</td><td colspan="5">16</td></tr><tr><td>Learning Rate</td><td colspan="5">1e-5</td></tr><tr><td>Scheduler</td><td colspan="5">η = η0 cos(7πk/16K)</td></tr><tr><td>Model EMA Momentum</td><td colspan="5">0.0</td></tr><tr><td>Prediction EMA Momentum</td><td colspan="5">0.999</td></tr><tr><td>Weak Augmentation</td><td colspan="5">None</td></tr><tr><td>Strong Augmentation</td><td colspan="5">Back-Translation (Xie et al., 2020)</td></tr></table>
|
| 522 |
+
|
| 523 |
+
# A.4 EXTENDED EXPERIMENT RESULTS
|
| 524 |
+
|
| 525 |
+
In this section, we provide detailed experiments on the implementation of the sample weighting function in the unlabeled loss, as shown in Table 9. One can observe that most fixed functions work surprisingly well on CIFAR-10 with 250 labels, yet the Gaussian function demonstrates the best results on CIFAR-10 with 40 labels. On SVHN with 40 labels, the Linear and Quadratic functions fail to learn, while the Laplacian and Gaussian functions show better performance. Estimating the function parameters from the confidence and truncating the function allow the model to learn more flexibly and yield better performance for both the Laplacian and Gaussian functions. We visualize the functions studied in Fig. 4, where one can observe that the truncated Gaussian function is the most reasonable, assigning diverse weights to samples whose confidence is within its standard deviation.
|
| 526 |
+
|
| 527 |
+
Table 9: Detailed results of different instantiations of $\lambda(\mathbf{p})$ on CIFAR-10 with 40 and 250 labels, and SVHN-10 with 40 labels.
|
| 528 |
+
|
| 529 |
+
<table><tr><td>Method</td><td>λ(p)</td><td>Learnable</td><td>CIFAR-10 40</td><td>CIFAR-10 250</td><td>SVHN-10 40</td></tr><tr><td>Linear</td><td>max(p)</td><td>-</td><td>11.38±3.92</td><td>5.41±0.19</td><td>15.27±28.92</td></tr><tr><td>Quadratic</td><td>-(max(p)-μ)²+1, μ=1.0</td><td>-</td><td>12.44±5.67</td><td>5.94±0.22</td><td>84.11±1.84</td></tr><tr><td>Laplacian</td><td>exp(-|max(p)-μ|/b), μ=1.0, b=0.3</td><td>-</td><td>13.29±3.33</td><td>5.24±0.16</td><td>12.77±10.33</td></tr><tr><td>Gaussian</td><td>exp(-(max(p)-μ)²/2σ²), μ=1.0, σ=0.3</td><td>-</td><td>7.73±1.44</td><td>4.98±0.02</td><td>12.95±8.79</td></tr><tr><td>Trun. Laplacian</td><td>exp(-|max(p)-μ|/b) if max(p) &lt; μ, 1.0 otherwise</td><td>μ, b</td><td>5.30±0.09</td><td>5.14±0.20</td><td>3.12±0.30</td></tr><tr><td>Trun. Gaussian</td><td>exp(-(max(p)-μ)²/2σ²) if max(p) &lt; μ, 1.0 otherwise</td><td>μ, σ</td><td>4.91±0.12</td><td>4.82±0.09</td><td>2.33±0.25</td></tr></table>
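For clarity, the weighting functions compared in Table 9 can be written compactly as follows (a NumPy sketch of our own, using the fixed parameter values listed in the table; the truncated variant takes $\mu$ and $\sigma$ estimated online as in SoftMatch):

```python
import numpy as np

# conf is an array of max-confidences max(p) in [1/C, 1].
def linear(conf):
    return conf

def quadratic(conf, mu=1.0):
    return -(conf - mu) ** 2 + 1.0

def laplacian(conf, mu=1.0, b=0.3):
    return np.exp(-np.abs(conf - mu) / b)

def gaussian(conf, mu=1.0, sigma=0.3):
    return np.exp(-(conf - mu) ** 2 / (2 * sigma ** 2))

def truncated_gaussian(conf, mu, sigma):
    # Truncated variant: full weight above mu, Gaussian decay below it.
    w = np.exp(-(conf - mu) ** 2 / (2 * sigma ** 2))
    return np.where(conf < mu, w, 1.0)
```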
|
| 530 |
+
|
| 531 |
+
# A.5 EXTENDED ABLATION STUDY
|
| 532 |
+
|
| 533 |
+
We provide additional ablation studies of other components of SoftMatch, including the EMA momentum parameter $m$, the variance range of the truncated Gaussian function, and the target distribution of Uniform Alignment (UA), on CIFAR-10 with 250 labels.
|
| 534 |
+
|
| 535 |
+
EMA momentum. We compare SoftMatch with momentum 0.99, 0.999, and 0.9999 and present the results in Table 10. A momentum of 0.999 shows the best results. While different momentum values do not affect the final performance much, they have a larger impact on convergence speed: a smaller momentum results in faster convergence yet lower accuracy, whereas a larger momentum slows down convergence.
|
| 536 |
+
|
| 537 |
+

|
| 538 |
+
Figure 4: Sample weighting function visualization
|
| 539 |
+
|
| 540 |
+
Table 10: Ablation of EMA momentum $m$ on CIFAR-10 with 250 labels.
|
| 541 |
+
|
| 542 |
+
<table><tr><td>Momentum</td><td>Error Rate</td></tr><tr><td>0.99</td><td>4.92±0.11</td></tr><tr><td>0.999</td><td>4.82±0.09</td></tr><tr><td>0.9999</td><td>4.86±0.12</td></tr></table>
|
| 543 |
+
|
| 544 |
+
Table 11: Ablation of variance range in Gaussian function on CIFAR-10 with 250 labels.
|
| 545 |
+
|
| 546 |
+
<table><tr><td>Variance Range</td><td>Error Rate</td></tr><tr><td>σ</td><td>4.97±0.13</td></tr><tr><td>2σ</td><td>4.82±0.09</td></tr><tr><td>3σ</td><td>4.84±0.15</td></tr></table>
|
| 547 |
+
|
| 548 |
+
Table 12: Ablation of target distribution of UA on CIFAR-10 with 250 labels.
|
| 549 |
+
|
| 550 |
+
<table><tr><td>Target Dist.</td><td>Error Rate</td></tr><tr><td>pL(y)</td><td>4.83±0.12</td></tr><tr><td>ˆpL(y)</td><td>4.90±0.23</td></tr><tr><td>u(C)</td><td>4.82±0.09</td></tr></table>
|
| 551 |
+
|
| 552 |
+
Variance range. We study the variance range of the Gaussian function. In all experiments of the main paper, we use the $2\sigma$ range, i.e., we divide the estimated variance $\hat{\sigma}_t^2$ by 4 in practice. The variance range directly affects the degree of softness of the truncated Gaussian function. We show in Table 11 that using $\sigma$ directly results in a slight performance drop, while $2\sigma$ and $3\sigma$ produce similar results.
|
| 553 |
+
|
| 554 |
+
UA target distribution. In the main paper, we validate the target distribution of UA on the long-tailed setting. Here we also include the effect of the target distribution of UA on the balanced setting. As shown in Table 12, using the uniform distribution $\mathbf{u}(C)$ or the ground-truth marginal distribution $\mathbf{p}_L(\mathbf{y})$ produces nearly the same results, whereas using the estimated $\hat{\mathbf{p}}_L(\mathbf{y})$ (Berthelot et al., 2021) leads to a performance drop.
|
| 555 |
+
|
| 556 |
+
# A.6 EXTENDED ANALYSIS ON TRUNCATED GAUSSIAN
|
| 557 |
+
|
| 558 |
+
In this section, we provide further visualization of the confidence distribution of pseudo-labels and of the weighting function, similar to Fig. 1(a) but on CIFAR-10. More specifically, we plot the histogram of the confidence of all pseudo-labels and of wrong pseudo-labels, from epoch 1 to 6. We select the early epochs because the difference between methods is more significant there. Along with the histogram, we also plot the current weighting function over confidence, as a visualization of how pseudo-labels in different confidence intervals are used by different methods.
|
| 559 |
+
|
| 560 |
+
Fig. 5 summarizes the visualization. Interestingly, although FixMatch adopts quite a high threshold, the quality of its pseudo-labels is very low, i.e., there are more wrong pseudo-labels in each confidence interval. This reflects the importance of involving more pseudo-labels in training at the beginning, as in SoftMatch, which lets the model learn each class in a more balanced way and improves the quality of pseudo-labels.
|
| 561 |
+
|
| 562 |
+
# A.7 EXTENDED ANALYSIS ON UNIFORM ALIGNMENT
|
| 563 |
+
|
| 564 |
+
In this section, we provide more explanation regarding the mechanism of Uniform Alignment (UA). UA is proposed to make the model learn each class more equally and thus reduce the pseudo-label imbalance/bias. To do so, we align the expected prediction probability to a uniform distribution
|
| 565 |
+
|
| 566 |
+

|
| 567 |
+
(a) FixMatch
|
| 568 |
+
|
| 569 |
+

|
| 570 |
+
(b) FlexMatch
|
| 571 |
+
|
| 572 |
+

|
| 573 |
+
(c) SoftMatch
|
| 574 |
+
|
| 575 |
+

|
| 576 |
+
Figure 5: Histogram of the confidence of pseudo-labels learned by (a) FixMatch, (b) FlexMatch, and (c) SoftMatch, for the first 6 epochs on CIFAR-10. The weighting function over confidence of each method is shown as the blue curve. For FlexMatch, we plot the average threshold. SoftMatch achieves better accuracy by utilizing pseudo-labels in a more efficient way.
|
| 577 |
+
Figure 6: Average weight for each class according to the pseudo-labels, (a) before UA and (b) after UA. We also include the difference between them in (c). UA helps to balance the average weight of each class.
|
| 578 |
+
|
| 579 |
+
when computing the sample weights. A difference between UA and DA is that UA is only used in computing the weights and is not used in the consistency loss. To visualize this, we plot the average class weight according to the pseudo-labels of SoftMatch before UA and after UA at the beginning of training, as shown in Fig. 6. UA facilitates more balanced class-wise sample weights, which helps the model learn each class more equally.
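A minimal sketch of this mechanism is given below (our own illustration, assuming the usual distribution-alignment form with a uniform target; the exact normalization in the released code may differ). The aligned probabilities are used only to compute the sample weights, while the consistency loss itself still uses the raw predictions:

```python
import torch

def uniform_alignment(probs, p_ema, eps=1e-6):
    """Rescale predictions toward a uniform class distribution.

    probs: (B_U, C) predictions on unlabeled data.
    p_ema: (C,) EMA of the mean prediction (the model's current class bias).
    Returns renormalized probabilities used only for weight computation.
    """
    C = probs.shape[-1]
    uniform = torch.full_like(p_ema, 1.0 / C)
    aligned = probs * (uniform / (p_ema + eps))          # boost under-predicted classes
    return aligned / aligned.sum(dim=-1, keepdim=True)   # renormalize to a distribution
```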
|
2301.10xxx/2301.10921/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4fa300ecfaa20fc758c7685e85a39dea3317a176191205fbe1e67e1b07671c60
|
| 3 |
+
size 1003800
|
2301.10xxx/2301.10921/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10931/afdf8b0d-7ab1-41e7-80f8-fbf01ffd0d6c_content_list.json
ADDED
|
@@ -0,0 +1,1768 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Towards Continual Egocentric Activity Recognition: A Multi-modal Egocentric Activity Dataset for Continual Learning",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
76,
|
| 8 |
+
69,
|
| 9 |
+
919,
|
| 10 |
+
175
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Linfeng Xu, Qingbo Wu, Lili Pan, Fanman Meng, Hongliang Li, Chiyuan He, Hanxin Wang, Shaoxu Cheng, Yu Dai",
|
| 17 |
+
"bbox": [
|
| 18 |
+
73,
|
| 19 |
+
181,
|
| 20 |
+
921,
|
| 21 |
+
200
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Abstract—With the rapid development of wearable cameras, a massive collection of egocentric video for first-person visual perception becomes available. Using egocentric videos to predict first-person activity faces many challenges, including limited field of view (FoV), occlusions, and unstable motions. Observing that sensor data from wearable devices facilitates human activity recognition (HAR), activity recognition using multi-modal data is attracting increasing attention. However, the deficiency of related dataset hinders the development of multi-modal deep learning for egocentric activity recognition. Nowadays, deep learning in real world has led to a focus on continual learning that often suffers from catastrophic forgetting. But the catastrophic forgetting problem of continual learning for egocentric activity recognition, especially in the context of multiple modalities, remains unexplored due to unavailability of dataset. In order to assist this research, in this paper, we present a multi-modal egocentric activity dataset for continual learning named UESTC-MMEA-CL, which is collected by self-developed glasses integrating a first-person camera and wearable sensors. It contains synchronized data of videos, accelerometers, and gyroscopes, for 32 types of daily activities, performed by 10 participants wearing the glasses. The collection device and process of our dataset are described. Its class types and scale are compared with other publicly available multi-modal datasets for egocentric activity recognition. The statistical analysis of the sensor data is given to show the auxiliary effects for different behaviors. And results of egocentric activity recognition are reported when using separately, and jointly, three modalities: RGB, acceleration, and gyroscope, on a base multi-modal network architecture. To explore the catastrophic forgetting in continual learning tasks on UESTC-MMEA-CL, four baseline methods are extensively evaluated with different multi-modal combinations. We hope the UESTC-MMEA-CL dataset can promote future studies on continual learning for first-person activity recognition in wearable applications. Our dataset will be released soon.",
|
| 28 |
+
"bbox": [
|
| 29 |
+
73,
|
| 30 |
+
250,
|
| 31 |
+
491,
|
| 32 |
+
691
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Index Terms-Multi-modal dataset, egocentric activity recognition, continual learning, wearable device",
|
| 39 |
+
"bbox": [
|
| 40 |
+
73,
|
| 41 |
+
696,
|
| 42 |
+
491,
|
| 43 |
+
724
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "I. INTRODUCTION",
|
| 50 |
+
"text_level": 1,
|
| 51 |
+
"bbox": [
|
| 52 |
+
215,
|
| 53 |
+
743,
|
| 54 |
+
351,
|
| 55 |
+
757
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "OVER the last decades, enormous annotated images and videos boost the tremendous progress of the models and systems in deep learning and computer vision. Most of popular image and video datasets [1], [2], [3], [4], [5] capture moments from a third-person \"spectator\" view, which leads to the limited visual perception in current models and systems",
|
| 62 |
+
"bbox": [
|
| 63 |
+
73,
|
| 64 |
+
762,
|
| 65 |
+
491,
|
| 66 |
+
854
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "L. Xu, Q. Wu, L. Pan, F. Meng, H. Li, C. He, H. Wang, S. Cheng, and Y. Dai are with School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China (e-mail: {lfxu, qbwu, lilipan, fmmeng, hlli} @uestc.edu.cn).",
|
| 73 |
+
"bbox": [
|
| 74 |
+
73,
|
| 75 |
+
862,
|
| 76 |
+
491,
|
| 77 |
+
909
|
| 78 |
+
],
|
| 79 |
+
"page_idx": 0
|
| 80 |
+
},
|
| 81 |
+
{
|
| 82 |
+
"type": "text",
|
| 83 |
+
"text": "This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.",
|
| 84 |
+
"bbox": [
|
| 85 |
+
73,
|
| 86 |
+
909,
|
| 87 |
+
491,
|
| 88 |
+
945
|
| 89 |
+
],
|
| 90 |
+
"page_idx": 0
|
| 91 |
+
},
|
| 92 |
+
{
|
| 93 |
+
"type": "text",
|
| 94 |
+
"text": "[6]. Compared to the widespread third-person images, videos from the egocentric point of view can provide the first-person experience of immersion or \"participant\", i.e., we can feel what a person sees when doing an action. Recently, with the rapid development of wearable devices, especially portable head-mounted cameras, such as GoPro, Insta360, Envision Glasses, Vuzix Blade, ThinkReality A3, and Mijia Glasses, the collection of rich egocentric videos becomes available. Analyzing and understanding the content in the egocentric perspective is key to the paradigm shift from \"spectator\" view to \"participant\" view in computer vision research, which is of prevalent interest due to a large number of applications, including military operations, lifestyle analysis [7], human-object interactions [8], medical monitoring [9], augmented and virtual reality [10], [11], industrial robotics [12], and autonomous driving [13].",
|
| 95 |
+
"bbox": [
|
| 96 |
+
501,
|
| 97 |
+
250,
|
| 98 |
+
921,
|
| 99 |
+
491
|
| 100 |
+
],
|
| 101 |
+
"page_idx": 0
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"type": "text",
|
| 105 |
+
"text": "Modeling human activity recognition or anticipation for egocentric videos poses lots of challenges. Firstly, unlike third-person video with apparent motion cues [14], egocentric video changes quickly with the movements of the wearer's head and body. It is difficult to capture the motion cues in egocentric video due to drastic alteration of motion direction and speed, as well as the absence of static backgrounds. Secondly, whereas third person images and videos are captured by a \"spectator\" for some purpose, egocentric images are driven by the active behavior of the camera wearer. The attention of the egocentric video or the \"participant\", when doing an action, may focus on hands, objects, and the interaction with the surroundings [15], which is quite different from the interest points of a photographer watching from a \"spectator\" view. Finally, in some egocentric scenes (e.g., riding bicycle and walking on the road), the objects or body associated with the behavior may not appear in the video due to the limited FoV of the egocentric camera.",
|
| 106 |
+
"bbox": [
|
| 107 |
+
503,
|
| 108 |
+
491,
|
| 109 |
+
921,
|
| 110 |
+
762
|
| 111 |
+
],
|
| 112 |
+
"page_idx": 0
|
| 113 |
+
},
|
| 114 |
+
{
|
| 115 |
+
"type": "text",
|
| 116 |
+
"text": "Complementary to vision data, inertial sensor data (e.g., gyroscopes and accelerometers) provide position and direction information of the wearable device, which may facilitate human activity recognition for egocentric videos. Recently, with the advancement and application of wearable inertial sensors, multi-modal methods, i.e., combining vision data and sensor data to recognize human activities, are of widespread interest, which may promote vision-based methods [16], [17], [18]. Some pioneering work [17] uses LSTM to learn the feature from sensor data and CNNs to learn the feature from vision data, which are fused together to predict wearer's activity. However, due to the difficulty of collecting data and",
|
| 117 |
+
"bbox": [
|
| 118 |
+
503,
|
| 119 |
+
763,
|
| 120 |
+
921,
|
| 121 |
+
945
|
| 122 |
+
],
|
| 123 |
+
"page_idx": 0
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"type": "page_number",
|
| 127 |
+
"text": "1",
|
| 128 |
+
"bbox": [
|
| 129 |
+
911,
|
| 130 |
+
30,
|
| 131 |
+
919,
|
| 132 |
+
39
|
| 133 |
+
],
|
| 134 |
+
"page_idx": 0
|
| 135 |
+
},
|
| 136 |
+
{
|
| 137 |
+
"type": "aside_text",
|
| 138 |
+
"text": "arXiv:2301.10931v1 [cs.CV] 26 Jan 2023",
|
| 139 |
+
"bbox": [
|
| 140 |
+
22,
|
| 141 |
+
267,
|
| 142 |
+
57,
|
| 143 |
+
707
|
| 144 |
+
],
|
| 145 |
+
"page_idx": 0
|
| 146 |
+
},
|
| 147 |
+
{
|
| 148 |
+
"type": "text",
|
| 149 |
+
"text": "lack of dataset, the progress of multi-modal egocentric activity recognition is slow compared with the vision-based methods.",
|
| 150 |
+
"bbox": [
|
| 151 |
+
73,
|
| 152 |
+
69,
|
| 153 |
+
491,
|
| 154 |
+
98
|
| 155 |
+
],
|
| 156 |
+
"page_idx": 1
|
| 157 |
+
},
|
| 158 |
+
{
|
| 159 |
+
"type": "text",
|
| 160 |
+
"text": "Nowadays, deep neural networks (DNNs) have made a tremendous progress in various fields and applications, such as computer vision, pattern recognition, and natural language processing. Although this progress is wonderful, most of current DNNs only are good at dealing with static data, because they no longer learn after a training period. This learning strategy is different from what human beings do. Actually, in real world, humans keep acquiring new skills and knowledge to adapt dynamic environments based on what previously learned. This on-going ability is crucial for the development of artificial general intelligence (AGI) [19]. However, when the network is trained on a sequence of multiple tasks, the performance on previous tasks will severely degrade, because the weights of the network, which are important for previous tasks, are modified to fit the objectives of the new task. This phenomenon is termed catastrophic forgetting [20], [21], which has seriously hampered further development of machine learning in real world. To alleviate catastrophic forgetting, many promising continual learning algorithms were proposed in recent years. Most of continual learning research focuses on incremental classification tasks [22], [23], [24], [25], [26], [27]. To tackle computer vision tasks, continual learning for object detection [28], [29], semantic segmentation [30], [31], and activity recognition [32] has attracted much attention and is an emerging trend due to lots of real-world applications, such as robotics and autonomous driving. However, the catastrophic forgetting in the context of continual learning for multi-modal egocentric activity recognition and possible approaches to address this problem have remained unexplored due to unavailability of related dataset. To fill in this gap, we propose a multi-modal egocentric activity dataset for continual learning named UESTC-MMEA-CL.",
|
| 161 |
+
"bbox": [
|
| 162 |
+
73,
|
| 163 |
+
99,
|
| 164 |
+
491,
|
| 165 |
+
582
|
| 166 |
+
],
|
| 167 |
+
"page_idx": 1
|
| 168 |
+
},
|
| 169 |
+
{
|
| 170 |
+
"type": "text",
|
| 171 |
+
"text": "Different from the existing multi-modal egocentric activity datasets [33], [34], which are collected by separate camera and sensors, our dataset is collected by self-developed glasses integrated with a first-person camera and an inertial measurement unit (IMU). So UESTC-MMEA-CL is suitable to develop applications for life-logging wearable devices (e.g., smart glasses). The vision data and sensor data of our dataset are synchronized well when doing actions. Similar to our manner of collection, MEAD [17] was collected by Google Glasses to capture synchronous video and sensor data. However, the scale of MEAD is too limited to take full advantage of DNNs, let alone facilitate research of continual learning. The proposed UESTC-MMEA-CL contains 32 daily activity classes with the duration over 30 hours in total. Each sample clip consists of video, acceleration and gyroscope signals which can provide rich object and motion attributes. Besides, as shown in Fig. 1, we divide these classes into different tasks/steps to adapt to the requirements of continual learning to encourage more research on continual multi-modal egocentric activity recognition.",
|
| 172 |
+
"bbox": [
|
| 173 |
+
73,
|
| 174 |
+
582,
|
| 175 |
+
491,
|
| 176 |
+
868
|
| 177 |
+
],
|
| 178 |
+
"page_idx": 1
|
| 179 |
+
},
|
| 180 |
+
{
|
| 181 |
+
"type": "text",
|
| 182 |
+
"text": "In order to better describe catastrophic forgetting in the context of continual learning for multi-modal egocentric activity recognition, we propose a benchmark model and evaluate several classic continual learning methods on our UESTC-MMEA-CL.",
|
| 183 |
+
"bbox": [
|
| 184 |
+
73,
|
| 185 |
+
869,
|
| 186 |
+
491,
|
| 187 |
+
944
|
| 188 |
+
],
|
| 189 |
+
"page_idx": 1
|
| 190 |
+
},
|
| 191 |
+
{
|
| 192 |
+
"type": "image",
|
| 193 |
+
"img_path": "images/dfb21fd4ca635da7e86653d824fd6d55c8fd22e545b112a75fe7382a8a7741d2.jpg",
|
| 194 |
+
"image_caption": [
|
| 195 |
+
"Fig. 1: Continual egocentric activity recognition with multi modalities: Video stream, acceleration data(green) and gyroscope data(purple)."
|
| 196 |
+
],
|
| 197 |
+
"image_footnote": [],
|
| 198 |
+
"bbox": [
|
| 199 |
+
527,
|
| 200 |
+
74,
|
| 201 |
+
919,
|
| 202 |
+
304
|
| 203 |
+
],
|
| 204 |
+
"page_idx": 1
|
| 205 |
+
},
|
| 206 |
+
{
|
| 207 |
+
"type": "text",
|
| 208 |
+
"text": "In summary, the main contributions of this paper are listed as follows:",
|
| 209 |
+
"bbox": [
|
| 210 |
+
504,
|
| 211 |
+
388,
|
| 212 |
+
919,
|
| 213 |
+
417
|
| 214 |
+
],
|
| 215 |
+
"page_idx": 1
|
| 216 |
+
},
|
| 217 |
+
{
|
| 218 |
+
"type": "list",
|
| 219 |
+
"sub_type": "text",
|
| 220 |
+
"list_items": [
|
| 221 |
+
"- We propose a new multi-modal egocentric activity dataset UESTC-MMEA-CL, which aims at addressing the catastrophic forgetting problem in the context of continual egocentric activity recognition. To the best of our knowledge, this is the first multi-modal dataset for continual egocentric activity recognition.",
|
| 222 |
+
"- We propose a benchmark model for mulimodal egocentric activity recognition and demonstrate the experimental results when using separately, and jointly, the three modalities, i.e., RGB, acceleration, and gyroscope, on UESTC-MMEA-CL.",
|
| 223 |
+
"- We set the continual egocentric activity recognition tasks and describe the main challenges raised by UESTC-MMEA-CL: the catastrophic forgetting of each modality. Besides, we try to employ popular continual learning methods to tackle this problem and provide some potential research directions."
|
| 224 |
+
],
|
| 225 |
+
"bbox": [
|
| 226 |
+
521,
|
| 227 |
+
422,
|
| 228 |
+
921,
|
| 229 |
+
676
|
| 230 |
+
],
|
| 231 |
+
"page_idx": 1
|
| 232 |
+
},
|
| 233 |
+
{
|
| 234 |
+
"type": "text",
|
| 235 |
+
"text": "II. RELATED WORK",
|
| 236 |
+
"text_level": 1,
|
| 237 |
+
"bbox": [
|
| 238 |
+
638,
|
| 239 |
+
695,
|
| 240 |
+
785,
|
| 241 |
+
708
|
| 242 |
+
],
|
| 243 |
+
"page_idx": 1
|
| 244 |
+
},
|
| 245 |
+
{
|
| 246 |
+
"type": "text",
|
| 247 |
+
"text": "A. Multi-modal Human Activity Recognition",
|
| 248 |
+
"text_level": 1,
|
| 249 |
+
"bbox": [
|
| 250 |
+
503,
|
| 251 |
+
714,
|
| 252 |
+
807,
|
| 253 |
+
729
|
| 254 |
+
],
|
| 255 |
+
"page_idx": 1
|
| 256 |
+
},
|
| 257 |
+
{
|
| 258 |
+
"type": "text",
|
| 259 |
+
"text": "1) Datasets: In order to integrate the complementary information for vision data, some multi-modal datasets have been proposed for human activity recognition task. UTD-MHAD [35] is collected from two independent devices, i.e., a Kinect camera and a wearable inertial sensor. The dataset consists of RGB videos, depth videos, skeleton positions, and inertial signals for 27 human actions such as right arm throw, cross arms in the chest, basketball shoot, et al. For the purpose of developing and evaluating multi-modal algorithms, Berkeley-MHAD [36] consists of multi-modal data for 11 actions, which is captured by five different systems: an optical motion capture system, stereo cameras, Microsoft Kinect cameras, accelerometers, and microphones. To address the health problem of elder persons, the Up-Fall dataset [37] is proposed for reliable fall",
|
| 260 |
+
"bbox": [
|
| 261 |
+
501,
|
| 262 |
+
733,
|
| 263 |
+
921,
|
| 264 |
+
944
|
| 265 |
+
],
|
| 266 |
+
"page_idx": 1
|
| 267 |
+
},
|
| 268 |
+
{
|
| 269 |
+
"type": "page_number",
|
| 270 |
+
"text": "2",
|
| 271 |
+
"bbox": [
|
| 272 |
+
911,
|
| 273 |
+
30,
|
| 274 |
+
919,
|
| 275 |
+
40
|
| 276 |
+
],
|
| 277 |
+
"page_idx": 1
|
| 278 |
+
},
|
| 279 |
+
{
|
| 280 |
+
"type": "text",
|
| 281 |
+
"text": "detection. The dataset contains multi-modal data for six daily living activities and five types of simulated falls from wearable sensors, ambient sensors, and vision devices.",
|
| 282 |
+
"bbox": [
|
| 283 |
+
78,
|
| 284 |
+
69,
|
| 285 |
+
488,
|
| 286 |
+
113
|
| 287 |
+
],
|
| 288 |
+
"page_idx": 2
|
| 289 |
+
},
|
| 290 |
+
{
|
| 291 |
+
"type": "text",
|
| 292 |
+
"text": "Nowadays, activity recognition from the egocentric perspective has become a widely concerned topic due to the interesting life-logging applications, such as lifestyle analysis and health monitoring [17]. However, the progress of multi-modal egocentric activity recognition is relatively slow because it is not easy to capture multi-modal data from wearable devices such as smart glasses. The existing multimodal datasets for egocentric activity recognition are quite limited. EPIC-KITCHENS [38] is a large-scale egocentric video dataset which is collected by 32 participants in kitchen environments. Every participant is commanded to use a head-mounted GoPro Hero7 black to record every second from the time they entered the kitchen. This dataset contains multimodal data of RGB, flow, and audio, except position and direction information related to the activities. With the wide use of wearable sensors, a number of works introduced some auxiliary data for a comprehensive understanding of human activities. Stanford-ECM [33] consists of egocentric video, accelerometer data, and heart rate data, collected by a mobile phone placed in the chest pocket and a wrist-worn heart rate sensor. The dataset contains 24 daily activities under natural conditions, including various levels of motion intensity. CMUMMAC [34] introduces 29 kitchen activities, such as opening fridge, removing cap, which is collected from 7 participants using an egocentric camera, IMUs, and other sensors. In the existing datasets, the MEAD dataset collected by Song et al. [17] is most similar to our proposed UESTC-MMEA-CL dataset. The MEAD dataset was collected by Google Glasses to record 20 human activities which contains modalities of synchronous video and sensor data. However, there are only 200 sequences in total in the MEAD dataset, whose scale is too limited for the research of DNNs.",
|
| 293 |
+
"bbox": [
|
| 294 |
+
78,
|
| 295 |
+
114,
|
| 296 |
+
488,
|
| 297 |
+
594
|
| 298 |
+
],
|
| 299 |
+
"page_idx": 2
|
| 300 |
+
},
|
| 301 |
+
{
|
| 302 |
+
"type": "text",
|
| 303 |
+
"text": "2) Methods: Datasets with more complex scenes and more categories of behaviors make it challenging to recognize human activities for vision-based methods. Is is helpful to integrate complementary information for vision data. In order to improve the algorithm's robustness, Song et al. [17] disassemble the visual signals into three input forms (single frame, optical flow, and stabilized optical flow), then classify activities with the aid of gyroscope and accelerometer data. Kazakos et al. [39] propose a mid-level fusion Temporal Binding Network (TBN) to combine signals of three modalities, i.e., video, flow, and audio. Different from traditional fusion method, multimodal signals are aggregated before temporal fusion with the shared weights over time and each modality is trained individually. Spriggs et al. [34] segment human motion into several actions and classify activities for first-person sensing, which is captured by a wearable vision sensor and IMUs. Kitani et al. [40] propose an unsupervised method for the egocentric activity recognition task, which adopts a stacked Dirichlet process mixture model to infer the motion histogram codebook and the activity category. Nakamura et al. [33] employ a stacked LSTM network to process the fused features from vision and acceleration, then jointly predict activities and energy expenditures with the aid of heart-rate sensor data.",
|
| 304 |
+
"bbox": [
|
| 305 |
+
78,
|
| 306 |
+
598,
|
| 307 |
+
488,
|
| 308 |
+
943
|
| 309 |
+
],
|
| 310 |
+
"page_idx": 2
|
| 311 |
+
},
|
| 312 |
+
{
|
| 313 |
+
"type": "text",
|
| 314 |
+
"text": "Besides, some researches [41], [42] have been devoted to predicting people's intentions by analyzing some mid-level features like people's face, gaze, and hands.",
|
| 315 |
+
"bbox": [
|
| 316 |
+
508,
|
| 317 |
+
69,
|
| 318 |
+
919,
|
| 319 |
+
113
|
| 320 |
+
],
|
| 321 |
+
"page_idx": 2
|
| 322 |
+
},
|
| 323 |
+
{
|
| 324 |
+
"type": "text",
|
| 325 |
+
"text": "B. Continual Learning",
|
| 326 |
+
"text_level": 1,
|
| 327 |
+
"bbox": [
|
| 328 |
+
508,
|
| 329 |
+
137,
|
| 330 |
+
661,
|
| 331 |
+
152
|
| 332 |
+
],
|
| 333 |
+
"page_idx": 2
|
| 334 |
+
},
|
| 335 |
+
{
|
| 336 |
+
"type": "text",
|
| 337 |
+
"text": "1) Datasets: To the best of our knowledge, there are no datasets dedicated to continual learning tasks, and the researchers usually manually divide the well-known datasets into continual learning task sequences according to the specific task types, such as image classification, object detection and image segmentation, etc. Specifically, for the image classification task, the most widely-used datasets are ImageNet [1] and CIFAR100 [43], which are originally used for non-continuous image classification. ImageNet consists of 1000 classes with approximately 1000 pictures for each class. The size of each picture is $224 \\times 224$ . CIFAR100 is made up of 60000 images evenly divided into 100 classes, where each class is comprised of 500 training samples and 100 test samples. For the image segmentation task, the selected datasets are Pascal-VOC 2012 [44] and ADE20K [45]. The former contains 20 classes and the latter contains 150 classes. The Pascal-VOC 2012 dataset is also widely used in the object detection task and the action classification task. Another popular dataset in object detection is Microsoft-COCO [3], which contains 80 classes in total and is comprised of more than 300,000 images and more than 2 million instances. It is worth noting again that all the datasets mentioned here are used originally for non-continuous tasks, but the researchers in continual learning manually partition them into continuous task sequences.",
|
| 338 |
+
"bbox": [
|
| 339 |
+
508,
|
| 340 |
+
157,
|
| 341 |
+
919,
|
| 342 |
+
518
|
| 343 |
+
],
|
| 344 |
+
"page_idx": 2
|
| 345 |
+
},
|
| 346 |
+
{
|
| 347 |
+
"type": "text",
|
| 348 |
+
"text": "2) Methods: Many efforts have been made to improve the performance of continual learning. The existing work can be mainly divided into parameter-based, knowledge-distillation-based, and parameter-expansion-based.",
|
| 349 |
+
"bbox": [
|
| 350 |
+
508,
|
| 351 |
+
521,
|
| 352 |
+
919,
|
| 353 |
+
580
|
| 354 |
+
],
|
| 355 |
+
"page_idx": 2
|
| 356 |
+
},
|
| 357 |
+
{
|
| 358 |
+
"type": "text",
|
| 359 |
+
"text": "Parameter-based. The key to this method is to evaluate the importance of parameters and protect the important ones. Methods [46], [47], [48], [49] fall into this category with different parameter importance estimations. A quadratic penalty imposed on the parameters critical to old tasks is proposed by EWC [46]. The authors utilize the Fisher Information Matrix [50] to choose the critical parameters. Liu et al. [49] obtain a better Fisher Information Matrix approximation by rotating the parameter space.",
|
| 360 |
+
"bbox": [
|
| 361 |
+
508,
|
| 362 |
+
580,
|
| 363 |
+
919,
|
| 364 |
+
717
|
| 365 |
+
],
|
| 366 |
+
"page_idx": 2
|
| 367 |
+
},
|
| 368 |
+
{
|
| 369 |
+
"type": "text",
|
| 370 |
+
"text": "However, overestimation and underestimation might happen due to batch updates. To solve this problem, [47] accumulates the changes in the learning of parameters via which the importance is estimated. Memory Aware Synapses (MAS) [48] solves the same problem by accumulating the gradient magnitude.",
|
| 371 |
+
"bbox": [
|
| 372 |
+
508,
|
| 373 |
+
718,
|
| 374 |
+
919,
|
| 375 |
+
806
|
| 376 |
+
],
|
| 377 |
+
"page_idx": 2
|
| 378 |
+
},
|
| 379 |
+
{
|
| 380 |
+
"type": "text",
|
| 381 |
+
"text": "Distillation-based. The core idea of this category is to prevent the drift between new and old models. Learning without Forgetting (LwF) [22] first introduces knowledge distillation to continual learning. Specifically, the predictions made by the new model should be close enough to the old model predictions. iCaRL [23] proposes a rehearsal strategy and a nearest-mean-of-exemplars classifier to cooperate with the LwF loss. The less-forget loss is then devised by UCIR [24], which penalizes the activation drift of the backbone. For",
|
| 382 |
+
"bbox": [
|
| 383 |
+
508,
|
| 384 |
+
809,
|
| 385 |
+
919,
|
| 386 |
+
943
|
| 387 |
+
],
|
| 388 |
+
"page_idx": 2
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"type": "page_number",
|
| 392 |
+
"text": "3",
|
| 393 |
+
"bbox": [
|
| 394 |
+
911,
|
| 395 |
+
31,
|
| 396 |
+
919,
|
| 397 |
+
39
|
| 398 |
+
],
|
| 399 |
+
"page_idx": 2
|
| 400 |
+
},
|
| 401 |
+
{
|
| 402 |
+
"type": "text",
|
| 403 |
+
"text": "a stronger distillation constraint, a spatial-based multi-level distillation loss is designed by PODNet [25]. DDE [26] aims to solve catastrophic forgetting from the scope of causal analysis and then proposes to distill the colliding effect of new and old data.",
|
| 404 |
+
"bbox": [
|
| 405 |
+
73,
|
| 406 |
+
69,
|
| 407 |
+
491,
|
| 408 |
+
143
|
| 409 |
+
],
|
| 410 |
+
"page_idx": 3
|
| 411 |
+
},
|
| 412 |
+
{
|
| 413 |
+
"type": "text",
|
| 414 |
+
"text": "Parameter-expansion-based. Another straightforward idea is to prevent the parameters of previous tasks from drifting and expand new branches for new tasks. EG [51] allocates a duplicate model for new tasks. CCGN [52] devises a task-specific gating mechanism to select the target filters for specific inputs. DER [27] also duplicates the entire backbone to learn new classes. Additionally, DER concatenates all the features obtained from the backbones and utilizes them to learn a unified classifier. However, the excessive parameter overhead hinders the application of these methods in real-world scenarios.",
|
| 415 |
+
"bbox": [
|
| 416 |
+
73,
|
| 417 |
+
143,
|
| 418 |
+
491,
|
| 419 |
+
309
|
| 420 |
+
],
|
| 421 |
+
"page_idx": 3
|
| 422 |
+
},
|
| 423 |
+
{
|
| 424 |
+
"type": "text",
|
| 425 |
+
"text": "III. UESTC-MMEA-CL DATASET",
|
| 426 |
+
"text_level": 1,
|
| 427 |
+
"bbox": [
|
| 428 |
+
158,
|
| 429 |
+
329,
|
| 430 |
+
408,
|
| 431 |
+
343
|
| 432 |
+
],
|
| 433 |
+
"page_idx": 3
|
| 434 |
+
},
|
| 435 |
+
{
|
| 436 |
+
"type": "text",
|
| 437 |
+
"text": "In this section, we introduce the data collection of UESTC-MMEA-CL, present statistics, and compare with other multimodal egocentric datasets. The distributions of standard deviation of acceleration and gyroscope sensor data are shown to demonstrate the motion intensity of each activity, as well as the motion correlation of the two sensor modalities.",
|
| 438 |
+
"bbox": [
|
| 439 |
+
73,
|
| 440 |
+
348,
|
| 441 |
+
490,
|
| 442 |
+
439
|
| 443 |
+
],
|
| 444 |
+
"page_idx": 3
|
| 445 |
+
},
|
| 446 |
+
{
|
| 447 |
+
"type": "text",
|
| 448 |
+
"text": "A. Data Collection",
|
| 449 |
+
"text_level": 1,
|
| 450 |
+
"bbox": [
|
| 451 |
+
73,
|
| 452 |
+
460,
|
| 453 |
+
209,
|
| 454 |
+
474
|
| 455 |
+
],
|
| 456 |
+
"page_idx": 3
|
| 457 |
+
},
|
| 458 |
+
{
|
| 459 |
+
"type": "text",
|
| 460 |
+
"text": "In order to collect synchronous video and sensor data for egocentric activity recognition, we developed a pair of wearable smart glasses, as show in Fig. 2(a), with a first-person camera, IMU sensors, and the function of wireless connection. The mainboard of the glasses is very tiny as shown in Fig. 2(b). The process of data collection can be conducted in the following two steps: 1) device configuration; 2) data collection and post-processing.",
|
| 461 |
+
"bbox": [
|
| 462 |
+
73,
|
| 463 |
+
479,
|
| 464 |
+
491,
|
| 465 |
+
601
|
| 466 |
+
],
|
| 467 |
+
"page_idx": 3
|
| 468 |
+
},
|
| 469 |
+
{
|
| 470 |
+
"type": "text",
|
| 471 |
+
"text": "First, we set up the device as follows. For camera, the video resolution is $640 \\times 480$ , and the frame rate is 25FPS. For sensors, the sample rate is $25\\mathrm{Hz}$ . The sensitivity of gyroscope is $16.4\\mathrm{LSB / deg / s}$ , and the sensitivity of accelerator is $8192\\mathrm{LSB / g}$ . We developed applications to capture videos, accelerometers, and gyroscopes data, which are synchronized by time-delay correction and transferred to a terminal via WIFI.",
|
| 472 |
+
"bbox": [
|
| 473 |
+
73,
|
| 474 |
+
601,
|
| 475 |
+
491,
|
| 476 |
+
719
|
| 477 |
+
],
|
| 478 |
+
"page_idx": 3
|
| 479 |
+
},
|
| 480 |
+
{
|
| 481 |
+
"type": "text",
|
| 482 |
+
"text": "After the configuration, ten subjects are divided into five groups. For each group, one subject equips the glasses and acts, while another uses a terminal to ensure each video only contains one action. All data are collected from different scenes with adequate illumination. Because the sensors are sensitive to noise, the median filtering method is used to filter the abnormal values and noise. The kernel size of the median filter is 5. After filtering the sensor data can reflect the movement of the subject better.",
|
| 483 |
+
"bbox": [
|
| 484 |
+
73,
|
| 485 |
+
722,
|
| 486 |
+
491,
|
| 487 |
+
858
|
| 488 |
+
],
|
| 489 |
+
"page_idx": 3
|
| 490 |
+
},
|
| 491 |
+
{
|
| 492 |
+
"type": "text",
|
| 493 |
+
"text": "B. Dataset Overview",
|
| 494 |
+
"text_level": 1,
|
| 495 |
+
"bbox": [
|
| 496 |
+
73,
|
| 497 |
+
878,
|
| 498 |
+
220,
|
| 499 |
+
893
|
| 500 |
+
],
|
| 501 |
+
"page_idx": 3
|
| 502 |
+
},
|
| 503 |
+
{
|
| 504 |
+
"type": "text",
|
| 505 |
+
"text": "We first introduce some general statistics of our proposed dataset UESTC-MMEA-CL, compared with the available egocentric datasets, which is shown in Table I. Our dataset",
|
| 506 |
+
"bbox": [
|
| 507 |
+
73,
|
| 508 |
+
898,
|
| 509 |
+
491,
|
| 510 |
+
944
|
| 511 |
+
],
|
| 512 |
+
"page_idx": 3
|
| 513 |
+
},
|
| 514 |
+
{
|
| 515 |
+
"type": "image",
|
| 516 |
+
"img_path": "images/6b99e0691dd309721be1ec621bcc0243acaa519cf845664e9710061996a7be5a.jpg",
|
| 517 |
+
"image_caption": [
|
| 518 |
+
"(a)"
|
| 519 |
+
],
|
| 520 |
+
"image_footnote": [],
|
| 521 |
+
"bbox": [
|
| 522 |
+
524,
|
| 523 |
+
69,
|
| 524 |
+
710,
|
| 525 |
+
178
|
| 526 |
+
],
|
| 527 |
+
"page_idx": 3
|
| 528 |
+
},
|
| 529 |
+
{
|
| 530 |
+
"type": "image",
|
| 531 |
+
"img_path": "images/b020d58cee20e407879b0f2f5820ec1053421a687342df30d6fc2babc04fa02e.jpg",
|
| 532 |
+
"image_caption": [
|
| 533 |
+
"(b)",
|
| 534 |
+
"Fig. 2: The device for data collection. (a) Our developed Kuaiyan Vision Smart Glasses. (b) The mainboard of the glasses."
|
| 535 |
+
],
|
| 536 |
+
"image_footnote": [],
|
| 537 |
+
"bbox": [
|
| 538 |
+
715,
|
| 539 |
+
68,
|
| 540 |
+
903,
|
| 541 |
+
178
|
| 542 |
+
],
|
| 543 |
+
"page_idx": 3
|
| 544 |
+
},
|
| 545 |
+
{
|
| 546 |
+
"type": "text",
|
| 547 |
+
"text": "comprises 30.4 hours of video clips, acceleration stream and gyroscope data in total. There are 32 daily activities included in our dataset as shown in Table II, containing some basic movements (upstairs, walking, standing, etc.), indoor behaviors (writing, reading, type-PC, etc.), some kinds of cleaning labor (mop-floor, wash-dish, wipe-table, etc.), several recreations and leisure activities (watch-TV, play-phone, playcard, etc.), activities with hands (wash-hands, wash-dish, and cooking), and activities with head movements (eating and drinking).",
|
| 548 |
+
"bbox": [
|
| 549 |
+
501,
|
| 550 |
+
273,
|
| 551 |
+
921,
|
| 552 |
+
425
|
| 553 |
+
],
|
| 554 |
+
"page_idx": 3
|
| 555 |
+
},
|
| 556 |
+
{
|
| 557 |
+
"type": "text",
|
| 558 |
+
"text": "In contrast to Stanford-ECM [33], which suffers from limited FoV and contextual information due to the lower location of chest-mounted camera, we embed the camera into the head-mounted glasses to capture more useful visual information. Compared with uni-modality datasets such as JPL-Interaction [53], GTEA Gaze [54], GTEA Gaze+ [54], UEC EgoAction [40], EPIC-KITCHENS [38], we provide additional synchronized data of accelerometers and gyroscopes, which are complementary to vision data and make it available to explore the catastrophic forgetting problem using separately, and jointly, the three modalities. CMU-MMAC [34] provides multi-modal measures of human activities with four wireless IMUs and five wired IMUs located on multiple parts of the subjects' body, such as wrists, ankles, arms, and waist, in order to capture motion details when performing cooking and food preparation. However, the complex devices make the data collection of daily behaviors difficult, which is not suitable for wearable applications. MEAD [17] contains 20 life-logging activities, which uses Google Glasses to capture multi-modal data of video, accelerometer and gyroscope. But due to the limited scale, duration, and category number of MEAD, it is difficult to take full advantage of DNNs and set up multiple tasks for continual learning research.",
|
| 559 |
+
"bbox": [
|
| 560 |
+
501,
|
| 561 |
+
426,
|
| 562 |
+
921,
|
| 563 |
+
773
|
| 564 |
+
],
|
| 565 |
+
"page_idx": 3
|
| 566 |
+
},
|
| 567 |
+
{
|
| 568 |
+
"type": "text",
|
| 569 |
+
"text": "C. Dataset Statistics",
|
| 570 |
+
"text_level": 1,
|
| 571 |
+
"bbox": [
|
| 572 |
+
504,
|
| 573 |
+
790,
|
| 574 |
+
648,
|
| 575 |
+
804
|
| 576 |
+
],
|
| 577 |
+
"page_idx": 3
|
| 578 |
+
},
|
| 579 |
+
{
|
| 580 |
+
"type": "text",
|
| 581 |
+
"text": "Our UESTC-MMEA-CL contains 32 different activity classes and each class contains approximately 200 samples, consisting of fully synchronized first-person video clips, acceleration sensing sequences, and gyroscope sensing sequences. A sample is shown in Fig. 3.",
|
| 582 |
+
"bbox": [
|
| 583 |
+
503,
|
| 584 |
+
808,
|
| 585 |
+
919,
|
| 586 |
+
883
|
| 587 |
+
],
|
| 588 |
+
"page_idx": 3
|
| 589 |
+
},
|
| 590 |
+
{
|
| 591 |
+
"type": "text",
|
| 592 |
+
"text": "Although visual information dominates human activity recognition, sensor data may provide complementary position and direction information to facilitate the recognition task for egocentric video. In order to demonstrate the auxiliary motion",
|
| 593 |
+
"bbox": [
|
| 594 |
+
503,
|
| 595 |
+
883,
|
| 596 |
+
921,
|
| 597 |
+
945
|
| 598 |
+
],
|
| 599 |
+
"page_idx": 3
|
| 600 |
+
},
|
| 601 |
+
{
|
| 602 |
+
"type": "page_number",
|
| 603 |
+
"text": "4",
|
| 604 |
+
"bbox": [
|
| 605 |
+
911,
|
| 606 |
+
31,
|
| 607 |
+
919,
|
| 608 |
+
39
|
| 609 |
+
],
|
| 610 |
+
"page_idx": 3
|
| 611 |
+
},
|
| 612 |
+
{
|
| 613 |
+
"type": "table",
|
| 614 |
+
"img_path": "images/48c07fbb83665ca42b6e5299668ad3b4da2cc471baf8111f9f15e1ee3a58880a.jpg",
|
| 615 |
+
"table_caption": [
|
| 616 |
+
"TABLE I: Comparison with available egocentric datasets."
|
| 617 |
+
],
|
| 618 |
+
"table_footnote": [],
|
| 619 |
+
"table_body": "<table><tr><td>Dataset</td><td>#Subjects</td><td>#Class</td><td>#Duration (h)</td><td>Mount</td><td>Scenario</td><td>Video</td><td>Acc</td><td>Gyro</td></tr><tr><td>CMU-MMAC [34]</td><td>39</td><td>29</td><td>17.0</td><td>Head</td><td>Natural</td><td>✓</td><td>✓</td><td>✓</td></tr><tr><td>JPL-Interaction [53]</td><td>1</td><td>7</td><td>0.4</td><td>Head</td><td>Indoor</td><td>✓</td><td></td><td></td></tr><tr><td>MEAD [17]</td><td>7</td><td>20</td><td>0.5</td><td>Head</td><td>Natural</td><td>✓</td><td>✓</td><td>✓</td></tr><tr><td>GTEA Gaze [54]</td><td>14</td><td>40</td><td>1.0</td><td>Head</td><td>Kitchen</td><td>✓</td><td></td><td></td></tr><tr><td>GTEA Gaze+ [54]</td><td>5</td><td>44</td><td>9.0</td><td>Head</td><td>Kitchen</td><td>✓</td><td></td><td></td></tr><tr><td>PAMAP2 [55]</td><td>9</td><td>18</td><td>-</td><td>-</td><td>-</td><td></td><td>✓</td><td></td></tr><tr><td>UEC EgoAction [40]</td><td>1</td><td>37</td><td>0.5</td><td>Head</td><td>Kitchen</td><td>✓</td><td></td><td></td></tr><tr><td>Stanford-ECM [33]</td><td>10</td><td>24</td><td>31.0</td><td>Chest</td><td>Natural</td><td>✓</td><td>✓</td><td></td></tr><tr><td>EPIC-KITCHENS [38]</td><td>32</td><td>149</td><td>-</td><td>Head</td><td>Kitchen</td><td>✓</td><td></td><td></td></tr><tr><td>UESTC-MMEA-CL(ours)</td><td>10</td><td>32</td><td>30.4</td><td>Head</td><td>Natural</td><td>✓</td><td>✓</td><td>✓</td></tr></table>",
|
| 620 |
+
"bbox": [
|
| 621 |
+
81,
|
| 622 |
+
80,
|
| 623 |
+
915,
|
| 624 |
+
232
|
| 625 |
+
],
|
| 626 |
+
"page_idx": 4
|
| 627 |
+
},
|
| 628 |
+
{
|
| 629 |
+
"type": "table",
|
| 630 |
+
"img_path": "images/e616fb4bc4eb0bc2566cf6c8c83386b31e1cbd8e00975a0a53024cad6303c814.jpg",
|
| 631 |
+
"table_caption": [
|
| 632 |
+
"TABLE II: Activities in UESTC-MMEA-CL Dataset."
|
| 633 |
+
],
|
| 634 |
+
"table_footnote": [],
|
| 635 |
+
"table_body": "<table><tr><td></td><td>Class</td><td>#Clips</td><td>#Avg-Dur(s)</td><td>Scenario</td></tr><tr><td>1</td><td>upstairs</td><td>192</td><td>17.7</td><td>teaching building, park, library</td></tr><tr><td>2</td><td>downstairs</td><td>190</td><td>17.3</td><td>teaching building, park, library</td></tr><tr><td>3</td><td>drinking</td><td>202</td><td>16.0</td><td>dorm, office</td></tr><tr><td>4</td><td>fall</td><td>185</td><td>13.7</td><td>campus, office, corridor</td></tr><tr><td>5</td><td>reading</td><td>201</td><td>18.2</td><td>office, classroom</td></tr><tr><td>6</td><td>sweep-floor</td><td>229</td><td>18.0</td><td>corridor, office, campus</td></tr><tr><td>7</td><td>cut-fruits</td><td>203</td><td>17.5</td><td>teaching building, park, office</td></tr><tr><td>8</td><td>mop-floor</td><td>206</td><td>17.7</td><td>corridor, office</td></tr><tr><td>9</td><td>writing</td><td>209</td><td>18.8</td><td>classroom, office</td></tr><tr><td>10</td><td>wipe-table</td><td>245</td><td>18.2</td><td>home, office, dorm</td></tr><tr><td>11</td><td>wash-hand</td><td>189</td><td>17.0</td><td>bathroom, kitchen</td></tr><tr><td>12</td><td>standing</td><td>203</td><td>18.0</td><td>corridor, office, dining hall</td></tr><tr><td>13</td><td>play-phone</td><td>205</td><td>17.4</td><td>classroom, office, campus, park</td></tr><tr><td>14</td><td>type-PC</td><td>204</td><td>18.1</td><td>classroom, office</td></tr><tr><td>15</td><td>eating</td><td>213</td><td>17.1</td><td>classroom, office, dining hall, canteen</td></tr><tr><td>16</td><td>cooking</td><td>225</td><td>17.1</td><td>kitchen, office</td></tr><tr><td>17</td><td>pick-up-phone</td><td>213</td><td>14.4</td><td>classroom, office, teaching building, campus</td></tr><tr><td>18</td><td>drop-trash</td><td>201</td><td>13.1</td><td>campus, park, teaching building</td></tr><tr><td>19</td><td>fold-clothes</td><td>204</td><td>17.3</td><td>home, dorm, office</td></tr><tr><td>20</td><td>walking</td><td>203</td><td>17.1</td><td>campus, library, park</td></tr><tr><td>21</td><td>play-card</td><td>206</td><td>17.0</td><td>classroom, restroom</td></tr><tr><td>22</td><td>brush-teeth</td><td>203</td><td>17.0</td><td>bathroom</td></tr><tr><td>23</td><td>wash-dish</td><td>189</td><td>16.0</td><td>kitchen, bathroom</td></tr><tr><td>24</td><td>moving-sth</td><td>201</td><td>15.7</td><td>corridor, office, teaching building</td></tr><tr><td>25</td><td>type-phone</td><td>195</td><td>17.3</td><td>classroom, office, campus</td></tr><tr><td>26</td><td>chat</td><td>203</td><td>17.3</td><td>classroom, office, dorm</td></tr><tr><td>27</td><td>open-close-door</td><td>200</td><td>15.8</td><td>office, home</td></tr><tr><td>28</td><td>ride-bike</td><td>198</td><td>17.1</td><td>campus, park, road</td></tr><tr><td>29</td><td>sit-stand</td><td>201</td><td>15.7</td><td>office, classroom, library</td></tr><tr><td>30</td><td>take-drop-sth</td><td>201</td><td>13.5</td><td>office, classroom, library</td></tr><tr><td>31</td><td>shopping</td><td>208</td><td>17.1</td><td>mall, street</td></tr><tr><td>32</td><td>watch-TV</td><td>205</td><td>16.9</td><td>office, home</td></tr></table>",
|
| 636 |
+
"bbox": [
|
| 637 |
+
84,
|
| 638 |
+
284,
|
| 639 |
+
500,
|
| 640 |
+
640
|
| 641 |
+
],
|
| 642 |
+
"page_idx": 4
|
| 643 |
+
},
|
| 644 |
+
{
|
| 645 |
+
"type": "image",
|
| 646 |
+
"img_path": "images/7d40d7569642bd54dc9d26ba1e6b0b997d2683197c5056666d4f60cd1c1af5d3.jpg",
|
| 647 |
+
"image_caption": [
|
| 648 |
+
"Fig. 3: A sample of activities \"drinking\", which consists of the synchronized video stream, acceleration, and gyroscope sensor data."
|
| 649 |
+
],
|
| 650 |
+
"image_footnote": [],
|
| 651 |
+
"bbox": [
|
| 652 |
+
89,
|
| 653 |
+
691,
|
| 654 |
+
460,
|
| 655 |
+
875
|
| 656 |
+
],
|
| 657 |
+
"page_idx": 4
|
| 658 |
+
},
|
| 659 |
+
{
|
| 660 |
+
"type": "text",
|
| 661 |
+
"text": "information of the sensor data, we make statistical analysis of the two modalities, i.e., accelerometer and gyroscope data. Following Stanford-ECM [33], we calculate the standard deviation (STD) of the sensor data to show the relative motion intensity for each activity. Fig. 4(a) demonstrates the distribution of acceleration STD. Activities are sorted by the median STD of acceleration and divided into four levels of intensity. Behaviors such as upstairs, downstairs, and moving-sth are relatively vigorous while chat, type-phone, and watch-TV are stable. The degree of variation in orientation can be measured by the STD of the gyroscope data, which is shown in Fig. 4(b). Besides, Fig. 4(c) shows a scatter plot of the STD distributions of acceleration and gyroscope data, which reflects the motion correlation (correlation coefficient $r = 0.78$ ) of the acceleration and gyroscope data.",
|
| 662 |
+
"bbox": [
|
| 663 |
+
501,
|
| 664 |
+
258,
|
| 665 |
+
921,
|
| 666 |
+
484
|
| 667 |
+
],
|
| 668 |
+
"page_idx": 4
|
| 669 |
+
},
|
| 670 |
+
{
|
| 671 |
+
"type": "text",
|
| 672 |
+
"text": "IV. CONTINUAL LEARNING ON UESTC-MMEA-CL",
|
| 673 |
+
"text_level": 1,
|
| 674 |
+
"bbox": [
|
| 675 |
+
527,
|
| 676 |
+
505,
|
| 677 |
+
898,
|
| 678 |
+
518
|
| 679 |
+
],
|
| 680 |
+
"page_idx": 4
|
| 681 |
+
},
|
| 682 |
+
{
|
| 683 |
+
"type": "text",
|
| 684 |
+
"text": "A. Problem Setup",
|
| 685 |
+
"text_level": 1,
|
| 686 |
+
"bbox": [
|
| 687 |
+
503,
|
| 688 |
+
527,
|
| 689 |
+
630,
|
| 690 |
+
542
|
| 691 |
+
],
|
| 692 |
+
"page_idx": 4
|
| 693 |
+
},
|
| 694 |
+
{
|
| 695 |
+
"type": "text",
|
| 696 |
+
"text": "Although DNNs have made a remarkable progress in many applications, most of current DNNs face the catastrophic forgetting problem when dealing with dynamic data. In the wearable application of egocentric activity recognition, data may come dynamically. Limited by the memory capacity and computing power of wearable devices, models are expected to accommodate new recognition tasks when the data from the past are inaccessible or partially accessible. In order to explore the catastrophic forgetting and promote possible approaches to address this problem, we introduce continual learning into our task scenario.",
|
| 697 |
+
"bbox": [
|
| 698 |
+
501,
|
| 699 |
+
547,
|
| 700 |
+
919,
|
| 701 |
+
712
|
| 702 |
+
],
|
| 703 |
+
"page_idx": 4
|
| 704 |
+
},
|
| 705 |
+
{
|
| 706 |
+
"type": "text",
|
| 707 |
+
"text": "The activity class set $\\mathcal{C}$ , which contains $N$ classes ( $N = 32$ ), of our dataset is divided into $S$ incremental steps/tasks. The class set $\\mathcal{C}^s$ of each step/task $s$ ( $0 \\leq s \\leq S - 1$ ) contains $N / S$ classes ( $\\mathcal{C}^s = \\bigcup_{l=0}^{(N / S) - 1} \\mathcal{C}_l^s = \\{\\mathcal{C}_0^s, \\mathcal{C}_1^s,.., \\mathcal{C}_{(N / S) - 1}^s\\}$ ). The multi-modal sample set of class $\\mathcal{C}_l^s$ is denoted as $\\mathcal{D}_l^s = \\left\\{\\left(\\mathbf{v}_i^{l,s}, \\mathbf{a}_i^{l,s}, \\mathbf{g}_i^{l,s}, \\mathbf{y}_i^{l,s}\\right)\\bigg|_{i=1}^{N_{l,s}}\\right\\}$ . Sample set $\\mathcal{D}_l^s$ contains $N_{l,s}$ pairs of samples $\\mathbf{x}_i^{l,s} = (\\mathbf{v}_i^{l,s}, \\mathbf{a}_i^{l,s}, \\mathbf{g}_i^{l,s})$ and activity class label $\\mathbf{y}_i^{l,s}$ , where $\\mathbf{v}_i^{l,s}, \\mathbf{a}_i^{l,s}, \\mathbf{g}_i^{l,s}$ represent the visual signal, acceleration signal and gyroscope signal of the $i$ -th sample of $\\mathcal{C}_l^s$ , respectively. At the $s$ step, the models are trained with the available samples $\\mathcal{D}^s$ , where $\\mathcal{D}^s = \\bigcup_{l=0}^{(N / S) - 1} \\mathcal{D}_l^s$ , and evaluated on the test set of all seen classes $\\bigcup_{j=0}^{s} \\mathcal{C}^j$ . For the methods based on exemplar replay (or rehearsal), we define a replay buffer to store exemplar samples $\\mathcal{E}^s$ of old classes. At",
|
| 708 |
+
"bbox": [
|
| 709 |
+
503,
|
| 710 |
+
713,
|
| 711 |
+
921,
|
| 712 |
+
946
|
| 713 |
+
],
|
| 714 |
+
"page_idx": 4
|
| 715 |
+
},
|
| 716 |
+
{
|
| 717 |
+
"type": "page_number",
|
| 718 |
+
"text": "5",
|
| 719 |
+
"bbox": [
|
| 720 |
+
911,
|
| 721 |
+
30,
|
| 722 |
+
919,
|
| 723 |
+
40
|
| 724 |
+
],
|
| 725 |
+
"page_idx": 4
|
| 726 |
+
},
|
| 727 |
+
{
|
| 728 |
+
"type": "image",
|
| 729 |
+
"img_path": "images/2a15796272f450b7af519c9b6220e0ab4338344f678437528251460e94d12f97.jpg",
|
| 730 |
+
"image_caption": [
|
| 731 |
+
"(a)",
|
| 732 |
+
"(b)"
|
| 733 |
+
],
|
| 734 |
+
"image_footnote": [],
|
| 735 |
+
"bbox": [
|
| 736 |
+
127,
|
| 737 |
+
102,
|
| 738 |
+
503,
|
| 739 |
+
223
|
| 740 |
+
],
|
| 741 |
+
"page_idx": 5
|
| 742 |
+
},
|
| 743 |
+
{
|
| 744 |
+
"type": "image",
|
| 745 |
+
"img_path": "images/25d4640e8628aed59f646cca28f79a6b71b7480be73c823b6f7254ee921246c4.jpg",
|
| 746 |
+
"image_caption": [
|
| 747 |
+
"Fig. 4: Statistics of sensor data. (a) STD distributions of acceleration for all activity classes. The relative motion intensity of the activities increase sequentially from the leftmost column to the right, which are divided into four different levels according to the median STD. (b) STD distributions of gyroscope for each activity. (c) Scatter plot of the STD distributions of acceleration and gyroscope (Correlation coefficient $r = 0.78$ on all samples)."
|
| 748 |
+
],
|
| 749 |
+
"image_footnote": [],
|
| 750 |
+
"bbox": [
|
| 751 |
+
132,
|
| 752 |
+
243,
|
| 753 |
+
503,
|
| 754 |
+
364
|
| 755 |
+
],
|
| 756 |
+
"page_idx": 5
|
| 757 |
+
},
|
| 758 |
+
{
|
| 759 |
+
"type": "image",
|
| 760 |
+
"img_path": "images/c5244d0f43f448b4ad60bda6ccf789ebb823b5b318444b8b3145881234d50ca1.jpg",
|
| 761 |
+
"image_caption": [
|
| 762 |
+
"(c)"
|
| 763 |
+
],
|
| 764 |
+
"image_footnote": [],
|
| 765 |
+
"bbox": [
|
| 766 |
+
526,
|
| 767 |
+
119,
|
| 768 |
+
875,
|
| 769 |
+
339
|
| 770 |
+
],
|
| 771 |
+
"page_idx": 5
|
| 772 |
+
},
|
| 773 |
+
{
|
| 774 |
+
"type": "text",
|
| 775 |
+
"text": "the first step, available data is only $\\mathcal{D}^0$ , and at each subsequent incremental step, the available data is $\\mathcal{D}^s \\cup \\mathcal{E}^s (s \\geq 1)$ .",
|
| 776 |
+
"bbox": [
|
| 777 |
+
73,
|
| 778 |
+
458,
|
| 779 |
+
491,
|
| 780 |
+
491
|
| 781 |
+
],
|
| 782 |
+
"page_idx": 5
|
| 783 |
+
},
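To make the incremental protocol above concrete, here is a minimal Python sketch of the class split and the replay buffer. The fixed class ordering, the buffer size of 320 (taken from the experimental settings later in the paper), and the exemplar-selection placeholder are illustrative assumptions, not the authors' released code.

```python
# Sketch of the class-incremental protocol described above (illustrative only).
N = 32             # total number of activity classes (Table II)
S = 8              # number of incremental steps, so N // S classes per step
MEMORY_SIZE = 320  # exemplar budget used by replay-based methods

classes = list(range(N))                                   # a fixed class order (assumed)
steps = [classes[s * (N // S):(s + 1) * (N // S)] for s in range(S)]

exemplars = []                                             # E^s: stored old samples

def run_step(s, new_data, select_exemplars):
    """Train on D^s (plus E^s when s >= 1) and refresh the exemplar buffer."""
    available = new_data + exemplars if s >= 1 else new_data
    # ... train the multi-modal model on `available` ...
    seen = [c for step in steps[:s + 1] for c in step]     # classes seen so far
    per_class = MEMORY_SIZE // len(seen)
    exemplars[:] = select_exemplars(available, seen, per_class)  # e.g. herding
    return seen                                            # evaluate on the test set of these
```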
|
| 784 |
+
{
|
| 785 |
+
"type": "text",
|
| 786 |
+
"text": "B. Base Multi-modal Architecture",
|
| 787 |
+
"text_level": 1,
|
| 788 |
+
"bbox": [
|
| 789 |
+
73,
|
| 790 |
+
508,
|
| 791 |
+
307,
|
| 792 |
+
522
|
| 793 |
+
],
|
| 794 |
+
"page_idx": 5
|
| 795 |
+
},
|
| 796 |
+
{
|
| 797 |
+
"type": "text",
|
| 798 |
+
"text": "First, we introduce the base architecture employed for the multi-modal egocentric activity recognition, which is shown in Fig. 5. The architecture is based on the temporal binding network (TBN) [39], which is effective for modal fusion and temporal aggregation. BN-Inception [56] is adopted as the feature extractor $\\mathcal{F}_v$ for the frame from the video stream. The deep convolutional and LSTM recurrent neural networks [57] are used for the feature extraction of acceleration signal $\\mathcal{F}_a$ and gyroscope signal $\\mathcal{F}_g$ . We use the random sampling method to sample the multi-modal data within a temporal blinding window (TBW) [39]. The input multi-modal data $x = \\{v, a, g\\}$ , where $v, a,$ and $g$ denote the video, acceleration data, and gyroscope data respectively, are divided into $T$ TBWs. Within a temporal blinding window $TBW_t$ ( $1 \\leq t \\leq T$ ), modalities are sampled as a single video frame, a sequence of acceleration data, and gyroscope data, which is denoted as $x_t = \\{v_t, a_t, g_t\\}$ . Thus, we can get fused feature:",
|
| 799 |
+
"bbox": [
|
| 800 |
+
73,
|
| 801 |
+
527,
|
| 802 |
+
490,
|
| 803 |
+
785
|
| 804 |
+
],
|
| 805 |
+
"page_idx": 5
|
| 806 |
+
},
|
| 807 |
+
{
|
| 808 |
+
"type": "equation",
|
| 809 |
+
"text": "\n$$\ny _ {t} = Q \\left[ p \\left(\\mathcal {F} _ {v} \\left(v _ {t}\\right)\\right), \\mathcal {F} _ {a} \\left(a _ {t}\\right), \\mathcal {F} _ {g} \\left(g _ {t}\\right) \\right], \\tag {1}\n$$\n",
|
| 810 |
+
"text_format": "latex",
|
| 811 |
+
"bbox": [
|
| 812 |
+
151,
|
| 813 |
+
797,
|
| 814 |
+
488,
|
| 815 |
+
815
|
| 816 |
+
],
|
| 817 |
+
"page_idx": 5
|
| 818 |
+
},
|
| 819 |
+
{
|
| 820 |
+
"type": "text",
|
| 821 |
+
"text": "where $p$ denotes the average pooling operation, and $Q$ represents the mid-fusion block to aggregate features of the three modalities, which contains concatenation, convolution, and ReLU operations. Then, all features $y_{t}$ from the $T$ TBWs are averaged as the input to the activity classifier:",
|
| 822 |
+
"bbox": [
|
| 823 |
+
73,
|
| 824 |
+
821,
|
| 825 |
+
491,
|
| 826 |
+
898
|
| 827 |
+
],
|
| 828 |
+
"page_idx": 5
|
| 829 |
+
},
|
| 830 |
+
{
|
| 831 |
+
"type": "equation",
|
| 832 |
+
"text": "\n$$\n\\tilde {y} = \\operatorname {s o f t m a x} \\left(\\frac {1}{T} \\sum_ {t = 1} ^ {T} y _ {t}\\right). \\tag {2}\n$$\n",
|
| 833 |
+
"text_format": "latex",
|
| 834 |
+
"bbox": [
|
| 835 |
+
194,
|
| 836 |
+
907,
|
| 837 |
+
490,
|
| 838 |
+
949
|
| 839 |
+
],
|
| 840 |
+
"page_idx": 5
|
| 841 |
+
},
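As a concrete reading of Eqs. (1) and (2), the PyTorch-style sketch below mimics the per-window mid-fusion and temporal averaging. The feature dimensions, the exact form of the mid-fusion block $Q$, and the final linear classifier are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TBWFusionHead(nn.Module):
    """Sketch of Eqs. (1)-(2): fuse per-window features, then average over the T windows."""

    def __init__(self, d_v=1024, d_a=128, d_g=128, d_fused=512, n_classes=32):
        super().__init__()
        # Q: concatenation -> 1x1 convolution -> ReLU (dimensions are assumed)
        self.Q = nn.Sequential(
            nn.Conv1d(d_v + d_a + d_g, d_fused, kernel_size=1),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(d_fused, n_classes)

    def forward(self, feat_v, feat_a, feat_g):
        # feat_*: (batch, T, d_*) features, one per temporal binding window;
        # the spatial pooling p(.) of Eq. (1) is assumed to be applied already.
        z = torch.cat([feat_v, feat_a, feat_g], dim=-1)      # concat over channels
        y_t = self.Q(z.transpose(1, 2)).transpose(1, 2)      # Eq. (1), applied per window
        y = y_t.mean(dim=1)                                  # average over the T windows
        return torch.softmax(self.classifier(y), dim=-1)     # classifier + softmax, Eq. (2)
```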
|
| 842 |
+
{
|
| 843 |
+
"type": "image",
|
| 844 |
+
"img_path": "images/3694f4d81578bcf15edf9ccef4edd967ce56ece9d092b73475dd664d317d443e.jpg",
|
| 845 |
+
"image_caption": [
|
| 846 |
+
"Fig. 5: Base architecture of multi-modal egocentric activity recognition. The number of TBWs $T$ is set to 8."
|
| 847 |
+
],
|
| 848 |
+
"image_footnote": [],
|
| 849 |
+
"bbox": [
|
| 850 |
+
526,
|
| 851 |
+
458,
|
| 852 |
+
901,
|
| 853 |
+
696
|
| 854 |
+
],
|
| 855 |
+
"page_idx": 5
|
| 856 |
+
},
|
| 857 |
+
{
|
| 858 |
+
"type": "text",
|
| 859 |
+
"text": "Following most classification tasks, cross entropy is employed as the loss function for the final prediction of activities, and branches of all modalities are trained jointly.",
|
| 860 |
+
"bbox": [
|
| 861 |
+
503,
|
| 862 |
+
758,
|
| 863 |
+
919,
|
| 864 |
+
805
|
| 865 |
+
],
|
| 866 |
+
"page_idx": 5
|
| 867 |
+
},
|
| 868 |
+
{
|
| 869 |
+
"type": "text",
|
| 870 |
+
"text": "C. Benchmark for Continual Learning",
|
| 871 |
+
"text_level": 1,
|
| 872 |
+
"bbox": [
|
| 873 |
+
504,
|
| 874 |
+
819,
|
| 875 |
+
767,
|
| 876 |
+
834
|
| 877 |
+
],
|
| 878 |
+
"page_idx": 5
|
| 879 |
+
},
|
| 880 |
+
{
|
| 881 |
+
"type": "text",
|
| 882 |
+
"text": "In this paper, we implement three baseline methods, as well as the most straightforward fine-tune solution, on the base multi-modal architecture (Fig. 5) as the benchmark for continual learning on UESTC-MMEA-CL dataset. These continual learning methods are as follows:",
|
| 883 |
+
"bbox": [
|
| 884 |
+
503,
|
| 885 |
+
837,
|
| 886 |
+
919,
|
| 887 |
+
912
|
| 888 |
+
],
|
| 889 |
+
"page_idx": 5
|
| 890 |
+
},
|
| 891 |
+
{
|
| 892 |
+
"type": "text",
|
| 893 |
+
"text": "- EWC [46]: a parameter-based continual learning model, where the important parameters to old tasks are regular-",
|
| 894 |
+
"bbox": [
|
| 895 |
+
519,
|
| 896 |
+
914,
|
| 897 |
+
921,
|
| 898 |
+
945
|
| 899 |
+
],
|
| 900 |
+
"page_idx": 5
|
| 901 |
+
},
|
| 902 |
+
{
|
| 903 |
+
"type": "page_number",
|
| 904 |
+
"text": "6",
|
| 905 |
+
"bbox": [
|
| 906 |
+
911,
|
| 907 |
+
31,
|
| 908 |
+
919,
|
| 909 |
+
40
|
| 910 |
+
],
|
| 911 |
+
"page_idx": 5
|
| 912 |
+
},
|
| 913 |
+
{
|
| 914 |
+
"type": "text",
|
| 915 |
+
"text": "ized and changed in a small range. Therefore the influence to old tasks is alleviated during new task learning.",
|
| 916 |
+
"bbox": [
|
| 917 |
+
106,
|
| 918 |
+
69,
|
| 919 |
+
488,
|
| 920 |
+
99
|
| 921 |
+
],
|
| 922 |
+
"page_idx": 6
|
| 923 |
+
},
|
| 924 |
+
{
|
| 925 |
+
"type": "list",
|
| 926 |
+
"sub_type": "text",
|
| 927 |
+
"list_items": [
|
| 928 |
+
"- LwF [22]: a distillation-based continual learning model, where knowledge distillation (KD) is combined with finetuning, and the output of the old network is used to constrain the parameter update of the new task.",
|
| 929 |
+
"- iCaRL [23]: a replay-based continual learning model, which constructs and manages an exemplar set consisting of collection of representative old data. The exemplars that are closest to the mean feature of each class are selected. For the new task, the new data and exemplar set are mixed as input in the learning phase."
|
| 930 |
+
],
|
| 931 |
+
"bbox": [
|
| 932 |
+
91,
|
| 933 |
+
101,
|
| 934 |
+
491,
|
| 935 |
+
255
|
| 936 |
+
],
|
| 937 |
+
"page_idx": 6
|
| 938 |
+
},
|
| 939 |
+
{
|
| 940 |
+
"type": "text",
|
| 941 |
+
"text": "V. EXPERIMENTS",
|
| 942 |
+
"text_level": 1,
|
| 943 |
+
"bbox": [
|
| 944 |
+
217,
|
| 945 |
+
268,
|
| 946 |
+
346,
|
| 947 |
+
281
|
| 948 |
+
],
|
| 949 |
+
"page_idx": 6
|
| 950 |
+
},
|
| 951 |
+
{
|
| 952 |
+
"type": "text",
|
| 953 |
+
"text": "A. Implementation Details",
|
| 954 |
+
"text_level": 1,
|
| 955 |
+
"bbox": [
|
| 956 |
+
73,
|
| 957 |
+
287,
|
| 958 |
+
256,
|
| 959 |
+
301
|
| 960 |
+
],
|
| 961 |
+
"page_idx": 6
|
| 962 |
+
},
|
| 963 |
+
{
|
| 964 |
+
"type": "text",
|
| 965 |
+
"text": "Sensor signal processing: We use a median filter with kernel size 5 to filter abnormal values of acceleration and gyroscope signals. Since the gyroscope is not reliable in a long term, the trapezoidal integral of the filtered angular velocity is calculated to get the angle data. However, there exists a bias drift problem in the gyroscope signal which would cause a large cumulative error in the integral results. To tackle this problem, we subtract the mean value before the integral. After the filtering and integral processing, 24 consecutive sensor data are sampled within a TBW.",
|
| 966 |
+
"bbox": [
|
| 967 |
+
73,
|
| 968 |
+
305,
|
| 969 |
+
490,
|
| 970 |
+
455
|
| 971 |
+
],
|
| 972 |
+
"page_idx": 6
|
| 973 |
+
},
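A minimal NumPy/SciPy sketch of this preprocessing chain (median filtering, bias removal, trapezoidal integration) is given below; the 25 Hz sample rate and the kernel size of 5 come from the text, while the array shapes and function names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import medfilt

FS = 25.0  # sensor sampling rate in Hz, as described above

def preprocess_gyro(gyro, kernel_size=5):
    """Median-filter the angular velocity, remove its mean (bias drift),
    then apply trapezoidal integration to obtain angle data."""
    gyro = np.asarray(gyro, dtype=float)                     # shape: (num_samples, 3)
    filtered = medfilt(gyro, kernel_size=(kernel_size, 1))   # filter along time only
    debiased = filtered - filtered.mean(axis=0, keepdims=True)
    dt = 1.0 / FS
    increments = 0.5 * (debiased[1:] + debiased[:-1]) * dt   # trapezoidal rule
    angles = np.vstack([np.zeros((1, debiased.shape[1])), np.cumsum(increments, axis=0)])
    return angles

def preprocess_acc(acc, kernel_size=5):
    """The acceleration signal only needs the median filter."""
    return medfilt(np.asarray(acc, dtype=float), kernel_size=(kernel_size, 1))
```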
|
| 974 |
+
{
|
| 975 |
+
"type": "text",
|
| 976 |
+
"text": "Multi-modal training details: We implement the model in PyTorch. The video stream branch is trained by the SGD optimizer [58] with a momentum of 0.9, a batch size of 8, a dropout of 0.5, and a learning rate of 0.001. The acceleration and gyroscope steam branches are trained by the RMSprop optimizer [59] with a dropout of 0.5 and a learning rate of 0.001. The batch size is set to 32 for the acceleration network and 8 for the gyroscope network. We initialize the RGB network with pre-trained model from the ImageNet. All networks are trained for 50 epochs, and the learning rate is decayed by a factor of 10 at epoch 10 and 20.",
|
| 977 |
+
"bbox": [
|
| 978 |
+
73,
|
| 979 |
+
455,
|
| 980 |
+
491,
|
| 981 |
+
622
|
| 982 |
+
],
|
| 983 |
+
"page_idx": 6
|
| 984 |
+
},
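The optimizer setup described above can be sketched as follows; the placeholder branch modules and feature sizes are hypothetical, and only the hyper-parameters are taken from the text.

```python
import torch
import torch.nn as nn

# Placeholder branch modules; the real ones are BN-Inception and conv+LSTM extractors.
rgb_branch, acc_branch, gyro_branch = nn.Linear(1024, 32), nn.Linear(128, 32), nn.Linear(128, 32)

optim_rgb = torch.optim.SGD(rgb_branch.parameters(), lr=0.001, momentum=0.9)
optim_acc = torch.optim.RMSprop(acc_branch.parameters(), lr=0.001)
optim_gyro = torch.optim.RMSprop(gyro_branch.parameters(), lr=0.001)

# 50 epochs total; learning rate decayed by a factor of 10 at epochs 10 and 20.
schedulers = [
    torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[10, 20], gamma=0.1)
    for opt in (optim_rgb, optim_acc, optim_gyro)
]

# Batch sizes from the text: 8 for the RGB and gyroscope branches, 32 for acceleration.
BATCH_SIZES = {"rgb": 8, "acc": 32, "gyro": 8}
```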
|
| 985 |
+
{
|
| 986 |
+
"type": "text",
|
| 987 |
+
"text": "Continual learning training details: All continual learning benchmarks are implemented using PyTorch and PyCIL [60]. The settings of incremental steps and activity classes in each step are shown in Fig. 6. Based on the Problem Setup introduced in IV-A, we set the number of total activity classes $N = 32$ and the number of incremental steps $S = \\{16,8,4\\}$ . Therefore, each step contains $N / S = \\{2,4,8\\}$ activity classes. For the replay-based continual learning method, we set the memory size to 320. Other parameter settings are the same as multi-modal training.",
|
| 988 |
+
"bbox": [
|
| 989 |
+
73,
|
| 990 |
+
622,
|
| 991 |
+
491,
|
| 992 |
+
773
|
| 993 |
+
],
|
| 994 |
+
"page_idx": 6
|
| 995 |
+
},
|
| 996 |
+
{
|
| 997 |
+
"type": "text",
|
| 998 |
+
"text": "B. Metrics",
|
| 999 |
+
"text_level": 1,
|
| 1000 |
+
"bbox": [
|
| 1001 |
+
75,
|
| 1002 |
+
789,
|
| 1003 |
+
151,
|
| 1004 |
+
804
|
| 1005 |
+
],
|
| 1006 |
+
"page_idx": 6
|
| 1007 |
+
},
|
| 1008 |
+
{
|
| 1009 |
+
"type": "text",
|
| 1010 |
+
"text": "Following the research in continual learning [23], [61], two metrics, i.e., average accuracy and average forgetting, are used to evaluate the overall accuracy in continual learning stages and the average decrease of accuracy on previous tasks, respectively, which is defined as follows.",
|
| 1011 |
+
"bbox": [
|
| 1012 |
+
73,
|
| 1013 |
+
809,
|
| 1014 |
+
490,
|
| 1015 |
+
883
|
| 1016 |
+
],
|
| 1017 |
+
"page_idx": 6
|
| 1018 |
+
},
|
| 1019 |
+
{
|
| 1020 |
+
"type": "text",
|
| 1021 |
+
"text": "Average accuracy (A) Here, $a_{k,j} \\in [0,1]$ denotes the accuracy evaluated on the test set of task $j$ after learning task $k$ ( $j \\leq k$ ). Then the average accuracy on task $k$ can be calculated as",
|
| 1022 |
+
"bbox": [
|
| 1023 |
+
73,
|
| 1024 |
+
883,
|
| 1025 |
+
491,
|
| 1026 |
+
944
|
| 1027 |
+
],
|
| 1028 |
+
"page_idx": 6
|
| 1029 |
+
},
|
| 1030 |
+
{
|
| 1031 |
+
"type": "equation",
|
| 1032 |
+
"text": "\n$$\nA _ {k} = \\frac {1}{k} \\sum_ {j = 1} ^ {k} a _ {k, j} \\tag {3}\n$$\n",
|
| 1033 |
+
"text_format": "latex",
|
| 1034 |
+
"bbox": [
|
| 1035 |
+
655,
|
| 1036 |
+
80,
|
| 1037 |
+
919,
|
| 1038 |
+
122
|
| 1039 |
+
],
|
| 1040 |
+
"page_idx": 6
|
| 1041 |
+
},
|
| 1042 |
+
{
|
| 1043 |
+
"type": "text",
|
| 1044 |
+
"text": "Average forgetting (F) The forgetting for a certain task is defined as the difference between the maximum knowledge obtained with respect to the task during the learning process in the past and the current knowledge the model has about it [61]. $f_{j}^{k}\\in [-1,1]$ denotes the forgetting on the previous task $j$ after learning task $k$ , which can be formulated as",
|
| 1045 |
+
"bbox": [
|
| 1046 |
+
503,
|
| 1047 |
+
128,
|
| 1048 |
+
919,
|
| 1049 |
+
219
|
| 1050 |
+
],
|
| 1051 |
+
"page_idx": 6
|
| 1052 |
+
},
|
| 1053 |
+
{
|
| 1054 |
+
"type": "equation",
|
| 1055 |
+
"text": "\n$$\nf _ {j} ^ {k} = \\max _ {l \\in j, \\dots , k - 1} a _ {l, j} - a _ {k, j}, \\quad \\forall j < k \\tag {4}\n$$\n",
|
| 1056 |
+
"text_format": "latex",
|
| 1057 |
+
"bbox": [
|
| 1058 |
+
586,
|
| 1059 |
+
227,
|
| 1060 |
+
919,
|
| 1061 |
+
251
|
| 1062 |
+
],
|
| 1063 |
+
"page_idx": 6
|
| 1064 |
+
},
|
| 1065 |
+
{
|
| 1066 |
+
"type": "text",
|
| 1067 |
+
"text": "Thus, the average forgetting at the $k$ -th task can be defined as",
|
| 1068 |
+
"bbox": [
|
| 1069 |
+
504,
|
| 1070 |
+
258,
|
| 1071 |
+
919,
|
| 1072 |
+
273
|
| 1073 |
+
],
|
| 1074 |
+
"page_idx": 6
|
| 1075 |
+
},
|
| 1076 |
+
{
|
| 1077 |
+
"type": "equation",
|
| 1078 |
+
"text": "\n$$\nF _ {k} = \\frac {1}{k - 1} \\sum_ {j = 1} ^ {k - 1} f _ {j} ^ {k} \\tag {5}\n$$\n",
|
| 1079 |
+
"text_format": "latex",
|
| 1080 |
+
"bbox": [
|
| 1081 |
+
645,
|
| 1082 |
+
280,
|
| 1083 |
+
919,
|
| 1084 |
+
321
|
| 1085 |
+
],
|
| 1086 |
+
"page_idx": 6
|
| 1087 |
+
},
|
| 1088 |
+
{
|
| 1089 |
+
"type": "text",
|
| 1090 |
+
"text": "Note that the lower $F_{k}$ , the less forgetting of a model on the previous tasks.",
|
| 1091 |
+
"bbox": [
|
| 1092 |
+
503,
|
| 1093 |
+
329,
|
| 1094 |
+
919,
|
| 1095 |
+
359
|
| 1096 |
+
],
|
| 1097 |
+
"page_idx": 6
|
| 1098 |
+
},
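A short sketch of how the two metrics can be computed from a matrix of task accuracies is shown below; it assumes a NumPy array `acc` with 0-based indices such that `acc[k-1, j-1]` holds $a_{k,j}$, and is only an illustration of Eqs. (3)-(5), not the authors' evaluation code.

```python
import numpy as np

def average_accuracy(acc, k):
    """Eq. (3): mean accuracy over tasks 1..k after learning task k."""
    return float(acc[k - 1, :k].mean())

def average_forgetting(acc, k):
    """Eqs. (4)-(5): for each old task j < k, the drop from the best accuracy
    previously reached on task j to its accuracy after learning task k, averaged over j."""
    if k < 2:
        return 0.0  # forgetting is undefined before the second task
    drops = [acc[j - 1:k - 1, j - 1].max() - acc[k - 1, j - 1] for j in range(1, k)]
    return float(np.mean(drops))
```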
|
| 1099 |
+
{
|
| 1100 |
+
"type": "text",
|
| 1101 |
+
"text": "C. Evaluation on UESTC-MMEA-CL",
|
| 1102 |
+
"text_level": 1,
|
| 1103 |
+
"bbox": [
|
| 1104 |
+
504,
|
| 1105 |
+
380,
|
| 1106 |
+
759,
|
| 1107 |
+
393
|
| 1108 |
+
],
|
| 1109 |
+
"page_idx": 6
|
| 1110 |
+
},
|
| 1111 |
+
{
|
| 1112 |
+
"type": "text",
|
| 1113 |
+
"text": "Multi-modal egocentric activity recognition. We evaluate the multi-modal egocentric activity recognition on UESTC-MMEA-CL with different modal combinations using the base architecture in Fig. 5. The results are summarized in Table III. For the uni-modal recognition, RGB network achieves the most prominent performance compared with acceleration and gyroscope network. Fusing the two modalities of sensor data, the average class precision is $59.9\\%$ , which achieves significant improvements of more than $56\\%$ compared to the unimodality 'Acc' and 'Gyro'. The improvements of the modal combination methods 'RGB+Acc' and 'RGB+Gyro' on RGB are not great but also obvious. When fusing all modalities together, 'RGB+Acc+Gyro' achieves the highest recognition accuracy.",
|
| 1114 |
+
"bbox": [
|
| 1115 |
+
501,
|
| 1116 |
+
400,
|
| 1117 |
+
919,
|
| 1118 |
+
611
|
| 1119 |
+
],
|
| 1120 |
+
"page_idx": 6
|
| 1121 |
+
},
|
| 1122 |
+
{
|
| 1123 |
+
"type": "text",
|
| 1124 |
+
"text": "In order to demonstrate the recognition performance of different modality and modal combinations on each activity class, we present the confusion matrices which are shown in Fig. 7. As shown in Fig. 7(d), with the help of the two types of motion-based sensor data, recognition accuracy of activities such as 'upstairs', 'drinking', 'fall', 'walking', 'sit-stand' and 'shopping' is obviously improved. Fig. 7(e) and Fig. 7(f) prove that $\\mathrm{RGB + Acc}$ and $\\mathrm{RGB + Gyro}$ also perform well but are not as good as 'All'.",
|
| 1125 |
+
"bbox": [
|
| 1126 |
+
503,
|
| 1127 |
+
612,
|
| 1128 |
+
919,
|
| 1129 |
+
747
|
| 1130 |
+
],
|
| 1131 |
+
"page_idx": 6
|
| 1132 |
+
},
|
| 1133 |
+
{
|
| 1134 |
+
"type": "text",
|
| 1135 |
+
"text": "Catastrophic forgetting. Deep neural networks often suffer from catastrophic forgetting in continual learning tasks. In order to demonstrate the catastrophic forgetting in the context of continual learning for multi-modal activity recognition, the straightforward solution fine-tune is evaluated on UESTC-MMEA-CL with three incremental settings and two metrics A and F that are introduced in section V-B. As shown in Fig. 8, with the increase of incremental tasks, the recognition accuracy of fine-tune on different modalities and modal combinations decrease dramatically. Moreover, when using the sensor data the model suffers from more serious catastrophic forgetting than using RGB only. The average accuracy and forgetting of fine-tune with incremental setting $N / S = 4$",
|
| 1136 |
+
"bbox": [
|
| 1137 |
+
501,
|
| 1138 |
+
748,
|
| 1139 |
+
921,
|
| 1140 |
+
945
|
| 1141 |
+
],
|
| 1142 |
+
"page_idx": 6
|
| 1143 |
+
},
|
| 1144 |
+
{
|
| 1145 |
+
"type": "page_number",
|
| 1146 |
+
"text": "7",
|
| 1147 |
+
"bbox": [
|
| 1148 |
+
911,
|
| 1149 |
+
30,
|
| 1150 |
+
919,
|
| 1151 |
+
40
|
| 1152 |
+
],
|
| 1153 |
+
"page_idx": 6
|
| 1154 |
+
},
|
| 1155 |
+
{
|
| 1156 |
+
"type": "image",
|
| 1157 |
+
"img_path": "images/2c24b9bc714b4310209785d656cb4e154065740d623022dc8556b6d9a4bc6c3b.jpg",
|
| 1158 |
+
"image_caption": [
|
| 1159 |
+
"Fig. 6: Settings of incremental steps. Each number denotes the activity class in Table II."
|
| 1160 |
+
],
|
| 1161 |
+
"image_footnote": [],
|
| 1162 |
+
"bbox": [
|
| 1163 |
+
161,
|
| 1164 |
+
95,
|
| 1165 |
+
854,
|
| 1166 |
+
244
|
| 1167 |
+
],
|
| 1168 |
+
"page_idx": 7
|
| 1169 |
+
},
|
| 1170 |
+
{
|
| 1171 |
+
"type": "table",
|
| 1172 |
+
"img_path": "images/2018f097602e3219d46e4f27512134d53fc1ef64761085d297ba528279537a85.jpg",
|
| 1173 |
+
"table_caption": [
|
| 1174 |
+
"TABLE III: Results on UESTC-MMEA-CL using multi-modal combinations ('All' denotes 'RGB+Acc+Gyro')."
|
| 1175 |
+
],
|
| 1176 |
+
"table_footnote": [],
|
| 1177 |
+
"table_body": "<table><tr><td rowspan=\"2\"></td><td colspan=\"3\">Uni-modal</td><td colspan=\"4\">Multi-modal</td></tr><tr><td>RGB</td><td>Acc</td><td>Gyro</td><td>RGB + Acc</td><td>RGB + Gyro</td><td>Acc + Gyro</td><td>All</td></tr><tr><td>Top1-Accuracy (%)</td><td>92.6</td><td>35.0</td><td>38.2</td><td>94.5</td><td>93.9</td><td>59.7</td><td>95.6</td></tr><tr><td>Avg Class Precision (%)</td><td>92.5</td><td>35.1</td><td>38.3</td><td>94.4</td><td>93.9</td><td>59.9</td><td>95.6</td></tr></table>",
|
| 1178 |
+
"bbox": [
|
| 1179 |
+
86,
|
| 1180 |
+
338,
|
| 1181 |
+
916,
|
| 1182 |
+
402
|
| 1183 |
+
],
|
| 1184 |
+
"page_idx": 7
|
| 1185 |
+
},
|
| 1186 |
+
{
|
| 1187 |
+
"type": "image",
|
| 1188 |
+
"img_path": "images/6429c11596baf2102c20558b40ed4de793f76944b08e43fa0e14b869509fe3fd.jpg",
|
| 1189 |
+
"image_caption": [
|
| 1190 |
+
"(a)"
|
| 1191 |
+
],
|
| 1192 |
+
"image_footnote": [],
|
| 1193 |
+
"bbox": [
|
| 1194 |
+
106,
|
| 1195 |
+
477,
|
| 1196 |
+
364,
|
| 1197 |
+
648
|
| 1198 |
+
],
|
| 1199 |
+
"page_idx": 7
|
| 1200 |
+
},
|
| 1201 |
+
{
|
| 1202 |
+
"type": "image",
|
| 1203 |
+
"img_path": "images/97b04ef4b530018f39fd5f57bf1ebdf72d6e9ea28dbb647454afbe9b6c444df8.jpg",
|
| 1204 |
+
"image_caption": [
|
| 1205 |
+
"(b)"
|
| 1206 |
+
],
|
| 1207 |
+
"image_footnote": [],
|
| 1208 |
+
"bbox": [
|
| 1209 |
+
370,
|
| 1210 |
+
477,
|
| 1211 |
+
625,
|
| 1212 |
+
648
|
| 1213 |
+
],
|
| 1214 |
+
"page_idx": 7
|
| 1215 |
+
},
|
| 1216 |
+
{
|
| 1217 |
+
"type": "image",
|
| 1218 |
+
"img_path": "images/7dc329f5f404ec162b4bef869f20b02b80e043d4a554b1925094ae718a970d52.jpg",
|
| 1219 |
+
"image_caption": [
|
| 1220 |
+
"(c)"
|
| 1221 |
+
],
|
| 1222 |
+
"image_footnote": [],
|
| 1223 |
+
"bbox": [
|
| 1224 |
+
635,
|
| 1225 |
+
477,
|
| 1226 |
+
890,
|
| 1227 |
+
648
|
| 1228 |
+
],
|
| 1229 |
+
"page_idx": 7
|
| 1230 |
+
},
|
| 1231 |
+
{
|
| 1232 |
+
"type": "image",
|
| 1233 |
+
"img_path": "images/646bafa55111242749ce634b5712afdf0bffc821e8ac504c4a1216483c10e58f.jpg",
|
| 1234 |
+
"image_caption": [
|
| 1235 |
+
"(d)"
|
| 1236 |
+
],
|
| 1237 |
+
"image_footnote": [],
|
| 1238 |
+
"bbox": [
|
| 1239 |
+
106,
|
| 1240 |
+
674,
|
| 1241 |
+
364,
|
| 1242 |
+
847
|
| 1243 |
+
],
|
| 1244 |
+
"page_idx": 7
|
| 1245 |
+
},
|
| 1246 |
+
{
|
| 1247 |
+
"type": "image",
|
| 1248 |
+
"img_path": "images/7338343d4691b5e18ebf916ee79654043820bc03fc5050dd761643bbe91330d4.jpg",
|
| 1249 |
+
"image_caption": [
|
| 1250 |
+
"(e)"
|
| 1251 |
+
],
|
| 1252 |
+
"image_footnote": [],
|
| 1253 |
+
"bbox": [
|
| 1254 |
+
372,
|
| 1255 |
+
674,
|
| 1256 |
+
630,
|
| 1257 |
+
847
|
| 1258 |
+
],
|
| 1259 |
+
"page_idx": 7
|
| 1260 |
+
},
|
| 1261 |
+
{
|
| 1262 |
+
"type": "image",
|
| 1263 |
+
"img_path": "images/7d17b2ff0d1ff378cc91fca88ba5cb02d67facc9f09c1834ac75f2c27937f4a8.jpg",
|
| 1264 |
+
"image_caption": [
|
| 1265 |
+
"(f)",
|
| 1266 |
+
"Fig. 7: Confusion matrices of activity recognition. The first row shows the test results for three uni-modal networks: (a) RGB; (b) Acceleration; (c) Gyroscope. The second row demonstrates the difference between the multi-modal combination networks and the RGB network: (d) Difference between 'All' and 'RGB'; (e) Difference between 'RGB+Acc' and 'RGB'; (f) Difference between 'RGB+Gyro' and 'RGB'."
|
| 1267 |
+
],
|
| 1268 |
+
"image_footnote": [],
|
| 1269 |
+
"bbox": [
|
| 1270 |
+
635,
|
| 1271 |
+
674,
|
| 1272 |
+
893,
|
| 1273 |
+
847
|
| 1274 |
+
],
|
| 1275 |
+
"page_idx": 7
|
| 1276 |
+
},
|
| 1277 |
+
{
|
| 1278 |
+
"type": "page_number",
|
| 1279 |
+
"text": "8",
|
| 1280 |
+
"bbox": [
|
| 1281 |
+
911,
|
| 1282 |
+
30,
|
| 1283 |
+
919,
|
| 1284 |
+
40
|
| 1285 |
+
],
|
| 1286 |
+
"page_idx": 7
|
| 1287 |
+
},
|
| 1288 |
+
{
|
| 1289 |
+
"type": "image",
|
| 1290 |
+
"img_path": "images/8e169bfb1263de66b71adbcb0bf4f00237a1a092db68810e359db42dc60f1743.jpg",
|
| 1291 |
+
"image_caption": [],
|
| 1292 |
+
"image_footnote": [],
|
| 1293 |
+
"bbox": [
|
| 1294 |
+
214,
|
| 1295 |
+
71,
|
| 1296 |
+
782,
|
| 1297 |
+
85
|
| 1298 |
+
],
|
| 1299 |
+
"page_idx": 8
|
| 1300 |
+
},
|
| 1301 |
+
{
|
| 1302 |
+
"type": "image",
|
| 1303 |
+
"img_path": "images/3982aebc5c2b013835fe5680693b9feb027b6efab42593d02d7147d4ee49490a.jpg",
|
| 1304 |
+
"image_caption": [
|
| 1305 |
+
"Fig. 8: Fine-tune results on UESTC-MMEA-CL with $N / S = 2$ (left), 4 (middle) and 8 (right)."
|
| 1306 |
+
],
|
| 1307 |
+
"image_footnote": [],
|
| 1308 |
+
"bbox": [
|
| 1309 |
+
109,
|
| 1310 |
+
92,
|
| 1311 |
+
349,
|
| 1312 |
+
265
|
| 1313 |
+
],
|
| 1314 |
+
"page_idx": 8
|
| 1315 |
+
},
|
| 1316 |
+
{
|
| 1317 |
+
"type": "image",
|
| 1318 |
+
"img_path": "images/bf673f51bfa8a9490f93bca96197c5b2adcbbf13fe670e420f25b2f860952eca.jpg",
|
| 1319 |
+
"image_caption": [],
|
| 1320 |
+
"image_footnote": [],
|
| 1321 |
+
"bbox": [
|
| 1322 |
+
377,
|
| 1323 |
+
92,
|
| 1324 |
+
619,
|
| 1325 |
+
265
|
| 1326 |
+
],
|
| 1327 |
+
"page_idx": 8
|
| 1328 |
+
},
|
| 1329 |
+
{
|
| 1330 |
+
"type": "image",
|
| 1331 |
+
"img_path": "images/b2d9e5606fe5f046707cca53ed85c166cda44f7299ec075678c6198300acd9de.jpg",
|
| 1332 |
+
"image_caption": [],
|
| 1333 |
+
"image_footnote": [],
|
| 1334 |
+
"bbox": [
|
| 1335 |
+
648,
|
| 1336 |
+
93,
|
| 1337 |
+
890,
|
| 1338 |
+
266
|
| 1339 |
+
],
|
| 1340 |
+
"page_idx": 8
|
| 1341 |
+
},
|
| 1342 |
+
{
|
| 1343 |
+
"type": "image",
|
| 1344 |
+
"img_path": "images/2a9142311263f7ebc23882ff440454c5bbd36463809a8cc7213ad703cddb2942.jpg",
|
| 1345 |
+
"image_caption": [],
|
| 1346 |
+
"image_footnote": [],
|
| 1347 |
+
"bbox": [
|
| 1348 |
+
91,
|
| 1349 |
+
314,
|
| 1350 |
+
292,
|
| 1351 |
+
468
|
| 1352 |
+
],
|
| 1353 |
+
"page_idx": 8
|
| 1354 |
+
},
|
| 1355 |
+
{
|
| 1356 |
+
"type": "image",
|
| 1357 |
+
"img_path": "images/ff9d3c09fd799df0df5a7cd9d44ceaa9d367b17ba4bf5b5913112d5e28a9053e.jpg",
|
| 1358 |
+
"image_caption": [],
|
| 1359 |
+
"image_footnote": [],
|
| 1360 |
+
"bbox": [
|
| 1361 |
+
303,
|
| 1362 |
+
314,
|
| 1363 |
+
495,
|
| 1364 |
+
467
|
| 1365 |
+
],
|
| 1366 |
+
"page_idx": 8
|
| 1367 |
+
},
|
| 1368 |
+
{
|
| 1369 |
+
"type": "image",
|
| 1370 |
+
"img_path": "images/2d9a1f04d46ca1d4162330e65fe1a212326ac0d4a45c6fbee4929f9475faddef.jpg",
|
| 1371 |
+
"image_caption": [],
|
| 1372 |
+
"image_footnote": [],
|
| 1373 |
+
"bbox": [
|
| 1374 |
+
506,
|
| 1375 |
+
314,
|
| 1376 |
+
700,
|
| 1377 |
+
468
|
| 1378 |
+
],
|
| 1379 |
+
"page_idx": 8
|
| 1380 |
+
},
|
| 1381 |
+
{
|
| 1382 |
+
"type": "image",
|
| 1383 |
+
"img_path": "images/ada25eb3a5db3336f9b27af0f9fc335d5e3117c6000d8340d10f12d438a19598.jpg",
|
| 1384 |
+
"image_caption": [],
|
| 1385 |
+
"image_footnote": [],
|
| 1386 |
+
"bbox": [
|
| 1387 |
+
707,
|
| 1388 |
+
314,
|
| 1389 |
+
906,
|
| 1390 |
+
469
|
| 1391 |
+
],
|
| 1392 |
+
"page_idx": 8
|
| 1393 |
+
},
|
| 1394 |
+
{
|
| 1395 |
+
"type": "image",
|
| 1396 |
+
"img_path": "images/fd3aa9c12da94a47ece6a704b0190844c77c84812263fd70decc71c2c16eacc4.jpg",
|
| 1397 |
+
"image_caption": [
|
| 1398 |
+
"Fig. 9: Multi-modal continual learning performance on UESTC-MMEA-CL with $N / S = 4$ . Three continual learning methods and fine-tune are evaluated on our dataset using different modalities and modal combinations: (a) RGB; (b) Acc; (c) Gyro; (d) RGB+Acc; (e) RGB+Gyro; (f) Acc+Gyro; (g) RGB+Acc+Gyro."
|
| 1399 |
+
],
|
| 1400 |
+
"image_footnote": [],
|
| 1401 |
+
"bbox": [
|
| 1402 |
+
91,
|
| 1403 |
+
470,
|
| 1404 |
+
292,
|
| 1405 |
+
625
|
| 1406 |
+
],
|
| 1407 |
+
"page_idx": 8
|
| 1408 |
+
},
|
| 1409 |
+
{
|
| 1410 |
+
"type": "image",
|
| 1411 |
+
"img_path": "images/e53dd43706059acd40e67c89630de73a40f3c3d854dde722edce4fe7ef811136.jpg",
|
| 1412 |
+
"image_caption": [],
|
| 1413 |
+
"image_footnote": [],
|
| 1414 |
+
"bbox": [
|
| 1415 |
+
303,
|
| 1416 |
+
470,
|
| 1417 |
+
495,
|
| 1418 |
+
625
|
| 1419 |
+
],
|
| 1420 |
+
"page_idx": 8
|
| 1421 |
+
},
|
| 1422 |
+
{
|
| 1423 |
+
"type": "image",
|
| 1424 |
+
"img_path": "images/40c6fb4bb2811134d859c87459b35c8b79e469fc1a05032d4749137c04a5e5a7.jpg",
|
| 1425 |
+
"image_caption": [],
|
| 1426 |
+
"image_footnote": [],
|
| 1427 |
+
"bbox": [
|
| 1428 |
+
503,
|
| 1429 |
+
470,
|
| 1430 |
+
700,
|
| 1431 |
+
625
|
| 1432 |
+
],
|
| 1433 |
+
"page_idx": 8
|
| 1434 |
+
},
|
| 1435 |
+
{
|
| 1436 |
+
"type": "image",
|
| 1437 |
+
"img_path": "images/db66a8da5314da1038e5aae0a7ced595ea4c56ab3d863d1fe6355f69a9e017ff.jpg",
|
| 1438 |
+
"image_caption": [],
|
| 1439 |
+
"image_footnote": [],
|
| 1440 |
+
"bbox": [
|
| 1441 |
+
746,
|
| 1442 |
+
481,
|
| 1443 |
+
830,
|
| 1444 |
+
530
|
| 1445 |
+
],
|
| 1446 |
+
"page_idx": 8
|
| 1447 |
+
},
|
| 1448 |
+
{
|
| 1449 |
+
"type": "text",
|
| 1450 |
+
"text": "are demonstrated in the first column of Table IV. It can be observed that uni-modal network 'RGB' maintains a low forgetting rate while uni-modal network 'Acc' suffers from very severe forgetting of previously learned activities. Although 'Gyro' performs not well, the average forgetting of 'Gyro' is not as high as 'Acc' due to the low accuracy of 'Gyro' at the first incremental step. It can be seen that multi-modal combinations 'RGB+Acc', 'RGB+Gyro', and 'All' ('RGB+Acc+Gyro'), don't add gain to uni-modal network 'RGB' in the continual learning. Instead, the catastrophic forgetting problem of fine-tune is aggravated with the addition of complementary sensor data.",
|
| 1451 |
+
"bbox": [
|
| 1452 |
+
73,
|
| 1453 |
+
710,
|
| 1454 |
+
491,
|
| 1455 |
+
893
|
| 1456 |
+
],
|
| 1457 |
+
"page_idx": 8
|
| 1458 |
+
},
|
| 1459 |
+
{
|
| 1460 |
+
"type": "text",
|
| 1461 |
+
"text": "Evaluation with continual learning strategies. To overcome catastrophic forgetting, we transfer the popular continual learning methods iCaRL [23], EWC [46] and LwF [22]",
|
| 1462 |
+
"bbox": [
|
| 1463 |
+
73,
|
| 1464 |
+
898,
|
| 1465 |
+
491,
|
| 1466 |
+
946
|
| 1467 |
+
],
|
| 1468 |
+
"page_idx": 8
|
| 1469 |
+
},
|
| 1470 |
+
{
|
| 1471 |
+
"type": "text",
|
| 1472 |
+
"text": "to the continual multi-modal activity recognition. Fig. 9(a) demonstrates the suppression of catastrophic forgetting by continual learning strategies with RGB uni-modal network. With the help of exemplar replay, iCaRL effectively alleviates the forgetting problem while the effectiveness of exemplar-free strategies EWC and LwF is not so noticeable. As shown in Fig. 9(b) and (c), these continual learning strategies do not produce the same effect on alleviating forgetting of the acceleration and gyroscope uni-modal networks as the RGB network. As listed in the second and third rows of Table IV, the average accuracy of the sensor networks is below $20\\%$ even if continual learning strategies are adopted.",
|
| 1473 |
+
"bbox": [
|
| 1474 |
+
501,
|
| 1475 |
+
710,
|
| 1476 |
+
921,
|
| 1477 |
+
893
|
| 1478 |
+
],
|
| 1479 |
+
"page_idx": 8
|
| 1480 |
+
},
|
| 1481 |
+
{
|
| 1482 |
+
"type": "text",
|
| 1483 |
+
"text": "Fig. 9(d)-(g) present the recognition accuracy of the multimodal combination networks. When combining the sensor data, the replay-based iCaRL can implicitly exploit",
|
| 1484 |
+
"bbox": [
|
| 1485 |
+
503,
|
| 1486 |
+
898,
|
| 1487 |
+
921,
|
| 1488 |
+
946
|
| 1489 |
+
],
|
| 1490 |
+
"page_idx": 8
|
| 1491 |
+
},
|
| 1492 |
+
{
|
| 1493 |
+
"type": "page_number",
|
| 1494 |
+
"text": "9",
|
| 1495 |
+
"bbox": [
|
| 1496 |
+
911,
|
| 1497 |
+
30,
|
| 1498 |
+
919,
|
| 1499 |
+
40
|
| 1500 |
+
],
|
| 1501 |
+
"page_idx": 8
|
| 1502 |
+
},
|
| 1503 |
+
{
|
| 1504 |
+
"type": "table",
|
| 1505 |
+
"img_path": "images/5865813d2f0d3bc521b63a828d75d936f4576c94fee52f37f3c533f149adf530.jpg",
|
| 1506 |
+
"table_caption": [
|
| 1507 |
+
"TABLE IV: Average accuracy (A) and average forgetting (F) of continual learning strategies and fine-tune on UESTC-MMEA-CL $(N / S = 4)$ . Note that $\\uparrow$ indicates the higher the better and vice versa."
|
| 1508 |
+
],
|
| 1509 |
+
"table_footnote": [],
|
| 1510 |
+
"table_body": "<table><tr><td rowspan=\"2\"></td><td colspan=\"2\">Fine-tune</td><td colspan=\"2\">iCaRL [23]</td><td colspan=\"2\">EWC [46]</td><td colspan=\"2\">LwF [22]</td></tr><tr><td>A↑</td><td>F↓</td><td>A↑</td><td>F↓</td><td>A↑</td><td>F↓</td><td>A↑</td><td>F↓</td></tr><tr><td>RGB</td><td>29.3</td><td>64.5</td><td>70.4</td><td>32.1</td><td>51.6</td><td>35.4</td><td>40.4</td><td>51.8</td></tr><tr><td>Acc</td><td>9.0</td><td>86.3</td><td>17.0</td><td>67.4</td><td>9.4</td><td>49.6</td><td>12.5</td><td>20.7</td></tr><tr><td>Gyro</td><td>8.1</td><td>68.8</td><td>14.3</td><td>58.1</td><td>4.3</td><td>42.8</td><td>9.8</td><td>33.7</td></tr><tr><td>RGB+Acc</td><td>12.2</td><td>99.0</td><td>68.2</td><td>34.8</td><td>22.4</td><td>24.3</td><td>22.7</td><td>44.6</td></tr><tr><td>RGB+Gyro</td><td>12.2</td><td>98.9</td><td>77.4</td><td>34.2</td><td>18.2</td><td>18.6</td><td>29.0</td><td>49.0</td></tr><tr><td>Acc+Gyro</td><td>12.1</td><td>83.3</td><td>34.8</td><td>56.9</td><td>12.2</td><td>66.1</td><td>15.3</td><td>15.7</td></tr><tr><td>RGB+Acc+Gyro</td><td>12.3</td><td>99.0</td><td>77.8</td><td>33.5</td><td>19.6</td><td>29.9</td><td>17.1</td><td>49.4</td></tr></table>",
|
| 1511 |
+
"bbox": [
|
| 1512 |
+
86,
|
| 1513 |
+
102,
|
| 1514 |
+
916,
|
| 1515 |
+
252
|
| 1516 |
+
],
|
| 1517 |
+
"page_idx": 9
|
| 1518 |
+
},
|
| 1519 |
+
{
|
| 1520 |
+
"type": "text",
|
| 1521 |
+
"text": "the multi-modal complementary information to reduce the forgetting rate of RGB network. As shown in Table IV, 'iCaRL-RGB+Acc+Gyro' achieves the highest average accuracy $77.8\\%$ and a relatively low forgetting rate $33.5\\%$ . 'iCaRL-RGB+Gyro' also perform well with average accuracy $77.4\\%$ compared with $70.4\\%$ of 'iCaRL-RGB'. Compared with fine-tune, exemplar-free strategies EWC and LwF can also suppress the model forgetting, but not obvious as iCaRL.",
|
| 1522 |
+
"bbox": [
|
| 1523 |
+
73,
|
| 1524 |
+
277,
|
| 1525 |
+
491,
|
| 1526 |
+
398
|
| 1527 |
+
],
|
| 1528 |
+
"page_idx": 9
|
| 1529 |
+
},
|
| 1530 |
+
{
|
| 1531 |
+
"type": "text",
|
| 1532 |
+
"text": "D. Discussion",
|
| 1533 |
+
"text_level": 1,
|
| 1534 |
+
"bbox": [
|
| 1535 |
+
73,
|
| 1536 |
+
424,
|
| 1537 |
+
176,
|
| 1538 |
+
436
|
| 1539 |
+
],
|
| 1540 |
+
"page_idx": 9
|
| 1541 |
+
},
|
| 1542 |
+
{
|
| 1543 |
+
"type": "text",
|
| 1544 |
+
"text": "Fusion of multi-modal data. In our work, we use TBW-like midfusion to aggregate features from different modalities, while it can not be ignored that an early fusion or late fusion way will receive different performances against catastrophic forgetting. Exploring a more reasonable manner to fuse and align the multi-modal data deserves further study.",
|
| 1545 |
+
"bbox": [
|
| 1546 |
+
73,
|
| 1547 |
+
445,
|
| 1548 |
+
490,
|
| 1549 |
+
535
|
| 1550 |
+
],
|
| 1551 |
+
"page_idx": 9
|
| 1552 |
+
},
|
| 1553 |
+
{
|
| 1554 |
+
"type": "text",
|
| 1555 |
+
"text": "Catastrophic forgetting of sensor modalities. As shown in Fig. 9(b), (c), and (f), continual learning using sensor modalities performs poorly if RGB is unavailable and the forgetting problem is severer than using RGB. This phenomenon may be closely related to the network architecture.",
|
| 1556 |
+
"bbox": [
|
| 1557 |
+
73,
|
| 1558 |
+
536,
|
| 1559 |
+
491,
|
| 1560 |
+
611
|
| 1561 |
+
],
|
| 1562 |
+
"page_idx": 9
|
| 1563 |
+
},
|
| 1564 |
+
{
|
| 1565 |
+
"type": "text",
|
| 1566 |
+
"text": "Continual leaning without exemplar. In this paper, three popular continual learning strategies, i.e., exemplar-based method iCaRL and exemplar-free methods LwF and EWC, are evaluated on UESTC-MMEA-CL. The experimental results indicate that exemplars can effectively alleviate the forgetting problem in multi-modal networks. However, in practical applications, especially in services involving privacy, it is not always available to select and store exemplars. Therefore, studying how to improve the catastrophic forgetting problem of multi-modal networks (especially sensor networks) under exemplar-free case will be an vital research direction in the future.",
|
| 1567 |
+
"bbox": [
|
| 1568 |
+
73,
|
| 1569 |
+
612,
|
| 1570 |
+
491,
|
| 1571 |
+
792
|
| 1572 |
+
],
|
| 1573 |
+
"page_idx": 9
|
| 1574 |
+
},
|
| 1575 |
+
{
|
| 1576 |
+
"type": "text",
|
| 1577 |
+
"text": "VI. CONCLUSION",
|
| 1578 |
+
"text_level": 1,
|
| 1579 |
+
"bbox": [
|
| 1580 |
+
217,
|
| 1581 |
+
816,
|
| 1582 |
+
349,
|
| 1583 |
+
829
|
| 1584 |
+
],
|
| 1585 |
+
"page_idx": 9
|
| 1586 |
+
},
|
| 1587 |
+
{
|
| 1588 |
+
"type": "text",
|
| 1589 |
+
"text": "In this paper, we propose a multi-modal egocentric dataset, named UESTC-MMEA-CL, for continual activity recognition task. UESTC-MMEA-CL contains video, acceleration, and gyroscope data of 32 daily activity classes. Compared to the existing multi-modal datasets, UESTC-MMEA-CL provides not only vision data with auxiliary inertial sensor data but also abundant categories for the purpose of continual learning",
|
| 1590 |
+
"bbox": [
|
| 1591 |
+
73,
|
| 1592 |
+
839,
|
| 1593 |
+
491,
|
| 1594 |
+
946
|
| 1595 |
+
],
|
| 1596 |
+
"page_idx": 9
|
| 1597 |
+
},
|
| 1598 |
+
{
|
| 1599 |
+
"type": "text",
|
| 1600 |
+
"text": "research. Besides, a baseline model is presented for continual multi-modal egocentric activity recognition. We have conducted comprehensive experiments on UESTC-MMEA-CL to explore catastrophic forgetting of multi-modal networks and evaluate four baseline methods to address this problem. Finally, we have given some potential research directions for future research. We hope our multi-modal egocentric dataset can facilitate future studies on multi-modal first-person activity recognition as well as continual learning in wearable applications.",
|
| 1601 |
+
"bbox": [
|
| 1602 |
+
501,
|
| 1603 |
+
277,
|
| 1604 |
+
921,
|
| 1605 |
+
429
|
| 1606 |
+
],
|
| 1607 |
+
"page_idx": 9
|
| 1608 |
+
},
|
| 1609 |
+
{
|
| 1610 |
+
"type": "text",
|
| 1611 |
+
"text": "REFERENCES",
|
| 1612 |
+
"text_level": 1,
|
| 1613 |
+
"bbox": [
|
| 1614 |
+
665,
|
| 1615 |
+
455,
|
| 1616 |
+
761,
|
| 1617 |
+
469
|
| 1618 |
+
],
|
| 1619 |
+
"page_idx": 9
|
| 1620 |
+
},
|
| 1621 |
+
{
|
| 1622 |
+
"type": "list",
|
| 1623 |
+
"sub_type": "ref_text",
|
| 1624 |
+
"list_items": [
|
| 1625 |
+
"[1] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and F.-F. Li, \"Imagenet: A large-scale hierarchical image database,\" in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248-255.",
|
| 1626 |
+
"[2] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, \"The pascal visual object classes challenge: A retrospective,\" International Journal of Computer Vision, vol. 111, no. 1, pp. 98-136, Jan. 2015.",
|
| 1627 |
+
"[3] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick, \"Microsoft coco: Common objects in context,\" in Computer Vision - ECCV 2014, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Eds. Cham: Springer International Publishing, 2014, pp. 740-755.",
|
| 1628 |
+
"[4] J. Liu, A. Shahroudy, M. Perez, G. Wang, L.-Y. Duan, and A. C. Kot, \"Ntu rb+d 120: A large-scale benchmark for 3d human activity understanding,\" IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 10, pp. 2684-2701, 2020.",
|
| 1629 |
+
"[5] F. C. Heilbron, V. Escorcia, B. Ghanem, and J. C. Niebles, “Activitynet: A large-scale video benchmark for human activity understanding,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2015, pp. 961–970.",
|
| 1630 |
+
"[6] K. Grauman, A. Westbury, E. Byrne et al., \"Ego4d: Around the world in 3,000 hours of egocentric video,\" in Proceedings of the IEEE / CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, Louisiana, USA, Jun. 2022, pp. 18995-19012.",
|
| 1631 |
+
"[7] A. Cartas, P. Radeva, and M. Dimiccoli, \"Activities of daily living monitoring via a wearable camera: Toward real-world applications,\" IEEE Access, vol. 8, pp. 77344-77363, 2020.",
|
| 1632 |
+
"[8] T. Nagarajan, C. Feichtenhofer, and K. Grauman, “Grounded human-object interaction hotspots from video,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 8687–8696.",
|
| 1633 |
+
"[9] Z. Zuo, L. Yang, Y. Peng, F. Chao, and Y. Qu, “Gaze-informed egocentric action recognition for memory aid systems,” IEEE Access, vol. 6, pp. 12894–12904, 2018.",
|
| 1634 |
+
"[10] E. Ng, D. Xiang, H. Joo, and K. Grauman, “You2me: Inferring body pose in egocentric video via first and second person interactions,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 9887–9897.",
|
| 1635 |
+
"[11] H. Jiang and V. K. Ithapu, \"Egocentric pose estimation from human vision span,\" in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 10986-10994."
|
| 1636 |
+
],
|
| 1637 |
+
"bbox": [
|
| 1638 |
+
506,
|
| 1639 |
+
482,
|
| 1640 |
+
921,
|
| 1641 |
+
943
|
| 1642 |
+
],
|
| 1643 |
+
"page_idx": 9
|
| 1644 |
+
},
|
| 1645 |
+
{
|
| 1646 |
+
"type": "page_number",
|
| 1647 |
+
"text": "10",
|
| 1648 |
+
"bbox": [
|
| 1649 |
+
906,
|
| 1650 |
+
30,
|
| 1651 |
+
919,
|
| 1652 |
+
40
|
| 1653 |
+
],
|
| 1654 |
+
"page_idx": 9
|
| 1655 |
+
},
|
| 1656 |
+
{
|
| 1657 |
+
"type": "list",
|
| 1658 |
+
"sub_type": "ref_text",
|
| 1659 |
+
"list_items": [
|
| 1660 |
+
"[12] J. S. Smith, R. Xu, and P. Vela, \"egoteb: Egocentric, perception space navigation using timed-elastic-bands,\" in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 2703-2709.",
|
| 1661 |
+
"[13] J. Li, H. Gang, H. Ma, M. Tomizuka, and C. Choi, \"Important object identification with semi-supervised learning for autonomous driving,\" in 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 2913-2919.",
|
| 1662 |
+
"[14] Y.-C. Su and K. Grauman, “Detecting engagement in egocentric video,” in Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Cham: Springer International Publishing, 2016, pp. 454–471.",
|
| 1663 |
+
"[15] Y. Li, T. Nagarajan, B. Xiong, and K. Grauman, \"Ego-exo: Transferring visual representations from third-person to first-person videos,\" in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 6939-6949.",
|
| 1664 |
+
"[16] M. Hu, M. Luo, M. Huang, W. Meng, B. Xiong, X. Yang, and J. Sang, \"Towards a multimodal human activity dataset for healthcare,\" Multimedia Systems, Mar 2022. [Online]. Available: https://doi.org/10.1007/s00530-021-00875-6",
|
| 1665 |
+
"[17] S. Song, V. Chandrasekhar, B. Mandal, L. Li, J.-H. Lim, G. S. Babu, P. P. San, and N.-M. Cheung, \"Multimodal multi-stream deep learning for egocentric activity recognition,\" in 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2016, pp. 378-385.",
|
| 1666 |
+
"[18] H. Rezaie and M. Ghassemian, \"Implementation study of wearable sensors for activity recognition systems,\" Healthcare Technology Letters, vol. 2, no. 4, pp. 95-100, Jul. 2015.",
|
| 1667 |
+
"[19] B. Goertzel and P. Wang, “A foundational architecture for artificial general intelligence,” Advances in artificial general intelligence: Concepts, architectures and algorithms, vol. 6, p. 36, 2007.",
|
| 1668 |
+
"[20] M. McCloskey and N. J. Cohen, \"Catastrophic interference in connectionist networks: The sequential learning problem,\" ser. Psychology of Learning and Motivation, G. H. Bower, Ed. Academic Press, 1989, vol. 24, pp. 109-165. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0079742108605368",
|
| 1669 |
+
"[21] A. ROBINS, “Catastrophic forgetting, rehearsal and pseudorehearsal,” Connection Science, vol. 7, no. 2, pp. 123–146, 1995.",
|
| 1670 |
+
"[22] Z. Li and D. Hoiem, “Learning without forgetting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 12, pp. 2935–2947, 2018.",
|
| 1671 |
+
"[23] S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert, \"icarl: Incremental classifier and representation learning,\" in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5533-5542.",
|
| 1672 |
+
"[24] S. Hou, X. Pan, C. C. Loy, Z. Wang, and D. Lin, \"Learning a unified classifier incrementally via rebalancing,\" in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 831-839.",
|
| 1673 |
+
"[25] A. Douillard, M. Cord, C. Ollion, T. Robert, and E. Valle, \"Podnet: Pooled outputs distillation for small-tasks incremental learning,\" in Computer Vision - ECCV 2020, A. Vedaldi, H. Bischof, T. Brox, and J.-M. Frahm, Eds. Cham: Springer International Publishing, 2020, pp. 86-102.",
|
| 1674 |
+
"[26] X. Hu, K. Tang, C. Miao, X.-S. Hua, and H. Zhang, \"Distilling causal effect of data in class-incremental learning,\" in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 3956-3965.",
|
| 1675 |
+
"[27] S. Yan, J. Xie, and X. He, \"Der: Dynamically expandable representation for class incremental learning,\" in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 3013-3022.",
|
| 1676 |
+
"[28] K. Shmelkov, C. Schmid, and K. Alahari, \"Incremental learning of object detectors without catastrophic forgetting,\" in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 3420-3429.",
|
| 1677 |
+
"[29] K. J. Joseph, S. Khan, F. S. Khan, and V. N. Balasubramanian, \"Towards open world object detection,\" in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 5826-5836.",
|
| 1678 |
+
"[30] U. Michieli and P. Zanuttigh, \"Incremental learning techniques for semantic segmentation,\" in 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019, pp. 3205-3212.",
|
| 1679 |
+
"[31] A. Douillard, Y. Chen, A. Dapogny, and M. Cord, “Plop: Learning without forgetting for continual semantic segmentation,” in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 4039–4049."
|
| 1680 |
+
],
|
| 1681 |
+
"bbox": [
|
| 1682 |
+
76,
|
| 1683 |
+
70,
|
| 1684 |
+
491,
|
| 1685 |
+
943
|
| 1686 |
+
],
|
| 1687 |
+
"page_idx": 10
|
| 1688 |
+
},
|
| 1689 |
+
{
|
| 1690 |
+
"type": "list",
|
| 1691 |
+
"sub_type": "ref_text",
|
| 1692 |
+
"list_items": [
|
| 1693 |
+
"[32] J. Park, M. Kang, and B. Han, \"Class-incremental learning for action recognition in videos,\" in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 13678-13687.",
|
| 1694 |
+
"[33] K. Nakamura, S. Yeung, A. Alahi, and L. Fei-Fei, \"Jointly learning energy expenditures and activities using egocentric multimodal signals,\" in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 6817-6826.",
|
| 1695 |
+
"[34] E. H. Spriggs, F. De La Torre, and M. Hebert, \"Temporal segmentation and activity classification from first-person sensing,\" in 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2009, pp. 17-24.",
|
| 1696 |
+
"[35] C. Chen, R. Jafari, and N. Kehtarnavaz, \"Utd-mhad: A multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor,\" in 2015 IEEE International Conference on Image Processing (ICIP), 2015, pp. 168-172.",
|
| 1697 |
+
"[36] F. Ofii, R. Chaudhry, G. Kurillo, R. Vidal, and R. Bajcsy, \"Berkeley mhad: A comprehensive multimodal human action database,\" in 2013 IEEE Workshop on Applications of Computer Vision (WACV), 2013, pp. 53-60.",
|
| 1698 |
+
"[37] L. Martínez-Villaseñor, H. Ponce, J. Brieva, E. Moya-Albor, J. Núñez-Martínez, and C. Peñafort-Asturiano, “Up-fall detection dataset: A multimodal approach,” Sensors, vol. 19, no. 9, 2019. [Online]. Available: https://www.mdpi.com/1424-8220/19/9/1988",
|
| 1699 |
+
"[38] D. Damen, H. Doughty, G. M. Farinella, A. Furnari, E. Kazakos, J. Ma, D. Moltisanti, J. Munro, T. Perrett, W. Price, and M. Wray, \"Rescaling egocentric vision: Collection, pipeline and challenges for epic-kitchens-100,\" International Journal of Computer Vision, vol. 130, no. 1, pp. 33-55, Jan 2022. [Online]. Available: https://doi.org/10.1007/s11263-021-01531-2",
|
| 1700 |
+
"[39] E. Kazakos, A. Nagrani, A. Zisserman, and D. Damen, \"Epic-fusion: Audio-visual temporal binding for egocentric action recognition,\" in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 5491-5500.",
|
| 1701 |
+
"[40] K. M. Kitani, T. Okabe, Y. Sato, and A. Sugimoto, \"Fast unsupervised ego-action learning for first-person sports videos,\" in CVPR 2011, 2011, pp. 3241-3248.",
|
| 1702 |
+
"[41] Y. Li, A. Fathi, and J. M. Rehg, “Learning to predict gaze in egocentric video,” in 2013 IEEE International Conference on Computer Vision, 2013, pp. 3216–3223.",
|
| 1703 |
+
"[42] S. Bambach, S. Lee, D. J. Crandall, and C. Yu, “Lending a hand: Detecting hands and recognizing activities in complex egocentric interactions,” in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1949–1957.",
|
| 1704 |
+
"[43] A. Krizhevsky, “Learning multiple layers of features from tiny images,” 2009.",
|
| 1705 |
+
"[44] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results,” http://www.pascalnetwork.org/challenges/VOC/voc2012/workshop/index.html.",
|
| 1706 |
+
"[45] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba, \"Scene parsing through ade20k dataset,\" in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5122-5130.",
|
| 1707 |
+
"[46] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, D. Hassabis, C. Clopath, D. Kumaran, and R. Hadsell, “Overcoming catastrophic forgetting in neural networks,” Proc. Natl. Acad. Sci. U. S. A., vol. 114, no. 13, pp. 3521–3526, Mar. 2017.",
|
| 1708 |
+
"[47] F. Zenke, B. Poole, and S. Ganguli, “Continual learning through synaptic intelligence,” in Proceedings of the 34th International Conference on Machine Learning - Volume 70, ser. ICML'17. JMLR.org, 2017, pp. 3987-3995.",
|
| 1709 |
+
"[48] R. Aljundi, F. Babiloni, M. Elhoseiny, M. Rohrbach, and T. Tuytelaars, \"Memory aware synapses: Learning what (not) to forget,\" in Computer Vision - ECCV 2018, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, Eds. Cham: Springer International Publishing, 2018, pp. 144-161.",
|
| 1710 |
+
"[49] X. Liu, M. Masana, L. Herranz, J. Van de Weijer, A. M. López, and A. D. Bagdanov, \"Rotate your networks: Better weight consolidation and less catastrophic forgetting,\" in 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2262-2268.",
|
| 1711 |
+
"[50] I. J. Myung, \"Tutorial on maximum likelihood estimation,\" Journal of Mathematical Psychology, vol. 47, no. 1, pp. 90-100, 2003. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0022249602000287",
|
| 1712 |
+
"[51] R. Aljundi, P. Chakravarty, and T. Tuytelaars, “Expert gate: Lifelong learning with a network of experts,” in 2017 IEEE Conference on"
|
| 1713 |
+
],
|
| 1714 |
+
"bbox": [
|
| 1715 |
+
506,
|
| 1716 |
+
70,
|
| 1717 |
+
919,
|
| 1718 |
+
943
|
| 1719 |
+
],
|
| 1720 |
+
"page_idx": 10
|
| 1721 |
+
},
|
| 1722 |
+
{
|
| 1723 |
+
"type": "page_number",
|
| 1724 |
+
"text": "11",
|
| 1725 |
+
"bbox": [
|
| 1726 |
+
906,
|
| 1727 |
+
31,
|
| 1728 |
+
919,
|
| 1729 |
+
40
|
| 1730 |
+
],
|
| 1731 |
+
"page_idx": 10
|
| 1732 |
+
},
|
| 1733 |
+
{
|
| 1734 |
+
"type": "list",
|
| 1735 |
+
"sub_type": "ref_text",
|
| 1736 |
+
"list_items": [
|
| 1737 |
+
"Computer Vision and Pattern Recognition (CVPR), 2017, pp. 7120-7129.",
|
| 1738 |
+
"[52] D. Abati, J. Tomczak, T. Blankevoort, S. Calderara, R. Cucchiara, and B. E. Bejnordi, \"Conditional channel gated networks for task-aware continual learning,\" in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 3930-3939.",
|
| 1739 |
+
"[53] M. S. Ryoo and L. Matthies, \"First-person activity recognition: What are they doing to me?\" in Proceedings of the IEEE conference on computer vision and pattern recognition, 2013, pp. 2730-2737.",
|
| 1740 |
+
"[54] A. Fathi, Y. Li, and J. M. Rehg, “Learning to recognize daily actions using gaze,” in European Conference on Computer Vision. Springer, 2012, pp. 314–327.",
|
| 1741 |
+
"[55] A. Reiss and D. Stricker, \"Introducing a new benchmarked dataset for activity monitoring,\" in 2012 16th international symposium on wearable computers. IEEE, 2012, pp. 108-109.",
|
| 1742 |
+
"[56] S. Ioffe and C. Szegedy, \"Batch normalization: Accelerating deep network training by reducing internal covariate shift,\" in International conference on machine learning. PMLR, 2015, pp. 448-456.",
|
| 1743 |
+
"[57] F. J. Ordóñez and D. Roggen, \"Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition,\" Sensors, vol. 16, no. 1, p. 115, 2016.",
|
| 1744 |
+
"[58] N. Qian, “On the momentum term in gradient descent learning algorithms,” Neural networks, vol. 12, no. 1, pp. 145–151, 1999.",
|
| 1745 |
+
"[59] G. Hinton, N. Srivastava, and K. Swersky, “Neural networks for machine learning lecture 6a overview of mini-batch gradient descent,” Cited on, vol. 14, no. 8, p. 2, 2012.",
|
| 1746 |
+
"[60] D.-W. Zhou, F.-Y. Wang, H.-J. Ye, and D.-C. Zhan, \"Pycil: A python toolbox for class-incremental learning,\" arXiv preprint arXiv:2112.12533, 2021.",
|
| 1747 |
+
"[61] A. Chaudhry, P. K. Dokania, T. Ajanthan, and P. H. Torr, \"Riemannian walk for incremental learning: Understanding forgetting and intransigence,\" in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 532-547."
|
| 1748 |
+
],
|
| 1749 |
+
"bbox": [
|
| 1750 |
+
76,
|
| 1751 |
+
71,
|
| 1752 |
+
491,
|
| 1753 |
+
445
|
| 1754 |
+
],
|
| 1755 |
+
"page_idx": 11
|
| 1756 |
+
},
|
| 1757 |
+
{
|
| 1758 |
+
"type": "page_number",
|
| 1759 |
+
"text": "12",
|
| 1760 |
+
"bbox": [
|
| 1761 |
+
906,
|
| 1762 |
+
31,
|
| 1763 |
+
919,
|
| 1764 |
+
40
|
| 1765 |
+
],
|
| 1766 |
+
"page_idx": 11
|
| 1767 |
+
}
|
| 1768 |
+
]
|
2301.10xxx/2301.10931/afdf8b0d-7ab1-41e7-80f8-fbf01ffd0d6c_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10931/afdf8b0d-7ab1-41e7-80f8-fbf01ffd0d6c_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8ae0732deb7ffbc4839e11eebb8aaf40b9938c620ea30b083c43f9ab85256052
|
| 3 |
+
size 9551338
|
2301.10xxx/2301.10931/full.md
ADDED
|
@@ -0,0 +1,354 @@
|
| 1 |
+
# Towards Continual Egocentric Activity Recognition: A Multi-modal Egocentric Activity Dataset for Continual Learning
|
| 2 |
+
|
| 3 |
+
Linfeng Xu, Qingbo Wu, Lili Pan, Fanman Meng, Hongliang Li, Chiyuan He, Hanxin Wang, Shaoxu Cheng, Yu Dai
|
| 4 |
+
|
| 5 |
+
Abstract—With the rapid development of wearable cameras, a massive collection of egocentric video for first-person visual perception becomes available. Using egocentric videos to predict first-person activity faces many challenges, including limited field of view (FoV), occlusions, and unstable motions. Observing that sensor data from wearable devices facilitates human activity recognition (HAR), activity recognition using multi-modal data is attracting increasing attention. However, the scarcity of related datasets hinders the development of multi-modal deep learning for egocentric activity recognition. Nowadays, deploying deep learning in the real world has led to a focus on continual learning, which often suffers from catastrophic forgetting. But the catastrophic forgetting problem of continual learning for egocentric activity recognition, especially in the context of multiple modalities, remains unexplored due to the unavailability of suitable datasets. In order to assist this research, in this paper, we present a multi-modal egocentric activity dataset for continual learning named UESTC-MMEA-CL, which is collected by self-developed glasses integrating a first-person camera and wearable sensors. It contains synchronized data of videos, accelerometers, and gyroscopes, for 32 types of daily activities, performed by 10 participants wearing the glasses. The collection device and process of our dataset are described. Its class types and scale are compared with other publicly available multi-modal datasets for egocentric activity recognition. A statistical analysis of the sensor data is given to show its auxiliary effect for different behaviors. Results of egocentric activity recognition are reported when using separately, and jointly, three modalities: RGB, acceleration, and gyroscope, on a base multi-modal network architecture. To explore catastrophic forgetting in continual learning tasks on UESTC-MMEA-CL, four baseline methods are extensively evaluated with different multi-modal combinations. We hope the UESTC-MMEA-CL dataset can promote future studies on continual learning for first-person activity recognition in wearable applications. Our dataset will be released soon.
|
| 6 |
+
|
| 7 |
+
Index Terms-Multi-modal dataset, egocentric activity recognition, continual learning, wearable device
|
| 8 |
+
|
| 9 |
+
# I. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
OVER the last decades, enormous numbers of annotated images and videos have boosted the tremendous progress of models and systems in deep learning and computer vision. Most popular image and video datasets [1], [2], [3], [4], [5] capture moments from a third-person "spectator" view, which leads to limited visual perception in current models and systems
|
| 12 |
+
|
| 13 |
+
L. Xu, Q. Wu, L. Pan, F. Meng, H. Li, C. He, H. Wang, S. Cheng, and Y. Dai are with School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China (e-mail: {lfxu, qbwu, lilipan, fmmeng, hlli} @uestc.edu.cn).
|
| 14 |
+
|
| 15 |
+
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
|
| 16 |
+
|
| 17 |
+
[6]. Compared to the widespread third-person images, videos from the egocentric point of view can provide the first-person experience of immersion or "participant", i.e., we can feel what a person sees when doing an action. Recently, with the rapid development of wearable devices, especially portable head-mounted cameras, such as GoPro, Insta360, Envision Glasses, Vuzix Blade, ThinkReality A3, and Mijia Glasses, the collection of rich egocentric videos becomes available. Analyzing and understanding the content in the egocentric perspective is key to the paradigm shift from "spectator" view to "participant" view in computer vision research, which is of prevalent interest due to a large number of applications, including military operations, lifestyle analysis [7], human-object interactions [8], medical monitoring [9], augmented and virtual reality [10], [11], industrial robotics [12], and autonomous driving [13].
|
| 18 |
+
|
| 19 |
+
Modeling human activity recognition or anticipation for egocentric videos poses lots of challenges. Firstly, unlike third-person video with apparent motion cues [14], egocentric video changes quickly with the movements of the wearer's head and body. It is difficult to capture the motion cues in egocentric video due to drastic alteration of motion direction and speed, as well as the absence of static backgrounds. Secondly, whereas third person images and videos are captured by a "spectator" for some purpose, egocentric images are driven by the active behavior of the camera wearer. The attention of the egocentric video or the "participant", when doing an action, may focus on hands, objects, and the interaction with the surroundings [15], which is quite different from the interest points of a photographer watching from a "spectator" view. Finally, in some egocentric scenes (e.g., riding bicycle and walking on the road), the objects or body associated with the behavior may not appear in the video due to the limited FoV of the egocentric camera.
|
| 20 |
+
|
| 21 |
+
Complementary to vision data, inertial sensor data (e.g., gyroscopes and accelerometers) provide position and direction information of the wearable device, which may facilitate human activity recognition for egocentric videos. Recently, with the advancement and application of wearable inertial sensors, multi-modal methods, i.e., combining vision data and sensor data to recognize human activities, are of widespread interest, which may promote vision-based methods [16], [17], [18]. Some pioneering work [17] uses LSTM to learn the feature from sensor data and CNNs to learn the feature from vision data, which are fused together to predict wearer's activity. However, due to the difficulty of collecting data and
|
| 22 |
+
|
| 23 |
+
the lack of datasets, the progress of multi-modal egocentric activity recognition has been slow compared with vision-based methods.
|
| 24 |
+
|
| 25 |
+
Nowadays, deep neural networks (DNNs) have made tremendous progress in various fields and applications, such as computer vision, pattern recognition, and natural language processing. Although this progress is impressive, most current DNNs are only good at dealing with static data, because they stop learning after the training period. This learning strategy is different from what human beings do. In the real world, humans keep acquiring new skills and knowledge to adapt to dynamic environments based on what they previously learned. This ongoing ability is crucial for the development of artificial general intelligence (AGI) [19]. However, when a network is trained on a sequence of multiple tasks, the performance on previous tasks severely degrades, because the weights of the network that are important for previous tasks are modified to fit the objectives of the new task. This phenomenon is termed catastrophic forgetting [20], [21], which has seriously hampered the further development of machine learning in the real world. To alleviate catastrophic forgetting, many promising continual learning algorithms have been proposed in recent years. Most continual learning research focuses on incremental classification tasks [22], [23], [24], [25], [26], [27]. Beyond classification, continual learning for object detection [28], [29], semantic segmentation [30], [31], and activity recognition [32] has attracted much attention and is an emerging trend due to many real-world applications, such as robotics and autonomous driving. However, catastrophic forgetting in the context of continual learning for multi-modal egocentric activity recognition, and possible approaches to address it, have remained unexplored due to the unavailability of a related dataset. To fill this gap, we propose a multi-modal egocentric activity dataset for continual learning named UESTC-MMEA-CL.
|
| 26 |
+
|
| 27 |
+
Different from the existing multi-modal egocentric activity datasets [33], [34], which are collected by separate camera and sensors, our dataset is collected by self-developed glasses integrated with a first-person camera and an inertial measurement unit (IMU). So UESTC-MMEA-CL is suitable to develop applications for life-logging wearable devices (e.g., smart glasses). The vision data and sensor data of our dataset are synchronized well when doing actions. Similar to our manner of collection, MEAD [17] was collected by Google Glasses to capture synchronous video and sensor data. However, the scale of MEAD is too limited to take full advantage of DNNs, let alone facilitate research of continual learning. The proposed UESTC-MMEA-CL contains 32 daily activity classes with the duration over 30 hours in total. Each sample clip consists of video, acceleration and gyroscope signals which can provide rich object and motion attributes. Besides, as shown in Fig. 1, we divide these classes into different tasks/steps to adapt to the requirements of continual learning to encourage more research on continual multi-modal egocentric activity recognition.
|
| 28 |
+
|
| 29 |
+
In order to better describe catastrophic forgetting in the context of continual learning for multi-modal egocentric activity recognition, we propose a benchmark model and evaluate several classic continual learning methods on our UESTC-MMEA-CL.
|
| 30 |
+
|
| 31 |
+

|
| 32 |
+
Fig. 1: Continual egocentric activity recognition with multi modalities: Video stream, acceleration data(green) and gyroscope data(purple).
|
| 33 |
+
|
| 34 |
+
In summary, the main contributions of this paper are listed as follows:
|
| 35 |
+
|
| 36 |
+
- We propose a new multi-modal egocentric activity dataset UESTC-MMEA-CL, which aims at addressing the catastrophic forgetting problem in the context of continual egocentric activity recognition. To the best of our knowledge, this is the first multi-modal dataset for continual egocentric activity recognition.
|
| 37 |
+
- We propose a benchmark model for multi-modal egocentric activity recognition and demonstrate the experimental results when using separately, and jointly, the three modalities, i.e., RGB, acceleration, and gyroscope, on UESTC-MMEA-CL.
|
| 38 |
+
- We set the continual egocentric activity recognition tasks and describe the main challenges raised by UESTC-MMEA-CL: the catastrophic forgetting of each modality. Besides, we try to employ popular continual learning methods to tackle this problem and provide some potential research directions.
|
| 39 |
+
|
| 40 |
+
# II. RELATED WORK
|
| 41 |
+
|
| 42 |
+
# A. Multi-modal Human Activity Recognition
|
| 43 |
+
|
| 44 |
+
1) Datasets: In order to integrate complementary information with vision data, some multi-modal datasets have been proposed for the human activity recognition task. UTD-MHAD [35] is collected from two independent devices, i.e., a Kinect camera and a wearable inertial sensor. The dataset consists of RGB videos, depth videos, skeleton positions, and inertial signals for 27 human actions such as right arm throw, cross arms in the chest, and basketball shoot. For the purpose of developing and evaluating multi-modal algorithms, Berkeley-MHAD [36] consists of multi-modal data for 11 actions, captured by five different systems: an optical motion capture system, stereo cameras, Microsoft Kinect cameras, accelerometers, and microphones. To address the health problems of elderly persons, the Up-Fall dataset [37] is proposed for reliable fall
|
| 45 |
+
|
| 46 |
+
detection. The dataset contains multi-modal data for six daily living activities and five types of simulated falls from wearable sensors, ambient sensors, and vision devices.
|
| 47 |
+
|
| 48 |
+
Nowadays, activity recognition from the egocentric perspective has become a topic of wide concern due to interesting life-logging applications, such as lifestyle analysis and health monitoring [17]. However, the progress of multi-modal egocentric activity recognition is relatively slow because it is not easy to capture multi-modal data from wearable devices such as smart glasses. The existing multi-modal datasets for egocentric activity recognition are quite limited. EPIC-KITCHENS [38] is a large-scale egocentric video dataset collected by 32 participants in kitchen environments. Every participant was instructed to use a head-mounted GoPro Hero7 Black to record every second from the time they entered the kitchen. This dataset contains multi-modal data of RGB, flow, and audio, but no position or direction information related to the activities. With the wide use of wearable sensors, a number of works introduced auxiliary data for a comprehensive understanding of human activities. Stanford-ECM [33] consists of egocentric video, accelerometer data, and heart rate data, collected by a mobile phone placed in the chest pocket and a wrist-worn heart rate sensor. The dataset contains 24 daily activities under natural conditions, including various levels of motion intensity. CMU-MMAC [34] introduces 29 kitchen activities, such as opening a fridge and removing a cap, collected from 7 participants using an egocentric camera, IMUs, and other sensors. Among the existing datasets, the MEAD dataset collected by Song et al. [17] is the most similar to our proposed UESTC-MMEA-CL dataset. The MEAD dataset was collected with Google Glasses to record 20 human activities and contains modalities of synchronous video and sensor data. However, there are only 200 sequences in total in the MEAD dataset, whose scale is too limited for DNN research.
|
| 49 |
+
|
| 50 |
+
2) Methods: Datasets with more complex scenes and more categories of behaviors make it challenging to recognize human activities with vision-based methods. It is helpful to integrate complementary information with vision data. In order to improve the algorithm's robustness, Song et al. [17] disassemble the visual signals into three input forms (single frame, optical flow, and stabilized optical flow), then classify activities with the aid of gyroscope and accelerometer data. Kazakos et al. [39] propose a mid-level fusion Temporal Binding Network (TBN) to combine signals of three modalities, i.e., video, flow, and audio. Different from traditional fusion methods, multimodal signals are aggregated before temporal fusion with weights shared over time, and each modality is trained individually. Spriggs et al. [34] segment human motion into several actions and classify activities for first-person sensing, which is captured by a wearable vision sensor and IMUs. Kitani et al. [40] propose an unsupervised method for the egocentric activity recognition task, which adopts a stacked Dirichlet process mixture model to infer the motion histogram codebook and the activity category. Nakamura et al. [33] employ a stacked LSTM network to process the fused features from vision and acceleration, then jointly predict activities and energy expenditures with the aid of heart-rate sensor data.
|
| 51 |
+
|
| 52 |
+
Besides, some research efforts [41], [42] have been devoted to predicting people's intentions by analyzing mid-level features such as faces, gaze, and hands.
|
| 53 |
+
|
| 54 |
+
# B. Continual Learning
|
| 55 |
+
|
| 56 |
+
1) Datasets: To the best of our knowledge, there are no datasets dedicated to continual learning tasks, and the researchers usually manually divide the well-known datasets into continual learning task sequences according to the specific task types, such as image classification, object detection and image segmentation, etc. Specifically, for the image classification task, the most widely-used datasets are ImageNet [1] and CIFAR100 [43], which are originally used for non-continuous image classification. ImageNet consists of 1000 classes with approximately 1000 pictures for each class. The size of each picture is $224 \times 224$ . CIFAR100 is made up of 60000 images evenly divided into 100 classes, where each class is comprised of 500 training samples and 100 test samples. For the image segmentation task, the selected datasets are Pascal-VOC 2012 [44] and ADE20K [45]. The former contains 20 classes and the latter contains 150 classes. The Pascal-VOC 2012 dataset is also widely used in the object detection task and the action classification task. Another popular dataset in object detection is Microsoft-COCO [3], which contains 80 classes in total and is comprised of more than 300,000 images and more than 2 million instances. It is worth noting again that all the datasets mentioned here are used originally for non-continuous tasks, but the researchers in continual learning manually partition them into continuous task sequences.
|
| 57 |
+
|
| 58 |
+
2) Methods: Many efforts have been made to improve the performance of continual learning. The existing work can be mainly divided into parameter-based, knowledge-distillation-based, and parameter-expansion-based.
|
| 59 |
+
|
| 60 |
+
Parameter-based. The key to this method is to evaluate the importance of parameters and protect the important ones. Methods [46], [47], [48], [49] fall into this category with different parameter importance estimations. A quadratic penalty imposed on the parameters critical to old tasks is proposed by EWC [46]. The authors utilize the Fisher Information Matrix [50] to choose the critical parameters. Liu et al. [49] obtain a better Fisher Information Matrix approximation by rotating the parameter space.
|
| 61 |
+
|
| 62 |
+
However, overestimation and underestimation might happen due to batch updates. To solve this problem, [47] accumulates the changes in the learning of parameters via which the importance is estimated. Memory Aware Synapses (MAS) [48] solves the same problem by accumulating the gradient magnitude.
|
| 63 |
+
|
| 64 |
+
Distillation-based. The core idea of this category is to prevent the drift between new and old models. Learning without Forgetting (LwF) [22] first introduces knowledge distillation to continual learning. Specifically, the predictions made by the new model should be close enough to the old model predictions. iCaRL [23] proposes a rehearsal strategy and a nearest-mean-of-exemplars classifier to cooperate with the LwF loss. The less-forget loss is then devised by UCIR [24], which penalizes the activation drift of the backbone. For
|
| 65 |
+
|
| 66 |
+
a stronger distillation constraint, a spatial-based multi-level distillation loss is designed by PODNet [25]. DDE [26] aims to solve catastrophic forgetting from the scope of causal analysis and then proposes to distill the colliding effect of new and old data.
|
| 67 |
+
|
| 68 |
+
Parameter-expansion-based. Another straightforward idea is to prevent the parameters of previous tasks from drifting and expand new branches for new tasks. EG [51] allocates a duplicate model for new tasks. CCGN [52] devises a task-specific gating mechanism to select the target filters for specific inputs. DER [27] also duplicates the entire backbone to learn new classes. Additionally, DER concatenates all the features obtained from the backbones and utilizes them to learn a unified classifier. However, the excessive parameter overhead hinders the application of these methods in real-world scenarios.
|
| 69 |
+
|
| 70 |
+
# III. UESTC-MMEA-CL DATASET
|
| 71 |
+
|
| 72 |
+
In this section, we introduce the data collection of UESTC-MMEA-CL, present statistics, and compare with other multimodal egocentric datasets. The distributions of standard deviation of acceleration and gyroscope sensor data are shown to demonstrate the motion intensity of each activity, as well as the motion correlation of the two sensor modalities.
|
| 73 |
+
|
| 74 |
+
# A. Data Collection
|
| 75 |
+
|
| 76 |
+
In order to collect synchronous video and sensor data for egocentric activity recognition, we developed a pair of wearable smart glasses, as shown in Fig. 2(a), with a first-person camera, IMU sensors, and wireless connectivity. The mainboard of the glasses is very compact, as shown in Fig. 2(b). The process of data collection is conducted in the following two steps: 1) device configuration; 2) data collection and post-processing.
|
| 77 |
+
|
| 78 |
+
First, we set up the device as follows. For the camera, the video resolution is $640 \times 480$ and the frame rate is 25 FPS. For the sensors, the sample rate is $25\mathrm{Hz}$ . The sensitivity of the gyroscope is $16.4\mathrm{LSB / deg / s}$ , and the sensitivity of the accelerometer is $8192\mathrm{LSB / g}$ . We developed applications to capture video, accelerometer, and gyroscope data, which are synchronized by time-delay correction and transferred to a terminal via Wi-Fi.
|
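For reference, the raw IMU readings can be converted to physical units from the sensitivities above. The following is a minimal illustrative sketch; the function name and array shapes are assumptions, not part of any released toolchain.

```python
import numpy as np

# IMU sensitivities of the glasses (see the configuration above).
ACC_LSB_PER_G = 8192.0      # accelerometer: 8192 LSB per g
GYRO_LSB_PER_DPS = 16.4     # gyroscope: 16.4 LSB per deg/s


def raw_imu_to_physical(acc_raw: np.ndarray, gyro_raw: np.ndarray):
    """Convert raw LSB readings of shape [T, 3] to g and deg/s."""
    acc_g = acc_raw / ACC_LSB_PER_G
    gyro_dps = gyro_raw / GYRO_LSB_PER_DPS
    return acc_g, gyro_dps
```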
| 79 |
+
|
| 80 |
+
After the configuration, ten subjects are divided into five groups. In each group, one subject wears the glasses and performs the activities, while another monitors a terminal to ensure that each video contains only one action. All data are collected from different scenes with adequate illumination. Because the sensors are sensitive to noise, median filtering is used to remove abnormal values and noise; the kernel size of the median filter is 5. After filtering, the sensor data reflect the movement of the subject better.
|
| 81 |
+
|
| 82 |
+
# B. Dataset Overview
|
| 83 |
+
|
| 84 |
+
We first introduce some general statistics of our proposed dataset UESTC-MMEA-CL, compared with the available egocentric datasets, which is shown in Table I. Our dataset
|
| 85 |
+
|
| 86 |
+

|
| 87 |
+
(a)
|
| 88 |
+
|
| 89 |
+

|
| 90 |
+
(b)
|
| 91 |
+
Fig. 2: The device for data collection. (a) Our developed Kuaiyan Vision Smart Glasses. (b) The mainboard of the glasses.
|
| 92 |
+
|
| 93 |
+
comprises 30.4 hours of video clips, acceleration streams, and gyroscope data in total. There are 32 daily activities included in our dataset as shown in Table II, containing some basic movements (upstairs, walking, standing, etc.), indoor behaviors (writing, reading, type-PC, etc.), several kinds of cleaning chores (mop-floor, wash-dish, wipe-table, etc.), several recreation and leisure activities (watch-TV, play-phone, play-card, etc.), activities with hands (wash-hand, wash-dish, and cooking), and activities with head movements (eating and drinking).
|
| 94 |
+
|
| 95 |
+
In contrast to Stanford-ECM [33], which suffers from limited FoV and contextual information due to the lower location of chest-mounted camera, we embed the camera into the head-mounted glasses to capture more useful visual information. Compared with uni-modality datasets such as JPL-Interaction [53], GTEA Gaze [54], GTEA Gaze+ [54], UEC EgoAction [40], EPIC-KITCHENS [38], we provide additional synchronized data of accelerometers and gyroscopes, which are complementary to vision data and make it available to explore the catastrophic forgetting problem using separately, and jointly, the three modalities. CMU-MMAC [34] provides multi-modal measures of human activities with four wireless IMUs and five wired IMUs located on multiple parts of the subjects' body, such as wrists, ankles, arms, and waist, in order to capture motion details when performing cooking and food preparation. However, the complex devices make the data collection of daily behaviors difficult, which is not suitable for wearable applications. MEAD [17] contains 20 life-logging activities, which uses Google Glasses to capture multi-modal data of video, accelerometer and gyroscope. But due to the limited scale, duration, and category number of MEAD, it is difficult to take full advantage of DNNs and set up multiple tasks for continual learning research.
|
| 96 |
+
|
| 97 |
+
# C. Dataset Statistics
|
| 98 |
+
|
| 99 |
+
Our UESTC-MMEA-CL contains 32 different activity classes and each class contains approximately 200 samples, consisting of fully synchronized first-person video clips, acceleration sensing sequences, and gyroscope sensing sequences. A sample is shown in Fig. 3.
|
| 100 |
+
|
| 101 |
+
Although visual information dominates human activity recognition, sensor data may provide complementary position and direction information to facilitate the recognition task for egocentric video. In order to demonstrate the auxiliary motion
|
| 102 |
+
|
| 103 |
+
TABLE I: Comparison with available egocentric datasets.
|
| 104 |
+
|
| 105 |
+
<table><tr><td>Dataset</td><td>#Subjects</td><td>#Class</td><td>#Duration (h)</td><td>Mount</td><td>Scenario</td><td>Video</td><td>Acc</td><td>Gyro</td></tr><tr><td>CMU-MMAC [34]</td><td>39</td><td>29</td><td>17.0</td><td>Head</td><td>Natural</td><td>✓</td><td>✓</td><td>✓</td></tr><tr><td>JPL-Interaction [53]</td><td>1</td><td>7</td><td>0.4</td><td>Head</td><td>Indoor</td><td>✓</td><td></td><td></td></tr><tr><td>MEAD [17]</td><td>7</td><td>20</td><td>0.5</td><td>Head</td><td>Natural</td><td>✓</td><td>✓</td><td>✓</td></tr><tr><td>GTEA Gaze [54]</td><td>14</td><td>40</td><td>1.0</td><td>Head</td><td>Kitchen</td><td>✓</td><td></td><td></td></tr><tr><td>GTEA Gaze+ [54]</td><td>5</td><td>44</td><td>9.0</td><td>Head</td><td>Kitchen</td><td>✓</td><td></td><td></td></tr><tr><td>PAMAP2 [55]</td><td>9</td><td>18</td><td>-</td><td>-</td><td>-</td><td></td><td>✓</td><td></td></tr><tr><td>UEC EgoAction [40]</td><td>1</td><td>37</td><td>0.5</td><td>Head</td><td>Kitchen</td><td>✓</td><td></td><td></td></tr><tr><td>Stanford-ECM [33]</td><td>10</td><td>24</td><td>31.0</td><td>Chest</td><td>Natural</td><td>✓</td><td>✓</td><td></td></tr><tr><td>EPIC-KITCHENS [38]</td><td>32</td><td>149</td><td>-</td><td>Head</td><td>Kitchen</td><td>✓</td><td></td><td></td></tr><tr><td>UESTC-MMEA-CL(ours)</td><td>10</td><td>32</td><td>30.4</td><td>Head</td><td>Natural</td><td>✓</td><td>✓</td><td>✓</td></tr></table>
|
| 106 |
+
|
| 107 |
+
TABLE II: Activities in UESTC-MMEA-CL Dataset.
|
| 108 |
+
|
| 109 |
+
<table><tr><td></td><td>Class</td><td>#Clips</td><td>#Avg-Dur(s)</td><td>Scenario</td></tr><tr><td>1</td><td>upstairs</td><td>192</td><td>17.7</td><td>teaching building, park, library</td></tr><tr><td>2</td><td>downstairs</td><td>190</td><td>17.3</td><td>teaching building, park, library</td></tr><tr><td>3</td><td>drinking</td><td>202</td><td>16.0</td><td>dorm, office</td></tr><tr><td>4</td><td>fall</td><td>185</td><td>13.7</td><td>campus, office, corridor</td></tr><tr><td>5</td><td>reading</td><td>201</td><td>18.2</td><td>office, classroom</td></tr><tr><td>6</td><td>sweep-floor</td><td>229</td><td>18.0</td><td>corridor, office, campus</td></tr><tr><td>7</td><td>cut-fruits</td><td>203</td><td>17.5</td><td>teaching building, park, office</td></tr><tr><td>8</td><td>mop-floor</td><td>206</td><td>17.7</td><td>corridor, office</td></tr><tr><td>9</td><td>writing</td><td>209</td><td>18.8</td><td>classroom, office</td></tr><tr><td>10</td><td>wipe-table</td><td>245</td><td>18.2</td><td>home, office, dorm</td></tr><tr><td>11</td><td>wash-hand</td><td>189</td><td>17.0</td><td>bathroom, kitchen</td></tr><tr><td>12</td><td>standing</td><td>203</td><td>18.0</td><td>corridor, office, dining hall</td></tr><tr><td>13</td><td>play-phone</td><td>205</td><td>17.4</td><td>classroom, office, campus, park</td></tr><tr><td>14</td><td>type-PC</td><td>204</td><td>18.1</td><td>classroom, office</td></tr><tr><td>15</td><td>eating</td><td>213</td><td>17.1</td><td>classroom, office, dining hall, canteen</td></tr><tr><td>16</td><td>cooking</td><td>225</td><td>17.1</td><td>kitchen, office</td></tr><tr><td>17</td><td>pick-up-phone</td><td>213</td><td>14.4</td><td>classroom, office, teaching building, campus</td></tr><tr><td>18</td><td>drop-trash</td><td>201</td><td>13.1</td><td>campus, park, teaching building</td></tr><tr><td>19</td><td>fold-clothes</td><td>204</td><td>17.3</td><td>home, dorm, office</td></tr><tr><td>20</td><td>walking</td><td>203</td><td>17.1</td><td>campus, library, park</td></tr><tr><td>21</td><td>play-card</td><td>206</td><td>17.0</td><td>classroom, restroom</td></tr><tr><td>22</td><td>brush-teeth</td><td>203</td><td>17.0</td><td>bathroom</td></tr><tr><td>23</td><td>wash-dish</td><td>189</td><td>16.0</td><td>kitchen, bathroom</td></tr><tr><td>24</td><td>moving-sth</td><td>201</td><td>15.7</td><td>corridor, office, teaching building</td></tr><tr><td>25</td><td>type-phone</td><td>195</td><td>17.3</td><td>classroom, office, campus</td></tr><tr><td>26</td><td>chat</td><td>203</td><td>17.3</td><td>classroom, office, dorm</td></tr><tr><td>27</td><td>open-close-door</td><td>200</td><td>15.8</td><td>office, home</td></tr><tr><td>28</td><td>ride-bike</td><td>198</td><td>17.1</td><td>campus, park, road</td></tr><tr><td>29</td><td>sit-stand</td><td>201</td><td>15.7</td><td>office, classroom, library</td></tr><tr><td>30</td><td>take-drop-sth</td><td>201</td><td>13.5</td><td>office, classroom, library</td></tr><tr><td>31</td><td>shopping</td><td>208</td><td>17.1</td><td>mall, street</td></tr><tr><td>32</td><td>watch-TV</td><td>205</td><td>16.9</td><td>office, home</td></tr></table>
|
| 110 |
+
|
| 111 |
+

|
| 112 |
+
Fig. 3: A sample of activities "drinking", which consists of the synchronized video stream, acceleration, and gyroscope sensor data.
|
| 113 |
+
|
| 114 |
+
information of the sensor data, we make statistical analysis of the two modalities, i.e., accelerometer and gyroscope data. Following Stanford-ECM [33], we calculate the standard deviation (STD) of the sensor data to show the relative motion intensity for each activity. Fig. 4(a) demonstrates the distribution of acceleration STD. Activities are sorted by the median STD of acceleration and divided into four levels of intensity. Behaviors such as upstairs, downstairs, and moving-sth are relatively vigorous while chat, type-phone, and watch-TV are stable. The degree of variation in orientation can be measured by the STD of the gyroscope data, which is shown in Fig. 4(b). Besides, Fig. 4(c) shows a scatter plot of the STD distributions of acceleration and gyroscope data, which reflects the motion correlation (correlation coefficient $r = 0.78$ ) of the acceleration and gyroscope data.
|
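The per-activity statistics in Fig. 4 can be reproduced from the per-clip sensor sequences along the following lines. This is a minimal sketch assuming each clip is stored as a [T, 3] array sampled at 25 Hz; the helper names are illustrative.

```python
import numpy as np


def clip_motion_intensity(acc: np.ndarray, gyro: np.ndarray):
    """Summarize one clip's motion intensity as scalar STDs.

    acc, gyro: arrays of shape [T, 3] (x, y, z axes).
    The per-axis standard deviations are averaged into a single value,
    mirroring the per-activity STD statistics shown in Fig. 4(a)-(b).
    """
    acc_std = float(np.std(acc, axis=0).mean())
    gyro_std = float(np.std(gyro, axis=0).mean())
    return acc_std, gyro_std


def std_correlation(acc_stds, gyro_stds):
    """Pearson correlation between the two STD distributions over all clips (Fig. 4(c))."""
    return float(np.corrcoef(np.asarray(acc_stds), np.asarray(gyro_stds))[0, 1])
```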
| 115 |
+
|
| 116 |
+
# IV. CONTINUAL LEARNING ON UESTC-MMEA-CL
|
| 117 |
+
|
| 118 |
+
# A. Problem Setup
|
| 119 |
+
|
| 120 |
+
Although DNNs have made remarkable progress in many applications, most current DNNs face the catastrophic forgetting problem when dealing with dynamic data. In the wearable application of egocentric activity recognition, data may arrive dynamically. Limited by the memory capacity and computing power of wearable devices, models are expected to accommodate new recognition tasks when the data from the past are inaccessible or only partially accessible. In order to explore catastrophic forgetting and promote possible approaches to address this problem, we introduce continual learning into our task scenario.
|
| 121 |
+
|
| 122 |
+
The activity class set $\mathcal{C}$ , which contains $N$ classes ( $N = 32$ ), of our dataset is divided into $S$ incremental steps/tasks. The class set $\mathcal{C}^s$ of each step/task $s$ ( $0 \leq s \leq S - 1$ ) contains $N / S$ classes ( $\mathcal{C}^s = \bigcup_{l=0}^{(N / S) - 1} \mathcal{C}_l^s = \{\mathcal{C}_0^s, \mathcal{C}_1^s,.., \mathcal{C}_{(N / S) - 1}^s\}$ ). The multi-modal sample set of class $\mathcal{C}_l^s$ is denoted as $\mathcal{D}_l^s = \left\{\left(\mathbf{v}_i^{l,s}, \mathbf{a}_i^{l,s}, \mathbf{g}_i^{l,s}, \mathbf{y}_i^{l,s}\right)\bigg|_{i=1}^{N_{l,s}}\right\}$ . Sample set $\mathcal{D}_l^s$ contains $N_{l,s}$ pairs of samples $\mathbf{x}_i^{l,s} = (\mathbf{v}_i^{l,s}, \mathbf{a}_i^{l,s}, \mathbf{g}_i^{l,s})$ and activity class label $\mathbf{y}_i^{l,s}$ , where $\mathbf{v}_i^{l,s}, \mathbf{a}_i^{l,s}, \mathbf{g}_i^{l,s}$ represent the visual signal, acceleration signal and gyroscope signal of the $i$ -th sample of $\mathcal{C}_l^s$ , respectively. At the $s$ step, the models are trained with the available samples $\mathcal{D}^s$ , where $\mathcal{D}^s = \bigcup_{l=0}^{(N / S) - 1} \mathcal{D}_l^s$ , and evaluated on the test set of all seen classes $\bigcup_{j=0}^{s} \mathcal{C}^j$ . For the methods based on exemplar replay (or rehearsal), we define a replay buffer to store exemplar samples $\mathcal{E}^s$ of old classes. At
|
| 123 |
+
|
| 124 |
+

|
| 125 |
+
(a)
|
| 126 |
+
(b)
|
| 127 |
+
|
| 128 |
+

|
| 129 |
+
Fig. 4: Statistics of sensor data. (a) STD distributions of acceleration for all activity classes. The relative motion intensity of the activities increase sequentially from the leftmost column to the right, which are divided into four different levels according to the median STD. (b) STD distributions of gyroscope for each activity. (c) Scatter plot of the STD distributions of acceleration and gyroscope (Correlation coefficient $r = 0.78$ on all samples).
|
| 130 |
+
|
| 131 |
+

|
| 132 |
+
(c)
|
| 133 |
+
|
| 134 |
+
the first step, available data is only $\mathcal{D}^0$ , and at each subsequent incremental step, the available data is $\mathcal{D}^s \cup \mathcal{E}^s (s \geq 1)$ .
|
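A minimal sketch of this class partitioning is given below; the shuffling seed and helper name are illustrative and do not correspond to the official task splits shown in Fig. 6.

```python
import random


def make_incremental_splits(num_classes: int = 32, num_steps: int = 8, seed: int = 0):
    """Partition class indices 0..num_classes-1 into num_steps disjoint sets C^s."""
    assert num_classes % num_steps == 0, "N must be divisible by S"
    classes = list(range(num_classes))
    random.Random(seed).shuffle(classes)
    per_step = num_classes // num_steps
    return [classes[s * per_step:(s + 1) * per_step] for s in range(num_steps)]


# Example: S = 8 gives N/S = 4 new activity classes per incremental step.
splits = make_incremental_splits(num_classes=32, num_steps=8)
seen = []
for s, task_classes in enumerate(splits):
    seen.extend(task_classes)
    # At step s the model is trained on D^s (plus exemplars E^s for replay-based
    # methods) and evaluated on the test data of all classes seen so far.
    print(f"step {s}: new classes {task_classes}, evaluated on {len(seen)} classes")
```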
| 135 |
+
|
| 136 |
+
# B. Base Multi-modal Architecture
|
| 137 |
+
|
| 138 |
+
First, we introduce the base architecture employed for multi-modal egocentric activity recognition, which is shown in Fig. 5. The architecture is based on the temporal binding network (TBN) [39], which is effective for modal fusion and temporal aggregation. BN-Inception [56] is adopted as the feature extractor $\mathcal{F}_v$ for frames from the video stream. Deep convolutional and LSTM recurrent neural networks [57] are used as the feature extractors $\mathcal{F}_a$ and $\mathcal{F}_g$ for the acceleration and gyroscope signals. We use random sampling to sample the multi-modal data within a temporal binding window (TBW) [39]. The input multi-modal data $x = \{v, a, g\}$, where $v, a,$ and $g$ denote the video, acceleration data, and gyroscope data respectively, are divided into $T$ TBWs. Within a temporal binding window $TBW_t$ ( $1 \leq t \leq T$ ), the modalities are sampled as a single video frame, a sequence of acceleration data, and a sequence of gyroscope data, denoted as $x_t = \{v_t, a_t, g_t\}$. Thus, we obtain the fused feature:
|
| 139 |
+
|
| 140 |
+
$$
|
| 141 |
+
y_{t} = Q\left[p\big(\mathcal{F}_{v}(v_{t})\big), \mathcal{F}_{a}(a_{t}), \mathcal{F}_{g}(g_{t})\right], \tag{1}
|
| 142 |
+
$$
|
| 143 |
+
|
| 144 |
+
where $p$ denotes the average pooling operation, and $Q$ represents the mid-fusion block to aggregate features of the three modalities, which contains concatenation, convolution, and ReLU operations. Then, all features $y_{t}$ from the $T$ TBWs are averaged as the input to the activity classifier:
|
| 145 |
+
|
| 146 |
+
$$
|
| 147 |
+
\tilde{y} = \operatorname{softmax}\left(\frac{1}{T}\sum_{t=1}^{T} y_{t}\right). \tag{2}
|
| 148 |
+
$$
|
| 149 |
+
|
| 150 |
+

|
| 151 |
+
Fig. 5: Base architecture of multi-modal egocentric activity recognition. The number of TBWs $T$ is set to 8.
|
| 152 |
+
|
| 153 |
+
Following most classification tasks, cross entropy is employed as the loss function for the final prediction of activities, and branches of all modalities are trained jointly.
|
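To make Eqs. (1)-(2) concrete, the following PyTorch sketch mirrors the mid-fusion block $Q$ and the temporal averaging. It assumes the per-TBW features from $\mathcal{F}_v$ (after the spatial pooling $p$), $\mathcal{F}_a$, and $\mathcal{F}_g$ have already been extracted; the feature dimensions and class name are illustrative rather than the exact configuration of the released model.

```python
import torch
import torch.nn as nn


class MidFusionHead(nn.Module):
    """Fusion block Q of Eq. (1) plus the temporal averaging of Eq. (2)."""

    def __init__(self, dim_v=1024, dim_a=128, dim_g=128, dim_fused=512, num_classes=32):
        super().__init__()
        # Q: concatenation -> 1x1 convolution -> ReLU (a simplification of the
        # fusion block described in the text).
        self.fuse = nn.Sequential(
            nn.Conv1d(dim_v + dim_a + dim_g, dim_fused, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(dim_fused, num_classes)

    def forward(self, feat_v, feat_a, feat_g):
        # Each input has shape [batch, T, dim_*]: one feature vector per TBW.
        y = torch.cat([feat_v, feat_a, feat_g], dim=-1)      # concatenate modalities
        y = self.fuse(y.transpose(1, 2)).transpose(1, 2)     # per-TBW fused features y_t
        logits = self.classifier(y.mean(dim=1))              # average over the T TBWs
        # Softmax over these logits corresponds to Eq. (2); during training,
        # nn.CrossEntropyLoss consumes the logits directly.
        return logits
```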
| 154 |
+
|
| 155 |
+
# C. Benchmark for Continual Learning
|
| 156 |
+
|
| 157 |
+
In this paper, we implement three baseline methods, as well as the most straightforward fine-tune solution, on the base multi-modal architecture (Fig. 5) as the benchmark for continual learning on UESTC-MMEA-CL dataset. These continual learning methods are as follows:
|
| 158 |
+
|
| 159 |
+
- EWC [46]: a parameter-based continual learning model, where the parameters important to old tasks are regularized so that they change only within a small range.
|
| 160 |
+
|
| 161 |
+
Therefore, the interference with old tasks is alleviated during new-task learning.
|
| 162 |
+
|
| 163 |
+
- LwF [22]: a distillation-based continual learning model, where knowledge distillation (KD) is combined with finetuning, and the output of the old network is used to constrain the parameter update of the new task.
|
| 164 |
+
- iCaRL [23]: a replay-based continual learning model, which constructs and manages an exemplar set consisting of a collection of representative old data. The exemplars that are closest to the mean feature of each class are selected (see the sketch after this list). For the new task, the new data and the exemplar set are mixed as input in the learning phase.
|
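The nearest-to-mean exemplar selection used by the iCaRL baseline can be sketched as follows. The sketch simplifies the original greedy herding procedure to a one-shot distance sort and operates on pre-extracted (fused) features; the function names are illustrative.

```python
import numpy as np


def select_exemplars(features: np.ndarray, num_exemplars: int) -> np.ndarray:
    """Return indices of the samples whose features lie closest to the class mean.

    features: [num_samples, feat_dim] features of one activity class.
    """
    mean = features.mean(axis=0, keepdims=True)
    dists = np.linalg.norm(features - mean, axis=1)
    return np.argsort(dists)[:num_exemplars]


def per_class_budget(memory_size: int, num_seen_classes: int) -> int:
    """With a fixed total replay memory, the per-class budget shrinks as classes arrive."""
    return memory_size // max(num_seen_classes, 1)
```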
| 165 |
+
|
| 166 |
+
# V. EXPERIMENTS
|
| 167 |
+
|
| 168 |
+
# A. Implementation Details
|
| 169 |
+
|
| 170 |
+
Sensor signal processing: We use a median filter with kernel size 5 to remove abnormal values from the acceleration and gyroscope signals. Since the gyroscope is not reliable over the long term, the trapezoidal integral of the filtered angular velocity is calculated to obtain the angle data. However, there exists a bias drift problem in the gyroscope signal which would cause a large cumulative error in the integral results. To tackle this problem, we subtract the mean value before integration. After the filtering and integration, 24 consecutive sensor samples are taken within a TBW.
|
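A minimal sketch of this preprocessing pipeline (median filtering, bias removal, and trapezoidal integration of the angular velocity) is given below; numpy/scipy are assumed and only the constants stated above are used.

```python
import numpy as np
from scipy.signal import medfilt

SAMPLE_RATE_HZ = 25.0
SAMPLES_PER_TBW = 24


def preprocess_gyro(gyro: np.ndarray) -> np.ndarray:
    """Median-filter the angular velocity, remove its mean (bias drift),
    and integrate it with the trapezoidal rule to obtain angle data.

    gyro: [T, 3] angular velocity in deg/s sampled at 25 Hz.
    """
    dt = 1.0 / SAMPLE_RATE_HZ
    filtered = np.stack([medfilt(gyro[:, k], kernel_size=5) for k in range(3)], axis=1)
    debiased = filtered - filtered.mean(axis=0, keepdims=True)  # subtract mean before integration
    # Trapezoidal integration: angle increments of 0.5 * (w[t] + w[t-1]) * dt.
    increments = 0.5 * (debiased[1:] + debiased[:-1]) * dt
    return np.concatenate([np.zeros((1, 3)), np.cumsum(increments, axis=0)], axis=0)


def sample_tbw(signal: np.ndarray, start: int) -> np.ndarray:
    """Take the 24 consecutive sensor samples used inside one TBW."""
    return signal[start:start + SAMPLES_PER_TBW]
```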
| 171 |
+
|
| 172 |
+
Multi-modal training details: We implement the model in PyTorch. The video stream branch is trained by the SGD optimizer [58] with a momentum of 0.9, a batch size of 8, a dropout of 0.5, and a learning rate of 0.001. The acceleration and gyroscope stream branches are trained by the RMSprop optimizer [59] with a dropout of 0.5 and a learning rate of 0.001. The batch size is set to 32 for the acceleration network and 8 for the gyroscope network. We initialize the RGB network with a model pre-trained on ImageNet. All networks are trained for 50 epochs, and the learning rate is decayed by a factor of 10 at epochs 10 and 20.
|
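In PyTorch, these optimizer settings translate roughly as follows; `rgb_branch`, `acc_branch`, and `gyro_branch` are placeholders for the three modality networks rather than the actual model definitions.

```python
import torch

# Placeholders for the three modality branches (BN-Inception and the two
# sensor encoders in the actual model).
rgb_branch = torch.nn.Linear(8, 8)
acc_branch = torch.nn.Linear(8, 8)
gyro_branch = torch.nn.Linear(8, 8)

# Video branch: SGD with momentum 0.9; sensor branches: RMSprop. All use lr = 0.001.
opt_rgb = torch.optim.SGD(rgb_branch.parameters(), lr=1e-3, momentum=0.9)
opt_acc = torch.optim.RMSprop(acc_branch.parameters(), lr=1e-3)
opt_gyro = torch.optim.RMSprop(gyro_branch.parameters(), lr=1e-3)

# Learning rate decayed by a factor of 10 at epochs 10 and 20 (50 epochs in total).
schedulers = [
    torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[10, 20], gamma=0.1)
    for opt in (opt_rgb, opt_acc, opt_gyro)
]
```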
| 173 |
+
|
| 174 |
+
Continual learning training details: All continual learning benchmarks are implemented using PyTorch and PyCIL [60]. The settings of incremental steps and activity classes in each step are shown in Fig. 6. Based on the problem setup introduced in Section IV-A, we set the total number of activity classes to $N = 32$ and the number of incremental steps to $S = \{16,8,4\}$. Therefore, each step contains $N / S = \{2,4,8\}$ activity classes. For the replay-based continual learning method, we set the memory size to 320. Other parameter settings are the same as in the multi-modal training.
|
| 175 |
+
|
| 176 |
+
# B. Metrics
|
| 177 |
+
|
| 178 |
+
Following the research in continual learning [23], [61], two metrics, i.e., average accuracy and average forgetting, are used to evaluate the overall accuracy over the continual learning stages and the average decrease of accuracy on previous tasks, respectively. They are defined as follows.
|
| 179 |
+
|
| 180 |
+
Average accuracy (A) Here, $a_{k,j} \in [0,1]$ denotes the accuracy evaluated on the test set of task $j$ after learning task $k$ ( $j \leq k$ ). Then the average accuracy on task $k$ can be calculated as
|
| 181 |
+
|
| 182 |
+
$$
|
| 183 |
+
A_{k} = \frac{1}{k}\sum_{j=1}^{k} a_{k,j} \tag{3}
|
| 184 |
+
$$
|
| 185 |
+
|
| 186 |
+
Average forgetting (F) The forgetting for a certain task is defined as the difference between the maximum knowledge obtained with respect to the task during the learning process in the past and the current knowledge the model has about it [61]. $f_{j}^{k}\in [-1,1]$ denotes the forgetting on the previous task $j$ after learning task $k$ , which can be formulated as
|
| 187 |
+
|
| 188 |
+
$$
|
| 189 |
+
f_{j}^{k} = \max_{l \in \{j, \dots, k-1\}} a_{l,j} - a_{k,j}, \quad \forall j < k \tag{4}
|
| 190 |
+
$$
|
| 191 |
+
|
| 192 |
+
Thus, the average forgetting at the $k$ -th task can be defined as
|
| 193 |
+
|
| 194 |
+
$$
|
| 195 |
+
F_{k} = \frac{1}{k-1}\sum_{j=1}^{k-1} f_{j}^{k} \tag{5}
|
| 196 |
+
$$
|
| 197 |
+
|
| 198 |
+
Note that the lower $F_{k}$ , the less forgetting of a model on the previous tasks.
|
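Given the full accuracy matrix $a_{k,j}$, both metrics can be computed directly. The sketch below maps the 1-indexed tasks of Eqs. (3)-(5) onto 0-indexed arrays; it assumes `acc[k-1, j-1]` stores $a_{k,j}$.

```python
import numpy as np


def average_accuracy(acc: np.ndarray, k: int) -> float:
    """Eq. (3): mean of a_{k,j} over tasks j = 1..k."""
    return float(np.mean(acc[k - 1, :k]))


def average_forgetting(acc: np.ndarray, k: int) -> float:
    """Eqs. (4)-(5): average drop from the best accuracy ever reached on each
    previous task j < k to its accuracy after learning task k (requires k >= 2)."""
    drops = [np.max(acc[j - 1:k - 1, j - 1]) - acc[k - 1, j - 1] for j in range(1, k)]
    return float(np.mean(drops))
```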
| 199 |
+
|
| 200 |
+
# C. Evaluation on UESTC-MMEA-CL
|
| 201 |
+
|
| 202 |
+
Multi-modal egocentric activity recognition. We evaluate multi-modal egocentric activity recognition on UESTC-MMEA-CL with different modal combinations using the base architecture in Fig. 5. The results are summarized in Table III. For uni-modal recognition, the RGB network achieves the most prominent performance compared with the acceleration and gyroscope networks. Fusing the two modalities of sensor data, the average class precision is $59.9\%$ , a relative improvement of more than $56\%$ over the uni-modal 'Acc' and 'Gyro' networks. The improvements of the modal combinations 'RGB+Acc' and 'RGB+Gyro' over RGB are smaller but still clear. When fusing all modalities together, 'RGB+Acc+Gyro' achieves the highest recognition accuracy.
|
| 203 |
+
|
| 204 |
+
In order to demonstrate the recognition performance of different modalities and modal combinations on each activity class, we present the confusion matrices shown in Fig. 7. As shown in Fig. 7(d), with the help of the two types of motion-based sensor data, the recognition accuracy of activities such as 'upstairs', 'drinking', 'fall', 'walking', 'sit-stand' and 'shopping' is obviously improved. Fig. 7(e) and Fig. 7(f) show that $\mathrm{RGB + Acc}$ and $\mathrm{RGB + Gyro}$ also perform well but are not as good as 'All'.
|
| 205 |
+
|
| 206 |
+
Catastrophic forgetting. Deep neural networks often suffer from catastrophic forgetting in continual learning tasks. In order to demonstrate catastrophic forgetting in the context of continual learning for multi-modal activity recognition, the straightforward fine-tune solution is evaluated on UESTC-MMEA-CL with three incremental settings and the two metrics A and F introduced in Section V-B. As shown in Fig. 8, as the number of incremental tasks increases, the recognition accuracy of fine-tune on different modalities and modal combinations decreases dramatically. Moreover, when using the sensor data, the model suffers from more serious catastrophic forgetting than when using RGB only. The average accuracy and forgetting of fine-tune with incremental setting $N / S = 4$
|
| 207 |
+
|
| 208 |
+

|
| 209 |
+
Fig. 6: Settings of incremental steps. Each number denotes the activity class in Table II.
|
| 210 |
+
|
| 211 |
+
TABLE III: Results on UESTC-MMEA-CL using multi-modal combinations ('All' denotes 'RGB+Acc+Gyro').
|
| 212 |
+
|
| 213 |
+
<table><tr><td rowspan="2"></td><td colspan="3">Uni-modal</td><td colspan="4">Multi-modal</td></tr><tr><td>RGB</td><td>Acc</td><td>Gyro</td><td>RGB + Acc</td><td>RGB + Gyro</td><td>Acc + Gyro</td><td>All</td></tr><tr><td>Top1-Accuracy (%)</td><td>92.6</td><td>35.0</td><td>38.2</td><td>94.5</td><td>93.9</td><td>59.7</td><td>95.6</td></tr><tr><td>Avg Class Precision (%)</td><td>92.5</td><td>35.1</td><td>38.3</td><td>94.4</td><td>93.9</td><td>59.9</td><td>95.6</td></tr></table>
|
| 214 |
+
|
| 215 |
+

|
| 216 |
+
(a)
|
| 217 |
+
|
| 218 |
+

|
| 219 |
+
(b)
|
| 220 |
+
|
| 221 |
+

|
| 222 |
+
(c)
|
| 223 |
+
|
| 224 |
+

|
| 225 |
+
(d)
|
| 226 |
+
|
| 227 |
+

|
| 228 |
+
(e)
|
| 229 |
+
|
| 230 |
+

|
| 231 |
+
(f)
|
| 232 |
+
Fig. 7: Confusion matrices of activity recognition. The first row shows the test results for three uni-modal networks: (a) RGB; (b) Acceleration; (c) Gyroscope. The second row demonstrates the difference between the multi-modal combination networks and the RGB network: (d) Difference between 'All' and 'RGB'; (e) Difference between 'RGB+Acc' and 'RGB'; (f) Difference between 'RGB+Gyro' and 'RGB'.
|
| 233 |
+
|
| 234 |
+

|
| 235 |
+
|
| 236 |
+

|
| 237 |
+
Fig. 8: Fine-tune results on UESTC-MMEA-CL with $N / S = 2$ (left), 4 (middle) and 8 (right).
|
| 238 |
+
|
| 239 |
+

|
| 240 |
+
|
| 241 |
+

|
| 242 |
+
|
| 243 |
+

|
| 244 |
+
|
| 245 |
+

|
| 246 |
+
|
| 247 |
+

|
| 248 |
+
|
| 249 |
+

|
| 250 |
+
|
| 251 |
+

|
| 252 |
+
Fig. 9: Multi-modal continual learning performance on UESTC-MMEA-CL with $N / S = 4$ . Three continual learning methods and fine-tune are evaluated on our dataset using different modalities and modal combinations: (a) RGB; (b) Acc; (c) Gyro; (d) RGB+Acc; (e) RGB+Gyro; (f) Acc+Gyro; (g) RGB+Acc+Gyro.
|
| 253 |
+
|
| 254 |
+

|
| 255 |
+
|
| 256 |
+

|
| 257 |
+
|
| 258 |
+

|
| 259 |
+
|
| 260 |
+
are reported in the first column of Table IV. It can be observed that the uni-modal 'RGB' network maintains a relatively low forgetting rate, while the uni-modal 'Acc' network suffers from very severe forgetting of previously learned activities. Although 'Gyro' does not perform well, its average forgetting is not as high as that of 'Acc', due to the low accuracy of 'Gyro' at the first incremental step. It can also be seen that the multi-modal combinations 'RGB+Acc', 'RGB+Gyro', and 'All' ('RGB+Acc+Gyro') do not bring gains over the uni-modal 'RGB' network in continual learning. Instead, the catastrophic forgetting problem of fine-tune is aggravated by the addition of complementary sensor data.
|
| 261 |
+
|
| 262 |
+
Evaluation with continual learning strategies. To overcome catastrophic forgetting, we adapt the popular continual learning methods iCaRL [23], EWC [46], and LwF [22]
|
| 263 |
+
|
| 264 |
+
to continual multi-modal activity recognition. Fig. 9(a) demonstrates how these continual learning strategies suppress catastrophic forgetting for the uni-modal RGB network. With the help of exemplar replay, iCaRL effectively alleviates the forgetting problem, while the effect of the exemplar-free strategies EWC and LwF is less noticeable. As shown in Fig. 9(b) and (c), these continual learning strategies are not as effective at alleviating forgetting for the acceleration and gyroscope uni-modal networks as they are for the RGB network. As listed in the second and third rows of Table IV, the average accuracy of the sensor networks remains below $20\%$ even when continual learning strategies are adopted.
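For readers unfamiliar with exemplar replay, the sketch below is a simplified illustration of the idea (it is not iCaRL's herding-based selection or nearest-mean classifier; the class and capacity values are illustrative): a bounded memory of old-task samples is kept and mixed into training on new tasks.

```python
import random

class ExemplarMemory:
    # Replay-style memory in the spirit of iCaRL [23], heavily simplified:
    # keep a bounded set of samples from previous tasks and draw from it
    # while training on a new task to reduce forgetting.
    def __init__(self, capacity=2000):
        self.capacity = capacity
        self.samples = []  # list of (input, label) pairs from old tasks

    def add(self, batch):
        self.samples.extend(batch)
        if len(self.samples) > self.capacity:
            # Random subsampling here; iCaRL instead selects exemplars by herding.
            self.samples = random.sample(self.samples, self.capacity)

    def replay(self, batch_size=32):
        k = min(batch_size, len(self.samples))
        return random.sample(self.samples, k) if k else []
```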
|
| 265 |
+
|
| 266 |
+
Fig. 9(d)-(g) present the recognition accuracy of the multi-modal combination networks. When the sensor data are combined, the replay-based iCaRL can implicitly exploit
|
| 267 |
+
|
| 268 |
+
TABLE IV: Average accuracy (A) and average forgetting (F) of continual learning strategies and fine-tune on UESTC-MMEA-CL $(N / S = 4)$. Note that $\uparrow$ indicates higher is better and $\downarrow$ indicates lower is better.
|
| 269 |
+
|
| 270 |
+
<table><tr><td rowspan="2"></td><td colspan="2">Fine-tune</td><td colspan="2">iCaRL [23]</td><td colspan="2">EWC [46]</td><td colspan="2">LwF [22]</td></tr><tr><td>A↑</td><td>F↓</td><td>A↑</td><td>F↓</td><td>A↑</td><td>F↓</td><td>A↑</td><td>F↓</td></tr><tr><td>RGB</td><td>29.3</td><td>64.5</td><td>70.4</td><td>32.1</td><td>51.6</td><td>35.4</td><td>40.4</td><td>51.8</td></tr><tr><td>Acc</td><td>9.0</td><td>86.3</td><td>17.0</td><td>67.4</td><td>9.4</td><td>49.6</td><td>12.5</td><td>20.7</td></tr><tr><td>Gyro</td><td>8.1</td><td>68.8</td><td>14.3</td><td>58.1</td><td>4.3</td><td>42.8</td><td>9.8</td><td>33.7</td></tr><tr><td>RGB+Acc</td><td>12.2</td><td>99.0</td><td>68.2</td><td>34.8</td><td>22.4</td><td>24.3</td><td>22.7</td><td>44.6</td></tr><tr><td>RGB+Gyro</td><td>12.2</td><td>98.9</td><td>77.4</td><td>34.2</td><td>18.2</td><td>18.6</td><td>29.0</td><td>49.0</td></tr><tr><td>Acc+Gyro</td><td>12.1</td><td>83.3</td><td>34.8</td><td>56.9</td><td>12.2</td><td>66.1</td><td>15.3</td><td>15.7</td></tr><tr><td>RGB+Acc+Gyro</td><td>12.3</td><td>99.0</td><td>77.8</td><td>33.5</td><td>19.6</td><td>29.9</td><td>17.1</td><td>49.4</td></tr></table>
|
| 271 |
+
|
| 272 |
+
the multi-modal complementary information to reduce the forgetting rate of the RGB network. As shown in Table IV, 'iCaRL-RGB+Acc+Gyro' achieves the highest average accuracy of $77.8\%$ with a relatively low forgetting rate of $33.5\%$. 'iCaRL-RGB+Gyro' also performs well, with an average accuracy of $77.4\%$ compared with $70.4\%$ for 'iCaRL-RGB'. Compared with fine-tune, the exemplar-free strategies EWC and LwF can also suppress forgetting, though not as markedly as iCaRL.
|
| 273 |
+
|
| 274 |
+
# D. Discussion
|
| 275 |
+
|
| 276 |
+
Fusion of multi-modal data. In our work, we use TBW-like mid-fusion to aggregate features from different modalities; however, early fusion or late fusion may behave differently with respect to catastrophic forgetting. Exploring a more suitable way to fuse and align the multi-modal data deserves further study.
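To clarify where mid-fusion sits between the alternatives mentioned above, the following minimal sketch (our own illustration, not the paper's module; dimensions and names are hypothetical) concatenates per-modality features before a shared classifier, whereas early fusion would merge the raw inputs and late fusion would merge per-modality predictions.

```python
import torch
import torch.nn as nn

class MidFusionHead(nn.Module):
    # Mid-fusion: each modality is encoded separately, and the resulting
    # feature vectors are concatenated before a joint classification layer.
    def __init__(self, dims=(512, 128, 128), n_classes=32):
        super().__init__()
        self.classifier = nn.Linear(sum(dims), n_classes)

    def forward(self, rgb_feat, acc_feat, gyro_feat):
        fused = torch.cat([rgb_feat, acc_feat, gyro_feat], dim=-1)
        return self.classifier(fused)

# Usage with hypothetical per-modality feature vectors (batch size 2):
head = MidFusionHead()
logits = head(torch.randn(2, 512), torch.randn(2, 128), torch.randn(2, 128))
```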
|
| 277 |
+
|
| 278 |
+
Catastrophic forgetting of sensor modalities. As shown in Fig. 9(b), (c), and (f), continual learning with sensor modalities performs poorly when RGB is unavailable, and the forgetting problem is more severe than when RGB is used. This phenomenon may be closely related to the network architecture.
|
| 279 |
+
|
| 280 |
+
Continual learning without exemplars. In this paper, three popular continual learning strategies, i.e., the exemplar-based method iCaRL and the exemplar-free methods LwF and EWC, are evaluated on UESTC-MMEA-CL. The experimental results indicate that exemplars can effectively alleviate the forgetting problem in multi-modal networks. However, in practical applications, especially in privacy-sensitive services, it is not always feasible to select and store exemplars. Therefore, studying how to mitigate the catastrophic forgetting of multi-modal networks (especially sensor networks) in the exemplar-free setting will be a vital research direction in the future.
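To make the exemplar-free setting concrete, the sketch below (our own illustration, not the paper's implementation; variable names are hypothetical) shows an EWC-style quadratic penalty [46]: instead of replaying stored samples, it discourages parameters that were important for previous tasks from drifting away from their earlier values.

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1.0):
    # EWC-style regularizer: fisher[name] estimates how important each
    # parameter was for previous tasks; old_params[name] stores its value
    # after the previous task. No exemplars need to be kept in memory.
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Training loss on the new task (task_loss is the usual classification loss):
# total_loss = task_loss + ewc_penalty(model, fisher, old_params, lam=100.0)
```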
|
| 281 |
+
|
| 282 |
+
# VI. CONCLUSION
|
| 283 |
+
|
| 284 |
+
In this paper, we propose a multi-modal egocentric dataset, named UESTC-MMEA-CL, for the continual activity recognition task. UESTC-MMEA-CL contains video, acceleration, and gyroscope data for 32 daily activity classes. Compared with existing multi-modal datasets, UESTC-MMEA-CL provides not only vision data with auxiliary inertial sensor data but also abundant categories for the purpose of continual learning
|
| 285 |
+
|
| 286 |
+
research. In addition, a baseline model is presented for continual multi-modal egocentric activity recognition. We have conducted comprehensive experiments on UESTC-MMEA-CL to explore the catastrophic forgetting of multi-modal networks and evaluated four baseline methods that address this problem. Finally, we have outlined some potential directions for future research. We hope our multi-modal egocentric dataset will facilitate future studies on multi-modal first-person activity recognition as well as continual learning in wearable applications.
|
| 287 |
+
|
| 288 |
+
# REFERENCES
|
| 289 |
+
|
| 290 |
+
[1] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and F.-F. Li, "Imagenet: A large-scale hierarchical image database," in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248-255.
|
| 291 |
+
[2] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The pascal visual object classes challenge: A retrospective," International Journal of Computer Vision, vol. 111, no. 1, pp. 98-136, Jan. 2015.
|
| 292 |
+
[3] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick, "Microsoft coco: Common objects in context," in Computer Vision - ECCV 2014, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Eds. Cham: Springer International Publishing, 2014, pp. 740-755.
|
| 293 |
+
[4] J. Liu, A. Shahroudy, M. Perez, G. Wang, L.-Y. Duan, and A. C. Kot, "NTU RGB+D 120: A large-scale benchmark for 3D human activity understanding," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 10, pp. 2684-2701, 2020.
|
| 294 |
+
[5] F. C. Heilbron, V. Escorcia, B. Ghanem, and J. C. Niebles, “Activitynet: A large-scale video benchmark for human activity understanding,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2015, pp. 961–970.
|
| 295 |
+
[6] K. Grauman, A. Westbury, E. Byrne et al., "Ego4d: Around the world in 3,000 hours of egocentric video," in Proceedings of the IEEE / CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, Louisiana, USA, Jun. 2022, pp. 18995-19012.
|
| 296 |
+
[7] A. Cartas, P. Radeva, and M. Dimiccoli, "Activities of daily living monitoring via a wearable camera: Toward real-world applications," IEEE Access, vol. 8, pp. 77344-77363, 2020.
|
| 297 |
+
[8] T. Nagarajan, C. Feichtenhofer, and K. Grauman, “Grounded human-object interaction hotspots from video,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 8687–8696.
|
| 298 |
+
[9] Z. Zuo, L. Yang, Y. Peng, F. Chao, and Y. Qu, “Gaze-informed egocentric action recognition for memory aid systems,” IEEE Access, vol. 6, pp. 12894–12904, 2018.
|
| 299 |
+
[10] E. Ng, D. Xiang, H. Joo, and K. Grauman, “You2me: Inferring body pose in egocentric video via first and second person interactions,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 9887–9897.
|
| 300 |
+
[11] H. Jiang and V. K. Ithapu, "Egocentric pose estimation from human vision span," in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 10986-10994.
|
| 301 |
+
|
| 302 |
+
[12] J. S. Smith, R. Xu, and P. Vela, "egoteb: Egocentric, perception space navigation using timed-elastic-bands," in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 2703-2709.
|
| 303 |
+
[13] J. Li, H. Gang, H. Ma, M. Tomizuka, and C. Choi, "Important object identification with semi-supervised learning for autonomous driving," in 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 2913-2919.
|
| 304 |
+
[14] Y.-C. Su and K. Grauman, “Detecting engagement in egocentric video,” in Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Cham: Springer International Publishing, 2016, pp. 454–471.
|
| 305 |
+
[15] Y. Li, T. Nagarajan, B. Xiong, and K. Grauman, "Ego-exo: Transferring visual representations from third-person to first-person videos," in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 6939-6949.
|
| 306 |
+
[16] M. Hu, M. Luo, M. Huang, W. Meng, B. Xiong, X. Yang, and J. Sang, "Towards a multimodal human activity dataset for healthcare," Multimedia Systems, Mar 2022. [Online]. Available: https://doi.org/10.1007/s00530-021-00875-6
|
| 307 |
+
[17] S. Song, V. Chandrasekhar, B. Mandal, L. Li, J.-H. Lim, G. S. Babu, P. P. San, and N.-M. Cheung, "Multimodal multi-stream deep learning for egocentric activity recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2016, pp. 378-385.
|
| 308 |
+
[18] H. Rezaie and M. Ghassemian, "Implementation study of wearable sensors for activity recognition systems," Healthcare Technology Letters, vol. 2, no. 4, pp. 95-100, Jul. 2015.
|
| 309 |
+
[19] B. Goertzel and P. Wang, “A foundational architecture for artificial general intelligence,” Advances in artificial general intelligence: Concepts, architectures and algorithms, vol. 6, p. 36, 2007.
|
| 310 |
+
[20] M. McCloskey and N. J. Cohen, "Catastrophic interference in connectionist networks: The sequential learning problem," ser. Psychology of Learning and Motivation, G. H. Bower, Ed. Academic Press, 1989, vol. 24, pp. 109-165. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0079742108605368
|
| 311 |
+
[21] A. Robins, “Catastrophic forgetting, rehearsal and pseudorehearsal,” Connection Science, vol. 7, no. 2, pp. 123–146, 1995.
|
| 312 |
+
[22] Z. Li and D. Hoiem, “Learning without forgetting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 12, pp. 2935–2947, 2018.
|
| 313 |
+
[23] S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert, "icarl: Incremental classifier and representation learning," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5533-5542.
|
| 314 |
+
[24] S. Hou, X. Pan, C. C. Loy, Z. Wang, and D. Lin, "Learning a unified classifier incrementally via rebalancing," in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 831-839.
|
| 315 |
+
[25] A. Douillard, M. Cord, C. Ollion, T. Robert, and E. Valle, "Podnet: Pooled outputs distillation for small-tasks incremental learning," in Computer Vision - ECCV 2020, A. Vedaldi, H. Bischof, T. Brox, and J.-M. Frahm, Eds. Cham: Springer International Publishing, 2020, pp. 86-102.
|
| 316 |
+
[26] X. Hu, K. Tang, C. Miao, X.-S. Hua, and H. Zhang, "Distilling causal effect of data in class-incremental learning," in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 3956-3965.
|
| 317 |
+
[27] S. Yan, J. Xie, and X. He, "Der: Dynamically expandable representation for class incremental learning," in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 3013-3022.
|
| 318 |
+
[28] K. Shmelkov, C. Schmid, and K. Alahari, "Incremental learning of object detectors without catastrophic forgetting," in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 3420-3429.
|
| 319 |
+
[29] K. J. Joseph, S. Khan, F. S. Khan, and V. N. Balasubramanian, "Towards open world object detection," in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 5826-5836.
|
| 320 |
+
[30] U. Michieli and P. Zanuttigh, "Incremental learning techniques for semantic segmentation," in 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019, pp. 3205-3212.
|
| 321 |
+
[31] A. Douillard, Y. Chen, A. Dapogny, and M. Cord, “Plop: Learning without forgetting for continual semantic segmentation,” in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 4039–4049.
|
| 322 |
+
|
| 323 |
+
[32] J. Park, M. Kang, and B. Han, "Class-incremental learning for action recognition in videos," in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 13678-13687.
|
| 324 |
+
[33] K. Nakamura, S. Yeung, A. Alahi, and L. Fei-Fei, "Jointly learning energy expenditures and activities using egocentric multimodal signals," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 6817-6826.
|
| 325 |
+
[34] E. H. Spriggs, F. De La Torre, and M. Hebert, "Temporal segmentation and activity classification from first-person sensing," in 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2009, pp. 17-24.
|
| 326 |
+
[35] C. Chen, R. Jafari, and N. Kehtarnavaz, "Utd-mhad: A multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor," in 2015 IEEE International Conference on Image Processing (ICIP), 2015, pp. 168-172.
|
| 327 |
+
[36] F. Ofli, R. Chaudhry, G. Kurillo, R. Vidal, and R. Bajcsy, "Berkeley MHAD: A comprehensive multimodal human action database," in 2013 IEEE Workshop on Applications of Computer Vision (WACV), 2013, pp. 53-60.
|
| 328 |
+
[37] L. Martínez-Villaseñor, H. Ponce, J. Brieva, E. Moya-Albor, J. Núñez-Martínez, and C. Peñafort-Asturiano, “Up-fall detection dataset: A multimodal approach,” Sensors, vol. 19, no. 9, 2019. [Online]. Available: https://www.mdpi.com/1424-8220/19/9/1988
|
| 329 |
+
[38] D. Damen, H. Doughty, G. M. Farinella, A. Furnari, E. Kazakos, J. Ma, D. Moltisanti, J. Munro, T. Perrett, W. Price, and M. Wray, "Rescaling egocentric vision: Collection, pipeline and challenges for epic-kitchens-100," International Journal of Computer Vision, vol. 130, no. 1, pp. 33-55, Jan 2022. [Online]. Available: https://doi.org/10.1007/s11263-021-01531-2
|
| 330 |
+
[39] E. Kazakos, A. Nagrani, A. Zisserman, and D. Damen, "Epic-fusion: Audio-visual temporal binding for egocentric action recognition," in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 5491-5500.
|
| 331 |
+
[40] K. M. Kitani, T. Okabe, Y. Sato, and A. Sugimoto, "Fast unsupervised ego-action learning for first-person sports videos," in CVPR 2011, 2011, pp. 3241-3248.
|
| 332 |
+
[41] Y. Li, A. Fathi, and J. M. Rehg, “Learning to predict gaze in egocentric video,” in 2013 IEEE International Conference on Computer Vision, 2013, pp. 3216–3223.
|
| 333 |
+
[42] S. Bambach, S. Lee, D. J. Crandall, and C. Yu, “Lending a hand: Detecting hands and recognizing activities in complex egocentric interactions,” in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1949–1957.
|
| 334 |
+
[43] A. Krizhevsky, “Learning multiple layers of features from tiny images,” 2009.
|
| 335 |
+
[44] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results,” http://www.pascalnetwork.org/challenges/VOC/voc2012/workshop/index.html.
|
| 336 |
+
[45] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba, "Scene parsing through ade20k dataset," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5122-5130.
|
| 337 |
+
[46] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, D. Hassabis, C. Clopath, D. Kumaran, and R. Hadsell, “Overcoming catastrophic forgetting in neural networks,” Proc. Natl. Acad. Sci. U. S. A., vol. 114, no. 13, pp. 3521–3526, Mar. 2017.
|
| 338 |
+
[47] F. Zenke, B. Poole, and S. Ganguli, “Continual learning through synaptic intelligence,” in Proceedings of the 34th International Conference on Machine Learning - Volume 70, ser. ICML'17. JMLR.org, 2017, pp. 3987-3995.
|
| 339 |
+
[48] R. Aljundi, F. Babiloni, M. Elhoseiny, M. Rohrbach, and T. Tuytelaars, "Memory aware synapses: Learning what (not) to forget," in Computer Vision - ECCV 2018, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, Eds. Cham: Springer International Publishing, 2018, pp. 144-161.
|
| 340 |
+
[49] X. Liu, M. Masana, L. Herranz, J. Van de Weijer, A. M. López, and A. D. Bagdanov, "Rotate your networks: Better weight consolidation and less catastrophic forgetting," in 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2262-2268.
|
| 341 |
+
[50] I. J. Myung, "Tutorial on maximum likelihood estimation," Journal of Mathematical Psychology, vol. 47, no. 1, pp. 90-100, 2003. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0022249602000287
|
| 342 |
+
[51] R. Aljundi, P. Chakravarty, and T. Tuytelaars, “Expert gate: Lifelong learning with a network of experts,” in 2017 IEEE Conference on
|
| 343 |
+
|
| 344 |
+
Computer Vision and Pattern Recognition (CVPR), 2017, pp. 7120-7129.
|
| 345 |
+
[52] D. Abati, J. Tomczak, T. Blankevoort, S. Calderara, R. Cucchiara, and B. E. Bejnordi, "Conditional channel gated networks for task-aware continual learning," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 3930-3939.
|
| 346 |
+
[53] M. S. Ryoo and L. Matthies, "First-person activity recognition: What are they doing to me?" in Proceedings of the IEEE conference on computer vision and pattern recognition, 2013, pp. 2730-2737.
|
| 347 |
+
[54] A. Fathi, Y. Li, and J. M. Rehg, “Learning to recognize daily actions using gaze,” in European Conference on Computer Vision. Springer, 2012, pp. 314–327.
|
| 348 |
+
[55] A. Reiss and D. Stricker, "Introducing a new benchmarked dataset for activity monitoring," in 2012 16th international symposium on wearable computers. IEEE, 2012, pp. 108-109.
|
| 349 |
+
[56] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in International conference on machine learning. PMLR, 2015, pp. 448-456.
|
| 350 |
+
[57] F. J. Ordóñez and D. Roggen, "Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition," Sensors, vol. 16, no. 1, p. 115, 2016.
|
| 351 |
+
[58] N. Qian, “On the momentum term in gradient descent learning algorithms,” Neural networks, vol. 12, no. 1, pp. 145–151, 1999.
|
| 352 |
+
[59] G. Hinton, N. Srivastava, and K. Swersky, “Neural networks for machine learning lecture 6a overview of mini-batch gradient descent,” Cited on, vol. 14, no. 8, p. 2, 2012.
|
| 353 |
+
[60] D.-W. Zhou, F.-Y. Wang, H.-J. Ye, and D.-C. Zhan, "Pycil: A python toolbox for class-incremental learning," arXiv preprint arXiv:2112.12533, 2021.
|
| 354 |
+
[61] A. Chaudhry, P. K. Dokania, T. Ajanthan, and P. H. Torr, "Riemannian walk for incremental learning: Understanding forgetting and intransigence," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 532-547.
|
2301.10xxx/2301.10931/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c30f8bc222b4f54abb45f666bb49fc8e3b09d1f776ecc9e9616478bad6ec419e
|
| 3 |
+
size 873458
|
2301.10xxx/2301.10931/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10937/0902a8dd-f087-4ad6-b1fd-650f398952de_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10937/0902a8dd-f087-4ad6-b1fd-650f398952de_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10937/0902a8dd-f087-4ad6-b1fd-650f398952de_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:bab1fac4ab351351a47b08d3230fcd460badfac4e0d6a40e30643c439a56bb60
|
| 3 |
+
size 10804571
|
2301.10xxx/2301.10937/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10937/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f8d24cae398d87869fd053fe78c28f5affc5bfa3fb8f5eed0b7ebe5290960042
|
| 3 |
+
size 1232646
|
2301.10xxx/2301.10937/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10938/e2b2cbfc-a0df-462f-9845-caeaa831fe88_content_list.json
ADDED
|
@@ -0,0 +1,1532 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Compact Transformer Tracker with Correlative Masked Modeling",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
163,
|
| 8 |
+
119,
|
| 9 |
+
833,
|
| 10 |
+
142
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Zikai Song $^{1}$ , Run Luo $^{1}$ , Junqing Yu $^{1*}$ , Yi-Ping Phoebe Chen $^{2}$ , Wei Yang $^{1*}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
192,
|
| 19 |
+
157,
|
| 20 |
+
807,
|
| 21 |
+
178
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "<sup>1</sup>Huazhong University of Science and Technology, China",
|
| 28 |
+
"bbox": [
|
| 29 |
+
310,
|
| 30 |
+
181,
|
| 31 |
+
686,
|
| 32 |
+
196
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "$^{2}$ La Trobe University, Australia",
|
| 39 |
+
"bbox": [
|
| 40 |
+
393,
|
| 41 |
+
196,
|
| 42 |
+
602,
|
| 43 |
+
210
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "{skyesong, lr_8823, yjqing, weiyangcs}@hust.edu.cn, phoebe.chen@latrobe.edu.au",
|
| 50 |
+
"bbox": [
|
| 51 |
+
223,
|
| 52 |
+
210,
|
| 53 |
+
772,
|
| 54 |
+
226
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "Abstract",
|
| 61 |
+
"text_level": 1,
|
| 62 |
+
"bbox": [
|
| 63 |
+
248,
|
| 64 |
+
273,
|
| 65 |
+
313,
|
| 66 |
+
286
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "Transformer framework has been showing superior performances in visual object tracking for its great strength in information aggregation across the template and search image with the well-known attention mechanism. Most recent advances focus on exploring attention mechanism variants for better information aggregation. We find these schemes are equivalent to or even just a subset of the basic self-attention mechanism. In this paper, we prove that the vanilla self-attention structure is sufficient for information aggregation, and structural adaption is unnecessary. The key is not the attention structure, but how to extract the discriminative feature for tracking and enhance the communication between the target and search image. Based on this finding, we adopt the basic vision transformer (ViT) architecture as our main tracker and concatenate the template and search image for feature embedding. To guide the encoder to capture the invariant feature for tracking, we attach a lightweight correlative masked decoder which reconstructs the original template and search image from the corresponding masked tokens. The correlative masked decoder serves as a plugin for the compact transform tracker and is skipped in inference. Our compact tracker uses the most simple structure which only consists of a ViT backbone and a box head, and can run at 40 fps. Extensive experiments show the proposed compact transform tracker outperforms existing approaches, including advanced attention variants, and demonstrates the sufficiency of self-attention in tracking tasks. Our method achieves state-of-the-art performance on five challenging datasets, along with the VOT2020, UAV123, LaSOT, TrackingNet, and GOT-10k benchmarks. Our project is available at https://github.com/HUSTDML/CTTrack.",
|
| 73 |
+
"bbox": [
|
| 74 |
+
98,
|
| 75 |
+
297,
|
| 76 |
+
464,
|
| 77 |
+
686
|
| 78 |
+
],
|
| 79 |
+
"page_idx": 0
|
| 80 |
+
},
|
| 81 |
+
{
|
| 82 |
+
"type": "text",
|
| 83 |
+
"text": "1 Introduction",
|
| 84 |
+
"text_level": 1,
|
| 85 |
+
"bbox": [
|
| 86 |
+
210,
|
| 87 |
+
709,
|
| 88 |
+
351,
|
| 89 |
+
724
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "Visual Object Tracking is one of the fundamental tasks in computer vision with applications ranging from human-computer interaction, surveillance, traffic flow monitoring and etc. It aims to estimate the location, denoted as a bounding box, of an arbitrary target object throughout the subsequent video sequence. Deep Learning based trackers have achieved great success due to their strong representation ability. Trackers (Bertinetto et al. 2016; Nam and Han 2016;",
|
| 96 |
+
"bbox": [
|
| 97 |
+
81,
|
| 98 |
+
729,
|
| 99 |
+
478,
|
| 100 |
+
840
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "image",
|
| 106 |
+
"img_path": "images/05eb647bb79588888f150a3918bab97a58a7119c00c03eceec80ef31e13b683d.jpg",
|
| 107 |
+
"image_caption": [
|
| 108 |
+
"Figure 1: Our compact transformer tracker adopts the simple ViT structure (encoder) with the concatenation of the template and search image as input, which essentially exploits the standard self-attention mechanism for information aggregation. The encoded tokens pass through a box head to estimate the result bounding box. And we develop a correlative masked decoder reconstructing the original template and search pixels to enhance the information aggregation, which is skipped during inference."
|
| 109 |
+
],
|
| 110 |
+
"image_footnote": [],
|
| 111 |
+
"bbox": [
|
| 112 |
+
522,
|
| 113 |
+
271,
|
| 114 |
+
908,
|
| 115 |
+
501
|
| 116 |
+
],
|
| 117 |
+
"page_idx": 0
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"type": "text",
|
| 121 |
+
"text": "Li et al. 2018, 2019) derived from Convolutional Neural Networks (CNN) (Krizhevsky, Sutskever, and Hinton 2012; Simonyan and Zisserman 2015; He et al. 2016) produce tracking accuracy that beyond the comparison of traditional approaches, especially the trackers built on Siamese network (Bertinetto et al. 2016; Xu et al. 2020; Li et al. 2018, 2019; Voigtaender et al. 2020; Yu et al. 2020; Guo et al. 2021). The key of Siamese network trackers is to produce the cross-correlation and measure the similarity between the target template and search image. Nowadays, transformer-based trackers (Chen et al. 2021; Wang et al. 2021; Yan et al. 2021; Shen et al. 2022; Song et al. 2022; Cui et al. 2022) have shown great strength by introducing the attention mechanism (Vaswani et al. 2017) to enhance and fuse the features of querying sample and tracked objects. Prevalent transformer trackers (Chen et al. 2021; Yan et al. 2021;",
|
| 122 |
+
"bbox": [
|
| 123 |
+
514,
|
| 124 |
+
666,
|
| 125 |
+
913,
|
| 126 |
+
888
|
| 127 |
+
],
|
| 128 |
+
"page_idx": 0
|
| 129 |
+
},
|
| 130 |
+
{
|
| 131 |
+
"type": "aside_text",
|
| 132 |
+
"text": "arXiv:2301.10938v1 [cs.CV] 26 Jan 2023",
|
| 133 |
+
"bbox": [
|
| 134 |
+
22,
|
| 135 |
+
268,
|
| 136 |
+
57,
|
| 137 |
+
707
|
| 138 |
+
],
|
| 139 |
+
"page_idx": 0
|
| 140 |
+
},
|
| 141 |
+
{
|
| 142 |
+
"type": "page_footnote",
|
| 143 |
+
"text": "*indicates co-corresponding author. Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.",
|
| 144 |
+
"bbox": [
|
| 145 |
+
81,
|
| 146 |
+
849,
|
| 147 |
+
478,
|
| 148 |
+
888
|
| 149 |
+
],
|
| 150 |
+
"page_idx": 0
|
| 151 |
+
},
|
| 152 |
+
{
|
| 153 |
+
"type": "text",
|
| 154 |
+
"text": "Cui et al. 2022) more or less adapt the attention for aggregating information across the template and search image.",
|
| 155 |
+
"bbox": [
|
| 156 |
+
83,
|
| 157 |
+
68,
|
| 158 |
+
477,
|
| 159 |
+
97
|
| 160 |
+
],
|
| 161 |
+
"page_idx": 1
|
| 162 |
+
},
|
| 163 |
+
{
|
| 164 |
+
"type": "text",
|
| 165 |
+
"text": "We find that the advanced variants of attention mechanism in recent research, including mix-attention (Cui et al. 2022) and cross-attention (Yu et al. 2020; Chen et al. 2021), are equivalent or even just a subset of the packed self-attention (i.e., standard self-attention with the concatenation of the template and search image as input). Then the question is which parts of the self-attention mechanism play an important role in visual object tracking? We revisited the transformer tracking framework and find that the tracking results are generated from tokens corresponding to the search image (search tokens), while the tokens corresponding to the template (template tokens) are always discarded in the last. The representational ability of search tokens comes from two parts: the cross-information enhancement from the template tokens and the self-information enhancement from the search tokens themselves. In this paper, we prove that self-information enhancement in multi-image attention plays a greater role than cross-information aggregation, though cross-information aggregation is indispensable in visual object tracking but not greatly beneficial.",
|
| 166 |
+
"bbox": [
|
| 167 |
+
81,
|
| 168 |
+
97,
|
| 169 |
+
478,
|
| 170 |
+
375
|
| 171 |
+
],
|
| 172 |
+
"page_idx": 1
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"type": "text",
|
| 176 |
+
"text": "Driven by this analysis, we propose a compact transformer tracker combined with correlative masked modeling for the cross-information aggregation and self-information reinforcement. As shown in Figure 1, our tracker adopts the basic vision transformer as the main branch and applies a lightweight masked decoder to enhance the implicit representation capability of the packed self-attention. The correlative masked decoder, which is inspired by Masked Image Modeling (He et al. 2022; Xie et al. 2022), reconstructs the both original template and search pixels from the corresponding masked tokens, to guide the encoder to capture the invariant feature for tracking. In addition, our decoder can be plugged into other transformer trackers, which can effectively improve the tracking performance without compromising speed. Applying our correlative masked modeling strategy to the compact transformer tracker can improve the AUC from $64.0\\%$ to $65.8\\%$ on the LaSOT (Fan et al. 2019) dataset. Extensive comparison experiments on 5 challenging datasets including VOT2020 (Kristan et al. 2020), UAV123 (Mueller, Smith, and Ghanem 2016), LaSOT, GOT-10k (Huang, Zhao, and Huang 2019), and TrackingNet (Muller et al. 2018) exhibits the state-of-the-art performance, which further evidence the correctness of our analysis regarding the self-attention in visual tracking.",
|
| 177 |
+
"bbox": [
|
| 178 |
+
81,
|
| 179 |
+
376,
|
| 180 |
+
480,
|
| 181 |
+
708
|
| 182 |
+
],
|
| 183 |
+
"page_idx": 1
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"type": "text",
|
| 187 |
+
"text": "To summarize, our main contributions include:",
|
| 188 |
+
"bbox": [
|
| 189 |
+
99,
|
| 190 |
+
709,
|
| 191 |
+
410,
|
| 192 |
+
723
|
| 193 |
+
],
|
| 194 |
+
"page_idx": 1
|
| 195 |
+
},
|
| 196 |
+
{
|
| 197 |
+
"type": "list",
|
| 198 |
+
"sub_type": "text",
|
| 199 |
+
"list_items": [
|
| 200 |
+
"1. We present a unified analyzing method for the attention mechanism and find that the advanced variants of the attention mechanism are equivalent or even just a subset of the self-attention. We also prove that self-information enhancement in multi-image attention plays a greater role than cross-information aggregation.",
|
| 201 |
+
"2. We develop a compact transformer tracker with a correlative masked decoder, which has a very simple structure and achieves state-of-the-art accuracy at a high Frames-Per-Seconds (fps) tracking speed. The decoder reconstructs the original template and search image from the"
|
| 202 |
+
],
|
| 203 |
+
"bbox": [
|
| 204 |
+
84,
|
| 205 |
+
729,
|
| 206 |
+
480,
|
| 207 |
+
891
|
| 208 |
+
],
|
| 209 |
+
"page_idx": 1
|
| 210 |
+
},
|
| 211 |
+
{
|
| 212 |
+
"type": "text",
|
| 213 |
+
"text": "corresponding masked tokens and serves as a training plugin for the tracker. The experiment demonstrates that our analysis regarding self-attention is correct.",
|
| 214 |
+
"bbox": [
|
| 215 |
+
537,
|
| 216 |
+
69,
|
| 217 |
+
911,
|
| 218 |
+
111
|
| 219 |
+
],
|
| 220 |
+
"page_idx": 1
|
| 221 |
+
},
|
| 222 |
+
{
|
| 223 |
+
"type": "text",
|
| 224 |
+
"text": "2 Related Work",
|
| 225 |
+
"text_level": 1,
|
| 226 |
+
"bbox": [
|
| 227 |
+
638,
|
| 228 |
+
127,
|
| 229 |
+
790,
|
| 230 |
+
142
|
| 231 |
+
],
|
| 232 |
+
"page_idx": 1
|
| 233 |
+
},
|
| 234 |
+
{
|
| 235 |
+
"type": "text",
|
| 236 |
+
"text": "Traditional trackers. Traditional single object tracking algorithms can be roughly summarized as Correlation Filter based trackers (CF), Deep Network based trackers (DLN). CF-based trackers(Bolme et al. 2010; Henriques et al. 2015; Danelljan et al. 2016, 2017, 2019; Bhat et al. 2019) exploit the convolution theorem and learn a filter in the Fourier domain that maps known target images to the desired output. DLN-based trackers refer to algorithms employing deep neural networks for the tracking process. Earlier approaches (Nam and Han 2016; Pu et al. 2018) treat the tracking task as a classification problem and exploit deep features for locating the target. Shortly afterwards more trackers adopt the Siamese network (Bertinetto et al. 2016; Li et al. 2018, 2019) for its effectiveness in measuring similarity. The Siamese network consists of two branches, one operates on the template and the other for the search area.",
|
| 237 |
+
"bbox": [
|
| 238 |
+
514,
|
| 239 |
+
151,
|
| 240 |
+
911,
|
| 241 |
+
372
|
| 242 |
+
],
|
| 243 |
+
"page_idx": 1
|
| 244 |
+
},
|
| 245 |
+
{
|
| 246 |
+
"type": "text",
|
| 247 |
+
"text": "Above all, these methods mainly consist of a backbone which extracts the features of search image and template separately, a similarity measuring module, and heads to predict the location and bounding box. Compared to our framework, traditional trackers have too many modules and a very complex design, we simply adapt a ViT backbone with a box head to get better tracking results.",
|
| 248 |
+
"bbox": [
|
| 249 |
+
514,
|
| 250 |
+
373,
|
| 251 |
+
913,
|
| 252 |
+
470
|
| 253 |
+
],
|
| 254 |
+
"page_idx": 1
|
| 255 |
+
},
|
| 256 |
+
{
|
| 257 |
+
"type": "text",
|
| 258 |
+
"text": "Transformer trackers. The ViT (Dosovitskiy et al. 2021) first introduces the transformer to image recognition tasks and presents an impressive performance. Ever since, transformer has been widely applied in image classification(Dosovitskiy et al. 2021; Wu et al. 2021; Liu et al. 2021), object detection(Carion et al. 2020; Li et al. 2022), visual object tracking(Yan et al. 2021; Chen et al. 2021; Wang et al. 2021; Song et al. 2022; Shen et al. 2022; Cui et al. 2022) and etc. Transformer-based tracking methods have become the mainstream tracking algorithms nowadays. TransT (Chen et al. 2021) proposes a feature fusion network and employs an attention mechanism to combine the features of the template and search region. STARK (Yan et al. 2021) develops a spatial-temporal architecture based on the encoder-decoder transformer. CSWinTT (Song et al. 2022) proposes a transformer architecture with multi-scale cyclic shifting window attention for visual tracking, elevating the attention from pixel level to window level. MixFormer (Cui et al. 2022) constructs a compact tracking framework and designs a mixed attention module that unifies the process of feature extraction and information matching module.",
|
| 259 |
+
"bbox": [
|
| 260 |
+
514,
|
| 261 |
+
470,
|
| 262 |
+
913,
|
| 263 |
+
762
|
| 264 |
+
],
|
| 265 |
+
"page_idx": 1
|
| 266 |
+
},
|
| 267 |
+
{
|
| 268 |
+
"type": "text",
|
| 269 |
+
"text": "Instead of designing a complex attention mechanism as in the previous tracking approaches, we compare the essential differences of attention variants(such as mix-attention and cross-attention) and find these attention variants are equivalent or even just a subset of the packed self-attention. To verify the capability of self-attention in information aggregation, we design a compact transformer tracker using the most simple pipeline which only consists of a ViT backbone and a box head, without any extra design including separate",
|
| 270 |
+
"bbox": [
|
| 271 |
+
514,
|
| 272 |
+
763,
|
| 273 |
+
913,
|
| 274 |
+
890
|
| 275 |
+
],
|
| 276 |
+
"page_idx": 1
|
| 277 |
+
},
|
| 278 |
+
{
|
| 279 |
+
"type": "text",
|
| 280 |
+
"text": "modules of feature extraction and aggregation, and multi-layer feature aggregation.",
|
| 281 |
+
"bbox": [
|
| 282 |
+
81,
|
| 283 |
+
68,
|
| 284 |
+
478,
|
| 285 |
+
97
|
| 286 |
+
],
|
| 287 |
+
"page_idx": 2
|
| 288 |
+
},
|
| 289 |
+
{
|
| 290 |
+
"type": "text",
|
| 291 |
+
"text": "Masked image modeling (MIM). MIM masks an area of the original images and predicts the missing pixels, which aims to enhance the representation of models. Recently, MIM approaches((Chen et al. 2020; He et al. 2022; Xie et al. 2022; Wei et al. 2022; Bao, Dong, and Wei 2021)) are extended to the modern vision transformers (Dosovitskiy et al. 2021; Liu et al. 2021). iGPT (Chen et al. 2020) first proposes a transformer to predict unknown pixels from a sequence of low-resolution pixels. BEiT (Bao, Dong, and Wei 2021) tokenizes the images via an additional dVAE (Ramesh et al. 2021) network with a block-wise masking strategy. SimMIM (Xie et al. 2022) find that a moderately large masked patch size of the input image for pixel predictions makes a strong pre-text task. MAE (He et al. 2022) develops an asymmetric encoder-decoder architecture, the encoder operates on a small proportion of the visible patches, and the decoder reconstructs the original pixels. MaskFeat (Wei et al. 2022) reconstructs the feature descriptors such as HoG (Dalal and Triggs 2005) instead of pixels.",
|
| 292 |
+
"bbox": [
|
| 293 |
+
81,
|
| 294 |
+
95,
|
| 295 |
+
478,
|
| 296 |
+
359
|
| 297 |
+
],
|
| 298 |
+
"page_idx": 2
|
| 299 |
+
},
|
| 300 |
+
{
|
| 301 |
+
"type": "text",
|
| 302 |
+
"text": "Our approach is inspired by the previous MIM method (Xie et al. 2022; He et al. 2022), but we have to deal with two fundamental problems in the tracking framework: (1) Visual tracking is a downstream vision task that generally does not have the pre-train process to apply the MIM strategy. We develop a masked decoder to leverage the search and the template tokens to predict the original images, which is embedded as an attachment plugin in the training phase to implement an end-to-end model. (2) MIM methods reconstructing the single image do not fit the tracking framework which involves cross-aggregation of multiple images. According to the properties of packed self-attention, we design a self-decoder and a cross-decoder to reconstruct the original template and search image from the corresponding masked tokens. As far as we know, we are the first to artfully introduce the MIM into the visual tracking field to improve the information aggregation capabilities.",
|
| 303 |
+
"bbox": [
|
| 304 |
+
81,
|
| 305 |
+
359,
|
| 306 |
+
480,
|
| 307 |
+
595
|
| 308 |
+
],
|
| 309 |
+
"page_idx": 2
|
| 310 |
+
},
|
| 311 |
+
{
|
| 312 |
+
"type": "text",
|
| 313 |
+
"text": "3 Approach",
|
| 314 |
+
"text_level": 1,
|
| 315 |
+
"bbox": [
|
| 316 |
+
220,
|
| 317 |
+
609,
|
| 318 |
+
341,
|
| 319 |
+
626
|
| 320 |
+
],
|
| 321 |
+
"page_idx": 2
|
| 322 |
+
},
|
| 323 |
+
{
|
| 324 |
+
"type": "text",
|
| 325 |
+
"text": "In this section, we introduce our compact transformer tracker with correlative masked modeling in detail. Before proceeding, we first present a analysis on the key component of transformer tracker, and demonstrate that existing attention variants are equivalent to the packed self-attention.",
|
| 326 |
+
"bbox": [
|
| 327 |
+
81,
|
| 328 |
+
628,
|
| 329 |
+
480,
|
| 330 |
+
700
|
| 331 |
+
],
|
| 332 |
+
"page_idx": 2
|
| 333 |
+
},
|
| 334 |
+
{
|
| 335 |
+
"type": "text",
|
| 336 |
+
"text": "3.1 Revisiting Transformer Tracker",
|
| 337 |
+
"text_level": 1,
|
| 338 |
+
"bbox": [
|
| 339 |
+
81,
|
| 340 |
+
710,
|
| 341 |
+
372,
|
| 342 |
+
726
|
| 343 |
+
],
|
| 344 |
+
"page_idx": 2
|
| 345 |
+
},
|
| 346 |
+
{
|
| 347 |
+
"type": "text",
|
| 348 |
+
"text": "Transformer tracking framework. As described in ViT(Vaswani et al. 2017), the query-key-value attention mechanism is applied with query $\\mathbf{Q}$ , key $\\mathbf{K}$ , and value $\\mathbf{V}$ . The linear weights of $\\mathbf{Q}, \\mathbf{K}, \\mathbf{V}$ are $\\mathbf{W}_Q, \\mathbf{W}_K, \\mathbf{W}_V$ respectively. The attention (Attn) is computed as:",
|
| 349 |
+
"bbox": [
|
| 350 |
+
81,
|
| 351 |
+
729,
|
| 352 |
+
480,
|
| 353 |
+
801
|
| 354 |
+
],
|
| 355 |
+
"page_idx": 2
|
| 356 |
+
},
|
| 357 |
+
{
|
| 358 |
+
"type": "equation",
|
| 359 |
+
"text": "\n$$\n\\operatorname {A t t n} (\\mathbf {X}) = \\operatorname {s o f t m a x} \\left(\\frac {\\mathbf {X} \\mathbf {W} _ {Q} \\cdot \\mathbf {W} _ {K} ^ {T} \\mathbf {X} ^ {T}}{\\sqrt {d _ {k}}}\\right) \\cdot \\mathbf {X} \\mathbf {W} _ {V} \\tag {1}\n$$\n",
|
| 360 |
+
"text_format": "latex",
|
| 361 |
+
"bbox": [
|
| 362 |
+
120,
|
| 363 |
+
818,
|
| 364 |
+
478,
|
| 365 |
+
854
|
| 366 |
+
],
|
| 367 |
+
"page_idx": 2
|
| 368 |
+
},
|
| 369 |
+
{
|
| 370 |
+
"type": "text",
|
| 371 |
+
"text": "where the $\\mathbf{X}$ is the input token and the $d_{k}$ is the dimension of the key. For a clearer description of the post-order steps,",
|
| 372 |
+
"bbox": [
|
| 373 |
+
81,
|
| 374 |
+
859,
|
| 375 |
+
480,
|
| 376 |
+
890
|
| 377 |
+
],
|
| 378 |
+
"page_idx": 2
|
| 379 |
+
},
|
| 380 |
+
{
|
| 381 |
+
"type": "image",
|
| 382 |
+
"img_path": "images/d2333e6bc90d363dbb1a160af7aaf0e96363e4f41130c364625e6efbff830779.jpg",
|
| 383 |
+
"image_caption": [
|
| 384 |
+
"Figure 2: Information streams in the attention mechanism. The four information streams of Q-K-V are corresponding to the four parts in the attention map. Variants of attention can be uniformly explained under this analytical approach."
|
| 385 |
+
],
|
| 386 |
+
"image_footnote": [],
|
| 387 |
+
"bbox": [
|
| 388 |
+
537,
|
| 389 |
+
64,
|
| 390 |
+
893,
|
| 391 |
+
455
|
| 392 |
+
],
|
| 393 |
+
"page_idx": 2
|
| 394 |
+
},
|
| 395 |
+
{
|
| 396 |
+
"type": "text",
|
| 397 |
+
"text": "we apply an attention calculation with the inputs of two different tokens, the token $\\mathbf{X}_Q$ computed with query and the token $\\mathbf{X}_K V$ computed with key and value. We modify the attention formula and define the attention map (AMap) as:",
|
| 398 |
+
"bbox": [
|
| 399 |
+
514,
|
| 400 |
+
545,
|
| 401 |
+
913,
|
| 402 |
+
603
|
| 403 |
+
],
|
| 404 |
+
"page_idx": 2
|
| 405 |
+
},
|
| 406 |
+
{
|
| 407 |
+
"type": "equation",
|
| 408 |
+
"text": "\n$$\n\\operatorname {A t t n} \\left(\\mathbf {X} _ {Q}, \\mathbf {X} _ {K V}\\right) = \\operatorname {A M a p} \\left(\\mathbf {X} _ {Q}, \\mathbf {X} _ {K V}\\right) \\cdot \\mathbf {X} _ {K V} \\mathbf {W} _ {V}\n$$\n",
|
| 409 |
+
"text_format": "latex",
|
| 410 |
+
"bbox": [
|
| 411 |
+
542,
|
| 412 |
+
617,
|
| 413 |
+
880,
|
| 414 |
+
635
|
| 415 |
+
],
|
| 416 |
+
"page_idx": 2
|
| 417 |
+
},
|
| 418 |
+
{
|
| 419 |
+
"type": "equation",
|
| 420 |
+
"text": "\n$$\n\\operatorname {A M a p} \\left(\\mathbf {X} _ {Q}, \\mathbf {X} _ {K V}\\right) = \\operatorname {s o f t m a x} \\left(\\frac {\\mathbf {X} _ {Q} \\mathbf {W} _ {Q} \\cdot \\mathbf {W} _ {K} ^ {T} \\mathbf {X} _ {K V} ^ {T}}{\\sqrt {d}}\\right) \\tag {2}\n$$\n",
|
| 421 |
+
"text_format": "latex",
|
| 422 |
+
"bbox": [
|
| 423 |
+
529,
|
| 424 |
+
637,
|
| 425 |
+
911,
|
| 426 |
+
670
|
| 427 |
+
],
|
| 428 |
+
"page_idx": 2
|
| 429 |
+
},
|
| 430 |
+
{
|
| 431 |
+
"type": "text",
|
| 432 |
+
"text": "Our compact transformer tracker consists of two parts: a transformer backbone for information aggregation and a box head for the bounding box estimation. Give the template $z$ in the initial frame and a search image $s$ . We obtain the tokens $X_{t} \\in \\mathbb{R}^{L_{z} \\times d}$ and $X_{s} \\in \\mathbb{R}^{L_{s} \\times d}$ respectively through patch embedding, where $d$ represents the number of channels. The packed self-attention (PSelf-Attn) in the tracking field is defined as the self-attention with the input of the concatenation (Cat) of the template and the search image:",
|
| 433 |
+
"bbox": [
|
| 434 |
+
514,
|
| 435 |
+
675,
|
| 436 |
+
913,
|
| 437 |
+
801
|
| 438 |
+
],
|
| 439 |
+
"page_idx": 2
|
| 440 |
+
},
|
| 441 |
+
{
|
| 442 |
+
"type": "equation",
|
| 443 |
+
"text": "\n$$\n\\operatorname {P S e l f - A t t n} = \\operatorname {A t t n} \\left(C a t \\left(\\mathbf {X} _ {z}, \\mathbf {X} _ {s}\\right), C a t \\left(\\mathbf {X} _ {z}, \\mathbf {X} _ {s}\\right)\\right) \\tag {3}\n$$\n",
|
| 444 |
+
"text_format": "latex",
|
| 445 |
+
"bbox": [
|
| 446 |
+
537,
|
| 447 |
+
816,
|
| 448 |
+
911,
|
| 449 |
+
842
|
| 450 |
+
],
|
| 451 |
+
"page_idx": 2
|
| 452 |
+
},
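Eq. (3) only changes what is fed to the attention: the template and search tokens are concatenated and attend to themselves. A minimal sketch, assuming single-head attention and shared projection matrices as above:

```python
import torch
import torch.nn.functional as F

def pself_attn(Xz, Xs, Wq, Wk, Wv, d):
    """PSelf-Attn (Eq. 3): ordinary self-attention over the packed tokens Cat(X_z, X_s)."""
    X = torch.cat([Xz, Xs], dim=0)                               # (L_z + L_s, d)
    amap = F.softmax((X @ Wq) @ (X @ Wk).T / d ** 0.5, dim=-1)   # full packed attention map
    return amap @ (X @ Wv)
```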
|
| 453 |
+
{
|
| 454 |
+
"type": "text",
|
| 455 |
+
"text": "Analysis on Attention. As shown in Figure 2, we divide the computation of attention mechanism, which involves both template and search image, into four information streams:",
|
| 456 |
+
"bbox": [
|
| 457 |
+
514,
|
| 458 |
+
845,
|
| 459 |
+
913,
|
| 460 |
+
888
|
| 461 |
+
],
|
| 462 |
+
"page_idx": 2
|
| 463 |
+
},
|
| 464 |
+
{
|
| 465 |
+
"type": "image",
|
| 466 |
+
"img_path": "images/785c6ac69c9bcb37e0e17072bd6d15a8a01dad8e6e7ad07a01a97353972c5283.jpg",
|
| 467 |
+
"image_caption": [
|
| 468 |
+
"(a) PSelf-Attn"
|
| 469 |
+
],
|
| 470 |
+
"image_footnote": [],
|
| 471 |
+
"bbox": [
|
| 472 |
+
98,
|
| 473 |
+
79,
|
| 474 |
+
215,
|
| 475 |
+
170
|
| 476 |
+
],
|
| 477 |
+
"page_idx": 3
|
| 478 |
+
},
|
| 479 |
+
{
|
| 480 |
+
"type": "image",
|
| 481 |
+
"img_path": "images/b13c85c48fd6151e077aae756996ca22967882cb9b8f5e57182b13c89227103e.jpg",
|
| 482 |
+
"image_caption": [
|
| 483 |
+
"(b) AMix-Attn"
|
| 484 |
+
],
|
| 485 |
+
"image_footnote": [],
|
| 486 |
+
"bbox": [
|
| 487 |
+
220,
|
| 488 |
+
78,
|
| 489 |
+
338,
|
| 490 |
+
169
|
| 491 |
+
],
|
| 492 |
+
"page_idx": 3
|
| 493 |
+
},
|
| 494 |
+
{
|
| 495 |
+
"type": "image",
|
| 496 |
+
"img_path": "images/a72d6ef65afb825b8feb441024538ea6b9c9a61863876e43be0f732bc5b892b9.jpg",
|
| 497 |
+
"image_caption": [
|
| 498 |
+
"(c) Cross-Attn",
|
| 499 |
+
"Figure 3: Configurations of information stream in attention map of packed self-attention (PSelf-Attn), asymmetric mix-attention(AMix-Attn) and cross-attention (Cross-Attn)."
|
| 500 |
+
],
|
| 501 |
+
"image_footnote": [],
|
| 502 |
+
"bbox": [
|
| 503 |
+
343,
|
| 504 |
+
79,
|
| 505 |
+
460,
|
| 506 |
+
169
|
| 507 |
+
],
|
| 508 |
+
"page_idx": 3
|
| 509 |
+
},
|
| 510 |
+
{
|
| 511 |
+
"type": "list",
|
| 512 |
+
"sub_type": "text",
|
| 513 |
+
"list_items": [
|
| 514 |
+
"(1) self-information enhancement on template;",
|
| 515 |
+
"(2) cross-information aggregation on template;",
|
| 516 |
+
"(3) cross-information aggregation on search image;",
|
| 517 |
+
"(4) self-information enhancement on search image."
|
| 518 |
+
],
|
| 519 |
+
"bbox": [
|
| 520 |
+
93,
|
| 521 |
+
279,
|
| 522 |
+
436,
|
| 523 |
+
344
|
| 524 |
+
],
|
| 525 |
+
"page_idx": 3
|
| 526 |
+
},
|
| 527 |
+
{
|
| 528 |
+
"type": "text",
|
| 529 |
+
"text": "These four information streams are also reflected in the four parts of the attention map (In Figure 2, the index of each part in the attention map corresponds to the information stream). Based on this dissection, we can conveniently compare the differences between existing attention, including packed self-attention, mix-attention, and cross-attention.",
|
| 530 |
+
"bbox": [
|
| 531 |
+
81,
|
| 532 |
+
347,
|
| 533 |
+
478,
|
| 534 |
+
430
|
| 535 |
+
],
|
| 536 |
+
"page_idx": 3
|
| 537 |
+
},
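To see how the four streams map onto the packed attention map, the map can be sliced into four blocks once the first $L_z$ rows and columns are taken to be template tokens; this block-to-stream correspondence is my reading of Figure 2, so treat the sketch as illustrative.

```python
import torch
import torch.nn.functional as F

def split_packed_amap(Xz, Xs, Wq, Wk, d):
    """Slice the packed attention map into the four information streams of Figure 2."""
    X = torch.cat([Xz, Xs], dim=0)
    amap = F.softmax((X @ Wq) @ (X @ Wk).T / d ** 0.5, dim=-1)
    Lz = Xz.shape[0]
    return {
        "stream1_template_self":  amap[:Lz, :Lz],   # template queries -> template keys
        "stream2_template_cross": amap[:Lz, Lz:],   # template queries -> search keys
        "stream3_search_cross":   amap[Lz:, :Lz],   # search queries   -> template keys
        "stream4_search_self":    amap[Lz:, Lz:],   # search queries   -> search keys
    }
```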
|
| 538 |
+
{
|
| 539 |
+
"type": "text",
|
| 540 |
+
"text": "The PSelf-Attn and the mix-attention(Cui et al. 2022) are essentially equivalent, the mix-attention is calculated as:",
|
| 541 |
+
"bbox": [
|
| 542 |
+
83,
|
| 543 |
+
431,
|
| 544 |
+
478,
|
| 545 |
+
459
|
| 546 |
+
],
|
| 547 |
+
"page_idx": 3
|
| 548 |
+
},
|
| 549 |
+
{
|
| 550 |
+
"type": "equation",
|
| 551 |
+
"text": "\n$$\n\\text {P S e l f - A t t n} = = \\text {M i x - A t t n} =\n$$\n",
|
| 552 |
+
"text_format": "latex",
|
| 553 |
+
"bbox": [
|
| 554 |
+
184,
|
| 555 |
+
465,
|
| 556 |
+
374,
|
| 557 |
+
478
|
| 558 |
+
],
|
| 559 |
+
"page_idx": 3
|
| 560 |
+
},
|
| 561 |
+
{
|
| 562 |
+
"type": "equation",
|
| 563 |
+
"text": "\n$$\n\\operatorname {C a t} \\left(\\operatorname {A M a p} \\left(\\mathbf {X} _ {z}, C a t \\left(\\mathbf {X} _ {z}, \\mathbf {X} _ {s}\\right)\\right), \\operatorname {A M a p} \\left(\\mathbf {X} _ {s}, C a t \\left(\\mathbf {X} _ {z}, \\mathbf {X} _ {s}\\right)\\right)\\right) \\tag {4}\n$$\n",
|
| 564 |
+
"text_format": "latex",
|
| 565 |
+
"bbox": [
|
| 566 |
+
83,
|
| 567 |
+
484,
|
| 568 |
+
485,
|
| 569 |
+
522
|
| 570 |
+
],
|
| 571 |
+
"page_idx": 3
|
| 572 |
+
},
|
| 573 |
+
{
|
| 574 |
+
"type": "text",
|
| 575 |
+
"text": "which is the same as Eqn. 3, and they include all four information streams (the attention map is shown as Figure 3a).",
|
| 576 |
+
"bbox": [
|
| 577 |
+
81,
|
| 578 |
+
527,
|
| 579 |
+
477,
|
| 580 |
+
556
|
| 581 |
+
],
|
| 582 |
+
"page_idx": 3
|
| 583 |
+
},
|
| 584 |
+
{
|
| 585 |
+
"type": "text",
|
| 586 |
+
"text": "By the same analysis, the asymmetric mix-attention (AMix-Attn) contains three information streams (#1, #3, #4 info stream), which is shown in the Figure 3b and is calculated as follows:",
|
| 587 |
+
"bbox": [
|
| 588 |
+
81,
|
| 589 |
+
556,
|
| 590 |
+
478,
|
| 591 |
+
611
|
| 592 |
+
],
|
| 593 |
+
"page_idx": 3
|
| 594 |
+
},
|
| 595 |
+
{
|
| 596 |
+
"type": "equation",
|
| 597 |
+
"text": "\n$$\n\\mathrm {A M i x - A t t n} =\n$$\n",
|
| 598 |
+
"text_format": "latex",
|
| 599 |
+
"bbox": [
|
| 600 |
+
235,
|
| 601 |
+
616,
|
| 602 |
+
331,
|
| 603 |
+
628
|
| 604 |
+
],
|
| 605 |
+
"page_idx": 3
|
| 606 |
+
},
|
| 607 |
+
{
|
| 608 |
+
"type": "equation",
|
| 609 |
+
"text": "\n$$\n\\operatorname {C a t} \\left(\\operatorname {A M a p} \\left(\\mathbf {X} _ {z}, \\mathbf {X} _ {z}\\right), \\operatorname {A M a p} \\left(\\mathbf {X} _ {s}, \\operatorname {C a t} \\left(\\mathbf {X} _ {z}, \\mathbf {X} _ {s}\\right)\\right)\\right) \\tag {5}\n$$\n",
|
| 610 |
+
"text_format": "latex",
|
| 611 |
+
"bbox": [
|
| 612 |
+
102,
|
| 613 |
+
631,
|
| 614 |
+
477,
|
| 615 |
+
660
|
| 616 |
+
],
|
| 617 |
+
"page_idx": 3
|
| 618 |
+
},
|
| 619 |
+
{
|
| 620 |
+
"type": "text",
|
| 621 |
+
"text": "The cross-attention contains two information streams (#2,#3 info stream) for cross information aggregation, which is shown in the Figure 3c and is calculated as follows:",
|
| 622 |
+
"bbox": [
|
| 623 |
+
81,
|
| 624 |
+
664,
|
| 625 |
+
478,
|
| 626 |
+
705
|
| 627 |
+
],
|
| 628 |
+
"page_idx": 3
|
| 629 |
+
},
|
| 630 |
+
{
|
| 631 |
+
"type": "equation",
|
| 632 |
+
"text": "\n$$\n\\operatorname {C r o s s - A t t n} = \\operatorname {C a t} \\left(\\operatorname {A M a p} \\left(\\mathbf {X} _ {z}, \\mathbf {X} _ {s}\\right), \\operatorname {A M a p} \\left(\\mathbf {X} _ {s}, \\mathbf {X} _ {z}\\right)\\right) \\tag {6}\n$$\n",
|
| 633 |
+
"text_format": "latex",
|
| 634 |
+
"bbox": [
|
| 635 |
+
99,
|
| 636 |
+
713,
|
| 637 |
+
477,
|
| 638 |
+
750
|
| 639 |
+
],
|
| 640 |
+
"page_idx": 3
|
| 641 |
+
},
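Eqs. (4)-(6) differ only in which key/value set each group of queries attends to. Here is a sketch that builds the three attention-map variants from a shared `amap` helper; the function names and the decision to return the AMix/Cross blocks separately are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

def amap(Xq, Xkv, Wq, Wk, d):
    return F.softmax((Xq @ Wq) @ (Xkv @ Wk).T / d ** 0.5, dim=-1)

def mix_attn_map(Xz, Xs, Wq, Wk, d):
    """Eq. (4): both template and search queries attend to the packed keys Cat(z, s)."""
    X = torch.cat([Xz, Xs], dim=0)
    return torch.cat([amap(Xz, X, Wq, Wk, d), amap(Xs, X, Wq, Wk, d)], dim=0)

def amix_attn_map(Xz, Xs, Wq, Wk, d):
    """Eq. (5): template queries attend only to template keys (streams #1, #3, #4 remain)."""
    X = torch.cat([Xz, Xs], dim=0)
    # the two maps have different widths (L_z vs. L_z + L_s), so they are returned
    # separately rather than packed into one square matrix
    return amap(Xz, Xz, Wq, Wk, d), amap(Xs, X, Wq, Wk, d)

def cross_attn_map(Xz, Xs, Wq, Wk, d):
    """Eq. (6): only the two cross streams (#2, #3) remain."""
    return amap(Xz, Xs, Wq, Wk, d), amap(Xs, Xz, Wq, Wk, d)
```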
|
| 642 |
+
{
|
| 643 |
+
"type": "text",
|
| 644 |
+
"text": "In order to fully verify the importance of each part of packed attention, it is necessary to evaluate the impact of each information stream individually. The key of visual object tracking is to find the target in the search image, there must be a cross-information aggregation of the search image (#3 info stream). The other information streams can be blocked out to verify their performance.",
|
| 645 |
+
"bbox": [
|
| 646 |
+
81,
|
| 647 |
+
750,
|
| 648 |
+
478,
|
| 649 |
+
847
|
| 650 |
+
],
|
| 651 |
+
"page_idx": 3
|
| 652 |
+
},
|
| 653 |
+
{
|
| 654 |
+
"type": "text",
|
| 655 |
+
"text": "Based on the above idea, we conduct detailed experiments and the result is shown in Table 1. Removing cross-information aggregation of the template (#2 info stream) of",
|
| 656 |
+
"bbox": [
|
| 657 |
+
81,
|
| 658 |
+
845,
|
| 659 |
+
478,
|
| 660 |
+
888
|
| 661 |
+
],
|
| 662 |
+
"page_idx": 3
|
| 663 |
+
},
|
| 664 |
+
{
|
| 665 |
+
"type": "table",
|
| 666 |
+
"img_path": "images/212f993326b1de02fa6674740579cfcfeb57c124445e7d1bb7c4fd14624a57b8.jpg",
|
| 667 |
+
"table_caption": [
|
| 668 |
+
"Table 1: The effectiveness of information streams in the attention mechanism on the LaSOT dataset. The visualized four parts in the attention map (AMap) correspond to the four information streams at the matched location."
|
| 669 |
+
],
|
| 670 |
+
"table_footnote": [],
|
| 671 |
+
"table_body": "<table><tr><td rowspan=\"2\" colspan=\"2\">#AMap</td><td colspan=\"4\">No. Info Stream</td><td rowspan=\"2\">AUC</td><td rowspan=\"2\">Prec</td></tr><tr><td>①</td><td>②</td><td>③</td><td>④</td></tr><tr><td>1</td><td></td><td>√</td><td>√</td><td>√</td><td>√</td><td>61.7</td><td>64.2</td></tr><tr><td>2</td><td></td><td>√</td><td></td><td>√</td><td>√</td><td>64.0</td><td>67.7</td></tr><tr><td>3</td><td></td><td></td><td>√</td><td>√</td><td>√</td><td>60.6</td><td>63.7</td></tr><tr><td>4</td><td></td><td>√</td><td>√</td><td>√</td><td></td><td>58.8</td><td>60.1</td></tr><tr><td>5</td><td></td><td></td><td>√</td><td>√</td><td></td><td>57.9</td><td>58.5</td></tr></table>",
|
| 672 |
+
"bbox": [
|
| 673 |
+
532,
|
| 674 |
+
133,
|
| 675 |
+
893,
|
| 676 |
+
284
|
| 677 |
+
],
|
| 678 |
+
"page_idx": 3
|
| 679 |
+
},
|
| 680 |
+
{
|
| 681 |
+
"type": "text",
|
| 682 |
+
"text": "self-attention can greatly improve tracking performance (the AUC and Prec of Table 1 #2 are better than that of Table 1 #1), and the cross-information aggregation of the template will introduce a lot of noise in template features, which is not recommended in visual tracking. However, removing self-information enhancement (#3 and #4 info stream) of self-attention severely degrades the tracking performance (the AUC and Prec of Table 1 #3 and #4 are worse than that of Table 1 #1). From the results we can conclude that self-information enhancement in multi-image attention plays a greater role than cross-information aggregation, the cross-information aggregation is indispensable in tracking but not greatly beneficial.",
|
| 683 |
+
"bbox": [
|
| 684 |
+
514,
|
| 685 |
+
310,
|
| 686 |
+
913,
|
| 687 |
+
492
|
| 688 |
+
],
|
| 689 |
+
"page_idx": 3
|
| 690 |
+
},
|
| 691 |
+
{
|
| 692 |
+
"type": "text",
|
| 693 |
+
"text": "3.2 Correlative Masked Modeling",
|
| 694 |
+
"text_level": 1,
|
| 695 |
+
"bbox": [
|
| 696 |
+
516,
|
| 697 |
+
506,
|
| 698 |
+
792,
|
| 699 |
+
522
|
| 700 |
+
],
|
| 701 |
+
"page_idx": 3
|
| 702 |
+
},
|
| 703 |
+
{
|
| 704 |
+
"type": "text",
|
| 705 |
+
"text": "According to the above analysis, the best tracking performance can be achieved by adopting three information streams: self-information on the template(#1 info stream), cross-information on the search image (#3 info stream), and self-information on the search image (#4 info stream). These three information streams can be grouped into two categories: two self-information enhancements and one cross-information aggregation. We designed a correlative masked modeling method to enhance the information aggregation of our tracking framework, as shown in Figure 1. The ViT backbone is an encoder, and the correlative masked decoder reconstructs the original image (the template and search image respectively) from randomly masked tokens to enhance the self-information and reconstructs the template image from search tokens to improve cross-information aggregation. In parallel with the masked decoder, the search image tokens go through a box estimation head as in (Yan et al. 2021) to generate the result bounding box.",
|
| 706 |
+
"bbox": [
|
| 707 |
+
514,
|
| 708 |
+
527,
|
| 709 |
+
911,
|
| 710 |
+
776
|
| 711 |
+
],
|
| 712 |
+
"page_idx": 3
|
| 713 |
+
},
|
| 714 |
+
{
|
| 715 |
+
"type": "text",
|
| 716 |
+
"text": "Decoder. The decoders in our framework consist of a self-decoder and a cross-decoder, these two decoders have the same structure but do not share weights, each one is composed of a series of transformer blocks similar to the MAE, and the last layer of the decoder is a linear projection with output channels equal to the number of pixels in a patch. As shown in Figure 4, the decoder takes masked tokens as input and predicts the original image pixels corresponding to",
|
| 717 |
+
"bbox": [
|
| 718 |
+
514,
|
| 719 |
+
777,
|
| 720 |
+
913,
|
| 721 |
+
888
|
| 722 |
+
],
|
| 723 |
+
"page_idx": 3
|
| 724 |
+
},
|
| 725 |
+
{
|
| 726 |
+
"type": "image",
|
| 727 |
+
"img_path": "images/5d6f163d8a00faf35891b37c8e4aa98ae0fa377db52f3501c0b27962c1ab543a.jpg",
|
| 728 |
+
"image_caption": [
|
| 729 |
+
"Figure 4: The correlative masked decoders consists of a self-decoder and a cross-decoder. The self-decoder reconstructs the two original images, template and search image, from its corresponding masked tokens. The cross-decoder reconstructs the template image from search tokens."
|
| 730 |
+
],
|
| 731 |
+
"image_footnote": [],
|
| 732 |
+
"bbox": [
|
| 733 |
+
86,
|
| 734 |
+
65,
|
| 735 |
+
475,
|
| 736 |
+
281
|
| 737 |
+
],
|
| 738 |
+
"page_idx": 4
|
| 739 |
+
},
|
| 740 |
+
{
|
| 741 |
+
"type": "text",
|
| 742 |
+
"text": "the template token and the search image token, where the template tokens are only self-reconstructed to the template image for enhancing the #1 information stream, search tokens are used to crossly reconstruct the template image (for #3 info stream) and self-reconstruct the search image (for #4 info stream).",
|
| 743 |
+
"bbox": [
|
| 744 |
+
81,
|
| 745 |
+
386,
|
| 746 |
+
478,
|
| 747 |
+
469
|
| 748 |
+
],
|
| 749 |
+
"page_idx": 4
|
| 750 |
+
},
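A minimal sketch of a decoder of the kind described here: a stack of transformer blocks followed by a linear projection to patch pixels, instantiated twice with identical structure but separate weights. The depth, width and use of `nn.TransformerEncoderLayer` are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MaskedDecoder(nn.Module):
    """MAE-style decoder: transformer blocks plus a linear head predicting patch pixels."""
    def __init__(self, dim=512, depth=4, heads=8, patch_size=16):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, patch_size * patch_size * 3)   # RGB pixels per patch

    def forward(self, tokens):                  # tokens: (B, L, dim)
        return self.head(self.blocks(tokens))   # (B, L, patch_size*patch_size*3)

# the framework would hold two of these: a self-decoder and a cross-decoder
self_decoder = MaskedDecoder()
cross_decoder = MaskedDecoder()
```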
|
| 751 |
+
{
|
| 752 |
+
"type": "text",
|
| 753 |
+
"text": "Masking and Reconstruction. The encoder embeds the concatenation set of template tokens and search tokens. Then we split the encoded tokens into template tokens and search tokens, crop the search tokens using Precise RoI Pooling(Jiang et al. 2018) to the same size as the template tokens, and sample a subset of them. We randomly sample tokens at a high masking ratio (75%). Our decoder predicts the pixel values for each masked token, and the output of the decoder is reshaped to form a reconstructed image. We use the mean squared error (MSE) between the reconstructed and original images on masked tokens as our loss function.",
|
| 754 |
+
"bbox": [
|
| 755 |
+
81,
|
| 756 |
+
469,
|
| 757 |
+
480,
|
| 758 |
+
623
|
| 759 |
+
],
|
| 760 |
+
"page_idx": 4
|
| 761 |
+
},
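The masking-and-reconstruction step might be sketched as below; the 75% random masking and the MSE restricted to masked tokens follow the description above, while the zero mask placeholder (a real model would learn a mask token) and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def masked_reconstruction_loss(tokens, target_pixels, decoder, mask_ratio=0.75):
    """Randomly mask tokens, decode, and take the MSE only on the masked positions."""
    B, L, D = tokens.shape
    num_mask = int(L * mask_ratio)
    noise = torch.rand(B, L)
    mask_idx = noise.argsort(dim=1)[:, :num_mask]          # indices of the masked tokens
    mask = torch.zeros(B, L, dtype=torch.bool)
    mask.scatter_(1, mask_idx, True)

    mask_token = torch.zeros(D)                             # placeholder; learnable in practice
    masked = torch.where(mask.unsqueeze(-1), mask_token, tokens)

    pred = decoder(masked)                                  # (B, L, pixels_per_patch)
    return F.mse_loss(pred[mask], target_pixels[mask])      # loss only on masked tokens
```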
|
| 762 |
+
{
|
| 763 |
+
"type": "text",
|
| 764 |
+
"text": "3.3 Training and Inference",
|
| 765 |
+
"text_level": 1,
|
| 766 |
+
"bbox": [
|
| 767 |
+
83,
|
| 768 |
+
633,
|
| 769 |
+
305,
|
| 770 |
+
648
|
| 771 |
+
],
|
| 772 |
+
"page_idx": 4
|
| 773 |
+
},
|
| 774 |
+
{
|
| 775 |
+
"type": "text",
|
| 776 |
+
"text": "Our decoder is only used in the training phase, while does not participate in the inference phase, hence it doesn't affect the tracking speed. During the training phase, our tracker takes a triplet input consisting of one search region and two templates similar to STARK(Yan et al. 2021). We randomly sample multiple frames from sequences in the training set, select the first frame and the second frame as templates, and the last frame as the search region. In the target localization training, we train the whole network except the scoring head in an end-to-end manner with the combination of $L1$ Loss, generalized IoU loss (Rezatofighi et al. 2019), and decoder loss $L_{dec}$ . The full loss function is defined as follows:",
|
| 777 |
+
"bbox": [
|
| 778 |
+
81,
|
| 779 |
+
650,
|
| 780 |
+
478,
|
| 781 |
+
816
|
| 782 |
+
],
|
| 783 |
+
"page_idx": 4
|
| 784 |
+
},
|
| 785 |
+
{
|
| 786 |
+
"type": "equation",
|
| 787 |
+
"text": "\n$$\nL o s s = \\lambda_ {L 1} L _ {1} \\left(B _ {i}, \\hat {B} _ {i}\\right) + \\lambda_ {g} L _ {g} \\left(B _ {i}, \\hat {B} _ {i}\\right) + \\lambda_ {d e c} L _ {d e c} \\tag {7}\n$$\n",
|
| 788 |
+
"text_format": "latex",
|
| 789 |
+
"bbox": [
|
| 790 |
+
94,
|
| 791 |
+
821,
|
| 792 |
+
478,
|
| 793 |
+
840
|
| 794 |
+
],
|
| 795 |
+
"page_idx": 4
|
| 796 |
+
},
|
| 797 |
+
{
|
| 798 |
+
"type": "text",
|
| 799 |
+
"text": "where $\\lambda_{L1} = 5.0$ , $\\lambda_{g} = 2.0$ and $\\lambda_{dec} = 0.3$ are the weighting factors of three losses, $\\hat{B}_i$ is the estimated box of the target and $B_i$ is the ground-truth bounding box. The decoder",
|
| 800 |
+
"bbox": [
|
| 801 |
+
81,
|
| 802 |
+
843,
|
| 803 |
+
480,
|
| 804 |
+
888
|
| 805 |
+
],
|
| 806 |
+
"page_idx": 4
|
| 807 |
+
},
|
| 808 |
+
{
|
| 809 |
+
"type": "text",
|
| 810 |
+
"text": "loss $L_{dec}$ is defined as:",
|
| 811 |
+
"bbox": [
|
| 812 |
+
516,
|
| 813 |
+
68,
|
| 814 |
+
671,
|
| 815 |
+
82
|
| 816 |
+
],
|
| 817 |
+
"page_idx": 4
|
| 818 |
+
},
|
| 819 |
+
{
|
| 820 |
+
"type": "equation",
|
| 821 |
+
"text": "\n$$\nL _ {d e c} = L _ {2} \\left(z, z _ {p}\\right) + L _ {2} \\left(s, s _ {p}\\right) + L _ {2} \\left(z, s _ {p}\\right) \\tag {8}\n$$\n",
|
| 822 |
+
"text_format": "latex",
|
| 823 |
+
"bbox": [
|
| 824 |
+
573,
|
| 825 |
+
92,
|
| 826 |
+
911,
|
| 827 |
+
108
|
| 828 |
+
],
|
| 829 |
+
"page_idx": 4
|
| 830 |
+
},
|
| 831 |
+
{
|
| 832 |
+
"type": "text",
|
| 833 |
+
"text": "where the $L_{2}$ is the MSE loss, $z$ and $s$ represent the original template image and search image, $z_{p}$ and $s_p$ represent the predicting template image and search image respectively.",
|
| 834 |
+
"bbox": [
|
| 835 |
+
514,
|
| 836 |
+
116,
|
| 837 |
+
911,
|
| 838 |
+
159
|
| 839 |
+
],
|
| 840 |
+
"page_idx": 4
|
| 841 |
+
},
|
| 842 |
+
{
|
| 843 |
+
"type": "text",
|
| 844 |
+
"text": "In the inference phase, we use two templates of the same size as the input. One of which is the initial template and fixed, the other is online updated and always set to the latest tracking result with high confidence. We use a score head to control the updating of the online template. Our score head consists of the multilayer perceptron (MLP) that receives a class-token(Dosovitskiy et al. 2021) as input and evaluates the accuracy of current tracking results.",
|
| 845 |
+
"bbox": [
|
| 846 |
+
514,
|
| 847 |
+
159,
|
| 848 |
+
913,
|
| 849 |
+
270
|
| 850 |
+
],
|
| 851 |
+
"page_idx": 4
|
| 852 |
+
},
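The score-gated template update described above might look like the following sketch; the update interval, the confidence threshold and the `crop_fn` callback are illustrative assumptions, not values from the paper.

```python
def maybe_update_online_template(state, frame_idx, class_token, score_head, crop_fn,
                                 interval=200, threshold=0.5):
    """Update the online template only with high-confidence results (assumed interval/threshold)."""
    if frame_idx % interval != 0:
        return state
    confidence = score_head(class_token)          # MLP on the class token, per the text
    if confidence > threshold:
        state["online_template"] = crop_fn()      # latest tracking result becomes the template
    return state
```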
|
| 853 |
+
{
|
| 854 |
+
"type": "text",
|
| 855 |
+
"text": "4 Experiments",
|
| 856 |
+
"text_level": 1,
|
| 857 |
+
"bbox": [
|
| 858 |
+
643,
|
| 859 |
+
284,
|
| 860 |
+
785,
|
| 861 |
+
301
|
| 862 |
+
],
|
| 863 |
+
"page_idx": 4
|
| 864 |
+
},
|
| 865 |
+
{
|
| 866 |
+
"type": "text",
|
| 867 |
+
"text": "4.1 Implementation Details",
|
| 868 |
+
"text_level": 1,
|
| 869 |
+
"bbox": [
|
| 870 |
+
516,
|
| 871 |
+
306,
|
| 872 |
+
741,
|
| 873 |
+
321
|
| 874 |
+
],
|
| 875 |
+
"page_idx": 4
|
| 876 |
+
},
|
| 877 |
+
{
|
| 878 |
+
"type": "text",
|
| 879 |
+
"text": "In order to effectively verify the correctness of our analysis, we design the compact transformer tracker without any other extra attention mechanisms. The only structures remaining are feature extraction and aggregation, and multilayer feature aggregation. The main tracker only consists of a ViT backbone and a box estimation head, we test both ViT-Base and ViT-Large, and the ViT parameters are initialized with MAE (He et al. 2022) pre-trained model. We refer our Compact Transformer tracker as CTTrack-B (the backbone of ViT-Base) and CTTrack-L (the backbone of ViT-Large) in this section.",
|
| 880 |
+
"bbox": [
|
| 881 |
+
514,
|
| 882 |
+
325,
|
| 883 |
+
911,
|
| 884 |
+
478
|
| 885 |
+
],
|
| 886 |
+
"page_idx": 4
|
| 887 |
+
},
|
| 888 |
+
{
|
| 889 |
+
"type": "text",
|
| 890 |
+
"text": "We adopt CoCo(Lin et al. 2014), LaSOT(Fan et al. 2019), GOT-10k(Huang, Zhao, and Huang 2019), and TrackingNet(Muller et al. 2018) as our training dataset except the GOT-10k benchmark. The training samples are directly sampled from the same sequence and we apply common data augmentation operations including brightness jitter and horizontal flip. The size of the input template is $128 \\times 128$ , the search region is $5^2$ times of the target box area and further resized to $320 \\times 320$ . The decoder parameters are initialized with Xavier Uniform. The AdamW optimizer (Loshchilov and Hutter 2018) is employed with initial learning rate (lr) of 1e-4 with the layer-wise decay 0.75, and the lr decreases according to the cosine function with the final decrease factor of 0.1. We adopt a warm-up lr with the 0.2 warm-up factor on the first 5 epochs. We train our model on 4 Nvidia Tesla V100 GPUs for a total of 500 epochs, each epoch uses $6 \\times 10^4$ images. The mini-batch size is set to 128 images with each GPU hosting 32 images. Our approach is implemented in Python 3.7 with PyTorch 1.7.",
|
| 891 |
+
"bbox": [
|
| 892 |
+
514,
|
| 893 |
+
479,
|
| 894 |
+
913,
|
| 895 |
+
743
|
| 896 |
+
],
|
| 897 |
+
"page_idx": 4
|
| 898 |
+
},
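A sketch of the optimization recipe quoted above (AdamW, initial lr 1e-4, layer-wise decay 0.75, cosine decay to a 0.1 factor, 5 warm-up epochs with a 0.2 factor); the parameter-group construction is a common pattern and only approximates the authors' setup.

```python
import math
import torch

def build_optimizer(backbone_layers, head_params, base_lr=1e-4, layer_decay=0.75):
    """AdamW with layer-wise lr decay: earlier backbone layers get smaller learning rates."""
    groups, n = [], len(backbone_layers)
    for i, layer in enumerate(backbone_layers):            # i = 0 is the layer closest to the input
        groups.append({"params": layer.parameters(), "lr": base_lr * layer_decay ** (n - 1 - i)})
    groups.append({"params": head_params, "lr": base_lr})  # box head trained at the full lr
    return torch.optim.AdamW(groups, lr=base_lr)           # weight decay etc. omitted in this sketch

def lr_factor(epoch, total=500, warmup=5, warmup_factor=0.2, final=0.1):
    """Warm-up for the first 5 epochs, then cosine decay to 0.1 of the initial lr."""
    if epoch < warmup:
        return warmup_factor + (1 - warmup_factor) * epoch / warmup
    t = (epoch - warmup) / (total - warmup)
    return final + (1 - final) * 0.5 * (1 + math.cos(math.pi * t))
```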
|
| 899 |
+
{
|
| 900 |
+
"type": "text",
|
| 901 |
+
"text": "4.2 Ablation Study",
|
| 902 |
+
"text_level": 1,
|
| 903 |
+
"bbox": [
|
| 904 |
+
516,
|
| 905 |
+
756,
|
| 906 |
+
678,
|
| 907 |
+
772
|
| 908 |
+
],
|
| 909 |
+
"page_idx": 4
|
| 910 |
+
},
|
| 911 |
+
{
|
| 912 |
+
"type": "text",
|
| 913 |
+
"text": "We ablate our compact transformer tracker on several intriguing properties using the challenging LaSOT dataset and report the Area Under the Curve (AUC) and Precision (Prec) as the validation accuracy.",
|
| 914 |
+
"bbox": [
|
| 915 |
+
514,
|
| 916 |
+
776,
|
| 917 |
+
911,
|
| 918 |
+
833
|
| 919 |
+
],
|
| 920 |
+
"page_idx": 4
|
| 921 |
+
},
|
| 922 |
+
{
|
| 923 |
+
"type": "text",
|
| 924 |
+
"text": "Backbone Comparison. Table 2 shows the comparison of the transformer backbones between the ViT-Base and ViT-Large backbone. The CTTrack-B reaches a higher tracking speed while the CTTrack-L exhibits a better performance.",
|
| 925 |
+
"bbox": [
|
| 926 |
+
514,
|
| 927 |
+
832,
|
| 928 |
+
913,
|
| 929 |
+
888
|
| 930 |
+
],
|
| 931 |
+
"page_idx": 4
|
| 932 |
+
},
|
| 933 |
+
{
|
| 934 |
+
"type": "table",
|
| 935 |
+
"img_path": "images/6bbff9fd1959f56e2ac151c471ef197549b2c2d999be0dc0909fd93a8f611bf2.jpg",
|
| 936 |
+
"table_caption": [
|
| 937 |
+
"Table 2: Model size and speed using different backbones."
|
| 938 |
+
],
|
| 939 |
+
"table_footnote": [],
|
| 940 |
+
"table_body": "<table><tr><td>Methods</td><td>Params(M)</td><td>FLOPs(G)</td><td>Speed(fps)</td></tr><tr><td>CTTrack-B</td><td>93.8</td><td>48.1</td><td>40</td></tr><tr><td>CTTrack-L</td><td>313.9</td><td>163.7</td><td>22</td></tr></table>",
|
| 941 |
+
"bbox": [
|
| 942 |
+
86,
|
| 943 |
+
90,
|
| 944 |
+
470,
|
| 945 |
+
150
|
| 946 |
+
],
|
| 947 |
+
"page_idx": 5
|
| 948 |
+
},
|
| 949 |
+
{
|
| 950 |
+
"type": "text",
|
| 951 |
+
"text": "Reconstruction Streams. Our decoder enforces three types of reconstruction streams as shown in Figure 4. Table 3 exhibits different configurations of reconstruction streams, through varied combinations of search tokens reconstruct search image (s2s), template tokens reconstruct template image (t2t) and search tokens reconstruct template image(s2t). The result is consistent with the conclusion of our previous analysis that self-information enhancement (#5) plays the most important role in transformer tracking, compared to cross-information aggregation(#4). Besides, search image information has more influence than the template information, the s2s (#2) improves performance the most among all streams (#2, #3, #4), from 64.0 to 64.7 in AUC score. After adopting all three reconstruction streams, tracking accuracy improved by an impressive AUC score of $1.8\\%$ , which validates the effectiveness of our masked modeling decoders.",
|
| 952 |
+
"bbox": [
|
| 953 |
+
81,
|
| 954 |
+
176,
|
| 955 |
+
478,
|
| 956 |
+
398
|
| 957 |
+
],
|
| 958 |
+
"page_idx": 5
|
| 959 |
+
},
|
| 960 |
+
{
|
| 961 |
+
"type": "table",
|
| 962 |
+
"img_path": "images/64bd5a70113256112cb9666ab57445f2fb787292d13585c6bcaf369a9eaedda6.jpg",
|
| 963 |
+
"table_caption": [
|
| 964 |
+
"Table 3: Ablation Study for the reconstruction streams. s2s represents search tokens reconstruct search image, t2t denotes template tokens reconstruct template image and s2t means search tokens reconstruct template image."
|
| 965 |
+
],
|
| 966 |
+
"table_footnote": [],
|
| 967 |
+
"table_body": "<table><tr><td rowspan=\"2\">#</td><td colspan=\"3\">Recons Type</td><td rowspan=\"2\">AUC</td><td rowspan=\"2\">Prec</td></tr><tr><td>s2s</td><td>t2t</td><td>s2t</td></tr><tr><td>1</td><td>-</td><td>-</td><td>-</td><td>64.0</td><td>67.7</td></tr><tr><td>2</td><td>✓</td><td>-</td><td>-</td><td>64.7</td><td>69.1</td></tr><tr><td>3</td><td>-</td><td>✓</td><td>-</td><td>64.4</td><td>68.4</td></tr><tr><td>4</td><td>-</td><td>-</td><td>✓</td><td>64.4</td><td>68.6</td></tr><tr><td>5</td><td>✓</td><td>✓</td><td>-</td><td>65.1</td><td>69.9</td></tr><tr><td>6</td><td>✓</td><td>✓</td><td>✓</td><td>65.8</td><td>70.9</td></tr></table>",
|
| 968 |
+
"bbox": [
|
| 969 |
+
116,
|
| 970 |
+
479,
|
| 971 |
+
444,
|
| 972 |
+
608
|
| 973 |
+
],
|
| 974 |
+
"page_idx": 5
|
| 975 |
+
},
|
| 976 |
+
{
|
| 977 |
+
"type": "text",
|
| 978 |
+
"text": "Masking ratio. When we conduct reconstruction streams, we randomly mask the input tokens according to a predefined ratio. Table 4 shows the influence of different masking ratios. We mask the encoded template token and search tokens with a random sampling strategy at different masking rates. Similar to the conclusion obtained by the MAE(He et al. 2022), the optimal ratios are relatively high, and the accuracy increases steadily with the masking ratio growing until reaching $75\\%$ , which produces the best tracking results.",
|
| 979 |
+
"bbox": [
|
| 980 |
+
81,
|
| 981 |
+
622,
|
| 982 |
+
478,
|
| 983 |
+
748
|
| 984 |
+
],
|
| 985 |
+
"page_idx": 5
|
| 986 |
+
},
|
| 987 |
+
{
|
| 988 |
+
"type": "table",
|
| 989 |
+
"img_path": "images/e24eefba5032c662f080fc45286a9130ea58249251066dbd08bbdb90dc387a49.jpg",
|
| 990 |
+
"table_caption": [
|
| 991 |
+
"Table 4: Comparison on masking ratio."
|
| 992 |
+
],
|
| 993 |
+
"table_footnote": [],
|
| 994 |
+
"table_body": "<table><tr><td>Mask Ratio</td><td>25%</td><td>50%</td><td>75%</td><td>90%</td></tr><tr><td>AUC</td><td>64.6</td><td>65.7</td><td>65.8</td><td>64.9</td></tr><tr><td>Prec</td><td>69.0</td><td>70.7</td><td>70.9</td><td>69.5</td></tr></table>",
|
| 995 |
+
"bbox": [
|
| 996 |
+
133,
|
| 997 |
+
787,
|
| 998 |
+
426,
|
| 999 |
+
845
|
| 1000 |
+
],
|
| 1001 |
+
"page_idx": 5
|
| 1002 |
+
},
|
| 1003 |
+
{
|
| 1004 |
+
"type": "text",
|
| 1005 |
+
"text": "Online Template Updating. We evaluate the effect of the online update strategy in our method. The ablation study",
|
| 1006 |
+
"bbox": [
|
| 1007 |
+
81,
|
| 1008 |
+
859,
|
| 1009 |
+
478,
|
| 1010 |
+
888
|
| 1011 |
+
],
|
| 1012 |
+
"page_idx": 5
|
| 1013 |
+
},
|
| 1014 |
+
{
|
| 1015 |
+
"type": "image",
|
| 1016 |
+
"img_path": "images/28c0ac63ea3a42754d3423f83f62a3700a4dbf7cced38ea264900db647a43127.jpg",
|
| 1017 |
+
"image_caption": [
|
| 1018 |
+
"Target"
|
| 1019 |
+
],
|
| 1020 |
+
"image_footnote": [],
|
| 1021 |
+
"bbox": [
|
| 1022 |
+
522,
|
| 1023 |
+
74,
|
| 1024 |
+
576,
|
| 1025 |
+
116
|
| 1026 |
+
],
|
| 1027 |
+
"page_idx": 5
|
| 1028 |
+
},
|
| 1029 |
+
{
|
| 1030 |
+
"type": "image",
|
| 1031 |
+
"img_path": "images/31703abbac56d6e922b52f7d1d723dc162b357f025945d5424ecffd909220296.jpg",
|
| 1032 |
+
"image_caption": [
|
| 1033 |
+
"S-to-S"
|
| 1034 |
+
],
|
| 1035 |
+
"image_footnote": [],
|
| 1036 |
+
"bbox": [
|
| 1037 |
+
581,
|
| 1038 |
+
74,
|
| 1039 |
+
686,
|
| 1040 |
+
114
|
| 1041 |
+
],
|
| 1042 |
+
"page_idx": 5
|
| 1043 |
+
},
|
| 1044 |
+
{
|
| 1045 |
+
"type": "image",
|
| 1046 |
+
"img_path": "images/60f1e7021d47d67a03dd6a879cb5370cf68dac891b450af5f8284009b01aec8e.jpg",
|
| 1047 |
+
"image_caption": [
|
| 1048 |
+
"T-to-T"
|
| 1049 |
+
],
|
| 1050 |
+
"image_footnote": [],
|
| 1051 |
+
"bbox": [
|
| 1052 |
+
692,
|
| 1053 |
+
74,
|
| 1054 |
+
797,
|
| 1055 |
+
114
|
| 1056 |
+
],
|
| 1057 |
+
"page_idx": 5
|
| 1058 |
+
},
|
| 1059 |
+
{
|
| 1060 |
+
"type": "image",
|
| 1061 |
+
"img_path": "images/d2f0b77b520dd7b90514d117753f7ed4759d8c662fc9a05aefe87b3fd159823e.jpg",
|
| 1062 |
+
"image_caption": [
|
| 1063 |
+
"S-to-T"
|
| 1064 |
+
],
|
| 1065 |
+
"image_footnote": [],
|
| 1066 |
+
"bbox": [
|
| 1067 |
+
803,
|
| 1068 |
+
74,
|
| 1069 |
+
906,
|
| 1070 |
+
114
|
| 1071 |
+
],
|
| 1072 |
+
"page_idx": 5
|
| 1073 |
+
},
|
| 1074 |
+
{
|
| 1075 |
+
"type": "image",
|
| 1076 |
+
"img_path": "images/dcfa569a16b54597aa91f67cf8c4683efefa7ea243f9af5ea141a8044b63e703.jpg",
|
| 1077 |
+
"image_caption": [],
|
| 1078 |
+
"image_footnote": [],
|
| 1079 |
+
"bbox": [
|
| 1080 |
+
522,
|
| 1081 |
+
116,
|
| 1082 |
+
576,
|
| 1083 |
+
157
|
| 1084 |
+
],
|
| 1085 |
+
"page_idx": 5
|
| 1086 |
+
},
|
| 1087 |
+
{
|
| 1088 |
+
"type": "image",
|
| 1089 |
+
"img_path": "images/e57613f9306a8cf7da22cc11a738ef05bd4aca98575f0153fb4dfb97a8c4ae3b.jpg",
|
| 1090 |
+
"image_caption": [],
|
| 1091 |
+
"image_footnote": [],
|
| 1092 |
+
"bbox": [
|
| 1093 |
+
581,
|
| 1094 |
+
117,
|
| 1095 |
+
658,
|
| 1096 |
+
157
|
| 1097 |
+
],
|
| 1098 |
+
"page_idx": 5
|
| 1099 |
+
},
|
| 1100 |
+
{
|
| 1101 |
+
"type": "image",
|
| 1102 |
+
"img_path": "images/4d32f3f66ea43806d9c87ee4808bd8f09603588b335ddffa113d97e1303efc6c.jpg",
|
| 1103 |
+
"image_caption": [],
|
| 1104 |
+
"image_footnote": [],
|
| 1105 |
+
"bbox": [
|
| 1106 |
+
663,
|
| 1107 |
+
117,
|
| 1108 |
+
797,
|
| 1109 |
+
157
|
| 1110 |
+
],
|
| 1111 |
+
"page_idx": 5
|
| 1112 |
+
},
|
| 1113 |
+
{
|
| 1114 |
+
"type": "image",
|
| 1115 |
+
"img_path": "images/c235b7df8c66dc18cafc7cc097031b2c2b2f6c4cf7b60433cc35463e21b3f1c0.jpg",
|
| 1116 |
+
"image_caption": [
|
| 1117 |
+
"Figure 5: Visualization of attention map which compares the difference between training with correlative decoder (w) and training without correlative decoder(w/o). S-to-S is self-information enhancement on search image, T-to-T is self-information enhancement on template, S-to-T is cross-information aggregation on search image."
|
| 1118 |
+
],
|
| 1119 |
+
"image_footnote": [],
|
| 1120 |
+
"bbox": [
|
| 1121 |
+
522,
|
| 1122 |
+
159,
|
| 1123 |
+
576,
|
| 1124 |
+
200
|
| 1125 |
+
],
|
| 1126 |
+
"page_idx": 5
|
| 1127 |
+
},
|
| 1128 |
+
{
|
| 1129 |
+
"type": "image",
|
| 1130 |
+
"img_path": "images/73076a59237d00e8d475f0ef5c9a987b74e2b2cac04cf175d0c2b46b36a3695b.jpg",
|
| 1131 |
+
"image_caption": [
|
| 1132 |
+
"w/o W"
|
| 1133 |
+
],
|
| 1134 |
+
"image_footnote": [],
|
| 1135 |
+
"bbox": [
|
| 1136 |
+
581,
|
| 1137 |
+
159,
|
| 1138 |
+
686,
|
| 1139 |
+
200
|
| 1140 |
+
],
|
| 1141 |
+
"page_idx": 5
|
| 1142 |
+
},
|
| 1143 |
+
{
|
| 1144 |
+
"type": "image",
|
| 1145 |
+
"img_path": "images/29776ab48c29fe7c41c210398565ed6d52ab4088d5b29ff744b379bfd61bd392.jpg",
|
| 1146 |
+
"image_caption": [
|
| 1147 |
+
"W/O W"
|
| 1148 |
+
],
|
| 1149 |
+
"image_footnote": [],
|
| 1150 |
+
"bbox": [
|
| 1151 |
+
689,
|
| 1152 |
+
159,
|
| 1153 |
+
797,
|
| 1154 |
+
200
|
| 1155 |
+
],
|
| 1156 |
+
"page_idx": 5
|
| 1157 |
+
},
|
| 1158 |
+
{
|
| 1159 |
+
"type": "image",
|
| 1160 |
+
"img_path": "images/d979876b0f8bf6977de438ab92151779b979257efddce1a88cfbd55f98b5c172.jpg",
|
| 1161 |
+
"image_caption": [],
|
| 1162 |
+
"image_footnote": [],
|
| 1163 |
+
"bbox": [
|
| 1164 |
+
802,
|
| 1165 |
+
117,
|
| 1166 |
+
880,
|
| 1167 |
+
157
|
| 1168 |
+
],
|
| 1169 |
+
"page_idx": 5
|
| 1170 |
+
},
|
| 1171 |
+
{
|
| 1172 |
+
"type": "image",
|
| 1173 |
+
"img_path": "images/20184cd27cc8c5caf54381d75e2e0c9c43b379dc11fe298b86b271897d0afdd4.jpg",
|
| 1174 |
+
"image_caption": [],
|
| 1175 |
+
"image_footnote": [],
|
| 1176 |
+
"bbox": [
|
| 1177 |
+
883,
|
| 1178 |
+
117,
|
| 1179 |
+
906,
|
| 1180 |
+
157
|
| 1181 |
+
],
|
| 1182 |
+
"page_idx": 5
|
| 1183 |
+
},
|
| 1184 |
+
{
|
| 1185 |
+
"type": "image",
|
| 1186 |
+
"img_path": "images/e7629fcb7ab6996765e32119b4caeec925446cf6b06db368757d2808507b8016.jpg",
|
| 1187 |
+
"image_caption": [
|
| 1188 |
+
"w/o"
|
| 1189 |
+
],
|
| 1190 |
+
"image_footnote": [],
|
| 1191 |
+
"bbox": [
|
| 1192 |
+
797,
|
| 1193 |
+
159,
|
| 1194 |
+
880,
|
| 1195 |
+
200
|
| 1196 |
+
],
|
| 1197 |
+
"page_idx": 5
|
| 1198 |
+
},
|
| 1199 |
+
{
|
| 1200 |
+
"type": "image",
|
| 1201 |
+
"img_path": "images/bd2d3584d2ebc68114650c2f88ad4023521c7ac8294122c425513ada997c5081.jpg",
|
| 1202 |
+
"image_caption": [],
|
| 1203 |
+
"image_footnote": [],
|
| 1204 |
+
"bbox": [
|
| 1205 |
+
898,
|
| 1206 |
+
159,
|
| 1207 |
+
908,
|
| 1208 |
+
200
|
| 1209 |
+
],
|
| 1210 |
+
"page_idx": 5
|
| 1211 |
+
},
|
| 1212 |
+
{
|
| 1213 |
+
"type": "text",
|
| 1214 |
+
"text": "result is shown in Table 5, #1 represents the performance without template updating. We can see that applying a fixed interval to update the online template (#2) is ineffective as it greatly reduces the quality of template and causes tracking drift. It can be seen in #3, there is a $0.2\\%$ improvement in the AUC score after applying the scoring head to evaluate the accuracy of current tracking results.",
|
| 1215 |
+
"bbox": [
|
| 1216 |
+
514,
|
| 1217 |
+
330,
|
| 1218 |
+
911,
|
| 1219 |
+
429
|
| 1220 |
+
],
|
| 1221 |
+
"page_idx": 5
|
| 1222 |
+
},
|
| 1223 |
+
{
|
| 1224 |
+
"type": "table",
|
| 1225 |
+
"img_path": "images/2c1f63803fb5dda9cf41ecd4906eae2b3449fdc33325b8e54c44ec243444de7a.jpg",
|
| 1226 |
+
"table_caption": [
|
| 1227 |
+
"Table 5: Ablation for the online template updating component. Online denotes updating the template at a fixed update interval. Score represents the online template is only updated with high confident samples."
|
| 1228 |
+
],
|
| 1229 |
+
"table_footnote": [],
|
| 1230 |
+
"table_body": "<table><tr><td></td><td>Online</td><td>Score</td><td>AUC</td><td>Prec</td></tr><tr><td rowspan=\"3\">CTTrack-B</td><td>-</td><td>-</td><td>65.8</td><td>70.9</td></tr><tr><td>✓</td><td>-</td><td>64.9</td><td>69.9</td></tr><tr><td>✓</td><td>✓</td><td>66.0</td><td>71.1</td></tr></table>",
|
| 1231 |
+
"bbox": [
|
| 1232 |
+
555,
|
| 1233 |
+
508,
|
| 1234 |
+
870,
|
| 1235 |
+
580
|
| 1236 |
+
],
|
| 1237 |
+
"page_idx": 5
|
| 1238 |
+
},
|
| 1239 |
+
{
|
| 1240 |
+
"type": "text",
|
| 1241 |
+
"text": "Visualization of attention maps. We visualize attention maps in Figure5, our tracker adopting the correlative decoder has a stronger discriminative ability. The baseline transformer without a reconstruction decoder tends to lose the target position, and the distractors in the background get suppressed with the training by the correlative decoder.",
|
| 1242 |
+
"bbox": [
|
| 1243 |
+
514,
|
| 1244 |
+
594,
|
| 1245 |
+
911,
|
| 1246 |
+
679
|
| 1247 |
+
],
|
| 1248 |
+
"page_idx": 5
|
| 1249 |
+
},
|
| 1250 |
+
{
|
| 1251 |
+
"type": "text",
|
| 1252 |
+
"text": "4.3 Comparison with the SOTA",
|
| 1253 |
+
"text_level": 1,
|
| 1254 |
+
"bbox": [
|
| 1255 |
+
514,
|
| 1256 |
+
689,
|
| 1257 |
+
772,
|
| 1258 |
+
705
|
| 1259 |
+
],
|
| 1260 |
+
"page_idx": 5
|
| 1261 |
+
},
|
| 1262 |
+
{
|
| 1263 |
+
"type": "text",
|
| 1264 |
+
"text": "We compare our compact tracker with the state-of-the-art trackers on UAV123(Mueller, Smith, and Ghanem 2016), LaSOT(Fan et al. 2019), TrackingNet(Muller et al. 2018), GOT-10k(Huang, Zhao, and Huang 2019), and VOT2020(Kristan et al. 2020). For a fairer comparison, here we adopt relative position biases in our ViT backbones, this addition improves AUC by around 1 point.",
|
| 1265 |
+
"bbox": [
|
| 1266 |
+
514,
|
| 1267 |
+
708,
|
| 1268 |
+
911,
|
| 1269 |
+
805
|
| 1270 |
+
],
|
| 1271 |
+
"page_idx": 5
|
| 1272 |
+
},
|
| 1273 |
+
{
|
| 1274 |
+
"type": "text",
|
| 1275 |
+
"text": "UAV123 gathers an application-specific collection of 123 sequences. It adopts the AUC and Precision (P) as the evaluation metrics. As shown in Table 1, Our CTTrack-L outperforms previous trackers and exhibits very competitive performance (71.3% AUC) when compared to the previous best-performing tracker CSWinTT (70.5% AUC).",
|
| 1276 |
+
"bbox": [
|
| 1277 |
+
514,
|
| 1278 |
+
805,
|
| 1279 |
+
911,
|
| 1280 |
+
888
|
| 1281 |
+
],
|
| 1282 |
+
"page_idx": 5
|
| 1283 |
+
},
|
| 1284 |
+
{
|
| 1285 |
+
"type": "table",
|
| 1286 |
+
"img_path": "images/5c2c68db02bce1311ed0d3b8d8dc3e113f51cabaa2de5bf2e84f3b752bb243da.jpg",
|
| 1287 |
+
"table_caption": [
|
| 1288 |
+
"Table 6: Comparisons with previous state-of-the-art trackers on four challenge benchmarks. The red, green and blue indicate performances ranked at first, second, and third places. The tracker -GOT denotes only trained on the GOT-10k train split."
|
| 1289 |
+
],
|
| 1290 |
+
"table_footnote": [],
|
| 1291 |
+
"table_body": "<table><tr><td rowspan=\"2\">Methods</td><td colspan=\"2\">UAV123</td><td colspan=\"3\">LaSOT</td><td colspan=\"3\">TrackingNet</td><td colspan=\"3\">GOT-10k</td></tr><tr><td>AUC</td><td>P</td><td>AUC</td><td>PNorm</td><td>P</td><td>AUC</td><td>PNorm</td><td>P</td><td>AO</td><td>SR0.5</td><td>SR0.75</td></tr><tr><td>CTTrack-L</td><td>71.3</td><td>93.3</td><td>69.8</td><td>79.7</td><td>76.2</td><td>84.9</td><td>89.1</td><td>83.5</td><td>75.3</td><td>84.5</td><td>74.0</td></tr><tr><td>CTTrack-B</td><td>68.8</td><td>89.5</td><td>67.8</td><td>77.8</td><td>74.0</td><td>82.5</td><td>87.1</td><td>80.3</td><td>73.5</td><td>83.5</td><td>70.6</td></tr><tr><td>CTTrack-L -GOT</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>72.8</td><td>81.3</td><td>71.5</td></tr><tr><td>CTTrack-B -GOT</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>71.3</td><td>80.7</td><td>70.3</td></tr><tr><td>MixFormer(Cui et al. 2022)</td><td>69.5</td><td>91.0</td><td>70.1</td><td>79.9</td><td>76.3</td><td>83.9</td><td>88.9</td><td>83.1</td><td>70.7</td><td>80.0</td><td>67.8</td></tr><tr><td>CSWinTT(Song et al. 2022)</td><td>70.5</td><td>90.3</td><td>66.2</td><td>75.2</td><td>70.9</td><td>81.9</td><td>86.7</td><td>79.5</td><td>69.4</td><td>78.9</td><td>65.4</td></tr><tr><td>UTT(Shen et al. 2022)</td><td>-</td><td>-</td><td>64.6</td><td>-</td><td>67.2</td><td>79.7</td><td>-</td><td>77.0</td><td>67.2</td><td>76.3</td><td>60.5</td></tr><tr><td>STARK(Yan et al. 2021)</td><td>-</td><td>-</td><td>67.1</td><td>77.0</td><td>-</td><td>82.0</td><td>86.9</td><td>-</td><td>68.8</td><td>78.1</td><td>64.1</td></tr><tr><td>TransT(Chen et al. 2021)</td><td>68.1</td><td>87.6</td><td>64.9</td><td>73.8</td><td>69.0</td><td>81.4</td><td>86.7</td><td>80.3</td><td>67.1</td><td>76.8</td><td>60.9</td></tr><tr><td>TrDiMP(Wang et al. 2021)</td><td>67.0</td><td>87.6</td><td>64.0</td><td>73.2</td><td>66.6</td><td>78.4</td><td>83.3</td><td>73.1</td><td>68.8</td><td>80.5</td><td>59.7</td></tr><tr><td>STMTrack(Fu et al. 2021)</td><td>64.7</td><td>-</td><td>60.6</td><td>69.3</td><td>63.3</td><td>80.3</td><td>85.1</td><td>76.7</td><td>64.2</td><td>73.7</td><td>57.5</td></tr><tr><td>AutoMatch(Zhang et al. 2021)</td><td>64.4</td><td>83.8</td><td>58.2</td><td>67.5</td><td>59.9</td><td>76.0</td><td>82.4</td><td>72.5</td><td>65.2</td><td>76.6</td><td>54.3</td></tr><tr><td>SiamGAT(Guo et al. 2021)</td><td>64.6</td><td>84.3</td><td>53.9</td><td>63.3</td><td>53.0</td><td>-</td><td>-</td><td>-</td><td>62.7</td><td>74.3</td><td>48.8</td></tr><tr><td>KYS(Bhat et al. 2020)</td><td>-</td><td>-</td><td>55.4</td><td>63.3</td><td>55.8</td><td>74.0</td><td>80.0</td><td>68.8</td><td>63.6</td><td>75.1</td><td>51.5</td></tr><tr><td>MAML(Wang et al. 2020)</td><td>-</td><td>-</td><td>52.3</td><td>-</td><td>53.1</td><td>75.7</td><td>82.2</td><td>72.5</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SiamAttn(Yu et al. 2020)</td><td>65.0</td><td>84.5</td><td>56.0</td><td>64.8</td><td>-</td><td>75.2</td><td>81.7</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SiamFC++(Xu et al. 2020)</td><td>61.8</td><td>80.4</td><td>54.4</td><td>62.3</td><td>54.7</td><td>75.4</td><td>80.0</td><td>70.5</td><td>59.5</td><td>69.5</td><td>47.9</td></tr><tr><td>SiamRPN++(Li et al. 2019)</td><td>64.2</td><td>84.0</td><td>49.6</td><td>56.9</td><td>49.1</td><td>73.3</td><td>80.0</td><td>69.4</td><td>51.7</td><td>61.6</td><td>32.5</td></tr><tr><td>DiMP(Bhat et al. 
2019)</td><td>64.2</td><td>84.9</td><td>57.7</td><td>66.4</td><td>57.9</td><td>74.0</td><td>80.1</td><td>68.7</td><td>61.1</td><td>71.7</td><td>49.2</td></tr><tr><td>ATOM(Danelljan et al. 2019)</td><td>61.7</td><td>82.7</td><td>51.5</td><td>57.6</td><td>50.5</td><td>70.3</td><td>77.1</td><td>64.8</td><td>55.6</td><td>63.4</td><td>40.2</td></tr></table>",
|
| 1292 |
+
"bbox": [
|
| 1293 |
+
89,
|
| 1294 |
+
104,
|
| 1295 |
+
903,
|
| 1296 |
+
400
|
| 1297 |
+
],
|
| 1298 |
+
"page_idx": 6
|
| 1299 |
+
},
|
| 1300 |
+
{
|
| 1301 |
+
"type": "table",
|
| 1302 |
+
"img_path": "images/89add33d2f38c4bd64db3aec9d1e6b65da915ca6ab68866265227cce237c940f.jpg",
|
| 1303 |
+
"table_caption": [
|
| 1304 |
+
"Table 7: Comparisons on VOT2020, where trackers only predict bounding boxes rather than masks."
|
| 1305 |
+
],
|
| 1306 |
+
"table_footnote": [],
|
| 1307 |
+
"table_body": "<table><tr><td>Methods</td><td>EAO↑</td><td>Accuracy↑</td><td>Robustness↑</td></tr><tr><td>SiamFC</td><td>0.179</td><td>0.418</td><td>0.502</td></tr><tr><td>ATOM</td><td>0.271</td><td>0.462</td><td>0.734</td></tr><tr><td>DiMP</td><td>0.274</td><td>0.457</td><td>0.740</td></tr><tr><td>UPDT</td><td>0.278</td><td>0.465</td><td>0.755</td></tr><tr><td>TransT</td><td>0.293</td><td>0.477</td><td>0.754</td></tr><tr><td>CSWinTT</td><td>0.304</td><td>0.480</td><td>0.787</td></tr><tr><td>CTTrack-L</td><td>0.287</td><td>0.453</td><td>0.787</td></tr></table>",
|
| 1308 |
+
"bbox": [
|
| 1309 |
+
106,
|
| 1310 |
+
462,
|
| 1311 |
+
452,
|
| 1312 |
+
595
|
| 1313 |
+
],
|
| 1314 |
+
"page_idx": 6
|
| 1315 |
+
},
|
| 1316 |
+
{
|
| 1317 |
+
"type": "text",
|
| 1318 |
+
"text": "LaSOT is a long-term dataset including 1400 sequences and distributed over 14 attributes, the testing subset of LaSOT contains 280 sequences. Methods are ranked by the AUC, P, and Normalized Precision $(\\mathbb{P}_{Norm})$ . Our CTTrack-L achieves the AUC $(69.8\\%)$ and Prec $(76.2\\%)$ , which is an excellent result that outperforms other methods only except the MixFormer. Our tracker has lower performance than MixFormer on LaSOT because it contains long-term sequences and large variations in content. ViT backbone is a plain and non-hierarchical architecture that maintains feature maps at a certain scale, which may not be able to well handle long-term tracking sequences with scale variations.",
|
| 1319 |
+
"bbox": [
|
| 1320 |
+
81,
|
| 1321 |
+
622,
|
| 1322 |
+
478,
|
| 1323 |
+
789
|
| 1324 |
+
],
|
| 1325 |
+
"page_idx": 6
|
| 1326 |
+
},
|
| 1327 |
+
{
|
| 1328 |
+
"type": "text",
|
| 1329 |
+
"text": "TrackingNet is a large-scale tracking dataset consisting of 511 sequences for testing. The evaluation is performed on the online server. Table 1 shows that CTTrack-L performs better quality and ranks first in AUC score at $84.9\\%$ . The gain is $1.0\\%$ improvement when compared with the previous best results.",
|
| 1330 |
+
"bbox": [
|
| 1331 |
+
81,
|
| 1332 |
+
789,
|
| 1333 |
+
478,
|
| 1334 |
+
873
|
| 1335 |
+
],
|
| 1336 |
+
"page_idx": 6
|
| 1337 |
+
},
|
| 1338 |
+
{
|
| 1339 |
+
"type": "text",
|
| 1340 |
+
"text": "GOT-10k contains over 10k videos for training and 180 for",
|
| 1341 |
+
"bbox": [
|
| 1342 |
+
83,
|
| 1343 |
+
875,
|
| 1344 |
+
478,
|
| 1345 |
+
888
|
| 1346 |
+
],
|
| 1347 |
+
"page_idx": 6
|
| 1348 |
+
},
|
| 1349 |
+
{
|
| 1350 |
+
"type": "text",
|
| 1351 |
+
"text": "testing. It forbids the trackers to use external datasets for training. We follow this protocol by retraining our trackers to only use the GOT10k train split. As in Table 1, MixFormer and CSWinTT provide the best performance, with an AO score of $70.7\\%$ and $69.4\\%$ . Our CTTrack-L has obtained an AO score of $72.8\\%$ , significantly outperforming the best existing tracker by $2.1\\%$ .",
|
| 1352 |
+
"bbox": [
|
| 1353 |
+
514,
|
| 1354 |
+
425,
|
| 1355 |
+
913,
|
| 1356 |
+
521
|
| 1357 |
+
],
|
| 1358 |
+
"page_idx": 6
|
| 1359 |
+
},
|
| 1360 |
+
{
|
| 1361 |
+
"type": "text",
|
| 1362 |
+
"text": "VOT2020 benchmark contains 60 challenging videos. The performance is evaluated using the expected average overlap (EAO), which takes both accuracy (A) and robustness (R). Since our algorithm does not output a segmentation mask, trackers that only predict bounding boxes are selected for comparisons to ensure fairness. It can be seen from Table 7 that our CTTrack-L obtains an EAO of 0.287.",
|
| 1363 |
+
"bbox": [
|
| 1364 |
+
514,
|
| 1365 |
+
521,
|
| 1366 |
+
913,
|
| 1367 |
+
619
|
| 1368 |
+
],
|
| 1369 |
+
"page_idx": 6
|
| 1370 |
+
},
|
| 1371 |
+
{
|
| 1372 |
+
"type": "text",
|
| 1373 |
+
"text": "5 Conclusion",
|
| 1374 |
+
"text_level": 1,
|
| 1375 |
+
"bbox": [
|
| 1376 |
+
650,
|
| 1377 |
+
632,
|
| 1378 |
+
779,
|
| 1379 |
+
648
|
| 1380 |
+
],
|
| 1381 |
+
"page_idx": 6
|
| 1382 |
+
},
|
| 1383 |
+
{
|
| 1384 |
+
"type": "text",
|
| 1385 |
+
"text": "In this work, we analyze the information stream in the attention mechanism in depth. We prove that the vanilla self-attention structure is sufficient for information aggregation, and employ the three information streams of the packed self-attention in the transformer tracking framework. To enhance the information representation, we design the correlative masked decoder consisting of a self-decoder and a cross-decoder to reconstruct the original pixels of both template and search image. Extensive experiments demonstrate the effectiveness of our correlative masked modeling strategy and our compact transformer tracker exhibits impressive performance over previous trackers. In addition, our correlative masked decoder can be plugged into other transformer trackers, which can effectively improve the tracking performance without compromising speed. In the future, we plan to combine the feature pyramid or convolution module for better performance on long-term tracking sequences.",
|
| 1386 |
+
"bbox": [
|
| 1387 |
+
514,
|
| 1388 |
+
652,
|
| 1389 |
+
913,
|
| 1390 |
+
890
|
| 1391 |
+
],
|
| 1392 |
+
"page_idx": 6
|
| 1393 |
+
},
|
| 1394 |
+
{
|
| 1395 |
+
"type": "text",
|
| 1396 |
+
"text": "Acknowledgments",
|
| 1397 |
+
"text_level": 1,
|
| 1398 |
+
"bbox": [
|
| 1399 |
+
202,
|
| 1400 |
+
66,
|
| 1401 |
+
359,
|
| 1402 |
+
83
|
| 1403 |
+
],
|
| 1404 |
+
"page_idx": 7
|
| 1405 |
+
},
|
| 1406 |
+
{
|
| 1407 |
+
"type": "text",
|
| 1408 |
+
"text": "This work is supported by the national key research and development program of China under Grant No.2020YFB1805601, National Natural Science Foundation of China (NSFC No. 62272184), and CCF-Tencent Open Research Fund (CCF-Tencent RAGR20220120). The computation is completed in the HPC Platform of Huazhong University of Science and Technology.",
|
| 1409 |
+
"bbox": [
|
| 1410 |
+
81,
|
| 1411 |
+
85,
|
| 1412 |
+
480,
|
| 1413 |
+
186
|
| 1414 |
+
],
|
| 1415 |
+
"page_idx": 7
|
| 1416 |
+
},
|
| 1417 |
+
{
|
| 1418 |
+
"type": "text",
|
| 1419 |
+
"text": "References",
|
| 1420 |
+
"text_level": 1,
|
| 1421 |
+
"bbox": [
|
| 1422 |
+
233,
|
| 1423 |
+
196,
|
| 1424 |
+
330,
|
| 1425 |
+
213
|
| 1426 |
+
],
|
| 1427 |
+
"page_idx": 7
|
| 1428 |
+
},
|
| 1429 |
+
{
|
| 1430 |
+
"type": "list",
|
| 1431 |
+
"sub_type": "ref_text",
|
| 1432 |
+
"list_items": [
|
| 1433 |
+
"Bao, H.; Dong, L.; and Wei, F. 2021. Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254.",
|
| 1434 |
+
"Bertinetto, L.; Valmadre, J.; Henriques, J. F.; Vedaldi, A.; and Torr, P. H. S. 2016. Fully-Convolutional Siamese Networks for Object Tracking. In Proceedings of the ECCV, 850-865. Springer.",
|
| 1435 |
+
"Bhat, G.; Danelljan, M.; Gool, L. V.; and Timofte, R. 2019. Learning Discriminative Model Prediction for Tracking. In Proceedings of the ICCV, 6182-6191. IEEE.",
|
| 1436 |
+
"Bhat, G.; Danelljan, M.; Van Gool, L.; and Timofte, R. 2020. Know Your Surroundings: Exploiting Scene Information for Object Tracking. In Proceedings of the ECCV. Springer.",
|
| 1437 |
+
"Bolme, D. S.; Beveridge, J. R.; Draper, B. A.; and Lui, Y. M. 2010. Visual object tracking using adaptive correlation filters. In Proceedings of the CVPR, 2544-2550. IEEE.",
|
| 1438 |
+
"Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In ECCV, 213-229. Springer.",
|
| 1439 |
+
"Chen, M.; Radford, A.; Child, R.; Wu, J.; Jun, H.; Luan, D.; and Sutskever, I. 2020. Generative pretraining from pixels. In International conference on machine learning, 1691-1703. PMLR.",
|
| 1440 |
+
"Chen, X.; Yan, B.; Zhu, J.; Wang, D.; Yang, X.; and Lu, H. 2021. Transformer tracking. In Proceedings of the CVPR, 8126-8135.",
|
| 1441 |
+
"Cui, Y.; Jiang, C.; Wang, L.; and Wu, G. 2022. MixFormer: End-to-End Tracking With Iterative Mixed Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13608-13618.",
|
| 1442 |
+
"Dalal, N.; and Triggs, B. 2005. Histograms of oriented gradients for human detection. In 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR'05), volume 1, 886-893. IEEE.",
|
| 1443 |
+
"Danelljan, M.; Bhat, G.; Khan, F. S.; and Felsberg, M. 2019. ATOM: Accurate Tracking by Overlap Maximization. In Proceedings of the CVPR, 4660-4669. IEEE.",
|
| 1444 |
+
"Danelljan, M.; Bhat, G.; Shahbaz Khan, F.; and Felsberg, M. 2017. ECO: Efficient Convolution Operators for Tracking. In Proceedings of the CVPR, 6638-6646. IEEE.",
|
| 1445 |
+
"Danelljan, M.; Robinson, A.; Khan, F. S.; and Felsberg, M. 2016. Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking. In Proceedings of the ECCV, 472-488. Springer.",
|
| 1446 |
+
"Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.;"
|
| 1447 |
+
],
|
| 1448 |
+
"bbox": [
|
| 1449 |
+
84,
|
| 1450 |
+
217,
|
| 1451 |
+
478,
|
| 1452 |
+
888
|
| 1453 |
+
],
|
| 1454 |
+
"page_idx": 7
|
| 1455 |
+
},
|
| 1456 |
+
{
|
| 1457 |
+
"type": "list",
|
| 1458 |
+
"sub_type": "ref_text",
|
| 1459 |
+
"list_items": [
|
| 1460 |
+
"Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR.",
|
| 1461 |
+
"Fan, H.; Lin, L.; Yang, F.; Chu, P.; Deng, G.; Yu, S.; Bai, H.; Xu, Y.; Liao, C.; and Ling, H. 2019. LaSOT: A High-Quality Benchmark for Large-Scale Single Object Tracking. In Proceedings of the CVPR. IEEE.",
|
| 1462 |
+
"Fu, Z.; Liu, Q.; Fu, Z.; and Wang, Y. 2021. Stmtrack: Template-free visual tracking with space-time memory networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13774-13783.",
|
| 1463 |
+
"Guo, D.; Shao, Y.; Cui, Y.; Wang, Z.; Zhang, L.; and Shen, C. 2021. Graph attention tracking. In Proceedings of the CVPR, 9543-9552.",
|
| 1464 |
+
"He, K.; Chen, X.; Xie, S.; Li, Y.; Dollar, P.; and Girshick, R. 2022. Masked Autoencoders Are Scalable Vision Learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 16000-16009.",
|
| 1465 |
+
"He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In Proceedings of the CVPR, 770-778. IEEE.",
|
| 1466 |
+
"Henriques, J. F.; Caseiro, R.; Martins, P.; and Batista, J. 2015. High-Speed Tracking with Kernelized Correlation Filters. IEEE TPAMI, 37(3): 583-596.",
|
| 1467 |
+
"Huang, L.; Zhao, X.; and Huang, K. 2019. GOT-10k: A Large High-Diversity Benchmark for Generic Object Tracking in the Wild. IEEE TPAMI.",
|
| 1468 |
+
"Jiang, B.; Luo, R.; Mao, J.; Xiao, T.; and Jiang, Y. 2018. Acquisition of localization confidence for accurate object detection. In Proceedings of the European conference on computer vision (ECCV), 784-799.",
|
| 1469 |
+
"Kristan, M.; Leonardis, A.; Matas, J.; Felsberg, M.; Pflugfelder, R.; Kämäräinen, J.-K.; Danelljan, M.; Zajc, L. C.; Lukežić, A.; Drbohlav, O.; et al. 2020. The eighth visual object tracking VOT2020 challenge results. In ECCV, 547-601. Springer.",
|
| 1470 |
+
"Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25: 1097-1105.",
|
| 1471 |
+
"Li, B.; Wu, W.; Wang, Q.; Zhang, F.; Xing, J.; and Yan, J. 2019. SiamRPN++: Evolution of Siamese Visual Tracking With Very Deep Networks. In Proceedings of the CVPR, 4282-4291. IEEE.",
|
| 1472 |
+
"Li, B.; Yan, J.; Wu, W.; Zhu, Z.; and Hu, X. 2018. High Performance Visual Tracking With Siamese Region Proposal Network. In Proceedings of the CVPR, 8971-8980. IEEE.",
|
| 1473 |
+
"Li, Y.; Mao, H.; Girshick, R.; and He, K. 2022. Exploring plain vision transformer backbones for object detection. arXiv preprint arXiv:2203.16527.",
|
| 1474 |
+
"Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollar, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In ECCV, 740-755. Springer."
|
| 1475 |
+
],
|
| 1476 |
+
"bbox": [
|
| 1477 |
+
517,
|
| 1478 |
+
66,
|
| 1479 |
+
913,
|
| 1480 |
+
890
|
| 1481 |
+
],
|
| 1482 |
+
"page_idx": 7
|
| 1483 |
+
},
|
| 1484 |
+
{
|
| 1485 |
+
"type": "list",
|
| 1486 |
+
"sub_type": "ref_text",
|
| 1487 |
+
"list_items": [
|
| 1488 |
+
"Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the ICCV.",
|
| 1489 |
+
"Loshchilov, I.; and Hutter, F. 2018. Decoupled weight decay regularization. In Proceedings of the ICLR.",
|
| 1490 |
+
"Mueller, M.; Smith, N.; and Ghanem, B. 2016. A benchmark and simulator for uav tracking. In Proceedings of the ECCV, 445-461. Springer.",
|
| 1491 |
+
"Muller, M.; Bibi, A.; Giancola, S.; Alsubaihi, S.; and Ghanem, B. 2018. TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild. In Proceedings of the ECCV.",
|
| 1492 |
+
"Nam, H.; and Han, B. 2016. Learning Multi-Domain Convolutional Neural Networks for Visual Tracking. In Proceedings of the CVPR, 4293-4302. IEEE.",
|
| 1493 |
+
"Pu, S.; Song, Y.; Ma, C.; Zhang, H.; and Yang, M.-H. 2018. Deep Attentive Tracking via Reciprocativc Learning. In NeurIPS, 1931-1941.",
|
| 1494 |
+
"Ramesh, A.; Pavlov, M.; Goh, G.; Gray, S.; Voss, C.; Radford, A.; Chen, M.; and Sutskever, I. 2021. Zero-shot text-to-image generation. In International Conference on Machine Learning, 8821-8831. PMLR.",
|
| 1495 |
+
"Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; and Savarese, S. 2019. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the CVPR, 658-666.",
|
| 1496 |
+
"Shen, Q.; Qiao, L.; Guo, J.; Li, P.; Li, X.; Li, B.; Feng, W.; Gan, W.; Wu, W.; and Ouyang, W. 2022. Unsupervised Learning of Accurate Siamese Tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8101-8110.",
|
| 1497 |
+
"Simonyan, K.; and Zisserman, A. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In International Conference on Learning Representations.",
|
| 1498 |
+
"Song, Z.; Yu, J.; Chen, Y.-P. P.; and Yang, W. 2022. Transformer Tracking With Cyclic Shifting Window Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8791-8800.",
|
| 1499 |
+
"Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is all you need. In NIPS, 5998-6008.",
|
| 1500 |
+
"Voigtlaender, P.; Luiten, J.; Torr, P. H.; and Leibe, B. 2020. Siam r-cnn: Visual tracking by re-detection. In Proceedings of the CVPR, 6578-6588.",
|
| 1501 |
+
"Wang, G.; Luo, C.; Sun, X.; Xiong, Z.; and Zeng, W. 2020. Tracking by instance detection: A meta-learning approach. In Proceedings of the CVPR, 6288-6297.",
|
| 1502 |
+
"Wang, N.; Zhou, W.; Wang, J.; and Li, H. 2021. Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking. In Proceedings of the CVPR, 1571-1580.",
|
| 1503 |
+
"Wei, C.; Fan, H.; Xie, S.; Wu, C.-Y.; Yuille, A.; and Feichtenhofer, C. 2022. Masked feature prediction for self-supervised visual pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14668-14678."
|
| 1504 |
+
],
|
| 1505 |
+
"bbox": [
|
| 1506 |
+
83,
|
| 1507 |
+
68,
|
| 1508 |
+
480,
|
| 1509 |
+
888
|
| 1510 |
+
],
|
| 1511 |
+
"page_idx": 8
|
| 1512 |
+
},
|
| 1513 |
+
{
|
| 1514 |
+
"type": "list",
|
| 1515 |
+
"sub_type": "ref_text",
|
| 1516 |
+
"list_items": [
|
| 1517 |
+
"Wu, H.; Xiao, B.; Codella, N.; Liu, M.; Dai, X.; Yuan, L.; and Zhang, L. 2021. Cvt: Introducing convolutions to vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 22-31.",
|
| 1518 |
+
"Xie, Z.; Zhang, Z.; Cao, Y.; Lin, Y.; Bao, J.; Yao, Z.; Dai, Q.; and Hu, H. 2022. SimMIM: A Simple Framework for Masked Image Modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9653-9663.",
|
| 1519 |
+
"Xu, Y.; Wang, Z.; Li, Z.; Yuan, Y.; and Yu, G. 2020. SiamFC++: Towards robust and accurate visual tracking with target estimation guidelines. In Proceedings of the AAAI, volume 34, 12549-12556.",
|
| 1520 |
+
"Yan, B.; Peng, H.; Fu, J.; Wang, D.; and Lu, H. 2021. Learning spatio-temporal transformer for visual tracking. In Proceedings of the ICCV.",
|
| 1521 |
+
"Yu, Y.; Xiong, Y.; Huang, W.; and Scott, M. R. 2020. Deformable siamese attention networks for visual object tracking. In Proceedings of the CVPR, 6728-6737.",
|
| 1522 |
+
"Zhang, Z.; Liu, Y.; Wang, X.; Li, B.; and Hu, W. 2021. Learn to match: Automatic matching network design for visual tracking. In Proceedings of the ICCV, 13339-13348."
|
| 1523 |
+
],
|
| 1524 |
+
"bbox": [
|
| 1525 |
+
517,
|
| 1526 |
+
68,
|
| 1527 |
+
913,
|
| 1528 |
+
390
|
| 1529 |
+
],
|
| 1530 |
+
"page_idx": 8
|
| 1531 |
+
}
|
| 1532 |
+
]
|
2301.10xxx/2301.10938/e2b2cbfc-a0df-462f-9845-caeaa831fe88_model.json
ADDED
|
@@ -0,0 +1,2220 @@
|
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "aside_text",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.023,
|
| 7 |
+
0.269,
|
| 8 |
+
0.058,
|
| 9 |
+
0.708
|
| 10 |
+
],
|
| 11 |
+
"angle": 270,
|
| 12 |
+
"content": "arXiv:2301.10938v1 [cs.CV] 26 Jan 2023"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "title",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.165,
|
| 18 |
+
0.121,
|
| 19 |
+
0.834,
|
| 20 |
+
0.143
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "Compact Transformer Tracker with Correlative Masked Modeling"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.193,
|
| 29 |
+
0.159,
|
| 30 |
+
0.808,
|
| 31 |
+
0.179
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "Zikai Song\\(^{1}\\), Run Luo\\(^{1}\\), Junqing Yu\\(^{1*}\\), Yi-Ping Phoebe Chen\\(^{2}\\), Wei Yang\\(^{1*}\\)"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.312,
|
| 40 |
+
0.183,
|
| 41 |
+
0.687,
|
| 42 |
+
0.198
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "<sup>1</sup>Huazhong University of Science and Technology, China"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.395,
|
| 51 |
+
0.198,
|
| 52 |
+
0.603,
|
| 53 |
+
0.212
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "\\(^{2}\\)La Trobe University, Australia"
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.225,
|
| 62 |
+
0.212,
|
| 63 |
+
0.773,
|
| 64 |
+
0.227
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "{skyesong, lr_8823, yjqing, weiyangcs}@hust.edu.cn, phoebe.chen@latrobe.edu.au"
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "title",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.249,
|
| 73 |
+
0.274,
|
| 74 |
+
0.314,
|
| 75 |
+
0.287
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "Abstract"
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.099,
|
| 84 |
+
0.298,
|
| 85 |
+
0.465,
|
| 86 |
+
0.688
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "Transformer framework has been showing superior performances in visual object tracking for its great strength in information aggregation across the template and search image with the well-known attention mechanism. Most recent advances focus on exploring attention mechanism variants for better information aggregation. We find these schemes are equivalent to or even just a subset of the basic self-attention mechanism. In this paper, we prove that the vanilla self-attention structure is sufficient for information aggregation, and structural adaption is unnecessary. The key is not the attention structure, but how to extract the discriminative feature for tracking and enhance the communication between the target and search image. Based on this finding, we adopt the basic vision transformer (ViT) architecture as our main tracker and concatenate the template and search image for feature embedding. To guide the encoder to capture the invariant feature for tracking, we attach a lightweight correlative masked decoder which reconstructs the original template and search image from the corresponding masked tokens. The correlative masked decoder serves as a plugin for the compact transform tracker and is skipped in inference. Our compact tracker uses the most simple structure which only consists of a ViT backbone and a box head, and can run at 40 fps. Extensive experiments show the proposed compact transform tracker outperforms existing approaches, including advanced attention variants, and demonstrates the sufficiency of self-attention in tracking tasks. Our method achieves state-of-the-art performance on five challenging datasets, along with the VOT2020, UAV123, LaSOT, TrackingNet, and GOT-10k benchmarks. Our project is available at https://github.com/HUSTDML/CTTrack."
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "title",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.212,
|
| 95 |
+
0.71,
|
| 96 |
+
0.352,
|
| 97 |
+
0.725
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": "1 Introduction"
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "text",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.082,
|
| 106 |
+
0.73,
|
| 107 |
+
0.48,
|
| 108 |
+
0.841
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": "Visual Object Tracking is one of the fundamental tasks in computer vision with applications ranging from human-computer interaction, surveillance, traffic flow monitoring and etc. It aims to estimate the location, denoted as a bounding box, of an arbitrary target object throughout the subsequent video sequence. Deep Learning based trackers have achieved great success due to their strong representation ability. Trackers (Bertinetto et al. 2016; Nam and Han 2016;"
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "image",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.523,
|
| 117 |
+
0.272,
|
| 118 |
+
0.909,
|
| 119 |
+
0.502
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": null
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "image_caption",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.516,
|
| 128 |
+
0.51,
|
| 129 |
+
0.914,
|
| 130 |
+
0.637
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": "Figure 1: Our compact transformer tracker adopts the simple ViT structure (encoder) with the concatenation of the template and search image as input, which essentially exploits the standard self-attention mechanism for information aggregation. The encoded tokens pass through a box head to estimate the result bounding box. And we develop a correlative masked decoder reconstructing the original template and search pixels to enhance the information aggregation, which is skipped during inference."
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"type": "text",
|
| 137 |
+
"bbox": [
|
| 138 |
+
0.516,
|
| 139 |
+
0.667,
|
| 140 |
+
0.915,
|
| 141 |
+
0.89
|
| 142 |
+
],
|
| 143 |
+
"angle": 0,
|
| 144 |
+
"content": "Li et al. 2018, 2019) derived from Convolutional Neural Networks (CNN) (Krizhevsky, Sutskever, and Hinton 2012; Simonyan and Zisserman 2015; He et al. 2016) produce tracking accuracy that beyond the comparison of traditional approaches, especially the trackers built on Siamese network (Bertinetto et al. 2016; Xu et al. 2020; Li et al. 2018, 2019; Voigtaender et al. 2020; Yu et al. 2020; Guo et al. 2021). The key of Siamese network trackers is to produce the cross-correlation and measure the similarity between the target template and search image. Nowadays, transformer-based trackers (Chen et al. 2021; Wang et al. 2021; Yan et al. 2021; Shen et al. 2022; Song et al. 2022; Cui et al. 2022) have shown great strength by introducing the attention mechanism (Vaswani et al. 2017) to enhance and fuse the features of querying sample and tracked objects. Prevalent transformer trackers (Chen et al. 2021; Yan et al. 2021;"
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"type": "page_footnote",
|
| 148 |
+
"bbox": [
|
| 149 |
+
0.082,
|
| 150 |
+
0.851,
|
| 151 |
+
0.48,
|
| 152 |
+
0.89
|
| 153 |
+
],
|
| 154 |
+
"angle": 0,
|
| 155 |
+
"content": "*indicates co-corresponding author. Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved."
|
| 156 |
+
}
|
| 157 |
+
],
|
| 158 |
+
[
|
| 159 |
+
{
|
| 160 |
+
"type": "text",
|
| 161 |
+
"bbox": [
|
| 162 |
+
0.084,
|
| 163 |
+
0.069,
|
| 164 |
+
0.478,
|
| 165 |
+
0.098
|
| 166 |
+
],
|
| 167 |
+
"angle": 0,
|
| 168 |
+
"content": "Cui et al. 2022) more or less adapt the attention for aggregating information across the template and search image."
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "text",
|
| 172 |
+
"bbox": [
|
| 173 |
+
0.082,
|
| 174 |
+
0.098,
|
| 175 |
+
0.48,
|
| 176 |
+
0.376
|
| 177 |
+
],
|
| 178 |
+
"angle": 0,
|
| 179 |
+
"content": "We find that the advanced variants of attention mechanism in recent research, including mix-attention (Cui et al. 2022) and cross-attention (Yu et al. 2020; Chen et al. 2021), are equivalent or even just a subset of the packed self-attention (i.e., standard self-attention with the concatenation of the template and search image as input). Then the question is which parts of the self-attention mechanism play an important role in visual object tracking? We revisited the transformer tracking framework and find that the tracking results are generated from tokens corresponding to the search image (search tokens), while the tokens corresponding to the template (template tokens) are always discarded in the last. The representational ability of search tokens comes from two parts: the cross-information enhancement from the template tokens and the self-information enhancement from the search tokens themselves. In this paper, we prove that self-information enhancement in multi-image attention plays a greater role than cross-information aggregation, though cross-information aggregation is indispensable in visual object tracking but not greatly beneficial."
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "text",
|
| 183 |
+
"bbox": [
|
| 184 |
+
0.082,
|
| 185 |
+
0.377,
|
| 186 |
+
0.481,
|
| 187 |
+
0.709
|
| 188 |
+
],
|
| 189 |
+
"angle": 0,
|
| 190 |
+
"content": "Driven by this analysis, we propose a compact transformer tracker combined with correlative masked modeling for the cross-information aggregation and self-information reinforcement. As shown in Figure 1, our tracker adopts the basic vision transformer as the main branch and applies a lightweight masked decoder to enhance the implicit representation capability of the packed self-attention. The correlative masked decoder, which is inspired by Masked Image Modeling (He et al. 2022; Xie et al. 2022), reconstructs the both original template and search pixels from the corresponding masked tokens, to guide the encoder to capture the invariant feature for tracking. In addition, our decoder can be plugged into other transformer trackers, which can effectively improve the tracking performance without compromising speed. Applying our correlative masked modeling strategy to the compact transformer tracker can improve the AUC from \\(64.0\\%\\) to \\(65.8\\%\\) on the LaSOT (Fan et al. 2019) dataset. Extensive comparison experiments on 5 challenging datasets including VOT2020 (Kristan et al. 2020), UAV123 (Mueller, Smith, and Ghanem 2016), LaSOT, GOT-10k (Huang, Zhao, and Huang 2019), and TrackingNet (Muller et al. 2018) exhibits the state-of-the-art performance, which further evidence the correctness of our analysis regarding the self-attention in visual tracking."
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "text",
|
| 194 |
+
"bbox": [
|
| 195 |
+
0.1,
|
| 196 |
+
0.71,
|
| 197 |
+
0.411,
|
| 198 |
+
0.724
|
| 199 |
+
],
|
| 200 |
+
"angle": 0,
|
| 201 |
+
"content": "To summarize, our main contributions include:"
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "text",
|
| 205 |
+
"bbox": [
|
| 206 |
+
0.086,
|
| 207 |
+
0.731,
|
| 208 |
+
0.481,
|
| 209 |
+
0.816
|
| 210 |
+
],
|
| 211 |
+
"angle": 0,
|
| 212 |
+
"content": "1. We present a unified analyzing method for the attention mechanism and find that the advanced variants of the attention mechanism are equivalent or even just a subset of the self-attention. We also prove that self-information enhancement in multi-image attention plays a greater role than cross-information aggregation."
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "text",
|
| 216 |
+
"bbox": [
|
| 217 |
+
0.085,
|
| 218 |
+
0.819,
|
| 219 |
+
0.482,
|
| 220 |
+
0.892
|
| 221 |
+
],
|
| 222 |
+
"angle": 0,
|
| 223 |
+
"content": "2. We develop a compact transformer tracker with a correlative masked decoder, which has a very simple structure and achieves state-of-the-art accuracy at a high Frames-Per-Seconds (fps) tracking speed. The decoder reconstructs the original template and search image from the"
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "list",
|
| 227 |
+
"bbox": [
|
| 228 |
+
0.085,
|
| 229 |
+
0.731,
|
| 230 |
+
0.482,
|
| 231 |
+
0.892
|
| 232 |
+
],
|
| 233 |
+
"angle": 0,
|
| 234 |
+
"content": null
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "text",
|
| 238 |
+
"bbox": [
|
| 239 |
+
0.538,
|
| 240 |
+
0.07,
|
| 241 |
+
0.913,
|
| 242 |
+
0.112
|
| 243 |
+
],
|
| 244 |
+
"angle": 0,
|
| 245 |
+
"content": "corresponding masked tokens and serves as a training plugin for the tracker. The experiment demonstrates that our analysis regarding self-attention is correct."
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "title",
|
| 249 |
+
"bbox": [
|
| 250 |
+
0.64,
|
| 251 |
+
0.128,
|
| 252 |
+
0.792,
|
| 253 |
+
0.143
|
| 254 |
+
],
|
| 255 |
+
"angle": 0,
|
| 256 |
+
"content": "2 Related Work"
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "text",
|
| 260 |
+
"bbox": [
|
| 261 |
+
0.516,
|
| 262 |
+
0.152,
|
| 263 |
+
0.913,
|
| 264 |
+
0.373
|
| 265 |
+
],
|
| 266 |
+
"angle": 0,
|
| 267 |
+
"content": "Traditional trackers. Traditional single object tracking algorithms can be roughly summarized as Correlation Filter based trackers (CF), Deep Network based trackers (DLN). CF-based trackers(Bolme et al. 2010; Henriques et al. 2015; Danelljan et al. 2016, 2017, 2019; Bhat et al. 2019) exploit the convolution theorem and learn a filter in the Fourier domain that maps known target images to the desired output. DLN-based trackers refer to algorithms employing deep neural networks for the tracking process. Earlier approaches (Nam and Han 2016; Pu et al. 2018) treat the tracking task as a classification problem and exploit deep features for locating the target. Shortly afterwards more trackers adopt the Siamese network (Bertinetto et al. 2016; Li et al. 2018, 2019) for its effectiveness in measuring similarity. The Siamese network consists of two branches, one operates on the template and the other for the search area."
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "text",
|
| 271 |
+
"bbox": [
|
| 272 |
+
0.516,
|
| 273 |
+
0.374,
|
| 274 |
+
0.914,
|
| 275 |
+
0.472
|
| 276 |
+
],
|
| 277 |
+
"angle": 0,
|
| 278 |
+
"content": "Above all, these methods mainly consist of a backbone which extracts the features of search image and template separately, a similarity measuring module, and heads to predict the location and bounding box. Compared to our framework, traditional trackers have too many modules and a very complex design, we simply adapt a ViT backbone with a box head to get better tracking results."
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "text",
|
| 282 |
+
"bbox": [
|
| 283 |
+
0.516,
|
| 284 |
+
0.472,
|
| 285 |
+
0.915,
|
| 286 |
+
0.763
|
| 287 |
+
],
|
| 288 |
+
"angle": 0,
|
| 289 |
+
"content": "Transformer trackers. The ViT (Dosovitskiy et al. 2021) first introduces the transformer to image recognition tasks and presents an impressive performance. Ever since, transformer has been widely applied in image classification(Dosovitskiy et al. 2021; Wu et al. 2021; Liu et al. 2021), object detection(Carion et al. 2020; Li et al. 2022), visual object tracking(Yan et al. 2021; Chen et al. 2021; Wang et al. 2021; Song et al. 2022; Shen et al. 2022; Cui et al. 2022) and etc. Transformer-based tracking methods have become the mainstream tracking algorithms nowadays. TransT (Chen et al. 2021) proposes a feature fusion network and employs an attention mechanism to combine the features of the template and search region. STARK (Yan et al. 2021) develops a spatial-temporal architecture based on the encoder-decoder transformer. CSWinTT (Song et al. 2022) proposes a transformer architecture with multi-scale cyclic shifting window attention for visual tracking, elevating the attention from pixel level to window level. MixFormer (Cui et al. 2022) constructs a compact tracking framework and designs a mixed attention module that unifies the process of feature extraction and information matching module."
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"type": "text",
|
| 293 |
+
"bbox": [
|
| 294 |
+
0.516,
|
| 295 |
+
0.764,
|
| 296 |
+
0.915,
|
| 297 |
+
0.891
|
| 298 |
+
],
|
| 299 |
+
"angle": 0,
|
| 300 |
+
"content": "Instead of designing a complex attention mechanism as in the previous tracking approaches, we compare the essential differences of attention variants(such as mix-attention and cross-attention) and find these attention variants are equivalent or even just a subset of the packed self-attention. To verify the capability of self-attention in information aggregation, we design a compact transformer tracker using the most simple pipeline which only consists of a ViT backbone and a box head, without any extra design including separate"
|
| 301 |
+
}
|
| 302 |
+
],
|
| 303 |
+
[
|
| 304 |
+
{
|
| 305 |
+
"type": "text",
|
| 306 |
+
"bbox": [
|
| 307 |
+
0.082,
|
| 308 |
+
0.069,
|
| 309 |
+
0.48,
|
| 310 |
+
0.098
|
| 311 |
+
],
|
| 312 |
+
"angle": 0,
|
| 313 |
+
"content": "modules of feature extraction and aggregation, and multi-layer feature aggregation."
|
| 314 |
+
},
|
| 315 |
+
{
|
| 316 |
+
"type": "text",
|
| 317 |
+
"bbox": [
|
| 318 |
+
0.082,
|
| 319 |
+
0.097,
|
| 320 |
+
0.48,
|
| 321 |
+
0.36
|
| 322 |
+
],
|
| 323 |
+
"angle": 0,
|
| 324 |
+
"content": "Masked image modeling (MIM). MIM masks an area of the original images and predicts the missing pixels, which aims to enhance the representation of models. Recently, MIM approaches((Chen et al. 2020; He et al. 2022; Xie et al. 2022; Wei et al. 2022; Bao, Dong, and Wei 2021)) are extended to the modern vision transformers (Dosovitskiy et al. 2021; Liu et al. 2021). iGPT (Chen et al. 2020) first proposes a transformer to predict unknown pixels from a sequence of low-resolution pixels. BEiT (Bao, Dong, and Wei 2021) tokenizes the images via an additional dVAE (Ramesh et al. 2021) network with a block-wise masking strategy. SimMIM (Xie et al. 2022) find that a moderately large masked patch size of the input image for pixel predictions makes a strong pre-text task. MAE (He et al. 2022) develops an asymmetric encoder-decoder architecture, the encoder operates on a small proportion of the visible patches, and the decoder reconstructs the original pixels. MaskFeat (Wei et al. 2022) reconstructs the feature descriptors such as HoG (Dalal and Triggs 2005) instead of pixels."
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"type": "text",
|
| 328 |
+
"bbox": [
|
| 329 |
+
0.082,
|
| 330 |
+
0.361,
|
| 331 |
+
0.481,
|
| 332 |
+
0.597
|
| 333 |
+
],
|
| 334 |
+
"angle": 0,
|
| 335 |
+
"content": "Our approach is inspired by the previous MIM method (Xie et al. 2022; He et al. 2022), but we have to deal with two fundamental problems in the tracking framework: (1) Visual tracking is a downstream vision task that generally does not have the pre-train process to apply the MIM strategy. We develop a masked decoder to leverage the search and the template tokens to predict the original images, which is embedded as an attachment plugin in the training phase to implement an end-to-end model. (2) MIM methods reconstructing the single image do not fit the tracking framework which involves cross-aggregation of multiple images. According to the properties of packed self-attention, we design a self-decoder and a cross-decoder to reconstruct the original template and search image from the corresponding masked tokens. As far as we know, we are the first to artfully introduce the MIM into the visual tracking field to improve the information aggregation capabilities."
|
| 336 |
+
},
|
| 337 |
+
{
|
| 338 |
+
"type": "title",
|
| 339 |
+
"bbox": [
|
| 340 |
+
0.222,
|
| 341 |
+
0.61,
|
| 342 |
+
0.342,
|
| 343 |
+
0.627
|
| 344 |
+
],
|
| 345 |
+
"angle": 0,
|
| 346 |
+
"content": "3 Approach"
|
| 347 |
+
},
|
| 348 |
+
{
|
| 349 |
+
"type": "text",
|
| 350 |
+
"bbox": [
|
| 351 |
+
0.082,
|
| 352 |
+
0.63,
|
| 353 |
+
0.481,
|
| 354 |
+
0.701
|
| 355 |
+
],
|
| 356 |
+
"angle": 0,
|
| 357 |
+
"content": "In this section, we introduce our compact transformer tracker with correlative masked modeling in detail. Before proceeding, we first present a analysis on the key component of transformer tracker, and demonstrate that existing attention variants are equivalent to the packed self-attention."
|
| 358 |
+
},
|
| 359 |
+
{
|
| 360 |
+
"type": "title",
|
| 361 |
+
"bbox": [
|
| 362 |
+
0.083,
|
| 363 |
+
0.712,
|
| 364 |
+
0.373,
|
| 365 |
+
0.727
|
| 366 |
+
],
|
| 367 |
+
"angle": 0,
|
| 368 |
+
"content": "3.1 Revisiting Transformer Tracker"
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"type": "text",
|
| 372 |
+
"bbox": [
|
| 373 |
+
0.082,
|
| 374 |
+
0.73,
|
| 375 |
+
0.481,
|
| 376 |
+
0.802
|
| 377 |
+
],
|
| 378 |
+
"angle": 0,
|
| 379 |
+
"content": "Transformer tracking framework. As described in ViT(Vaswani et al. 2017), the query-key-value attention mechanism is applied with query \\(\\mathbf{Q}\\), key \\(\\mathbf{K}\\), and value \\(\\mathbf{V}\\). The linear weights of \\(\\mathbf{Q}, \\mathbf{K}, \\mathbf{V}\\) are \\(\\mathbf{W}_Q, \\mathbf{W}_K, \\mathbf{W}_V\\) respectively. The attention (Attn) is computed as:"
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"type": "equation",
|
| 383 |
+
"bbox": [
|
| 384 |
+
0.122,
|
| 385 |
+
0.819,
|
| 386 |
+
0.48,
|
| 387 |
+
0.855
|
| 388 |
+
],
|
| 389 |
+
"angle": 0,
|
| 390 |
+
"content": "\\[\n\\operatorname {A t t n} (\\mathbf {X}) = \\operatorname {s o f t m a x} \\left(\\frac {\\mathbf {X} \\mathbf {W} _ {Q} \\cdot \\mathbf {W} _ {K} ^ {T} \\mathbf {X} ^ {T}}{\\sqrt {d _ {k}}}\\right) \\cdot \\mathbf {X} \\mathbf {W} _ {V} \\tag {1}\n\\]"
|
| 391 |
+
},
|
| 392 |
+
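A minimal sketch of the attention in Eqn. (1) above, written as a single-head PyTorch function. This is only an illustration, not the authors' code: the tensor names and toy sizes (tokens of shape L x d, learned projections W_Q, W_K, W_V of shape d x d_k) are assumptions chosen to mirror the notation.

```python
# Sketch of Eqn. (1): single-head self-attention over one token sequence X.
# Shapes are illustrative assumptions: X is (L, d), W_* are (d, d_k).
import torch

def attn(X, W_Q, W_K, W_V):
    d_k = W_K.shape[1]
    # scores = softmax(X W_Q (X W_K)^T / sqrt(d_k)), shape (L, L)
    scores = (X @ W_Q) @ (X @ W_K).T / d_k ** 0.5
    return torch.softmax(scores, dim=-1) @ (X @ W_V)

# Toy usage: 8 tokens with 16 channels.
L, d, d_k = 8, 16, 16
X = torch.randn(L, d)
W_Q, W_K, W_V = (torch.randn(d, d_k) for _ in range(3))
out = attn(X, W_Q, W_K, W_V)   # (8, 16)
```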
{
|
| 393 |
+
"type": "text",
|
| 394 |
+
"bbox": [
|
| 395 |
+
0.082,
|
| 396 |
+
0.861,
|
| 397 |
+
0.481,
|
| 398 |
+
0.891
|
| 399 |
+
],
|
| 400 |
+
"angle": 0,
|
| 401 |
+
"content": "where the \\( \\mathbf{X} \\) is the input token and the \\( d_{k} \\) is the dimension of the key. For a clearer description of the post-order steps,"
|
| 402 |
+
},
|
| 403 |
+
{
|
| 404 |
+
"type": "image",
|
| 405 |
+
"bbox": [
|
| 406 |
+
0.539,
|
| 407 |
+
0.065,
|
| 408 |
+
0.895,
|
| 409 |
+
0.456
|
| 410 |
+
],
|
| 411 |
+
"angle": 0,
|
| 412 |
+
"content": null
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"type": "image_caption",
|
| 416 |
+
"bbox": [
|
| 417 |
+
0.516,
|
| 418 |
+
0.464,
|
| 419 |
+
0.915,
|
| 420 |
+
0.522
|
| 421 |
+
],
|
| 422 |
+
"angle": 0,
|
| 423 |
+
"content": "Figure 2: Information streams in the attention mechanism. The four information streams of Q-K-V are corresponding to the four parts in the attention map. Variants of attention can be uniformly explained under this analytical approach."
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"type": "text",
|
| 427 |
+
"bbox": [
|
| 428 |
+
0.516,
|
| 429 |
+
0.546,
|
| 430 |
+
0.914,
|
| 431 |
+
0.604
|
| 432 |
+
],
|
| 433 |
+
"angle": 0,
|
| 434 |
+
"content": "we apply an attention calculation with the inputs of two different tokens, the token \\(\\mathbf{X}_Q\\) computed with query and the token \\(\\mathbf{X}_K V\\) computed with key and value. We modify the attention formula and define the attention map (AMap) as:"
|
| 435 |
+
},
|
| 436 |
+
{
|
| 437 |
+
"type": "equation",
|
| 438 |
+
"bbox": [
|
| 439 |
+
0.543,
|
| 440 |
+
0.618,
|
| 441 |
+
0.882,
|
| 442 |
+
0.636
|
| 443 |
+
],
|
| 444 |
+
"angle": 0,
|
| 445 |
+
"content": "\\[\n\\operatorname {A t t n} \\left(\\mathbf {X} _ {Q}, \\mathbf {X} _ {K V}\\right) = \\operatorname {A M a p} \\left(\\mathbf {X} _ {Q}, \\mathbf {X} _ {K V}\\right) \\cdot \\mathbf {X} _ {K V} \\mathbf {W} _ {V}\n\\]"
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"type": "equation",
|
| 449 |
+
"bbox": [
|
| 450 |
+
0.53,
|
| 451 |
+
0.638,
|
| 452 |
+
0.912,
|
| 453 |
+
0.671
|
| 454 |
+
],
|
| 455 |
+
"angle": 0,
|
| 456 |
+
"content": "\\[\n\\operatorname {A M a p} \\left(\\mathbf {X} _ {Q}, \\mathbf {X} _ {K V}\\right) = \\operatorname {s o f t m a x} \\left(\\frac {\\mathbf {X} _ {Q} \\mathbf {W} _ {Q} \\cdot \\mathbf {W} _ {K} ^ {T} \\mathbf {X} _ {K V} ^ {T}}{\\sqrt {d}}\\right) \\tag {2}\n\\]"
|
| 457 |
+
},
|
| 458 |
+
{
|
| 459 |
+
"type": "text",
|
| 460 |
+
"bbox": [
|
| 461 |
+
0.516,
|
| 462 |
+
0.676,
|
| 463 |
+
0.914,
|
| 464 |
+
0.802
|
| 465 |
+
],
|
| 466 |
+
"angle": 0,
|
| 467 |
+
"content": "Our compact transformer tracker consists of two parts: a transformer backbone for information aggregation and a box head for the bounding box estimation. Give the template \\( z \\) in the initial frame and a search image \\( s \\). We obtain the tokens \\( X_{t} \\in \\mathbb{R}^{L_{z} \\times d} \\) and \\( X_{s} \\in \\mathbb{R}^{L_{s} \\times d} \\) respectively through patch embedding, where \\( d \\) represents the number of channels. The packed self-attention (PSelf-Attn) in the tracking field is defined as the self-attention with the input of the concatenation (Cat) of the template and the search image:"
|
| 468 |
+
},
|
| 469 |
+
{
|
| 470 |
+
"type": "equation",
|
| 471 |
+
"bbox": [
|
| 472 |
+
0.539,
|
| 473 |
+
0.817,
|
| 474 |
+
0.912,
|
| 475 |
+
0.843
|
| 476 |
+
],
|
| 477 |
+
"angle": 0,
|
| 478 |
+
"content": "\\[\n\\operatorname {P S e l f - A t t n} = \\operatorname {A t t n} \\left(C a t \\left(\\mathbf {X} _ {z}, \\mathbf {X} _ {s}\\right), C a t \\left(\\mathbf {X} _ {z}, \\mathbf {X} _ {s}\\right)\\right) \\tag {3}\n\\]"
|
| 479 |
+
},
|
| 480 |
+
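The packed self-attention of Eqn. (3) above is simply ordinary self-attention applied to the concatenation of template and search tokens. The following sketch illustrates this under assumed toy shapes; it repeats the illustrative attn helper so the snippet is self-contained.

```python
# Sketch of Eqn. (3): packed self-attention = self-attention over
# Cat(X_z, X_s), the concatenated template and search tokens.
import torch

def attn(X, W_Q, W_K, W_V):
    d_k = W_K.shape[1]
    scores = (X @ W_Q) @ (X @ W_K).T / d_k ** 0.5
    return torch.softmax(scores, dim=-1) @ (X @ W_V)

def pself_attn(X_z, X_s, W_Q, W_K, W_V):
    X = torch.cat([X_z, X_s], dim=0)      # (L_z + L_s, d)
    return attn(X, W_Q, W_K, W_V)         # (L_z + L_s, d_k)

L_z, L_s, d = 4, 12, 16                   # toy sizes (assumed)
W_Q, W_K, W_V = (torch.randn(d, d) for _ in range(3))
out = pself_attn(torch.randn(L_z, d), torch.randn(L_s, d), W_Q, W_K, W_V)
```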
{
|
| 481 |
+
"type": "text",
|
| 482 |
+
"bbox": [
|
| 483 |
+
0.516,
|
| 484 |
+
0.847,
|
| 485 |
+
0.914,
|
| 486 |
+
0.89
|
| 487 |
+
],
|
| 488 |
+
"angle": 0,
|
| 489 |
+
"content": "Analysis on Attention. As shown in Figure 2, we divide the computation of attention mechanism, which involves both template and search image, into four information streams:"
|
| 490 |
+
}
|
| 491 |
+
],
|
| 492 |
+
[
|
| 493 |
+
{
|
| 494 |
+
"type": "image",
|
| 495 |
+
"bbox": [
|
| 496 |
+
0.099,
|
| 497 |
+
0.08,
|
| 498 |
+
0.216,
|
| 499 |
+
0.171
|
| 500 |
+
],
|
| 501 |
+
"angle": 0,
|
| 502 |
+
"content": null
|
| 503 |
+
},
|
| 504 |
+
{
|
| 505 |
+
"type": "image_caption",
|
| 506 |
+
"bbox": [
|
| 507 |
+
0.115,
|
| 508 |
+
0.174,
|
| 509 |
+
0.2,
|
| 510 |
+
0.187
|
| 511 |
+
],
|
| 512 |
+
"angle": 0,
|
| 513 |
+
"content": "(a) PSelf-Attn"
|
| 514 |
+
},
|
| 515 |
+
{
|
| 516 |
+
"type": "image",
|
| 517 |
+
"bbox": [
|
| 518 |
+
0.222,
|
| 519 |
+
0.079,
|
| 520 |
+
0.339,
|
| 521 |
+
0.17
|
| 522 |
+
],
|
| 523 |
+
"angle": 0,
|
| 524 |
+
"content": null
|
| 525 |
+
},
|
| 526 |
+
{
|
| 527 |
+
"type": "image_caption",
|
| 528 |
+
"bbox": [
|
| 529 |
+
0.236,
|
| 530 |
+
0.174,
|
| 531 |
+
0.325,
|
| 532 |
+
0.186
|
| 533 |
+
],
|
| 534 |
+
"angle": 0,
|
| 535 |
+
"content": "(b) AMix-Attn"
|
| 536 |
+
},
|
| 537 |
+
{
|
| 538 |
+
"type": "image",
|
| 539 |
+
"bbox": [
|
| 540 |
+
0.344,
|
| 541 |
+
0.08,
|
| 542 |
+
0.462,
|
| 543 |
+
0.17
|
| 544 |
+
],
|
| 545 |
+
"angle": 0,
|
| 546 |
+
"content": null
|
| 547 |
+
},
|
| 548 |
+
{
|
| 549 |
+
"type": "image_caption",
|
| 550 |
+
"bbox": [
|
| 551 |
+
0.361,
|
| 552 |
+
0.174,
|
| 553 |
+
0.446,
|
| 554 |
+
0.186
|
| 555 |
+
],
|
| 556 |
+
"angle": 0,
|
| 557 |
+
"content": "(c) Cross-Attn"
|
| 558 |
+
},
|
| 559 |
+
{
|
| 560 |
+
"type": "image_caption",
|
| 561 |
+
"bbox": [
|
| 562 |
+
0.084,
|
| 563 |
+
0.199,
|
| 564 |
+
0.475,
|
| 565 |
+
0.255
|
| 566 |
+
],
|
| 567 |
+
"angle": 0,
|
| 568 |
+
"content": "Figure 3: Configurations of information stream in attention map of packed self-attention (PSelf-Attn), asymmetric mix-attention(AMix-Attn) and cross-attention (Cross-Attn)."
|
| 569 |
+
},
|
| 570 |
+
{
|
| 571 |
+
"type": "text",
|
| 572 |
+
"bbox": [
|
| 573 |
+
0.094,
|
| 574 |
+
0.28,
|
| 575 |
+
0.408,
|
| 576 |
+
0.296
|
| 577 |
+
],
|
| 578 |
+
"angle": 0,
|
| 579 |
+
"content": "(1) self-information enhancement on template;"
|
| 580 |
+
},
|
| 581 |
+
{
|
| 582 |
+
"type": "text",
|
| 583 |
+
"bbox": [
|
| 584 |
+
0.094,
|
| 585 |
+
0.297,
|
| 586 |
+
0.408,
|
| 587 |
+
0.312
|
| 588 |
+
],
|
| 589 |
+
"angle": 0,
|
| 590 |
+
"content": "(2) cross-information aggregation on template;"
|
| 591 |
+
},
|
| 592 |
+
{
|
| 593 |
+
"type": "text",
|
| 594 |
+
"bbox": [
|
| 595 |
+
0.094,
|
| 596 |
+
0.314,
|
| 597 |
+
0.437,
|
| 598 |
+
0.329
|
| 599 |
+
],
|
| 600 |
+
"angle": 0,
|
| 601 |
+
"content": "(3) cross-information aggregation on search image;"
|
| 602 |
+
},
|
| 603 |
+
{
|
| 604 |
+
"type": "text",
|
| 605 |
+
"bbox": [
|
| 606 |
+
0.094,
|
| 607 |
+
0.33,
|
| 608 |
+
0.436,
|
| 609 |
+
0.345
|
| 610 |
+
],
|
| 611 |
+
"angle": 0,
|
| 612 |
+
"content": "(4) self-information enhancement on search image."
|
| 613 |
+
},
|
| 614 |
+
{
|
| 615 |
+
"type": "list",
|
| 616 |
+
"bbox": [
|
| 617 |
+
0.094,
|
| 618 |
+
0.28,
|
| 619 |
+
0.437,
|
| 620 |
+
0.345
|
| 621 |
+
],
|
| 622 |
+
"angle": 0,
|
| 623 |
+
"content": null
|
| 624 |
+
},
|
| 625 |
+
{
|
| 626 |
+
"type": "text",
|
| 627 |
+
"bbox": [
|
| 628 |
+
0.082,
|
| 629 |
+
0.348,
|
| 630 |
+
0.479,
|
| 631 |
+
0.431
|
| 632 |
+
],
|
| 633 |
+
"angle": 0,
|
| 634 |
+
"content": "These four information streams are also reflected in the four parts of the attention map (In Figure 2, the index of each part in the attention map corresponds to the information stream). Based on this dissection, we can conveniently compare the differences between existing attention, including packed self-attention, mix-attention, and cross-attention."
|
| 635 |
+
},
|
| 636 |
+
{
|
| 637 |
+
"type": "text",
|
| 638 |
+
"bbox": [
|
| 639 |
+
0.084,
|
| 640 |
+
0.432,
|
| 641 |
+
0.479,
|
| 642 |
+
0.46
|
| 643 |
+
],
|
| 644 |
+
"angle": 0,
|
| 645 |
+
"content": "The PSelf-Attn and the mix-attention(Cui et al. 2022) are essentially equivalent, the mix-attention is calculated as:"
|
| 646 |
+
},
|
| 647 |
+
{
|
| 648 |
+
"type": "equation",
|
| 649 |
+
"bbox": [
|
| 650 |
+
0.186,
|
| 651 |
+
0.466,
|
| 652 |
+
0.375,
|
| 653 |
+
0.479
|
| 654 |
+
],
|
| 655 |
+
"angle": 0,
|
| 656 |
+
"content": "\\[\n\\text {P S e l f - A t t n} = = \\text {M i x - A t t n} =\n\\]"
|
| 657 |
+
},
|
| 658 |
+
{
|
| 659 |
+
"type": "equation",
|
| 660 |
+
"bbox": [
|
| 661 |
+
0.084,
|
| 662 |
+
0.485,
|
| 663 |
+
0.486,
|
| 664 |
+
0.523
|
| 665 |
+
],
|
| 666 |
+
"angle": 0,
|
| 667 |
+
"content": "\\[\n\\operatorname {C a t} \\left(\\operatorname {A M a p} \\left(\\mathbf {X} _ {z}, C a t \\left(\\mathbf {X} _ {z}, \\mathbf {X} _ {s}\\right)\\right), \\operatorname {A M a p} \\left(\\mathbf {X} _ {s}, C a t \\left(\\mathbf {X} _ {z}, \\mathbf {X} _ {s}\\right)\\right)\\right) \\tag {4}\n\\]"
|
| 668 |
+
},
|
| 669 |
+
{
|
| 670 |
+
"type": "text",
|
| 671 |
+
"bbox": [
|
| 672 |
+
0.082,
|
| 673 |
+
0.529,
|
| 674 |
+
0.478,
|
| 675 |
+
0.558
|
| 676 |
+
],
|
| 677 |
+
"angle": 0,
|
| 678 |
+
"content": "which is the same as Eqn. 3, and they include all four information streams (the attention map is shown as Figure 3a)."
|
| 679 |
+
},
|
| 680 |
+
{
|
| 681 |
+
"type": "text",
|
| 682 |
+
"bbox": [
|
| 683 |
+
0.082,
|
| 684 |
+
0.558,
|
| 685 |
+
0.479,
|
| 686 |
+
0.612
|
| 687 |
+
],
|
| 688 |
+
"angle": 0,
|
| 689 |
+
"content": "By the same analysis, the asymmetric mix-attention (AMix-Attn) contains three information streams (#1, #3, #4 info stream), which is shown in the Figure 3b and is calculated as follows:"
|
| 690 |
+
},
|
| 691 |
+
{
|
| 692 |
+
"type": "equation",
|
| 693 |
+
"bbox": [
|
| 694 |
+
0.236,
|
| 695 |
+
0.617,
|
| 696 |
+
0.332,
|
| 697 |
+
0.63
|
| 698 |
+
],
|
| 699 |
+
"angle": 0,
|
| 700 |
+
"content": "\\[\n\\mathrm {A M i x - A t t n} =\n\\]"
|
| 701 |
+
},
|
| 702 |
+
{
|
| 703 |
+
"type": "equation",
|
| 704 |
+
"bbox": [
|
| 705 |
+
0.104,
|
| 706 |
+
0.632,
|
| 707 |
+
0.478,
|
| 708 |
+
0.661
|
| 709 |
+
],
|
| 710 |
+
"angle": 0,
|
| 711 |
+
"content": "\\[\n\\operatorname {C a t} \\left(\\operatorname {A M a p} \\left(\\mathbf {X} _ {z}, \\mathbf {X} _ {z}\\right), \\operatorname {A M a p} \\left(\\mathbf {X} _ {s}, \\operatorname {C a t} \\left(\\mathbf {X} _ {z}, \\mathbf {X} _ {s}\\right)\\right)\\right) \\tag {5}\n\\]"
|
| 712 |
+
},
|
| 713 |
+
{
|
| 714 |
+
"type": "text",
|
| 715 |
+
"bbox": [
|
| 716 |
+
0.083,
|
| 717 |
+
0.665,
|
| 718 |
+
0.479,
|
| 719 |
+
0.707
|
| 720 |
+
],
|
| 721 |
+
"angle": 0,
|
| 722 |
+
"content": "The cross-attention contains two information streams (#2,#3 info stream) for cross information aggregation, which is shown in the Figure 3c and is calculated as follows:"
|
| 723 |
+
},
|
| 724 |
+
{
|
| 725 |
+
"type": "equation",
|
| 726 |
+
"bbox": [
|
| 727 |
+
0.1,
|
| 728 |
+
0.714,
|
| 729 |
+
0.478,
|
| 730 |
+
0.75
|
| 731 |
+
],
|
| 732 |
+
"angle": 0,
|
| 733 |
+
"content": "\\[\n\\operatorname {C r o s s - A t t n} = \\operatorname {C a t} \\left(\\operatorname {A M a p} \\left(\\mathbf {X} _ {z}, \\mathbf {X} _ {s}\\right), \\operatorname {A M a p} \\left(\\mathbf {X} _ {s}, \\mathbf {X} _ {z}\\right)\\right) \\tag {6}\n\\]"
|
| 734 |
+
},
|
| 735 |
+
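One way to see that the variants in Eqns. (4)-(6) are subsets of the packed self-attention is to mask out quadrants of the joint attention map before the softmax. The sketch below is only an illustration of that reading, not the authors' implementation; stream numbering follows Figure 2 (with query rows and key columns: 1 = template-to-template, 2 = template-over-search, 3 = search-over-template, 4 = search-to-search), and all shapes are assumed toy values.

```python
# Sketch: the four information streams as quadrants of the packed attention
# map; masking quadrants before softmax recovers the attention variants.
import torch

def masked_pself_attn(X_z, X_s, W_Q, W_K, W_V, keep):
    """keep: set of stream ids {1,2,3,4} left unmasked.
    Quadrants (query rows, key cols): 1=(z,z), 2=(z,s), 3=(s,z), 4=(s,s)."""
    L_z = X_z.shape[0]
    X = torch.cat([X_z, X_s], dim=0)
    d_k = W_K.shape[1]
    scores = (X @ W_Q) @ (X @ W_K).T / d_k ** 0.5
    mask = torch.full_like(scores, float("-inf"))
    if 1 in keep: mask[:L_z, :L_z] = 0
    if 2 in keep: mask[:L_z, L_z:] = 0
    if 3 in keep: mask[L_z:, :L_z] = 0
    if 4 in keep: mask[L_z:, L_z:] = 0
    return torch.softmax(scores + mask, dim=-1) @ (X @ W_V)

L_z, L_s, d = 4, 12, 16
W = [torch.randn(d, d) for _ in range(3)]
X_z, X_s = torch.randn(L_z, d), torch.randn(L_s, d)
full  = masked_pself_attn(X_z, X_s, *W, keep={1, 2, 3, 4})  # Eqn. (3)/(4)
amix  = masked_pself_attn(X_z, X_s, *W, keep={1, 3, 4})     # Eqn. (5)
cross = masked_pself_attn(X_z, X_s, *W, keep={2, 3})        # Eqn. (6)
```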
{
|
| 736 |
+
"type": "text",
|
| 737 |
+
"bbox": [
|
| 738 |
+
0.082,
|
| 739 |
+
0.751,
|
| 740 |
+
0.48,
|
| 741 |
+
0.848
|
| 742 |
+
],
|
| 743 |
+
"angle": 0,
|
| 744 |
+
"content": "In order to fully verify the importance of each part of packed attention, it is necessary to evaluate the impact of each information stream individually. The key of visual object tracking is to find the target in the search image, there must be a cross-information aggregation of the search image (#3 info stream). The other information streams can be blocked out to verify their performance."
|
| 745 |
+
},
|
| 746 |
+
{
|
| 747 |
+
"type": "text",
|
| 748 |
+
"bbox": [
|
| 749 |
+
0.082,
|
| 750 |
+
0.847,
|
| 751 |
+
0.48,
|
| 752 |
+
0.89
|
| 753 |
+
],
|
| 754 |
+
"angle": 0,
|
| 755 |
+
"content": "Based on the above idea, we conduct detailed experiments and the result is shown in Table 1. Removing cross-information aggregation of the template (#2 info stream) of"
|
| 756 |
+
},
|
| 757 |
+
{
|
| 758 |
+
"type": "table_caption",
|
| 759 |
+
"bbox": [
|
| 760 |
+
0.516,
|
| 761 |
+
0.066,
|
| 762 |
+
0.913,
|
| 763 |
+
0.122
|
| 764 |
+
],
|
| 765 |
+
"angle": 0,
|
| 766 |
+
"content": "Table 1: The effectiveness of information streams in the attention mechanism on the LaSOT dataset. The visualized four parts in the attention map (AMap) correspond to the four information streams at the matched location."
|
| 767 |
+
},
|
| 768 |
+
{
|
| 769 |
+
"type": "table",
|
| 770 |
+
"bbox": [
|
| 771 |
+
0.534,
|
| 772 |
+
0.134,
|
| 773 |
+
0.895,
|
| 774 |
+
0.285
|
| 775 |
+
],
|
| 776 |
+
"angle": 0,
|
| 777 |
+
"content": "<table><tr><td rowspan=\"2\" colspan=\"2\">#AMap</td><td colspan=\"4\">No. Info Stream</td><td rowspan=\"2\">AUC</td><td rowspan=\"2\">Prec</td></tr><tr><td>①</td><td>②</td><td>③</td><td>④</td></tr><tr><td>1</td><td></td><td>√</td><td>√</td><td>√</td><td>√</td><td>61.7</td><td>64.2</td></tr><tr><td>2</td><td></td><td>√</td><td></td><td>√</td><td>√</td><td>64.0</td><td>67.7</td></tr><tr><td>3</td><td></td><td></td><td>√</td><td>√</td><td>√</td><td>60.6</td><td>63.7</td></tr><tr><td>4</td><td></td><td>√</td><td>√</td><td>√</td><td></td><td>58.8</td><td>60.1</td></tr><tr><td>5</td><td></td><td></td><td>√</td><td>√</td><td></td><td>57.9</td><td>58.5</td></tr></table>"
|
| 778 |
+
},
|
| 779 |
+
{
|
| 780 |
+
"type": "text",
|
| 781 |
+
"bbox": [
|
| 782 |
+
0.516,
|
| 783 |
+
0.311,
|
| 784 |
+
0.914,
|
| 785 |
+
0.493
|
| 786 |
+
],
|
| 787 |
+
"angle": 0,
|
| 788 |
+
"content": "self-attention can greatly improve tracking performance (the AUC and Prec of Table 1 #2 are better than that of Table 1 #1), and the cross-information aggregation of the template will introduce a lot of noise in template features, which is not recommended in visual tracking. However, removing self-information enhancement (#3 and #4 info stream) of self-attention severely degrades the tracking performance (the AUC and Prec of Table 1 #3 and #4 are worse than that of Table 1 #1). From the results we can conclude that self-information enhancement in multi-image attention plays a greater role than cross-information aggregation, the cross-information aggregation is indispensable in tracking but not greatly beneficial."
|
| 789 |
+
},
|
| 790 |
+
{
|
| 791 |
+
"type": "title",
|
| 792 |
+
"bbox": [
|
| 793 |
+
0.517,
|
| 794 |
+
0.507,
|
| 795 |
+
0.794,
|
| 796 |
+
0.523
|
| 797 |
+
],
|
| 798 |
+
"angle": 0,
|
| 799 |
+
"content": "3.2 Correlative Masked Modeling"
|
| 800 |
+
},
|
| 801 |
+
{
|
| 802 |
+
"type": "text",
|
| 803 |
+
"bbox": [
|
| 804 |
+
0.515,
|
| 805 |
+
0.528,
|
| 806 |
+
0.913,
|
| 807 |
+
0.777
|
| 808 |
+
],
|
| 809 |
+
"angle": 0,
|
| 810 |
+
"content": "According to the above analysis, the best tracking performance can be achieved by adopting three information streams: self-information on the template(#1 info stream), cross-information on the search image (#3 info stream), and self-information on the search image (#4 info stream). These three information streams can be grouped into two categories: two self-information enhancements and one cross-information aggregation. We designed a correlative masked modeling method to enhance the information aggregation of our tracking framework, as shown in Figure 1. The ViT backbone is an encoder, and the correlative masked decoder reconstructs the original image (the template and search image respectively) from randomly masked tokens to enhance the self-information and reconstructs the template image from search tokens to improve cross-information aggregation. In parallel with the masked decoder, the search image tokens go through a box estimation head as in (Yan et al. 2021) to generate the result bounding box."
|
| 811 |
+
},
|
| 812 |
+
{
|
| 813 |
+
"type": "text",
|
| 814 |
+
"bbox": [
|
| 815 |
+
0.516,
|
| 816 |
+
0.778,
|
| 817 |
+
0.914,
|
| 818 |
+
0.89
|
| 819 |
+
],
|
| 820 |
+
"angle": 0,
|
| 821 |
+
"content": "Decoder. The decoders in our framework consist of a self-decoder and a cross-decoder, these two decoders have the same structure but do not share weights, each one is composed of a series of transformer blocks similar to the MAE, and the last layer of the decoder is a linear projection with output channels equal to the number of pixels in a patch. As shown in Figure 4, the decoder takes masked tokens as input and predicts the original image pixels corresponding to"
|
| 822 |
+
}
|
| 823 |
+
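As a rough sketch of the decoder description above: each decoder (self-decoder and cross-decoder, same structure, separate weights) is a small stack of transformer blocks followed by a linear projection whose output size equals the number of pixels in one patch. The depth, width, heads, and patch size below are placeholders, not the paper's configuration, and PyTorch's built-in TransformerEncoderLayer stands in for a generic block.

```python
# Illustrative correlative masked decoder head: a few transformer blocks plus
# a linear projection back to raw patch pixels. Hyperparameters are assumed.
import torch
import torch.nn as nn

class MaskedDecoder(nn.Module):
    def __init__(self, dim=512, depth=2, heads=8, patch=16, in_chans=3):
        super().__init__()
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(block, num_layers=depth)
        # One output channel per pixel of a patch (patch * patch * channels).
        self.proj = nn.Linear(dim, patch * patch * in_chans)

    def forward(self, tokens):                  # tokens: (B, L, dim)
        return self.proj(self.blocks(tokens))   # (B, L, patch*patch*3)

# Self-decoder and cross-decoder: two instances with separate weights.
self_decoder, cross_decoder = MaskedDecoder(), MaskedDecoder()
```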
],
|
| 824 |
+
[
|
| 825 |
+
{
|
| 826 |
+
"type": "image",
|
| 827 |
+
"bbox": [
|
| 828 |
+
0.088,
|
| 829 |
+
0.066,
|
| 830 |
+
0.476,
|
| 831 |
+
0.282
|
| 832 |
+
],
|
| 833 |
+
"angle": 0,
|
| 834 |
+
"content": null
|
| 835 |
+
},
|
| 836 |
+
{
|
| 837 |
+
"type": "image_caption",
|
| 838 |
+
"bbox": [
|
| 839 |
+
0.082,
|
| 840 |
+
0.291,
|
| 841 |
+
0.48,
|
| 842 |
+
0.362
|
| 843 |
+
],
|
| 844 |
+
"angle": 0,
|
| 845 |
+
"content": "Figure 4: The correlative masked decoders consists of a self-decoder and a cross-decoder. The self-decoder reconstructs the two original images, template and search image, from its corresponding masked tokens. The cross-decoder reconstructs the template image from search tokens."
|
| 846 |
+
},
|
| 847 |
+
{
|
| 848 |
+
"type": "text",
|
| 849 |
+
"bbox": [
|
| 850 |
+
0.082,
|
| 851 |
+
0.387,
|
| 852 |
+
0.48,
|
| 853 |
+
0.47
|
| 854 |
+
],
|
| 855 |
+
"angle": 0,
|
| 856 |
+
"content": "the template token and the search image token, where the template tokens are only self-reconstructed to the template image for enhancing the #1 information stream, search tokens are used to crossly reconstruct the template image (for #3 info stream) and self-reconstruct the search image (for #4 info stream)."
|
| 857 |
+
},
|
| 858 |
+
{
|
| 859 |
+
"type": "text",
|
| 860 |
+
"bbox": [
|
| 861 |
+
0.082,
|
| 862 |
+
0.47,
|
| 863 |
+
0.481,
|
| 864 |
+
0.624
|
| 865 |
+
],
|
| 866 |
+
"angle": 0,
|
| 867 |
+
"content": "Masking and Reconstruction. The encoder embeds the concatenation set of template tokens and search tokens. Then we split the encoded tokens into template tokens and search tokens, crop the search tokens using Precise RoI Pooling(Jiang et al. 2018) to the same size as the template tokens, and sample a subset of them. We randomly sample tokens at a high masking ratio (75%). Our decoder predicts the pixel values for each masked token, and the output of the decoder is reshaped to form a reconstructed image. We use the mean squared error (MSE) between the reconstructed and original images on masked tokens as our loss function."
|
| 868 |
+
},
|
| 869 |
+
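A hedged sketch of the masking and reconstruction step described above: sample a random 75% of token positions, replace them (here with a mask token, which is an assumption about the mechanism), decode, and compute the MSE only on the masked positions. Shapes and the mask-token handling are illustrative.

```python
# Sketch of random masking at a 75% ratio and an MSE loss restricted to the
# masked positions. Shapes are assumptions: tokens (B, L, d), targets the
# per-patch pixel values the decoder should reproduce.
import torch

def random_mask(tokens, mask_token, ratio=0.75):
    B, L, d = tokens.shape
    n_mask = int(L * ratio)
    idx = torch.rand(B, L).argsort(dim=1)[:, :n_mask]   # positions to mask
    mask = torch.zeros(B, L).scatter_(1, idx, 1.0).bool()
    masked = torch.where(mask.unsqueeze(-1), mask_token, tokens)
    return masked, mask

def masked_mse(pred, target, mask):
    # Mean squared error averaged over masked tokens only.
    return ((pred - target) ** 2).mean(dim=-1)[mask].mean()

# Toy usage; in the real pipeline `pred` would come from the masked decoder.
B, L, d = 2, 100, 512
tokens, target = torch.randn(B, L, d), torch.randn(B, L, 768)
masked_tokens, mask = random_mask(tokens, torch.zeros(d))
pred = torch.randn(B, L, 768)
loss = masked_mse(pred, target, mask)
```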
{
|
| 870 |
+
"type": "title",
|
| 871 |
+
"bbox": [
|
| 872 |
+
0.084,
|
| 873 |
+
0.634,
|
| 874 |
+
0.307,
|
| 875 |
+
0.649
|
| 876 |
+
],
|
| 877 |
+
"angle": 0,
|
| 878 |
+
"content": "3.3 Training and Inference"
|
| 879 |
+
},
|
| 880 |
+
{
|
| 881 |
+
"type": "text",
|
| 882 |
+
"bbox": [
|
| 883 |
+
0.082,
|
| 884 |
+
0.651,
|
| 885 |
+
0.48,
|
| 886 |
+
0.818
|
| 887 |
+
],
|
| 888 |
+
"angle": 0,
|
| 889 |
+
"content": "Our decoder is only used in the training phase, while does not participate in the inference phase, hence it doesn't affect the tracking speed. During the training phase, our tracker takes a triplet input consisting of one search region and two templates similar to STARK(Yan et al. 2021). We randomly sample multiple frames from sequences in the training set, select the first frame and the second frame as templates, and the last frame as the search region. In the target localization training, we train the whole network except the scoring head in an end-to-end manner with the combination of \\( L1 \\) Loss, generalized IoU loss (Rezatofighi et al. 2019), and decoder loss \\( L_{dec} \\). The full loss function is defined as follows:"
|
| 890 |
+
},
|
| 891 |
+
{
|
| 892 |
+
"type": "equation",
|
| 893 |
+
"bbox": [
|
| 894 |
+
0.095,
|
| 895 |
+
0.823,
|
| 896 |
+
0.48,
|
| 897 |
+
0.841
|
| 898 |
+
],
|
| 899 |
+
"angle": 0,
|
| 900 |
+
"content": "\\[\nL o s s = \\lambda_ {L 1} L _ {1} \\left(B _ {i}, \\hat {B} _ {i}\\right) + \\lambda_ {g} L _ {g} \\left(B _ {i}, \\hat {B} _ {i}\\right) + \\lambda_ {d e c} L _ {d e c} \\tag {7}\n\\]"
|
| 901 |
+
},
|
| 902 |
+
{
|
| 903 |
+
"type": "text",
|
| 904 |
+
"bbox": [
|
| 905 |
+
0.082,
|
| 906 |
+
0.844,
|
| 907 |
+
0.481,
|
| 908 |
+
0.89
|
| 909 |
+
],
|
| 910 |
+
"angle": 0,
|
| 911 |
+
"content": "where \\(\\lambda_{L1} = 5.0\\), \\(\\lambda_{g} = 2.0\\) and \\(\\lambda_{dec} = 0.3\\) are the weighting factors of three losses, \\(\\hat{B}_i\\) is the estimated box of the target and \\(B_i\\) is the ground-truth bounding box. The decoder"
|
| 912 |
+
},
|
| 913 |
+
{
|
| 914 |
+
"type": "text",
|
| 915 |
+
"bbox": [
|
| 916 |
+
0.517,
|
| 917 |
+
0.069,
|
| 918 |
+
0.672,
|
| 919 |
+
0.083
|
| 920 |
+
],
|
| 921 |
+
"angle": 0,
|
| 922 |
+
"content": "loss \\(L_{dec}\\) is defined as:"
|
| 923 |
+
},
|
| 924 |
+
{
|
| 925 |
+
"type": "equation",
|
| 926 |
+
"bbox": [
|
| 927 |
+
0.574,
|
| 928 |
+
0.093,
|
| 929 |
+
0.913,
|
| 930 |
+
0.109
|
| 931 |
+
],
|
| 932 |
+
"angle": 0,
|
| 933 |
+
"content": "\\[\nL _ {d e c} = L _ {2} \\left(z, z _ {p}\\right) + L _ {2} \\left(s, s _ {p}\\right) + L _ {2} \\left(z, s _ {p}\\right) \\tag {8}\n\\]"
|
| 934 |
+
},
|
| 935 |
+
{
|
| 936 |
+
"type": "text",
|
| 937 |
+
"bbox": [
|
| 938 |
+
0.516,
|
| 939 |
+
0.117,
|
| 940 |
+
0.913,
|
| 941 |
+
0.16
|
| 942 |
+
],
|
| 943 |
+
"angle": 0,
|
| 944 |
+
"content": "where the \\(L_{2}\\) is the MSE loss, \\(z\\) and \\(s\\) represent the original template image and search image, \\(z_{p}\\) and \\(s_p\\) represent the predicting template image and search image respectively."
|
| 945 |
+
},
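Putting Eqs. (7) and (8) together, a hedged sketch of the full training objective might look as follows. The `giou_loss` helper and the three reconstruction predictions are assumed to be available, and for brevity the MSE terms are written over full images rather than restricted to masked tokens as in the masking-and-reconstruction step.

```python
# Sketch of the training objective from Eqs. (7) and (8); helper names and
# the full-image MSE simplification are assumptions, not the released code.
import torch.nn.functional as F

LAMBDA_L1, LAMBDA_GIOU, LAMBDA_DEC = 5.0, 2.0, 0.3

def decoder_loss(z, s, z_self_pred, s_self_pred, z_cross_pred):
    # Eq. (8): self-reconstruction of template and search images, plus
    # cross-reconstruction of the template from search tokens.
    return (F.mse_loss(z_self_pred, z)
            + F.mse_loss(s_self_pred, s)
            + F.mse_loss(z_cross_pred, z))

def total_loss(pred_box, gt_box, z, s, z_self_pred, s_self_pred, z_cross_pred, giou_loss):
    # Eq. (7): weighted sum of L1, generalized IoU, and the decoder loss.
    return (LAMBDA_L1 * F.l1_loss(pred_box, gt_box)
            + LAMBDA_GIOU * giou_loss(pred_box, gt_box)
            + LAMBDA_DEC * decoder_loss(z, s, z_self_pred, s_self_pred, z_cross_pred))
```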
|
| 946 |
+
{
|
| 947 |
+
"type": "text",
|
| 948 |
+
"bbox": [
|
| 949 |
+
0.516,
|
| 950 |
+
0.16,
|
| 951 |
+
0.915,
|
| 952 |
+
0.271
|
| 953 |
+
],
|
| 954 |
+
"angle": 0,
|
| 955 |
+
"content": "In the inference phase, we use two templates of the same size as the input. One of which is the initial template and fixed, the other is online updated and always set to the latest tracking result with high confidence. We use a score head to control the updating of the online template. Our score head consists of the multilayer perceptron (MLP) that receives a class-token(Dosovitskiy et al. 2021) as input and evaluates the accuracy of current tracking results."
|
| 956 |
+
},
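A minimal sketch of this score-gated online update during inference is given below; the `tracker`, `score_head`, and `crop_template` callables and the 0.5 threshold are hypothetical stand-ins, not the released implementation.

```python
# Hedged sketch of inference with a fixed initial template plus an online
# template that is only replaced when the score head is confident.
def track_sequence(tracker, score_head, frames, init_template, update_threshold=0.5):
    online_template = init_template
    results = []
    for frame in frames:
        # Assumed interface: the tracker returns the estimated box and the
        # class token consumed by the MLP score head.
        box, cls_token = tracker(init_template, online_template, frame)
        results.append(box)
        # Only high-confidence results refresh the online template.
        if score_head(cls_token) > update_threshold:
            online_template = crop_template(frame, box)  # hypothetical helper
    return results
```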
|
| 957 |
+
{
|
| 958 |
+
"type": "title",
|
| 959 |
+
"bbox": [
|
| 960 |
+
0.644,
|
| 961 |
+
0.285,
|
| 962 |
+
0.787,
|
| 963 |
+
0.303
|
| 964 |
+
],
|
| 965 |
+
"angle": 0,
|
| 966 |
+
"content": "4 Experiments"
|
| 967 |
+
},
|
| 968 |
+
{
|
| 969 |
+
"type": "title",
|
| 970 |
+
"bbox": [
|
| 971 |
+
0.517,
|
| 972 |
+
0.307,
|
| 973 |
+
0.743,
|
| 974 |
+
0.323
|
| 975 |
+
],
|
| 976 |
+
"angle": 0,
|
| 977 |
+
"content": "4.1 Implementation Details"
|
| 978 |
+
},
|
| 979 |
+
{
|
| 980 |
+
"type": "text",
|
| 981 |
+
"bbox": [
|
| 982 |
+
0.516,
|
| 983 |
+
0.327,
|
| 984 |
+
0.913,
|
| 985 |
+
0.479
|
| 986 |
+
],
|
| 987 |
+
"angle": 0,
|
| 988 |
+
"content": "In order to effectively verify the correctness of our analysis, we design the compact transformer tracker without any other extra attention mechanisms. The only structures remaining are feature extraction and aggregation, and multilayer feature aggregation. The main tracker only consists of a ViT backbone and a box estimation head, we test both ViT-Base and ViT-Large, and the ViT parameters are initialized with MAE (He et al. 2022) pre-trained model. We refer our Compact Transformer tracker as CTTrack-B (the backbone of ViT-Base) and CTTrack-L (the backbone of ViT-Large) in this section."
|
| 989 |
+
},
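The compact structure described here, a plain ViT encoder over the packed template and search tokens followed by a small box head, could be sketched as below. The head layout and the mean pooling over search tokens are illustrative assumptions, not the exact CTTrack box head.

```python
# Hedged sketch of the compact tracker structure: a plain (non-hierarchical)
# ViT-style encoder over the concatenated tokens, then a small box head.
import torch
import torch.nn as nn

class CompactTracker(nn.Module):
    def __init__(self, encoder: nn.Module, embed_dim: int = 768):
        super().__init__()
        self.encoder = encoder                   # e.g. a ViT-Base/Large trunk over token sequences (assumed)
        self.box_head = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 4), nn.Sigmoid(),     # normalized (cx, cy, w, h)
        )

    def forward(self, template_tokens, search_tokens):
        # Packed self-attention: template and search tokens are concatenated
        # and processed jointly by the plain encoder.
        x = torch.cat([template_tokens, search_tokens], dim=1)
        x = self.encoder(x)
        # Only the tokens corresponding to the search region feed the box head.
        search_out = x[:, template_tokens.shape[1]:]
        return self.box_head(search_out.mean(dim=1))
```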
|
| 990 |
+
{
|
| 991 |
+
"type": "text",
|
| 992 |
+
"bbox": [
|
| 993 |
+
0.516,
|
| 994 |
+
0.48,
|
| 995 |
+
0.914,
|
| 996 |
+
0.744
|
| 997 |
+
],
|
| 998 |
+
"angle": 0,
|
| 999 |
+
"content": "We adopt CoCo(Lin et al. 2014), LaSOT(Fan et al. 2019), GOT-10k(Huang, Zhao, and Huang 2019), and TrackingNet(Muller et al. 2018) as our training dataset except the GOT-10k benchmark. The training samples are directly sampled from the same sequence and we apply common data augmentation operations including brightness jitter and horizontal flip. The size of the input template is \\(128 \\times 128\\), the search region is \\(5^2\\) times of the target box area and further resized to \\(320 \\times 320\\). The decoder parameters are initialized with Xavier Uniform. The AdamW optimizer (Loshchilov and Hutter 2018) is employed with initial learning rate (lr) of 1e-4 with the layer-wise decay 0.75, and the lr decreases according to the cosine function with the final decrease factor of 0.1. We adopt a warm-up lr with the 0.2 warm-up factor on the first 5 epochs. We train our model on 4 Nvidia Tesla V100 GPUs for a total of 500 epochs, each epoch uses \\(6 \\times 10^4\\) images. The mini-batch size is set to 128 images with each GPU hosting 32 images. Our approach is implemented in Python 3.7 with PyTorch 1.7."
|
| 1000 |
+
},
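The optimizer and learning-rate schedule described above can be sketched as follows; the grouping of parameters by transformer-block depth and the weight-decay value are assumptions for illustration.

```python
# Hedged sketch of AdamW with layer-wise lr decay (0.75) and a warm-up plus
# cosine schedule (warm-up factor 0.2 over 5 epochs, final factor 0.1).
import math
import torch

def build_optimizer(param_groups_by_depth, base_lr=1e-4, layer_decay=0.75):
    # param_groups_by_depth: list of parameter lists ordered shallow -> deep.
    # Deeper blocks keep a larger fraction of the base learning rate.
    num_layers = len(param_groups_by_depth)
    groups = [
        {"params": params, "lr": base_lr * (layer_decay ** (num_layers - 1 - depth))}
        for depth, params in enumerate(param_groups_by_depth)
    ]
    # weight_decay value is an assumption, not taken from the paper.
    return torch.optim.AdamW(groups, lr=base_lr, weight_decay=1e-4)

def lr_factor(epoch, total_epochs=500, warmup_epochs=5, warmup_factor=0.2, final_factor=0.1):
    if epoch < warmup_epochs:
        # Linear warm-up from the warm-up factor to 1.0.
        return warmup_factor + (1 - warmup_factor) * epoch / warmup_epochs
    # Cosine decay from 1.0 down to the final factor.
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return final_factor + (1 - final_factor) * 0.5 * (1 + math.cos(math.pi * progress))
```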
|
| 1001 |
+
{
|
| 1002 |
+
"type": "title",
|
| 1003 |
+
"bbox": [
|
| 1004 |
+
0.517,
|
| 1005 |
+
0.757,
|
| 1006 |
+
0.679,
|
| 1007 |
+
0.773
|
| 1008 |
+
],
|
| 1009 |
+
"angle": 0,
|
| 1010 |
+
"content": "4.2 Ablation Study"
|
| 1011 |
+
},
|
| 1012 |
+
{
|
| 1013 |
+
"type": "text",
|
| 1014 |
+
"bbox": [
|
| 1015 |
+
0.516,
|
| 1016 |
+
0.777,
|
| 1017 |
+
0.913,
|
| 1018 |
+
0.834
|
| 1019 |
+
],
|
| 1020 |
+
"angle": 0,
|
| 1021 |
+
"content": "We ablate our compact transformer tracker on several intriguing properties using the challenging LaSOT dataset and report the Area Under the Curve (AUC) and Precision (Prec) as the validation accuracy."
|
| 1022 |
+
},
|
| 1023 |
+
{
|
| 1024 |
+
"type": "text",
|
| 1025 |
+
"bbox": [
|
| 1026 |
+
0.516,
|
| 1027 |
+
0.833,
|
| 1028 |
+
0.914,
|
| 1029 |
+
0.89
|
| 1030 |
+
],
|
| 1031 |
+
"angle": 0,
|
| 1032 |
+
"content": "Backbone Comparison. Table 2 shows the comparison of the transformer backbones between the ViT-Base and ViT-Large backbone. The CTTrack-B reaches a higher tracking speed while the CTTrack-L exhibits a better performance."
|
| 1033 |
+
}
|
| 1034 |
+
],
|
| 1035 |
+
[
|
| 1036 |
+
{
|
| 1037 |
+
"type": "table_caption",
|
| 1038 |
+
"bbox": [
|
| 1039 |
+
0.092,
|
| 1040 |
+
0.066,
|
| 1041 |
+
0.471,
|
| 1042 |
+
0.081
|
| 1043 |
+
],
|
| 1044 |
+
"angle": 0,
|
| 1045 |
+
"content": "Table 2: Model size and speed using different backbones."
|
| 1046 |
+
},
|
| 1047 |
+
{
|
| 1048 |
+
"type": "table",
|
| 1049 |
+
"bbox": [
|
| 1050 |
+
0.088,
|
| 1051 |
+
0.092,
|
| 1052 |
+
0.472,
|
| 1053 |
+
0.151
|
| 1054 |
+
],
|
| 1055 |
+
"angle": 0,
|
| 1056 |
+
"content": "<table><tr><td>Methods</td><td>Params(M)</td><td>FLOPs(G)</td><td>Speed(fps)</td></tr><tr><td>CTTrack-B</td><td>93.8</td><td>48.1</td><td>40</td></tr><tr><td>CTTrack-L</td><td>313.9</td><td>163.7</td><td>22</td></tr></table>"
|
| 1057 |
+
},
|
| 1058 |
+
{
|
| 1059 |
+
"type": "text",
|
| 1060 |
+
"bbox": [
|
| 1061 |
+
0.082,
|
| 1062 |
+
0.177,
|
| 1063 |
+
0.48,
|
| 1064 |
+
0.4
|
| 1065 |
+
],
|
| 1066 |
+
"angle": 0,
|
| 1067 |
+
"content": "Reconstruction Streams. Our decoder enforces three types of reconstruction streams as shown in Figure 4. Table 3 exhibits different configurations of reconstruction streams, through varied combinations of search tokens reconstruct search image (s2s), template tokens reconstruct template image (t2t) and search tokens reconstruct template image(s2t). The result is consistent with the conclusion of our previous analysis that self-information enhancement (#5) plays the most important role in transformer tracking, compared to cross-information aggregation(#4). Besides, search image information has more influence than the template information, the s2s (#2) improves performance the most among all streams (#2, #3, #4), from 64.0 to 64.7 in AUC score. After adopting all three reconstruction streams, tracking accuracy improved by an impressive AUC score of \\(1.8\\%\\), which validates the effectiveness of our masked modeling decoders."
|
| 1068 |
+
},
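The three streams ablated in Table 3 can be toggled independently; a hedged sketch of how the s2s / t2t / s2t flags would compose the decoder loss is shown below (the prediction tensors are assumed to come from the self- and cross-decoders).

```python
# Illustration of how the ablation flags of Table 3 compose the decoder loss;
# not the authors' code, and the masked-token restriction is omitted here.
import torch.nn.functional as F

def ablation_decoder_loss(z, s, z_self_pred, s_self_pred, z_cross_pred,
                          use_s2s=True, use_t2t=True, use_s2t=True):
    loss = 0.0
    if use_s2s:   # search tokens reconstruct the search image
        loss = loss + F.mse_loss(s_self_pred, s)
    if use_t2t:   # template tokens reconstruct the template image
        loss = loss + F.mse_loss(z_self_pred, z)
    if use_s2t:   # search tokens cross-reconstruct the template image
        loss = loss + F.mse_loss(z_cross_pred, z)
    return loss
```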
|
| 1069 |
+
{
|
| 1070 |
+
"type": "table_caption",
|
| 1071 |
+
"bbox": [
|
| 1072 |
+
0.082,
|
| 1073 |
+
0.413,
|
| 1074 |
+
0.481,
|
| 1075 |
+
0.47
|
| 1076 |
+
],
|
| 1077 |
+
"angle": 0,
|
| 1078 |
+
"content": "Table 3: Ablation Study for the reconstruction streams. s2s represents search tokens reconstruct search image, t2t denotes template tokens reconstruct template image and s2t means search tokens reconstruct template image."
|
| 1079 |
+
},
|
| 1080 |
+
{
|
| 1081 |
+
"type": "table",
|
| 1082 |
+
"bbox": [
|
| 1083 |
+
0.117,
|
| 1084 |
+
0.48,
|
| 1085 |
+
0.445,
|
| 1086 |
+
0.609
|
| 1087 |
+
],
|
| 1088 |
+
"angle": 0,
|
| 1089 |
+
"content": "<table><tr><td rowspan=\"2\">#</td><td colspan=\"3\">Recons Type</td><td rowspan=\"2\">AUC</td><td rowspan=\"2\">Prec</td></tr><tr><td>s2s</td><td>t2t</td><td>s2t</td></tr><tr><td>1</td><td>-</td><td>-</td><td>-</td><td>64.0</td><td>67.7</td></tr><tr><td>2</td><td>✓</td><td>-</td><td>-</td><td>64.7</td><td>69.1</td></tr><tr><td>3</td><td>-</td><td>✓</td><td>-</td><td>64.4</td><td>68.4</td></tr><tr><td>4</td><td>-</td><td>-</td><td>✓</td><td>64.4</td><td>68.6</td></tr><tr><td>5</td><td>✓</td><td>✓</td><td>-</td><td>65.1</td><td>69.9</td></tr><tr><td>6</td><td>✓</td><td>✓</td><td>✓</td><td>65.8</td><td>70.9</td></tr></table>"
|
| 1090 |
+
},
|
| 1091 |
+
{
|
| 1092 |
+
"type": "text",
|
| 1093 |
+
"bbox": [
|
| 1094 |
+
0.082,
|
| 1095 |
+
0.623,
|
| 1096 |
+
0.48,
|
| 1097 |
+
0.749
|
| 1098 |
+
],
|
| 1099 |
+
"angle": 0,
|
| 1100 |
+
"content": "Masking ratio. When we conduct reconstruction streams, we randomly mask the input tokens according to a predefined ratio. Table 4 shows the influence of different masking ratios. We mask the encoded template token and search tokens with a random sampling strategy at different masking rates. Similar to the conclusion obtained by the MAE(He et al. 2022), the optimal ratios are relatively high, and the accuracy increases steadily with the masking ratio growing until reaching \\(75\\%\\), which produces the best tracking results."
|
| 1101 |
+
},
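A tiny sketch of this masking-ratio sweep is shown below, with `train_and_evaluate` standing in hypothetically for the full training and LaSOT evaluation pipeline.

```python
# Hedged sketch of the masking-ratio ablation: retrain and evaluate at several
# ratios and keep the best-performing one (75% in the paper's Table 4).
def sweep_mask_ratio(train_and_evaluate, ratios=(0.25, 0.50, 0.75, 0.90)):
    results = {r: train_and_evaluate(mask_ratio=r) for r in ratios}  # ratio -> AUC
    best = max(results, key=results.get)
    return best, results
```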
|
| 1102 |
+
{
|
| 1103 |
+
"type": "table_caption",
|
| 1104 |
+
"bbox": [
|
| 1105 |
+
0.151,
|
| 1106 |
+
0.762,
|
| 1107 |
+
0.411,
|
| 1108 |
+
0.777
|
| 1109 |
+
],
|
| 1110 |
+
"angle": 0,
|
| 1111 |
+
"content": "Table 4: Comparison on masking ratio."
|
| 1112 |
+
},
|
| 1113 |
+
{
|
| 1114 |
+
"type": "table",
|
| 1115 |
+
"bbox": [
|
| 1116 |
+
0.135,
|
| 1117 |
+
0.788,
|
| 1118 |
+
0.427,
|
| 1119 |
+
0.847
|
| 1120 |
+
],
|
| 1121 |
+
"angle": 0,
|
| 1122 |
+
"content": "<table><tr><td>Mask Ratio</td><td>25%</td><td>50%</td><td>75%</td><td>90%</td></tr><tr><td>AUC</td><td>64.6</td><td>65.7</td><td>65.8</td><td>64.9</td></tr><tr><td>Prec</td><td>69.0</td><td>70.7</td><td>70.9</td><td>69.5</td></tr></table>"
|
| 1123 |
+
},
|
| 1124 |
+
{
|
| 1125 |
+
"type": "text",
|
| 1126 |
+
"bbox": [
|
| 1127 |
+
0.082,
|
| 1128 |
+
0.861,
|
| 1129 |
+
0.48,
|
| 1130 |
+
0.89
|
| 1131 |
+
],
|
| 1132 |
+
"angle": 0,
|
| 1133 |
+
"content": "Online Template Updating. We evaluate the effect of the online update strategy in our method. The ablation study"
|
| 1134 |
+
},
|
| 1135 |
+
{
|
| 1136 |
+
"type": "image_caption",
|
| 1137 |
+
"bbox": [
|
| 1138 |
+
0.523,
|
| 1139 |
+
0.066,
|
| 1140 |
+
0.572,
|
| 1141 |
+
0.075
|
| 1142 |
+
],
|
| 1143 |
+
"angle": 0,
|
| 1144 |
+
"content": "Target"
|
| 1145 |
+
},
|
| 1146 |
+
{
|
| 1147 |
+
"type": "image",
|
| 1148 |
+
"bbox": [
|
| 1149 |
+
0.523,
|
| 1150 |
+
0.075,
|
| 1151 |
+
0.577,
|
| 1152 |
+
0.117
|
| 1153 |
+
],
|
| 1154 |
+
"angle": 0,
|
| 1155 |
+
"content": null
|
| 1156 |
+
},
|
| 1157 |
+
{
|
| 1158 |
+
"type": "image_caption",
|
| 1159 |
+
"bbox": [
|
| 1160 |
+
0.579,
|
| 1161 |
+
0.066,
|
| 1162 |
+
0.651,
|
| 1163 |
+
0.075
|
| 1164 |
+
],
|
| 1165 |
+
"angle": 0,
|
| 1166 |
+
"content": "S-to-S"
|
| 1167 |
+
},
|
| 1168 |
+
{
|
| 1169 |
+
"type": "image",
|
| 1170 |
+
"bbox": [
|
| 1171 |
+
0.583,
|
| 1172 |
+
0.075,
|
| 1173 |
+
0.687,
|
| 1174 |
+
0.116
|
| 1175 |
+
],
|
| 1176 |
+
"angle": 0,
|
| 1177 |
+
"content": null
|
| 1178 |
+
},
|
| 1179 |
+
{
|
| 1180 |
+
"type": "image_caption",
|
| 1181 |
+
"bbox": [
|
| 1182 |
+
0.69,
|
| 1183 |
+
0.066,
|
| 1184 |
+
0.762,
|
| 1185 |
+
0.075
|
| 1186 |
+
],
|
| 1187 |
+
"angle": 0,
|
| 1188 |
+
"content": "T-to-T"
|
| 1189 |
+
},
|
| 1190 |
+
{
|
| 1191 |
+
"type": "image",
|
| 1192 |
+
"bbox": [
|
| 1193 |
+
0.693,
|
| 1194 |
+
0.075,
|
| 1195 |
+
0.798,
|
| 1196 |
+
0.116
|
| 1197 |
+
],
|
| 1198 |
+
"angle": 0,
|
| 1199 |
+
"content": null
|
| 1200 |
+
},
|
| 1201 |
+
{
|
| 1202 |
+
"type": "image_caption",
|
| 1203 |
+
"bbox": [
|
| 1204 |
+
0.803,
|
| 1205 |
+
0.066,
|
| 1206 |
+
0.874,
|
| 1207 |
+
0.075
|
| 1208 |
+
],
|
| 1209 |
+
"angle": 0,
|
| 1210 |
+
"content": "S-to-T"
|
| 1211 |
+
},
|
| 1212 |
+
{
|
| 1213 |
+
"type": "image",
|
| 1214 |
+
"bbox": [
|
| 1215 |
+
0.804,
|
| 1216 |
+
0.075,
|
| 1217 |
+
0.908,
|
| 1218 |
+
0.116
|
| 1219 |
+
],
|
| 1220 |
+
"angle": 0,
|
| 1221 |
+
"content": null
|
| 1222 |
+
},
|
| 1223 |
+
{
|
| 1224 |
+
"type": "image",
|
| 1225 |
+
"bbox": [
|
| 1226 |
+
0.524,
|
| 1227 |
+
0.117,
|
| 1228 |
+
0.577,
|
| 1229 |
+
0.159
|
| 1230 |
+
],
|
| 1231 |
+
"angle": 0,
|
| 1232 |
+
"content": null
|
| 1233 |
+
},
|
| 1234 |
+
{
|
| 1235 |
+
"type": "image",
|
| 1236 |
+
"bbox": [
|
| 1237 |
+
0.583,
|
| 1238 |
+
0.118,
|
| 1239 |
+
0.66,
|
| 1240 |
+
0.159
|
| 1241 |
+
],
|
| 1242 |
+
"angle": 0,
|
| 1243 |
+
"content": null
|
| 1244 |
+
},
|
| 1245 |
+
{
|
| 1246 |
+
"type": "image",
|
| 1247 |
+
"bbox": [
|
| 1248 |
+
0.665,
|
| 1249 |
+
0.118,
|
| 1250 |
+
0.798,
|
| 1251 |
+
0.159
|
| 1252 |
+
],
|
| 1253 |
+
"angle": 0,
|
| 1254 |
+
"content": null
|
| 1255 |
+
},
|
| 1256 |
+
{
|
| 1257 |
+
"type": "image",
|
| 1258 |
+
"bbox": [
|
| 1259 |
+
0.524,
|
| 1260 |
+
0.16,
|
| 1261 |
+
0.577,
|
| 1262 |
+
0.202
|
| 1263 |
+
],
|
| 1264 |
+
"angle": 0,
|
| 1265 |
+
"content": null
|
| 1266 |
+
},
|
| 1267 |
+
{
|
| 1268 |
+
"type": "image",
|
| 1269 |
+
"bbox": [
|
| 1270 |
+
0.583,
|
| 1271 |
+
0.16,
|
| 1272 |
+
0.687,
|
| 1273 |
+
0.202
|
| 1274 |
+
],
|
| 1275 |
+
"angle": 0,
|
| 1276 |
+
"content": null
|
| 1277 |
+
},
|
| 1278 |
+
{
|
| 1279 |
+
"type": "image",
|
| 1280 |
+
"bbox": [
|
| 1281 |
+
0.691,
|
| 1282 |
+
0.16,
|
| 1283 |
+
0.798,
|
| 1284 |
+
0.202
|
| 1285 |
+
],
|
| 1286 |
+
"angle": 0,
|
| 1287 |
+
"content": null
|
| 1288 |
+
},
|
| 1289 |
+
{
|
| 1290 |
+
"type": "image",
|
| 1291 |
+
"bbox": [
|
| 1292 |
+
0.803,
|
| 1293 |
+
0.118,
|
| 1294 |
+
0.882,
|
| 1295 |
+
0.159
|
| 1296 |
+
],
|
| 1297 |
+
"angle": 0,
|
| 1298 |
+
"content": null
|
| 1299 |
+
},
|
| 1300 |
+
{
|
| 1301 |
+
"type": "image",
|
| 1302 |
+
"bbox": [
|
| 1303 |
+
0.884,
|
| 1304 |
+
0.118,
|
| 1305 |
+
0.908,
|
| 1306 |
+
0.159
|
| 1307 |
+
],
|
| 1308 |
+
"angle": 0,
|
| 1309 |
+
"content": null
|
| 1310 |
+
},
|
| 1311 |
+
{
|
| 1312 |
+
"type": "image_caption",
|
| 1313 |
+
"bbox": [
|
| 1314 |
+
0.598,
|
| 1315 |
+
0.202,
|
| 1316 |
+
0.667,
|
| 1317 |
+
0.211
|
| 1318 |
+
],
|
| 1319 |
+
"angle": 0,
|
| 1320 |
+
"content": "w/o W"
|
| 1321 |
+
},
|
| 1322 |
+
{
|
| 1323 |
+
"type": "image_caption",
|
| 1324 |
+
"bbox": [
|
| 1325 |
+
0.692,
|
| 1326 |
+
0.202,
|
| 1327 |
+
0.779,
|
| 1328 |
+
0.211
|
| 1329 |
+
],
|
| 1330 |
+
"angle": 0,
|
| 1331 |
+
"content": "W/O W"
|
| 1332 |
+
},
|
| 1333 |
+
{
|
| 1334 |
+
"type": "image",
|
| 1335 |
+
"bbox": [
|
| 1336 |
+
0.799,
|
| 1337 |
+
0.16,
|
| 1338 |
+
0.882,
|
| 1339 |
+
0.202
|
| 1340 |
+
],
|
| 1341 |
+
"angle": 0,
|
| 1342 |
+
"content": null
|
| 1343 |
+
},
|
| 1344 |
+
{
|
| 1345 |
+
"type": "image",
|
| 1346 |
+
"bbox": [
|
| 1347 |
+
0.899,
|
| 1348 |
+
0.16,
|
| 1349 |
+
0.909,
|
| 1350 |
+
0.202
|
| 1351 |
+
],
|
| 1352 |
+
"angle": 0,
|
| 1353 |
+
"content": null
|
| 1354 |
+
},
|
| 1355 |
+
{
|
| 1356 |
+
"type": "image_caption",
|
| 1357 |
+
"bbox": [
|
| 1358 |
+
0.821,
|
| 1359 |
+
0.202,
|
| 1360 |
+
0.866,
|
| 1361 |
+
0.211
|
| 1362 |
+
],
|
| 1363 |
+
"angle": 0,
|
| 1364 |
+
"content": "w/o"
|
| 1365 |
+
},
|
| 1366 |
+
{
|
| 1367 |
+
"type": "image_caption",
|
| 1368 |
+
"bbox": [
|
| 1369 |
+
0.516,
|
| 1370 |
+
0.22,
|
| 1371 |
+
0.914,
|
| 1372 |
+
0.305
|
| 1373 |
+
],
|
| 1374 |
+
"angle": 0,
|
| 1375 |
+
"content": "Figure 5: Visualization of attention map which compares the difference between training with correlative decoder (w) and training without correlative decoder(w/o). S-to-S is self-information enhancement on search image, T-to-T is self-information enhancement on template, S-to-T is cross-information aggregation on search image."
|
| 1376 |
+
},
|
| 1377 |
+
{
|
| 1378 |
+
"type": "text",
|
| 1379 |
+
"bbox": [
|
| 1380 |
+
0.516,
|
| 1381 |
+
0.332,
|
| 1382 |
+
0.913,
|
| 1383 |
+
0.43
|
| 1384 |
+
],
|
| 1385 |
+
"angle": 0,
|
| 1386 |
+
"content": "result is shown in Table 5, #1 represents the performance without template updating. We can see that applying a fixed interval to update the online template (#2) is ineffective as it greatly reduces the quality of template and causes tracking drift. It can be seen in #3, there is a \\(0.2\\%\\) improvement in the AUC score after applying the scoring head to evaluate the accuracy of current tracking results."
|
| 1387 |
+
},
|
| 1388 |
+
{
|
| 1389 |
+
"type": "table_caption",
|
| 1390 |
+
"bbox": [
|
| 1391 |
+
0.516,
|
| 1392 |
+
0.442,
|
| 1393 |
+
0.913,
|
| 1394 |
+
0.498
|
| 1395 |
+
],
|
| 1396 |
+
"angle": 0,
|
| 1397 |
+
"content": "Table 5: Ablation for the online template updating component. Online denotes updating the template at a fixed update interval. Score represents the online template is only updated with high confident samples."
|
| 1398 |
+
},
|
| 1399 |
+
{
|
| 1400 |
+
"type": "table",
|
| 1401 |
+
"bbox": [
|
| 1402 |
+
0.556,
|
| 1403 |
+
0.51,
|
| 1404 |
+
0.872,
|
| 1405 |
+
0.581
|
| 1406 |
+
],
|
| 1407 |
+
"angle": 0,
|
| 1408 |
+
"content": "<table><tr><td></td><td>Online</td><td>Score</td><td>AUC</td><td>Prec</td></tr><tr><td rowspan=\"3\">CTTrack-B</td><td>-</td><td>-</td><td>65.8</td><td>70.9</td></tr><tr><td>✓</td><td>-</td><td>64.9</td><td>69.9</td></tr><tr><td>✓</td><td>✓</td><td>66.0</td><td>71.1</td></tr></table>"
|
| 1409 |
+
},
|
| 1410 |
+
{
|
| 1411 |
+
"type": "text",
|
| 1412 |
+
"bbox": [
|
| 1413 |
+
0.516,
|
| 1414 |
+
0.595,
|
| 1415 |
+
0.913,
|
| 1416 |
+
0.68
|
| 1417 |
+
],
|
| 1418 |
+
"angle": 0,
|
| 1419 |
+
"content": "Visualization of attention maps. We visualize attention maps in Figure5, our tracker adopting the correlative decoder has a stronger discriminative ability. The baseline transformer without a reconstruction decoder tends to lose the target position, and the distractors in the background get suppressed with the training by the correlative decoder."
|
| 1420 |
+
},
|
| 1421 |
+
{
|
| 1422 |
+
"type": "title",
|
| 1423 |
+
"bbox": [
|
| 1424 |
+
0.516,
|
| 1425 |
+
0.69,
|
| 1426 |
+
0.774,
|
| 1427 |
+
0.706
|
| 1428 |
+
],
|
| 1429 |
+
"angle": 0,
|
| 1430 |
+
"content": "4.3 Comparison with the SOTA"
|
| 1431 |
+
},
|
| 1432 |
+
{
|
| 1433 |
+
"type": "text",
|
| 1434 |
+
"bbox": [
|
| 1435 |
+
0.516,
|
| 1436 |
+
0.709,
|
| 1437 |
+
0.913,
|
| 1438 |
+
0.806
|
| 1439 |
+
],
|
| 1440 |
+
"angle": 0,
|
| 1441 |
+
"content": "We compare our compact tracker with the state-of-the-art trackers on UAV123(Mueller, Smith, and Ghanem 2016), LaSOT(Fan et al. 2019), TrackingNet(Muller et al. 2018), GOT-10k(Huang, Zhao, and Huang 2019), and VOT2020(Kristan et al. 2020). For a fairer comparison, here we adopt relative position biases in our ViT backbones, this addition improves AUC by around 1 point."
|
| 1442 |
+
},
|
| 1443 |
+
{
|
| 1444 |
+
"type": "text",
|
| 1445 |
+
"bbox": [
|
| 1446 |
+
0.516,
|
| 1447 |
+
0.806,
|
| 1448 |
+
0.913,
|
| 1449 |
+
0.89
|
| 1450 |
+
],
|
| 1451 |
+
"angle": 0,
|
| 1452 |
+
"content": "UAV123 gathers an application-specific collection of 123 sequences. It adopts the AUC and Precision (P) as the evaluation metrics. As shown in Table 1, Our CTTrack-L outperforms previous trackers and exhibits very competitive performance (71.3% AUC) when compared to the previous best-performing tracker CSWinTT (70.5% AUC)."
|
| 1453 |
+
}
|
| 1454 |
+
],
|
| 1455 |
+
[
|
| 1456 |
+
{
|
| 1457 |
+
"type": "table_caption",
|
| 1458 |
+
"bbox": [
|
| 1459 |
+
0.083,
|
| 1460 |
+
0.066,
|
| 1461 |
+
0.916,
|
| 1462 |
+
0.096
|
| 1463 |
+
],
|
| 1464 |
+
"angle": 0,
|
| 1465 |
+
"content": "Table 6: Comparisons with previous state-of-the-art trackers on four challenge benchmarks. The red, green and blue indicate performances ranked at first, second, and third places. The tracker -GOT denotes only trained on the GOT-10k train split."
|
| 1466 |
+
},
|
| 1467 |
+
{
|
| 1468 |
+
"type": "table",
|
| 1469 |
+
"bbox": [
|
| 1470 |
+
0.09,
|
| 1471 |
+
0.106,
|
| 1472 |
+
0.905,
|
| 1473 |
+
0.401
|
| 1474 |
+
],
|
| 1475 |
+
"angle": 0,
|
| 1476 |
+
"content": "<table><tr><td rowspan=\"2\">Methods</td><td colspan=\"2\">UAV123</td><td colspan=\"3\">LaSOT</td><td colspan=\"3\">TrackingNet</td><td colspan=\"3\">GOT-10k</td></tr><tr><td>AUC</td><td>P</td><td>AUC</td><td>PNorm</td><td>P</td><td>AUC</td><td>PNorm</td><td>P</td><td>AO</td><td>SR0.5</td><td>SR0.75</td></tr><tr><td>CTTrack-L</td><td>71.3</td><td>93.3</td><td>69.8</td><td>79.7</td><td>76.2</td><td>84.9</td><td>89.1</td><td>83.5</td><td>75.3</td><td>84.5</td><td>74.0</td></tr><tr><td>CTTrack-B</td><td>68.8</td><td>89.5</td><td>67.8</td><td>77.8</td><td>74.0</td><td>82.5</td><td>87.1</td><td>80.3</td><td>73.5</td><td>83.5</td><td>70.6</td></tr><tr><td>CTTrack-L -GOT</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>72.8</td><td>81.3</td><td>71.5</td></tr><tr><td>CTTrack-B -GOT</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>71.3</td><td>80.7</td><td>70.3</td></tr><tr><td>MixFormer(Cui et al. 2022)</td><td>69.5</td><td>91.0</td><td>70.1</td><td>79.9</td><td>76.3</td><td>83.9</td><td>88.9</td><td>83.1</td><td>70.7</td><td>80.0</td><td>67.8</td></tr><tr><td>CSWinTT(Song et al. 2022)</td><td>70.5</td><td>90.3</td><td>66.2</td><td>75.2</td><td>70.9</td><td>81.9</td><td>86.7</td><td>79.5</td><td>69.4</td><td>78.9</td><td>65.4</td></tr><tr><td>UTT(Shen et al. 2022)</td><td>-</td><td>-</td><td>64.6</td><td>-</td><td>67.2</td><td>79.7</td><td>-</td><td>77.0</td><td>67.2</td><td>76.3</td><td>60.5</td></tr><tr><td>STARK(Yan et al. 2021)</td><td>-</td><td>-</td><td>67.1</td><td>77.0</td><td>-</td><td>82.0</td><td>86.9</td><td>-</td><td>68.8</td><td>78.1</td><td>64.1</td></tr><tr><td>TransT(Chen et al. 2021)</td><td>68.1</td><td>87.6</td><td>64.9</td><td>73.8</td><td>69.0</td><td>81.4</td><td>86.7</td><td>80.3</td><td>67.1</td><td>76.8</td><td>60.9</td></tr><tr><td>TrDiMP(Wang et al. 2021)</td><td>67.0</td><td>87.6</td><td>64.0</td><td>73.2</td><td>66.6</td><td>78.4</td><td>83.3</td><td>73.1</td><td>68.8</td><td>80.5</td><td>59.7</td></tr><tr><td>STMTrack(Fu et al. 2021)</td><td>64.7</td><td>-</td><td>60.6</td><td>69.3</td><td>63.3</td><td>80.3</td><td>85.1</td><td>76.7</td><td>64.2</td><td>73.7</td><td>57.5</td></tr><tr><td>AutoMatch(Zhang et al. 2021)</td><td>64.4</td><td>83.8</td><td>58.2</td><td>67.5</td><td>59.9</td><td>76.0</td><td>82.4</td><td>72.5</td><td>65.2</td><td>76.6</td><td>54.3</td></tr><tr><td>SiamGAT(Guo et al. 2021)</td><td>64.6</td><td>84.3</td><td>53.9</td><td>63.3</td><td>53.0</td><td>-</td><td>-</td><td>-</td><td>62.7</td><td>74.3</td><td>48.8</td></tr><tr><td>KYS(Bhat et al. 2020)</td><td>-</td><td>-</td><td>55.4</td><td>63.3</td><td>55.8</td><td>74.0</td><td>80.0</td><td>68.8</td><td>63.6</td><td>75.1</td><td>51.5</td></tr><tr><td>MAML(Wang et al. 2020)</td><td>-</td><td>-</td><td>52.3</td><td>-</td><td>53.1</td><td>75.7</td><td>82.2</td><td>72.5</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SiamAttn(Yu et al. 2020)</td><td>65.0</td><td>84.5</td><td>56.0</td><td>64.8</td><td>-</td><td>75.2</td><td>81.7</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SiamFC++(Xu et al. 2020)</td><td>61.8</td><td>80.4</td><td>54.4</td><td>62.3</td><td>54.7</td><td>75.4</td><td>80.0</td><td>70.5</td><td>59.5</td><td>69.5</td><td>47.9</td></tr><tr><td>SiamRPN++(Li et al. 2019)</td><td>64.2</td><td>84.0</td><td>49.6</td><td>56.9</td><td>49.1</td><td>73.3</td><td>80.0</td><td>69.4</td><td>51.7</td><td>61.6</td><td>32.5</td></tr><tr><td>DiMP(Bhat et al. 
2019)</td><td>64.2</td><td>84.9</td><td>57.7</td><td>66.4</td><td>57.9</td><td>74.0</td><td>80.1</td><td>68.7</td><td>61.1</td><td>71.7</td><td>49.2</td></tr><tr><td>ATOM(Danelljan et al. 2019)</td><td>61.7</td><td>82.7</td><td>51.5</td><td>57.6</td><td>50.5</td><td>70.3</td><td>77.1</td><td>64.8</td><td>55.6</td><td>63.4</td><td>40.2</td></tr></table>"
|
| 1477 |
+
},
|
| 1478 |
+
{
|
| 1479 |
+
"type": "table_caption",
|
| 1480 |
+
"bbox": [
|
| 1481 |
+
0.083,
|
| 1482 |
+
0.423,
|
| 1483 |
+
0.48,
|
| 1484 |
+
0.454
|
| 1485 |
+
],
|
| 1486 |
+
"angle": 0,
|
| 1487 |
+
"content": "Table 7: Comparisons on VOT2020, where trackers only predict bounding boxes rather than masks."
|
| 1488 |
+
},
|
| 1489 |
+
{
|
| 1490 |
+
"type": "table",
|
| 1491 |
+
"bbox": [
|
| 1492 |
+
0.107,
|
| 1493 |
+
0.463,
|
| 1494 |
+
0.454,
|
| 1495 |
+
0.597
|
| 1496 |
+
],
|
| 1497 |
+
"angle": 0,
|
| 1498 |
+
"content": "<table><tr><td>Methods</td><td>EAO↑</td><td>Accuracy↑</td><td>Robustness↑</td></tr><tr><td>SiamFC</td><td>0.179</td><td>0.418</td><td>0.502</td></tr><tr><td>ATOM</td><td>0.271</td><td>0.462</td><td>0.734</td></tr><tr><td>DiMP</td><td>0.274</td><td>0.457</td><td>0.740</td></tr><tr><td>UPDT</td><td>0.278</td><td>0.465</td><td>0.755</td></tr><tr><td>TransT</td><td>0.293</td><td>0.477</td><td>0.754</td></tr><tr><td>CSWinTT</td><td>0.304</td><td>0.480</td><td>0.787</td></tr><tr><td>CTTrack-L</td><td>0.287</td><td>0.453</td><td>0.787</td></tr></table>"
|
| 1499 |
+
},
|
| 1500 |
+
{
|
| 1501 |
+
"type": "text",
|
| 1502 |
+
"bbox": [
|
| 1503 |
+
0.082,
|
| 1504 |
+
0.623,
|
| 1505 |
+
0.48,
|
| 1506 |
+
0.79
|
| 1507 |
+
],
|
| 1508 |
+
"angle": 0,
|
| 1509 |
+
"content": "LaSOT is a long-term dataset including 1400 sequences and distributed over 14 attributes, the testing subset of LaSOT contains 280 sequences. Methods are ranked by the AUC, P, and Normalized Precision \\((\\mathbb{P}_{Norm})\\). Our CTTrack-L achieves the AUC \\((69.8\\%)\\) and Prec \\((76.2\\%)\\), which is an excellent result that outperforms other methods only except the MixFormer. Our tracker has lower performance than MixFormer on LaSOT because it contains long-term sequences and large variations in content. ViT backbone is a plain and non-hierarchical architecture that maintains feature maps at a certain scale, which may not be able to well handle long-term tracking sequences with scale variations."
|
| 1510 |
+
},
|
| 1511 |
+
{
|
| 1512 |
+
"type": "text",
|
| 1513 |
+
"bbox": [
|
| 1514 |
+
0.082,
|
| 1515 |
+
0.79,
|
| 1516 |
+
0.48,
|
| 1517 |
+
0.874
|
| 1518 |
+
],
|
| 1519 |
+
"angle": 0,
|
| 1520 |
+
"content": "TrackingNet is a large-scale tracking dataset consisting of 511 sequences for testing. The evaluation is performed on the online server. Table 1 shows that CTTrack-L performs better quality and ranks first in AUC score at \\(84.9\\%\\). The gain is \\(1.0\\%\\) improvement when compared with the previous best results."
|
| 1521 |
+
},
|
| 1522 |
+
{
|
| 1523 |
+
"type": "text",
|
| 1524 |
+
"bbox": [
|
| 1525 |
+
0.084,
|
| 1526 |
+
0.875,
|
| 1527 |
+
0.48,
|
| 1528 |
+
0.89
|
| 1529 |
+
],
|
| 1530 |
+
"angle": 0,
|
| 1531 |
+
"content": "GOT-10k contains over 10k videos for training and 180 for"
|
| 1532 |
+
},
|
| 1533 |
+
{
|
| 1534 |
+
"type": "text",
|
| 1535 |
+
"bbox": [
|
| 1536 |
+
0.516,
|
| 1537 |
+
0.426,
|
| 1538 |
+
0.914,
|
| 1539 |
+
0.522
|
| 1540 |
+
],
|
| 1541 |
+
"angle": 0,
|
| 1542 |
+
"content": "testing. It forbids the trackers to use external datasets for training. We follow this protocol by retraining our trackers to only use the GOT10k train split. As in Table 1, MixFormer and CSWinTT provide the best performance, with an AO score of \\(70.7\\%\\) and \\(69.4\\%\\). Our CTTrack-L has obtained an AO score of \\(72.8\\%\\), significantly outperforming the best existing tracker by \\(2.1\\%\\)."
|
| 1543 |
+
},
|
| 1544 |
+
{
|
| 1545 |
+
"type": "text",
|
| 1546 |
+
"bbox": [
|
| 1547 |
+
0.516,
|
| 1548 |
+
0.522,
|
| 1549 |
+
0.915,
|
| 1550 |
+
0.62
|
| 1551 |
+
],
|
| 1552 |
+
"angle": 0,
|
| 1553 |
+
"content": "VOT2020 benchmark contains 60 challenging videos. The performance is evaluated using the expected average overlap (EAO), which takes both accuracy (A) and robustness (R). Since our algorithm does not output a segmentation mask, trackers that only predict bounding boxes are selected for comparisons to ensure fairness. It can be seen from Table 7 that our CTTrack-L obtains an EAO of 0.287."
|
| 1554 |
+
},
|
| 1555 |
+
{
|
| 1556 |
+
"type": "title",
|
| 1557 |
+
"bbox": [
|
| 1558 |
+
0.651,
|
| 1559 |
+
0.633,
|
| 1560 |
+
0.78,
|
| 1561 |
+
0.649
|
| 1562 |
+
],
|
| 1563 |
+
"angle": 0,
|
| 1564 |
+
"content": "5 Conclusion"
|
| 1565 |
+
},
|
| 1566 |
+
{
|
| 1567 |
+
"type": "text",
|
| 1568 |
+
"bbox": [
|
| 1569 |
+
0.516,
|
| 1570 |
+
0.653,
|
| 1571 |
+
0.915,
|
| 1572 |
+
0.891
|
| 1573 |
+
],
|
| 1574 |
+
"angle": 0,
|
| 1575 |
+
"content": "In this work, we analyze the information stream in the attention mechanism in depth. We prove that the vanilla self-attention structure is sufficient for information aggregation, and employ the three information streams of the packed self-attention in the transformer tracking framework. To enhance the information representation, we design the correlative masked decoder consisting of a self-decoder and a cross-decoder to reconstruct the original pixels of both template and search image. Extensive experiments demonstrate the effectiveness of our correlative masked modeling strategy and our compact transformer tracker exhibits impressive performance over previous trackers. In addition, our correlative masked decoder can be plugged into other transformer trackers, which can effectively improve the tracking performance without compromising speed. In the future, we plan to combine the feature pyramid or convolution module for better performance on long-term tracking sequences."
|
| 1576 |
+
}
|
| 1577 |
+
],
|
| 1578 |
+
[
|
| 1579 |
+
{
|
| 1580 |
+
"type": "title",
|
| 1581 |
+
"bbox": [
|
| 1582 |
+
0.204,
|
| 1583 |
+
0.068,
|
| 1584 |
+
0.36,
|
| 1585 |
+
0.084
|
| 1586 |
+
],
|
| 1587 |
+
"angle": 0,
|
| 1588 |
+
"content": "Acknowledgments"
|
| 1589 |
+
},
|
| 1590 |
+
{
|
| 1591 |
+
"type": "text",
|
| 1592 |
+
"bbox": [
|
| 1593 |
+
0.082,
|
| 1594 |
+
0.087,
|
| 1595 |
+
0.481,
|
| 1596 |
+
0.187
|
| 1597 |
+
],
|
| 1598 |
+
"angle": 0,
|
| 1599 |
+
"content": "This work is supported by the national key research and development program of China under Grant No.2020YFB1805601, National Natural Science Foundation of China (NSFC No. 62272184), and CCF-Tencent Open Research Fund (CCF-Tencent RAGR20220120). The computation is completed in the HPC Platform of Huazhong University of Science and Technology."
|
| 1600 |
+
},
|
| 1601 |
+
{
|
| 1602 |
+
"type": "title",
|
| 1603 |
+
"bbox": [
|
| 1604 |
+
0.234,
|
| 1605 |
+
0.198,
|
| 1606 |
+
0.331,
|
| 1607 |
+
0.214
|
| 1608 |
+
],
|
| 1609 |
+
"angle": 0,
|
| 1610 |
+
"content": "References"
|
| 1611 |
+
},
|
| 1612 |
+
{
|
| 1613 |
+
"type": "ref_text",
|
| 1614 |
+
"bbox": [
|
| 1615 |
+
0.086,
|
| 1616 |
+
0.218,
|
| 1617 |
+
0.48,
|
| 1618 |
+
0.247
|
| 1619 |
+
],
|
| 1620 |
+
"angle": 0,
|
| 1621 |
+
"content": "Bao, H.; Dong, L.; and Wei, F. 2021. Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254."
|
| 1622 |
+
},
|
| 1623 |
+
{
|
| 1624 |
+
"type": "ref_text",
|
| 1625 |
+
"bbox": [
|
| 1626 |
+
0.085,
|
| 1627 |
+
0.25,
|
| 1628 |
+
0.48,
|
| 1629 |
+
0.307
|
| 1630 |
+
],
|
| 1631 |
+
"angle": 0,
|
| 1632 |
+
"content": "Bertinetto, L.; Valmadre, J.; Henriques, J. F.; Vedaldi, A.; and Torr, P. H. S. 2016. Fully-Convolutional Siamese Networks for Object Tracking. In Proceedings of the ECCV, 850-865. Springer."
|
| 1633 |
+
},
|
| 1634 |
+
{
|
| 1635 |
+
"type": "ref_text",
|
| 1636 |
+
"bbox": [
|
| 1637 |
+
0.086,
|
| 1638 |
+
0.309,
|
| 1639 |
+
0.48,
|
| 1640 |
+
0.351
|
| 1641 |
+
],
|
| 1642 |
+
"angle": 0,
|
| 1643 |
+
"content": "Bhat, G.; Danelljan, M.; Gool, L. V.; and Timofte, R. 2019. Learning Discriminative Model Prediction for Tracking. In Proceedings of the ICCV, 6182-6191. IEEE."
|
| 1644 |
+
},
|
| 1645 |
+
{
|
| 1646 |
+
"type": "ref_text",
|
| 1647 |
+
"bbox": [
|
| 1648 |
+
0.086,
|
| 1649 |
+
0.354,
|
| 1650 |
+
0.48,
|
| 1651 |
+
0.397
|
| 1652 |
+
],
|
| 1653 |
+
"angle": 0,
|
| 1654 |
+
"content": "Bhat, G.; Danelljan, M.; Van Gool, L.; and Timofte, R. 2020. Know Your Surroundings: Exploiting Scene Information for Object Tracking. In Proceedings of the ECCV. Springer."
|
| 1655 |
+
},
|
| 1656 |
+
{
|
| 1657 |
+
"type": "ref_text",
|
| 1658 |
+
"bbox": [
|
| 1659 |
+
0.086,
|
| 1660 |
+
0.4,
|
| 1661 |
+
0.48,
|
| 1662 |
+
0.442
|
| 1663 |
+
],
|
| 1664 |
+
"angle": 0,
|
| 1665 |
+
"content": "Bolme, D. S.; Beveridge, J. R.; Draper, B. A.; and Lui, Y. M. 2010. Visual object tracking using adaptive correlation filters. In Proceedings of the CVPR, 2544-2550. IEEE."
|
| 1666 |
+
},
|
| 1667 |
+
{
|
| 1668 |
+
"type": "ref_text",
|
| 1669 |
+
"bbox": [
|
| 1670 |
+
0.086,
|
| 1671 |
+
0.444,
|
| 1672 |
+
0.48,
|
| 1673 |
+
0.488
|
| 1674 |
+
],
|
| 1675 |
+
"angle": 0,
|
| 1676 |
+
"content": "Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In ECCV, 213-229. Springer."
|
| 1677 |
+
},
|
| 1678 |
+
{
|
| 1679 |
+
"type": "ref_text",
|
| 1680 |
+
"bbox": [
|
| 1681 |
+
0.086,
|
| 1682 |
+
0.49,
|
| 1683 |
+
0.48,
|
| 1684 |
+
0.545
|
| 1685 |
+
],
|
| 1686 |
+
"angle": 0,
|
| 1687 |
+
"content": "Chen, M.; Radford, A.; Child, R.; Wu, J.; Jun, H.; Luan, D.; and Sutskever, I. 2020. Generative pretraining from pixels. In International conference on machine learning, 1691-1703. PMLR."
|
| 1688 |
+
},
|
| 1689 |
+
{
|
| 1690 |
+
"type": "ref_text",
|
| 1691 |
+
"bbox": [
|
| 1692 |
+
0.086,
|
| 1693 |
+
0.548,
|
| 1694 |
+
0.48,
|
| 1695 |
+
0.59
|
| 1696 |
+
],
|
| 1697 |
+
"angle": 0,
|
| 1698 |
+
"content": "Chen, X.; Yan, B.; Zhu, J.; Wang, D.; Yang, X.; and Lu, H. 2021. Transformer tracking. In Proceedings of the CVPR, 8126-8135."
|
| 1699 |
+
},
|
| 1700 |
+
{
|
| 1701 |
+
"type": "ref_text",
|
| 1702 |
+
"bbox": [
|
| 1703 |
+
0.086,
|
| 1704 |
+
0.594,
|
| 1705 |
+
0.48,
|
| 1706 |
+
0.65
|
| 1707 |
+
],
|
| 1708 |
+
"angle": 0,
|
| 1709 |
+
"content": "Cui, Y.; Jiang, C.; Wang, L.; and Wu, G. 2022. MixFormer: End-to-End Tracking With Iterative Mixed Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13608-13618."
|
| 1710 |
+
},
|
| 1711 |
+
{
|
| 1712 |
+
"type": "ref_text",
|
| 1713 |
+
"bbox": [
|
| 1714 |
+
0.086,
|
| 1715 |
+
0.653,
|
| 1716 |
+
0.48,
|
| 1717 |
+
0.709
|
| 1718 |
+
],
|
| 1719 |
+
"angle": 0,
|
| 1720 |
+
"content": "Dalal, N.; and Triggs, B. 2005. Histograms of oriented gradients for human detection. In 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR'05), volume 1, 886-893. IEEE."
|
| 1721 |
+
},
|
| 1722 |
+
{
|
| 1723 |
+
"type": "ref_text",
|
| 1724 |
+
"bbox": [
|
| 1725 |
+
0.086,
|
| 1726 |
+
0.712,
|
| 1727 |
+
0.48,
|
| 1728 |
+
0.754
|
| 1729 |
+
],
|
| 1730 |
+
"angle": 0,
|
| 1731 |
+
"content": "Danelljan, M.; Bhat, G.; Khan, F. S.; and Felsberg, M. 2019. ATOM: Accurate Tracking by Overlap Maximization. In Proceedings of the CVPR, 4660-4669. IEEE."
|
| 1732 |
+
},
|
| 1733 |
+
{
|
| 1734 |
+
"type": "ref_text",
|
| 1735 |
+
"bbox": [
|
| 1736 |
+
0.086,
|
| 1737 |
+
0.757,
|
| 1738 |
+
0.48,
|
| 1739 |
+
0.799
|
| 1740 |
+
],
|
| 1741 |
+
"angle": 0,
|
| 1742 |
+
"content": "Danelljan, M.; Bhat, G.; Shahbaz Khan, F.; and Felsberg, M. 2017. ECO: Efficient Convolution Operators for Tracking. In Proceedings of the CVPR, 6638-6646. IEEE."
|
| 1743 |
+
},
|
| 1744 |
+
{
|
| 1745 |
+
"type": "ref_text",
|
| 1746 |
+
"bbox": [
|
| 1747 |
+
0.086,
|
| 1748 |
+
0.803,
|
| 1749 |
+
0.48,
|
| 1750 |
+
0.859
|
| 1751 |
+
],
|
| 1752 |
+
"angle": 0,
|
| 1753 |
+
"content": "Danelljan, M.; Robinson, A.; Khan, F. S.; and Felsberg, M. 2016. Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking. In Proceedings of the ECCV, 472-488. Springer."
|
| 1754 |
+
},
|
| 1755 |
+
{
|
| 1756 |
+
"type": "ref_text",
|
| 1757 |
+
"bbox": [
|
| 1758 |
+
0.086,
|
| 1759 |
+
0.862,
|
| 1760 |
+
0.48,
|
| 1761 |
+
0.89
|
| 1762 |
+
],
|
| 1763 |
+
"angle": 0,
|
| 1764 |
+
"content": "Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.;"
|
| 1765 |
+
},
|
| 1766 |
+
{
|
| 1767 |
+
"type": "list",
|
| 1768 |
+
"bbox": [
|
| 1769 |
+
0.085,
|
| 1770 |
+
0.218,
|
| 1771 |
+
0.48,
|
| 1772 |
+
0.89
|
| 1773 |
+
],
|
| 1774 |
+
"angle": 0,
|
| 1775 |
+
"content": null
|
| 1776 |
+
},
|
| 1777 |
+
{
|
| 1778 |
+
"type": "ref_text",
|
| 1779 |
+
"bbox": [
|
| 1780 |
+
0.518,
|
| 1781 |
+
0.068,
|
| 1782 |
+
0.912,
|
| 1783 |
+
0.112
|
| 1784 |
+
],
|
| 1785 |
+
"angle": 0,
|
| 1786 |
+
"content": "Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR."
|
| 1787 |
+
},
|
| 1788 |
+
{
|
| 1789 |
+
"type": "ref_text",
|
| 1790 |
+
"bbox": [
|
| 1791 |
+
0.518,
|
| 1792 |
+
0.116,
|
| 1793 |
+
0.913,
|
| 1794 |
+
0.172
|
| 1795 |
+
],
|
| 1796 |
+
"angle": 0,
|
| 1797 |
+
"content": "Fan, H.; Lin, L.; Yang, F.; Chu, P.; Deng, G.; Yu, S.; Bai, H.; Xu, Y.; Liao, C.; and Ling, H. 2019. LaSOT: A High-Quality Benchmark for Large-Scale Single Object Tracking. In Proceedings of the CVPR. IEEE."
|
| 1798 |
+
},
|
| 1799 |
+
{
|
| 1800 |
+
"type": "ref_text",
|
| 1801 |
+
"bbox": [
|
| 1802 |
+
0.519,
|
| 1803 |
+
0.176,
|
| 1804 |
+
0.914,
|
| 1805 |
+
0.233
|
| 1806 |
+
],
|
| 1807 |
+
"angle": 0,
|
| 1808 |
+
"content": "Fu, Z.; Liu, Q.; Fu, Z.; and Wang, Y. 2021. Stmtrack: Template-free visual tracking with space-time memory networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13774-13783."
|
| 1809 |
+
},
|
| 1810 |
+
{
|
| 1811 |
+
"type": "ref_text",
|
| 1812 |
+
"bbox": [
|
| 1813 |
+
0.519,
|
| 1814 |
+
0.236,
|
| 1815 |
+
0.914,
|
| 1816 |
+
0.279
|
| 1817 |
+
],
|
| 1818 |
+
"angle": 0,
|
| 1819 |
+
"content": "Guo, D.; Shao, Y.; Cui, Y.; Wang, Z.; Zhang, L.; and Shen, C. 2021. Graph attention tracking. In Proceedings of the CVPR, 9543-9552."
|
| 1820 |
+
},
|
| 1821 |
+
{
|
| 1822 |
+
"type": "ref_text",
|
| 1823 |
+
"bbox": [
|
| 1824 |
+
0.519,
|
| 1825 |
+
0.283,
|
| 1826 |
+
0.914,
|
| 1827 |
+
0.34
|
| 1828 |
+
],
|
| 1829 |
+
"angle": 0,
|
| 1830 |
+
"content": "He, K.; Chen, X.; Xie, S.; Li, Y.; Dollar, P.; and Girshick, R. 2022. Masked Autoencoders Are Scalable Vision Learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 16000-16009."
|
| 1831 |
+
},
|
| 1832 |
+
{
|
| 1833 |
+
"type": "ref_text",
|
| 1834 |
+
"bbox": [
|
| 1835 |
+
0.519,
|
| 1836 |
+
0.343,
|
| 1837 |
+
0.914,
|
| 1838 |
+
0.386
|
| 1839 |
+
],
|
| 1840 |
+
"angle": 0,
|
| 1841 |
+
"content": "He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In Proceedings of the CVPR, 770-778. IEEE."
|
| 1842 |
+
},
|
| 1843 |
+
{
|
| 1844 |
+
"type": "ref_text",
|
| 1845 |
+
"bbox": [
|
| 1846 |
+
0.519,
|
| 1847 |
+
0.39,
|
| 1848 |
+
0.914,
|
| 1849 |
+
0.433
|
| 1850 |
+
],
|
| 1851 |
+
"angle": 0,
|
| 1852 |
+
"content": "Henriques, J. F.; Caseiro, R.; Martins, P.; and Batista, J. 2015. High-Speed Tracking with Kernelized Correlation Filters. IEEE TPAMI, 37(3): 583-596."
|
| 1853 |
+
},
|
| 1854 |
+
{
|
| 1855 |
+
"type": "ref_text",
|
| 1856 |
+
"bbox": [
|
| 1857 |
+
0.519,
|
| 1858 |
+
0.436,
|
| 1859 |
+
0.914,
|
| 1860 |
+
0.48
|
| 1861 |
+
],
|
| 1862 |
+
"angle": 0,
|
| 1863 |
+
"content": "Huang, L.; Zhao, X.; and Huang, K. 2019. GOT-10k: A Large High-Diversity Benchmark for Generic Object Tracking in the Wild. IEEE TPAMI."
|
| 1864 |
+
},
|
| 1865 |
+
{
|
| 1866 |
+
"type": "ref_text",
|
| 1867 |
+
"bbox": [
|
| 1868 |
+
0.519,
|
| 1869 |
+
0.484,
|
| 1870 |
+
0.914,
|
| 1871 |
+
0.54
|
| 1872 |
+
],
|
| 1873 |
+
"angle": 0,
|
| 1874 |
+
"content": "Jiang, B.; Luo, R.; Mao, J.; Xiao, T.; and Jiang, Y. 2018. Acquisition of localization confidence for accurate object detection. In Proceedings of the European conference on computer vision (ECCV), 784-799."
|
| 1875 |
+
},
|
| 1876 |
+
{
|
| 1877 |
+
"type": "ref_text",
|
| 1878 |
+
"bbox": [
|
| 1879 |
+
0.519,
|
| 1880 |
+
0.543,
|
| 1881 |
+
0.914,
|
| 1882 |
+
0.616
|
| 1883 |
+
],
|
| 1884 |
+
"angle": 0,
|
| 1885 |
+
"content": "Kristan, M.; Leonardis, A.; Matas, J.; Felsberg, M.; Pflugfelder, R.; Kämäräinen, J.-K.; Danelljan, M.; Zajc, L. C.; Lukežić, A.; Drbohlav, O.; et al. 2020. The eighth visual object tracking VOT2020 challenge results. In ECCV, 547-601. Springer."
|
| 1886 |
+
},
|
| 1887 |
+
{
|
| 1888 |
+
"type": "ref_text",
|
| 1889 |
+
"bbox": [
|
| 1890 |
+
0.519,
|
| 1891 |
+
0.619,
|
| 1892 |
+
0.914,
|
| 1893 |
+
0.675
|
| 1894 |
+
],
|
| 1895 |
+
"angle": 0,
|
| 1896 |
+
"content": "Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25: 1097-1105."
|
| 1897 |
+
},
|
| 1898 |
+
{
|
| 1899 |
+
"type": "ref_text",
|
| 1900 |
+
"bbox": [
|
| 1901 |
+
0.519,
|
| 1902 |
+
0.679,
|
| 1903 |
+
0.914,
|
| 1904 |
+
0.736
|
| 1905 |
+
],
|
| 1906 |
+
"angle": 0,
|
| 1907 |
+
"content": "Li, B.; Wu, W.; Wang, Q.; Zhang, F.; Xing, J.; and Yan, J. 2019. SiamRPN++: Evolution of Siamese Visual Tracking With Very Deep Networks. In Proceedings of the CVPR, 4282-4291. IEEE."
|
| 1908 |
+
},
|
| 1909 |
+
{
|
| 1910 |
+
"type": "ref_text",
|
| 1911 |
+
"bbox": [
|
| 1912 |
+
0.519,
|
| 1913 |
+
0.74,
|
| 1914 |
+
0.914,
|
| 1915 |
+
0.784
|
| 1916 |
+
],
|
| 1917 |
+
"angle": 0,
|
| 1918 |
+
"content": "Li, B.; Yan, J.; Wu, W.; Zhu, Z.; and Hu, X. 2018. High Performance Visual Tracking With Siamese Region Proposal Network. In Proceedings of the CVPR, 8971-8980. IEEE."
|
| 1919 |
+
},
|
| 1920 |
+
{
|
| 1921 |
+
"type": "ref_text",
|
| 1922 |
+
"bbox": [
|
| 1923 |
+
0.519,
|
| 1924 |
+
0.787,
|
| 1925 |
+
0.914,
|
| 1926 |
+
0.83
|
| 1927 |
+
],
|
| 1928 |
+
"angle": 0,
|
| 1929 |
+
"content": "Li, Y.; Mao, H.; Girshick, R.; and He, K. 2022. Exploring plain vision transformer backbones for object detection. arXiv preprint arXiv:2203.16527."
|
| 1930 |
+
},
|
| 1931 |
+
{
|
| 1932 |
+
"type": "ref_text",
|
| 1933 |
+
"bbox": [
|
| 1934 |
+
0.519,
|
| 1935 |
+
0.834,
|
| 1936 |
+
0.914,
|
| 1937 |
+
0.891
|
| 1938 |
+
],
|
| 1939 |
+
"angle": 0,
|
| 1940 |
+
"content": "Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollar, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In ECCV, 740-755. Springer."
|
| 1941 |
+
},
|
| 1942 |
+
{
|
| 1943 |
+
"type": "list",
|
| 1944 |
+
"bbox": [
|
| 1945 |
+
0.518,
|
| 1946 |
+
0.068,
|
| 1947 |
+
0.914,
|
| 1948 |
+
0.891
|
| 1949 |
+
],
|
| 1950 |
+
"angle": 0,
|
| 1951 |
+
"content": null
|
| 1952 |
+
}
|
| 1953 |
+
],
|
| 1954 |
+
[
|
| 1955 |
+
{
|
| 1956 |
+
"type": "ref_text",
|
| 1957 |
+
"bbox": [
|
| 1958 |
+
0.084,
|
| 1959 |
+
0.069,
|
| 1960 |
+
0.481,
|
| 1961 |
+
0.125
|
| 1962 |
+
],
|
| 1963 |
+
"angle": 0,
|
| 1964 |
+
"content": "Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the ICCV."
|
| 1965 |
+
},
|
| 1966 |
+
{
|
| 1967 |
+
"type": "ref_text",
|
| 1968 |
+
"bbox": [
|
| 1969 |
+
0.085,
|
| 1970 |
+
0.127,
|
| 1971 |
+
0.48,
|
| 1972 |
+
0.157
|
| 1973 |
+
],
|
| 1974 |
+
"angle": 0,
|
| 1975 |
+
"content": "Loshchilov, I.; and Hutter, F. 2018. Decoupled weight decay regularization. In Proceedings of the ICLR."
|
| 1976 |
+
},
|
| 1977 |
+
{
|
| 1978 |
+
"type": "ref_text",
|
| 1979 |
+
"bbox": [
|
| 1980 |
+
0.084,
|
| 1981 |
+
0.158,
|
| 1982 |
+
0.48,
|
| 1983 |
+
0.201
|
| 1984 |
+
],
|
| 1985 |
+
"angle": 0,
|
| 1986 |
+
"content": "Mueller, M.; Smith, N.; and Ghanem, B. 2016. A benchmark and simulator for uav tracking. In Proceedings of the ECCV, 445-461. Springer."
|
| 1987 |
+
},
|
| 1988 |
+
{
|
| 1989 |
+
"type": "ref_text",
|
| 1990 |
+
"bbox": [
|
| 1991 |
+
0.084,
|
| 1992 |
+
0.203,
|
| 1993 |
+
0.48,
|
| 1994 |
+
0.259
|
| 1995 |
+
],
|
| 1996 |
+
"angle": 0,
|
| 1997 |
+
"content": "Muller, M.; Bibi, A.; Giancola, S.; Alsubaihi, S.; and Ghanem, B. 2018. TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild. In Proceedings of the ECCV."
|
| 1998 |
+
},
|
| 1999 |
+
{
|
| 2000 |
+
"type": "ref_text",
|
| 2001 |
+
"bbox": [
|
| 2002 |
+
0.084,
|
| 2003 |
+
0.261,
|
| 2004 |
+
0.48,
|
| 2005 |
+
0.303
|
| 2006 |
+
],
|
| 2007 |
+
"angle": 0,
|
| 2008 |
+
"content": "Nam, H.; and Han, B. 2016. Learning Multi-Domain Convolutional Neural Networks for Visual Tracking. In Proceedings of the CVPR, 4293-4302. IEEE."
|
| 2009 |
+
},
|
| 2010 |
+
{
|
| 2011 |
+
"type": "ref_text",
|
| 2012 |
+
"bbox": [
|
| 2013 |
+
0.084,
|
| 2014 |
+
0.305,
|
| 2015 |
+
0.48,
|
| 2016 |
+
0.347
|
| 2017 |
+
],
|
| 2018 |
+
"angle": 0,
|
| 2019 |
+
"content": "Pu, S.; Song, Y.; Ma, C.; Zhang, H.; and Yang, M.-H. 2018. Deep Attentive Tracking via Reciprocativc Learning. In NeurIPS, 1931-1941."
|
| 2020 |
+
},
|
| 2021 |
+
{
|
| 2022 |
+
"type": "ref_text",
|
| 2023 |
+
"bbox": [
|
| 2024 |
+
0.084,
|
| 2025 |
+
0.349,
|
| 2026 |
+
0.48,
|
| 2027 |
+
0.406
|
| 2028 |
+
],
|
| 2029 |
+
"angle": 0,
|
| 2030 |
+
"content": "Ramesh, A.; Pavlov, M.; Goh, G.; Gray, S.; Voss, C.; Radford, A.; Chen, M.; and Sutskever, I. 2021. Zero-shot text-to-image generation. In International Conference on Machine Learning, 8821-8831. PMLR."
|
| 2031 |
+
},
|
| 2032 |
+
{
|
| 2033 |
+
"type": "ref_text",
|
| 2034 |
+
"bbox": [
|
| 2035 |
+
0.084,
|
| 2036 |
+
0.408,
|
| 2037 |
+
0.48,
|
| 2038 |
+
0.464
|
| 2039 |
+
],
|
| 2040 |
+
"angle": 0,
|
| 2041 |
+
"content": "Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; and Savarese, S. 2019. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the CVPR, 658-666."
|
| 2042 |
+
},
|
| 2043 |
+
{
|
| 2044 |
+
"type": "ref_text",
|
| 2045 |
+
"bbox": [
|
| 2046 |
+
0.084,
|
| 2047 |
+
0.467,
|
| 2048 |
+
0.481,
|
| 2049 |
+
0.537
|
| 2050 |
+
],
|
| 2051 |
+
"angle": 0,
|
| 2052 |
+
"content": "Shen, Q.; Qiao, L.; Guo, J.; Li, P.; Li, X.; Li, B.; Feng, W.; Gan, W.; Wu, W.; and Ouyang, W. 2022. Unsupervised Learning of Accurate Siamese Tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8101-8110."
|
| 2053 |
+
},
|
| 2054 |
+
{
|
| 2055 |
+
"type": "ref_text",
|
| 2056 |
+
"bbox": [
|
| 2057 |
+
0.084,
|
| 2058 |
+
0.539,
|
| 2059 |
+
0.48,
|
| 2060 |
+
0.582
|
| 2061 |
+
],
|
| 2062 |
+
"angle": 0,
|
| 2063 |
+
"content": "Simonyan, K.; and Zisserman, A. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In International Conference on Learning Representations."
|
| 2064 |
+
},
|
| 2065 |
+
{
|
| 2066 |
+
"type": "ref_text",
|
| 2067 |
+
"bbox": [
|
| 2068 |
+
0.084,
|
| 2069 |
+
0.584,
|
| 2070 |
+
0.48,
|
| 2071 |
+
0.64
|
| 2072 |
+
],
|
| 2073 |
+
"angle": 0,
|
| 2074 |
+
"content": "Song, Z.; Yu, J.; Chen, Y.-P. P.; and Yang, W. 2022. Transformer Tracking With Cyclic Shifting Window Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8791-8800."
|
| 2075 |
+
},
|
| 2076 |
+
{
|
| 2077 |
+
"type": "ref_text",
|
| 2078 |
+
"bbox": [
|
| 2079 |
+
0.084,
|
| 2080 |
+
0.642,
|
| 2081 |
+
0.48,
|
| 2082 |
+
0.684
|
| 2083 |
+
],
|
| 2084 |
+
"angle": 0,
|
| 2085 |
+
"content": "Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is all you need. In NIPS, 5998-6008."
|
| 2086 |
+
},
|
| 2087 |
+
{
|
| 2088 |
+
"type": "ref_text",
|
| 2089 |
+
"bbox": [
|
| 2090 |
+
0.084,
|
| 2091 |
+
0.686,
|
| 2092 |
+
0.48,
|
| 2093 |
+
0.729
|
| 2094 |
+
],
|
| 2095 |
+
"angle": 0,
|
| 2096 |
+
"content": "Voigtlaender, P.; Luiten, J.; Torr, P. H.; and Leibe, B. 2020. Siam r-cnn: Visual tracking by re-detection. In Proceedings of the CVPR, 6578-6588."
|
| 2097 |
+
},
|
| 2098 |
+
{
|
| 2099 |
+
"type": "ref_text",
|
| 2100 |
+
"bbox": [
|
| 2101 |
+
0.084,
|
| 2102 |
+
0.731,
|
| 2103 |
+
0.48,
|
| 2104 |
+
0.773
|
| 2105 |
+
],
|
| 2106 |
+
"angle": 0,
|
| 2107 |
+
"content": "Wang, G.; Luo, C.; Sun, X.; Xiong, Z.; and Zeng, W. 2020. Tracking by instance detection: A meta-learning approach. In Proceedings of the CVPR, 6288-6297."
|
| 2108 |
+
},
|
| 2109 |
+
{
|
| 2110 |
+
"type": "ref_text",
|
| 2111 |
+
"bbox": [
|
| 2112 |
+
0.084,
|
| 2113 |
+
0.775,
|
| 2114 |
+
0.48,
|
| 2115 |
+
0.817
|
| 2116 |
+
],
|
| 2117 |
+
"angle": 0,
|
| 2118 |
+
"content": "Wang, N.; Zhou, W.; Wang, J.; and Li, H. 2021. Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking. In Proceedings of the CVPR, 1571-1580."
|
| 2119 |
+
},
|
| 2120 |
+
{
|
| 2121 |
+
"type": "ref_text",
|
| 2122 |
+
"bbox": [
|
| 2123 |
+
0.084,
|
| 2124 |
+
0.819,
|
| 2125 |
+
0.48,
|
| 2126 |
+
0.89
|
| 2127 |
+
],
|
| 2128 |
+
"angle": 0,
|
| 2129 |
+
"content": "Wei, C.; Fan, H.; Xie, S.; Wu, C.-Y.; Yuille, A.; and Feichtenhofer, C. 2022. Masked feature prediction for self-supervised visual pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14668-14678."
|
| 2130 |
+
},
|
| 2131 |
+
{
|
| 2132 |
+
"type": "list",
|
| 2133 |
+
"bbox": [
|
| 2134 |
+
0.084,
|
| 2135 |
+
0.069,
|
| 2136 |
+
0.481,
|
| 2137 |
+
0.89
|
| 2138 |
+
],
|
| 2139 |
+
"angle": 0,
|
| 2140 |
+
"content": null
|
| 2141 |
+
},
|
| 2142 |
+
{
|
| 2143 |
+
"type": "ref_text",
|
| 2144 |
+
"bbox": [
|
| 2145 |
+
0.518,
|
| 2146 |
+
0.069,
|
| 2147 |
+
0.914,
|
| 2148 |
+
0.125
|
| 2149 |
+
],
|
| 2150 |
+
"angle": 0,
|
| 2151 |
+
"content": "Wu, H.; Xiao, B.; Codella, N.; Liu, M.; Dai, X.; Yuan, L.; and Zhang, L. 2021. Cvt: Introducing convolutions to vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 22-31."
|
| 2152 |
+
},
|
| 2153 |
+
{
|
| 2154 |
+
"type": "ref_text",
|
| 2155 |
+
"bbox": [
|
| 2156 |
+
0.518,
|
| 2157 |
+
0.128,
|
| 2158 |
+
0.915,
|
| 2159 |
+
0.197
|
| 2160 |
+
],
|
| 2161 |
+
"angle": 0,
|
| 2162 |
+
"content": "Xie, Z.; Zhang, Z.; Cao, Y.; Lin, Y.; Bao, J.; Yao, Z.; Dai, Q.; and Hu, H. 2022. SimMIM: A Simple Framework for Masked Image Modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9653-9663."
|
| 2163 |
+
},
|
| 2164 |
+
{
|
| 2165 |
+
"type": "ref_text",
|
| 2166 |
+
"bbox": [
|
| 2167 |
+
0.519,
|
| 2168 |
+
0.2,
|
| 2169 |
+
0.914,
|
| 2170 |
+
0.256
|
| 2171 |
+
],
|
| 2172 |
+
"angle": 0,
|
| 2173 |
+
"content": "Xu, Y.; Wang, Z.; Li, Z.; Yuan, Y.; and Yu, G. 2020. SiamFC++: Towards robust and accurate visual tracking with target estimation guidelines. In Proceedings of the AAAI, volume 34, 12549-12556."
|
| 2174 |
+
},
|
| 2175 |
+
{
|
| 2176 |
+
"type": "ref_text",
|
| 2177 |
+
"bbox": [
|
| 2178 |
+
0.519,
|
| 2179 |
+
0.259,
|
| 2180 |
+
0.913,
|
| 2181 |
+
0.301
|
| 2182 |
+
],
|
| 2183 |
+
"angle": 0,
|
| 2184 |
+
"content": "Yan, B.; Peng, H.; Fu, J.; Wang, D.; and Lu, H. 2021. Learning spatio-temporal transformer for visual tracking. In Proceedings of the ICCV."
|
| 2185 |
+
},
|
| 2186 |
+
{
|
| 2187 |
+
"type": "ref_text",
|
| 2188 |
+
"bbox": [
|
| 2189 |
+
0.519,
|
| 2190 |
+
0.303,
|
| 2191 |
+
0.913,
|
| 2192 |
+
0.346
|
| 2193 |
+
],
|
| 2194 |
+
"angle": 0,
|
| 2195 |
+
"content": "Yu, Y.; Xiong, Y.; Huang, W.; and Scott, M. R. 2020. Deformable siamese attention networks for visual object tracking. In Proceedings of the CVPR, 6728-6737."
|
| 2196 |
+
},
|
| 2197 |
+
{
|
| 2198 |
+
"type": "ref_text",
|
| 2199 |
+
"bbox": [
|
| 2200 |
+
0.519,
|
| 2201 |
+
0.349,
|
| 2202 |
+
0.913,
|
| 2203 |
+
0.391
|
| 2204 |
+
],
|
| 2205 |
+
"angle": 0,
|
| 2206 |
+
"content": "Zhang, Z.; Liu, Y.; Wang, X.; Li, B.; and Hu, W. 2021. Learn to match: Automatic matching network design for visual tracking. In Proceedings of the ICCV, 13339-13348."
|
| 2207 |
+
},
|
| 2208 |
+
{
|
| 2209 |
+
"type": "list",
|
| 2210 |
+
"bbox": [
|
| 2211 |
+
0.518,
|
| 2212 |
+
0.069,
|
| 2213 |
+
0.915,
|
| 2214 |
+
0.391
|
| 2215 |
+
],
|
| 2216 |
+
"angle": 0,
|
| 2217 |
+
"content": null
|
| 2218 |
+
}
|
| 2219 |
+
]
|
| 2220 |
+
]
|
2301.10xxx/2301.10938/e2b2cbfc-a0df-462f-9845-caeaa831fe88_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:607fb58b4721982d739d4026c833f83614124a78773fc2c1859003c6ab76789b
|
| 3 |
+
size 1886375
|
2301.10xxx/2301.10938/full.md
ADDED
|
@@ -0,0 +1,339 @@
|
| 1 |
+
# Compact Transformer Tracker with Correlative Masked Modeling
|
| 2 |
+
|
| 3 |
+
Zikai Song $^{1}$ , Run Luo $^{1}$ , Junqing Yu $^{1*}$ , Yi-Ping Phoebe Chen $^{2}$ , Wei Yang $^{1*}$
|
| 4 |
+
|
| 5 |
+
$^{1}$Huazhong University of Science and Technology, China
|
| 6 |
+
|
| 7 |
+
$^{2}$ La Trobe University, Australia
|
| 8 |
+
|
| 9 |
+
{skyesong, lr_8823, yjqing, weiyangcs}@hust.edu.cn, phoebe.chen@latrobe.edu.au
|
| 10 |
+
|
| 11 |
+
# Abstract
|
| 12 |
+
|
| 13 |
+
The transformer framework has been showing superior performance in visual object tracking for its great strength in information aggregation across the template and search image with the well-known attention mechanism. Most recent advances focus on exploring attention mechanism variants for better information aggregation. We find these schemes are equivalent to or even just a subset of the basic self-attention mechanism. In this paper, we prove that the vanilla self-attention structure is sufficient for information aggregation, and structural adaptation is unnecessary. The key is not the attention structure, but how to extract the discriminative feature for tracking and enhance the communication between the target and search image. Based on this finding, we adopt the basic vision transformer (ViT) architecture as our main tracker and concatenate the template and search image for feature embedding. To guide the encoder to capture the invariant feature for tracking, we attach a lightweight correlative masked decoder which reconstructs the original template and search image from the corresponding masked tokens. The correlative masked decoder serves as a plugin for the compact transformer tracker and is skipped in inference. Our compact tracker uses the simplest structure, consisting only of a ViT backbone and a box head, and can run at 40 fps. Extensive experiments show the proposed compact transformer tracker outperforms existing approaches, including advanced attention variants, and demonstrates the sufficiency of self-attention in tracking tasks. Our method achieves state-of-the-art performance on five challenging benchmarks: VOT2020, UAV123, LaSOT, TrackingNet, and GOT-10k. Our project is available at https://github.com/HUSTDML/CTTrack.
|
| 14 |
+
|
| 15 |
+
# 1 Introduction
|
| 16 |
+
|
| 17 |
+
Visual Object Tracking is one of the fundamental tasks in computer vision, with applications ranging from human-computer interaction and surveillance to traffic flow monitoring. It aims to estimate the location, denoted as a bounding box, of an arbitrary target object throughout the subsequent video sequence. Deep-learning-based trackers have achieved great success due to their strong representation ability. Trackers (Bertinetto et al. 2016; Nam and Han 2016;
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
Figure 1: Our compact transformer tracker adopts the simple ViT structure (encoder) with the concatenation of the template and search image as input, which essentially exploits the standard self-attention mechanism for information aggregation. The encoded tokens pass through a box head to estimate the resulting bounding box. We also develop a correlative masked decoder that reconstructs the original template and search pixels to enhance information aggregation; it is skipped during inference.
|
| 21 |
+
|
| 22 |
+
Li et al. 2018, 2019) derived from Convolutional Neural Networks (CNN) (Krizhevsky, Sutskever, and Hinton 2012; Simonyan and Zisserman 2015; He et al. 2016) produce tracking accuracy beyond that of traditional approaches, especially the trackers built on the Siamese network (Bertinetto et al. 2016; Xu et al. 2020; Li et al. 2018, 2019; Voigtlaender et al. 2020; Yu et al. 2020; Guo et al. 2021). The key of Siamese network trackers is to produce the cross-correlation and measure the similarity between the target template and the search image. Nowadays, transformer-based trackers (Chen et al. 2021; Wang et al. 2021; Yan et al. 2021; Shen et al. 2022; Song et al. 2022; Cui et al. 2022) have shown great strength by introducing the attention mechanism (Vaswani et al. 2017) to enhance and fuse the features of the querying sample and tracked objects. Prevalent transformer trackers (Chen et al. 2021; Yan et al. 2021;
|
| 23 |
+
|
| 24 |
+
Cui et al. 2022) adapt the attention mechanism, to varying degrees, for aggregating information across the template and search image.
|
| 25 |
+
|
| 26 |
+
We find that the advanced variants of the attention mechanism in recent research, including mix-attention (Cui et al. 2022) and cross-attention (Yu et al. 2020; Chen et al. 2021), are equivalent to, or even just a subset of, the packed self-attention (i.e., standard self-attention with the concatenation of the template and search image as input). The question is then which parts of the self-attention mechanism play an important role in visual object tracking. We revisit the transformer tracking framework and find that the tracking results are generated from the tokens corresponding to the search image (search tokens), while the tokens corresponding to the template (template tokens) are always discarded at the end. The representational ability of the search tokens comes from two parts: cross-information enhancement from the template tokens and self-information enhancement from the search tokens themselves. In this paper, we prove that self-information enhancement in multi-image attention plays a greater role than cross-information aggregation; cross-information aggregation is indispensable in visual object tracking but not greatly beneficial.
|
| 27 |
+
|
| 28 |
+
Driven by this analysis, we propose a compact transformer tracker combined with correlative masked modeling for cross-information aggregation and self-information reinforcement. As shown in Figure 1, our tracker adopts the basic vision transformer as the main branch and applies a lightweight masked decoder to enhance the implicit representation capability of the packed self-attention. The correlative masked decoder, which is inspired by Masked Image Modeling (He et al. 2022; Xie et al. 2022), reconstructs both the original template and search pixels from the corresponding masked tokens to guide the encoder to capture the invariant feature for tracking. In addition, our decoder can be plugged into other transformer trackers, which effectively improves tracking performance without compromising speed. Applying our correlative masked modeling strategy to the compact transformer tracker improves the AUC from $64.0\%$ to $65.8\%$ on the LaSOT (Fan et al. 2019) dataset. Extensive comparison experiments on 5 challenging datasets, including VOT2020 (Kristan et al. 2020), UAV123 (Mueller, Smith, and Ghanem 2016), LaSOT, GOT-10k (Huang, Zhao, and Huang 2019), and TrackingNet (Muller et al. 2018), exhibit state-of-the-art performance, which further evidences the correctness of our analysis regarding self-attention in visual tracking.
|
| 29 |
+
|
| 30 |
+
To summarize, our main contributions include:
|
| 31 |
+
|
| 32 |
+
1. We present a unified analysis method for the attention mechanism and find that the advanced variants of the attention mechanism are equivalent to, or even just a subset of, self-attention. We also prove that self-information enhancement in multi-image attention plays a greater role than cross-information aggregation.
|
| 33 |
+
2. We develop a compact transformer tracker with a correlative masked decoder, which has a very simple structure and achieves state-of-the-art accuracy at a high frames-per-second (fps) tracking speed. The decoder reconstructs the original template and search image from the
|
| 34 |
+
|
| 35 |
+
corresponding masked tokens and serves as a training plugin for the tracker. The experiments demonstrate that our analysis regarding self-attention is correct.
|
| 36 |
+
|
| 37 |
+
# 2 Related Work
|
| 38 |
+
|
| 39 |
+
Traditional trackers. Traditional single object tracking algorithms can be roughly categorized into Correlation Filter (CF) based trackers and Deep Learning Network (DLN) based trackers. CF-based trackers (Bolme et al. 2010; Henriques et al. 2015; Danelljan et al. 2016, 2017, 2019; Bhat et al. 2019) exploit the convolution theorem and learn a filter in the Fourier domain that maps known target images to the desired output. DLN-based trackers refer to algorithms employing deep neural networks for the tracking process. Earlier approaches (Nam and Han 2016; Pu et al. 2018) treat the tracking task as a classification problem and exploit deep features for locating the target. Shortly afterwards, more trackers adopted the Siamese network (Bertinetto et al. 2016; Li et al. 2018, 2019) for its effectiveness in measuring similarity. The Siamese network consists of two branches, one operating on the template and the other on the search area.
|
| 40 |
+
|
| 41 |
+
Overall, these methods mainly consist of a backbone that extracts the features of the search image and template separately, a similarity measuring module, and heads to predict the location and bounding box. Compared to our framework, traditional trackers have many modules and a very complex design; we simply adopt a ViT backbone with a box head and obtain better tracking results.
|
| 42 |
+
|
| 43 |
+
Transformer trackers. ViT (Dosovitskiy et al. 2021) first introduces the transformer to image recognition tasks and presents an impressive performance. Since then, transformers have been widely applied in image classification (Dosovitskiy et al. 2021; Wu et al. 2021; Liu et al. 2021), object detection (Carion et al. 2020; Li et al. 2022), visual object tracking (Yan et al. 2021; Chen et al. 2021; Wang et al. 2021; Song et al. 2022; Shen et al. 2022; Cui et al. 2022), and so on. Transformer-based tracking methods have become the mainstream tracking algorithms nowadays. TransT (Chen et al. 2021) proposes a feature fusion network and employs an attention mechanism to combine the features of the template and search region. STARK (Yan et al. 2021) develops a spatial-temporal architecture based on the encoder-decoder transformer. CSWinTT (Song et al. 2022) proposes a transformer architecture with multi-scale cyclic shifting window attention for visual tracking, elevating the attention from pixel level to window level. MixFormer (Cui et al. 2022) constructs a compact tracking framework and designs a mixed attention module that unifies feature extraction and information matching.
|
| 44 |
+
|
| 45 |
+
Instead of designing a complex attention mechanism as in previous tracking approaches, we compare the essential differences between attention variants (such as mix-attention and cross-attention) and find that these variants are equivalent to, or even just a subset of, the packed self-attention. To verify the capability of self-attention in information aggregation, we design a compact transformer tracker using the simplest pipeline, which only consists of a ViT backbone and a box head, without any extra design such as separate
|
| 46 |
+
|
| 47 |
+
modules of feature extraction and aggregation, and multi-layer feature aggregation.
|
| 48 |
+
|
| 49 |
+
Masked image modeling (MIM). MIM masks an area of the original image and predicts the missing pixels, aiming to enhance the representation ability of models. Recently, MIM approaches (Chen et al. 2020; He et al. 2022; Xie et al. 2022; Wei et al. 2022; Bao, Dong, and Wei 2021) have been extended to modern vision transformers (Dosovitskiy et al. 2021; Liu et al. 2021). iGPT (Chen et al. 2020) first proposes a transformer to predict unknown pixels from a sequence of low-resolution pixels. BEiT (Bao, Dong, and Wei 2021) tokenizes the images via an additional dVAE (Ramesh et al. 2021) network with a block-wise masking strategy. SimMIM (Xie et al. 2022) finds that a moderately large masked patch size of the input image for pixel prediction makes a strong pretext task. MAE (He et al. 2022) develops an asymmetric encoder-decoder architecture, where the encoder operates on a small proportion of the visible patches and the decoder reconstructs the original pixels. MaskFeat (Wei et al. 2022) reconstructs feature descriptors such as HOG (Dalal and Triggs 2005) instead of pixels.
|
| 50 |
+
|
| 51 |
+
Our approach is inspired by previous MIM methods (Xie et al. 2022; He et al. 2022), but we have to deal with two fundamental problems in the tracking framework: (1) Visual tracking is a downstream vision task that generally does not include a pre-training stage in which to apply the MIM strategy. We develop a masked decoder that leverages the search and template tokens to predict the original images, embedded as an attachable plugin in the training phase to implement an end-to-end model. (2) MIM methods that reconstruct a single image do not fit the tracking framework, which involves cross-aggregation of multiple images. According to the properties of the packed self-attention, we design a self-decoder and a cross-decoder to reconstruct the original template and search image from the corresponding masked tokens. To the best of our knowledge, we are the first to introduce MIM into the visual tracking field to improve information aggregation capability.
|
| 52 |
+
|
| 53 |
+
# 3 Approach
|
| 54 |
+
|
| 55 |
+
In this section, we introduce our compact transformer tracker with correlative masked modeling in detail. Before proceeding, we first present an analysis of the key component of the transformer tracker and demonstrate that existing attention variants are equivalent to the packed self-attention.
|
| 56 |
+
|
| 57 |
+
# 3.1 Revisiting Transformer Tracker
|
| 58 |
+
|
| 59 |
+
Transformer tracking framework. As described in the Transformer (Vaswani et al. 2017), the query-key-value attention mechanism is applied with query $\mathbf{Q}$, key $\mathbf{K}$, and value $\mathbf{V}$. The linear weights for $\mathbf{Q}, \mathbf{K}, \mathbf{V}$ are $\mathbf{W}_Q, \mathbf{W}_K, \mathbf{W}_V$ respectively. The attention (Attn) is computed as:
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
\operatorname{Attn}(\mathbf{X}) = \operatorname{softmax}\left(\frac{\mathbf{X}\mathbf{W}_{Q} \cdot \mathbf{W}_{K}^{T}\mathbf{X}^{T}}{\sqrt{d_{k}}}\right) \cdot \mathbf{X}\mathbf{W}_{V} \tag{1}
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
where $\mathbf{X}$ is the input token and $d_{k}$ is the dimension of the key. For a clearer description of the subsequent steps,
|
| 66 |
+
|
| 67 |
+

|
| 68 |
+
Figure 2: Information streams in the attention mechanism. The four Q-K-V information streams correspond to the four parts of the attention map. Variants of attention can be uniformly explained under this analytical approach.
|
| 69 |
+
|
| 70 |
+
we apply an attention calculation with two different input tokens: the token $\mathbf{X}_Q$ used to compute the query and the token $\mathbf{X}_{KV}$ used to compute the key and value. We modify the attention formula and define the attention map (AMap) as:
|
| 71 |
+
|
| 72 |
+
$$
|
| 73 |
+
\operatorname{Attn}\left(\mathbf{X}_{Q}, \mathbf{X}_{KV}\right) = \operatorname{AMap}\left(\mathbf{X}_{Q}, \mathbf{X}_{KV}\right) \cdot \mathbf{X}_{KV}\mathbf{W}_{V}
|
| 74 |
+
$$
|
| 75 |
+
|
| 76 |
+
$$
|
| 77 |
+
\operatorname{AMap}\left(\mathbf{X}_{Q}, \mathbf{X}_{KV}\right) = \operatorname{softmax}\left(\frac{\mathbf{X}_{Q}\mathbf{W}_{Q} \cdot \mathbf{W}_{K}^{T}\mathbf{X}_{KV}^{T}}{\sqrt{d_{k}}}\right) \tag{2}
|
| 78 |
+
$$
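As a minimal sketch of Eqns. 1-2 (our illustration, single attention head and no batch dimension; not the released code), the attention map and attention output for two token sets can be written as:

```python
import torch
import torch.nn.functional as F

def amap(x_q, x_kv, w_q, w_k, d_k):
    # AMap(X_Q, X_KV) = softmax(X_Q W_Q (X_KV W_K)^T / sqrt(d_k))  -- Eqn. 2
    q = x_q @ w_q                         # (L_q, d_k)
    k = x_kv @ w_k                        # (L_kv, d_k)
    return F.softmax(q @ k.transpose(-1, -2) / d_k ** 0.5, dim=-1)

def attn(x_q, x_kv, w_q, w_k, w_v, d_k):
    # Attn(X_Q, X_KV) = AMap(X_Q, X_KV) . X_KV W_V
    return amap(x_q, x_kv, w_q, w_k, d_k) @ (x_kv @ w_v)
```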
|
| 79 |
+
|
| 80 |
+
Our compact transformer tracker consists of two parts: a transformer backbone for information aggregation and a box head for bounding box estimation. Given the template $z$ in the initial frame and a search image $s$, we obtain the tokens $\mathbf{X}_{z} \in \mathbb{R}^{L_{z} \times d}$ and $\mathbf{X}_{s} \in \mathbb{R}^{L_{s} \times d}$, respectively, through patch embedding, where $d$ represents the number of channels. The packed self-attention (PSelf-Attn) in the tracking field is defined as self-attention with the concatenation (Cat) of the template and the search image as input:
|
| 81 |
+
|
| 82 |
+
$$
|
| 83 |
+
\operatorname{PSelf\text{-}Attn} = \operatorname{Attn}\left(\operatorname{Cat}\left(\mathbf{X}_{z}, \mathbf{X}_{s}\right), \operatorname{Cat}\left(\mathbf{X}_{z}, \mathbf{X}_{s}\right)\right) \tag{3}
|
| 84 |
+
$$
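With the helpers above, the packed self-attention of Eqn. 3 is just ordinary self-attention applied to the concatenated template and search tokens. In the toy shapes below, 64 and 400 tokens correspond to 128x128 and 320x320 inputs with 16x16 patches; the random weights are illustrative only:

```python
def pself_attn(x_z, x_s, w_q, w_k, w_v, d_k):
    # Eqn. 3: standard self-attention over the packed template/search sequence.
    x = torch.cat([x_z, x_s], dim=0)      # (L_z + L_s, d)
    return attn(x, x, w_q, w_k, w_v, d_k)

d = d_k = 768
x_z, x_s = torch.randn(64, d), torch.randn(400, d)
w_q, w_k, w_v = (torch.randn(d, d_k) for _ in range(3))
out = pself_attn(x_z, x_s, w_q, w_k, w_v, d_k)   # (464, d_k)
```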
|
| 85 |
+
|
| 86 |
+
Analysis on Attention. As shown in Figure 2, we divide the computation of the attention mechanism, which involves both the template and the search image, into four information streams:
|
| 87 |
+
|
| 88 |
+

|
| 89 |
+
(a) PSelf-Attn
|
| 90 |
+
|
| 91 |
+

|
| 92 |
+
(b) AMix-Attn
|
| 93 |
+
|
| 94 |
+

|
| 95 |
+
(c) Cross-Attn
|
| 96 |
+
Figure 3: Configurations of the information streams in the attention map of packed self-attention (PSelf-Attn), asymmetric mix-attention (AMix-Attn), and cross-attention (Cross-Attn).
|
| 97 |
+
|
| 98 |
+
(1) self-information enhancement on template;
|
| 99 |
+
(2) cross-information aggregation on template;
|
| 100 |
+
(3) cross-information aggregation on search image;
|
| 101 |
+
(4) self-information enhancement on search image.
|
| 102 |
+
|
| 103 |
+
These four information streams are also reflected in the four parts of the attention map (in Figure 2, the index of each part in the attention map corresponds to the matching information stream). Based on this dissection, we can conveniently compare the differences between existing attention mechanisms, including packed self-attention, mix-attention, and cross-attention.
|
| 104 |
+
|
| 105 |
+
The PSelf-Attn and the mix-attention (Cui et al. 2022) are essentially equivalent; the mix-attention is calculated as:
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
\operatorname{PSelf\text{-}Attn} = \operatorname{Mix\text{-}Attn} =
|
| 109 |
+
$$
|
| 110 |
+
|
| 111 |
+
$$
|
| 112 |
+
\operatorname{Cat}\left(\operatorname{AMap}\left(\mathbf{X}_{z}, \operatorname{Cat}\left(\mathbf{X}_{z}, \mathbf{X}_{s}\right)\right), \operatorname{AMap}\left(\mathbf{X}_{s}, \operatorname{Cat}\left(\mathbf{X}_{z}, \mathbf{X}_{s}\right)\right)\right) \tag{4}
|
| 113 |
+
$$
|
| 114 |
+
|
| 115 |
+
which is the same as Eqn. 3; both include all four information streams (the attention map is shown in Figure 3a).
|
| 116 |
+
|
| 117 |
+
By the same analysis, the asymmetric mix-attention (AMix-Attn) contains three information streams (#1, #3, and #4), as shown in Figure 3b, and is calculated as follows:
|
| 118 |
+
|
| 119 |
+
$$
|
| 120 |
+
\operatorname{AMix\text{-}Attn} =
|
| 121 |
+
$$
|
| 122 |
+
|
| 123 |
+
$$
|
| 124 |
+
\operatorname{Cat}\left(\operatorname{AMap}\left(\mathbf{X}_{z}, \mathbf{X}_{z}\right), \operatorname{AMap}\left(\mathbf{X}_{s}, \operatorname{Cat}\left(\mathbf{X}_{z}, \mathbf{X}_{s}\right)\right)\right) \tag{5}
|
| 125 |
+
$$
|
| 126 |
+
|
| 127 |
+
The cross-attention contains two information streams (#2 and #3) for cross-information aggregation, as shown in Figure 3c, and is calculated as follows:
|
| 128 |
+
|
| 129 |
+
$$
|
| 130 |
+
\operatorname{Cross\text{-}Attn} = \operatorname{Cat}\left(\operatorname{AMap}\left(\mathbf{X}_{z}, \mathbf{X}_{s}\right), \operatorname{AMap}\left(\mathbf{X}_{s}, \mathbf{X}_{z}\right)\right) \tag{6}
|
| 131 |
+
$$
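To make the stream decomposition concrete, the four parts of the packed attention map can be read off by index slicing. The sketch below (our illustration, reusing `amap` and the toy tensors from the earlier sketches) extracts the four blocks referred to in Figure 2; note that rows here are normalized over the whole packed sequence, so the slices indicate which streams each variant keeps rather than reproducing each variant's own softmax exactly:

```python
def packed_amap(x_z, x_s, w_q, w_k, d_k):
    x = torch.cat([x_z, x_s], dim=0)
    return amap(x, x, w_q, w_k, d_k)                 # (L_z+L_s, L_z+L_s)

L_z = x_z.shape[0]
full = packed_amap(x_z, x_s, w_q, w_k, d_k)
# Quadrants, numbered as the information streams in Figure 2:
a_zz = full[:L_z, :L_z]   # (1) self-information enhancement on the template
a_zs = full[:L_z, L_z:]   # (2) cross-information aggregation on the template
a_sz = full[L_z:, :L_z]   # (3) cross-information aggregation on the search image
a_ss = full[L_z:, L_z:]   # (4) self-information enhancement on the search image
```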
|
| 132 |
+
|
| 133 |
+
To fully verify the importance of each part of the packed attention, it is necessary to evaluate the impact of each information stream individually. The key to visual object tracking is to find the target in the search image, so cross-information aggregation on the search image (#3 info stream) must be retained. The other information streams can be blocked out to verify their contribution, as sketched below.
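A minimal way to realize this ablation (our sketch, building on the earlier helpers) is to add an attention mask that disables chosen quadrants of the packed attention logits before the softmax:

```python
def masked_pself_attn(x_z, x_s, w_q, w_k, w_v, d_k, drop_streams=()):
    """Packed self-attention with selected information streams blocked out
    (streams numbered as in Figure 2); an ablation sketch, not the released code."""
    L_z = x_z.shape[0]
    x = torch.cat([x_z, x_s], dim=0)
    logits = (x @ w_q) @ (x @ w_k).transpose(-1, -2) / d_k ** 0.5
    neg = float("-inf")
    if 1 in drop_streams: logits[:L_z, :L_z] = neg   # template queries -> template keys
    if 2 in drop_streams: logits[:L_z, L_z:] = neg   # template queries -> search keys
    if 3 in drop_streams: logits[L_z:, :L_z] = neg   # search queries  -> template keys
    if 4 in drop_streams: logits[L_z:, L_z:] = neg   # search queries  -> search keys
    return F.softmax(logits, dim=-1) @ (x @ w_v)

# Best setting in Table 1: keep streams 1, 3, 4 and drop stream 2.
out = masked_pself_attn(x_z, x_s, w_q, w_k, w_v, d_k, drop_streams=(2,))
```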
|
| 134 |
+
|
| 135 |
+
Based on the above idea, we conduct detailed experiments, and the results are shown in Table 1. Removing cross-information aggregation of the template (#2 info stream) from
|
| 136 |
+
|
| 137 |
+
Table 1: The effectiveness of information streams in the attention mechanism on the LaSOT dataset. The visualized four parts in the attention map (AMap) correspond to the four information streams at the matched location.
|
| 138 |
+
|
| 139 |
+
<table><tr><td rowspan="2" colspan="2">#AMap</td><td colspan="4">No. Info Stream</td><td rowspan="2">AUC</td><td rowspan="2">Prec</td></tr><tr><td>①</td><td>②</td><td>③</td><td>④</td></tr><tr><td>1</td><td></td><td>√</td><td>√</td><td>√</td><td>√</td><td>61.7</td><td>64.2</td></tr><tr><td>2</td><td></td><td>√</td><td></td><td>√</td><td>√</td><td>64.0</td><td>67.7</td></tr><tr><td>3</td><td></td><td></td><td>√</td><td>√</td><td>√</td><td>60.6</td><td>63.7</td></tr><tr><td>4</td><td></td><td>√</td><td>√</td><td>√</td><td></td><td>58.8</td><td>60.1</td></tr><tr><td>5</td><td></td><td></td><td>√</td><td>√</td><td></td><td>57.9</td><td>58.5</td></tr></table>
|
| 140 |
+
|
| 141 |
+
self-attention can greatly improve tracking performance (the AUC and Prec of Table 1 #2 are better than those of Table 1 #1); cross-information aggregation on the template introduces a lot of noise into the template features and is therefore not recommended in visual tracking. However, removing self-information enhancement (#1 and #4 info streams) from self-attention severely degrades tracking performance (the AUC and Prec of Table 1 #3 and #4 are worse than those of Table 1 #1). From these results we conclude that self-information enhancement in multi-image attention plays a greater role than cross-information aggregation; cross-information aggregation is indispensable in tracking but not greatly beneficial.
|
| 142 |
+
|
| 143 |
+
# 3.2 Correlative Masked Modeling
|
| 144 |
+
|
| 145 |
+
According to the above analysis, the best tracking performance is achieved by adopting three information streams: self-information on the template (#1 info stream), cross-information on the search image (#3 info stream), and self-information on the search image (#4 info stream). These three information streams can be grouped into two categories: two self-information enhancements and one cross-information aggregation. We design a correlative masked modeling method to enhance the information aggregation of our tracking framework, as shown in Figure 1. The ViT backbone serves as the encoder, and the correlative masked decoder reconstructs the original images (the template and the search image, respectively) from randomly masked tokens to enhance the self-information, and reconstructs the template image from search tokens to improve cross-information aggregation. In parallel with the masked decoder, the search image tokens go through a box estimation head as in (Yan et al. 2021) to generate the resulting bounding box.
|
| 146 |
+
|
| 147 |
+
Decoder. The decoders in our framework consist of a self-decoder and a cross-decoder. These two decoders have the same structure but do not share weights; each is composed of a series of transformer blocks similar to MAE, and the last layer of the decoder is a linear projection whose number of output channels equals the number of pixels in a patch. As shown in Figure 4, the decoder takes masked tokens as input and predicts the original image pixels corresponding to
|
| 148 |
+
|
| 149 |
+

|
| 150 |
+
Figure 4: The correlative masked decoders consist of a self-decoder and a cross-decoder. The self-decoder reconstructs the two original images, the template and the search image, from their corresponding masked tokens. The cross-decoder reconstructs the template image from search tokens.
|
| 151 |
+
|
| 152 |
+
the template tokens and the search image tokens. The template tokens are only self-reconstructed into the template image to enhance the #1 information stream, while the search tokens are used to cross-reconstruct the template image (for the #3 info stream) and self-reconstruct the search image (for the #4 info stream).
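A rough PyTorch skeleton of one such decoder is shown below (our sketch; the layer count, width, and 16x16 patch size are assumptions rather than the paper's exact configuration). The self-decoder and cross-decoder are two instances of this module with independent weights:

```python
import torch.nn as nn

class MaskedDecoder(nn.Module):
    """A stack of transformer blocks followed by a linear projection whose
    output size equals the number of pixels in one patch (cf. Figure 4)."""
    def __init__(self, enc_dim=768, dim=512, depth=4, heads=8, patch=16, channels=3):
        super().__init__()
        self.embed = nn.Linear(enc_dim, dim)          # map encoder tokens to decoder width
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.to_pixels = nn.Linear(dim, patch * patch * channels)

    def forward(self, tokens):                        # tokens: (B, L, enc_dim)
        x = self.blocks(self.embed(tokens))
        return self.to_pixels(x)                      # (B, L, patch*patch*channels)

self_decoder = MaskedDecoder()    # reconstructs an image from its own masked tokens
cross_decoder = MaskedDecoder()   # reconstructs the template from masked search tokens
```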
|
| 153 |
+
|
| 154 |
+
Masking and Reconstruction. The encoder embeds the concatenated set of template tokens and search tokens. We then split the encoded tokens into template tokens and search tokens, crop the search tokens with Precise RoI Pooling (Jiang et al. 2018) to the same size as the template tokens, and sample a subset of them. We randomly sample tokens at a high masking ratio (75%). Our decoder predicts the pixel values for each masked token, and the output of the decoder is reshaped to form a reconstructed image. We use the mean squared error (MSE) between the reconstructed and original images on the masked tokens as our loss function; a sketch follows.
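The sketch below (ours, not the released code) illustrates one reconstruction stream under simplifying assumptions: masked tokens are zeroed instead of being replaced by a learned mask token, and the search tokens are assumed to be cropped to the template token grid beforehand:

```python
def masked_recon_loss(decoder, tokens, target_img, mask_ratio=0.75, patch=16):
    """Randomly mask encoded tokens, reconstruct pixels with the decoder,
    and compute the MSE only on the masked patches.
    tokens:     (B, L, C) encoded tokens, already cropped so that L matches
                the target image grid (e.g. via Precise RoI Pooling).
    target_img: (B, 3, H, W) with (H // patch) * (W // patch) == L."""
    B, L, _ = tokens.shape
    n_mask = int(mask_ratio * L)
    idx = torch.rand(B, L, device=tokens.device).argsort(dim=1)[:, :n_mask]
    mask = torch.zeros(B, L, device=tokens.device)
    mask.scatter_(1, idx, 1.0)
    mask = mask.bool()

    masked_tokens = tokens.masked_fill(mask.unsqueeze(-1), 0.0)  # simplification: zero out
    pred = decoder(masked_tokens)                                # (B, L, patch*patch*3)

    # Patchify the target image so each token aligns with one patch.
    target = target_img.unfold(2, patch, patch).unfold(3, patch, patch)
    target = target.permute(0, 2, 3, 1, 4, 5).reshape(B, L, -1)
    return ((pred - target) ** 2)[mask].mean()
```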
|
| 155 |
+
|
| 156 |
+
# 3.3 Training and Inference
|
| 157 |
+
|
| 158 |
+
Our decoder is only used in the training phase and does not participate in inference, hence it does not affect tracking speed. During the training phase, our tracker takes a triplet input consisting of one search region and two templates, similar to STARK (Yan et al. 2021). We randomly sample multiple frames from sequences in the training set, select the first frame and the second frame as templates, and the last frame as the search region. In the target localization training, we train the whole network except the scoring head in an end-to-end manner with the combination of the $L1$ loss, the generalized IoU loss (Rezatofighi et al. 2019), and the decoder loss $L_{dec}$. The full loss function is defined as follows:
|
| 159 |
+
|
| 160 |
+
$$
|
| 161 |
+
\operatorname{Loss} = \lambda_{L1} L_{1}\left(B_{i}, \hat{B}_{i}\right) + \lambda_{g} L_{g}\left(B_{i}, \hat{B}_{i}\right) + \lambda_{dec} L_{dec} \tag{7}
|
| 162 |
+
$$
|
| 163 |
+
|
| 164 |
+
where $\lambda_{L1} = 5.0$, $\lambda_{g} = 2.0$, and $\lambda_{dec} = 0.3$ are the weighting factors of the three losses, $\hat{B}_i$ is the estimated box of the target, and $B_i$ is the ground-truth bounding box. The decoder
|
| 165 |
+
|
| 166 |
+
loss $L_{dec}$ is defined as:
|
| 167 |
+
|
| 168 |
+
$$
|
| 169 |
+
L_{dec} = L_{2}\left(z, z_{p}\right) + L_{2}\left(s, s_{p}\right) + L_{2}\left(z, s_{p}\right) \tag{8}
|
| 170 |
+
$$
|
| 171 |
+
|
| 172 |
+
where $L_{2}$ is the MSE loss, $z$ and $s$ represent the original template image and search image, and $z_{p}$ and $s_{p}$ represent the predicted template image and search image, respectively.
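For completeness, Eqns. 7-8 map onto code roughly as follows (a sketch reusing the imports from the earlier snippets; the GIoU helper below is our own minimal implementation in the spirit of Rezatofighi et al. 2019, not the authors' code):

```python
def giou_loss(pred, target):
    """Mean (1 - GIoU) for boxes in (x1, y1, x2, y2) format."""
    x1 = torch.max(pred[:, 0], target[:, 0]); y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2]); y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    ex1 = torch.min(pred[:, 0], target[:, 0]); ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2]); ey2 = torch.max(pred[:, 3], target[:, 3])
    enclose = (ex2 - ex1) * (ey2 - ey1)
    giou = inter / union - (enclose - union) / enclose
    return (1.0 - giou).mean()

def total_loss(pred_box, gt_box, dec_terms, lam_l1=5.0, lam_g=2.0, lam_dec=0.3):
    """Eqn. 7, with dec_terms = (L2(z, z_p), L2(s, s_p), L2(z, s_p)) from Eqn. 8."""
    return (lam_l1 * F.l1_loss(pred_box, gt_box)
            + lam_g * giou_loss(pred_box, gt_box)
            + lam_dec * sum(dec_terms))
```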
|
| 173 |
+
|
| 174 |
+
In the inference phase, we use two templates of the same size as the input. One is the initial template and remains fixed; the other is updated online and is always set to the latest tracking result with high confidence. We use a score head to control the updating of the online template. Our score head consists of a multilayer perceptron (MLP) that receives a class token (Dosovitskiy et al. 2021) as input and evaluates the accuracy of the current tracking result.
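The inference-time template handling might look roughly like the sketch below; `tracker.locate`, `tracker.score_head`, and `tracker.crop` are hypothetical components, and the update interval and confidence threshold are illustrative values rather than the paper's settings:

```python
def track_sequence(tracker, frames, init_template, update_interval=200, conf_thresh=0.5):
    """Inference sketch: a fixed initial template plus an online template that
    is refreshed only when the score head reports high confidence."""
    online_template = init_template
    results = []
    for t, frame in enumerate(frames):
        box, cls_token = tracker.locate(frame, init_template, online_template)  # hypothetical API
        results.append(box)
        if (t + 1) % update_interval == 0:
            score = tracker.score_head(cls_token)         # MLP on the class token
            if score > conf_thresh:
                online_template = tracker.crop(frame, box)  # hypothetical helper
    return results
```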
|
| 175 |
+
|
| 176 |
+
# 4 Experiments
|
| 177 |
+
|
| 178 |
+
# 4.1 Implementation Details
|
| 179 |
+
|
| 180 |
+
To effectively verify the correctness of our analysis, we design the compact transformer tracker without any extra attention mechanisms, removing structures such as separate feature extraction and aggregation modules and multi-layer feature aggregation. The main tracker only consists of a ViT backbone and a box estimation head. We test both ViT-Base and ViT-Large, and the ViT parameters are initialized with the MAE (He et al. 2022) pre-trained model. We refer to our Compact Transformer tracker as CTTrack-B (with the ViT-Base backbone) and CTTrack-L (with the ViT-Large backbone) in this section.
|
| 181 |
+
|
| 182 |
+
We adopt COCO (Lin et al. 2014), LaSOT (Fan et al. 2019), GOT-10k (Huang, Zhao, and Huang 2019), and TrackingNet (Muller et al. 2018) as our training datasets, except for the GOT-10k benchmark, where only its train split is used. The training samples are directly sampled from the same sequence, and we apply common data augmentation operations including brightness jitter and horizontal flip. The size of the input template is $128 \times 128$; the search region is $5^2$ times the target box area and is further resized to $320 \times 320$. The decoder parameters are initialized with Xavier uniform initialization. The AdamW optimizer (Loshchilov and Hutter 2018) is employed with an initial learning rate (lr) of 1e-4 and a layer-wise decay of 0.75, and the lr decreases according to a cosine schedule with a final decrease factor of 0.1. We adopt a warm-up lr with a 0.2 warm-up factor over the first 5 epochs. We train our model on 4 Nvidia Tesla V100 GPUs for a total of 500 epochs, and each epoch uses $6 \times 10^4$ images. The mini-batch size is set to 128 images, with each GPU hosting 32 images. Our approach is implemented in Python 3.7 with PyTorch 1.7.
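The optimization recipe above maps onto a standard PyTorch setup roughly as follows (a sketch; `build_param_groups_with_layer_decay` is a hypothetical helper for the layer-wise learning-rate decay, and the scheduler composition is one of several equivalent ways to combine warm-up with cosine decay):

```python
import torch

def build_optimizer_and_scheduler(model, epochs=500, warmup_epochs=5,
                                  base_lr=1e-4, layer_decay=0.75):
    # Layer-wise lr decay: deeper ViT blocks get larger lr (hypothetical helper).
    param_groups = build_param_groups_with_layer_decay(model, base_lr, layer_decay)
    optimizer = torch.optim.AdamW(param_groups, lr=base_lr)

    warmup = torch.optim.lr_scheduler.LinearLR(
        optimizer, start_factor=0.2, total_iters=warmup_epochs)
    cosine = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=epochs - warmup_epochs, eta_min=0.1 * base_lr)
    scheduler = torch.optim.lr_scheduler.SequentialLR(
        optimizer, schedulers=[warmup, cosine], milestones=[warmup_epochs])
    return optimizer, scheduler
```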
|
| 183 |
+
|
| 184 |
+
# 4.2 Ablation Study
|
| 185 |
+
|
| 186 |
+
We ablate our compact transformer tracker on several intriguing properties using the challenging LaSOT dataset and report the Area Under the Curve (AUC) and Precision (Prec) as the validation accuracy.
|
| 187 |
+
|
| 188 |
+
Backbone Comparison. Table 2 compares the ViT-Base and ViT-Large backbones. CTTrack-B reaches a higher tracking speed, while CTTrack-L exhibits better performance.
|
| 189 |
+
|
| 190 |
+
Table 2: Model size and speed using different backbones.
|
| 191 |
+
|
| 192 |
+
<table><tr><td>Methods</td><td>Params(M)</td><td>FLOPs(G)</td><td>Speed(fps)</td></tr><tr><td>CTTrack-B</td><td>93.8</td><td>48.1</td><td>40</td></tr><tr><td>CTTrack-L</td><td>313.9</td><td>163.7</td><td>22</td></tr></table>
|
| 193 |
+
|
| 194 |
+
Reconstruction Streams. Our decoder enforces three types of reconstruction streams, as shown in Figure 4. Table 3 exhibits different configurations of reconstruction streams through varied combinations of search tokens reconstructing the search image (s2s), template tokens reconstructing the template image (t2t), and search tokens reconstructing the template image (s2t). The results are consistent with the conclusion of our previous analysis that self-information enhancement (#5 in Table 3) plays the most important role in transformer tracking compared to cross-information aggregation (#4 in Table 3). Besides, the search image information has more influence than the template information: s2s (#2) improves performance the most among all single streams (#2, #3, #4), from 64.0 to 64.7 in AUC score. After adopting all three reconstruction streams, tracking accuracy improves by an impressive $1.8\%$ in AUC score, which validates the effectiveness of our masked modeling decoders.
|
| 195 |
+
|
| 196 |
+
Table 3: Ablation study for the reconstruction streams. s2s denotes search tokens reconstructing the search image, t2t denotes template tokens reconstructing the template image, and s2t denotes search tokens reconstructing the template image.
|
| 197 |
+
|
| 198 |
+
<table><tr><td rowspan="2">#</td><td colspan="3">Recons Type</td><td rowspan="2">AUC</td><td rowspan="2">Prec</td></tr><tr><td>s2s</td><td>t2t</td><td>s2t</td></tr><tr><td>1</td><td>-</td><td>-</td><td>-</td><td>64.0</td><td>67.7</td></tr><tr><td>2</td><td>✓</td><td>-</td><td>-</td><td>64.7</td><td>69.1</td></tr><tr><td>3</td><td>-</td><td>✓</td><td>-</td><td>64.4</td><td>68.4</td></tr><tr><td>4</td><td>-</td><td>-</td><td>✓</td><td>64.4</td><td>68.6</td></tr><tr><td>5</td><td>✓</td><td>✓</td><td>-</td><td>65.1</td><td>69.9</td></tr><tr><td>6</td><td>✓</td><td>✓</td><td>✓</td><td>65.8</td><td>70.9</td></tr></table>
|
| 199 |
+
|
| 200 |
+
Masking ratio. When applying the reconstruction streams, we randomly mask the input tokens according to a predefined ratio. Table 4 shows the influence of different masking ratios. We mask the encoded template tokens and search tokens with a random sampling strategy at different masking rates. Similar to the conclusion obtained by MAE (He et al. 2022), the optimal ratio is relatively high: accuracy increases steadily as the masking ratio grows until it reaches $75\%$, which produces the best tracking results.
|
| 201 |
+
|
| 202 |
+
Table 4: Comparison on masking ratio.
|
| 203 |
+
|
| 204 |
+
<table><tr><td>Mask Ratio</td><td>25%</td><td>50%</td><td>75%</td><td>90%</td></tr><tr><td>AUC</td><td>64.6</td><td>65.7</td><td>65.8</td><td>64.9</td></tr><tr><td>Prec</td><td>69.0</td><td>70.7</td><td>70.9</td><td>69.5</td></tr></table>
|
| 205 |
+
|
| 206 |
+
Online Template Updating. We evaluate the effect of the online update strategy in our method. The ablation study
|
| 207 |
+
|
| 208 |
+

|
| 209 |
+
(Figure 5 panels show, for several frames: the Target, and the S-to-S, T-to-T, and S-to-T attention maps, each without (w/o) and with (w) the correlative decoder.)

Figure 5: Visualization of attention maps comparing training with the correlative decoder (w) and training without it (w/o). S-to-S is self-information enhancement on the search image, T-to-T is self-information enhancement on the template, and S-to-T is cross-information aggregation on the search image.
|
| 241 |
+
|
| 242 |
+

|
| 243 |
+
|
| 244 |
+
results are shown in Table 5, where #1 represents the performance without template updating. We can see that updating the online template at a fixed interval (#2) is ineffective, as it degrades the quality of the template and causes tracking drift. As seen in #3, there is a $0.2\%$ improvement in the AUC score after applying the scoring head to evaluate the accuracy of the current tracking results.
|
| 245 |
+
|
| 246 |
+
Table 5: Ablation for the online template updating component. Online denotes updating the template at a fixed interval. Score denotes that the online template is only updated with high-confidence samples.
|
| 247 |
+
|
| 248 |
+
<table><tr><td></td><td>Online</td><td>Score</td><td>AUC</td><td>Prec</td></tr><tr><td rowspan="3">CTTrack-B</td><td>-</td><td>-</td><td>65.8</td><td>70.9</td></tr><tr><td>✓</td><td>-</td><td>64.9</td><td>69.9</td></tr><tr><td>✓</td><td>✓</td><td>66.0</td><td>71.1</td></tr></table>
|
| 249 |
+
|
| 250 |
+
Visualization of attention maps. We visualize attention maps in Figure 5; our tracker adopting the correlative decoder has a stronger discriminative ability. The baseline transformer without a reconstruction decoder tends to lose the target position, while distractors in the background are suppressed when training with the correlative decoder.
|
| 251 |
+
|
| 252 |
+
# 4.3 Comparison with the SOTA
|
| 253 |
+
|
| 254 |
+
We compare our compact tracker with state-of-the-art trackers on UAV123 (Mueller, Smith, and Ghanem 2016), LaSOT (Fan et al. 2019), TrackingNet (Muller et al. 2018), GOT-10k (Huang, Zhao, and Huang 2019), and VOT2020 (Kristan et al. 2020). For a fairer comparison, here we adopt relative position biases in our ViT backbones; this addition improves the AUC by around 1 point.
|
| 255 |
+
|
| 256 |
+
UAV123 gathers an application-specific collection of 123 sequences. It adopts the AUC and Precision (P) as the evaluation metrics. As shown in Table 6, our CTTrack-L outperforms previous trackers and exhibits very competitive performance (71.3% AUC) compared to the previous best-performing tracker CSWinTT (70.5% AUC).
|
| 257 |
+
|
| 258 |
+
Table 6: Comparisons with previous state-of-the-art trackers on four challenging benchmarks. Red, green, and blue indicate performances ranked first, second, and third. The suffix -GOT denotes a tracker trained only on the GOT-10k train split.
|
| 259 |
+
|
| 260 |
+
<table><tr><td rowspan="2">Methods</td><td colspan="2">UAV123</td><td colspan="3">LaSOT</td><td colspan="3">TrackingNet</td><td colspan="3">GOT-10k</td></tr><tr><td>AUC</td><td>P</td><td>AUC</td><td>PNorm</td><td>P</td><td>AUC</td><td>PNorm</td><td>P</td><td>AO</td><td>SR0.5</td><td>SR0.75</td></tr><tr><td>CTTrack-L</td><td>71.3</td><td>93.3</td><td>69.8</td><td>79.7</td><td>76.2</td><td>84.9</td><td>89.1</td><td>83.5</td><td>75.3</td><td>84.5</td><td>74.0</td></tr><tr><td>CTTrack-B</td><td>68.8</td><td>89.5</td><td>67.8</td><td>77.8</td><td>74.0</td><td>82.5</td><td>87.1</td><td>80.3</td><td>73.5</td><td>83.5</td><td>70.6</td></tr><tr><td>CTTrack-L -GOT</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>72.8</td><td>81.3</td><td>71.5</td></tr><tr><td>CTTrack-B -GOT</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>71.3</td><td>80.7</td><td>70.3</td></tr><tr><td>MixFormer(Cui et al. 2022)</td><td>69.5</td><td>91.0</td><td>70.1</td><td>79.9</td><td>76.3</td><td>83.9</td><td>88.9</td><td>83.1</td><td>70.7</td><td>80.0</td><td>67.8</td></tr><tr><td>CSWinTT(Song et al. 2022)</td><td>70.5</td><td>90.3</td><td>66.2</td><td>75.2</td><td>70.9</td><td>81.9</td><td>86.7</td><td>79.5</td><td>69.4</td><td>78.9</td><td>65.4</td></tr><tr><td>UTT(Shen et al. 2022)</td><td>-</td><td>-</td><td>64.6</td><td>-</td><td>67.2</td><td>79.7</td><td>-</td><td>77.0</td><td>67.2</td><td>76.3</td><td>60.5</td></tr><tr><td>STARK(Yan et al. 2021)</td><td>-</td><td>-</td><td>67.1</td><td>77.0</td><td>-</td><td>82.0</td><td>86.9</td><td>-</td><td>68.8</td><td>78.1</td><td>64.1</td></tr><tr><td>TransT(Chen et al. 2021)</td><td>68.1</td><td>87.6</td><td>64.9</td><td>73.8</td><td>69.0</td><td>81.4</td><td>86.7</td><td>80.3</td><td>67.1</td><td>76.8</td><td>60.9</td></tr><tr><td>TrDiMP(Wang et al. 2021)</td><td>67.0</td><td>87.6</td><td>64.0</td><td>73.2</td><td>66.6</td><td>78.4</td><td>83.3</td><td>73.1</td><td>68.8</td><td>80.5</td><td>59.7</td></tr><tr><td>STMTrack(Fu et al. 2021)</td><td>64.7</td><td>-</td><td>60.6</td><td>69.3</td><td>63.3</td><td>80.3</td><td>85.1</td><td>76.7</td><td>64.2</td><td>73.7</td><td>57.5</td></tr><tr><td>AutoMatch(Zhang et al. 2021)</td><td>64.4</td><td>83.8</td><td>58.2</td><td>67.5</td><td>59.9</td><td>76.0</td><td>82.4</td><td>72.5</td><td>65.2</td><td>76.6</td><td>54.3</td></tr><tr><td>SiamGAT(Guo et al. 2021)</td><td>64.6</td><td>84.3</td><td>53.9</td><td>63.3</td><td>53.0</td><td>-</td><td>-</td><td>-</td><td>62.7</td><td>74.3</td><td>48.8</td></tr><tr><td>KYS(Bhat et al. 2020)</td><td>-</td><td>-</td><td>55.4</td><td>63.3</td><td>55.8</td><td>74.0</td><td>80.0</td><td>68.8</td><td>63.6</td><td>75.1</td><td>51.5</td></tr><tr><td>MAML(Wang et al. 2020)</td><td>-</td><td>-</td><td>52.3</td><td>-</td><td>53.1</td><td>75.7</td><td>82.2</td><td>72.5</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SiamAttn(Yu et al. 2020)</td><td>65.0</td><td>84.5</td><td>56.0</td><td>64.8</td><td>-</td><td>75.2</td><td>81.7</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SiamFC++(Xu et al. 2020)</td><td>61.8</td><td>80.4</td><td>54.4</td><td>62.3</td><td>54.7</td><td>75.4</td><td>80.0</td><td>70.5</td><td>59.5</td><td>69.5</td><td>47.9</td></tr><tr><td>SiamRPN++(Li et al. 2019)</td><td>64.2</td><td>84.0</td><td>49.6</td><td>56.9</td><td>49.1</td><td>73.3</td><td>80.0</td><td>69.4</td><td>51.7</td><td>61.6</td><td>32.5</td></tr><tr><td>DiMP(Bhat et al. 
2019)</td><td>64.2</td><td>84.9</td><td>57.7</td><td>66.4</td><td>57.9</td><td>74.0</td><td>80.1</td><td>68.7</td><td>61.1</td><td>71.7</td><td>49.2</td></tr><tr><td>ATOM(Danelljan et al. 2019)</td><td>61.7</td><td>82.7</td><td>51.5</td><td>57.6</td><td>50.5</td><td>70.3</td><td>77.1</td><td>64.8</td><td>55.6</td><td>63.4</td><td>40.2</td></tr></table>
|
| 261 |
+
|
| 262 |
+
Table 7: Comparisons on VOT2020, where trackers only predict bounding boxes rather than masks.
|
| 263 |
+
|
| 264 |
+
<table><tr><td>Methods</td><td>EAO↑</td><td>Accuracy↑</td><td>Robustness↑</td></tr><tr><td>SiamFC</td><td>0.179</td><td>0.418</td><td>0.502</td></tr><tr><td>ATOM</td><td>0.271</td><td>0.462</td><td>0.734</td></tr><tr><td>DiMP</td><td>0.274</td><td>0.457</td><td>0.740</td></tr><tr><td>UPDT</td><td>0.278</td><td>0.465</td><td>0.755</td></tr><tr><td>TransT</td><td>0.293</td><td>0.477</td><td>0.754</td></tr><tr><td>CSWinTT</td><td>0.304</td><td>0.480</td><td>0.787</td></tr><tr><td>CTTrack-L</td><td>0.287</td><td>0.453</td><td>0.787</td></tr></table>
|
| 265 |
+
|
| 266 |
+
LaSOT is a long-term dataset containing 1400 sequences distributed over 14 attributes; the testing subset of LaSOT contains 280 sequences. Methods are ranked by AUC, P, and Normalized Precision ($P_{Norm}$). Our CTTrack-L achieves 69.8% AUC and 76.2% Prec, an excellent result that outperforms all other methods except MixFormer. Our tracker has lower performance than MixFormer on LaSOT because this dataset contains long-term sequences and large variations in content; the ViT backbone is a plain, non-hierarchical architecture that maintains feature maps at a single scale, which may not handle long-term tracking sequences with scale variations well.
|
| 267 |
+
|
| 268 |
+
TrackingNet is a large-scale tracking dataset consisting of 511 sequences for testing. The evaluation is performed on the official online server. Table 6 shows that CTTrack-L delivers the best quality and ranks first in AUC at $84.9\%$, a $1.0\%$ improvement over the previous best result.
|
| 269 |
+
|
| 270 |
+
GOT-10k contains over 10k videos for training and 180 for
|
| 271 |
+
|
| 272 |
+
testing. It forbids trackers from using external datasets for training. We follow this protocol by retraining our trackers using only the GOT-10k train split. As shown in Table 6, MixFormer and CSWinTT previously provided the best performance, with AO scores of $70.7\%$ and $69.4\%$. Our CTTrack-L obtains an AO score of $72.8\%$, significantly outperforming the best existing tracker by $2.1\%$.
|
| 273 |
+
|
| 274 |
+
The VOT2020 benchmark contains 60 challenging videos. Performance is evaluated using the expected average overlap (EAO), which takes both accuracy (A) and robustness (R) into account. Since our algorithm does not output a segmentation mask, only trackers that predict bounding boxes are selected for comparison to ensure fairness. As shown in Table 7, our CTTrack-L obtains an EAO of 0.287.
|
| 275 |
+
|
| 276 |
+
# 5 Conclusion
|
| 277 |
+
|
| 278 |
+
In this work, we analyze the information stream in the attention mechanism in depth. We prove that the vanilla self-attention structure is sufficient for information aggregation, and employ the three information streams of the packed self-attention in the transformer tracking framework. To enhance the information representation, we design the correlative masked decoder consisting of a self-decoder and a cross-decoder to reconstruct the original pixels of both template and search image. Extensive experiments demonstrate the effectiveness of our correlative masked modeling strategy and our compact transformer tracker exhibits impressive performance over previous trackers. In addition, our correlative masked decoder can be plugged into other transformer trackers, which can effectively improve the tracking performance without compromising speed. In the future, we plan to combine the feature pyramid or convolution module for better performance on long-term tracking sequences.
|
| 279 |
+
|
| 280 |
+
# Acknowledgments
|
| 281 |
+
|
| 282 |
+
This work is supported by the National Key Research and Development Program of China under Grant No. 2020YFB1805601, the National Natural Science Foundation of China (NSFC No. 62272184), and the CCF-Tencent Open Research Fund (CCF-Tencent RAGR20220120). The computation is completed on the HPC Platform of Huazhong University of Science and Technology.
|
| 283 |
+
|
| 284 |
+
# References
|
| 285 |
+
|
| 286 |
+
Bao, H.; Dong, L.; and Wei, F. 2021. Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254.
|
| 287 |
+
Bertinetto, L.; Valmadre, J.; Henriques, J. F.; Vedaldi, A.; and Torr, P. H. S. 2016. Fully-Convolutional Siamese Networks for Object Tracking. In Proceedings of the ECCV, 850-865. Springer.
|
| 288 |
+
Bhat, G.; Danelljan, M.; Gool, L. V.; and Timofte, R. 2019. Learning Discriminative Model Prediction for Tracking. In Proceedings of the ICCV, 6182-6191. IEEE.
|
| 289 |
+
Bhat, G.; Danelljan, M.; Van Gool, L.; and Timofte, R. 2020. Know Your Surroundings: Exploiting Scene Information for Object Tracking. In Proceedings of the ECCV. Springer.
|
| 290 |
+
Bolme, D. S.; Beveridge, J. R.; Draper, B. A.; and Lui, Y. M. 2010. Visual object tracking using adaptive correlation filters. In Proceedings of the CVPR, 2544-2550. IEEE.
|
| 291 |
+
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In ECCV, 213-229. Springer.
|
| 292 |
+
Chen, M.; Radford, A.; Child, R.; Wu, J.; Jun, H.; Luan, D.; and Sutskever, I. 2020. Generative pretraining from pixels. In International conference on machine learning, 1691-1703. PMLR.
|
| 293 |
+
Chen, X.; Yan, B.; Zhu, J.; Wang, D.; Yang, X.; and Lu, H. 2021. Transformer tracking. In Proceedings of the CVPR, 8126-8135.
|
| 294 |
+
Cui, Y.; Jiang, C.; Wang, L.; and Wu, G. 2022. MixFormer: End-to-End Tracking With Iterative Mixed Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13608-13618.
|
| 295 |
+
Dalal, N.; and Triggs, B. 2005. Histograms of oriented gradients for human detection. In 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR'05), volume 1, 886-893. IEEE.
|
| 296 |
+
Danelljan, M.; Bhat, G.; Khan, F. S.; and Felsberg, M. 2019. ATOM: Accurate Tracking by Overlap Maximization. In Proceedings of the CVPR, 4660-4669. IEEE.
|
| 297 |
+
Danelljan, M.; Bhat, G.; Shahbaz Khan, F.; and Felsberg, M. 2017. ECO: Efficient Convolution Operators for Tracking. In Proceedings of the CVPR, 6638-6646. IEEE.
|
| 298 |
+
Danelljan, M.; Robinson, A.; Khan, F. S.; and Felsberg, M. 2016. Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking. In Proceedings of the ECCV, 472-488. Springer.
|
| 299 |
+
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.;
|
| 300 |
+
|
| 301 |
+
Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR.
|
| 302 |
+
Fan, H.; Lin, L.; Yang, F.; Chu, P.; Deng, G.; Yu, S.; Bai, H.; Xu, Y.; Liao, C.; and Ling, H. 2019. LaSOT: A High-Quality Benchmark for Large-Scale Single Object Tracking. In Proceedings of the CVPR. IEEE.
|
| 303 |
+
Fu, Z.; Liu, Q.; Fu, Z.; and Wang, Y. 2021. Stmtrack: Template-free visual tracking with space-time memory networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13774-13783.
|
| 304 |
+
Guo, D.; Shao, Y.; Cui, Y.; Wang, Z.; Zhang, L.; and Shen, C. 2021. Graph attention tracking. In Proceedings of the CVPR, 9543-9552.
|
| 305 |
+
He, K.; Chen, X.; Xie, S.; Li, Y.; Dollar, P.; and Girshick, R. 2022. Masked Autoencoders Are Scalable Vision Learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 16000-16009.
|
| 306 |
+
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In Proceedings of the CVPR, 770-778. IEEE.
|
| 307 |
+
Henriques, J. F.; Caseiro, R.; Martins, P.; and Batista, J. 2015. High-Speed Tracking with Kernelized Correlation Filters. IEEE TPAMI, 37(3): 583-596.
|
| 308 |
+
Huang, L.; Zhao, X.; and Huang, K. 2019. GOT-10k: A Large High-Diversity Benchmark for Generic Object Tracking in the Wild. IEEE TPAMI.
|
| 309 |
+
Jiang, B.; Luo, R.; Mao, J.; Xiao, T.; and Jiang, Y. 2018. Acquisition of localization confidence for accurate object detection. In Proceedings of the European conference on computer vision (ECCV), 784-799.
|
| 310 |
+
Kristan, M.; Leonardis, A.; Matas, J.; Felsberg, M.; Pflugfelder, R.; Kämäräinen, J.-K.; Danelljan, M.; Zajc, L. C.; Lukežić, A.; Drbohlav, O.; et al. 2020. The eighth visual object tracking VOT2020 challenge results. In ECCV, 547-601. Springer.
|
| 311 |
+
Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25: 1097-1105.
|
| 312 |
+
Li, B.; Wu, W.; Wang, Q.; Zhang, F.; Xing, J.; and Yan, J. 2019. SiamRPN++: Evolution of Siamese Visual Tracking With Very Deep Networks. In Proceedings of the CVPR, 4282-4291. IEEE.
|
| 313 |
+
Li, B.; Yan, J.; Wu, W.; Zhu, Z.; and Hu, X. 2018. High Performance Visual Tracking With Siamese Region Proposal Network. In Proceedings of the CVPR, 8971-8980. IEEE.
|
| 314 |
+
Li, Y.; Mao, H.; Girshick, R.; and He, K. 2022. Exploring plain vision transformer backbones for object detection. arXiv preprint arXiv:2203.16527.
|
| 315 |
+
Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollar, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In ECCV, 740-755. Springer.
|
| 316 |
+
|
| 317 |
+
Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the ICCV.
|
| 318 |
+
Loshchilov, I.; and Hutter, F. 2018. Decoupled weight decay regularization. In Proceedings of the ICLR.
|
| 319 |
+
Mueller, M.; Smith, N.; and Ghanem, B. 2016. A benchmark and simulator for uav tracking. In Proceedings of the ECCV, 445-461. Springer.
|
| 320 |
+
Muller, M.; Bibi, A.; Giancola, S.; Alsubaihi, S.; and Ghanem, B. 2018. TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild. In Proceedings of the ECCV.
|
| 321 |
+
Nam, H.; and Han, B. 2016. Learning Multi-Domain Convolutional Neural Networks for Visual Tracking. In Proceedings of the CVPR, 4293-4302. IEEE.
|
| 322 |
+
Pu, S.; Song, Y.; Ma, C.; Zhang, H.; and Yang, M.-H. 2018. Deep Attentive Tracking via Reciprocative Learning. In NeurIPS, 1931-1941.
|
| 323 |
+
Ramesh, A.; Pavlov, M.; Goh, G.; Gray, S.; Voss, C.; Radford, A.; Chen, M.; and Sutskever, I. 2021. Zero-shot text-to-image generation. In International Conference on Machine Learning, 8821-8831. PMLR.
|
| 324 |
+
Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; and Savarese, S. 2019. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the CVPR, 658-666.
|
| 325 |
+
Shen, Q.; Qiao, L.; Guo, J.; Li, P.; Li, X.; Li, B.; Feng, W.; Gan, W.; Wu, W.; and Ouyang, W. 2022. Unsupervised Learning of Accurate Siamese Tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8101-8110.
|
| 326 |
+
Simonyan, K.; and Zisserman, A. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In International Conference on Learning Representations.
|
| 327 |
+
Song, Z.; Yu, J.; Chen, Y.-P. P.; and Yang, W. 2022. Transformer Tracking With Cyclic Shifting Window Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8791-8800.
|
| 328 |
+
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is all you need. In NIPS, 5998-6008.
|
| 329 |
+
Voigtlaender, P.; Luiten, J.; Torr, P. H.; and Leibe, B. 2020. Siam r-cnn: Visual tracking by re-detection. In Proceedings of the CVPR, 6578-6588.
|
| 330 |
+
Wang, G.; Luo, C.; Sun, X.; Xiong, Z.; and Zeng, W. 2020. Tracking by instance detection: A meta-learning approach. In Proceedings of the CVPR, 6288-6297.
|
| 331 |
+
Wang, N.; Zhou, W.; Wang, J.; and Li, H. 2021. Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking. In Proceedings of the CVPR, 1571-1580.
|
| 332 |
+
Wei, C.; Fan, H.; Xie, S.; Wu, C.-Y.; Yuille, A.; and Feichtenhofer, C. 2022. Masked feature prediction for self-supervised visual pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14668-14678.
|
| 333 |
+
|
| 334 |
+
Wu, H.; Xiao, B.; Codella, N.; Liu, M.; Dai, X.; Yuan, L.; and Zhang, L. 2021. Cvt: Introducing convolutions to vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 22-31.
|
| 335 |
+
Xie, Z.; Zhang, Z.; Cao, Y.; Lin, Y.; Bao, J.; Yao, Z.; Dai, Q.; and Hu, H. 2022. SimMIM: A Simple Framework for Masked Image Modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9653-9663.
|
| 336 |
+
Xu, Y.; Wang, Z.; Li, Z.; Yuan, Y.; and Yu, G. 2020. SiamFC++: Towards robust and accurate visual tracking with target estimation guidelines. In Proceedings of the AAAI, volume 34, 12549-12556.
|
| 337 |
+
Yan, B.; Peng, H.; Fu, J.; Wang, D.; and Lu, H. 2021. Learning spatio-temporal transformer for visual tracking. In Proceedings of the ICCV.
|
| 338 |
+
Yu, Y.; Xiong, Y.; Huang, W.; and Scott, M. R. 2020. Deformable siamese attention networks for visual object tracking. In Proceedings of the CVPR, 6728-6737.
|
| 339 |
+
Zhang, Z.; Liu, Y.; Wang, X.; Li, B.; and Hu, W. 2021. Learn to match: Automatic matching network design for visual tracking. In Proceedings of the ICCV, 13339-13348.
|
2301.10xxx/2301.10938/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b801f96b9bfd012577a1462f6f1ccceae9f914fc92fb6834f2435364530560b8
|
| 3 |
+
size 514871
|
2301.10xxx/2301.10938/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10941/c35dbdab-14b3-4d81-9ce3-5fce0461d6c8_content_list.json
ADDED
|
@@ -0,0 +1,1761 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
[
  {"type": "text", "text": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency", "text_level": 1, "bbox": [114, 109, 854, 131], "page_idx": 0},
  {"type": "text", "text": "Min-Seop Kwak*1 Jiuhn Song*1 Seungryong Kim", "bbox": [292, 176, 671, 193], "page_idx": 0},
  {"type": "text", "text": "Abstract", "text_level": 1, "bbox": [241, 220, 318, 236], "page_idx": 0},
  {"type": "text", "text": "We present a novel framework to regularize Neural Radiance Field (NeRF) in a few-shot setting with a geometric consistency regularization. The proposed approach leverages a rendered depth map at unobserved viewpoint to warp sparse input images to the unobserved viewpoint and impose them as pseudo ground truths to facilitate learning of NeRF. By encouraging such geometric consistency at a feature-level instead of using pixel-level reconstruction loss, we regularize the NeRF at semantic and structural levels while allowing for modeling view-dependent radiance to account for color variations across viewpoints. We also propose an effective method to filter out erroneous warped solutions, along with training strategies to stabilize training during optimization. We show that our model achieves competitive results compared to state-of-the-art few-shot NeRF models.", "bbox": [117, 246, 444, 518], "page_idx": 0},
  {"type": "text", "text": "1. Introduction", "text_level": 1, "bbox": [86, 550, 217, 566], "page_idx": 0},
  {"type": "text", "text": "Recently, representing a 3D scene as a Neural Radiance Field (NeRF) (Mildenhall et al., 2020) has proven to be a powerful approach for novel view synthesis and 3D reconstruction (Barron et al., 2021; Jain et al., 2021; Chen et al., 2021). However, despite its impressive performance, NeRF requires a large number of densely, well distributed calibrated images for optimization, which limits its applicability. When limited to sparse observations, NeRF easily overfits to the input view images and is unable to reconstruct correct geometry (Zhang et al., 2020).", "bbox": [84, 577, 475, 728], "page_idx": 0},
  {"type": "text", "text": "The task that directly addresses this problem, also called a few-shot NeRF, aims to optimize high-fidelity neural radiance field in such sparse scenarios (Jain et al., 2021; Kim", "bbox": [84, 734, 475, 781], "page_idx": 0},
  {"type": "text", "text": "et al., 2022; Niemeyer et al., 2022), countering the underconstrained nature of said problem by introducing additional priors. Specifically, previous works attempted to solve this by utilizing a semantic feature (Jain et al., 2021), entropy minimization (Kim et al., 2022), SfM depth priors (Deng et al., 2022) or normalizing flow (Niemeyer et al., 2022), but their necessity for handcrafted methods or inability to extract local and fine structures limited their performance.", "bbox": [495, 220, 887, 343], "page_idx": 0},
  {"type": "text", "text": "To alleviate these issues, we propose a novel regularization technique that enforces a geometric consistency across different views with a depth-guided warping and a geometry-aware consistency modeling. Based on these, we propose a novel framework, called Neural Radiance Fields with Geometric Consistency (GeCoNeRF), for training neural radiance fields in a few-shot setting. Our key insight is that we can leverage a depth rendered by NeRF to warp sparse input images to novel viewpoints, and use them as pseudo ground truths to facilitate learning of fine details and high-frequency features by NeRF. By encouraging images rendered at novel views to model warped images with a consistency loss, we can successfully constrain both geometry and appearance to boost fidelity of neural radiance fields even in highly under-constrained few-shot setting. Taking into consideration non-Lambertian nature of given datasets, we propose feature-level regularization loss that captures contextual and structural information while largely ignoring individual color differences. We also present a method to generate a consistency mask to prevent inconsistently warped information from harming the network. Finally, we provide coarse-to-fine training strategies for sampling and pose generation to stabilize optimization of the model.", "bbox": [495, 349, 888, 698], "page_idx": 0},
  {"type": "text", "text": "We demonstrate the effectiveness of our method on synthetic and real datasets (Mildenhall et al., 2020; Jensen et al., 2014). Experimental results prove the effectiveness of the proposed model over the latest methods for few-shot novel view synthesis.", "bbox": [495, 704, 888, 781], "page_idx": 0},
  {"type": "text", "text": "2. Related Work", "text_level": 1, "bbox": [496, 799, 638, 814], "page_idx": 0},
  {"type": "text", "text": "Neural radiance fields. Among the most notable of approaches regarding the task of novel view synthesis and 3D reconstruction is Neural Radiance Field (NeRF) (Mildenhall et al., 2020), where photo-realistic images are rendered by a simple MLP architecture. Sparked by its impress", "bbox": [495, 825, 888, 902], "page_idx": 0},
  {"type": "page_footnote", "text": "*Equal contribution ${}^{1}$ Department of Computer Science and Engineering,Korea University,Seoul,Korea.Authors: Min-Seop Kwak <mskwak01@korea.ac.kr>, Jiuhn Song <jiuhn-song@korea.ac.kr>. Correspondence to: Seungryong Kim <seungryong.kim@korea.ac.kr>.", "bbox": [84, 789, 475, 854], "page_idx": 0},
  {"type": "page_footnote", "text": "Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).", "bbox": [84, 864, 473, 905], "page_idx": 0},
  {"type": "aside_text", "text": "arXiv:2301.10941v3 [cs.CV] 27 Apr 2023", "bbox": [22, 262, 60, 705], "page_idx": 0},
  {"type": "image", "img_path": "images/299ad1fa96e06e23f38f5d62eafab6103f5d862a38b0e7c16d247e18d5a9c3bd.jpg", "image_caption": ["Figure 1. Illustration of our consistency modeling pipeline for few-shot NeRF. Given an image $I_{i}$ and estimated depth map $D_{j}$ of $j$ -th unobserved viewpoint, we warp the image $I_{i}$ to that novel viewpoint as $I_{i\\rightarrow j}$ by establishing geometric correspondence between two viewpoints. Using the warped image as a pseudo ground truth, we cause rendered image of unseen viewpoint, $I_{j}$ , to be consistent in structure with warped image, with occlusions taken into consideration."], "image_footnote": [], "bbox": [98, 84, 880, 349], "page_idx": 1},
  {"type": "text", "text": "sive performance, a variety of follow-up studies based on its continuous neural volumetric representation have been prompted, including dynamic and deformable scenes (Park et al., 2021; Tretschk et al., 2021; Pumarola et al., 2021; Attal et al., 2021), real-time rendering (Yu et al., 2021a; Hedman et al., 2021; Reiser et al., 2021; Müller et al., 2022), self-calibration (Jeong et al., 2021) and generative modeling (Schwarz et al., 2020; Niemeyer & Geiger, 2021; Xu et al., 2021; Deng et al., 2021). Mip-NeRF (Barron et al., 2021) eliminates aliasing artifacts by adopting cone tracing with a single multi-scale MLP. In general, most of these works have difficulty in optimizing a single scene with a few number of images.", "bbox": [83, 430, 475, 626], "page_idx": 1},
  {"type": "text", "text": "Few-shot NeRF. One key limitation of NeRF is its necessity for large number of calibrated views in optimizing neural radiance fields. Some recent works attempted to address this in the case where only few observed views of the scene are available. PixelNeRF(Yu et al., 2021b) conditions a NeRF on image inputs using local CNN features. This conditional model allows the network to learn scene priors across multiple scenes. Stereo radiance fields (Chibane et al., 2021) use local CNN features from input views for scene geometry reasoning and MVSNeRF (Chen et al., 2021) combines cost volume with neural radiance field for improved performance. However, pre-training with multi-view images of numerous scenes are essential for these methods for them to learn reconstruction priors.", "bbox": [83, 641, 475, 852], "page_idx": 1},
  {"type": "text", "text": "Other works attempt different approaches of optimizing NeRF from scratch in few-shot settings: DSNeRF (Deng et al., 2022) makes use of depth supervision to network to optimize", "bbox": [84, 859, 475, 905], "page_idx": 1},
  {"type": "text", "text": "a scene with few images. (Roessle et al., 2021) also utilizes sparse depth prior by extending into dense depth map by depth completion module to guide network optimization. On the other hand, there are models that tackle depth prior-free few-shot optimization: DietNeRF (Jain et al., 2021) enforces semantic consistency between rendered images from unseen view and seen images. RegNeRF (Niemeyer et al., 2022) regularizes the geometry and appearance of patches rendered from unobserved viewpoints. InfoNeRF (Kim et al., 2022) constrains the density's entropy in each ray and ensures consistency across rays in the neighborhood. While these methods constrain NeRF into learning more realistic geometry, their regularizations are limited in that they require extensive dataset-specific fine-tuning and that they only provide regularization at a global level in a generalized manner.", "bbox": [495, 430, 888, 672], "page_idx": 1},
  {"type": "text", "text": "Self-supervised photometric consistency. In the field of multiview stereo depth estimation, consistency modeling between stereo images and their warped images has been widely used for self-supervised training (Godard et al., 2017; Garg et al., 2016; Zhou et al., 2017) In weakly supervised or unsupervised settings (Huang et al., 2021; Khot et al., 2019) where there is lack of ground truth depth information, consistency modeling between images with geometry-based warping is used as a supervisory signal (Zhou et al., 2017; Huang et al., 2021; Khot et al., 2019) formulating depth learning as a form of reconstruction task between viewpoints.", "bbox": [495, 686, 888, 867], "page_idx": 1},
  {"type": "text", "text": "Recently, methods utilizing self-supervised photometric consistency have been introduced to NeRF: concurrent", "bbox": [496, 875, 885, 905], "page_idx": 1},
  {"type": "header", "text": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency", "bbox": [251, 56, 718, 70], "page_idx": 1},
  {"type": "text", "text": "works such as NeuralWarp (Darmon et al., 2022), Struct-NeRF (Chen et al., 2022) and Geo-NeuS (Fu et al., 2022) model photometric consistency between source images and their warped counterparts from other source viewpoints to improve their reconstruction quality. However, these methods only discuss dense view input scenarios where pose differences between source viewpoints are small, and do not address their behavior in few-shot settings - where sharp performance drop is expected due to scarcity of input viewpoints and increased difficulty in the warping procedure owing to large viewpoint differences and heavy self-occlusions. RapNeRF (Zhang et al., 2022) uses geometry-based reprojection method to enhance view extrapolation performance, and (Bortolon et al., 2022) uses depth rendered by NeRF as correspondence information for view-morphing module to synthesize images between input viewpoints. However, these methods do not take occlusions into account, and their pixel-level photometric consistency modeling comes with downside of suppressing view-dependent specular effects.", "bbox": [84, 84, 475, 372], "page_idx": 2},
  {"type": "text", "text": "3. Preliminaries", "text_level": 1, "bbox": [84, 391, 225, 407], "page_idx": 2},
  {"type": "text", "text": "Neural Radiance Field (NeRF) (Mildenhall et al., 2020) represents a scene as a continuous function $f_{\\theta}$ represented by a neural network with parameters $\\theta$ , where the points are sampled along rays, represented by $r$ , for evaluation by the neural network. Typically, the sampled coordinates $\\mathbf{x} \\in \\mathbb{R}^3$ and view direction $\\mathbf{d} \\in \\mathbb{R}^2$ are transformed by a positional encoding $\\gamma$ into Fourier features (Tancik et al., 2020) that facilitates learning of high-frequency details. The neural network $f_{\\theta}$ takes as input the transformed coordinate $\\gamma(\\mathbf{x})$ and viewing directions $\\gamma(\\mathbf{d})$ , and outputs a view-invariant density value $\\sigma \\in \\mathbb{R}$ and a view-dependent color value $\\mathbf{c} \\in \\mathbb{R}^3$ such that", "bbox": [84, 416, 473, 598], "page_idx": 2},
  {"type": "equation", "text": "\n$$\n\\{\\mathbf {c}, \\sigma \\} = f _ {\\theta} (\\gamma (\\mathbf {x}), \\gamma (\\mathbf {d})). \\tag {1}\n$$\n", "text_format": "latex", "bbox": [192, 606, 473, 625], "page_idx": 2},
  {"type": "text", "text": "With a ray parameterized as $\\mathbf{r}_p(t) = \\mathbf{o} + t\\mathbf{d}_p$ from the camera center $\\mathbf{o}$ through the pixel $p$ along direction $\\mathbf{d}_p$ , the color is rendered as follows:", "bbox": [84, 628, 473, 675], "page_idx": 2},
  {"type": "equation", "text": "\n$$\nC (\\mathbf {r} _ {p}) = \\int_ {t _ {n}} ^ {t _ {f}} T (t) \\sigma (\\mathbf {r} _ {p} (t)) \\mathbf {c} (\\mathbf {r} _ {p} (t), \\mathbf {d} _ {p}) d t, \\tag {2}\n$$\n", "text_format": "latex", "bbox": [135, 679, 473, 715], "page_idx": 2},
  {"type": "text", "text": "where $C(\\mathbf{r}_p)$ is a predicted color value at the pixel $p$ along the ray $\\mathbf{r}_p(t)$ from $t_n$ to $t_f$ , and $T(t)$ denotes an accumulated transmittance along the ray from $t_n$ to $t$ , defined such that", "bbox": [84, 720, 473, 767], "page_idx": 2},
  {"type": "equation", "text": "\n$$\nT (t) = \\exp \\left(- \\int_ {t _ {n}} ^ {t} \\sigma (\\mathbf {r} _ {p} (s)) d s\\right). \\tag {3}\n$$\n", "text_format": "latex", "bbox": [165, 773, 473, 809], "page_idx": 2},
  {"type": "text", "text": "To optimize the networks $f_{\\theta}$ , the observation loss $\\mathcal{L}_{\\mathrm{obs}}$ enforces the rendered color values to be consistent with ground truth color value $C^{\\prime}(\\mathbf{r})$ :", "bbox": [84, 821, 475, 867], "page_idx": 2},
  {"type": "equation", "text": "\n$$\n\\mathcal {L} _ {\\mathrm {o b s}} = \\sum_ {\\mathbf {r} _ {p} \\in \\mathcal {R}} \\| C ^ {\\prime} (\\mathbf {r} _ {p}) - C (\\mathbf {r} _ {p}) \\| _ {2} ^ {2}, \\tag {4}\n$$\n", "text_format": "latex", "bbox": [166, 875, 473, 907], "page_idx": 2},
  {"type": "text", "text": "where $\\mathcal{R}$ represents a batch of training rays.", "bbox": [496, 84, 787, 101], "page_idx": 2},
  {"type": "text", "text": "4. Methodology", "text_level": 1, "bbox": [496, 112, 633, 130], "page_idx": 2},
  {"type": "text", "text": "4.1. Motivation and Overview", "text_level": 1, "bbox": [496, 138, 709, 152], "page_idx": 2},
  {"type": "text", "text": "Let us denote an image at $i$ -th viewpoint as $I_{i}$ . In a few-shot novel view synthesis, NeRF is given only a few images $\\{I_i\\}$ for $i \\in \\{1, \\dots, N\\}$ with small $N$ , e.g., $N = 3$ or $N = 5$ . The objective of novel view synthesis is to train the mapping function $f_{\\theta}$ that can be used to recover an image $I_{j}$ at $j$ -th unseen or novel viewpoint. As we described above, in the few-shot setting, given $\\{I_i\\}$ , directly optimizing $f_{\\theta}$ solely with the pixel-wise reconstruction loss $\\mathcal{L}_{\\mathrm{obs}}$ is limited by its inability to model view-dependent effects, and thus an additional regularization to encourage the network $f_{\\theta}$ to generate consistent appearance and geometry is required.", "bbox": [495, 162, 885, 329], "page_idx": 2},
  {"type": "text", "text": "To achieve this, we propose a novel regularization technique to enforce a geometric consistency across different views with depth-guided warping and consistency modeling. We focus on the fact that NeRF (Mildenhall et al., 2020) inherently renders not only color image but depth image as well. Combined with known viewpoint difference, the rendered depths can be used to define a geometric correspondence relationship between two arbitrary views.", "bbox": [495, 335, 885, 455], "page_idx": 2},
  {"type": "text", "text": "Specifically, we consider a depth image rendered by the NeRF model, $D_{j}$ at unseen viewpoint $j$ . By formulating a warping function $\\psi (I_i;D_j,R_{i\\rightarrow j})$ that warps an image $I_{i}$ according to the depth $D_{j}$ and viewpoint difference $R_{i\\rightarrow j}$ , we can encourage a consistency between warped image $I_{i\\rightarrow j} = \\psi (I_i;D_j,R_{i\\rightarrow j})$ and rendered image $I_{j}$ at $j$ -th unseen viewpoint, which in turn improves the few-shot novel view synthesis performance. This framework can overcome the limitations of previous few-shot setting approaches (Mildenhall et al., 2020; Chen et al., 2021; Barron et al., 2021), improving not only global geometry but also high-frequency details and appearance as well.", "bbox": [495, 464, 885, 647], "page_idx": 2},
  {"type": "text", "text": "In the following, we first explain how input images can be warped to unseen viewpoints in our framework. Then, we demonstrate how we impose consistency upon the pair of warped image and rendered image for regularization, followed by explanation of occlusion handling method and several training strategies that proved crucial for stabilization of NeRF optimization in few-shot scenario.", "bbox": [495, 652, 885, 758], "page_idx": 2},
  {"type": "text", "text": "4.2. Rendered Depth-Guided Warping", "text_level": 1, "bbox": [496, 768, 769, 786], "page_idx": 2},
  {"type": "text", "text": "To render an image at novel viewpoints, we first sample a random camera viewpoint, from which corresponding ray vectors are generated in a patch-wise manner. As NeRF outputs density and color values of sampled points along the novel rays, we use recovered density values to render a consistent depth map. Following (Mildenhall et al., 2020), we formulate per-ray depth values as weighted composition of", "bbox": [495, 792, 885, 898], "page_idx": 2},
  {"type": "header", "text": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency", "bbox": [251, 56, 718, 71], "page_idx": 2},
  {"type": "image", "img_path": "images/afc9a2d5f39bf4f05be6acfd37908ca12035ae542a33f135f5a3ba1ef6116b38.jpg", "image_caption": ["Figure 2. Illustration of the proposed framework. GeCoNeRF regularizes the networks with consistency modeling. Consistency loss function $\\mathcal{L}_{\\mathrm{cons}}^M$ is applied between unobserved viewpoint image and warped observed viewpoint image, while disparity regularization loss $\\mathcal{L}_{\\mathrm{reg}}$ regularizes depth at seen viewpoints."], "image_footnote": [], "bbox": [101, 84, 872, 310], "page_idx": 3},
  {"type": "text", "text": "distances traveled from origin. Since ray $\\mathbf{r}_p$ corresponding to pixel $p$ is parameterized as $\\mathbf{r}_p(t) = \\mathbf{o} + t\\mathbf{d}_p$ , the depth rendering is defined similarly to the color rendering:", "bbox": [84, 378, 473, 426], "page_idx": 3},
  {"type": "equation", "text": "\n$$\nD (\\mathbf {r} _ {p}) = \\int_ {t _ {n}} ^ {t _ {f}} T (t) \\sigma (\\mathbf {r} _ {p} (t)) t d t, \\tag {5}\n$$\n", "text_format": "latex", "bbox": [173, 436, 473, 472], "page_idx": 3},
  {"type": "text", "text": "where $D(\\mathbf{r}_p)$ is a predicted depth along the ray $\\mathbf{r}_p$ . As described in Figure 1, we use the rendered depth map $D_j$ to warp input ground truth image $I_i$ to $j$ -th unseen viewpoint and acquire a warped image $I_{i\\rightarrow j}$ , which is defined as a process such that $I_{i\\rightarrow j} = \\psi (I_i;D_j,R_{i\\rightarrow j})$ . More specifically, pixel location $p_j$ in target unseen viewpoint image is transformed to $p_{j\\to i}$ at source viewpoint image by viewpoint difference $R_{j\\to i}$ and camera intrinsic parameter $K$ such that", "bbox": [84, 482, 475, 617], "page_idx": 3},
  {"type": "equation", "text": "\n$$\np _ {j \\rightarrow i} \\sim K R _ {j \\rightarrow i} D _ {j} (p _ {j}) K ^ {- 1} p _ {j}, \\tag {6}\n$$\n", "text_format": "latex", "bbox": [171, 628, 473, 648], "page_idx": 3},
  {"type": "text", "text": "where $\\sim$ indicates approximate equality and the projected coordinate $p_{j\\rightarrow i}$ is a continuous value. With a differentiable sampler, we extract color values of $p_{j\\rightarrow i}$ on $I_{i}$ . More formally, the transforming components process can be written as follows:", "bbox": [84, 659, 475, 734], "page_idx": 3},
  {"type": "equation", "text": "\n$$\nI _ {i \\rightarrow j} \\left(p _ {j}\\right) = \\operatorname {s a m p l e r} \\left(I _ {i}; p _ {j \\rightarrow i}\\right), \\tag {7}\n$$\n", "text_format": "latex", "bbox": [173, 746, 473, 763], "page_idx": 3},
  {"type": "text", "text": "where $\\text{sampler}(\\cdot)$ is a bilinear sampling operator (Jaderberg et al., 2015).", "bbox": [84, 773, 473, 805], "page_idx": 3},
  {"type": "text", "text": "Acceleration. Rendering a full image is computationally heavy and extremely timetaking, requiring tens of seconds for a single iteration. To overcome the computational bottleneck of full image rendering and warping, rays are sampled on a strided grid to make the patch with stride $s$ , which we have set as 2. After the rays undergo volumetric rendering,", "bbox": [84, 814, 475, 906], "page_idx": 3},
  {"type": "text", "text": "we upsample the low-resolution depth map back to original resolution with bilinear interpolation. This full-resolution depth map is used for the inverse warping. This way, detailed warped patches of full-resolution can be generated with only a fraction of computational cost that would be required when rendering the original sized ray batch.", "bbox": [496, 378, 885, 470], "page_idx": 3},
  {"type": "text", "text": "4.3. Consistency Modeling", "text_level": 1, "bbox": [496, 479, 687, 496], "page_idx": 3},
  {"type": "text", "text": "Given the rendered patch $I_{j}$ at $j$ -th viewpoint and the warped patch $I_{i\\rightarrow j}$ with depth $D_{j}$ and viewpoint difference $R_{i\\rightarrow j}$ , we define the consistency between the two to encourage additional regularization for globally consistent rendering. One viable option is to naively apply the pixelwise image reconstruction loss $\\mathcal{L}_{\\mathrm{pix}}$ such that", "bbox": [495, 503, 888, 595], "page_idx": 3},
  {"type": "equation", "text": "\n$$\n\\mathcal {L} _ {\\mathrm {p i x}} = \\left\\| I _ {i \\rightarrow j} - I _ {j} \\right\\|. \\tag {8}\n$$\n", "text_format": "latex", "bbox": [619, 604, 885, 625], "page_idx": 3},
  {"type": "text", "text": "However, we observe that this simple strategy is prone to cause failures in reflectant non-Lambertian surfaces where appearance changes greatly regarding viewpoints (Zhan et al., 2018). In addition, geometry-related problems, such as self-occlusion and artifacts, prohibits naive usage of pixelwise image reconstruction loss for regularization in unseen viewpoints.", "bbox": [495, 632, 887, 737], "page_idx": 3},
  {"type": "text", "text": "Feature-level consistency modeling. To overcome these issues, we propose masked feature-level regularization loss that encourages structural consistency while ignoring view-dependent radiance effects, as illustrated in Figure 2.", "bbox": [495, 747, 887, 808], "page_idx": 3},
  {"type": "text", "text": "Given an image $I$ as an input, we use a convolutional network to extract multi-level feature maps such that $f_{\\phi ,l}(I)\\in \\mathbb{R}^{H_l\\times W_l\\times C_l}$ , with channel depth $C_l$ for $l$ -th layer. To measure feature-level consistency between warped image $I_{i\\rightarrow j}$ and rendered image $I_{j}$ , we extract their features maps from $L$ layers and compute difference within each feature map", "bbox": [495, 814, 888, 906], "page_idx": 3},
  {"type": "header", "text": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency", "bbox": [251, 56, 718, 71], "page_idx": 3},
  {"type": "image", "img_path": "images/1c651c0ac94eb1e6eeaf7fb50a28d950f123b4d7bdc0550c1d2606e2f0cd93f5.jpg", "image_caption": ["(a) GT patch"], "image_footnote": [], "bbox": [96, 80, 251, 202], "page_idx": 4},
  {"type": "image", "img_path": "images/e4dee544c08849609da55e0e5b4759ac8e47a540f6a82077ef8e62cbc65f4daf.jpg", "image_caption": ["(b) Rendered patch"], "image_footnote": [], "bbox": [251, 80, 406, 202], "page_idx": 4},
  {"type": "image", "img_path": "images/895b2fdc3053b69af8452e740bf044fb453bd56bd49248ba1599195f423c90a2.jpg", "image_caption": ["(c) Warped patch"], "image_footnote": [], "bbox": [408, 80, 562, 202], "page_idx": 4},
  {"type": "image", "img_path": "images/c775c52cbd779711b7ebba20b5181ce68438c596c027b4d17a4540c528ceda93.jpg", "image_caption": ["(d) Occlusion mask"], "image_footnote": [], "bbox": [578, 80, 689, 202], "page_idx": 4},
  {"type": "image", "img_path": "images/04564af4573f78bcfc6a91151e0151040a25b87f6ced994270399ba14262277c.jpg", "image_caption": ["(e) Masked patch"], "image_footnote": [], "bbox": [717, 80, 870, 202], "page_idx": 4},
  {"type": "text", "text": "pairs that are extracted from the same layer.", "bbox": [84, 282, 375, 297], "page_idx": 4},
  {"type": "text", "text": "In accordance with the idea of using the warped image $I_{i \\to j}$ as pseudo ground truths, we allow a gradient backpropagation to pass only through the rendered image and block it for the warped image. By applying the consistency loss at multiple levels of feature maps, we cause $I_{j}$ to model after $I_{i \\to j}$ both on semantic and structural level.", "bbox": [84, 304, 473, 395], "page_idx": 4},
  {"type": "text", "text": "Formally written, the consistency loss $\\mathcal{L}_{\\mathrm{cons}}$ is defined as such that", "bbox": [84, 402, 473, 431], "page_idx": 4},
  {"type": "equation", "text": "\n$$\n\\mathcal {L} _ {\\text {c o n s}} = \\sum_ {l = 1} ^ {L} \\frac {1}{C _ {l}} \\left\\| f _ {\\phi} ^ {l} \\left(I _ {j \\rightarrow i}\\right) - f _ {\\phi} ^ {l} \\left(I _ {j}\\right)\\right\\|. \\tag {9}\n$$\n", "text_format": "latex", "bbox": [150, 439, 473, 479], "page_idx": 4},
  {"type": "text", "text": "For this loss function $\\mathcal{L}_{\\mathrm{cons}}$ , we find $l-1$ distance function most suited for our task and utilize it to measure consistency across feature difference maps. Empirically, we have discovered that VGG-19 network (Simonyan & Zisserman, 2014) yields best performance in modeling consistencies, likely due to the absence of normalization layers (Johnson et al., 2016) that scale down absolute values of feature differences. Therefore, we employ VGG19 network as our feature extractor network $f_{\\phi}$ throughout all of our models.", "bbox": [84, 496, 475, 632], "page_idx": 4},
  {"type": "text", "text": "It should be noted that our loss function differs from that of DietNeRF (Jain et al., 2021) in that while DietNeRF's consistency loss is limited to regularizing the radiance field in a globally semantic level, our loss combined with the warping module is also able to give the network highly rich information on a local, structural level as well. In other words, contrary to DietNeRF giving only high-level feature consistency, our method of using multiple levels of convolutional network for feature difference calculation can be interpreted as enforcing a mixture of all levels, from high-level semantic consistency to low-level structural consistency.", "bbox": [84, 638, 475, 806], "page_idx": 4},
  {"type": "text", "text": "Occlusion handling. In order to prevent imperfect and distorted warpings caused by erroneous geometry from influencing the model, which degrades overall reconstruction quality, we construct consistency mask $M_{l}$ to let NeRF ignore regions with geometric inconsistencies, as demonstrated in Figure 3. Instead of applying masks to the images", "bbox": [84, 814, 475, 906], "page_idx": 4},
  {"type": "image", "img_path": "images/4c053393d52d6b08f7a4803387fa33e3c22ff689c64193eb42d512e80b44101f.jpg", "image_caption": ["Figure 3. Visualization of consistency modeling process. (a) ground truth patch, (b) rendered patch at novel viewpoint, (c) warped patch, from input viewpoint to novel viewpoint, (d) occlusion mask with threshold masking, and (e) final warped patch with occlusion masking at novel viewpoint.", "Figure 4. Occlusion-aware mask generation. Mask generation by comparing geometry between novel view $j$ and source view $i$ , with $I_{i\\rightarrow j}$ being warped patch generated for view $j$ . For (a) and (b), warping does not occur correctly due to artifacts and self-occlusion, respectively. Such pixels are masked out by $M_l$ , allowing only (c), with accurate warping, as training signal for rendered image $I_j$ ."], "image_footnote": [], "bbox": [506, 282, 890, 474], "page_idx": 4},
  {"type": "text", "text": "before inputting them into the feature extractor network, we apply resized masks $M_{l}$ directly to the feature maps, after using nearest-neighbor down-sampling to make them match the dimensions of $l$ -th layer outputs.", "bbox": [495, 584, 885, 643], "page_idx": 4},
  {"type": "text", "text": "We generate $M$ by measuring consistency between rendered depth values from the target viewpoint and source viewpoint such that", "bbox": [495, 651, 885, 695], "page_idx": 4},
  {"type": "equation", "text": "\n$$\nM \\left(p _ {j}\\right) = \\left[\\left\\| D _ {j} \\left(p _ {j}\\right) - D _ {i} \\left(p _ {j \\rightarrow i}\\right)\\right\\| < \\tau \\right]. \\tag {10}\n$$\n", "text_format": "latex", "bbox": [555, 709, 883, 728], "page_idx": 4},
  {"type": "text", "text": "where $[\\cdot ]$ is Iverson bracket, and $p_j\\rightarrow i$ refers to the corresponding pixel in source viewpoint $i$ for reprojected target pixel $p_j$ of $j$ -th viewpoint. Here we measure euclidean distance between depth points rendered from target and source viewpoints as a criterion for a threshold masking. As illustrated in Figure 4, if distance between two points are greater than given threshold value $\\tau$ , we determine two rays as rendering depths of separate surfaces and mask out the corresponding pixel in viewpoint $I_{j}$ . The process takes place over every pixel in viewpoint $I_{j}$ to generate a mask $M$ the same size as rendered pixels. Through this technique, we fil", "bbox": [495, 739, 885, 905], "page_idx": 4},
  {"type": "header", "text": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency", "bbox": [251, 56, 718, 70], "page_idx": 4},
  {"type": "image", "img_path": "images/b2f9e14becd567e73ce082e5f0e65d51c84cac64e8f4f3aee2ba1d7337c74687.jpg", "image_caption": ["Figure 5. Qualitative comparison on NeRF-Synthetic (Mildenhall et al., 2020) show that in 3-view setting, our method captures fine details more robustly (such as the wire in the mic scene) and produces less artifacts (background in the materials scene) compared to previous methods. We show GeCoNeRF's results (e) with its rendered depth (f)."], "image_footnote": [], "bbox": [91, 79, 877, 303], "page_idx": 5},
  {"type": "text", "text": "ter out problematic solutions at feature level and regularize NeRF with only high-confidence image features.", "bbox": [84, 361, 473, 391], "page_idx": 5},
  {"type": "text", "text": "Based on this, the consistency loss $\\mathcal{L}_{\\mathrm{cons}}$ is extended as such that", "bbox": [84, 398, 473, 428], "page_idx": 5},
  {"type": "equation", "text": "\n$$\n\\mathcal {L} _ {\\text {c o n s}} ^ {M} = \\sum_ {l = 1} ^ {L} \\frac {1}{C _ {l} m _ {l}} \\| M _ {l} \\odot \\left(f _ {\\phi} ^ {l} \\left(I _ {i \\rightarrow j}\\right) - f _ {\\phi} ^ {l} \\left(I _ {j}\\right)\\right) \\|, \\tag {11}\n$$\n", "text_format": "latex", "bbox": [101, 433, 473, 476], "page_idx": 5},
  {"type": "text", "text": "where $m_l$ is the sum of non-zero values.", "bbox": [84, 481, 352, 494], "page_idx": 5},
  {"type": "text", "text": "Edge-aware disparity regularization. Since our method is dependent upon the quality of depth rendered by NeRF, we directly impose additional regularization on rendered depth to facilitate optimization. We further encourage local depth smoothness on rendered scenes by imposing $l-1$ penalty on disparity gradient within randomly sampled patches of input views. In addition, inspired by (Godard et al., 2017), we take into account the fact that depth discontinuities in depth maps are likely to be aligned to gradients of its color image, and introduce an edge-aware term with image gradients $\\partial I$ to weight the disparity values. Specifically, following (Godard et al., 2017), we regularize for edge-aware depth smoothness such that", "bbox": [84, 500, 475, 695], "page_idx": 5},
  {"type": "equation", "text": "\n$$\n\\mathcal {L} _ {\\text {r e g}} = \\left| \\partial_ {x} D _ {i} ^ {*} \\right| e ^ {- \\left| \\partial_ {x} I _ {i} \\right|} + \\left| \\partial_ {y} D _ {i} ^ {*} \\right| e ^ {- \\left| \\partial_ {y} I _ {i} \\right|}, \\tag {12}\n$$\n", "text_format": "latex", "bbox": [138, 700, 473, 720], "page_idx": 5},
  {"type": "text", "text": "where $D_{i}^{*} = D_{i} / \\overline{D_{i}}$ is the mean-normalized inverse depth from (Godard et al., 2017) to discourage shrinking of the estimated depth.", "bbox": [84, 726, 475, 773], "page_idx": 5},
  {"type": "text", "text": "4.4. Training Strategy", "text_level": 1, "bbox": [84, 782, 243, 799], "page_idx": 5},
  {"type": "text", "text": "In this section, we present novel training strategies to learn the model with the proposed losses.", "bbox": [84, 806, 473, 837], "page_idx": 5},
  {"type": "text", "text": "Total losses. We optimize our model with a combined final loss of original NeRF's pixel-wise reconstruction loss $\\mathcal{L}_{\\mathrm{obs}}$ and two types of regularization loss, $\\mathcal{L}_{\\mathrm{cons}}^M$ for unobserved view consistency modeling and $\\mathcal{L}_{\\mathrm{reg}}$ for disparity regularization.", "bbox": [84, 839, 475, 915], "page_idx": 5},
  {"type": "text", "text": "Progressive camera pose generation. Difficulty of of accurate warping increases the further target view is from the source view, which means that sampling far camera poses straight from the beginning of training may have negative effects on our model. Therefore, we first generate camera poses near source views, then progressively further as training proceeds. We sample noise value uniformly between an interval of $[- \\beta, + \\beta]$ and add it to the original Euler rotation angles of input view poses, with parameter $\\beta$ growing linearly from 3 to 9 degrees throughout the course of optimization. This design choice can be intuitively understood as stabilizing locations near observed viewpoints at start and propagating this regularization to further locations, where warping becomes progressively more difficult.", "bbox": [496, 359, 888, 571], "page_idx": 5},
  {"type": "text", "text": "Positional encoding frequency annealing. We find that most of the artifacts occurring are high-frequency occlusions that fill the space between scene and camera. This behaviour can be effectively suppressed by constraining the order of fourier positional encoding (Tancik et al., 2020) to low dimensions. Due to this reason, we adopt coarse-to-fine frequency annealing strategy previously used by (Park et al., 2021) to regularize our optimization. This strategy forces our network to primarily optimize from coarse, low-frequency details where self-occlusions and fine features are minimized, easing the difficulty of warping process in the beginning stages of training. Following (Park et al., 2021), the annealing equation is $\\alpha(t) = mt / K$ , with $m$ as the number of encoding frequencies, $t$ as iteration step, and we set hyper-parameter $K$ as $15k$ .", "bbox": [496, 577, 888, 805], "page_idx": 5},
  {"type": "text", "text": "5. Experiments", "text_level": 1, "bbox": [496, 816, 629, 834], "page_idx": 5},
  {"type": "text", "text": "5.1. Experimental Settings", "text_level": 1, "bbox": [496, 835, 687, 852], "page_idx": 5},
  {"type": "text", "text": "Baselines. We use mip-NeRF (Barron et al., 2021) as our backbone. We give our comparisons to the baseline and several state-of-the-art models for few-shot NeRF: InfoN", "bbox": [496, 859, 888, 905], "page_idx": 5},
  {"type": "header", "text": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency", "bbox": [251, 56, 718, 70], "page_idx": 5},
  {"type": "table", "img_path": "images/a3fd4db1e7ddfe8f0834657cd44d7680a1dd7efb26225da226b86fc879ae5cf5.jpg", "table_caption": ["Table 1. Quantitative comparison on NeRF-Synthetic (Mildenhall et al., 2020) and LLFF (Mildenhall et al., 2019) datasets."], "table_footnote": [], "table_body": "<table><tr><td rowspan=\"2\">Methods</td><td colspan=\"4\">NeRF-Synthetic (Mildenhall et al., 2020)</td><td colspan=\"4\">LLFF (Mildenhall et al., 2019)</td></tr><tr><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td><td>Avg. ↓</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td><td>Avg. ↓</td></tr><tr><td>NeRF (Mildenhall et al., 2020)</td><td>14.73</td><td>0.734</td><td>0.451</td><td>0.199</td><td>13.34</td><td>0.373</td><td>0.451</td><td>0.255</td></tr><tr><td>mip-NeRF (Barron et al., 2021)</td><td>17.71</td><td>0.798</td><td>0.745</td><td>0.178</td><td>14.62</td><td>0.351</td><td>0.495</td><td>0.246</td></tr><tr><td>DietNeRF (Jain et al., 2021)</td><td>16.06</td><td>0.793</td><td>0.306</td><td>0.151</td><td>14.94</td><td>0.370</td><td>0.496</td><td>0.232</td></tr><tr><td>InfoNeRF (Kim et al., 2022)</td><td>18.65</td><td>0.811</td><td>0.230</td><td>0.111</td><td>14.37</td><td>0.349</td><td>0.457</td><td>0.238</td></tr><tr><td>RegNeRF (Niemeyer et al., 2022)</td><td>18.01</td><td>0.842</td><td>0.352</td><td>0.132</td><td>19.08</td><td>0.587</td><td>0.336</td><td>0.146</td></tr><tr><td>GeCoNeRF (Ours)</td><td>19.23</td><td>0.866</td><td>0.201</td><td>0.096</td><td>18.77</td><td>0.596</td><td>0.338</td><td>0.145</td></tr></table>", "bbox": [91, 112, 883, 253], "page_idx": 6},
  {"type": "image", "img_path": "images/53cdedb3baacce03743bdad8658f2404cf7e406121ddf383ad6d125a9f4ed598.jpg", "image_caption": [], "image_footnote": [], "bbox": [98, 258, 251, 349], "page_idx": 6},
  {"type": "image", "img_path": "images/6782f2d0e28ad46ab391f59194f16bfcf9d24637852883b138a078943c7cc19d.jpg", "image_caption": ["(a) Ground-truth"], "image_footnote": [], "bbox": [98, 349, 251, 439], "page_idx": 6},
  {"type": "image", "img_path": "images/dec902b72037dabdc1e25ad10c896e82bd4a69d796974f182644598ff1550aed.jpg", "image_caption": [], "image_footnote": [], "bbox": [254, 258, 406, 349], "page_idx": 6},
  {"type": "image", "img_path": "images/afd3c67232703a32094832a5dbfab9075bec13d08dc1d53ced4976a8e23d5213.jpg", "image_caption": ["(b) mip-NeRF"], "image_footnote": [], "bbox": [254, 349, 406, 439], "page_idx": 6},
  {"type": "image", "img_path": "images/397aaee9e7d92eeb1617e104682dbab101216a025df5fb36b757dac3273a4192.jpg", "image_caption": [], "image_footnote": [], "bbox": [408, 258, 562, 349], "page_idx": 6},
  {"type": "image", "img_path": "images/04052f2634588a00536710bc4bbc0a2d352200890681040393986ca91ae97308.jpg", "image_caption": ["(c) mip-NeRF (D)", "Figure 6. Qualitative results on LLFF (Mildenhall et al., 2019). Comparison with baseline mip-NeRF shows that our model learns of coherent depth and geometry in extremely sparse 3-view setting."], "image_footnote": [], "bbox": [408, 349, 562, 439], "page_idx": 6},
  {"type": "image", "img_path": "images/3a87279752d996a509c9d07576ed9f5fcb06666fc17b39fd447fd85d2d3d5f0e.jpg", "image_caption": [], "image_footnote": [], "bbox": [563, 258, 717, 349], "page_idx": 6},
  {"type": "image", "img_path": "images/ac6b3003ef68cf40685781409759b8b48bfdb271c4e8739853aca4bbca04fa2c.jpg", "image_caption": ["(d) GeCoNeRF"], "image_footnote": [], "bbox": [563, 349, 717, 439], "page_idx": 6},
  {"type": "image", "img_path": "images/a895c28134f7671c54fd9fd33443e2341031c125a04066c1477aa3c11a269674.jpg", "image_caption": [], "image_footnote": [], "bbox": [720, 258, 872, 349], "page_idx": 6
+
},
|
| 1250 |
+
{
|
| 1251 |
+
"type": "image",
|
| 1252 |
+
"img_path": "images/ab6a30f1cefe9583140df932f87b8e1e351ff6a6d7fd903a58f55b1b0f9f3f76.jpg",
|
| 1253 |
+
"image_caption": [
|
| 1254 |
+
"(e) GeCoNeRF (D)"
|
| 1255 |
+
],
|
| 1256 |
+
"image_footnote": [],
|
| 1257 |
+
"bbox": [
|
| 1258 |
+
720,
|
| 1259 |
+
349,
|
| 1260 |
+
872,
|
| 1261 |
+
439
|
| 1262 |
+
],
|
| 1263 |
+
"page_idx": 6
|
| 1264 |
+
},
|
| 1265 |
+
{
|
| 1266 |
+
"type": "text",
|
| 1267 |
+
"text": "eRF (Kim et al., 2022), DietNeRF (Jain et al., 2021), and RegNeRF (Niemeyer et al., 2022). We provide implementation details in the appendix.",
|
| 1268 |
+
"bbox": [
|
| 1269 |
+
84,
|
| 1270 |
+
503,
|
| 1271 |
+
475,
|
| 1272 |
+
550
|
| 1273 |
+
],
|
| 1274 |
+
"page_idx": 6
|
| 1275 |
+
},
|
| 1276 |
+
{
|
| 1277 |
+
"type": "text",
|
| 1278 |
+
"text": "Datasets and metrics. We evaluate our model on NeRF-Synthetic (Mildenhall et al., 2020) and LLFF (Mildenhall et al., 2019). NeRF-Synthetic is a realistically rendered $360^{\\circ}$ synthetic dataset comprised of 8 scenes. We randomly sample 3 viewpoints out of 100 training images in each scene, with 200 testing images for evaluation. We also conduct experiments on LLFF benchmark dataset, which consists of real-life forward facing scenes. Following RegNeRF (Niemeyer et al., 2022), we apply standard settings by selecting test set evenly from list of every 8th image and selecting 3 reference views from remaining images. We quantify novel view synthesis quality using PSNR, Structural Similarity Index Measure (SSIM) (Wang et al., 2004), LPIPS perceptual metric (Zhang et al., 2018) and an average error metric introduced in (Barron et al., 2021) to report the mean value of metrics for all scenes in each dataset.",
|
| 1279 |
+
"bbox": [
|
| 1280 |
+
84,
|
| 1281 |
+
563,
|
| 1282 |
+
475,
|
| 1283 |
+
805
|
| 1284 |
+
],
|
| 1285 |
+
"page_idx": 6
|
| 1286 |
+
},
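The "Avg." column reported in the tables follows the summary metric of mip-NeRF (Barron et al., 2021); a plausible reading is that it is the geometric mean of the MSE recovered from PSNR, sqrt(1 - SSIM), and LPIPS. The sketch below (the helper name is ours, not the paper's) reproduces the GeCoNeRF entry of Table 1 under that assumption.

```python
import math

def average_error(psnr, ssim, lpips):
    # Geometric mean of three errors, following the "average" metric of mip-NeRF
    # (Barron et al., 2021): MSE recovered from PSNR, sqrt(1 - SSIM), and LPIPS.
    mse = 10.0 ** (-psnr / 10.0)
    return (mse * math.sqrt(1.0 - ssim) * lpips) ** (1.0 / 3.0)

# GeCoNeRF row of Table 1 (NeRF-Synthetic): PSNR 19.23, SSIM 0.866, LPIPS 0.201
print(round(average_error(19.23, 0.866, 0.201), 3))  # ~0.096, matching the Avg. column
```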
|
| 1287 |
+
{
|
| 1288 |
+
"type": "text",
|
| 1289 |
+
"text": "5.2. Comparisons",
|
| 1290 |
+
"text_level": 1,
|
| 1291 |
+
"bbox": [
|
| 1292 |
+
84,
|
| 1293 |
+
821,
|
| 1294 |
+
212,
|
| 1295 |
+
835
|
| 1296 |
+
],
|
| 1297 |
+
"page_idx": 6
|
| 1298 |
+
},
|
| 1299 |
+
{
|
| 1300 |
+
"type": "text",
|
| 1301 |
+
"text": "Qualitative comparisons. Qualitative comparison results in Figure 5 and 6 demonstrate that our model shows superior performance to baseline mip-NeRF (Barron et al., 2021) and previous state-of-the-art model, RegNeRF (Niemeyer et al.,",
|
| 1302 |
+
"bbox": [
|
| 1303 |
+
84,
|
| 1304 |
+
845,
|
| 1305 |
+
475,
|
| 1306 |
+
906
|
| 1307 |
+
],
|
| 1308 |
+
"page_idx": 6
|
| 1309 |
+
},
|
| 1310 |
+
{
|
| 1311 |
+
"type": "text",
|
| 1312 |
+
"text": "2022), in 3-view settings. We observe that our warping-based consistency enables GeCoNeRF to capture fine details that mip-NeRF and RegNeRF struggle to capture in same sparse view scenarios, as demonstrated with the mic scene. Our method also displays higher stability in rendering smooth surfaces and reducing artifacts in background in comparison to previous models, as shown in the results of the materials scene. We argue that these results demonstrate how our method, through generation of warped pseudo ground truth patches, is able to give the model local, scene-specific regularization that aids recovery of fine details, which previous few-shot NeRF models with their global, generalized priors were unable to accomplish.",
|
| 1313 |
+
"bbox": [
|
| 1314 |
+
496,
|
| 1315 |
+
503,
|
| 1316 |
+
888,
|
| 1317 |
+
700
|
| 1318 |
+
],
|
| 1319 |
+
"page_idx": 6
|
| 1320 |
+
},
|
| 1321 |
+
{
|
| 1322 |
+
"type": "text",
|
| 1323 |
+
"text": "Quantitative comparisons. Comparisons in Table 1 show our model's competitive results in LLFF dataset, whose PSNR results show large increase in comparison to mip-NeRF baseline and competitive compared to RegN-erF. We see that our warping-based consistency modeling successfully prevents overfitting and artifacts, which allows our model to perform better quantitatively.",
|
| 1324 |
+
"bbox": [
|
| 1325 |
+
495,
|
| 1326 |
+
739,
|
| 1327 |
+
887,
|
| 1328 |
+
845
|
| 1329 |
+
],
|
| 1330 |
+
"page_idx": 6
|
| 1331 |
+
},
|
| 1332 |
+
{
|
| 1333 |
+
"type": "text",
|
| 1334 |
+
"text": "5.3. Ablation Study",
|
| 1335 |
+
"text_level": 1,
|
| 1336 |
+
"bbox": [
|
| 1337 |
+
496,
|
| 1338 |
+
861,
|
| 1339 |
+
635,
|
| 1340 |
+
876
|
| 1341 |
+
],
|
| 1342 |
+
"page_idx": 6
|
| 1343 |
+
},
|
| 1344 |
+
{
|
| 1345 |
+
"type": "text",
|
| 1346 |
+
"text": "We validate our design choices by performing an ablation study on LLFF (Mildenhall et al., 2019) dataset.",
|
| 1347 |
+
"bbox": [
|
| 1348 |
+
496,
|
| 1349 |
+
885,
|
| 1350 |
+
885,
|
| 1351 |
+
915
|
| 1352 |
+
],
|
| 1353 |
+
"page_idx": 6
|
| 1354 |
+
},
|
| 1355 |
+
{
|
| 1356 |
+
"type": "header",
|
| 1357 |
+
"text": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency",
|
| 1358 |
+
"bbox": [
|
| 1359 |
+
251,
|
| 1360 |
+
56,
|
| 1361 |
+
718,
|
| 1362 |
+
71
|
| 1363 |
+
],
|
| 1364 |
+
"page_idx": 6
|
| 1365 |
+
},
|
| 1366 |
+
{
|
| 1367 |
+
"type": "image",
|
| 1368 |
+
"img_path": "images/f6d89c509d6ac27449448d1223c08e67f347416afac25fd46351b9aeb9b81a51.jpg",
|
| 1369 |
+
"image_caption": [
|
| 1370 |
+
"(a) Baseline"
|
| 1371 |
+
],
|
| 1372 |
+
"image_footnote": [],
|
| 1373 |
+
"bbox": [
|
| 1374 |
+
86,
|
| 1375 |
+
80,
|
| 1376 |
+
243,
|
| 1377 |
+
174
|
| 1378 |
+
],
|
| 1379 |
+
"page_idx": 7
|
| 1380 |
+
},
|
| 1381 |
+
{
|
| 1382 |
+
"type": "image",
|
| 1383 |
+
"img_path": "images/e61748df146521bf57a2d1a7d42ad312f116fa9d0cb163e4c425ab8658c05053.jpg",
|
| 1384 |
+
"image_caption": [
|
| 1385 |
+
"(b) (a) + $\\mathcal{L}_{\\mathrm{cons}}$",
|
| 1386 |
+
"Figure 7. Qualitative ablation. Our qualitative ablation results on Horns scene shows the contribution of each module in performance of our model at 3-view scenario."
|
| 1387 |
+
],
|
| 1388 |
+
"image_footnote": [],
|
| 1389 |
+
"bbox": [
|
| 1390 |
+
246,
|
| 1391 |
+
80,
|
| 1392 |
+
403,
|
| 1393 |
+
174
|
| 1394 |
+
],
|
| 1395 |
+
"page_idx": 7
|
| 1396 |
+
},
|
| 1397 |
+
{
|
| 1398 |
+
"type": "image",
|
| 1399 |
+
"img_path": "images/1cde149aba9550265702719aa4d6fcc6fbe9a3659c83ea306908557432859d45.jpg",
|
| 1400 |
+
"image_caption": [
|
| 1401 |
+
"(c) $(\\mathbf{b}) + M$ (O. mask)"
|
| 1402 |
+
],
|
| 1403 |
+
"image_footnote": [],
|
| 1404 |
+
"bbox": [
|
| 1405 |
+
403,
|
| 1406 |
+
80,
|
| 1407 |
+
562,
|
| 1408 |
+
174
|
| 1409 |
+
],
|
| 1410 |
+
"page_idx": 7
|
| 1411 |
+
},
|
| 1412 |
+
{
|
| 1413 |
+
"type": "image",
|
| 1414 |
+
"img_path": "images/098a8856962d99666d33d049e3a634cece695aec9fad96b8b359ac226a8f4a49.jpg",
|
| 1415 |
+
"image_caption": [
|
| 1416 |
+
"(d) (c) + Progressive"
|
| 1417 |
+
],
|
| 1418 |
+
"image_footnote": [],
|
| 1419 |
+
"bbox": [
|
| 1420 |
+
563,
|
| 1421 |
+
80,
|
| 1422 |
+
720,
|
| 1423 |
+
174
|
| 1424 |
+
],
|
| 1425 |
+
"page_idx": 7
|
| 1426 |
+
},
|
| 1427 |
+
{
|
| 1428 |
+
"type": "image",
|
| 1429 |
+
"img_path": "images/cd93bd8e3ad4ade51afaedebd2e18458b1e8e658b9a9a6b9ea70a175625b19e6.jpg",
|
| 1430 |
+
"image_caption": [
|
| 1431 |
+
"(e) (d) + $\\mathcal{L}_{\\mathrm{reg}}$ (Ours)"
|
| 1432 |
+
],
|
| 1433 |
+
"image_footnote": [],
|
| 1434 |
+
"bbox": [
|
| 1435 |
+
723,
|
| 1436 |
+
80,
|
| 1437 |
+
880,
|
| 1438 |
+
174
|
| 1439 |
+
],
|
| 1440 |
+
"page_idx": 7
|
| 1441 |
+
},
|
| 1442 |
+
{
|
| 1443 |
+
"type": "table",
|
| 1444 |
+
"img_path": "images/12d3d5bc762d6fbff8fdc79a5a903af3e9077291eeecb185c4b8a00bcc23a811.jpg",
|
| 1445 |
+
"table_caption": [
|
| 1446 |
+
"Table 2. Ablation study."
|
| 1447 |
+
],
|
| 1448 |
+
"table_footnote": [],
|
| 1449 |
+
"table_body": "<table><tr><td>Components</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td><td>Avg.↓</td></tr><tr><td>(a) Baseline</td><td>14.62</td><td>0.351</td><td>0.495</td><td>0.246</td></tr><tr><td>(b) (a) + Lcons</td><td>18.10</td><td>0.529</td><td>0.408</td><td>0.164</td></tr><tr><td>(c) (b) + M (O. mask)</td><td>18.24</td><td>0.535</td><td>0.379</td><td>0.159</td></tr><tr><td>(d) (c) + Progressive</td><td>18.46</td><td>0.552</td><td>0.349</td><td>0.151</td></tr><tr><td>(e) (d) + Lreg (Ours)</td><td>18.55</td><td>0.592</td><td>0.340</td><td>0.150</td></tr></table>",
|
| 1450 |
+
"bbox": [
|
| 1451 |
+
86,
|
| 1452 |
+
258,
|
| 1453 |
+
470,
|
| 1454 |
+
357
|
| 1455 |
+
],
|
| 1456 |
+
"page_idx": 7
|
| 1457 |
+
},
|
| 1458 |
+
{
|
| 1459 |
+
"type": "table",
|
| 1460 |
+
"img_path": "images/cdc6b7f4ab7970c2ddbb1036d34f678357bd109af72654871ee869f38bbf6cb4.jpg",
|
| 1461 |
+
"table_caption": [
|
| 1462 |
+
"Table 3. Progressive training ablation."
|
| 1463 |
+
],
|
| 1464 |
+
"table_footnote": [],
|
| 1465 |
+
"table_body": "<table><tr><td>Components</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td><td>Avg. ↓</td></tr><tr><td>w/o prog. anneal</td><td>18.50</td><td>0.852</td><td>0.781</td><td>0.161</td></tr><tr><td>w/o prog. pose</td><td>16.96</td><td>0.799</td><td>0.811</td><td>0.194</td></tr><tr><td>w/o both</td><td>17.04</td><td>0.788</td><td>0.823</td><td>0.197</td></tr><tr><td>GeCoNeRF (Ours)</td><td>19.23</td><td>0.866</td><td>0.723</td><td>0.148</td></tr></table>",
|
| 1466 |
+
"bbox": [
|
| 1467 |
+
86,
|
| 1468 |
+
386,
|
| 1469 |
+
470,
|
| 1470 |
+
474
|
| 1471 |
+
],
|
| 1472 |
+
"page_idx": 7
|
| 1473 |
+
},
|
| 1474 |
+
{
|
| 1475 |
+
"type": "text",
|
| 1476 |
+
"text": "Feature-level consistency loss. We observe that without the consistency loss $\\mathcal{L}_{\\mathrm{cons}}$ , our model suffers both quantitative and qualitative decrease in reconstruction fidelity, verified by incoherent geometry in image (a) of Figure 7. Absence of unseen view consistency modeling destabilizes the model, resulting divergent behaviours.",
|
| 1477 |
+
"bbox": [
|
| 1478 |
+
83,
|
| 1479 |
+
488,
|
| 1480 |
+
473,
|
| 1481 |
+
579
|
| 1482 |
+
],
|
| 1483 |
+
"page_idx": 7
|
| 1484 |
+
},
|
| 1485 |
+
{
|
| 1486 |
+
"type": "text",
|
| 1487 |
+
"text": "Occlusion mask. We observe that addition of occlusion mask $M$ improves overall appearance as well as geometry, as shown in image (c) of Figure 7. Its absence results broken geometry throughout the overall scene, as demonstrated in (b). Erroneous artifacts pertaining to projections from different viewpoints were detected in multiple scenes, resulting lower quantitative values.",
|
| 1488 |
+
"bbox": [
|
| 1489 |
+
83,
|
| 1490 |
+
580,
|
| 1491 |
+
473,
|
| 1492 |
+
686
|
| 1493 |
+
],
|
| 1494 |
+
"page_idx": 7
|
| 1495 |
+
},
|
| 1496 |
+
{
|
| 1497 |
+
"type": "text",
|
| 1498 |
+
"text": "Progressive training strategies. In Table 3, we justify our progressive training strategies with additional experiments on NeRF-Synthetic dataset, while in the main ablation we conduct an ablation with progressive annealing only. For pose generation, we sample pose angle from large interval in the beginning, instead of slowly growing the interval. For positional encoding, we replace progressive annealing with naive positional encoding used in NeRF. We observe that their absence causes destabilization of the model and degradation in appearance, respectively.",
|
| 1499 |
+
"bbox": [
|
| 1500 |
+
83,
|
| 1501 |
+
691,
|
| 1502 |
+
473,
|
| 1503 |
+
843
|
| 1504 |
+
],
|
| 1505 |
+
"page_idx": 7
|
| 1506 |
+
},
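A minimal sketch of the progressive pose sampling contrasted in the ablation above: the allowed angular offset around a source view grows over training rather than starting large. The schedule, bounds, and names are assumptions for illustration only; the paper's exact schedule is not given in this excerpt.

```python
import numpy as np

def sample_pose_angle(step, max_angle_deg=30.0, warmup_steps=10000, rng=np.random):
    # Angular interval grows linearly with the training step (hypothetical schedule).
    limit = max_angle_deg * min(step / warmup_steps, 1.0)
    return rng.uniform(-limit, limit)
```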
|
| 1507 |
+
{
|
| 1508 |
+
"type": "text",
|
| 1509 |
+
"text": "Edge-aware disparity regularization. We observe that inclusion of edge-aware disparity regularization $\\mathcal{L}_{\\mathrm{reg}}$ refines given geometry, as shown in image (e) of Figure 7. By applying $\\mathcal{L}_{\\mathrm{reg}}$ , we see increased smoothness in geometry",
|
| 1510 |
+
"bbox": [
|
| 1511 |
+
83,
|
| 1512 |
+
845,
|
| 1513 |
+
475,
|
| 1514 |
+
906
|
| 1515 |
+
],
|
| 1516 |
+
"page_idx": 7
|
| 1517 |
+
},
|
| 1518 |
+
{
|
| 1519 |
+
"type": "table",
|
| 1520 |
+
"img_path": "images/2f52fa4f3507da676f5461ab1c852ccc4f3e1ee0065c49e1b37e7792e9cd6746.jpg",
|
| 1521 |
+
"table_caption": [
|
| 1522 |
+
"Table 4. Pixel-level consistency ablation."
|
| 1523 |
+
],
|
| 1524 |
+
"table_footnote": [],
|
| 1525 |
+
"table_body": "<table><tr><td>Components</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td><td>Avg.↓</td></tr><tr><td>w/ Lpix</td><td>17.98</td><td>0.528</td><td>0.431</td><td>0.165</td></tr><tr><td>w/ Lcons (Ours)</td><td>18.55</td><td>0.592</td><td>0.340</td><td>0.150</td></tr></table>",
|
| 1526 |
+
"bbox": [
|
| 1527 |
+
498,
|
| 1528 |
+
258,
|
| 1529 |
+
880,
|
| 1530 |
+
314
|
| 1531 |
+
],
|
| 1532 |
+
"page_idx": 7
|
| 1533 |
+
},
|
| 1534 |
+
{
|
| 1535 |
+
"type": "image",
|
| 1536 |
+
"img_path": "images/c831801b4091bc41e51c1eb68c92344d659997b013fdabba3f8868f2a8965feb.jpg",
|
| 1537 |
+
"image_caption": [
|
| 1538 |
+
"(a) Pixel-level",
|
| 1539 |
+
"Figure 8. $\\mathcal{L}_{\\mathrm{pix}}^M$ vs. $\\mathcal{L}_{\\mathrm{cons}}^M$ comparison."
|
| 1540 |
+
],
|
| 1541 |
+
"image_footnote": [],
|
| 1542 |
+
"bbox": [
|
| 1543 |
+
555,
|
| 1544 |
+
318,
|
| 1545 |
+
694,
|
| 1546 |
+
426
|
| 1547 |
+
],
|
| 1548 |
+
"page_idx": 7
|
| 1549 |
+
},
|
| 1550 |
+
{
|
| 1551 |
+
"type": "image",
|
| 1552 |
+
"img_path": "images/2c5cc087bc65b0af52ea7cc253d504530d6e8f230cf3e5a387155a6421d41883.jpg",
|
| 1553 |
+
"image_caption": [
|
| 1554 |
+
"(b) Feature-level"
|
| 1555 |
+
],
|
| 1556 |
+
"image_footnote": [],
|
| 1557 |
+
"bbox": [
|
| 1558 |
+
718,
|
| 1559 |
+
320,
|
| 1560 |
+
830,
|
| 1561 |
+
426
|
| 1562 |
+
],
|
| 1563 |
+
"page_idx": 7
|
| 1564 |
+
},
|
| 1565 |
+
{
|
| 1566 |
+
"type": "text",
|
| 1567 |
+
"text": "throughout the overall scene. This loss contributes to removal of erroneous artifacts, which achieves better results both qualitatively and quantitatively, as shown in Table 2.",
|
| 1568 |
+
"bbox": [
|
| 1569 |
+
496,
|
| 1570 |
+
479,
|
| 1571 |
+
885,
|
| 1572 |
+
526
|
| 1573 |
+
],
|
| 1574 |
+
"page_idx": 7
|
| 1575 |
+
},
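For context on the edge-aware disparity regularization discussed above, a common formulation (e.g., Godard et al., 2017, cited in the references) weights disparity gradients by image gradients so that depth is smooth except at image edges. The exact form used by GeCoNeRF is not given in this excerpt, so the following is an assumed sketch.

```python
import numpy as np

def edge_aware_disparity_loss(disp, img):
    # disp: (H, W) rendered disparity patch; img: (H, W, 3) corresponding RGB patch.
    # Disparity gradients are down-weighted where the image itself has strong edges.
    dx_d = np.abs(np.diff(disp, axis=1))
    dy_d = np.abs(np.diff(disp, axis=0))
    wx = np.exp(-np.abs(np.diff(img, axis=1)).mean(-1))
    wy = np.exp(-np.abs(np.diff(img, axis=0)).mean(-1))
    return (dx_d * wx).mean() + (dy_d * wy).mean()
```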
|
| 1576 |
+
{
|
| 1577 |
+
"type": "text",
|
| 1578 |
+
"text": "Feature-level loss vs. pixel-level loss. In Table 4, we conduct a quantitative ablation comparisons between feature-level consistency loss $\\mathcal{L}_{\\mathrm{cons}}^{M}$ and pixel-level photometric consistency loss $\\mathcal{L}_{\\mathrm{pix}}^{M}$ , both with occlusion masking. As shown in Figure 8, naively applying pixel-level loss for consistency modeling leads to broken geometry. This phenomenon can be attributed to $\\mathcal{L}_{\\mathrm{pix}}$ being agnostic to view-dependent specular effects, which the network tries to model by altering or erasing altogether non-Lambertian surfaces.",
|
| 1579 |
+
"bbox": [
|
| 1580 |
+
495,
|
| 1581 |
+
542,
|
| 1582 |
+
888,
|
| 1583 |
+
679
|
| 1584 |
+
],
|
| 1585 |
+
"page_idx": 7
|
| 1586 |
+
},
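To make the contrast above concrete, a masked feature-level consistency term can be sketched as below. The feature extractor, the cosine distance, and all names are illustrative assumptions; the paper's exact $\mathcal{L}_{\mathrm{cons}}^{M}$ is defined in its method section, which is not reproduced here. A pixel-level variant would instead compare RGB values directly, e.g. ((rendered - warped) ** 2 * mask[..., None]).mean(), which also penalizes legitimate view-dependent color changes.

```python
import numpy as np

def masked_feature_consistency(feat_rendered, feat_warped, mask, eps=1e-8):
    # feat_*: (H, W, C) feature maps of the rendered and warped images
    # mask:   (H, W) binary occlusion mask (1 where the warp is considered valid)
    # Hypothetical per-pixel cosine distance, averaged over unoccluded pixels.
    num = (feat_rendered * feat_warped).sum(-1)
    den = np.linalg.norm(feat_rendered, axis=-1) * np.linalg.norm(feat_warped, axis=-1) + eps
    dist = 1.0 - num / den
    return (dist * mask).sum() / (mask.sum() + eps)
```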
|
| 1587 |
+
{
|
| 1588 |
+
"type": "text",
|
| 1589 |
+
"text": "6. Conclusion",
|
| 1590 |
+
"text_level": 1,
|
| 1591 |
+
"bbox": [
|
| 1592 |
+
496,
|
| 1593 |
+
698,
|
| 1594 |
+
616,
|
| 1595 |
+
713
|
| 1596 |
+
],
|
| 1597 |
+
"page_idx": 7
|
| 1598 |
+
},
|
| 1599 |
+
{
|
| 1600 |
+
"type": "text",
|
| 1601 |
+
"text": "We present GeCoNeRF, a novel approach for optimizing Neural Radiance Fields (NeRF) for few-shot novel view synthesis. Inspired by self-supervised monocular depth estimation method, we regularize geometry consistency by giving semantic consistency between rendered image and warped image. This approach overcomes limitation of NeRF with sparse inputs, which shows performance degradation with depth ambiguity and many artifacts. With feature consistency loss, we are able to regularize NeRF at unobserved viewpoints to give it beneficial geometric constraint. Further techniques and training strategies we propose prove to have stabilizing effect and facilitate optimization of our",
|
| 1602 |
+
"bbox": [
|
| 1603 |
+
495,
|
| 1604 |
+
724,
|
| 1605 |
+
887,
|
| 1606 |
+
905
|
| 1607 |
+
],
|
| 1608 |
+
"page_idx": 7
|
| 1609 |
+
},
|
| 1610 |
+
{
|
| 1611 |
+
"type": "header",
|
| 1612 |
+
"text": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency",
|
| 1613 |
+
"bbox": [
|
| 1614 |
+
251,
|
| 1615 |
+
56,
|
| 1616 |
+
718,
|
| 1617 |
+
70
|
| 1618 |
+
],
|
| 1619 |
+
"page_idx": 7
|
| 1620 |
+
},
|
| 1621 |
+
{
|
| 1622 |
+
"type": "text",
|
| 1623 |
+
"text": "network. Our experimental evaluation demonstrates our method's competitiveness results compared to other state of the art baselines.",
|
| 1624 |
+
"bbox": [
|
| 1625 |
+
84,
|
| 1626 |
+
85,
|
| 1627 |
+
478,
|
| 1628 |
+
128
|
| 1629 |
+
],
|
| 1630 |
+
"page_idx": 8
|
| 1631 |
+
},
|
| 1632 |
+
{
|
| 1633 |
+
"type": "text",
|
| 1634 |
+
"text": "References",
|
| 1635 |
+
"text_level": 1,
|
| 1636 |
+
"bbox": [
|
| 1637 |
+
86,
|
| 1638 |
+
148,
|
| 1639 |
+
181,
|
| 1640 |
+
165
|
| 1641 |
+
],
|
| 1642 |
+
"page_idx": 8
|
| 1643 |
+
},
|
| 1644 |
+
{
|
| 1645 |
+
"type": "list",
|
| 1646 |
+
"sub_type": "ref_text",
|
| 1647 |
+
"list_items": [
|
| 1648 |
+
"Attal, B., Laidlaw, E., Gokaslan, A., Kim, C., Richardt, C., Tompkin, J., and O'Toole, M. Törf: Time-of-flight radiance fields for dynamic scene view synthesis. Advances in neural information processing systems, 34, 2021.",
|
| 1649 |
+
"Barron, J. T., Mildenhall, B., Tancik, M., Hedman, P., Martin-Brualla, R., and Srinivasan, P. P. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.",
|
| 1650 |
+
"Bortolon, M., Del Bue, A., and Poiesi, F. Data augmentation for nef: a geometric consistent solution based on view morphing, 2022. URL https://arxiv.org/abs/2210.04214.",
|
| 1651 |
+
"Chen, A., Xu, Z., Zhao, F., Zhang, X., Xiang, F., Yu, J., and Su, H. Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14124-14133, 2021.",
|
| 1652 |
+
"Chen, Z., Wang, C., Guo, Y., and Zhang, S.-H. Structnerf: Neural radiance fields for indoor scenes with structural hints. ArXiv, abs/2209.05277, 2022.",
|
| 1653 |
+
"Chibane, J., Bansal, A., Lazova, V., and Pons-Moll, G. Stereo radiance fields (srf): Learning view synthesis for sparse views of novel scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7911-7920, 2021.",
|
| 1654 |
+
"Darmon, F., Bascle, B., Devaux, J., Monasse, P., and Aubry, M. Improving neural implicit surfaces geometry with patch warping. 2022.",
|
| 1655 |
+
"Deng, K., Liu, A., Zhu, J.-Y., and Ramanan, D. Depth-supervised NeRF: Fewer views and faster training for free. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022.",
|
| 1656 |
+
"Deng, Y., Yang, J., Xiang, J., and Tong, X. Gram: Generative radiance manifolds for 3d-aware image generation. arXiv preprint arXiv:2112.08867, 2021.",
|
| 1657 |
+
"Fu, Q., Xu, Q., Ong, Y.-S., and Tao, W. Geo-neus: Geometry-consistent neural implicit surfaces learning for multi-view reconstruction, 2022. URL https://arxiv.org/abs/2205.15848."
|
| 1658 |
+
],
|
| 1659 |
+
"bbox": [
|
| 1660 |
+
86,
|
| 1661 |
+
172,
|
| 1662 |
+
475,
|
| 1663 |
+
905
|
| 1664 |
+
],
|
| 1665 |
+
"page_idx": 8
|
| 1666 |
+
},
|
| 1667 |
+
{
|
| 1668 |
+
"type": "list",
|
| 1669 |
+
"sub_type": "ref_text",
|
| 1670 |
+
"list_items": [
|
| 1671 |
+
"Garg, R., Bg, V. K., Carneiro, G., and Reid, I. Unsupervised cnn for single view depth estimation: Geometry to the rescue. In European conference on computer vision, pp. 740-756. Springer, 2016.",
|
| 1672 |
+
"Godard, C., Mac Aodha, O., and Brostow, G. J. Unsupervised monocular depth estimation with left-right consistency. In CVPR, 2017.",
|
| 1673 |
+
"Hedman, P., Srinivasan, P. P., Mildenhall, B., Barron, J. T., and Debevec, P. Baking neural radiance fields for real-time view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5875-5884, 2021.",
|
| 1674 |
+
"Huang, B., Yi, H., Huang, C., He, Y., Liu, J., and Liu, X. M3vsnet: Unsupervised multi-metric multi-view stereo network. In 2021 IEEE International Conference on Image Processing (ICIP), pp. 3163-3167, 2021. doi: 10.1109/ICIP42928.2021.9506469.",
|
| 1675 |
+
"Jaderberg, M., Simonyan, K., Zisserman, A., et al. Spatial transformer networks. Advances in neural information processing systems, 28, 2015.",
|
| 1676 |
+
"Jain, A., Tancik, M., and Abbeel, P. Putting nerf on a diet: Semantically consistent few-shot view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5885-5894, 2021.",
|
| 1677 |
+
"Jensen, R., Dahl, A., Vogiatzis, G., Tola, E., and Aanaes, H. Large scale multi-view stereopsis evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 406-413, 2014.",
|
| 1678 |
+
"Jeong, Y., Ahn, S., Choy, C., Anandkumar, A., Cho, M., and Park, J. Self-calibrating neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5846-5854, 2021.",
|
| 1679 |
+
"Johnson, J., Alahi, A., and Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, 2016.",
|
| 1680 |
+
"Khot, T., Agrawal, S., Tulsiani, S., Mertz, C., Lucey, S., and Hebert, M. Learning unsupervised multi-view stereopsis via robust photometric consistency. arXiv preprint arXiv:1905.02706, 2019.",
|
| 1681 |
+
"Kim, M., Seo, S., and Han, B. Infonerf: Ray entropy minimization for few-shot neural volume rendering. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.",
|
| 1682 |
+
"Mildenhall, B., Srinivasan, P. P., Ortiz-Cayon, R., Kalantari, N. K., Ramamoorthi, R., Ng, R., and Kar, A. Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Transactions on Graphics (TOG), 2019."
|
| 1683 |
+
],
|
| 1684 |
+
"bbox": [
|
| 1685 |
+
500,
|
| 1686 |
+
84,
|
| 1687 |
+
885,
|
| 1688 |
+
905
|
| 1689 |
+
],
|
| 1690 |
+
"page_idx": 8
|
| 1691 |
+
},
|
| 1692 |
+
{
|
| 1693 |
+
"type": "header",
|
| 1694 |
+
"text": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency",
|
| 1695 |
+
"bbox": [
|
| 1696 |
+
251,
|
| 1697 |
+
56,
|
| 1698 |
+
718,
|
| 1699 |
+
71
|
| 1700 |
+
],
|
| 1701 |
+
"page_idx": 8
|
| 1702 |
+
},
|
| 1703 |
+
{
|
| 1704 |
+
"type": "list",
|
| 1705 |
+
"sub_type": "ref_text",
|
| 1706 |
+
"list_items": [
|
| 1707 |
+
"Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., and Ng, R. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020.",
|
| 1708 |
+
"Müller, T., Evans, A., Schied, C., and Keller, A. Instant neural graphics primitives with a multiresolution hash encoding. arXiv preprint arXiv:2201.05989, 2022.",
|
| 1709 |
+
"Niemeyer, M. and Geiger, A. Giraffe: Representing scenes as compositional generative neural feature fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11453-11464, 2021.",
|
| 1710 |
+
"Niemeyer, M., Barron, J. T., Mildenhall, B., Sajjadi, M. S. M., Geiger, A., and Radwan, N. Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.",
|
| 1711 |
+
"Park, K., Sinha, U., Barron, J. T., Bouaziz, S., Goldman, D. B., Seitz, S. M., and Martin-Brualla, R. Nerfies: Deformable neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5865-5874, 2021.",
|
| 1712 |
+
"Pumarola, A., Corona, E., Pons-Moll, G., and Moreno-Noguer, F. D-nerf: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10318-10327, 2021.",
|
| 1713 |
+
"Reiser, C., Peng, S., Liao, Y., and Geiger, A. Kilonerf: Speeding up neural radiance fields with thousands of tiny mlp's. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14335-14345, 2021.",
|
| 1714 |
+
"Roessle, B., Barron, J. T., Mildenhall, B., Srinivasan, P. P., and Nießner, M. Dense depth priors for neural radiance fields from sparse input views. arXiv preprint arXiv:2112.03288, 2021.",
|
| 1715 |
+
"Schwarz, K., Liao, Y., Niemeyer, M., and Geiger, A. Graf: Generative radiance fields for 3d-aware image synthesis. Advances in Neural Information Processing Systems, 33: 20154-20166, 2020.",
|
| 1716 |
+
"Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. URL http://arxiv.org/abs/1409.1556.",
|
| 1717 |
+
"Tancik, M., Srinivasan, P. P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J. T., and Ng, R. Fourier features let networks learn high frequency functions in low dimensional domains. NeurIPS, 2020."
|
| 1718 |
+
],
|
| 1719 |
+
"bbox": [
|
| 1720 |
+
86,
|
| 1721 |
+
84,
|
| 1722 |
+
475,
|
| 1723 |
+
904
|
| 1724 |
+
],
|
| 1725 |
+
"page_idx": 9
|
| 1726 |
+
},
|
| 1727 |
+
{
|
| 1728 |
+
"type": "list",
|
| 1729 |
+
"sub_type": "ref_text",
|
| 1730 |
+
"list_items": [
|
| 1731 |
+
"Tretschk, E., Tewari, A., Golyanik, V., Zollhöfer, M., Lassner, C., and Theobalt, C. Non-rigid neural radiance fields: Reconstruction and novel view synthesis of a dynamic scene from monocular video. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 12959-12970, 2021.",
|
| 1732 |
+
"Wang, Z., Bovik, A., Sheikh, H., and Simoncelli, E. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13 (4):600-612, 2004. doi: 10.1109/TIP.2003.819861.",
|
| 1733 |
+
"Xu, X., Pan, X., Lin, D., and Dai, B. Generative occupancy fields for 3d surface-aware image synthesis. Advances in Neural Information Processing Systems, 34, 2021.",
|
| 1734 |
+
"Yu, A., Li, R., Tancik, M., Li, H., Ng, R., and Kanazawa, A. PlenOctrees for real-time rendering of neural radiance fields. In ICCV, 2021a.",
|
| 1735 |
+
"Yu, A., Ye, V., Tancik, M., and Kanazawa, A. pixelnerf: Neural radiance fields from one or few images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4578-4587, 2021b.",
|
| 1736 |
+
"Zhan, H., Garg, R., Weerasekera, C. S., Li, K., Agarwal, H., and Reid, I. Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 340-349, 2018.",
|
| 1737 |
+
"Zhang, J., Zhang, Y., Fu, H., Zhou, X., Cai, B., Huang, J., Jia, R., Zhao, B., and Tang, X. Ray priors through reprojection: Improving neural radiance fields for novel view extrapolation. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 18355-18365, 2022. doi: 10.1109/CVPR52688.2022.01783.",
|
| 1738 |
+
"Zhang, K., Riegler, G., Snavely, N., and Koltun, V. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020.",
|
| 1739 |
+
"Zhang, R., Isola, P., Efros, A. A., Shechtman, E., and Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.",
|
| 1740 |
+
"Zhou, T., Brown, M., Snavely, N., and Lowe, D. G. Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1851-1858, 2017."
|
| 1741 |
+
],
|
| 1742 |
+
"bbox": [
|
| 1743 |
+
500,
|
| 1744 |
+
84,
|
| 1745 |
+
887,
|
| 1746 |
+
825
|
| 1747 |
+
],
|
| 1748 |
+
"page_idx": 9
|
| 1749 |
+
},
|
| 1750 |
+
{
|
| 1751 |
+
"type": "header",
|
| 1752 |
+
"text": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency",
|
| 1753 |
+
"bbox": [
|
| 1754 |
+
251,
|
| 1755 |
+
56,
|
| 1756 |
+
718,
|
| 1757 |
+
71
|
| 1758 |
+
],
|
| 1759 |
+
"page_idx": 9
|
| 1760 |
+
}
|
| 1761 |
+
]
|
2301.10xxx/2301.10941/c35dbdab-14b3-4d81-9ce3-5fce0461d6c8_model.json
ADDED
|
@@ -0,0 +1,2376 @@
|
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "title",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.115,
|
| 7 |
+
0.11,
|
| 8 |
+
0.856,
|
| 9 |
+
0.132
|
| 10 |
+
],
|
| 11 |
+
"angle": 0,
|
| 12 |
+
"content": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.294,
|
| 18 |
+
0.177,
|
| 19 |
+
0.672,
|
| 20 |
+
0.194
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "Min-Seop Kwak*1 Jiuhn Song*1 Seungryong Kim"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "title",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.242,
|
| 29 |
+
0.221,
|
| 30 |
+
0.32,
|
| 31 |
+
0.237
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "Abstract"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.118,
|
| 40 |
+
0.247,
|
| 41 |
+
0.445,
|
| 42 |
+
0.519
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "We present a novel framework to regularize Neural Radiance Field (NeRF) in a few-shot setting with a geometric consistency regularization. The proposed approach leverages a rendered depth map at unobserved viewpoint to warp sparse input images to the unobserved viewpoint and impose them as pseudo ground truths to facilitate learning of NeRF. By encouraging such geometric consistency at a feature-level instead of using pixel-level reconstruction loss, we regularize the NeRF at semantic and structural levels while allowing for modeling view-dependent radiance to account for color variations across viewpoints. We also propose an effective method to filter out erroneous warped solutions, along with training strategies to stabilize training during optimization. We show that our model achieves competitive results compared to state-of-the-art few-shot NeRF models."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "title",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.087,
|
| 51 |
+
0.551,
|
| 52 |
+
0.218,
|
| 53 |
+
0.567
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "1. Introduction"
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.085,
|
| 62 |
+
0.578,
|
| 63 |
+
0.477,
|
| 64 |
+
0.729
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "Recently, representing a 3D scene as a Neural Radiance Field (NeRF) (Mildenhall et al., 2020) has proven to be a powerful approach for novel view synthesis and 3D reconstruction (Barron et al., 2021; Jain et al., 2021; Chen et al., 2021). However, despite its impressive performance, NeRF requires a large number of densely, well distributed calibrated images for optimization, which limits its applicability. When limited to sparse observations, NeRF easily overfits to the input view images and is unable to reconstruct correct geometry (Zhang et al., 2020)."
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.085,
|
| 73 |
+
0.736,
|
| 74 |
+
0.476,
|
| 75 |
+
0.782
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "The task that directly addresses this problem, also called a few-shot NeRF, aims to optimize high-fidelity neural radiance field in such sparse scenarios (Jain et al., 2021; Kim"
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.496,
|
| 84 |
+
0.222,
|
| 85 |
+
0.888,
|
| 86 |
+
0.344
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "et al., 2022; Niemeyer et al., 2022), countering the underconstrained nature of said problem by introducing additional priors. Specifically, previous works attempted to solve this by utilizing a semantic feature (Jain et al., 2021), entropy minimization (Kim et al., 2022), SfM depth priors (Deng et al., 2022) or normalizing flow (Niemeyer et al., 2022), but their necessity for handcrafted methods or inability to extract local and fine structures limited their performance."
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "text",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.496,
|
| 95 |
+
0.351,
|
| 96 |
+
0.889,
|
| 97 |
+
0.699
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": "To alleviate these issues, we propose a novel regularization technique that enforces a geometric consistency across different views with a depth-guided warping and a geometry-aware consistency modeling. Based on these, we propose a novel framework, called Neural Radiance Fields with Geometric Consistency (GeCoNeRF), for training neural radiance fields in a few-shot setting. Our key insight is that we can leverage a depth rendered by NeRF to warp sparse input images to novel viewpoints, and use them as pseudo ground truths to facilitate learning of fine details and high-frequency features by NeRF. By encouraging images rendered at novel views to model warped images with a consistency loss, we can successfully constrain both geometry and appearance to boost fidelity of neural radiance fields even in highly under-constrained few-shot setting. Taking into consideration non-Lambertian nature of given datasets, we propose feature-level regularization loss that captures contextual and structural information while largely ignoring individual color differences. We also present a method to generate a consistency mask to prevent inconsistently warped information from harming the network. Finally, we provide coarse-to-fine training strategies for sampling and pose generation to stabilize optimization of the model."
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "text",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.496,
|
| 106 |
+
0.705,
|
| 107 |
+
0.889,
|
| 108 |
+
0.782
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": "We demonstrate the effectiveness of our method on synthetic and real datasets (Mildenhall et al., 2020; Jensen et al., 2014). Experimental results prove the effectiveness of the proposed model over the latest methods for few-shot novel view synthesis."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "title",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.498,
|
| 117 |
+
0.8,
|
| 118 |
+
0.64,
|
| 119 |
+
0.815
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": "2. Related Work"
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "text",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.496,
|
| 128 |
+
0.826,
|
| 129 |
+
0.889,
|
| 130 |
+
0.903
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": "Neural radiance fields. Among the most notable of approaches regarding the task of novel view synthesis and 3D reconstruction is Neural Radiance Field (NeRF) (Mildenhall et al., 2020), where photo-realistic images are rendered by a simple MLP architecture. Sparked by its impress"
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"type": "page_footnote",
|
| 137 |
+
"bbox": [
|
| 138 |
+
0.085,
|
| 139 |
+
0.79,
|
| 140 |
+
0.476,
|
| 141 |
+
0.856
|
| 142 |
+
],
|
| 143 |
+
"angle": 0,
|
| 144 |
+
"content": "*Equal contribution \\( {}^{1} \\) Department of Computer Science and Engineering,Korea University,Seoul,Korea.Authors: Min-Seop Kwak <mskwak01@korea.ac.kr>, Jiuhn Song <jiuhn-song@korea.ac.kr>. Correspondence to: Seungryong Kim <seungryong.kim@korea.ac.kr>."
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"type": "page_footnote",
|
| 148 |
+
"bbox": [
|
| 149 |
+
0.085,
|
| 150 |
+
0.866,
|
| 151 |
+
0.475,
|
| 152 |
+
0.906
|
| 153 |
+
],
|
| 154 |
+
"angle": 0,
|
| 155 |
+
"content": "Proceedings of the \\(40^{th}\\) International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s)."
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"type": "aside_text",
|
| 159 |
+
"bbox": [
|
| 160 |
+
0.023,
|
| 161 |
+
0.263,
|
| 162 |
+
0.061,
|
| 163 |
+
0.707
|
| 164 |
+
],
|
| 165 |
+
"angle": 270,
|
| 166 |
+
"content": "arXiv:2301.10941v3 [cs.CV] 27 Apr 2023"
|
| 167 |
+
}
|
| 168 |
+
],
|
| 169 |
+
[
|
| 170 |
+
{
|
| 171 |
+
"type": "header",
|
| 172 |
+
"bbox": [
|
| 173 |
+
0.253,
|
| 174 |
+
0.057,
|
| 175 |
+
0.719,
|
| 176 |
+
0.071
|
| 177 |
+
],
|
| 178 |
+
"angle": 0,
|
| 179 |
+
"content": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency"
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "image",
|
| 183 |
+
"bbox": [
|
| 184 |
+
0.099,
|
| 185 |
+
0.085,
|
| 186 |
+
0.882,
|
| 187 |
+
0.351
|
| 188 |
+
],
|
| 189 |
+
"angle": 0,
|
| 190 |
+
"content": null
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "image_caption",
|
| 194 |
+
"bbox": [
|
| 195 |
+
0.084,
|
| 196 |
+
0.363,
|
| 197 |
+
0.889,
|
| 198 |
+
0.419
|
| 199 |
+
],
|
| 200 |
+
"angle": 0,
|
| 201 |
+
"content": "Figure 1. Illustration of our consistency modeling pipeline for few-shot NeRF. Given an image \\( I_{i} \\) and estimated depth map \\( D_{j} \\) of \\( j \\)-th unobserved viewpoint, we warp the image \\( I_{i} \\) to that novel viewpoint as \\( I_{i\\rightarrow j} \\) by establishing geometric correspondence between two viewpoints. Using the warped image as a pseudo ground truth, we cause rendered image of unseen viewpoint, \\( I_{j} \\), to be consistent in structure with warped image, with occlusions taken into consideration."
|
| 202 |
+
},
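The warping step described in the caption above can be sketched as an inverse warp: back-project the pixels of the unobserved view j with the rendered depth D_j, transform them into view i, and sample I_i at the re-projected locations. The intrinsics handling, nearest-neighbor sampling, and all names below are simplifying assumptions; the references cite Jaderberg et al. (2015), which suggests differentiable bilinear sampling is used in practice.

```python
import numpy as np

def warp_source_to_novel(I_i, D_j, K, T_ji):
    # I_i:  (H, W, 3) source image at viewpoint i
    # D_j:  (H, W)    depth rendered by NeRF at the novel viewpoint j
    # K:    (3, 3)    shared pinhole intrinsics
    # T_ji: (4, 4)    rigid transform taking camera-j coordinates to camera-i coordinates
    H, W = D_j.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T             # (3, HW) homogeneous pixels
    pts_j = np.linalg.inv(K) @ pix * D_j.reshape(1, -1)                       # back-project with depth
    pts_i = (T_ji @ np.vstack([pts_j, np.ones((1, pts_j.shape[1]))]))[:3]     # change of camera frame
    proj = K @ pts_i
    uv = (proj[:2] / np.clip(proj[2:], 1e-8, None)).T.reshape(H, W, 2)        # re-project into view i
    ui = np.clip(np.round(uv[..., 0]).astype(int), 0, W - 1)
    vi = np.clip(np.round(uv[..., 1]).astype(int), 0, H - 1)
    return I_i[vi, ui]                                                         # nearest-neighbor gather
```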
|
| 203 |
+
{
|
| 204 |
+
"type": "text",
|
| 205 |
+
"bbox": [
|
| 206 |
+
0.084,
|
| 207 |
+
0.431,
|
| 208 |
+
0.477,
|
| 209 |
+
0.627
|
| 210 |
+
],
|
| 211 |
+
"angle": 0,
|
| 212 |
+
"content": "sive performance, a variety of follow-up studies based on its continuous neural volumetric representation have been prompted, including dynamic and deformable scenes (Park et al., 2021; Tretschk et al., 2021; Pumarola et al., 2021; Attal et al., 2021), real-time rendering (Yu et al., 2021a; Hedman et al., 2021; Reiser et al., 2021; Müller et al., 2022), self-calibration (Jeong et al., 2021) and generative modeling (Schwarz et al., 2020; Niemeyer & Geiger, 2021; Xu et al., 2021; Deng et al., 2021). Mip-NeRF (Barron et al., 2021) eliminates aliasing artifacts by adopting cone tracing with a single multi-scale MLP. In general, most of these works have difficulty in optimizing a single scene with a few number of images."
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "text",
|
| 216 |
+
"bbox": [
|
| 217 |
+
0.084,
|
| 218 |
+
0.642,
|
| 219 |
+
0.476,
|
| 220 |
+
0.853
|
| 221 |
+
],
|
| 222 |
+
"angle": 0,
|
| 223 |
+
"content": "Few-shot NeRF. One key limitation of NeRF is its necessity for large number of calibrated views in optimizing neural radiance fields. Some recent works attempted to address this in the case where only few observed views of the scene are available. PixelNeRF(Yu et al., 2021b) conditions a NeRF on image inputs using local CNN features. This conditional model allows the network to learn scene priors across multiple scenes. Stereo radiance fields (Chibane et al., 2021) use local CNN features from input views for scene geometry reasoning and MVSNeRF (Chen et al., 2021) combines cost volume with neural radiance field for improved performance. However, pre-training with multi-view images of numerous scenes are essential for these methods for them to learn reconstruction priors."
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "text",
|
| 227 |
+
"bbox": [
|
| 228 |
+
0.085,
|
| 229 |
+
0.861,
|
| 230 |
+
0.476,
|
| 231 |
+
0.906
|
| 232 |
+
],
|
| 233 |
+
"angle": 0,
|
| 234 |
+
"content": "Other works attempt different approaches of optimizing NeRF from scratch in few-shot settings: DSNeRF (Deng et al., 2022) makes use of depth supervision to network to optimize"
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "text",
|
| 238 |
+
"bbox": [
|
| 239 |
+
0.496,
|
| 240 |
+
0.431,
|
| 241 |
+
0.889,
|
| 242 |
+
0.673
|
| 243 |
+
],
|
| 244 |
+
"angle": 0,
|
| 245 |
+
"content": "a scene with few images. (Roessle et al., 2021) also utilizes sparse depth prior by extending into dense depth map by depth completion module to guide network optimization. On the other hand, there are models that tackle depth prior-free few-shot optimization: DietNeRF (Jain et al., 2021) enforces semantic consistency between rendered images from unseen view and seen images. RegNeRF (Niemeyer et al., 2022) regularizes the geometry and appearance of patches rendered from unobserved viewpoints. InfoNeRF (Kim et al., 2022) constrains the density's entropy in each ray and ensures consistency across rays in the neighborhood. While these methods constrain NeRF into learning more realistic geometry, their regularizations are limited in that they require extensive dataset-specific fine-tuning and that they only provide regularization at a global level in a generalized manner."
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "text",
|
| 249 |
+
"bbox": [
|
| 250 |
+
0.496,
|
| 251 |
+
0.687,
|
| 252 |
+
0.889,
|
| 253 |
+
0.868
|
| 254 |
+
],
|
| 255 |
+
"angle": 0,
|
| 256 |
+
"content": "Self-supervised photometric consistency. In the field of multiview stereo depth estimation, consistency modeling between stereo images and their warped images has been widely used for self-supervised training (Godard et al., 2017; Garg et al., 2016; Zhou et al., 2017) In weakly supervised or unsupervised settings (Huang et al., 2021; Khot et al., 2019) where there is lack of ground truth depth information, consistency modeling between images with geometry-based warping is used as a supervisory signal (Zhou et al., 2017; Huang et al., 2021; Khot et al., 2019) formulating depth learning as a form of reconstruction task between viewpoints."
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "text",
|
| 260 |
+
"bbox": [
|
| 261 |
+
0.497,
|
| 262 |
+
0.876,
|
| 263 |
+
0.887,
|
| 264 |
+
0.906
|
| 265 |
+
],
|
| 266 |
+
"angle": 0,
|
| 267 |
+
"content": "Recently, methods utilizing self-supervised photometric consistency have been introduced to NeRF: concurrent"
|
| 268 |
+
}
|
| 269 |
+
],
|
+  [
+    {"type": "header", "bbox": [0.253, 0.057, 0.719, 0.072], "angle": 0, "content": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency"},
+    {"type": "text", "bbox": [0.085, 0.085, 0.477, 0.373], "angle": 0, "content": "works such as NeuralWarp (Darmon et al., 2022), Struct-NeRF (Chen et al., 2022) and Geo-NeuS (Fu et al., 2022) model photometric consistency between source images and their warped counterparts from other source viewpoints to improve their reconstruction quality. However, these methods only discuss dense view input scenarios where pose differences between source viewpoints are small, and do not address their behavior in few-shot settings - where sharp performance drop is expected due to scarcity of input viewpoints and increased difficulty in the warping procedure owing to large viewpoint differences and heavy self-occlusions. RapNeRF (Zhang et al., 2022) uses geometry-based reprojection method to enhance view extrapolation performance, and (Bortolon et al., 2022) uses depth rendered by NeRF as correspondence information for view-morphing module to synthesize images between input viewpoints. However, these methods do not take occlusions into account, and their pixel-level photometric consistency modeling comes with downside of suppressing view-dependent specular effects."},
+    {"type": "title", "bbox": [0.086, 0.392, 0.226, 0.408], "angle": 0, "content": "3. Preliminaries"},
+    {"type": "text", "bbox": [0.085, 0.417, 0.475, 0.599], "angle": 0, "content": "Neural Radiance Field (NeRF) (Mildenhall et al., 2020) represents a scene as a continuous function \\( f_{\\theta} \\) represented by a neural network with parameters \\( \\theta \\), where the points are sampled along rays, represented by \\( r \\), for evaluation by the neural network. Typically, the sampled coordinates \\( \\mathbf{x} \\in \\mathbb{R}^3 \\) and view direction \\( \\mathbf{d} \\in \\mathbb{R}^2 \\) are transformed by a positional encoding \\( \\gamma \\) into Fourier features (Tancik et al., 2020) that facilitates learning of high-frequency details. The neural network \\( f_{\\theta} \\) takes as input the transformed coordinate \\( \\gamma(\\mathbf{x}) \\) and viewing directions \\( \\gamma(\\mathbf{d}) \\), and outputs a view-invariant density value \\( \\sigma \\in \\mathbb{R} \\) and a view-dependent color value \\( \\mathbf{c} \\in \\mathbb{R}^3 \\) such that"},
+    {"type": "equation", "bbox": [0.194, 0.607, 0.475, 0.625], "angle": 0, "content": "\\[\n\\{\\mathbf {c}, \\sigma \\} = f _ {\\theta} (\\gamma (\\mathbf {x}), \\gamma (\\mathbf {d})). \\tag {1}\n\\]"},
+    {"type": "text", "bbox": [0.085, 0.63, 0.475, 0.676], "angle": 0, "content": "With a ray parameterized as \\(\\mathbf{r}_p(t) = \\mathbf{o} + t\\mathbf{d}_p\\) from the camera center \\(\\mathbf{o}\\) through the pixel \\(p\\) along direction \\(\\mathbf{d}_p\\), the color is rendered as follows:"},
+    {"type": "equation", "bbox": [0.137, 0.68, 0.475, 0.716], "angle": 0, "content": "\\[\nC (\\mathbf {r} _ {p}) = \\int_ {t _ {n}} ^ {t _ {f}} T (t) \\sigma (\\mathbf {r} _ {p} (t)) \\mathbf {c} (\\mathbf {r} _ {p} (t), \\mathbf {d} _ {p}) d t, \\tag {2}\n\\]"},
+    {"type": "text", "bbox": [0.085, 0.722, 0.475, 0.768], "angle": 0, "content": "where \\(C(\\mathbf{r}_p)\\) is a predicted color value at the pixel \\(p\\) along the ray \\(\\mathbf{r}_p(t)\\) from \\(t_n\\) to \\(t_f\\), and \\(T(t)\\) denotes an accumulated transmittance along the ray from \\(t_n\\) to \\(t\\), defined such that"},
+    {"type": "equation", "bbox": [0.166, 0.774, 0.475, 0.81], "angle": 0, "content": "\\[\nT (t) = \\exp \\left(- \\int_ {t _ {n}} ^ {t} \\sigma (\\mathbf {r} _ {p} (s)) d s\\right). \\tag {3}\n\\]"},
+    {"type": "text", "bbox": [0.085, 0.822, 0.476, 0.868], "angle": 0, "content": "To optimize the networks \\( f_{\\theta} \\), the observation loss \\( \\mathcal{L}_{\\mathrm{obs}} \\) enforces the rendered color values to be consistent with ground truth color value \\( C^{\\prime}(\\mathbf{r}) \\):"},
+    {"type": "equation", "bbox": [0.167, 0.876, 0.475, 0.909], "angle": 0, "content": "\\[\n\\mathcal {L} _ {\\mathrm {o b s}} = \\sum_ {\\mathbf {r} _ {p} \\in \\mathcal {R}} \\| C ^ {\\prime} (\\mathbf {r} _ {p}) - C (\\mathbf {r} _ {p}) \\| _ {2} ^ {2}, \\tag {4}\n\\]"},
+    {"type": "text", "bbox": [0.497, 0.085, 0.789, 0.102], "angle": 0, "content": "where \\(\\mathcal{R}\\) represents a batch of training rays."},
+    {"type": "title", "bbox": [0.497, 0.113, 0.634, 0.131], "angle": 0, "content": "4. Methodology"},
+    {"type": "title", "bbox": [0.497, 0.14, 0.71, 0.154], "angle": 0, "content": "4.1. Motivation and Overview"},
+    {"type": "text", "bbox": [0.496, 0.163, 0.887, 0.33], "angle": 0, "content": "Let us denote an image at \\(i\\)-th viewpoint as \\(I_{i}\\). In a few-shot novel view synthesis, NeRF is given only a few images \\(\\{I_i\\}\\) for \\(i \\in \\{1, \\dots, N\\}\\) with small \\(N\\), e.g., \\(N = 3\\) or \\(N = 5\\). The objective of novel view synthesis is to train the mapping function \\(f_{\\theta}\\) that can be used to recover an image \\(I_{j}\\) at \\(j\\)-th unseen or novel viewpoint. As we described above, in the few-shot setting, given \\(\\{I_i\\}\\), directly optimizing \\(f_{\\theta}\\) solely with the pixel-wise reconstruction loss \\(\\mathcal{L}_{\\mathrm{obs}}\\) is limited by its inability to model view-dependent effects, and thus an additional regularization to encourage the network \\(f_{\\theta}\\) to generate consistent appearance and geometry is required."},
+    {"type": "text", "bbox": [0.496, 0.337, 0.887, 0.457], "angle": 0, "content": "To achieve this, we propose a novel regularization technique to enforce a geometric consistency across different views with depth-guided warping and consistency modeling. We focus on the fact that NeRF (Mildenhall et al., 2020) inherently renders not only color image but depth image as well. Combined with known viewpoint difference, the rendered depths can be used to define a geometric correspondence relationship between two arbitrary views."},
+    {"type": "text", "bbox": [0.496, 0.465, 0.887, 0.648], "angle": 0, "content": "Specifically, we consider a depth image rendered by the NeRF model, \\( D_{j} \\) at unseen viewpoint \\( j \\). By formulating a warping function \\( \\psi (I_i;D_j,R_{i\\rightarrow j}) \\) that warps an image \\( I_{i} \\) according to the depth \\( D_{j} \\) and viewpoint difference \\( R_{i\\rightarrow j} \\), we can encourage a consistency between warped image \\( I_{i\\rightarrow j} = \\psi (I_i;D_j,R_{i\\rightarrow j}) \\) and rendered image \\( I_{j} \\) at \\( j \\)-th unseen viewpoint, which in turn improves the few-shot novel view synthesis performance. This framework can overcome the limitations of previous few-shot setting approaches (Mildenhall et al., 2020; Chen et al., 2021; Barron et al., 2021), improving not only global geometry but also high-frequency details and appearance as well."},
+    {"type": "text", "bbox": [0.496, 0.653, 0.887, 0.76], "angle": 0, "content": "In the following, we first explain how input images can be warped to unseen viewpoints in our framework. Then, we demonstrate how we impose consistency upon the pair of warped image and rendered image for regularization, followed by explanation of occlusion handling method and several training strategies that proved crucial for stabilization of NeRF optimization in few-shot scenario."},
+    {"type": "title", "bbox": [0.497, 0.77, 0.771, 0.787], "angle": 0, "content": "4.2. Rendered Depth-Guided Warping"},
+    {"type": "text", "bbox": [0.496, 0.793, 0.887, 0.9], "angle": 0, "content": "To render an image at novel viewpoints, we first sample a random camera viewpoint, from which corresponding ray vectors are generated in a patch-wise manner. As NeRF outputs density and color values of sampled points along the novel rays, we use recovered density values to render a consistent depth map. Following (Mildenhall et al., 2020), we formulate per-ray depth values as weighted composition of"}
+  ],
+  [
+    {"type": "header", "bbox": [0.253, 0.057, 0.719, 0.073], "angle": 0, "content": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency"},
+    {"type": "image", "bbox": [0.102, 0.085, 0.873, 0.311], "angle": 0, "content": null},
+    {"type": "image_caption", "bbox": [0.085, 0.326, 0.888, 0.369], "angle": 0, "content": "Figure 2. Illustration of the proposed framework. GeCoNeRF regularizes the networks with consistency modeling. Consistency loss function \\(\\mathcal{L}_{\\mathrm{cons}}^M\\) is applied between unobserved viewpoint image and warped observed viewpoint image, while disparity regularization loss \\(\\mathcal{L}_{\\mathrm{reg}}\\) regularizes depth at seen viewpoints."},
+    {"type": "text", "bbox": [0.085, 0.38, 0.475, 0.427], "angle": 0, "content": "distances traveled from origin. Since ray \\(\\mathbf{r}_p\\) corresponding to pixel \\(p\\) is parameterized as \\(\\mathbf{r}_p(t) = \\mathbf{o} + t\\mathbf{d}_p\\), the depth rendering is defined similarly to the color rendering:"},
+    {"type": "equation", "bbox": [0.174, 0.437, 0.475, 0.473], "angle": 0, "content": "\\[\nD (\\mathbf {r} _ {p}) = \\int_ {t _ {n}} ^ {t _ {f}} T (t) \\sigma (\\mathbf {r} _ {p} (t)) t d t, \\tag {5}\n\\]"},
+    {"type": "text", "bbox": [0.085, 0.483, 0.477, 0.618], "angle": 0, "content": "where \\(D(\\mathbf{r}_p)\\) is a predicted depth along the ray \\(\\mathbf{r}_p\\). As described in Figure 1, we use the rendered depth map \\(D_j\\) to warp input ground truth image \\(I_i\\) to \\(j\\)-th unseen viewpoint and acquire a warped image \\(I_{i\\rightarrow j}\\), which is defined as a process such that \\(I_{i\\rightarrow j} = \\psi (I_i;D_j,R_{i\\rightarrow j})\\). More specifically, pixel location \\(p_j\\) in target unseen viewpoint image is transformed to \\(p_{j\\to i}\\) at source viewpoint image by viewpoint difference \\(R_{j\\to i}\\) and camera intrinsic parameter \\(K\\) such that"},
+    {"type": "equation", "bbox": [0.173, 0.63, 0.475, 0.65], "angle": 0, "content": "\\[\np _ {j \\rightarrow i} \\sim K R _ {j \\rightarrow i} D _ {j} (p _ {j}) K ^ {- 1} p _ {j}, \\tag {6}\n\\]"},
+    {"type": "text", "bbox": [0.085, 0.66, 0.476, 0.735], "angle": 0, "content": "where \\(\\sim\\) indicates approximate equality and the projected coordinate \\(p_{j\\rightarrow i}\\) is a continuous value. With a differentiable sampler, we extract color values of \\(p_{j\\rightarrow i}\\) on \\(I_{i}\\). More formally, the transforming components process can be written as follows:"},
+    {"type": "equation", "bbox": [0.174, 0.747, 0.475, 0.765], "angle": 0, "content": "\\[\nI _ {i \\rightarrow j} \\left(p _ {j}\\right) = \\operatorname {s a m p l e r} \\left(I _ {i}; p _ {j \\rightarrow i}\\right), \\tag {7}\n\\]"},
+    {"type": "text", "bbox": [0.085, 0.775, 0.475, 0.806], "angle": 0, "content": "where \\( \\text{sampler}(\\cdot) \\) is a bilinear sampling operator (Jaderberg et al., 2015)."},
+    {"type": "text", "bbox": [0.085, 0.815, 0.476, 0.907], "angle": 0, "content": "Acceleration. Rendering a full image is computationally heavy and extremely timetaking, requiring tens of seconds for a single iteration. To overcome the computational bottleneck of full image rendering and warping, rays are sampled on a strided grid to make the patch with stride \\( s \\), which we have set as 2. After the rays undergo volumetric rendering,"},
+    {"type": "text", "bbox": [0.497, 0.38, 0.887, 0.472], "angle": 0, "content": "we upsample the low-resolution depth map back to original resolution with bilinear interpolation. This full-resolution depth map is used for the inverse warping. This way, detailed warped patches of full-resolution can be generated with only a fraction of computational cost that would be required when rendering the original sized ray batch."},
+    {"type": "title", "bbox": [0.498, 0.481, 0.688, 0.497], "angle": 0, "content": "4.3. Consistency Modeling"},
+    {"type": "text", "bbox": [0.496, 0.505, 0.889, 0.597], "angle": 0, "content": "Given the rendered patch \\( I_{j} \\) at \\( j \\)-th viewpoint and the warped patch \\( I_{i\\rightarrow j} \\) with depth \\( D_{j} \\) and viewpoint difference \\( R_{i\\rightarrow j} \\), we define the consistency between the two to encourage additional regularization for globally consistent rendering. One viable option is to naively apply the pixelwise image reconstruction loss \\( \\mathcal{L}_{\\mathrm{pix}} \\) such that"},
+    {"type": "equation", "bbox": [0.62, 0.606, 0.887, 0.625], "angle": 0, "content": "\\[\n\\mathcal {L} _ {\\mathrm {p i x}} = \\left\\| I _ {i \\rightarrow j} - I _ {j} \\right\\|. \\tag {8}\n\\]"},
+    {"type": "text", "bbox": [0.496, 0.633, 0.888, 0.738], "angle": 0, "content": "However, we observe that this simple strategy is prone to cause failures in reflectant non-Lambertian surfaces where appearance changes greatly regarding viewpoints (Zhan et al., 2018). In addition, geometry-related problems, such as self-occlusion and artifacts, prohibits naive usage of pixelwise image reconstruction loss for regularization in unseen viewpoints."},
+    {"type": "text", "bbox": [0.496, 0.748, 0.888, 0.809], "angle": 0, "content": "Feature-level consistency modeling. To overcome these issues, we propose masked feature-level regularization loss that encourages structural consistency while ignoring view-dependent radiance effects, as illustrated in Figure 2."},
+    {"type": "text", "bbox": [0.496, 0.815, 0.889, 0.907], "angle": 0, "content": "Given an image \\(I\\) as an input, we use a convolutional network to extract multi-level feature maps such that \\(f_{\\phi ,l}(I)\\in \\mathbb{R}^{H_l\\times W_l\\times C_l}\\), with channel depth \\(C_l\\) for \\(l\\)-th layer. To measure feature-level consistency between warped image \\(I_{i\\rightarrow j}\\) and rendered image \\(I_{j}\\), we extract their features maps from \\(L\\) layers and compute difference within each feature map"}
+  ],
+  [
+    {"type": "header", "bbox": [0.253, 0.057, 0.719, 0.071], "angle": 0, "content": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency"},
+    {"type": "image", "bbox": [0.098, 0.082, 0.252, 0.203], "angle": 0, "content": null},
+    {"type": "image_caption", "bbox": [0.136, 0.208, 0.213, 0.222], "angle": 0, "content": "(a) GT patch"},
+    {"type": "image", "bbox": [0.253, 0.082, 0.407, 0.203], "angle": 0, "content": null},
+    {"type": "image_caption", "bbox": [0.272, 0.208, 0.386, 0.222], "angle": 0, "content": "(b) Rendered patch"},
+    {"type": "image", "bbox": [0.409, 0.082, 0.563, 0.203], "angle": 0, "content": null},
+    {"type": "image_caption", "bbox": [0.434, 0.208, 0.536, 0.222], "angle": 0, "content": "(c) Warped patch"},
+    {"type": "image", "bbox": [0.58, 0.082, 0.691, 0.203], "angle": 0, "content": null},
+    {"type": "image_caption", "bbox": [0.581, 0.209, 0.698, 0.222], "angle": 0, "content": "(d) Occlusion mask"},
+    {"type": "image", "bbox": [0.718, 0.082, 0.871, 0.203], "angle": 0, "content": null},
+    {"type": "image_caption", "bbox": [0.743, 0.209, 0.846, 0.222], "angle": 0, "content": "(e) Masked patch"},
+    {"type": "image_caption", "bbox": [0.084, 0.227, 0.886, 0.27], "angle": 0, "content": "Figure 3. Visualization of consistency modeling process. (a) ground truth patch, (b) rendered patch at novel viewpoint, (c) warped patch, from input viewpoint to novel viewpoint, (d) occlusion mask with threshold masking, and (e) final warped patch with occlusion masking at novel viewpoint."},
+    {"type": "text", "bbox": [0.085, 0.283, 0.376, 0.298], "angle": 0, "content": "pairs that are extracted from the same layer."},
+    {"type": "text", "bbox": [0.085, 0.305, 0.475, 0.396], "angle": 0, "content": "In accordance with the idea of using the warped image \\( I_{i \\to j} \\) as pseudo ground truths, we allow a gradient backpropagation to pass only through the rendered image and block it for the warped image. By applying the consistency loss at multiple levels of feature maps, we cause \\( I_{j} \\) to model after \\( I_{i \\to j} \\) both on semantic and structural level."},
+    {"type": "text", "bbox": [0.085, 0.403, 0.474, 0.432], "angle": 0, "content": "Formally written, the consistency loss \\(\\mathcal{L}_{\\mathrm{cons}}\\) is defined as such that"},
+    {"type": "equation", "bbox": [0.151, 0.44, 0.474, 0.481], "angle": 0, "content": "\\[\n\\mathcal {L} _ {\\text {c o n s}} = \\sum_ {l = 1} ^ {L} \\frac {1}{C _ {l}} \\left\\| f _ {\\phi} ^ {l} \\left(I _ {i \\rightarrow j}\\right) - f _ {\\phi} ^ {l} \\left(I _ {j}\\right)\\right\\|. \\tag {9}\n\\]"},
+    {"type": "text", "bbox": [0.085, 0.497, 0.477, 0.633], "angle": 0, "content": "For this loss function \\(\\mathcal{L}_{\\mathrm{cons}}\\), we find \\(\\ell_1\\) distance function most suited for our task and utilize it to measure consistency across feature difference maps. Empirically, we have discovered that VGG-19 network (Simonyan & Zisserman, 2014) yields best performance in modeling consistencies, likely due to the absence of normalization layers (Johnson et al., 2016) that scale down absolute values of feature differences. Therefore, we employ VGG-19 network as our feature extractor network \\(f_{\\phi}\\) throughout all of our models."},
+    {"type": "text", "bbox": [0.085, 0.64, 0.477, 0.807], "angle": 0, "content": "It should be noted that our loss function differs from that of DietNeRF (Jain et al., 2021) in that while DietNeRF's consistency loss is limited to regularizing the radiance field in a globally semantic level, our loss combined with the warping module is also able to give the network highly rich information on a local, structural level as well. In other words, contrary to DietNeRF giving only high-level feature consistency, our method of using multiple levels of convolutional network for feature difference calculation can be interpreted as enforcing a mixture of all levels, from high-level semantic consistency to low-level structural consistency."},
+    {"type": "text", "bbox": [0.085, 0.815, 0.476, 0.907], "angle": 0, "content": "Occlusion handling. In order to prevent imperfect and distorted warpings caused by erroneous geometry from influencing the model, which degrades overall reconstruction quality, we construct consistency mask \\( M_{l} \\) to let NeRF ignore regions with geometric inconsistencies, as demonstrated in Figure 3. Instead of applying masks to the images"},
+    {"type": "image", "bbox": [0.508, 0.283, 0.892, 0.475], "angle": 0, "content": null},
+    {"type": "image_caption", "bbox": [0.496, 0.487, 0.888, 0.571], "angle": 0, "content": "Figure 4. Occlusion-aware mask generation. Mask generation by comparing geometry between novel view \\( j \\) and source view \\( i \\), with \\( I_{i\\rightarrow j} \\) being warped patch generated for view \\( j \\). For (a) and (b), warping does not occur correctly due to artifacts and self-occlusion, respectively. Such pixels are masked out by \\( M_l \\), allowing only (c), with accurate warping, as training signal for rendered image \\( I_j \\)."},
+    {"type": "text", "bbox": [0.496, 0.585, 0.886, 0.645], "angle": 0, "content": "before inputting them into the feature extractor network, we apply resized masks \\( M_{l} \\) directly to the feature maps, after using nearest-neighbor down-sampling to make them match the dimensions of \\( l \\)-th layer outputs."},
+    {"type": "text", "bbox": [0.496, 0.652, 0.886, 0.696], "angle": 0, "content": "We generate \\( M \\) by measuring consistency between rendered depth values from the target viewpoint and source viewpoint such that"},
+    {"type": "equation", "bbox": [0.557, 0.71, 0.885, 0.729], "angle": 0, "content": "\\[\nM \\left(p _ {j}\\right) = \\left[\\left\\| D _ {j} \\left(p _ {j}\\right) - D _ {i} \\left(p _ {j \\rightarrow i}\\right)\\right\\| < \\tau \\right]. \\tag {10}\n\\]"},
+    {"type": "text", "bbox": [0.496, 0.74, 0.887, 0.906], "angle": 0, "content": "where \\([\\cdot ]\\) is Iverson bracket, and \\(p_{j\\rightarrow i}\\) refers to the corresponding pixel in source viewpoint \\(i\\) for reprojected target pixel \\(p_j\\) of \\(j\\)-th viewpoint. Here we measure euclidean distance between depth points rendered from target and source viewpoints as a criterion for a threshold masking. As illustrated in Figure 4, if distance between two points are greater than given threshold value \\(\\tau\\), we determine two rays as rendering depths of separate surfaces and mask out the corresponding pixel in viewpoint \\(I_{j}\\). The process takes place over every pixel in viewpoint \\(I_{j}\\) to generate a mask \\(M\\) the same size as rendered pixels. Through this technique, we fil"}
+  ],
+  [
+    {"type": "header", "bbox": [0.253, 0.057, 0.719, 0.071], "angle": 0, "content": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency"},
+    {"type": "image", "bbox": [0.092, 0.08, 0.878, 0.304], "angle": 0, "content": null},
+    {"type": "image_caption", "bbox": [0.084, 0.307, 0.889, 0.35], "angle": 0, "content": "Figure 5. Qualitative comparison on NeRF-Synthetic (Mildenhall et al., 2020) show that in 3-view setting, our method captures fine details more robustly (such as the wire in the mic scene) and produces less artifacts (background in the materials scene) compared to previous methods. We show GeCoNeRF's results (e) with its rendered depth (f)."},
+    {"type": "text", "bbox": [0.085, 0.362, 0.475, 0.392], "angle": 0, "content": "ter out problematic solutions at feature level and regularize NeRF with only high-confidence image features."},
+    {"type": "text", "bbox": [0.085, 0.399, 0.475, 0.429], "angle": 0, "content": "Based on this, the consistency loss \\(\\mathcal{L}_{\\mathrm{cons}}\\) is extended as such that"},
+    {"type": "equation", "bbox": [0.102, 0.434, 0.475, 0.477], "angle": 0, "content": "\\[\n\\mathcal {L} _ {\\text {c o n s}} ^ {M} = \\sum_ {l = 1} ^ {L} \\frac {1}{C _ {l} m _ {l}} \\| M _ {l} \\odot \\left(f _ {\\phi} ^ {l} \\left(I _ {i \\rightarrow j}\\right) - f _ {\\phi} ^ {l} \\left(I _ {j}\\right)\\right) \\|, \\tag {11}\n\\]"},
+    {"type": "text", "bbox": [0.085, 0.482, 0.354, 0.496], "angle": 0, "content": "where \\(m_l\\) is the sum of non-zero values."},
+    {"type": "text", "bbox": [0.085, 0.5, 0.477, 0.696], "angle": 0, "content": "Edge-aware disparity regularization. Since our method is dependent upon the quality of depth rendered by NeRF, we directly impose additional regularization on rendered depth to facilitate optimization. We further encourage local depth smoothness on rendered scenes by imposing \\( \\ell_1 \\) penalty on disparity gradient within randomly sampled patches of input views. In addition, inspired by (Godard et al., 2017), we take into account the fact that depth discontinuities in depth maps are likely to be aligned to gradients of its color image, and introduce an edge-aware term with image gradients \\( \\partial I \\) to weight the disparity values. Specifically, following (Godard et al., 2017), we regularize for edge-aware depth smoothness such that"},
+    {"type": "equation", "bbox": [0.14, 0.701, 0.474, 0.722], "angle": 0, "content": "\\[\n\\mathcal {L} _ {\\text {r e g}} = \\left| \\partial_ {x} D _ {i} ^ {*} \\right| e ^ {- \\left| \\partial_ {x} I _ {i} \\right|} + \\left| \\partial_ {y} D _ {i} ^ {*} \\right| e ^ {- \\left| \\partial_ {y} I _ {i} \\right|}, \\tag {12}\n\\]"},
+    {"type": "text", "bbox": [0.085, 0.727, 0.476, 0.775], "angle": 0, "content": "where \\( D_{i}^{*} = D_{i} / \\overline{D_{i}} \\) is the mean-normalized inverse depth from (Godard et al., 2017) to discourage shrinking of the estimated depth."},
+    {"type": "title", "bbox": [0.086, 0.784, 0.245, 0.8], "angle": 0, "content": "4.4. Training Strategy"},
+    {"type": "text", "bbox": [0.085, 0.808, 0.475, 0.838], "angle": 0, "content": "In this section, we present novel training strategies to learn the model with the proposed losses."},
+    {"type": "text", "bbox": [0.085, 0.84, 0.476, 0.916], "angle": 0, "content": "Total losses. We optimize our model with a combined final loss of original NeRF's pixel-wise reconstruction loss \\(\\mathcal{L}_{\\mathrm{obs}}\\) and two types of regularization loss, \\(\\mathcal{L}_{\\mathrm{cons}}^M\\) for unobserved view consistency modeling and \\(\\mathcal{L}_{\\mathrm{reg}}\\) for disparity regularization."},
+    {"type": "text", "bbox": [0.497, 0.361, 0.889, 0.573], "angle": 0, "content": "Progressive camera pose generation. Difficulty of accurate warping increases the further target view is from the source view, which means that sampling far camera poses straight from the beginning of training may have negative effects on our model. Therefore, we first generate camera poses near source views, then progressively further as training proceeds. We sample noise value uniformly between an interval of \\([- \\beta, + \\beta]\\) and add it to the original Euler rotation angles of input view poses, with parameter \\(\\beta\\) growing linearly from 3 to 9 degrees throughout the course of optimization. This design choice can be intuitively understood as stabilizing locations near observed viewpoints at start and propagating this regularization to further locations, where warping becomes progressively more difficult."},
+    {"type": "text", "bbox": [0.497, 0.578, 0.889, 0.806], "angle": 0, "content": "Positional encoding frequency annealing. We find that most of the artifacts occurring are high-frequency occlusions that fill the space between scene and camera. This behaviour can be effectively suppressed by constraining the order of Fourier positional encoding (Tancik et al., 2020) to low dimensions. Due to this reason, we adopt coarse-to-fine frequency annealing strategy previously used by (Park et al., 2021) to regularize our optimization. This strategy forces our network to primarily optimize from coarse, low-frequency details where self-occlusions and fine features are minimized, easing the difficulty of warping process in the beginning stages of training. Following (Park et al., 2021), the annealing equation is \\(\\alpha(t) = mt / K\\), with \\(m\\) as the number of encoding frequencies, \\(t\\) as iteration step, and we set hyper-parameter \\(K\\) as \\(15k\\)."},
+    {"type": "title", "bbox": [0.498, 0.818, 0.63, 0.835], "angle": 0, "content": "5. Experiments"},
+    {"type": "title", "bbox": [0.498, 0.837, 0.688, 0.853], "angle": 0, "content": "5.1. Experimental Settings"},
+    {"type": "text", "bbox": [0.497, 0.861, 0.889, 0.906], "angle": 0, "content": "Baselines. We use mip-NeRF (Barron et al., 2021) as our backbone. We give our comparisons to the baseline and several state-of-the-art models for few-shot NeRF: InfoN"}
+  ],
+  [
+    {"type": "header", "bbox": [0.253, 0.057, 0.719, 0.073], "angle": 0, "content": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency"},
+    {"type": "table_caption", "bbox": [0.086, 0.094, 0.851, 0.109], "angle": 0, "content": "Table 1. Quantitative comparison on NeRF-Synthetic (Mildenhall et al., 2020) and LLFF (Mildenhall et al., 2019) datasets."},
+    {"type": "table", "bbox": [0.092, 0.113, 0.885, 0.255], "angle": 0, "content": "<table><tr><td rowspan=\"2\">Methods</td><td colspan=\"4\">NeRF-Synthetic (Mildenhall et al., 2020)</td><td colspan=\"4\">LLFF (Mildenhall et al., 2019)</td></tr><tr><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td><td>Avg. ↓</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td><td>Avg. ↓</td></tr><tr><td>NeRF (Mildenhall et al., 2020)</td><td>14.73</td><td>0.734</td><td>0.451</td><td>0.199</td><td>13.34</td><td>0.373</td><td>0.451</td><td>0.255</td></tr><tr><td>mip-NeRF (Barron et al., 2021)</td><td>17.71</td><td>0.798</td><td>0.745</td><td>0.178</td><td>14.62</td><td>0.351</td><td>0.495</td><td>0.246</td></tr><tr><td>DietNeRF (Jain et al., 2021)</td><td>16.06</td><td>0.793</td><td>0.306</td><td>0.151</td><td>14.94</td><td>0.370</td><td>0.496</td><td>0.232</td></tr><tr><td>InfoNeRF (Kim et al., 2022)</td><td>18.65</td><td>0.811</td><td>0.230</td><td>0.111</td><td>14.37</td><td>0.349</td><td>0.457</td><td>0.238</td></tr><tr><td>RegNeRF (Niemeyer et al., 2022)</td><td>18.01</td><td>0.842</td><td>0.352</td><td>0.132</td><td>19.08</td><td>0.587</td><td>0.336</td><td>0.146</td></tr><tr><td>GeCoNeRF (Ours)</td><td>19.23</td><td>0.866</td><td>0.201</td><td>0.096</td><td>18.77</td><td>0.596</td><td>0.338</td><td>0.145</td></tr></table>"},
+    {"type": "image", "bbox": [0.099, 0.259, 0.253, 0.35], "angle": 0, "content": null},
+    {"type": "image", "bbox": [0.099, 0.35, 0.253, 0.44], "angle": 0, "content": null},
+    {"type": "image_caption", "bbox": [0.127, 0.445, 0.227, 0.457], "angle": 0, "content": "(a) Ground-truth"},
+    {"type": "image", "bbox": [0.255, 0.259, 0.408, 0.35], "angle": 0, "content": null},
+    {"type": "image", "bbox": [0.255, 0.35, 0.408, 0.44], "angle": 0, "content": null},
+    {"type": "image_caption", "bbox": [0.288, 0.445, 0.375, 0.459], "angle": 0, "content": "(b) mip-NeRF"},
+    {"type": "image", "bbox": [0.41, 0.259, 0.563, 0.35], "angle": 0, "content": null},
+    {"type": "image", "bbox": [0.41, 0.35, 0.563, 0.44], "angle": 0, "content": null},
+    {"type": "image_caption", "bbox": [0.432, 0.445, 0.541, 0.459], "angle": 0, "content": "(c) mip-NeRF (D)"},
+    {"type": "image", "bbox": [0.565, 0.26, 0.718, 0.35], "angle": 0, "content": null},
+    {"type": "image", "bbox": [0.565, 0.35, 0.718, 0.44], "angle": 0, "content": null},
+    {"type": "image_caption", "bbox": [0.595, 0.445, 0.688, 0.457], "angle": 0, "content": "(d) GeCoNeRF"},
+    {"type": "image", "bbox": [0.721, 0.259, 0.874, 0.35], "angle": 0, "content": null},
+    {"type": "image", "bbox": [0.721, 0.35, 0.874, 0.44], "angle": 0, "content": null},
+    {"type": "image_caption", "bbox": [0.739, 0.445, 0.854, 0.458], "angle": 0, "content": "(e) GeCoNeRF (D)"},
+    {"type": "image_caption", "bbox": [0.085, 0.464, 0.888, 0.493], "angle": 0, "content": "Figure 6. Qualitative results on LLFF (Mildenhall et al., 2019). Comparison with baseline mip-NeRF shows that our model learns of coherent depth and geometry in extremely sparse 3-view setting."},
+    {"type": "text", "bbox": [0.085, 0.504, 0.477, 0.551], "angle": 0, "content": "eRF (Kim et al., 2022), DietNeRF (Jain et al., 2021), and RegNeRF (Niemeyer et al., 2022). We provide implementation details in the appendix."},
+    {"type": "text", "bbox": [0.085, 0.564, 0.477, 0.806], "angle": 0, "content": "Datasets and metrics. We evaluate our model on NeRF-Synthetic (Mildenhall et al., 2020) and LLFF (Mildenhall et al., 2019). NeRF-Synthetic is a realistically rendered \\(360^{\\circ}\\) synthetic dataset comprised of 8 scenes. We randomly sample 3 viewpoints out of 100 training images in each scene, with 200 testing images for evaluation. We also conduct experiments on LLFF benchmark dataset, which consists of real-life forward facing scenes. Following RegNeRF (Niemeyer et al., 2022), we apply standard settings by selecting test set evenly from list of every 8th image and selecting 3 reference views from remaining images. We quantify novel view synthesis quality using PSNR, Structural Similarity Index Measure (SSIM) (Wang et al., 2004), LPIPS perceptual metric (Zhang et al., 2018) and an average error metric introduced in (Barron et al., 2021) to report the mean value of metrics for all scenes in each dataset."},
+    {"type": "title", "bbox": [0.086, 0.822, 0.214, 0.837], "angle": 0, "content": "5.2. Comparisons"},
+    {"type": "text", "bbox": [0.085, 0.846, 0.476, 0.907], "angle": 0, "content": "Qualitative comparisons. Qualitative comparison results in Figure 5 and 6 demonstrate that our model shows superior performance to baseline mip-NeRF (Barron et al., 2021) and previous state-of-the-art model, RegNeRF (Niemeyer et al.,"},
+    {"type": "text", "bbox": [0.497, 0.504, 0.889, 0.702], "angle": 0, "content": "2022), in 3-view settings. We observe that our warping-based consistency enables GeCoNeRF to capture fine details that mip-NeRF and RegNeRF struggle to capture in same sparse view scenarios, as demonstrated with the mic scene. Our method also displays higher stability in rendering smooth surfaces and reducing artifacts in background in comparison to previous models, as shown in the results of the materials scene. We argue that these results demonstrate how our method, through generation of warped pseudo ground truth patches, is able to give the model local, scene-specific regularization that aids recovery of fine details, which previous few-shot NeRF models with their global, generalized priors were unable to accomplish."},
+    {"type": "text", "bbox": [0.496, 0.74, 0.888, 0.846], "angle": 0, "content": "Quantitative comparisons. Comparisons in Table 1 show our model's competitive results in LLFF dataset, whose PSNR results show large increase in comparison to mip-NeRF baseline and competitive compared to RegNeRF. We see that our warping-based consistency modeling successfully prevents overfitting and artifacts, which allows our model to perform better quantitatively."},
+    {"type": "title", "bbox": [0.498, 0.862, 0.637, 0.877], "angle": 0, "content": "5.3. Ablation Study"},
+    {"type": "text", "bbox": [0.497, 0.886, 0.886, 0.916], "angle": 0, "content": "We validate our design choices by performing an ablation study on LLFF (Mildenhall et al., 2019) dataset."}
+  ],
| 1468 |
+
[
|
| 1469 |
+
{
|
| 1470 |
+
"type": "header",
|
| 1471 |
+
"bbox": [
|
| 1472 |
+
0.253,
|
| 1473 |
+
0.057,
|
| 1474 |
+
0.719,
|
| 1475 |
+
0.071
|
| 1476 |
+
],
|
| 1477 |
+
"angle": 0,
|
| 1478 |
+
"content": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency"
|
| 1479 |
+
},
|
| 1480 |
+
{
|
| 1481 |
+
"type": "image",
|
| 1482 |
+
"bbox": [
|
| 1483 |
+
0.088,
|
| 1484 |
+
0.082,
|
| 1485 |
+
0.245,
|
| 1486 |
+
0.175
|
| 1487 |
+
],
|
| 1488 |
+
"angle": 0,
|
| 1489 |
+
"content": null
|
| 1490 |
+
},
|
| 1491 |
+
{
|
| 1492 |
+
"type": "image_caption",
|
| 1493 |
+
"bbox": [
|
| 1494 |
+
0.13,
|
| 1495 |
+
0.181,
|
| 1496 |
+
0.204,
|
| 1497 |
+
0.194
|
| 1498 |
+
],
|
| 1499 |
+
"angle": 0,
|
| 1500 |
+
"content": "(a) Baseline"
|
| 1501 |
+
},
|
| 1502 |
+
{
|
| 1503 |
+
"type": "image",
|
| 1504 |
+
"bbox": [
|
| 1505 |
+
0.247,
|
| 1506 |
+
0.082,
|
| 1507 |
+
0.404,
|
| 1508 |
+
0.175
|
| 1509 |
+
],
|
| 1510 |
+
"angle": 0,
|
| 1511 |
+
"content": null
|
| 1512 |
+
},
|
| 1513 |
+
{
|
| 1514 |
+
"type": "image_caption",
|
| 1515 |
+
"bbox": [
|
| 1516 |
+
0.281,
|
| 1517 |
+
0.181,
|
| 1518 |
+
0.37,
|
| 1519 |
+
0.194
|
| 1520 |
+
],
|
| 1521 |
+
"angle": 0,
|
| 1522 |
+
"content": "(b) (a) + \\(\\mathcal{L}_{\\mathrm{cons}}\\)"
|
| 1523 |
+
},
|
| 1524 |
+
{
|
| 1525 |
+
"type": "image",
|
| 1526 |
+
"bbox": [
|
| 1527 |
+
0.405,
|
| 1528 |
+
0.082,
|
| 1529 |
+
0.563,
|
| 1530 |
+
0.175
|
| 1531 |
+
],
|
| 1532 |
+
"angle": 0,
|
| 1533 |
+
"content": null
|
| 1534 |
+
},
|
| 1535 |
+
{
|
| 1536 |
+
"type": "image_caption",
|
| 1537 |
+
"bbox": [
|
| 1538 |
+
0.418,
|
| 1539 |
+
0.181,
|
| 1540 |
+
0.551,
|
| 1541 |
+
0.194
|
| 1542 |
+
],
|
| 1543 |
+
"angle": 0,
|
| 1544 |
+
"content": "(c) \\((\\mathbf{b}) + M\\) (O. mask)"
|
| 1545 |
+
},
|
| 1546 |
+
{
|
| 1547 |
+
"type": "image",
|
| 1548 |
+
"bbox": [
|
| 1549 |
+
0.565,
|
| 1550 |
+
0.082,
|
| 1551 |
+
0.722,
|
| 1552 |
+
0.175
|
| 1553 |
+
],
|
| 1554 |
+
"angle": 0,
|
| 1555 |
+
"content": null
|
| 1556 |
+
},
|
| 1557 |
+
{
|
| 1558 |
+
"type": "image_caption",
|
| 1559 |
+
"bbox": [
|
| 1560 |
+
0.582,
|
| 1561 |
+
0.181,
|
| 1562 |
+
0.705,
|
| 1563 |
+
0.195
|
| 1564 |
+
],
|
| 1565 |
+
"angle": 0,
|
| 1566 |
+
"content": "(d) (c) + Progressive"
|
| 1567 |
+
},
|
| 1568 |
+
{
|
| 1569 |
+
"type": "image",
|
| 1570 |
+
"bbox": [
|
| 1571 |
+
0.724,
|
| 1572 |
+
0.082,
|
| 1573 |
+
0.881,
|
| 1574 |
+
0.175
|
| 1575 |
+
],
|
| 1576 |
+
"angle": 0,
|
| 1577 |
+
"content": null
|
| 1578 |
+
},
|
| 1579 |
+
{
|
| 1580 |
+
"type": "image_caption",
|
| 1581 |
+
"bbox": [
|
| 1582 |
+
0.741,
|
| 1583 |
+
0.181,
|
| 1584 |
+
0.864,
|
| 1585 |
+
0.195
|
| 1586 |
+
],
|
| 1587 |
+
"angle": 0,
|
| 1588 |
+
"content": "(e) (d) + \\(\\mathcal{L}_{\\mathrm{reg}}\\) (Ours)"
|
| 1589 |
+
},
|
| 1590 |
+
{
|
| 1591 |
+
"type": "image_caption",
|
| 1592 |
+
"bbox": [
|
| 1593 |
+
0.084,
|
| 1594 |
+
0.2,
|
| 1595 |
+
0.886,
|
| 1596 |
+
0.226
|
| 1597 |
+
],
|
| 1598 |
+
"angle": 0,
|
| 1599 |
+
"content": "Figure 7. Qualitative ablation. Our qualitative ablation results on Horns scene shows the contribution of each module in performance of our model at 3-view scenario."
|
| 1600 |
+
},
|
| 1601 |
+
{
|
| 1602 |
+
"type": "table_caption",
|
| 1603 |
+
"bbox": [
|
| 1604 |
+
0.205,
|
| 1605 |
+
0.24,
|
| 1606 |
+
0.356,
|
| 1607 |
+
0.254
|
| 1608 |
+
],
|
| 1609 |
+
"angle": 0,
|
| 1610 |
+
"content": "Table 2. Ablation study."
|
| 1611 |
+
},
|
| 1612 |
+
{
|
| 1613 |
+
"type": "table",
|
| 1614 |
+
"bbox": [
|
| 1615 |
+
0.088,
|
| 1616 |
+
0.259,
|
| 1617 |
+
0.471,
|
| 1618 |
+
0.358
|
| 1619 |
+
],
|
| 1620 |
+
"angle": 0,
|
| 1621 |
+
"content": "<table><tr><td>Components</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td><td>Avg.↓</td></tr><tr><td>(a) Baseline</td><td>14.62</td><td>0.351</td><td>0.495</td><td>0.246</td></tr><tr><td>(b) (a) + Lcons</td><td>18.10</td><td>0.529</td><td>0.408</td><td>0.164</td></tr><tr><td>(c) (b) + M (O. mask)</td><td>18.24</td><td>0.535</td><td>0.379</td><td>0.159</td></tr><tr><td>(d) (c) + Progressive</td><td>18.46</td><td>0.552</td><td>0.349</td><td>0.151</td></tr><tr><td>(e) (d) + Lreg (Ours)</td><td>18.55</td><td>0.592</td><td>0.340</td><td>0.150</td></tr></table>"
|
| 1622 |
+
},
|
| 1623 |
+
{
|
| 1624 |
+
"type": "table_caption",
|
| 1625 |
+
"bbox": [
|
| 1626 |
+
0.16,
|
| 1627 |
+
0.369,
|
| 1628 |
+
0.402,
|
| 1629 |
+
0.383
|
| 1630 |
+
],
|
| 1631 |
+
"angle": 0,
|
| 1632 |
+
"content": "Table 3. Progressive training ablation."
|
| 1633 |
+
},
|
| 1634 |
+
{
|
| 1635 |
+
"type": "table",
|
| 1636 |
+
"bbox": [
|
| 1637 |
+
0.088,
|
| 1638 |
+
0.387,
|
| 1639 |
+
0.471,
|
| 1640 |
+
0.476
|
| 1641 |
+
],
|
| 1642 |
+
"angle": 0,
|
| 1643 |
+
"content": "<table><tr><td>Components</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td><td>Avg. ↓</td></tr><tr><td>w/o prog. anneal</td><td>18.50</td><td>0.852</td><td>0.781</td><td>0.161</td></tr><tr><td>w/o prog. pose</td><td>16.96</td><td>0.799</td><td>0.811</td><td>0.194</td></tr><tr><td>w/o both</td><td>17.04</td><td>0.788</td><td>0.823</td><td>0.197</td></tr><tr><td>GeCoNeRF (Ours)</td><td>19.23</td><td>0.866</td><td>0.723</td><td>0.148</td></tr></table>"
|
| 1644 |
+
},
|
| 1645 |
+
{
|
| 1646 |
+
"type": "text",
|
| 1647 |
+
"bbox": [
|
| 1648 |
+
0.084,
|
| 1649 |
+
0.489,
|
| 1650 |
+
0.475,
|
| 1651 |
+
0.58
|
| 1652 |
+
],
|
| 1653 |
+
"angle": 0,
|
| 1654 |
+
"content": "Feature-level consistency loss. We observe that without the consistency loss \\(\\mathcal{L}_{\\mathrm{cons}}\\), our model suffers both quantitative and qualitative decrease in reconstruction fidelity, verified by incoherent geometry in image (a) of Figure 7. Absence of unseen view consistency modeling destabilizes the model, resulting divergent behaviours."
|
| 1655 |
+
},
|
| 1656 |
+
{
|
| 1657 |
+
"type": "text",
|
| 1658 |
+
"bbox": [
|
| 1659 |
+
0.084,
|
| 1660 |
+
0.582,
|
| 1661 |
+
0.475,
|
| 1662 |
+
0.688
|
| 1663 |
+
],
|
| 1664 |
+
"angle": 0,
|
| 1665 |
+
"content": "Occlusion mask. We observe that the addition of the occlusion mask \\( M \\) improves the overall appearance as well as the geometry, as shown in image (c) of Figure 7. Its absence results in broken geometry throughout the scene, as demonstrated in (b). Erroneous artifacts pertaining to projections from different viewpoints were detected in multiple scenes, resulting in lower quantitative values."
|
| 1666 |
+
},
|
| 1667 |
+
{
|
| 1668 |
+
"type": "text",
|
| 1669 |
+
"bbox": [
|
| 1670 |
+
0.084,
|
| 1671 |
+
0.692,
|
| 1672 |
+
0.475,
|
| 1673 |
+
0.844
|
| 1674 |
+
],
|
| 1675 |
+
"angle": 0,
|
| 1676 |
+
"content": "Progressive training strategies. In Table 3, we justify our progressive training strategies with additional experiments on the NeRF-Synthetic dataset, whereas in the main ablation we conduct an ablation with progressive annealing only. For pose generation, we sample pose angles from a large interval from the beginning, instead of slowly growing the interval. For positional encoding, we replace progressive annealing with the naive positional encoding used in NeRF. We observe that their absence causes destabilization of the model and degradation in appearance, respectively."
|
| 1677 |
+
},
|
| 1678 |
+
{
|
| 1679 |
+
"type": "text",
|
| 1680 |
+
"bbox": [
|
| 1681 |
+
0.084,
|
| 1682 |
+
0.846,
|
| 1683 |
+
0.476,
|
| 1684 |
+
0.907
|
| 1685 |
+
],
|
| 1686 |
+
"angle": 0,
|
| 1687 |
+
"content": "Edge-aware disparity regularization. We observe that the inclusion of the edge-aware disparity regularization \\(\\mathcal{L}_{\\mathrm{reg}}\\) refines the given geometry, as shown in image (e) of Figure 7. By applying \\(\\mathcal{L}_{\\mathrm{reg}}\\), we see increased smoothness in geometry"
|
| 1688 |
+
},
|
| 1689 |
+
{
|
| 1690 |
+
"type": "table_caption",
|
| 1691 |
+
"bbox": [
|
| 1692 |
+
0.566,
|
| 1693 |
+
0.24,
|
| 1694 |
+
0.819,
|
| 1695 |
+
0.254
|
| 1696 |
+
],
|
| 1697 |
+
"angle": 0,
|
| 1698 |
+
"content": "Table 4. Pixel-level consistency ablation."
|
| 1699 |
+
},
|
| 1700 |
+
{
|
| 1701 |
+
"type": "table",
|
| 1702 |
+
"bbox": [
|
| 1703 |
+
0.499,
|
| 1704 |
+
0.259,
|
| 1705 |
+
0.882,
|
| 1706 |
+
0.315
|
| 1707 |
+
],
|
| 1708 |
+
"angle": 0,
|
| 1709 |
+
"content": "<table><tr><td>Components</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td><td>Avg.↓</td></tr><tr><td>w/ Lpix</td><td>17.98</td><td>0.528</td><td>0.431</td><td>0.165</td></tr><tr><td>w/ Lcons (Ours)</td><td>18.55</td><td>0.592</td><td>0.340</td><td>0.150</td></tr></table>"
|
| 1710 |
+
},
|
| 1711 |
+
{
|
| 1712 |
+
"type": "image",
|
| 1713 |
+
"bbox": [
|
| 1714 |
+
0.557,
|
| 1715 |
+
0.319,
|
| 1716 |
+
0.695,
|
| 1717 |
+
0.427
|
| 1718 |
+
],
|
| 1719 |
+
"angle": 0,
|
| 1720 |
+
"content": null
|
| 1721 |
+
},
|
| 1722 |
+
{
|
| 1723 |
+
"type": "image_caption",
|
| 1724 |
+
"bbox": [
|
| 1725 |
+
0.578,
|
| 1726 |
+
0.432,
|
| 1727 |
+
0.672,
|
| 1728 |
+
0.446
|
| 1729 |
+
],
|
| 1730 |
+
"angle": 0,
|
| 1731 |
+
"content": "(a) Pixel-level"
|
| 1732 |
+
},
|
| 1733 |
+
{
|
| 1734 |
+
"type": "image",
|
| 1735 |
+
"bbox": [
|
| 1736 |
+
0.719,
|
| 1737 |
+
0.321,
|
| 1738 |
+
0.831,
|
| 1739 |
+
0.427
|
| 1740 |
+
],
|
| 1741 |
+
"angle": 0,
|
| 1742 |
+
"content": null
|
| 1743 |
+
},
|
| 1744 |
+
{
|
| 1745 |
+
"type": "image_caption",
|
| 1746 |
+
"bbox": [
|
| 1747 |
+
0.706,
|
| 1748 |
+
0.432,
|
| 1749 |
+
0.815,
|
| 1750 |
+
0.446
|
| 1751 |
+
],
|
| 1752 |
+
"angle": 0,
|
| 1753 |
+
"content": "(b) Feature-level"
|
| 1754 |
+
},
|
| 1755 |
+
{
|
| 1756 |
+
"type": "image_caption",
|
| 1757 |
+
"bbox": [
|
| 1758 |
+
0.575,
|
| 1759 |
+
0.45,
|
| 1760 |
+
0.805,
|
| 1761 |
+
0.468
|
| 1762 |
+
],
|
| 1763 |
+
"angle": 0,
|
| 1764 |
+
"content": "Figure 8. \\(\\mathcal{L}_{\\mathrm{pix}}^M\\) vs. \\(\\mathcal{L}_{\\mathrm{cons}}^M\\) comparison."
|
| 1765 |
+
},
|
| 1766 |
+
{
|
| 1767 |
+
"type": "text",
|
| 1768 |
+
"bbox": [
|
| 1769 |
+
0.497,
|
| 1770 |
+
0.481,
|
| 1771 |
+
0.887,
|
| 1772 |
+
0.527
|
| 1773 |
+
],
|
| 1774 |
+
"angle": 0,
|
| 1775 |
+
"content": "throughout the overall scene. This loss contributes to the removal of erroneous artifacts, achieving better results both qualitatively and quantitatively, as shown in Table 2."
|
| 1776 |
+
},
|
| 1777 |
+
{
|
| 1778 |
+
"type": "text",
|
| 1779 |
+
"bbox": [
|
| 1780 |
+
0.496,
|
| 1781 |
+
0.543,
|
| 1782 |
+
0.889,
|
| 1783 |
+
0.68
|
| 1784 |
+
],
|
| 1785 |
+
"angle": 0,
|
| 1786 |
+
"content": "Feature-level loss vs. pixel-level loss. In Table 4, we conduct a quantitative ablation comparison between the feature-level consistency loss \\(\\mathcal{L}_{\\mathrm{cons}}^{M}\\) and the pixel-level photometric consistency loss \\(\\mathcal{L}_{\\mathrm{pix}}^{M}\\), both with occlusion masking. As shown in Figure 8, naively applying a pixel-level loss for consistency modeling leads to broken geometry. This phenomenon can be attributed to \\(\\mathcal{L}_{\\mathrm{pix}}\\) being agnostic to view-dependent specular effects, which the network tries to model by altering or erasing non-Lambertian surfaces altogether."
|
| 1787 |
+
},
|
| 1788 |
+
{
|
| 1789 |
+
"type": "title",
|
| 1790 |
+
"bbox": [
|
| 1791 |
+
0.497,
|
| 1792 |
+
0.699,
|
| 1793 |
+
0.617,
|
| 1794 |
+
0.714
|
| 1795 |
+
],
|
| 1796 |
+
"angle": 0,
|
| 1797 |
+
"content": "6. Conclusion"
|
| 1798 |
+
},
|
| 1799 |
+
{
|
| 1800 |
+
"type": "text",
|
| 1801 |
+
"bbox": [
|
| 1802 |
+
0.496,
|
| 1803 |
+
0.725,
|
| 1804 |
+
0.888,
|
| 1805 |
+
0.906
|
| 1806 |
+
],
|
| 1807 |
+
"angle": 0,
|
| 1808 |
+
"content": "We present GeCoNeRF, a novel approach for optimizing Neural Radiance Fields (NeRF) for few-shot novel view synthesis. Inspired by self-supervised monocular depth estimation methods, we regularize geometric consistency by enforcing semantic consistency between rendered and warped images. This approach overcomes the limitation of NeRF with sparse inputs, which otherwise suffers performance degradation from depth ambiguity and numerous artifacts. With the feature consistency loss, we are able to regularize NeRF at unobserved viewpoints and give it a beneficial geometric constraint. The further techniques and training strategies we propose prove to have a stabilizing effect and facilitate optimization of our"
|
| 1809 |
+
}
|
| 1810 |
+
],
|
| 1811 |
+
[
|
| 1812 |
+
{
|
| 1813 |
+
"type": "header",
|
| 1814 |
+
"bbox": [
|
| 1815 |
+
0.253,
|
| 1816 |
+
0.057,
|
| 1817 |
+
0.719,
|
| 1818 |
+
0.072
|
| 1819 |
+
],
|
| 1820 |
+
"angle": 0,
|
| 1821 |
+
"content": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency"
|
| 1822 |
+
},
|
| 1823 |
+
{
|
| 1824 |
+
"type": "text",
|
| 1825 |
+
"bbox": [
|
| 1826 |
+
0.085,
|
| 1827 |
+
0.086,
|
| 1828 |
+
0.479,
|
| 1829 |
+
0.13
|
| 1830 |
+
],
|
| 1831 |
+
"angle": 0,
|
| 1832 |
+
"content": "network. Our experimental evaluation demonstrates our method's competitive results compared to other state-of-the-art baselines."
|
| 1833 |
+
},
|
| 1834 |
+
{
|
| 1835 |
+
"type": "title",
|
| 1836 |
+
"bbox": [
|
| 1837 |
+
0.088,
|
| 1838 |
+
0.15,
|
| 1839 |
+
0.183,
|
| 1840 |
+
0.166
|
| 1841 |
+
],
|
| 1842 |
+
"angle": 0,
|
| 1843 |
+
"content": "References"
|
| 1844 |
+
},
|
| 1845 |
+
{
|
| 1846 |
+
"type": "ref_text",
|
| 1847 |
+
"bbox": [
|
| 1848 |
+
0.088,
|
| 1849 |
+
0.174,
|
| 1850 |
+
0.476,
|
| 1851 |
+
0.235
|
| 1852 |
+
],
|
| 1853 |
+
"angle": 0,
|
| 1854 |
+
"content": "Attal, B., Laidlaw, E., Gokaslan, A., Kim, C., Richardt, C., Tompkin, J., and O'Toole, M. Törf: Time-of-flight radiance fields for dynamic scene view synthesis. Advances in neural information processing systems, 34, 2021."
|
| 1855 |
+
},
|
| 1856 |
+
{
|
| 1857 |
+
"type": "ref_text",
|
| 1858 |
+
"bbox": [
|
| 1859 |
+
0.088,
|
| 1860 |
+
0.247,
|
| 1861 |
+
0.476,
|
| 1862 |
+
0.323
|
| 1863 |
+
],
|
| 1864 |
+
"angle": 0,
|
| 1865 |
+
"content": "Barron, J. T., Mildenhall, B., Tancik, M., Hedman, P., Martin-Brualla, R., and Srinivasan, P. P. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021."
|
| 1866 |
+
},
|
| 1867 |
+
{
|
| 1868 |
+
"type": "ref_text",
|
| 1869 |
+
"bbox": [
|
| 1870 |
+
0.088,
|
| 1871 |
+
0.335,
|
| 1872 |
+
0.476,
|
| 1873 |
+
0.395
|
| 1874 |
+
],
|
| 1875 |
+
"angle": 0,
|
| 1876 |
+
"content": "Bortolon, M., Del Bue, A., and Poiesi, F. Data augmentation for nerf: a geometric consistent solution based on view morphing, 2022. URL https://arxiv.org/abs/2210.04214."
|
| 1877 |
+
},
|
| 1878 |
+
{
|
| 1879 |
+
"type": "ref_text",
|
| 1880 |
+
"bbox": [
|
| 1881 |
+
0.088,
|
| 1882 |
+
0.408,
|
| 1883 |
+
0.476,
|
| 1884 |
+
0.484
|
| 1885 |
+
],
|
| 1886 |
+
"angle": 0,
|
| 1887 |
+
"content": "Chen, A., Xu, Z., Zhao, F., Zhang, X., Xiang, F., Yu, J., and Su, H. Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14124-14133, 2021."
|
| 1888 |
+
},
|
| 1889 |
+
{
|
| 1890 |
+
"type": "ref_text",
|
| 1891 |
+
"bbox": [
|
| 1892 |
+
0.088,
|
| 1893 |
+
0.496,
|
| 1894 |
+
0.476,
|
| 1895 |
+
0.541
|
| 1896 |
+
],
|
| 1897 |
+
"angle": 0,
|
| 1898 |
+
"content": "Chen, Z., Wang, C., Guo, Y., and Zhang, S.-H. Structnerf: Neural radiance fields for indoor scenes with structural hints. ArXiv, abs/2209.05277, 2022."
|
| 1899 |
+
},
|
| 1900 |
+
{
|
| 1901 |
+
"type": "ref_text",
|
| 1902 |
+
"bbox": [
|
| 1903 |
+
0.088,
|
| 1904 |
+
0.554,
|
| 1905 |
+
0.476,
|
| 1906 |
+
0.63
|
| 1907 |
+
],
|
| 1908 |
+
"angle": 0,
|
| 1909 |
+
"content": "Chibane, J., Bansal, A., Lazova, V., and Pons-Moll, G. Stereo radiance fields (srf): Learning view synthesis for sparse views of novel scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7911-7920, 2021."
|
| 1910 |
+
},
|
| 1911 |
+
{
|
| 1912 |
+
"type": "ref_text",
|
| 1913 |
+
"bbox": [
|
| 1914 |
+
0.088,
|
| 1915 |
+
0.642,
|
| 1916 |
+
0.476,
|
| 1917 |
+
0.688
|
| 1918 |
+
],
|
| 1919 |
+
"angle": 0,
|
| 1920 |
+
"content": "Darmon, F., Bascle, B., Devaux, J., Monasse, P., and Aubry, M. Improving neural implicit surfaces geometry with patch warping. 2022."
|
| 1921 |
+
},
|
| 1922 |
+
{
|
| 1923 |
+
"type": "ref_text",
|
| 1924 |
+
"bbox": [
|
| 1925 |
+
0.088,
|
| 1926 |
+
0.7,
|
| 1927 |
+
0.476,
|
| 1928 |
+
0.774
|
| 1929 |
+
],
|
| 1930 |
+
"angle": 0,
|
| 1931 |
+
"content": "Deng, K., Liu, A., Zhu, J.-Y., and Ramanan, D. Depth-supervised NeRF: Fewer views and faster training for free. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022."
|
| 1932 |
+
},
|
| 1933 |
+
{
|
| 1934 |
+
"type": "ref_text",
|
| 1935 |
+
"bbox": [
|
| 1936 |
+
0.088,
|
| 1937 |
+
0.788,
|
| 1938 |
+
0.476,
|
| 1939 |
+
0.833
|
| 1940 |
+
],
|
| 1941 |
+
"angle": 0,
|
| 1942 |
+
"content": "Deng, Y., Yang, J., Xiang, J., and Tong, X. Gram: Generative radiance manifolds for 3d-aware image generation. arXiv preprint arXiv:2112.08867, 2021."
|
| 1943 |
+
},
|
| 1944 |
+
{
|
| 1945 |
+
"type": "ref_text",
|
| 1946 |
+
"bbox": [
|
| 1947 |
+
0.088,
|
| 1948 |
+
0.846,
|
| 1949 |
+
0.476,
|
| 1950 |
+
0.906
|
| 1951 |
+
],
|
| 1952 |
+
"angle": 0,
|
| 1953 |
+
"content": "Fu, Q., Xu, Q., Ong, Y.-S., and Tao, W. Geo-neus: Geometry-consistent neural implicit surfaces learning for multi-view reconstruction, 2022. URL https://arxiv.org/abs/2205.15848."
|
| 1954 |
+
},
|
| 1955 |
+
{
|
| 1956 |
+
"type": "list",
|
| 1957 |
+
"bbox": [
|
| 1958 |
+
0.088,
|
| 1959 |
+
0.174,
|
| 1960 |
+
0.476,
|
| 1961 |
+
0.906
|
| 1962 |
+
],
|
| 1963 |
+
"angle": 0,
|
| 1964 |
+
"content": null
|
| 1965 |
+
},
|
| 1966 |
+
{
|
| 1967 |
+
"type": "ref_text",
|
| 1968 |
+
"bbox": [
|
| 1969 |
+
0.5,
|
| 1970 |
+
0.085,
|
| 1971 |
+
0.886,
|
| 1972 |
+
0.146
|
| 1973 |
+
],
|
| 1974 |
+
"angle": 0,
|
| 1975 |
+
"content": "Garg, R., Bg, V. K., Carneiro, G., and Reid, I. Unsupervised cnn for single view depth estimation: Geometry to the rescue. In European conference on computer vision, pp. 740-756. Springer, 2016."
|
| 1976 |
+
},
|
| 1977 |
+
{
|
| 1978 |
+
"type": "ref_text",
|
| 1979 |
+
"bbox": [
|
| 1980 |
+
0.5,
|
| 1981 |
+
0.155,
|
| 1982 |
+
0.887,
|
| 1983 |
+
0.199
|
| 1984 |
+
],
|
| 1985 |
+
"angle": 0,
|
| 1986 |
+
"content": "Godard, C., Mac Aodha, O., and Brostow, G. J. Unsupervised monocular depth estimation with left-right consistency. In CVPR, 2017."
|
| 1987 |
+
},
|
| 1988 |
+
{
|
| 1989 |
+
"type": "ref_text",
|
| 1990 |
+
"bbox": [
|
| 1991 |
+
0.5,
|
| 1992 |
+
0.209,
|
| 1993 |
+
0.887,
|
| 1994 |
+
0.283
|
| 1995 |
+
],
|
| 1996 |
+
"angle": 0,
|
| 1997 |
+
"content": "Hedman, P., Srinivasan, P. P., Mildenhall, B., Barron, J. T., and Debevec, P. Baking neural radiance fields for real-time view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5875-5884, 2021."
|
| 1998 |
+
},
|
| 1999 |
+
{
|
| 2000 |
+
"type": "ref_text",
|
| 2001 |
+
"bbox": [
|
| 2002 |
+
0.5,
|
| 2003 |
+
0.293,
|
| 2004 |
+
0.887,
|
| 2005 |
+
0.368
|
| 2006 |
+
],
|
| 2007 |
+
"angle": 0,
|
| 2008 |
+
"content": "Huang, B., Yi, H., Huang, C., He, Y., Liu, J., and Liu, X. M3vsnet: Unsupervised multi-metric multi-view stereo network. In 2021 IEEE International Conference on Image Processing (ICIP), pp. 3163-3167, 2021. doi: 10.1109/ICIP42928.2021.9506469."
|
| 2009 |
+
},
|
| 2010 |
+
{
|
| 2011 |
+
"type": "ref_text",
|
| 2012 |
+
"bbox": [
|
| 2013 |
+
0.5,
|
| 2014 |
+
0.377,
|
| 2015 |
+
0.886,
|
| 2016 |
+
0.422
|
| 2017 |
+
],
|
| 2018 |
+
"angle": 0,
|
| 2019 |
+
"content": "Jaderberg, M., Simonyan, K., Zisserman, A., et al. Spatial transformer networks. Advances in neural information processing systems, 28, 2015."
|
| 2020 |
+
},
|
| 2021 |
+
{
|
| 2022 |
+
"type": "ref_text",
|
| 2023 |
+
"bbox": [
|
| 2024 |
+
0.5,
|
| 2025 |
+
0.431,
|
| 2026 |
+
0.886,
|
| 2027 |
+
0.492
|
| 2028 |
+
],
|
| 2029 |
+
"angle": 0,
|
| 2030 |
+
"content": "Jain, A., Tancik, M., and Abbeel, P. Putting nerf on a diet: Semantically consistent few-shot view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5885-5894, 2021."
|
| 2031 |
+
},
|
| 2032 |
+
{
|
| 2033 |
+
"type": "ref_text",
|
| 2034 |
+
"bbox": [
|
| 2035 |
+
0.5,
|
| 2036 |
+
0.5,
|
| 2037 |
+
0.886,
|
| 2038 |
+
0.561
|
| 2039 |
+
],
|
| 2040 |
+
"angle": 0,
|
| 2041 |
+
"content": "Jensen, R., Dahl, A., Vogiatzis, G., Tola, E., and Aanaes, H. Large scale multi-view stereopsis evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 406-413, 2014."
|
| 2042 |
+
},
|
| 2043 |
+
{
|
| 2044 |
+
"type": "ref_text",
|
| 2045 |
+
"bbox": [
|
| 2046 |
+
0.5,
|
| 2047 |
+
0.569,
|
| 2048 |
+
0.886,
|
| 2049 |
+
0.63
|
| 2050 |
+
],
|
| 2051 |
+
"angle": 0,
|
| 2052 |
+
"content": "Jeong, Y., Ahn, S., Choy, C., Anandkumar, A., Cho, M., and Park, J. Self-calibrating neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5846-5854, 2021."
|
| 2053 |
+
},
|
| 2054 |
+
{
|
| 2055 |
+
"type": "ref_text",
|
| 2056 |
+
"bbox": [
|
| 2057 |
+
0.5,
|
| 2058 |
+
0.638,
|
| 2059 |
+
0.886,
|
| 2060 |
+
0.684
|
| 2061 |
+
],
|
| 2062 |
+
"angle": 0,
|
| 2063 |
+
"content": "Johnson, J., Alahi, A., and Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, 2016."
|
| 2064 |
+
},
|
| 2065 |
+
{
|
| 2066 |
+
"type": "ref_text",
|
| 2067 |
+
"bbox": [
|
| 2068 |
+
0.5,
|
| 2069 |
+
0.692,
|
| 2070 |
+
0.886,
|
| 2071 |
+
0.753
|
| 2072 |
+
],
|
| 2073 |
+
"angle": 0,
|
| 2074 |
+
"content": "Khot, T., Agrawal, S., Tulsiani, S., Mertz, C., Lucey, S., and Hebert, M. Learning unsupervised multi-view stereopsis via robust photometric consistency. arXiv preprint arXiv:1905.02706, 2019."
|
| 2075 |
+
},
|
| 2076 |
+
{
|
| 2077 |
+
"type": "ref_text",
|
| 2078 |
+
"bbox": [
|
| 2079 |
+
0.5,
|
| 2080 |
+
0.761,
|
| 2081 |
+
0.886,
|
| 2082 |
+
0.822
|
| 2083 |
+
],
|
| 2084 |
+
"angle": 0,
|
| 2085 |
+
"content": "Kim, M., Seo, S., and Han, B. Infonerf: Ray entropy minimization for few-shot neural volume rendering. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022."
|
| 2086 |
+
},
|
| 2087 |
+
{
|
| 2088 |
+
"type": "ref_text",
|
| 2089 |
+
"bbox": [
|
| 2090 |
+
0.5,
|
| 2091 |
+
0.83,
|
| 2092 |
+
0.886,
|
| 2093 |
+
0.906
|
| 2094 |
+
],
|
| 2095 |
+
"angle": 0,
|
| 2096 |
+
"content": "Mildenhall, B., Srinivasan, P. P., Ortiz-Cayon, R., Kalantari, N. K., Ramamoorthi, R., Ng, R., and Kar, A. Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Transactions on Graphics (TOG), 2019."
|
| 2097 |
+
},
|
| 2098 |
+
{
|
| 2099 |
+
"type": "list",
|
| 2100 |
+
"bbox": [
|
| 2101 |
+
0.5,
|
| 2102 |
+
0.085,
|
| 2103 |
+
0.887,
|
| 2104 |
+
0.906
|
| 2105 |
+
],
|
| 2106 |
+
"angle": 0,
|
| 2107 |
+
"content": null
|
| 2108 |
+
}
|
| 2109 |
+
],
|
| 2110 |
+
[
|
| 2111 |
+
{
|
| 2112 |
+
"type": "header",
|
| 2113 |
+
"bbox": [
|
| 2114 |
+
0.253,
|
| 2115 |
+
0.057,
|
| 2116 |
+
0.719,
|
| 2117 |
+
0.072
|
| 2118 |
+
],
|
| 2119 |
+
"angle": 0,
|
| 2120 |
+
"content": "GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency"
|
| 2121 |
+
},
|
| 2122 |
+
{
|
| 2123 |
+
"type": "ref_text",
|
| 2124 |
+
"bbox": [
|
| 2125 |
+
0.088,
|
| 2126 |
+
0.085,
|
| 2127 |
+
0.476,
|
| 2128 |
+
0.145
|
| 2129 |
+
],
|
| 2130 |
+
"angle": 0,
|
| 2131 |
+
"content": "Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., and Ng, R. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020."
|
| 2132 |
+
},
|
| 2133 |
+
{
|
| 2134 |
+
"type": "ref_text",
|
| 2135 |
+
"bbox": [
|
| 2136 |
+
0.088,
|
| 2137 |
+
0.157,
|
| 2138 |
+
0.476,
|
| 2139 |
+
0.203
|
| 2140 |
+
],
|
| 2141 |
+
"angle": 0,
|
| 2142 |
+
"content": "Müller, T., Evans, A., Schied, C., and Keller, A. Instant neural graphics primitives with a multiresolution hash encoding. arXiv preprint arXiv:2201.05989, 2022."
|
| 2143 |
+
},
|
| 2144 |
+
{
|
| 2145 |
+
"type": "ref_text",
|
| 2146 |
+
"bbox": [
|
| 2147 |
+
0.088,
|
| 2148 |
+
0.214,
|
| 2149 |
+
0.476,
|
| 2150 |
+
0.274
|
| 2151 |
+
],
|
| 2152 |
+
"angle": 0,
|
| 2153 |
+
"content": "Niemeyer, M. and Geiger, A. Giraffe: Representing scenes as compositional generative neural feature fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11453-11464, 2021."
|
| 2154 |
+
},
|
| 2155 |
+
{
|
| 2156 |
+
"type": "ref_text",
|
| 2157 |
+
"bbox": [
|
| 2158 |
+
0.088,
|
| 2159 |
+
0.285,
|
| 2160 |
+
0.476,
|
| 2161 |
+
0.36
|
| 2162 |
+
],
|
| 2163 |
+
"angle": 0,
|
| 2164 |
+
"content": "Niemeyer, M., Barron, J. T., Mildenhall, B., Sajjadi, M. S. M., Geiger, A., and Radwan, N. Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022."
|
| 2165 |
+
},
|
| 2166 |
+
{
|
| 2167 |
+
"type": "ref_text",
|
| 2168 |
+
"bbox": [
|
| 2169 |
+
0.088,
|
| 2170 |
+
0.371,
|
| 2171 |
+
0.476,
|
| 2172 |
+
0.447
|
| 2173 |
+
],
|
| 2174 |
+
"angle": 0,
|
| 2175 |
+
"content": "Park, K., Sinha, U., Barron, J. T., Bouaziz, S., Goldman, D. B., Seitz, S. M., and Martin-Brualla, R. Nerfies: Deformable neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5865-5874, 2021."
|
| 2176 |
+
},
|
| 2177 |
+
{
|
| 2178 |
+
"type": "ref_text",
|
| 2179 |
+
"bbox": [
|
| 2180 |
+
0.088,
|
| 2181 |
+
0.458,
|
| 2182 |
+
0.476,
|
| 2183 |
+
0.533
|
| 2184 |
+
],
|
| 2185 |
+
"angle": 0,
|
| 2186 |
+
"content": "Pumarola, A., Corona, E., Pons-Moll, G., and Moreno-Noguer, F. D-nerf: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10318-10327, 2021."
|
| 2187 |
+
},
|
| 2188 |
+
{
|
| 2189 |
+
"type": "ref_text",
|
| 2190 |
+
"bbox": [
|
| 2191 |
+
0.088,
|
| 2192 |
+
0.544,
|
| 2193 |
+
0.476,
|
| 2194 |
+
0.606
|
| 2195 |
+
],
|
| 2196 |
+
"angle": 0,
|
| 2197 |
+
"content": "Reiser, C., Peng, S., Liao, Y., and Geiger, A. Kilonerf: Speeding up neural radiance fields with thousands of tiny mlp's. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14335-14345, 2021."
|
| 2198 |
+
},
|
| 2199 |
+
{
|
| 2200 |
+
"type": "ref_text",
|
| 2201 |
+
"bbox": [
|
| 2202 |
+
0.088,
|
| 2203 |
+
0.616,
|
| 2204 |
+
0.476,
|
| 2205 |
+
0.676
|
| 2206 |
+
],
|
| 2207 |
+
"angle": 0,
|
| 2208 |
+
"content": "Roessle, B., Barron, J. T., Mildenhall, B., Srinivasan, P. P., and Nießner, M. Dense depth priors for neural radiance fields from sparse input views. arXiv preprint arXiv:2112.03288, 2021."
|
| 2209 |
+
},
|
| 2210 |
+
{
|
| 2211 |
+
"type": "ref_text",
|
| 2212 |
+
"bbox": [
|
| 2213 |
+
0.088,
|
| 2214 |
+
0.687,
|
| 2215 |
+
0.476,
|
| 2216 |
+
0.747
|
| 2217 |
+
],
|
| 2218 |
+
"angle": 0,
|
| 2219 |
+
"content": "Schwarz, K., Liao, Y., Niemeyer, M., and Geiger, A. Graf: Generative radiance fields for 3d-aware image synthesis. Advances in Neural Information Processing Systems, 33: 20154-20166, 2020."
|
| 2220 |
+
},
|
| 2221 |
+
{
|
| 2222 |
+
"type": "ref_text",
|
| 2223 |
+
"bbox": [
|
| 2224 |
+
0.088,
|
| 2225 |
+
0.759,
|
| 2226 |
+
0.476,
|
| 2227 |
+
0.819
|
| 2228 |
+
],
|
| 2229 |
+
"angle": 0,
|
| 2230 |
+
"content": "Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. URL http://arxiv.org/abs/1409.1556."
|
| 2231 |
+
},
|
| 2232 |
+
{
|
| 2233 |
+
"type": "ref_text",
|
| 2234 |
+
"bbox": [
|
| 2235 |
+
0.088,
|
| 2236 |
+
0.83,
|
| 2237 |
+
0.476,
|
| 2238 |
+
0.905
|
| 2239 |
+
],
|
| 2240 |
+
"angle": 0,
|
| 2241 |
+
"content": "Tancik, M., Srinivasan, P. P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J. T., and Ng, R. Fourier features let networks learn high frequency functions in low dimensional domains. NeurIPS, 2020."
|
| 2242 |
+
},
|
| 2243 |
+
{
|
| 2244 |
+
"type": "list",
|
| 2245 |
+
"bbox": [
|
| 2246 |
+
0.088,
|
| 2247 |
+
0.085,
|
| 2248 |
+
0.476,
|
| 2249 |
+
0.905
|
| 2250 |
+
],
|
| 2251 |
+
"angle": 0,
|
| 2252 |
+
"content": null
|
| 2253 |
+
},
|
| 2254 |
+
{
|
| 2255 |
+
"type": "ref_text",
|
| 2256 |
+
"bbox": [
|
| 2257 |
+
0.5,
|
| 2258 |
+
0.085,
|
| 2259 |
+
0.888,
|
| 2260 |
+
0.176
|
| 2261 |
+
],
|
| 2262 |
+
"angle": 0,
|
| 2263 |
+
"content": "Tretschk, E., Tewari, A., Golyanik, V., Zollhöfer, M., Lassner, C., and Theobalt, C. Non-rigid neural radiance fields: Reconstruction and novel view synthesis of a dynamic scene from monocular video. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 12959-12970, 2021."
|
| 2264 |
+
},
|
| 2265 |
+
{
|
| 2266 |
+
"type": "ref_text",
|
| 2267 |
+
"bbox": [
|
| 2268 |
+
0.5,
|
| 2269 |
+
0.186,
|
| 2270 |
+
0.885,
|
| 2271 |
+
0.246
|
| 2272 |
+
],
|
| 2273 |
+
"angle": 0,
|
| 2274 |
+
"content": "Wang, Z., Bovik, A., Sheikh, H., and Simoncelli, E. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13 (4):600-612, 2004. doi: 10.1109/TIP.2003.819861."
|
| 2275 |
+
},
|
| 2276 |
+
{
|
| 2277 |
+
"type": "ref_text",
|
| 2278 |
+
"bbox": [
|
| 2279 |
+
0.5,
|
| 2280 |
+
0.257,
|
| 2281 |
+
0.885,
|
| 2282 |
+
0.302
|
| 2283 |
+
],
|
| 2284 |
+
"angle": 0,
|
| 2285 |
+
"content": "Xu, X., Pan, X., Lin, D., and Dai, B. Generative occupancy fields for 3d surface-aware image synthesis. Advances in Neural Information Processing Systems, 34, 2021."
|
| 2286 |
+
},
|
| 2287 |
+
{
|
| 2288 |
+
"type": "ref_text",
|
| 2289 |
+
"bbox": [
|
| 2290 |
+
0.5,
|
| 2291 |
+
0.312,
|
| 2292 |
+
0.887,
|
| 2293 |
+
0.357
|
| 2294 |
+
],
|
| 2295 |
+
"angle": 0,
|
| 2296 |
+
"content": "Yu, A., Li, R., Tancik, M., Li, H., Ng, R., and Kanazawa, A. PlenOctrees for real-time rendering of neural radiance fields. In ICCV, 2021a."
|
| 2297 |
+
},
|
| 2298 |
+
{
|
| 2299 |
+
"type": "ref_text",
|
| 2300 |
+
"bbox": [
|
| 2301 |
+
0.5,
|
| 2302 |
+
0.367,
|
| 2303 |
+
0.887,
|
| 2304 |
+
0.428
|
| 2305 |
+
],
|
| 2306 |
+
"angle": 0,
|
| 2307 |
+
"content": "Yu, A., Ye, V., Tancik, M., and Kanazawa, A. pixelnerf: Neural radiance fields from one or few images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4578-4587, 2021b."
|
| 2308 |
+
},
|
| 2309 |
+
{
|
| 2310 |
+
"type": "ref_text",
|
| 2311 |
+
"bbox": [
|
| 2312 |
+
0.5,
|
| 2313 |
+
0.438,
|
| 2314 |
+
0.887,
|
| 2315 |
+
0.527
|
| 2316 |
+
],
|
| 2317 |
+
"angle": 0,
|
| 2318 |
+
"content": "Zhan, H., Garg, R., Weerasekera, C. S., Li, K., Agarwal, H., and Reid, I. Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 340-349, 2018."
|
| 2319 |
+
},
|
| 2320 |
+
{
|
| 2321 |
+
"type": "ref_text",
|
| 2322 |
+
"bbox": [
|
| 2323 |
+
0.5,
|
| 2324 |
+
0.538,
|
| 2325 |
+
0.887,
|
| 2326 |
+
0.642
|
| 2327 |
+
],
|
| 2328 |
+
"angle": 0,
|
| 2329 |
+
"content": "Zhang, J., Zhang, Y., Fu, H., Zhou, X., Cai, B., Huang, J., Jia, R., Zhao, B., and Tang, X. Ray priors through reprojection: Improving neural radiance fields for novel view extrapolation. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 18355-18365, 2022. doi: 10.1109/CVPR52688.2022.01783."
|
| 2330 |
+
},
|
| 2331 |
+
{
|
| 2332 |
+
"type": "ref_text",
|
| 2333 |
+
"bbox": [
|
| 2334 |
+
0.5,
|
| 2335 |
+
0.654,
|
| 2336 |
+
0.885,
|
| 2337 |
+
0.699
|
| 2338 |
+
],
|
| 2339 |
+
"angle": 0,
|
| 2340 |
+
"content": "Zhang, K., Riegler, G., Snavely, N., and Koltun, V. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020."
|
| 2341 |
+
},
|
| 2342 |
+
{
|
| 2343 |
+
"type": "ref_text",
|
| 2344 |
+
"bbox": [
|
| 2345 |
+
0.5,
|
| 2346 |
+
0.71,
|
| 2347 |
+
0.885,
|
| 2348 |
+
0.755
|
| 2349 |
+
],
|
| 2350 |
+
"angle": 0,
|
| 2351 |
+
"content": "Zhang, R., Isola, P., Efros, A. A., Shechtman, E., and Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018."
|
| 2352 |
+
},
|
| 2353 |
+
{
|
| 2354 |
+
"type": "ref_text",
|
| 2355 |
+
"bbox": [
|
| 2356 |
+
0.5,
|
| 2357 |
+
0.765,
|
| 2358 |
+
0.885,
|
| 2359 |
+
0.826
|
| 2360 |
+
],
|
| 2361 |
+
"angle": 0,
|
| 2362 |
+
"content": "Zhou, T., Brown, M., Snavely, N., and Lowe, D. G. Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1851-1858, 2017."
|
| 2363 |
+
},
|
| 2364 |
+
{
|
| 2365 |
+
"type": "list",
|
| 2366 |
+
"bbox": [
|
| 2367 |
+
0.5,
|
| 2368 |
+
0.085,
|
| 2369 |
+
0.888,
|
| 2370 |
+
0.826
|
| 2371 |
+
],
|
| 2372 |
+
"angle": 0,
|
| 2373 |
+
"content": null
|
| 2374 |
+
}
|
| 2375 |
+
]
|
| 2376 |
+
]
|
2301.10xxx/2301.10941/c35dbdab-14b3-4d81-9ce3-5fce0461d6c8_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b8ddb345adae197e336f63f03405599af2618a3368189224f51ceebb42559796
|
| 3 |
+
size 21789800
|
2301.10xxx/2301.10941/full.md
ADDED
|
@@ -0,0 +1,355 @@
|
| 1 |
+
# GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency
|
| 2 |
+
|
| 3 |
+
Min-Seop Kwak*1 Jiuhn Song*1 Seungryong Kim
|
| 4 |
+
|
| 5 |
+
# Abstract
|
| 6 |
+
|
| 7 |
+
We present a novel framework to regularize Neural Radiance Field (NeRF) in a few-shot setting with a geometric consistency regularization. The proposed approach leverages a rendered depth map at unobserved viewpoint to warp sparse input images to the unobserved viewpoint and impose them as pseudo ground truths to facilitate learning of NeRF. By encouraging such geometric consistency at a feature-level instead of using pixel-level reconstruction loss, we regularize the NeRF at semantic and structural levels while allowing for modeling view-dependent radiance to account for color variations across viewpoints. We also propose an effective method to filter out erroneous warped solutions, along with training strategies to stabilize training during optimization. We show that our model achieves competitive results compared to state-of-the-art few-shot NeRF models.
|
| 8 |
+
|
| 9 |
+
# 1. Introduction
|
| 10 |
+
|
| 11 |
+
Recently, representing a 3D scene as a Neural Radiance Field (NeRF) (Mildenhall et al., 2020) has proven to be a powerful approach for novel view synthesis and 3D reconstruction (Barron et al., 2021; Jain et al., 2021; Chen et al., 2021). However, despite its impressive performance, NeRF requires a large number of dense, well-distributed calibrated images for optimization, which limits its applicability. When limited to sparse observations, NeRF easily overfits to the input view images and is unable to reconstruct correct geometry (Zhang et al., 2020).
|
| 12 |
+
|
| 13 |
+
The task that directly addresses this problem, also called a few-shot NeRF, aims to optimize a high-fidelity neural radiance field in such sparse scenarios (Jain et al., 2021; Kim
|
| 14 |
+
|
| 15 |
+
et al., 2022; Niemeyer et al., 2022), countering the under-constrained nature of the problem by introducing additional priors. Specifically, previous works attempted to solve this by utilizing a semantic feature (Jain et al., 2021), entropy minimization (Kim et al., 2022), SfM depth priors (Deng et al., 2022) or normalizing flow (Niemeyer et al., 2022), but their reliance on handcrafted methods or inability to extract local and fine structures limited their performance.
|
| 16 |
+
|
| 17 |
+
To alleviate these issues, we propose a novel regularization technique that enforces a geometric consistency across different views with a depth-guided warping and a geometry-aware consistency modeling. Based on these, we propose a novel framework, called Neural Radiance Fields with Geometric Consistency (GeCoNeRF), for training neural radiance fields in a few-shot setting. Our key insight is that we can leverage a depth rendered by NeRF to warp sparse input images to novel viewpoints, and use them as pseudo ground truths to facilitate learning of fine details and high-frequency features by NeRF. By encouraging images rendered at novel views to model warped images with a consistency loss, we can successfully constrain both geometry and appearance to boost the fidelity of neural radiance fields even in the highly under-constrained few-shot setting. Taking into consideration the non-Lambertian nature of the given datasets, we propose a feature-level regularization loss that captures contextual and structural information while largely ignoring individual color differences. We also present a method to generate a consistency mask to prevent inconsistently warped information from harming the network. Finally, we provide coarse-to-fine training strategies for sampling and pose generation to stabilize optimization of the model.
|
| 18 |
+
|
| 19 |
+
We demonstrate the effectiveness of our method on synthetic and real datasets (Mildenhall et al., 2020; Jensen et al., 2014). Experimental results prove the effectiveness of the proposed model over the latest methods for few-shot novel view synthesis.
|
| 20 |
+
|
| 21 |
+
# 2. Related Work
|
| 22 |
+
|
| 23 |
+
Neural radiance fields. Among the most notable approaches to the task of novel view synthesis and 3D reconstruction is Neural Radiance Field (NeRF) (Mildenhall et al., 2020), where photo-realistic images are rendered by a simple MLP architecture. Sparked by its impressive
|
| 24 |
+
|
| 25 |
+

|
| 26 |
+
Figure 1. Illustration of our consistency modeling pipeline for few-shot NeRF. Given an image $I_{i}$ and estimated depth map $D_{j}$ of $j$ -th unobserved viewpoint, we warp the image $I_{i}$ to that novel viewpoint as $I_{i\rightarrow j}$ by establishing geometric correspondence between two viewpoints. Using the warped image as a pseudo ground truth, we cause rendered image of unseen viewpoint, $I_{j}$ , to be consistent in structure with warped image, with occlusions taken into consideration.
|
| 27 |
+
|
| 28 |
+
performance, a variety of follow-up studies based on its continuous neural volumetric representation have been prompted, including dynamic and deformable scenes (Park et al., 2021; Tretschk et al., 2021; Pumarola et al., 2021; Attal et al., 2021), real-time rendering (Yu et al., 2021a; Hedman et al., 2021; Reiser et al., 2021; Müller et al., 2022), self-calibration (Jeong et al., 2021) and generative modeling (Schwarz et al., 2020; Niemeyer & Geiger, 2021; Xu et al., 2021; Deng et al., 2021). Mip-NeRF (Barron et al., 2021) eliminates aliasing artifacts by adopting cone tracing with a single multi-scale MLP. In general, most of these works have difficulty in optimizing a single scene with a small number of images.
|
| 29 |
+
|
| 30 |
+
Few-shot NeRF. One key limitation of NeRF is its necessity for large number of calibrated views in optimizing neural radiance fields. Some recent works attempted to address this in the case where only few observed views of the scene are available. PixelNeRF(Yu et al., 2021b) conditions a NeRF on image inputs using local CNN features. This conditional model allows the network to learn scene priors across multiple scenes. Stereo radiance fields (Chibane et al., 2021) use local CNN features from input views for scene geometry reasoning and MVSNeRF (Chen et al., 2021) combines cost volume with neural radiance field for improved performance. However, pre-training with multi-view images of numerous scenes are essential for these methods for them to learn reconstruction priors.
|
| 31 |
+
|
| 32 |
+
Other works attempt different approaches to optimizing NeRF from scratch in few-shot settings: DSNeRF (Deng et al., 2022) makes use of depth supervision to the network to optimize
|
| 33 |
+
|
| 34 |
+
a scene with few images. (Roessle et al., 2021) also utilizes a sparse depth prior, extending it into a dense depth map with a depth completion module to guide network optimization. On the other hand, there are models that tackle depth-prior-free few-shot optimization: DietNeRF (Jain et al., 2021) enforces semantic consistency between images rendered from unseen views and seen images. RegNeRF (Niemeyer et al., 2022) regularizes the geometry and appearance of patches rendered from unobserved viewpoints. InfoNeRF (Kim et al., 2022) constrains the density's entropy in each ray and ensures consistency across rays in the neighborhood. While these methods constrain NeRF into learning more realistic geometry, their regularizations are limited in that they require extensive dataset-specific fine-tuning and only provide regularization at a global level in a generalized manner.
|
| 35 |
+
|
| 36 |
+
Self-supervised photometric consistency. In the field of multi-view stereo depth estimation, consistency modeling between stereo images and their warped images has been widely used for self-supervised training (Godard et al., 2017; Garg et al., 2016; Zhou et al., 2017). In weakly supervised or unsupervised settings (Huang et al., 2021; Khot et al., 2019), where there is a lack of ground-truth depth information, consistency modeling between images with geometry-based warping is used as a supervisory signal (Zhou et al., 2017; Huang et al., 2021; Khot et al., 2019), formulating depth learning as a form of reconstruction task between viewpoints.
|
| 37 |
+
|
| 38 |
+
Recently, methods utilizing self-supervised photometric consistency have been introduced to NeRF: concurrent
|
| 39 |
+
|
| 40 |
+
works such as NeuralWarp (Darmon et al., 2022), Struct-NeRF (Chen et al., 2022) and Geo-NeuS (Fu et al., 2022) model photometric consistency between source images and their warped counterparts from other source viewpoints to improve their reconstruction quality. However, these methods only discuss dense-view input scenarios where pose differences between source viewpoints are small, and do not address their behavior in few-shot settings, where a sharp performance drop is expected due to the scarcity of input viewpoints and the increased difficulty of the warping procedure owing to large viewpoint differences and heavy self-occlusions. RapNeRF (Zhang et al., 2022) uses a geometry-based reprojection method to enhance view extrapolation performance, and (Bortolon et al., 2022) uses depth rendered by NeRF as correspondence information for a view-morphing module to synthesize images between input viewpoints. However, these methods do not take occlusions into account, and their pixel-level photometric consistency modeling comes with the downside of suppressing view-dependent specular effects.
|
| 41 |
+
|
| 42 |
+
# 3. Preliminaries
|
| 43 |
+
|
| 44 |
+
Neural Radiance Field (NeRF) (Mildenhall et al., 2020) represents a scene as a continuous function $f_{\theta}$ represented by a neural network with parameters $\theta$ , where the points are sampled along rays, represented by $r$ , for evaluation by the neural network. Typically, the sampled coordinates $\mathbf{x} \in \mathbb{R}^3$ and view direction $\mathbf{d} \in \mathbb{R}^2$ are transformed by a positional encoding $\gamma$ into Fourier features (Tancik et al., 2020) that facilitates learning of high-frequency details. The neural network $f_{\theta}$ takes as input the transformed coordinate $\gamma(\mathbf{x})$ and viewing directions $\gamma(\mathbf{d})$ , and outputs a view-invariant density value $\sigma \in \mathbb{R}$ and a view-dependent color value $\mathbf{c} \in \mathbb{R}^3$ such that
|
| 45 |
+
|
| 46 |
+
$$
|
| 47 |
+
\{\mathbf{c}, \sigma\} = f_{\theta}(\gamma(\mathbf{x}), \gamma(\mathbf{d})). \tag{1}
|
| 48 |
+
$$
|
| 49 |
+
|
| 50 |
+
With a ray parameterized as $\mathbf{r}_p(t) = \mathbf{o} + t\mathbf{d}_p$ from the camera center $\mathbf{o}$ through the pixel $p$ along direction $\mathbf{d}_p$ , the color is rendered as follows:
|
| 51 |
+
|
| 52 |
+
$$
|
| 53 |
+
C(\mathbf{r}_{p}) = \int_{t_{n}}^{t_{f}} T(t)\, \sigma(\mathbf{r}_{p}(t))\, \mathbf{c}(\mathbf{r}_{p}(t), \mathbf{d}_{p})\, dt, \tag{2}
|
| 54 |
+
$$
|
| 55 |
+
|
| 56 |
+
where $C(\mathbf{r}_p)$ is a predicted color value at the pixel $p$ along the ray $\mathbf{r}_p(t)$ from $t_n$ to $t_f$ , and $T(t)$ denotes an accumulated transmittance along the ray from $t_n$ to $t$ , defined such that
|
| 57 |
+
|
| 58 |
+
$$
|
| 59 |
+
T(t) = \exp\left(-\int_{t_{n}}^{t} \sigma(\mathbf{r}_{p}(s))\, ds\right). \tag{3}
|
| 60 |
+
$$
|
| 61 |
+
|
| 62 |
+
To optimize the networks $f_{\theta}$ , the observation loss $\mathcal{L}_{\mathrm{obs}}$ enforces the rendered color values to be consistent with ground truth color value $C^{\prime}(\mathbf{r})$ :
|
| 63 |
+
|
| 64 |
+
$$
|
| 65 |
+
\mathcal{L}_{\mathrm{obs}} = \sum_{\mathbf{r}_{p} \in \mathcal{R}} \| C^{\prime}(\mathbf{r}_{p}) - C(\mathbf{r}_{p}) \|_{2}^{2}, \tag{4}
|
| 66 |
+
$$
|
| 67 |
+
|
| 68 |
+
where $\mathcal{R}$ represents a batch of training rays.
|
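For readers implementing this from scratch, the following is a minimal NumPy sketch of the discretized quadrature that typically realizes Eqs. (2)-(4); the sampling scheme, array shapes, and function names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def composite_ray(sigmas, colors, t_vals):
    """Discretized quadrature of Eqs. (2)-(3) along a single ray.

    sigmas: (S,) densities, colors: (S, 3) RGB values, t_vals: (S,) sample depths.
    Returns the rendered color, the rendered depth, and per-sample weights.
    """
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)          # spacing between samples
    alphas = 1.0 - np.exp(-sigmas * deltas)                     # per-segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1] + 1e-10)))  # T(t)
    weights = trans * alphas
    color = (weights[:, None] * colors).sum(axis=0)             # Eq. (2), discretized
    depth = (weights * t_vals).sum()                            # same weights later give the depth map
    return color, depth, weights

def observation_loss(pred_colors, gt_colors):
    """Pixel-wise reconstruction loss of Eq. (4) over a batch of rays."""
    return np.mean(np.sum((pred_colors - gt_colors) ** 2, axis=-1))

# toy check with random densities and colors along 4 rays
rng = np.random.default_rng(0)
t_vals = np.linspace(2.0, 6.0, 64)
pred = [composite_ray(rng.uniform(0, 5, 64), rng.uniform(size=(64, 3)), t_vals)[0]
        for _ in range(4)]
print(observation_loss(np.stack(pred), rng.uniform(size=(4, 3))))
```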
| 69 |
+
|
| 70 |
+
# 4. Methodology
|
| 71 |
+
|
| 72 |
+
# 4.1. Motivation and Overview
|
| 73 |
+
|
| 74 |
+
Let us denote an image at $i$ -th viewpoint as $I_{i}$ . In a few-shot novel view synthesis, NeRF is given only a few images $\{I_i\}$ for $i \in \{1, \dots, N\}$ with small $N$ , e.g., $N = 3$ or $N = 5$ . The objective of novel view synthesis is to train the mapping function $f_{\theta}$ that can be used to recover an image $I_{j}$ at $j$ -th unseen or novel viewpoint. As we described above, in the few-shot setting, given $\{I_i\}$ , directly optimizing $f_{\theta}$ solely with the pixel-wise reconstruction loss $\mathcal{L}_{\mathrm{obs}}$ is limited by its inability to model view-dependent effects, and thus an additional regularization to encourage the network $f_{\theta}$ to generate consistent appearance and geometry is required.
|
| 75 |
+
|
| 76 |
+
To achieve this, we propose a novel regularization technique to enforce a geometric consistency across different views with depth-guided warping and consistency modeling. We focus on the fact that NeRF (Mildenhall et al., 2020) inherently renders not only color image but depth image as well. Combined with known viewpoint difference, the rendered depths can be used to define a geometric correspondence relationship between two arbitrary views.
|
| 77 |
+
|
| 78 |
+
Specifically, we consider a depth image rendered by the NeRF model, $D_{j}$ at unseen viewpoint $j$ . By formulating a warping function $\psi (I_i;D_j,R_{i\rightarrow j})$ that warps an image $I_{i}$ according to the depth $D_{j}$ and viewpoint difference $R_{i\rightarrow j}$ , we can encourage a consistency between warped image $I_{i\rightarrow j} = \psi (I_i;D_j,R_{i\rightarrow j})$ and rendered image $I_{j}$ at $j$ -th unseen viewpoint, which in turn improves the few-shot novel view synthesis performance. This framework can overcome the limitations of previous few-shot setting approaches (Mildenhall et al., 2020; Chen et al., 2021; Barron et al., 2021), improving not only global geometry but also high-frequency details and appearance as well.
|
| 79 |
+
|
| 80 |
+
In the following, we first explain how input images can be warped to unseen viewpoints in our framework. Then, we demonstrate how we impose consistency upon the pair of warped image and rendered image for regularization, followed by explanation of occlusion handling method and several training strategies that proved crucial for stabilization of NeRF optimization in few-shot scenario.
|
| 81 |
+
|
| 82 |
+
# 4.2. Rendered Depth-Guided Warping
|
| 83 |
+
|
| 84 |
+
To render an image at novel viewpoints, we first sample a random camera viewpoint, from which corresponding ray vectors are generated in a patch-wise manner. As NeRF outputs density and color values of sampled points along the novel rays, we use recovered density values to render a consistent depth map. Following (Mildenhall et al., 2020), we formulate per-ray depth values as weighted composition of
|
| 85 |
+
|
| 86 |
+

|
| 87 |
+
Figure 2. Illustration of the proposed framework. GeCoNeRF regularizes the networks with consistency modeling. Consistency loss function $\mathcal{L}_{\mathrm{cons}}^M$ is applied between unobserved viewpoint image and warped observed viewpoint image, while disparity regularization loss $\mathcal{L}_{\mathrm{reg}}$ regularizes depth at seen viewpoints.
|
| 88 |
+
|
| 89 |
+
distances traveled from origin. Since ray $\mathbf{r}_p$ corresponding to pixel $p$ is parameterized as $\mathbf{r}_p(t) = \mathbf{o} + t\mathbf{d}_p$ , the depth rendering is defined similarly to the color rendering:
|
| 90 |
+
|
| 91 |
+
$$
|
| 92 |
+
D(\mathbf{r}_{p}) = \int_{t_{n}}^{t_{f}} T(t)\, \sigma(\mathbf{r}_{p}(t))\, t\, dt, \tag{5}
|
| 93 |
+
$$
|
| 94 |
+
|
| 95 |
+
where $D(\mathbf{r}_p)$ is a predicted depth along the ray $\mathbf{r}_p$ . As described in Figure 1, we use the rendered depth map $D_j$ to warp input ground truth image $I_i$ to $j$ -th unseen viewpoint and acquire a warped image $I_{i\rightarrow j}$ , which is defined as a process such that $I_{i\rightarrow j} = \psi (I_i;D_j,R_{i\rightarrow j})$ . More specifically, pixel location $p_j$ in target unseen viewpoint image is transformed to $p_{j\to i}$ at source viewpoint image by viewpoint difference $R_{j\to i}$ and camera intrinsic parameter $K$ such that
|
| 96 |
+
|
| 97 |
+
$$
|
| 98 |
+
p_{j \rightarrow i} \sim K R_{j \rightarrow i} D_{j}(p_{j}) K^{-1} p_{j}, \tag{6}
|
| 99 |
+
$$
|
| 100 |
+
|
| 101 |
+
where $\sim$ indicates approximate equality and the projected coordinate $p_{j\rightarrow i}$ is a continuous value. With a differentiable sampler, we extract color values of $p_{j\rightarrow i}$ on $I_{i}$. More formally, the sampling process can be written as follows:
|
| 102 |
+
|
| 103 |
+
$$
|
| 104 |
+
I_{i \rightarrow j}(p_{j}) = \operatorname{sampler}\left(I_{i}; p_{j \rightarrow i}\right), \tag{7}
|
| 105 |
+
$$
|
| 106 |
+
|
| 107 |
+
where $\text{sampler}(\cdot)$ is a bilinear sampling operator (Jaderberg et al., 2015).
|
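The sketch below is one plausible NumPy implementation of the inverse warping in Eqs. (6)-(7): back-project novel-view pixels with the rendered depth $D_j$, transform them into the source frame, project with the intrinsics $K$, and sample the source image bilinearly. The camera convention (a 4x4 novel-to-source transform) and all helper names are assumptions for illustration, not the paper's released code.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Bilinearly sample img (H, W, 3) at continuous pixel coordinates (x, y)."""
    h, w, _ = img.shape
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, w - 1), np.clip(y0 + 1, 0, h - 1)
    x0, y0 = np.clip(x0, 0, w - 1), np.clip(y0, 0, h - 1)
    wx, wy = x - x0, y - y0
    top = (1 - wx)[..., None] * img[y0, x0] + wx[..., None] * img[y0, x1]
    bot = (1 - wx)[..., None] * img[y1, x0] + wx[..., None] * img[y1, x1]
    return (1 - wy)[..., None] * top + wy[..., None] * bot

def warp_source_to_novel(src_img, depth_novel, K, T_novel_to_src):
    """Inverse-warp a source image to the novel viewpoint (sketch of Eqs. 6-7)."""
    h, w = depth_novel.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T   # homogeneous pixels p_j
    cam = np.linalg.inv(K) @ pix * depth_novel.reshape(1, -1)              # back-project with D_j
    cam_h = np.concatenate([cam, np.ones((1, cam.shape[1]))], axis=0)
    src_cam = (T_novel_to_src @ cam_h)[:3]                                 # move into the source frame
    proj = K @ src_cam
    u, v = proj[0] / proj[2], proj[1] / proj[2]                            # continuous p_{j->i}
    return bilinear_sample(src_img, u, v).reshape(h, w, 3)
```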
| 108 |
+
|
| 109 |
+
Acceleration. Rendering a full image is computationally heavy and extremely time-consuming, requiring tens of seconds for a single iteration. To overcome the computational bottleneck of full-image rendering and warping, rays are sampled on a strided grid to form a patch with stride $s$, which we set to 2. After the rays undergo volumetric rendering,
|
| 110 |
+
|
| 111 |
+
we upsample the low-resolution depth map back to original resolution with bilinear interpolation. This full-resolution depth map is used for the inverse warping. This way, detailed warped patches of full-resolution can be generated with only a fraction of computational cost that would be required when rendering the original sized ray batch.
|
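A rough sketch of this acceleration, assuming a hypothetical `render_depth_fn` that volume-renders depth only at the requested pixel rows and columns; the stride and the bilinear upsampling follow the description above, and the helper name is an assumption.

```python
import numpy as np

def render_depth_strided(render_depth_fn, H, W, stride=2):
    """Render depth on a strided pixel grid, then bilinearly upsample to (H, W)."""
    low = render_depth_fn(np.arange(0, H, stride), np.arange(0, W, stride))  # (H/s, W/s)
    ys = np.linspace(0, low.shape[0] - 1, H)
    xs = np.linspace(0, low.shape[1] - 1, W)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, low.shape[0] - 1)
    x1 = np.minimum(x0 + 1, low.shape[1] - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = (1 - wx) * low[y0][:, x0] + wx * low[y0][:, x1]
    bot = (1 - wx) * low[y1][:, x0] + wx * low[y1][:, x1]
    return (1 - wy) * top + wy * bot                      # full-resolution depth for warping
```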
| 112 |
+
|
| 113 |
+
# 4.3. Consistency Modeling
|
| 114 |
+
|
| 115 |
+
Given the rendered patch $I_{j}$ at $j$ -th viewpoint and the warped patch $I_{i\rightarrow j}$ with depth $D_{j}$ and viewpoint difference $R_{i\rightarrow j}$ , we define the consistency between the two to encourage additional regularization for globally consistent rendering. One viable option is to naively apply the pixelwise image reconstruction loss $\mathcal{L}_{\mathrm{pix}}$ such that
|
| 116 |
+
|
| 117 |
+
$$
|
| 118 |
+
\mathcal{L}_{\mathrm{pix}} = \left\| I_{i \rightarrow j} - I_{j} \right\|. \tag{8}
|
| 119 |
+
$$
|
| 120 |
+
|
| 121 |
+
However, we observe that this simple strategy is prone to failure on reflective non-Lambertian surfaces, where appearance changes greatly across viewpoints (Zhan et al., 2018). In addition, geometry-related problems, such as self-occlusions and artifacts, prohibit the naive usage of the pixel-wise image reconstruction loss for regularization at unseen viewpoints.
|
| 122 |
+
|
| 123 |
+
Feature-level consistency modeling. To overcome these issues, we propose masked feature-level regularization loss that encourages structural consistency while ignoring view-dependent radiance effects, as illustrated in Figure 2.
|
| 124 |
+
|
| 125 |
+
Given an image $I$ as an input, we use a convolutional network to extract multi-level feature maps such that $f_{\phi ,l}(I)\in \mathbb{R}^{H_l\times W_l\times C_l}$, with channel depth $C_l$ for the $l$-th layer. To measure feature-level consistency between the warped image $I_{i\rightarrow j}$ and the rendered image $I_{j}$, we extract their feature maps from $L$ layers and compute the difference within each feature map
|
| 126 |
+
|
| 127 |
+

|
| 128 |
+
(a) GT patch
|
| 129 |
+
|
| 130 |
+

|
| 131 |
+
(b) Rendered patch
|
| 132 |
+
|
| 133 |
+

|
| 134 |
+
(c) Warped patch
|
| 135 |
+
|
| 136 |
+

|
| 137 |
+
(d) Occlusion mask
|
| 138 |
+
|
| 139 |
+

|
| 140 |
+
(e) Masked patch
|
| 141 |
+
|
| 142 |
+
pairs that are extracted from the same layer.
|
| 143 |
+
|
| 144 |
+
In accordance with the idea of using the warped image $I_{i \to j}$ as pseudo ground truths, we allow a gradient backpropagation to pass only through the rendered image and block it for the warped image. By applying the consistency loss at multiple levels of feature maps, we cause $I_{j}$ to model after $I_{i \to j}$ both on semantic and structural level.
|
| 145 |
+
|
| 146 |
+
Formally written, the consistency loss $\mathcal{L}_{\mathrm{cons}}$ is defined such that
|
| 147 |
+
|
| 148 |
+
$$
|
| 149 |
+
\mathcal{L}_{\mathrm{cons}} = \sum_{l = 1}^{L} \frac{1}{C_{l}} \left\| f_{\phi}^{l}\left(I_{i \rightarrow j}\right) - f_{\phi}^{l}\left(I_{j}\right)\right\|. \tag{9}
|
| 150 |
+
$$
|
| 151 |
+
|
| 152 |
+
For this loss function $\mathcal{L}_{\mathrm{cons}}$, we find the $\ell_1$ distance most suited for our task and utilize it to measure consistency across feature difference maps. Empirically, we have discovered that the VGG-19 network (Simonyan & Zisserman, 2014) yields the best performance in modeling consistency, likely due to the absence of normalization layers (Johnson et al., 2016) that scale down the absolute values of feature differences. Therefore, we employ the VGG-19 network as our feature extractor $f_{\phi}$ throughout all of our models.
|
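To make Eq. (9) concrete, here is a minimal PyTorch-style sketch of the multi-level feature consistency loss. A tiny convolutional stack stands in for the VGG-19 feature extractor, and the `no_grad` block mirrors treating the warped patch as a pseudo ground truth; the network and all names are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class TinyFeatures(nn.Module):
    """Stand-in multi-level feature extractor (the paper uses VGG-19 layers instead)."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU()),
        ])

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        return feats

def consistency_loss(extractor, warped, rendered):
    """L1 difference between feature maps of the warped (pseudo-GT) and rendered patches."""
    with torch.no_grad():                        # block gradients through the warped image
        target_feats = extractor(warped)
    pred_feats = extractor(rendered)
    loss = 0.0
    for f_t, f_p in zip(target_feats, pred_feats):
        loss = loss + (f_t - f_p).abs().mean()   # per-layer normalization folded into the mean
    return loss

extractor = TinyFeatures()
warped = torch.rand(1, 3, 64, 64)
rendered = torch.rand(1, 3, 64, 64, requires_grad=True)
print(consistency_loss(extractor, warped, rendered))
```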
| 153 |
+
|
| 154 |
+
It should be noted that our loss function differs from that of DietNeRF (Jain et al., 2021) in that, while DietNeRF's consistency loss is limited to regularizing the radiance field at a globally semantic level, our loss combined with the warping module is also able to give the network highly rich information at a local, structural level. In other words, contrary to DietNeRF giving only high-level feature consistency, our method of using multiple levels of a convolutional network for feature difference calculation can be interpreted as enforcing a mixture of all levels, from high-level semantic consistency to low-level structural consistency.
|
| 155 |
+
|
| 156 |
+
Occlusion handling. In order to prevent imperfect and distorted warpings caused by erroneous geometry from influencing the model, which degrades overall reconstruction quality, we construct consistency mask $M_{l}$ to let NeRF ignore regions with geometric inconsistencies, as demonstrated in Figure 3. Instead of applying masks to the images
|
| 157 |
+
|
| 158 |
+

|
| 159 |
+
Figure 3. Visualization of consistency modeling process. (a) ground truth patch, (b) rendered patch at novel viewpoint, (c) warped patch, from input viewpoint to novel viewpoint, (d) occlusion mask with threshold masking, and (e) final warped patch with occlusion masking at novel viewpoint.
|
| 160 |
+
Figure 4. Occlusion-aware mask generation. Mask generation by comparing geometry between novel view $j$ and source view $i$ , with $I_{i\rightarrow j}$ being warped patch generated for view $j$ . For (a) and (b), warping does not occur correctly due to artifacts and self-occlusion, respectively. Such pixels are masked out by $M_l$ , allowing only (c), with accurate warping, as training signal for rendered image $I_j$ .
|
| 161 |
+
|
| 162 |
+
before inputting them into the feature extractor network, we apply resized masks $M_{l}$ directly to the feature maps, after using nearest-neighbor down-sampling to make them match the dimensions of $l$ -th layer outputs.
|
| 163 |
+
|
| 164 |
+
We generate $M$ by measuring consistency between rendered depth values from the target viewpoint and source viewpoint such that
|
| 165 |
+
|
| 166 |
+
$$
|
| 167 |
+
M\left(p_{j}\right) = \left[\left\| D_{j}\left(p_{j}\right) - D_{i}\left(p_{j \rightarrow i}\right)\right\| < \tau \right]. \tag{10}
|
| 168 |
+
$$
|
| 169 |
+
|
| 170 |
+
where $[\cdot]$ is the Iverson bracket, and $p_{j\rightarrow i}$ refers to the corresponding pixel in source viewpoint $i$ for the reprojected target pixel $p_j$ of the $j$-th viewpoint. Here we measure the Euclidean distance between depth points rendered from the target and source viewpoints as a criterion for threshold masking. As illustrated in Figure 4, if the distance between the two points is greater than a given threshold value $\tau$, we determine that the two rays render depths of separate surfaces and mask out the corresponding pixel in viewpoint $I_{j}$. The process takes place over every pixel in viewpoint $I_{j}$ to generate a mask $M$ of the same size as the rendered pixels. Through this technique, we filter
|
| 171 |
+
|
| 172 |
+

|
| 173 |
+
Figure 5. Qualitative comparison on NeRF-Synthetic (Mildenhall et al., 2020) show that in 3-view setting, our method captures fine details more robustly (such as the wire in the mic scene) and produces less artifacts (background in the materials scene) compared to previous methods. We show GeCoNeRF's results (e) with its rendered depth (f).
|
| 174 |
+
|
| 175 |
+
out problematic solutions at the feature level and regularize NeRF with only high-confidence image features.
|
| 176 |
+
|
| 177 |
+
Based on this, the consistency loss $\mathcal{L}_{\mathrm{cons}}$ is extended such that
|
| 178 |
+
|
| 179 |
+
$$
|
| 180 |
+
\mathcal{L}_{\mathrm{cons}}^{M} = \sum_{l = 1}^{L} \frac{1}{C_{l} m_{l}} \left\| M_{l} \odot \left(f_{\phi}^{l}\left(I_{i \rightarrow j}\right) - f_{\phi}^{l}\left(I_{j}\right)\right) \right\|, \tag{11}
|
| 181 |
+
$$
|
| 182 |
+
|
| 183 |
+
where $m_l$ is the sum of non-zero values.
|
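Below is one plausible NumPy reading of the mask in Eq. (10): back-project each novel-view pixel with $D_j$, express it in the source camera, and keep the pixel only if its depth agrees with the source-view depth rendered at $p_{j\rightarrow i}$ within $\tau$. The exact distance being thresholded (3D point distance versus depth difference) and all helper names are assumptions for illustration.

```python
import numpy as np

def occlusion_mask(depth_novel, depth_src, K, T_novel_to_src, tau=0.1):
    """Binary mask in the spirit of Eq. (10): 1 where reprojected depths agree within tau."""
    h, w = depth_novel.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3).T
    cam = np.linalg.inv(K) @ pix * depth_novel.reshape(1, -1)          # back-project with D_j
    src = (T_novel_to_src @ np.vstack([cam, np.ones((1, cam.shape[1]))]))[:3]
    proj = K @ src
    u = np.clip(np.round(proj[0] / proj[2]).astype(int), 0, w - 1)     # nearest source pixel p_{j->i}
    v = np.clip(np.round(proj[1] / proj[2]).astype(int), 0, h - 1)
    # compare the reprojected point's depth in the source frame with D_i at p_{j->i}
    mask = np.abs(src[2] - depth_src[v, u]) < tau
    return mask.reshape(h, w).astype(np.float32)
```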
| 184 |
+
|
| 185 |
+
Edge-aware disparity regularization. Since our method depends on the quality of the depth rendered by NeRF, we directly impose additional regularization on the rendered depth to facilitate optimization. We further encourage local depth smoothness on rendered scenes by imposing an $\ell_1$ penalty on disparity gradients within randomly sampled patches of input views. In addition, inspired by (Godard et al., 2017), we take into account the fact that depth discontinuities are likely to be aligned with gradients of the corresponding color image, and introduce an edge-aware term with image gradients $\partial I$ to weight the disparity values. Specifically, following (Godard et al., 2017), we regularize for edge-aware depth smoothness such that
|
| 186 |
+
|
| 187 |
+
$$
|
| 188 |
+
\mathcal{L}_{\mathrm{reg}} = \left| \partial_{x} D_{i}^{*} \right| e^{-\left| \partial_{x} I_{i} \right|} + \left| \partial_{y} D_{i}^{*} \right| e^{-\left| \partial_{y} I_{i} \right|}, \tag{12}
|
| 189 |
+
$$
|
| 190 |
+
|
| 191 |
+
where $D_{i}^{*} = D_{i} / \overline{D_{i}}$ is the mean-normalized inverse depth from (Godard et al., 2017) to discourage shrinking of the estimated depth.
|
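A small NumPy sketch of Eq. (12), assuming the regularizer acts on a mean-normalized inverse-depth (disparity) map as in the Monodepth convention; the function and variable names are illustrative.

```python
import numpy as np

def edge_aware_smoothness(depth, image):
    """Edge-aware smoothness of Eq. (12) on a mean-normalized disparity map."""
    disp = 1.0 / np.clip(depth, 1e-6, None)
    disp = disp / (disp.mean() + 1e-8)                        # D* = D / mean(D)
    d_dx = np.abs(disp[:, 1:] - disp[:, :-1])                 # disparity gradients
    d_dy = np.abs(disp[1:, :] - disp[:-1, :])
    i_dx = np.abs(image[:, 1:] - image[:, :-1]).mean(-1)      # image gradients, averaged over RGB
    i_dy = np.abs(image[1:, :] - image[:-1, :]).mean(-1)
    return (d_dx * np.exp(-i_dx)).mean() + (d_dy * np.exp(-i_dy)).mean()
```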
| 192 |
+
|
| 193 |
+
# 4.4. Training Strategy
|
| 194 |
+
|
| 195 |
+
In this section, we present novel training strategies to learn the model with the proposed losses.
|
| 196 |
+
|
| 197 |
+
Total losses. We optimize our model with a combined final loss of original NeRF's pixel-wise reconstruction loss $\mathcal{L}_{\mathrm{obs}}$ and two types of regularization loss, $\mathcal{L}_{\mathrm{cons}}^M$ for unobserved view consistency modeling and $\mathcal{L}_{\mathrm{reg}}$ for disparity regularization.
|
| 198 |
+
|
| 199 |
+
Progressive camera pose generation. The difficulty of accurate warping increases the further the target view is from the source view, which means that sampling far camera poses straight from the beginning of training may have negative effects on our model. Therefore, we first generate camera poses near source views, then progressively further as training proceeds. We sample a noise value uniformly within an interval of $[- \beta, + \beta]$ and add it to the original Euler rotation angles of the input view poses, with the parameter $\beta$ growing linearly from 3 to 9 degrees throughout the course of optimization. This design choice can be intuitively understood as stabilizing locations near observed viewpoints at the start and propagating this regularization to farther locations, where warping becomes progressively more difficult.
|
| 200 |
+
|
| 201 |
+
Positional encoding frequency annealing. We find that most of the artifacts that occur are high-frequency occlusions that fill the space between the scene and the camera. This behaviour can be effectively suppressed by constraining the order of the Fourier positional encoding (Tancik et al., 2020) to low dimensions. For this reason, we adopt the coarse-to-fine frequency annealing strategy previously used by (Park et al., 2021) to regularize our optimization. This strategy forces our network to primarily optimize from coarse, low-frequency details where self-occlusions and fine features are minimized, easing the difficulty of the warping process in the early stages of training. Following (Park et al., 2021), the annealing equation is $\alpha(t) = mt / K$, with $m$ the number of encoding frequencies, $t$ the iteration step, and the hyper-parameter $K$ set to $15k$.
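The annealing schedule can be expressed as a per-frequency weight. The sketch below assumes the usual cosine-ramp weighting of Park et al. (2021), with `num_freqs` playing the role of $m$ and `k` the role of $K$; it is an illustration of the schedule, not the authors' code.

```python
import numpy as np

def frequency_weights(step, num_freqs, k=15_000):
    """Coarse-to-fine weights for positional encoding frequencies (sketch).

    alpha(t) = m * t / K grows linearly; frequency j ramps in smoothly and is
    fully enabled once alpha exceeds j + 1, following Park et al. (2021).
    """
    alpha = num_freqs * step / k
    j = np.arange(num_freqs)
    return (1.0 - np.cos(np.pi * np.clip(alpha - j, 0.0, 1.0))) / 2.0
```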
|
| 202 |
+
|
| 203 |
+
# 5. Experiments
|
| 204 |
+
|
| 205 |
+
# 5.1. Experimental Settings
|
| 206 |
+
|
| 207 |
+
Baselines. We use mip-NeRF (Barron et al., 2021) as our backbone. We compare our method with the baseline and several state-of-the-art few-shot NeRF models: InfoN-
|
| 208 |
+
|
| 209 |
+
Table 1. Quantitative comparison on NeRF-Synthetic (Mildenhall et al., 2020) and LLFF (Mildenhall et al., 2019) datasets.
|
| 210 |
+
|
| 211 |
+
<table><tr><td rowspan="2">Methods</td><td colspan="4">NeRF-Synthetic (Mildenhall et al., 2020)</td><td colspan="4">LLFF (Mildenhall et al., 2019)</td></tr><tr><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td><td>Avg. ↓</td><td>PSNR ↑</td><td>SSIM ↑</td><td>LPIPS ↓</td><td>Avg. ↓</td></tr><tr><td>NeRF (Mildenhall et al., 2020)</td><td>14.73</td><td>0.734</td><td>0.451</td><td>0.199</td><td>13.34</td><td>0.373</td><td>0.451</td><td>0.255</td></tr><tr><td>mip-NeRF (Barron et al., 2021)</td><td>17.71</td><td>0.798</td><td>0.745</td><td>0.178</td><td>14.62</td><td>0.351</td><td>0.495</td><td>0.246</td></tr><tr><td>DietNeRF (Jain et al., 2021)</td><td>16.06</td><td>0.793</td><td>0.306</td><td>0.151</td><td>14.94</td><td>0.370</td><td>0.496</td><td>0.232</td></tr><tr><td>InfoNeRF (Kim et al., 2022)</td><td>18.65</td><td>0.811</td><td>0.230</td><td>0.111</td><td>14.37</td><td>0.349</td><td>0.457</td><td>0.238</td></tr><tr><td>RegNeRF (Niemeyer et al., 2022)</td><td>18.01</td><td>0.842</td><td>0.352</td><td>0.132</td><td>19.08</td><td>0.587</td><td>0.336</td><td>0.146</td></tr><tr><td>GeCoNeRF (Ours)</td><td>19.23</td><td>0.866</td><td>0.201</td><td>0.096</td><td>18.77</td><td>0.596</td><td>0.338</td><td>0.145</td></tr></table>
|
| 212 |
+
|
| 213 |
+

|
| 214 |
+
|
| 215 |
+

|
| 216 |
+
(a) Ground-truth
|
| 217 |
+
|
| 218 |
+

|
| 219 |
+
|
| 220 |
+

|
| 221 |
+
(b) mip-NeRF
|
| 222 |
+
|
| 223 |
+

|
| 224 |
+
|
| 225 |
+

|
| 226 |
+
(c) mip-NeRF (D)
|
| 227 |
+
Figure 6. Qualitative results on LLFF (Mildenhall et al., 2019). Comparison with baseline mip-NeRF shows that our model learns of coherent depth and geometry in extremely sparse 3-view setting.
|
| 228 |
+
|
| 229 |
+

|
| 230 |
+
|
| 231 |
+

|
| 232 |
+
(d) GeCoNeRF
|
| 233 |
+
|
| 234 |
+

|
| 235 |
+
|
| 236 |
+

|
| 237 |
+
(e) GeCoNeRF (D)
|
| 238 |
+
|
| 239 |
+
eRF (Kim et al., 2022), DietNeRF (Jain et al., 2021), and RegNeRF (Niemeyer et al., 2022). We provide implementation details in the appendix.
|
| 240 |
+
|
| 241 |
+
Datasets and metrics. We evaluate our model on NeRF-Synthetic (Mildenhall et al., 2020) and LLFF (Mildenhall et al., 2019). NeRF-Synthetic is a realistically rendered $360^{\circ}$ synthetic dataset comprising 8 scenes. We randomly sample 3 viewpoints out of the 100 training images in each scene and use 200 testing images for evaluation. We also conduct experiments on the LLFF benchmark dataset, which consists of real-life forward-facing scenes. Following RegNeRF (Niemeyer et al., 2022), we apply standard settings by using every 8th image as the test set and selecting 3 reference views evenly from the remaining images. We quantify novel view synthesis quality using PSNR, the Structural Similarity Index Measure (SSIM) (Wang et al., 2004), the LPIPS perceptual metric (Zhang et al., 2018), and the average error metric introduced in (Barron et al., 2021), reporting the mean value of each metric over all scenes in each dataset.
|
| 242 |
+
|
| 243 |
+
# 5.2. Comparisons
|
| 244 |
+
|
| 245 |
+
Qualitative comparisons. The qualitative comparison results in Figures 5 and 6 demonstrate that our model shows superior performance to the baseline mip-NeRF (Barron et al., 2021) and the previous state-of-the-art model, RegNeRF (Niemeyer et al.,
|
| 246 |
+
|
| 247 |
+
2022), in 3-view settings. We observe that our warping-based consistency enables GeCoNeRF to capture fine details that mip-NeRF and RegNeRF struggle to capture in the same sparse-view scenarios, as demonstrated with the mic scene. Our method also displays higher stability in rendering smooth surfaces and reducing artifacts in the background compared to previous models, as shown in the results of the materials scene. We argue that these results demonstrate how our method, through the generation of warped pseudo ground-truth patches, is able to give the model local, scene-specific regularization that aids the recovery of fine details, which previous few-shot NeRF models with their global, generalized priors were unable to accomplish.
|
| 248 |
+
|
| 249 |
+
Quantitative comparisons. Comparisons in Table 1 show our model's competitive results on the LLFF dataset, where our PSNR shows a large increase over the mip-NeRF baseline and is competitive with RegNeRF. We see that our warping-based consistency modeling successfully prevents overfitting and artifacts, which allows our model to perform better quantitatively.
|
| 250 |
+
|
| 251 |
+
# 5.3. Ablation Study
|
| 252 |
+
|
| 253 |
+
We validate our design choices by performing an ablation study on the LLFF (Mildenhall et al., 2019) dataset.
|
| 254 |
+
|
| 255 |
+

|
| 256 |
+
(a) Baseline
|
| 257 |
+
|
| 258 |
+

|
| 259 |
+
(b) (a) + $\mathcal{L}_{\mathrm{cons}}$
|
| 260 |
+
Figure 7. Qualitative ablation. Our qualitative ablation results on Horns scene shows the contribution of each module in performance of our model at 3-view scenario.
|
| 261 |
+
|
| 262 |
+

|
| 263 |
+
(c) $(\mathbf{b}) + M$ (O. mask)
|
| 264 |
+
|
| 265 |
+

|
| 266 |
+
(d) (c) + Progressive
|
| 267 |
+
|
| 268 |
+

|
| 269 |
+
(e) (d) + $\mathcal{L}_{\mathrm{reg}}$ (Ours)
|
| 270 |
+
|
| 271 |
+
Table 2. Ablation study.
|
| 272 |
+
|
| 273 |
+
<table><tr><td>Components</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td><td>Avg.↓</td></tr><tr><td>(a) Baseline</td><td>14.62</td><td>0.351</td><td>0.495</td><td>0.246</td></tr><tr><td>(b) (a) + Lcons</td><td>18.10</td><td>0.529</td><td>0.408</td><td>0.164</td></tr><tr><td>(c) (b) + M (O. mask)</td><td>18.24</td><td>0.535</td><td>0.379</td><td>0.159</td></tr><tr><td>(d) (c) + Progressive</td><td>18.46</td><td>0.552</td><td>0.349</td><td>0.151</td></tr><tr><td>(e) (d) + Lreg (Ours)</td><td>18.55</td><td>0.592</td><td>0.340</td><td>0.150</td></tr></table>
|
| 274 |
+
|
| 275 |
+
Table 3. Progressive training ablation.
|
| 276 |
+
|
| 277 |
+
<table><tr><td>Components</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td><td>Avg. ↓</td></tr><tr><td>w/o prog. anneal</td><td>18.50</td><td>0.852</td><td>0.781</td><td>0.161</td></tr><tr><td>w/o prog. pose</td><td>16.96</td><td>0.799</td><td>0.811</td><td>0.194</td></tr><tr><td>w/o both</td><td>17.04</td><td>0.788</td><td>0.823</td><td>0.197</td></tr><tr><td>GeCoNeRF (Ours)</td><td>19.23</td><td>0.866</td><td>0.723</td><td>0.148</td></tr></table>
|
| 278 |
+
|
| 279 |
+
Feature-level consistency loss. We observe that without the consistency loss $\mathcal{L}_{\mathrm{cons}}$, our model suffers both quantitative and qualitative decreases in reconstruction fidelity, as verified by the incoherent geometry in image (a) of Figure 7. The absence of unseen view consistency modeling destabilizes the model, resulting in divergent behaviours.
|
| 280 |
+
|
| 281 |
+
Occlusion mask. We observe that the addition of the occlusion mask $M$ improves overall appearance as well as geometry, as shown in image (c) of Figure 7. Its absence results in broken geometry throughout the scene, as demonstrated in (b). Erroneous artifacts pertaining to projections from different viewpoints were detected in multiple scenes, resulting in lower quantitative values.
|
| 282 |
+
|
| 283 |
+
Progressive training strategies. In Table 3, we justify our progressive training strategies with additional experiments on the NeRF-Synthetic dataset, whereas the main ablation only considers progressive annealing. For pose generation, we sample the pose angle from the large interval from the beginning of training, instead of slowly growing the interval. For positional encoding, we replace progressive annealing with the naive positional encoding used in NeRF. We observe that their absence causes destabilization of the model and degradation in appearance, respectively.
|
| 284 |
+
|
| 285 |
+
Edge-aware disparity regularization. We observe that the inclusion of the edge-aware disparity regularization $\mathcal{L}_{\mathrm{reg}}$ refines the geometry, as shown in image (e) of Figure 7. By applying $\mathcal{L}_{\mathrm{reg}}$, we see increased smoothness in geometry
|
| 286 |
+
|
| 287 |
+
Table 4. Pixel-level consistency ablation.
|
| 288 |
+
|
| 289 |
+
<table><tr><td>Components</td><td>PSNR↑</td><td>SSIM↑</td><td>LPIPS↓</td><td>Avg.↓</td></tr><tr><td>w/ Lpix</td><td>17.98</td><td>0.528</td><td>0.431</td><td>0.165</td></tr><tr><td>w/ Lcons (Ours)</td><td>18.55</td><td>0.592</td><td>0.340</td><td>0.150</td></tr></table>
|
| 290 |
+
|
| 291 |
+

|
| 292 |
+
(a) Pixel-level
|
| 293 |
+
Figure 8. $\mathcal{L}_{\mathrm{pix}}^M$ vs. $\mathcal{L}_{\mathrm{cons}}^M$ comparison.
|
| 294 |
+
|
| 295 |
+

|
| 296 |
+
(b) Feature-level
|
| 297 |
+
|
| 298 |
+
throughout the scene. This loss contributes to the removal of erroneous artifacts, achieving better results both qualitatively and quantitatively, as shown in Table 2.
|
| 299 |
+
|
| 300 |
+
Feature-level loss vs. pixel-level loss. In Table 4, we conduct a quantitative ablation comparison between the feature-level consistency loss $\mathcal{L}_{\mathrm{cons}}^{M}$ and the pixel-level photometric consistency loss $\mathcal{L}_{\mathrm{pix}}^{M}$, both with occlusion masking. As shown in Figure 8, naively applying a pixel-level loss for consistency modeling leads to broken geometry. This phenomenon can be attributed to $\mathcal{L}_{\mathrm{pix}}$ being agnostic to view-dependent specular effects, which the network tries to model by altering or erasing non-Lambertian surfaces altogether.
|
| 301 |
+
|
| 302 |
+
# 6. Conclusion
|
| 303 |
+
|
| 304 |
+
We present GeCoNeRF, a novel approach for optimizing Neural Radiance Fields (NeRF) for few-shot novel view synthesis. Inspired by self-supervised monocular depth estimation methods, we regularize geometric consistency by enforcing semantic consistency between rendered and warped images. This approach addresses the limitation of NeRF with sparse inputs, which otherwise suffers from depth ambiguity and numerous artifacts. With the feature consistency loss, we are able to regularize NeRF at unobserved viewpoints and give it a beneficial geometric constraint. The further techniques and training strategies we propose prove to have a stabilizing effect and facilitate the optimization of our
|
| 305 |
+
|
| 306 |
+
network. Our experimental evaluation demonstrates our method's competitive results compared to other state-of-the-art baselines.
|
| 307 |
+
|
| 308 |
+
# References
|
| 309 |
+
|
| 310 |
+
Attal, B., Laidlaw, E., Gokaslan, A., Kim, C., Richardt, C., Tompkin, J., and O'Toole, M. Törf: Time-of-flight radiance fields for dynamic scene view synthesis. Advances in neural information processing systems, 34, 2021.
|
| 311 |
+
Barron, J. T., Mildenhall, B., Tancik, M., Hedman, P., Martin-Brualla, R., and Srinivasan, P. P. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.
|
| 312 |
+
Bortolon, M., Del Bue, A., and Poiesi, F. Data augmentation for NeRF: a geometric consistent solution based on view morphing, 2022. URL https://arxiv.org/abs/2210.04214.
|
| 313 |
+
Chen, A., Xu, Z., Zhao, F., Zhang, X., Xiang, F., Yu, J., and Su, H. Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14124-14133, 2021.
|
| 314 |
+
Chen, Z., Wang, C., Guo, Y., and Zhang, S.-H. Structnerf: Neural radiance fields for indoor scenes with structural hints. ArXiv, abs/2209.05277, 2022.
|
| 315 |
+
Chibane, J., Bansal, A., Lazova, V., and Pons-Moll, G. Stereo radiance fields (srf): Learning view synthesis for sparse views of novel scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7911-7920, 2021.
|
| 316 |
+
Darmon, F., Bascle, B., Devaux, J., Monasse, P., and Aubry, M. Improving neural implicit surfaces geometry with patch warping. 2022.
|
| 317 |
+
Deng, K., Liu, A., Zhu, J.-Y., and Ramanan, D. Depth-supervised NeRF: Fewer views and faster training for free. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022.
|
| 318 |
+
Deng, Y., Yang, J., Xiang, J., and Tong, X. Gram: Generative radiance manifolds for 3d-aware image generation. arXiv preprint arXiv:2112.08867, 2021.
|
| 319 |
+
Fu, Q., Xu, Q., Ong, Y.-S., and Tao, W. Geo-neus: Geometry-consistent neural implicit surfaces learning for multi-view reconstruction, 2022. URL https://arxiv.org/abs/2205.15848.
|
| 320 |
+
|
| 321 |
+
Garg, R., Bg, V. K., Carneiro, G., and Reid, I. Unsupervised cnn for single view depth estimation: Geometry to the rescue. In European conference on computer vision, pp. 740-756. Springer, 2016.
|
| 322 |
+
Godard, C., Mac Aodha, O., and Brostow, G. J. Unsupervised monocular depth estimation with left-right consistency. In CVPR, 2017.
|
| 323 |
+
Hedman, P., Srinivasan, P. P., Mildenhall, B., Barron, J. T., and Debevec, P. Baking neural radiance fields for real-time view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5875-5884, 2021.
|
| 324 |
+
Huang, B., Yi, H., Huang, C., He, Y., Liu, J., and Liu, X. M3vsnet: Unsupervised multi-metric multi-view stereo network. In 2021 IEEE International Conference on Image Processing (ICIP), pp. 3163-3167, 2021. doi: 10.1109/ICIP42928.2021.9506469.
|
| 325 |
+
Jaderberg, M., Simonyan, K., Zisserman, A., et al. Spatial transformer networks. Advances in neural information processing systems, 28, 2015.
|
| 326 |
+
Jain, A., Tancik, M., and Abbeel, P. Putting nerf on a diet: Semantically consistent few-shot view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5885-5894, 2021.
|
| 327 |
+
Jensen, R., Dahl, A., Vogiatzis, G., Tola, E., and Aanaes, H. Large scale multi-view stereopsis evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 406-413, 2014.
|
| 328 |
+
Jeong, Y., Ahn, S., Choy, C., Anandkumar, A., Cho, M., and Park, J. Self-calibrating neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5846-5854, 2021.
|
| 329 |
+
Johnson, J., Alahi, A., and Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, 2016.
|
| 330 |
+
Khot, T., Agrawal, S., Tulsiani, S., Mertz, C., Lucey, S., and Hebert, M. Learning unsupervised multi-view stereopsis via robust photometric consistency. arXiv preprint arXiv:1905.02706, 2019.
|
| 331 |
+
Kim, M., Seo, S., and Han, B. Infonerf: Ray entropy minimization for few-shot neural volume rendering. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
|
| 332 |
+
Mildenhall, B., Srinivasan, P. P., Ortiz-Cayon, R., Kalantari, N. K., Ramamoorthi, R., Ng, R., and Kar, A. Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Transactions on Graphics (TOG), 2019.
|
| 333 |
+
|
| 334 |
+
Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., and Ng, R. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020.
|
| 335 |
+
Müller, T., Evans, A., Schied, C., and Keller, A. Instant neural graphics primitives with a multiresolution hash encoding. arXiv preprint arXiv:2201.05989, 2022.
|
| 336 |
+
Niemeyer, M. and Geiger, A. Giraffe: Representing scenes as compositional generative neural feature fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11453-11464, 2021.
|
| 337 |
+
Niemeyer, M., Barron, J. T., Mildenhall, B., Sajjadi, M. S. M., Geiger, A., and Radwan, N. Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
|
| 338 |
+
Park, K., Sinha, U., Barron, J. T., Bouaziz, S., Goldman, D. B., Seitz, S. M., and Martin-Brualla, R. Nerfies: Deformable neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5865-5874, 2021.
|
| 339 |
+
Pumarola, A., Corona, E., Pons-Moll, G., and Moreno-Noguer, F. D-nerf: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10318-10327, 2021.
|
| 340 |
+
Reiser, C., Peng, S., Liao, Y., and Geiger, A. Kilonerf: Speeding up neural radiance fields with thousands of tiny mlp's. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14335-14345, 2021.
|
| 341 |
+
Roessle, B., Barron, J. T., Mildenhall, B., Srinivasan, P. P., and Nießner, M. Dense depth priors for neural radiance fields from sparse input views. arXiv preprint arXiv:2112.03288, 2021.
|
| 342 |
+
Schwarz, K., Liao, Y., Niemeyer, M., and Geiger, A. Graf: Generative radiance fields for 3d-aware image synthesis. Advances in Neural Information Processing Systems, 33: 20154-20166, 2020.
|
| 343 |
+
Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. URL http://arxiv.org/abs/1409.1556.
|
| 344 |
+
Tancik, M., Srinivasan, P. P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J. T., and Ng, R. Fourier features let networks learn high frequency functions in low dimensional domains. NeurIPS, 2020.
|
| 345 |
+
|
| 346 |
+
Tretschk, E., Tewari, A., Golyanik, V., Zollhöfer, M., Lassner, C., and Theobalt, C. Non-rigid neural radiance fields: Reconstruction and novel view synthesis of a dynamic scene from monocular video. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 12959-12970, 2021.
|
| 347 |
+
Wang, Z., Bovik, A., Sheikh, H., and Simoncelli, E. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13 (4):600-612, 2004. doi: 10.1109/TIP.2003.819861.
|
| 348 |
+
Xu, X., Pan, X., Lin, D., and Dai, B. Generative occupancy fields for 3d surface-aware image synthesis. Advances in Neural Information Processing Systems, 34, 2021.
|
| 349 |
+
Yu, A., Li, R., Tancik, M., Li, H., Ng, R., and Kanazawa, A. PlenOctrees for real-time rendering of neural radiance fields. In ICCV, 2021a.
|
| 350 |
+
Yu, A., Ye, V., Tancik, M., and Kanazawa, A. pixelnerf: Neural radiance fields from one or few images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4578-4587, 2021b.
|
| 351 |
+
Zhan, H., Garg, R., Weerasekera, C. S., Li, K., Agarwal, H., and Reid, I. Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 340-349, 2018.
|
| 352 |
+
Zhang, J., Zhang, Y., Fu, H., Zhou, X., Cai, B., Huang, J., Jia, R., Zhao, B., and Tang, X. Ray priors through reprojection: Improving neural radiance fields for novel view extrapolation. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 18355-18365, 2022. doi: 10.1109/CVPR52688.2022.01783.
|
| 353 |
+
Zhang, K., Riegler, G., Snavely, N., and Koltun, V. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020.
|
| 354 |
+
Zhang, R., Isola, P., Efros, A. A., Shechtman, E., and Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
|
| 355 |
+
Zhou, T., Brown, M., Snavely, N., and Lowe, D. G. Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1851-1858, 2017.
|
2301.10xxx/2301.10941/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8dfee590029fc9b04ece4ffdb306292b2b6eeae9073a18e05baaa6244857af9a
|
| 3 |
+
size 688449
|
2301.10xxx/2301.10941/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10945/5954f0e5-fced-4027-b87d-404a02fb2c9d_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10945/5954f0e5-fced-4027-b87d-404a02fb2c9d_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10945/5954f0e5-fced-4027-b87d-404a02fb2c9d_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:02e8491c9488f04846aa23e55aee9215216b49b114c78a673da7ef2b22b2cadd
|
| 3 |
+
size 607718
|
2301.10xxx/2301.10945/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10945/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:870d64c9c3c6c5b22d90fcf21786161542b44ad8f030279553368758787fadc0
|
| 3 |
+
size 3128364
|
2301.10xxx/2301.10945/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10964/6e1dc718-2719-4720-8a29-9ffcfae23cc6_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10964/6e1dc718-2719-4720-8a29-9ffcfae23cc6_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10964/6e1dc718-2719-4720-8a29-9ffcfae23cc6_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5f6398c363ca2aa252f006ca30715092517a39782870b9a94951dc5171e01acd
|
| 3 |
+
size 5416663
|
2301.10xxx/2301.10964/full.md
ADDED
|
@@ -0,0 +1,452 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Interaction-level Membership Inference Attack Against Federated Recommender Systems
|
| 2 |
+
|
| 3 |
+
Wei Yuan
|
| 4 |
+
|
| 5 |
+
The University of Queensland
|
| 6 |
+
|
| 7 |
+
Brisbane, Australia
|
| 8 |
+
|
| 9 |
+
w.yuan@uq.edu.au
|
| 10 |
+
|
| 11 |
+
Chaoqun Yang
|
| 12 |
+
|
| 13 |
+
Griffith University
|
| 14 |
+
|
| 15 |
+
Gold Coast, Australia
|
| 16 |
+
|
| 17 |
+
chaoqun.yang@griffith.edu.au
|
| 18 |
+
|
| 19 |
+
Quoc Viet Hung Nguyen
|
| 20 |
+
|
| 21 |
+
Griffith University
|
| 22 |
+
|
| 23 |
+
Gold Coast, Australia
|
| 24 |
+
|
| 25 |
+
henry.nguyen@griffith.edu.au
|
| 26 |
+
|
| 27 |
+
Lizhen Cui
|
| 28 |
+
|
| 29 |
+
Shandong University
|
| 30 |
+
|
| 31 |
+
Jinan, China
|
| 32 |
+
|
| 33 |
+
clz@sdu.edu.cn
|
| 34 |
+
|
| 35 |
+
Tieke He
|
| 36 |
+
|
| 37 |
+
Nanjing University
|
| 38 |
+
|
| 39 |
+
Nanjing, China
|
| 40 |
+
|
| 41 |
+
hetieke@gmail.com
|
| 42 |
+
|
| 43 |
+
Hongzhi Yin*
|
| 44 |
+
|
| 45 |
+
The University of Queensland
|
| 46 |
+
|
| 47 |
+
Brisbane, Australia
|
| 48 |
+
|
| 49 |
+
h.yin1@uq.edu.au
|
| 50 |
+
|
| 51 |
+
# ABSTRACT
|
| 52 |
+
|
| 53 |
+
The marriage of federated learning and recommender system (FedRec) has been widely used to address the growing data privacy concerns in personalized recommendation services. In FedRecs, users' attribute information and behavior data (i.e., user-item interaction data) are kept locally on their personal devices, therefore, it is considered a fairly secure approach to protect user privacy. As a result, the privacy issue of FedRecs is rarely explored. Unfortunately, several recent studies reveal that FedRecs are vulnerable to user attribute inference attacks, highlighting the privacy concerns of FedRecs. In this paper, we further investigate the privacy problem of user behavior data (i.e., user-item interactions) in FedRecs. Specifically, we perform the first systematic study on interaction-level membership inference attacks on FedRecs. An interaction-level membership inference attacker is first designed, and then the classical privacy protection mechanism, Local Differential Privacy (LDP), is adopted to defend against the membership inference attack. Unfortunately, the empirical analysis shows that LDP is not effective against such new attacks unless the recommendation performance is largely compromised. To mitigate the interaction-level membership attack threats, we design a simple yet effective defense method to significantly reduce the attacker's inference accuracy without losing recommendation performance. Extensive experiments are conducted with two widely used FedRecs (Fed-NCF and Fed-LightGCN) on three real-world recommendation datasets (MovieLens-100K, Steam-200K, and Amazon Cell Phone), and the experimental results show the effectiveness of our solutions.
|
| 54 |
+
|
| 55 |
+
# CCS CONCEPTS
|
| 56 |
+
|
| 57 |
+
- Information systems $\rightarrow$ Recommender systems.
|
| 58 |
+
|
| 59 |
+
*Corresponding author.
|
| 60 |
+
|
| 61 |
+
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
|
| 62 |
+
|
| 63 |
+
Conference acronym 'XX, June 03-05, 2018, Woodstock, NY
|
| 64 |
+
|
| 65 |
+
© 2018 Association for Computing Machinery.
|
| 66 |
+
|
| 67 |
+
ACM ISBN 978-1-4503-XXXX-X/18/06...$15.00
|
| 68 |
+
|
| 69 |
+
https://doi.org/XXXXXXXXXXXXXXXXXX
|
| 70 |
+
|
| 71 |
+
# KEYWORDS
|
| 72 |
+
|
| 73 |
+
Recommender System, Federated Learning, Membership Inference Attack and Defense
|
| 74 |
+
|
| 75 |
+
# ACM Reference Format:
|
| 76 |
+
|
| 77 |
+
Wei Yuan, Chaoqun Yang, Quoc Viet Hung Nguyen, Lizhen Cui, Tieke He, and Hongzhi Yin. 2018. Interaction-level Membership Inference Attack Against Federated Recommender Systems. In Proceedings of Make sure to enter the correct conference title from your rights confirmation email (Conference acronym 'XX). ACM, New York, NY, USA, 10 pages. https://doi.org/XXXXXXXXX.XXXXXXX
|
| 78 |
+
|
| 79 |
+
# 1 INTRODUCTION
|
| 80 |
+
|
| 81 |
+
In the age of the information explosion, recommender systems have become an essential means to alleviate information overload [5, 41], and many recommendation techniques have been proposed, including matrix factorization [23], deep learning based methods [10, 11], etc. These traditional recommender systems have already achieved good performance in diverse scenarios [44]. However, most of these traditional recommender systems work in a centralized way, i.e., they require collecting and storing users' historical interaction data to train a powerful recommender model in a central server [15]. As the increasing concerns of user privacy and the relevant privacy protection regulations such as the General Data Protection Regulation (GDPR) [32] in European Union and the California Consumer Privacy Act (CCPA) [6] in the United States, centrally collecting users' personal data is harder and even becomes infeasible in many cases [16].
|
| 82 |
+
|
| 83 |
+
To address the privacy issue, federated learning (FL) [22] has been recently adopted in recommender systems. In federated recommender systems (FedRecs), users can collaboratively train the recommender model but do not need to share their private data with either central servers or other users (clients). Therefore, FedRecs are considered a natural solution to protect users' sensitive information. Generally, FedRecs can be further divided into FedRecs with explicit feedback [16] and FedRecs with implicit feedback, according to their training datasets and optimization objectives. In this paper, we focus on FedRecs with implicit feedback<sup>1</sup>. Since Ammad et al. [1] proposed the first FedRec with collaborative filtering, many studies followed and extended their basic FedRec framework.
|
| 84 |
+
|
| 85 |
+
For example, FedFast [24] aims to accelerate the convergence of FedRec training. Imran et al. [13] and Wang et al. [35] focused on the efficiency of FedRecs.
|
| 86 |
+
|
| 87 |
+
With the remarkable attainment achieved in a short time [39], a few recent studies have started to verify whether FedRecs are "safe" enough. [45] is the first work to analyze the privacy issue of FedRecs. However, it only discussed sensitive attribute information leakage [46] and developed an effective attribute information protection approach. Although [16-19] studied the leakage and protection of user rating information in FedRecs, they all focused on explicit feedback data, which are much different from this work targeting FedRecs with implicit feedback.
|
| 88 |
+
|
| 89 |
+
Inferring a user's interaction data in FedRecs is one type of membership inference attack (MIA). Although MIA has been widely investigated in federated classification tasks [4, 21, 25, 30, 43, 49], their proposed attack and defense approaches cannot apply to FedRecs due to the following major differences between federated recommendation and federated classification. (1) From the perspective of attack objective, MIA in federated classification aims to infer or predict whether a sample has been used in the federated training process and which client has used it for the local training. However, in FedRecs, the associated item set of each client can be easily inferred by simply checking which items' embeddings are updated by the client. Furthermore, knowing such an item set is meaningless in FedRecs, since it consists of both positive and negative samples/items, and only positive samples (i.e., interacted items) can leak user privacy. Hence, the membership inference attack on FedRecs aims to infer the user's interacted items (i.e., positive samples), and we name such MIA as Interaction-level Membership Inference Attack (IMIA). (2) From the attack implementation perspective, MIA in federated classification needs to acquire extra i.i.d. data, which is however infeasible in FedRecs. In addition, the federated recommender architecture is significantly different from the federated classification model architecture. A client in FedRecs can have its private parameters (i.e., user embedding), while all model parameters in the federated classification models are shared.
|
| 90 |
+
|
| 91 |
+
In this paper, we first design a novel IMIA attacker to reveal the risk of leaking user interaction data in FedRecs and then propose an efficient and effective defender. The attack is launched by a central server that is honest but curious. The central server aims to identify a user's interacted items (i.e., positive samples) from its associated items (including both positive and negative samples) by analyzing the user's uploaded parameters without breaking the federated learning protocol. To be specific, given a target client, the attacker iteratively identifies its interacted items by repeating the following procedure. The attacker first randomly assigns ratings (0 or 1) to the client's associated items to construct a shadow training set, based on which a shadow recommender model is trained. Then, the attacker compares the relevance between the client's uploaded item embeddings and the item embeddings in the shadow recommender model to find the correctly guessed items. We implement the IMIA attacker on two representative FedRecs (Fed-NCF [1] and Fed-LightGCN [10]), and evaluate its inference accuracy on three real-world recommendation datasets (MovieLens-100K [7], Steam-200K [2], and Amazon Cell Phone [9]). The experimental results show the high inference accuracy of this new IMIA attacker, highlighting the risk of user interaction data leakage in FedRecs.
|
| 92 |
+
|
| 93 |
+
Recently, to improve the privacy-preserving ability of federated learning, Local Differential Privacy (LDP) has been employed in FedRecs and quickly becomes a gold standard for privacy preservation because of its effectiveness [20, 33, 40]. Therefore, we also evaluate the performance of the IMIA attacker in the above-mentioned FedRecs equipped with LDPs. It is found that LDP is not effective against such new attacks unless the recommendation performance is largely compromised, highlighting the timely demand for a new defense mechanism against the new IMIA.
|
| 94 |
+
|
| 95 |
+
In light of this, we propose a novel defense mechanism - IMIA defender. As there are both public and private parameters in FedRecs and only the public parameters can leak user privacy information, we impose a regularization term in the loss function of FedRecs to restrict the update and learning ability of the public parameters and enforce the private parameters to learn more useful patterns and account more for the recommendation performance. In this way, less sensitive information is transmitted to the server via the shared parameters. As shown in our experiments, our proposed defender can significantly decrease the inference accuracy of the IMIA attacker to the level of random guess with negligible influence on the recommendation performance.
|
| 96 |
+
|
| 97 |
+
In conclusion, the main contributions of this paper are summarized as follows:
|
| 98 |
+
|
| 99 |
+
- To the best of our knowledge, we are the first to perform a comprehensive privacy analysis of federated recommender systems under interaction-level membership inference attack (IMIA). Our study discloses the privacy risk of user interaction data in FedRecs.
|
| 100 |
+
- We find that the commonly used privacy-preserving approach, LDP, cannot effectively defend against the new IMIA attack. Then, we propose a simple yet effective defense mechanism to constrain the update of public parameters, which can significantly degenerate the IMIA attacker's performance to the level of random guesses without hurting the recommendation performance.
|
| 101 |
+
- Extensive experiments are conducted with two widely used federated recommender systems (Fed-NCF and Fed-LightGCN) on three real-world recommendation datasets, showing the effectiveness of our attack and defense approaches.
|
| 102 |
+
|
| 103 |
+
# 2 PRELIMINARIES
|
| 104 |
+
|
| 105 |
+
In this section, we first revisit the fundamental settings of FedRecs, and then formally define interaction-level membership inference attack and defense. Note that the bold lowercase (e.g. a) represents vectors, bold uppercase (e.g. A) denotes matrices, and squiggle uppercase (e.g. $\mathcal{A}$ ) signifies sets.
|
| 106 |
+
|
| 107 |
+
# 2.1 Federated Recommender System
|
| 108 |
+
|
| 109 |
+
Let $\mathcal{U}$ and $\mathcal{V}$ denote the sets of users (clients) and items, respectively. In FedRec, each user/client $u_{i}$ has a local training dataset $\mathcal{D}_i$ , which consists of user-item interactions $(u_i,v_j,r_{ij})$ . $r_{ij} = 1$ means that user $u_{i}$ has interacted with item $v_{j}$ ; otherwise, $r_{ij} = 0$ , that is $v_{j}$ is a negative sample. We use $\mathcal{V}_i^+$ and $\mathcal{V}_i^-$ to denote the interacted item set and negative sample set of user $u_{i}$ . The FedRec is trained to predict $\hat{r}_{ij}$ between $u_{i}$ and non-interacted items. Finally, FedRec
|
| 110 |
+
|
| 111 |
+

|
| 112 |
+
Figure 1: A typical federated recommender system with IMIA attacker and defender.
|
| 113 |
+
|
| 114 |
+
will recommend top- $K$ ranked items with the highest predicted ratings to each user $u_{i}$ .
|
| 115 |
+
|
| 116 |
+
In FedRec, a central server coordinates a large number of clients. The federated training process mainly contains four steps. First, the central server randomly selects a batch of users/clients as participants and dispenses the global parameters to these clients. Second, after receiving the global parameters, each client combines these public parameters with its private parameters to form a local recommendation model and optimizes this model on its local dataset with respect to a certain objective function (e.g., BPRLoss [27]). Third, after local training, each client sends the updated public parameters back to the central server. Finally, the central server aggregates the received public parameters with a certain aggregation strategy (e.g., FedAvg [22]). The above steps form a global training epoch in FedRec and will be repeated many times until the model converges or some pre-defined requirement is met.
|
| 117 |
+
|
| 118 |
+
# 2.2 Interaction-level Membership Inference Attack and Defense
|
| 119 |
+
|
| 120 |
+
Adversary's Goal. In this paper, we assume the central server is honest-but-curious, i.e., the server is curious about user private data, but it will not break FedRec's learning protocol. The goal of the curious server is to infer the set of interacted items on each client $u_{i}$ based on its uploaded public parameters:
|
| 121 |
+
|
| 122 |
+
$$
|
| 123 |
+
\hat{\mathcal{V}}_{i}^{+} \leftarrow \mathrm{IMIA}\left(\mathrm{V}_{i}^{t}\right) \tag{1}
|
| 124 |
+
$$
|
| 125 |
+
|
| 126 |
+
where $\hat{\mathcal{V}}_i^+$ is the inferred set of $u_i$ 's interacted items, and $\mathrm{V}_i^t$ represents public or shared parameters that user $u_i$ sends to the server at epoch $t$ . Without loss of generality, the public parameters mainly refer to item embeddings in this paper. The central curious server aims to accurately infer each client's interacted items, and meanwhile, it does not expect its inference attack to affect FedRec's normal learning process and recommendation performance.
|
| 127 |
+
|
| 128 |
+
Adversary's Knowledge. To be more realistic, we assume that the server has the following prior knowledge: (1) the target user $u_{i}$ 's uploaded public parameters (or gradients), which is consistent with the FedRec protocol; and (2) a few basic learning hyper-parameters, such as learning rate $lr$ and the ratio of negative sampling $\eta$ . In FedRecs, these hyper-parameters are pre-defined by the central server and broadcast to each participant client, therefore, this assumption of prior knowledge is reasonable.
|
| 129 |
+
|
| 130 |
+
Defense. The defense is launched locally by each client to defend against the curious server's inference attack. The client anticipates the defense method can significantly reduce the server's inference accuracy to protect their interaction data without much recommendation performance loss and extra computation footprint.
|
| 131 |
+
|
| 132 |
+
# 3 METHOD
|
| 133 |
+
|
| 134 |
+
In this section, we will first describe the base federated recommenders used in this paper and then present the details of the IMIA attacker and defender. Fig. 1 shows the framework of FedRec with IMIA attack and defense and the whole procedure is also described in Alg. 1.
|
| 135 |
+
|
| 136 |
+
# 3.1 Base Federated Recommender
|
| 137 |
+
|
| 138 |
+
Generally, a federated learning framework can be applied to most deep learning-based recommendation models. Among these recommenders, neural collaborative filtering (NCF) [11] and graph neural network (GNN) [29] are the two most widely used techniques. Hence, we extend an NCF-based centralized model and a LightGCN-based [10] centralized model to Fed-NCF and FedLightGCN respectively, which will be then used as our base FedRecs to show the effectiveness of our attacker and defender.
|
| 139 |
+
|
| 140 |
+
Neural Collaborative Filtering. NCF extends collaborative filtering (CF) by leveraging an $L$ -layer feedforward network (FFN) to capture the complex patterns of user-item interactions as follows:
|
| 141 |
+
|
| 142 |
+
$$
|
| 143 |
+
\hat{r}_{ij} = \sigma\left(\mathrm{h}^{\top} \mathrm{FFN}\left([\mathbf{u}_{i}, \mathbf{v}_{j}]\right)\right) \tag{2}
|
| 144 |
+
$$
|
| 145 |
+
|
| 146 |
+
where $\mathbf{u}_i$ and $\mathbf{v}_j$ are user $u_i$ 's and item $v_j$ 's embedding; $\mathrm{h}$ denotes a learnable weight vector; $[\cdot, \cdot]$ is concatenation operation, and $\hat{r}_{ij}$ is the predicted preference score of user $u_i$ on item $v_j$ .
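A minimal PyTorch-style sketch of the NCF scoring function in Eq. 2 is shown below. Layer sizes, module names, and the choice of activation are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class NCFScorer(nn.Module):
    """Predict r_ij = sigmoid(h^T FFN([u_i, v_j])) as in Eq. 2 (sketch)."""

    def __init__(self, emb_dim=32, hidden=64, num_layers=3):
        super().__init__()
        dims = [2 * emb_dim] + [hidden] * num_layers
        ffn = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            ffn += [nn.Linear(d_in, d_out), nn.ReLU()]
        self.ffn = nn.Sequential(*ffn)
        self.h = nn.Linear(hidden, 1, bias=False)  # learnable weight vector h

    def forward(self, user_emb, item_emb):
        x = torch.cat([user_emb, item_emb], dim=-1)  # [u_i, v_j]
        return torch.sigmoid(self.h(self.ffn(x))).squeeze(-1)
```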
|
| 147 |
+
|
| 148 |
+
LightGCN. In graph-based recommenders, the user-item interactions can be constructed as a bipartite graph. Then, LightGCN treats all users and items as distinct nodes. After that, user and item embeddings are learned by propagating their neighbor nodes' embeddings:
|
| 149 |
+
|
| 150 |
+
$$
|
| 151 |
+
\mathbf{u}_{i}^{l} = \sum_{j \in \mathcal{N}_{u_{i}}} \frac{1}{\sqrt{\left|\mathcal{N}_{u_{i}}\right|} \sqrt{\left|\mathcal{N}_{v_{j}}\right|}} \mathbf{v}_{j}^{l-1}, \quad \mathbf{v}_{j}^{l} = \sum_{i \in \mathcal{N}_{v_{j}}} \frac{1}{\sqrt{\left|\mathcal{N}_{v_{j}}\right|} \sqrt{\left|\mathcal{N}_{u_{i}}\right|}} \mathbf{u}_{i}^{l-1} \tag{3}
|
| 152 |
+
$$
|
| 153 |
+
|
| 154 |
+
where $\mathcal{N}_{u_i}$ and $\mathcal{N}_{v_j}$ denote the sets of $u_{i}$ 's and $v_{j}$ 's neighbors. $l$ is the propagation layer. Note that under the federated learning setting, each user/client can only access its own data, thus they can only perform the above calculation on their local bipartite graphs.
|
| 155 |
+
|
| 156 |
+
After $L$ layers propagation, we aggregate all layers' embedding together as the final user and item embeddings:
|
| 157 |
+
|
| 158 |
+
$$
|
| 159 |
+
\mathbf{u}_{i} = \sum_{l=0}^{L} \mathbf{u}_{i}^{l}, \quad \mathbf{v}_{j} = \sum_{l=0}^{L} \mathbf{v}_{j}^{l} \tag{4}
|
| 160 |
+
$$
|
| 161 |
+
|
| 162 |
+
Then, as done in NCF, Eq. 2 is adopted to compute the predicted preference scores.
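For clarity, the following is a plain-Python sketch of the propagation in Eq. 3 followed by the layer aggregation in Eq. 4, written over explicit neighbor lists of a (local) bipartite interaction graph. All names are illustrative; an actual implementation would typically use sparse matrix operations instead of loops.

```python
import torch

def lightgcn_layers(u0, v0, user_neighbors, item_neighbors, num_layers=2):
    """Sketch of LightGCN propagation (Eq. 3) and layer aggregation (Eq. 4).

    u0: (n_users, d) layer-0 user embeddings; v0: (n_items, d) layer-0 item
    embeddings; user_neighbors[i] / item_neighbors[j] list the items / users
    adjacent to user i / item j in the bipartite interaction graph.
    """
    u_layers, v_layers = [u0], [v0]
    for _ in range(num_layers):
        u_prev, v_prev = u_layers[-1], v_layers[-1]
        u_next = torch.zeros_like(u_prev)
        v_next = torch.zeros_like(v_prev)
        for i, neigh in enumerate(user_neighbors):
            for j in neigh:
                norm = (len(neigh) ** 0.5) * (len(item_neighbors[j]) ** 0.5)
                u_next[i] += v_prev[j] / norm
        for j, neigh in enumerate(item_neighbors):
            for i in neigh:
                norm = (len(neigh) ** 0.5) * (len(user_neighbors[i]) ** 0.5)
                v_next[j] += u_prev[i] / norm
        u_layers.append(u_next)
        v_layers.append(v_next)
    # Eq. 4: sum embeddings from all layers to obtain the final representations.
    return sum(u_layers), sum(v_layers)
```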
|
| 163 |
+
|
| 164 |
+
FedRec Learning Protocol. In FedRec, the parameters can be divided into private and public parameters. Each client initializes its private parameters, i.e., user embedding $\mathbf{u}_i$ , and the public parameters $\mathrm{V}$ are initialized by a central server $s$ . At the beginning of a global training epoch $t$ , the server $s$ randomly selects a group of clients as participants $\mathcal{U}_t$ and sends $\mathrm{V}_t$ to each participant. The
|
| 165 |
+
|
| 166 |
+
participant combines $\mathrm{V}_t$ with its private parameters to form a local recommender and trains the recommender on its local dataset $\mathcal{D}_i$ with the following loss function:
|
| 167 |
+
|
| 168 |
+
$$
|
| 169 |
+
\mathcal{L}^{rec} = -\sum_{(u_{i}, v_{j}, r_{ij}) \in \mathcal{D}_{i}} r_{ij} \log \hat{r}_{ij} + (1 - r_{ij}) \log(1 - \hat{r}_{ij}) \tag{5}
|
| 170 |
+
$$
|
| 171 |
+
|
| 172 |
+
After the local training, the client $u_{i}$ locally updates its private user embedding $\mathbf{u}_{i}$ and uploads the updated public parameters $\mathrm{V}_i^t$ to the central server $s$ . Then, the server utilizes FedAvg [22] to update the global parameters:
|
| 173 |
+
|
| 174 |
+
$$
|
| 175 |
+
\mathrm{V}_{t+1} = \sum_{u_{i} \in \mathcal{U}_{t}} \mathrm{V}_{i}^{t} \tag{6}
|
| 176 |
+
$$
|
| 177 |
+
|
| 178 |
+
The above steps iterate until the system converges or meets certain requirements.
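The server-side aggregation in Eq. 6 amounts to summing the public parameters returned by the sampled clients; a minimal sketch is given below. Note that Eq. 6 is written as a plain sum over participants; replacing the sum with a (weighted) mean gives the standard FedAvg variant, and which one is used in practice is an implementation choice.

```python
import torch

def aggregate_public_params(client_updates):
    """Sketch of the server-side aggregation of uploaded item embeddings (Eq. 6).

    client_updates: list of tensors V_i^t, one per participant client,
    all with the same shape (n_items, d).
    """
    stacked = torch.stack(client_updates, dim=0)
    return stacked.sum(dim=0)
```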
|
| 179 |
+
|
| 180 |
+
Local Differential Privacy. As one of the most popular ways to protect users' sensitive data, LDP has been integrated into many FedRecs [38]. In this paper, we perform the analysis of IMIA attacks on not only the vanilla FedRecs but also FedRecs with the LDP mechanism. Following [37], before uploading public parameters to server $s$ , the client adds some noises to $V_{i}^{t}$ :
|
| 181 |
+
|
| 182 |
+
$$
|
| 183 |
+
\mathrm{V}_{i}^{t} \leftarrow \mathrm{V}_{i}^{t} + \mathcal{N}(0, \lambda^{2} \mathrm{I}) \tag{7}
|
| 184 |
+
$$
|
| 185 |
+
|
| 186 |
+
where $\mathcal{N}$ is the normal distribution and $\lambda$ controls the scale of noise.
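Applying the Gaussian perturbation of Eq. 7 before upload is essentially a one-liner; the sketch below assumes the public parameters are a single tensor and `noise_scale` plays the role of $\lambda$.

```python
import torch

def apply_ldp_noise(public_params, noise_scale):
    """Sketch: perturb public parameters with zero-mean Gaussian noise (Eq. 7)."""
    return public_params + torch.randn_like(public_params) * noise_scale
```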
|
| 187 |
+
|
| 188 |
+
# 3.2 Interaction-level Membership Inference Attacker
|
| 189 |
+
|
| 190 |
+
In this work, the curious-but-honest central server is the IMIA attacker, who attempts to infer target user $u_{i}$ 's interacted item set $\mathcal{V}_i^+$ . Basically, if the server has more prior information, such a membership attack is easier to implement with high accuracy. For example, if the server $s$ can access $u_{i}$ 's private user embedding or a part of $u_{i}$ 's interaction data, it can simply train a shadow recommender to infer its other interacted items. However, these strong prior knowledge assumptions are unrealistic in real-world FedRecs. Therefore, we assume that the malicious server can only access the public parameters $\mathrm{V}_i^t$ uploaded by each client and some training hyper-parameters including the learning rate $lr$ and the negative sampling ratio $\eta$ .
|
| 191 |
+
|
| 192 |
+
Based on the public parameters $\mathbf{V}_i^t$ updated by $u_{i}$ , the server can easily infer which items are involved during the local training according to their embedding updates. That is, for item $v_{j}$ , if its embedding is updated by the client $u_{i}, v_{j}$ participates in $u_{i}$ 's local training. But such simple inference is not useful since $v_{j}$ can also be a negative sample. The malicious server would like to further infer whether $v_{j}$ is positive or not for user $u_{i}$ (i.e., the value of $r_{ij}$ ). Once the $r_{ij}$ is accurately predicted, $u_{i}$ 's private interaction dataset $\mathcal{D}_i$ is exposed to the server. Thus, the membership inference attack problem transforms to predict $r_{ij}$ for item $v_{j}$ in $\mathcal{V}_i$ .
|
| 193 |
+
|
| 194 |
+
Our attacker design is inspired by the following interesting empirical observation. Assume there is a local model $\mathbf{M}_i$ trained on its local dataset $\mathcal{D}_i$ . $\mathbf{M}_i^{\prime}$ is also trained on $\mathcal{D}_i$ but its private parameters (i.e., user embedding) have different initial values. $\mathcal{D}_i^j$ represents a dataset in which $v_{j}$ 's rating $r_{ij}$ is reversed, and all the other ratings are the same as in $\mathcal{D}_i$ . For example, if $r_{ij} = 1$ in $\mathcal{D}_i$ , $r_{ij}$ will be reversed to 0 in $\mathcal{D}_i^j$ . $\mathbf{M}_i^{\prime \prime}$ is trained on $\mathcal{D}_i^j$ with a different private parameter initial point. Before training, these three models' public
|
| 195 |
+
|
| 196 |
+
parameters are the same. After training, we observe that $dist(\mathbf{v}_j,\mathbf{v}_j^{\prime}) < dist(\mathbf{v}_j,\mathbf{v}_j^{\prime \prime})$, where $dist(\cdot)$ denotes a distance function (the Euclidean metric is adopted in our paper), and $\mathbf{v}_j$, $\mathbf{v}_j^{\prime}$, and $\mathbf{v}_j^{\prime \prime}$ are $v_{j}$'s embeddings from models $\mathbf{M}_i$, $\mathbf{M}_i^{\prime}$, and $\mathbf{M}_i^{\prime \prime}$, respectively. It is worth noting that $u_{i}$'s embeddings in $\mathbf{M}_i$, $\mathbf{M}_i^{\prime}$, and $\mathbf{M}_i^{\prime \prime}$ have different initial values. Table 1 provides a proof-of-concept. For each user, we randomly select one item from its local dataset and reverse the item's rating to construct the dataset $\mathcal{D}_i^j$. Once $\mathbf{M}_i$, $\mathbf{M}_i^{\prime}$, and $\mathbf{M}_i^{\prime \prime}$ are trained, we can infer the rating $r_{ij}$ in $\mathcal{D}_i$ based only on the item's rating in $\mathcal{D}_i^j$ and the distances of the item's embeddings in these three models. As shown in Table 1, the inference accuracy is higher than $90\%$ in most cases, showing the effectiveness of this inference attack method. Based on this observation, if all other item ratings in $\mathcal{D}_i$ are known, we can infer $v_{j}$'s rating $r_{ij}$ by training $\mathbf{M}_i^{\prime}$ and $\mathbf{M}_i^{\prime \prime}$ and then comparing their $v_{j}$ item embedding distances with the uploaded parameters $\mathbf{V}_i^t$.
|
| 197 |
+
|
| 198 |
+
Table 1: Accuracy of inferring randomly select items' ratings for all users based on comparing Euclidean distances $dist(\mathrm{v}_j,\mathrm{v}_j')$ and $dist(\mathrm{v}_j,\mathrm{v}_j^{\prime \prime})$ .
|
| 199 |
+
|
| 200 |
+
<table><tr><td>Models</td><td>MovieLens-100K</td><td>Steam-200K</td><td>Amazon</td></tr><tr><td>Fed-NCF</td><td>93.9%</td><td>97.6%</td><td>99.9%</td></tr><tr><td>Fed-LightGCN</td><td>79.7%</td><td>90.5%</td><td>91.15%</td></tr></table>
|
| 201 |
+
|
| 202 |
+
However, the IMIA attacker does not know any item rating in $\mathcal{D}_i$ , so the above method cannot be directly used as the attack approach for FedRecs. To implement IMIA attacks, we relax the requirement and generalize the observation: if most samples are the same on two datasets $\mathcal{D}$ and $\mathcal{D}'$ , and we train two models M and M' on them respectively, the embeddings of counterpart items will be close if their ratings are the same. Based on this assumption, when the server is curious about user $u_i$ 's interaction data at epoch $t$ , the server first randomly assigns ratings (i.e., 0 and 1) for each item in $\mathcal{V}_i$ according to the negative sampling ratio $\eta$ . For example, if $\eta$ is $1:4$ , the server will randomly choose $25\%$ items as positive items and the remaining items as negative ones, thus constructing a fake dataset $\mathcal{D}_i^{fake}$ . Since the negative samples are empirically several times more than the positive items, $\mathcal{D}_i^{fake}$ and $\mathcal{D}_i$ still have a portion of common ratings. Still taking $\eta = 1:4$ as an example, although in the worst case all positive items are wrongly assigned with rating $0$ , $\mathcal{D}_i^{fake}$ and $\mathcal{D}_i$ still have $50\%$ the same item ratings. Then, the server trains a shadow model $\mathrm{M}_i^{fake}$ based on $\mathcal{D}_i^{fake}$ . After that, the malicious server calculates the distance between item embeddings from $\mathrm{M}_i^{fake}$ and the uploaded item embeddings $V_i^t$ , and it chooses $\gamma * |\mathcal{V}_i|$ items with the smallest distance as "correct guess". The ratings of "correct guess" items will be fixed in the next iteration. Repeat the above steps several times until the server finishes inferring the positive item set $\hat{\mathcal{V}}_i^+$ for user $u_i$ . Since the whole inference attack process happens on the malicious server side by using uploaded public parameters, the client is unaware of the IMIA attack. In addition, the malicious server can also store the target user's uploaded parameters and asynchronously execute the inference attack process without interrupting the normal training of
|
| 203 |
+
|
| 204 |
+
FedRecs. Lines 23-32 in Alg. 1 describe the process of the proposed IMIA attack with pseudo-code.
|
| 205 |
+
|
| 206 |
+
Algorithm 1 FedRec with IMIA attacker and defender.
|
| 207 |
+
Input: global epoch $T$ ; local epoch $L$ ; learning rate $lr$ , negative sampling rate $\eta$ , ...;
|
| 208 |
+
Output: global parameter V, local client embedding $\mathsf{u}_i|_{i\in \mathcal{U}}$ ;
|
| 209 |
+
1: Initializing global parameter $\mathrm{V_0}$ ;
|
| 210 |
+
2: for each round $t = 0,1,\ldots,T$ do
|
| 211 |
+
3: sampling a fraction of clients $\mathcal{U}_t$ ;
|
| 212 |
+
4: for $u_i\in \mathcal{U}_t$ do
|
| 213 |
+
5: $\mathrm{V}_i^t\gets \mathrm{CLIENTTRAIN}(u_i,\mathrm{V}_t,L)$ ;
|
| 214 |
+
6: if curious about $u_i$ 's data then
|
| 215 |
+
7: $\hat{\mathcal{V}}_i^+\gets \mathrm{ATTACKER}(\mathrm{V}_i^t,\eta ,\gamma)$ ;
|
| 216 |
+
8: end if
|
| 217 |
+
9: end for
|
| 218 |
+
10: $\mathrm{V}_{t + 1} = \sum_{u_i\in \mathcal{U}_t}\mathrm{V}_i^t;$
|
| 219 |
+
11: end for
|
| 220 |
+
12: function CLIENTTRAIN($u_i, \mathrm{V}_t, L$)
|
| 221 |
+
13: downloading $\mathrm{V}_t$ from the server;
|
| 222 |
+
14: sampling negative items $\mathcal{V}_i^{neg}$ ;
|
| 223 |
+
15: if use IMIA defender then
|
| 224 |
+
16: $\mathrm{u}_i^{t + 1},\mathrm{V}_i^t\gets$ training $L$ epochs with Eq. 8;
|
| 225 |
+
17: else
|
| 226 |
+
18: $\mathrm{u}_i^{t + 1},\mathrm{V}_i^t\gets$ training $L$ epochs with Eq. 5;
|
| 227 |
+
19: end if
|
| 228 |
+
20: if use LDP, add noise with Eq. 7;
|
| 229 |
+
21: return $\mathrm{V}_i^t$
|
| 230 |
+
22: end function
|
| 231 |
+
23: function ATTACKER( $\mathrm{V}_i^t,\eta ,\gamma$ )
|
| 232 |
+
24: $\hat{\mathcal{V}}_i^+ = \{\}$
|
| 233 |
+
25: $\mathcal{V}_i\gets$ select updated items according to $\mathrm{V}_i^t$ and $\mathrm{V}_t$ ;
|
| 234 |
+
26: while $\left|\hat{\mathcal{V}}_i^+\right| < \eta |\mathcal{V}_i|$ do
|
| 235 |
+
27: randomly assign ratings to $v_j\in \mathcal{V}_i\setminus \hat{\mathcal{V}}_i^+$ ;
|
| 236 |
+
28: train fake model $\mathrm{M}_i^{fake}$ on the constructed dataset;
|
| 237 |
+
29: $\hat{\mathcal{V}}_i^+, \hat{\mathcal{V}}_i^- \gets$ select $\gamma *|\mathcal{V}_i|$ items using dist( $\mathrm{V}_i^t,\mathrm{V}_i^{fake}$ );
|
| 238 |
+
30: end while
|
| 239 |
+
31: return $\hat{\mathcal{V}}_i^+$
|
| 240 |
+
32: end function
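For readers who prefer code, a condensed Python rendering of the ATTACKER routine (lines 23-32 of Alg. 1) is sketched below. `train_shadow_model` and `item_embedding_distance` stand in for the shadow-model training and Euclidean-distance steps and are hypothetical helpers, not part of the paper's released code.

```python
import random

def imia_attack(uploaded_item_embs, candidate_items, eta, gamma,
                train_shadow_model, item_embedding_distance):
    """Sketch of the iterative interaction-level membership inference attack.

    uploaded_item_embs: the client's uploaded item embeddings V_i^t (dict item -> vec);
    candidate_items: items whose embeddings were updated by the client;
    eta: target fraction of positive items implied by the negative sampling rate;
    gamma: fraction of items fixed as "correct guesses" per iteration.
    """
    inferred_pos, inferred_neg = set(), set()
    target_pos = int(eta * len(candidate_items))
    while len(inferred_pos) < target_pos:
        undecided = [v for v in candidate_items
                     if v not in inferred_pos and v not in inferred_neg]
        if not undecided:
            break
        # 1. Randomly assign ratings to the still-undecided items.
        fake_ratings = {v: 1 for v in inferred_pos}
        fake_ratings.update({v: 0 for v in inferred_neg})
        guess_pos = set(random.sample(undecided, int(eta * len(undecided))))
        fake_ratings.update({v: int(v in guess_pos) for v in undecided})

        # 2. Train a shadow model on the constructed fake dataset.
        shadow_embs = train_shadow_model(fake_ratings)

        # 3. Fix the gamma*|V_i| undecided items whose shadow embeddings are
        #    closest to the uploaded ones as "correct guesses".
        ranked = sorted(undecided,
                        key=lambda v: item_embedding_distance(
                            uploaded_item_embs[v], shadow_embs[v]))
        for v in ranked[: int(gamma * len(candidate_items))]:
            (inferred_pos if fake_ratings[v] == 1 else inferred_neg).add(v)
    return inferred_pos
```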
|
| 241 |
+
|
| 242 |
+
# 3.3 Interaction-level Membership Inference Defender
|
| 243 |
+
|
| 244 |
+
In Sections 4.5 and 4.6, the experimental results demonstrate that both vanilla FedRecs and FedRecs with LDP are vulnerable to the new IMIA attack, highlighting the need for a new defense mechanism. The experimental results in Tables 3 and 4 show that Fed-LightGCN is more resistant to IMIA. This may be because the private user embeddings in Fed-LightGCN learn more useful information and patterns than in Fed-NCF. Since the private user embeddings in Fed-LightGCN capture more user-item interaction patterns, it is harder for the curious server to infer interactions only from public parameters.
|
| 245 |
+
|
| 246 |
+
To further validate our hypothesis, we compare the deviation of user/item embeddings during training from their initial values using the squared L2 distance (i.e., $\|v_i^t - v_i^0\|_2^2$ ). Fig. 2 illustrates the trend of the average deviation over training time. In Fig. 2, the deviation of item embeddings is much larger than that of user embeddings. In other words, on average, user embeddings do not change as much as item embeddings during the whole training process; therefore, user embeddings learn less information and fewer patterns. Further, by comparing Fig. 2a and Fig. 2b, we can see that user embeddings in Fed-NCF vary much less than in Fed-LightGCN, which supports our hypothesis. Note that for the sake of visualization, we plot the logarithm of the L2 loss in Fig. 2a because of the large gap between user and item embedding deviations.
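The deviation measure used in Fig. 2 can be computed directly from snapshots of the embedding matrices; a minimal sketch:

```python
import numpy as np

def avg_deviation(emb_t: np.ndarray, emb_0: np.ndarray) -> float:
    """Average squared L2 distance of each embedding row from its initial value."""
    return float(np.mean(np.sum((emb_t - emb_0) ** 2, axis=1)))
```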
|
| 247 |
+
|
| 248 |
+

|
| 249 |
+
(a) Deviation trend in Fed-NCF.
|
| 250 |
+
Figure 2: Trend of embedding deviation over time until convergence in Fed-NCF and Fed-LightGCN on MovieLens-100K.
|
| 251 |
+
|
| 252 |
+

|
| 253 |
+
(b) Deviation trend in Fed-LightGCN.
|
| 254 |
+
|
| 255 |
+
Motivated by the above observation, we propose a novel IMIA defender. The basic idea of LDP is to add noise to the shared parameters to distort the sensitive information behind them, which can lead to catastrophic performance drops. Unlike LDP, the key idea of our defender is to restrict the learning ability of the public parameters so that they convey less information to the curious central server. To implement this, we add a constraint term to the original FedRec loss function (Eq. 5), as follows:
|
| 256 |
+
|
| 257 |
+
$$
|
| 258 |
+
\mathcal{L} = \mathcal{L}^{rec} + \mu \left\| \mathrm{V}_{i}^{t} - \mathrm{V}_{t} \right\| \tag{8}
|
| 259 |
+
$$
|
| 260 |
+
|
| 261 |
+
The constraint term limits the update of the public parameters $\mathrm{V}_t$ on each local client/device. Consequently, to optimize $\mathcal{L}^{rec}$ , the recommender model has to force the private embeddings to learn more information and patterns. Fig. 2 shows the embedding deviation trend after applying our defender to Fed-NCF and Fed-LightGCN. The user embedding deviation becomes larger than in vanilla FedRecs, while the item embedding deviation drops significantly. More details of the embedding deviation are in Appendix A.
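As a concrete illustration of Eq. 8, here is a minimal PyTorch-style sketch of one local training step with the defender enabled; the module attribute `item_emb` and the batch layout are assumptions made for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def local_train_step(model, optimizer, global_item_emb, batch, mu=0.4):
    """One local update with the IMIA defender: rec loss + mu * ||V_i^t - V_t||."""
    users, items, labels = batch                          # implicit-feedback mini-batch
    logits = model(users, items)
    rec_loss = F.binary_cross_entropy_with_logits(logits, labels)
    # constraint term of Eq. 8: keep the public item embeddings close to the
    # global ones downloaded at the start of this round
    constraint = torch.norm(model.item_emb.weight - global_item_emb)
    loss = rec_loss + mu * constraint
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```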
|
| 262 |
+
|
| 263 |
+
# 4 EXPERIMENTS
|
| 264 |
+
|
| 265 |
+
# 4.1 Datasets
|
| 266 |
+
|
| 267 |
+
We use three real-world datasets (MovieLens-100K [7], Steam-200K [2], and Amazon Cell Phone [9]) from various domains (movie recommendation, game recommendation, and cell phone recommendation) to evaluate the performance of our IMIA attacker and
|
| 268 |
+
|
| 269 |
+
defender. The statistics of these datasets are shown in Table 2. MovieLens-100K contains 100,000 interactions between 943 users and 1,682 items. There are 3,753 users, 5,134 items, and 114,713 interactions in Steam-200K. Amazon Cell Phone consists of 13,174 users, 5,970 cell-phone-related items, and 103,593 interactions. Note that the densities of these three datasets are different: MovieLens-100K is the densest dataset, while Amazon Cell Phone is the sparsest. Following [47], we binarize the user feedback, where all observed ratings are transformed to $r_{ij} = 1$ and negative instances are sampled with a $1:4$ ratio. Besides, we utilize the leave-one-out method to split the training, validation, and test sets.
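The preprocessing just described (binarizing feedback, 1:4 negative sampling, leave-one-out splitting) can be sketched as follows; the column names `user`, `item`, and `timestamp` are assumed, and the validation split is omitted for brevity.

```python
import numpy as np
import pandas as pd

def preprocess(interactions: pd.DataFrame, n_items: int, neg_per_pos: int = 4, seed: int = 0):
    """Binarize implicit feedback, sample negatives at 1:neg_per_pos, and hold out
    each user's most recent interaction as the test item (leave-one-out)."""
    rng = np.random.default_rng(seed)
    interactions = interactions.sort_values(["user", "timestamp"])
    test = interactions.groupby("user").tail(1)          # last interaction per user
    train = interactions.drop(test.index)
    rows = []
    for user, grp in train.groupby("user"):
        pos = set(grp["item"])
        candidates = np.setdiff1d(np.arange(n_items), list(pos))
        for item in pos:
            rows.append((user, item, 1))                  # observed interaction -> r_ij = 1
            for neg in rng.choice(candidates, size=neg_per_pos, replace=False):
                rows.append((user, int(neg), 0))          # sampled negatives -> r_ij = 0
    return pd.DataFrame(rows, columns=["user", "item", "label"]), test
```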
|
| 270 |
+
|
| 271 |
+
Table 2: Statistics of recommendation datasets
|
| 272 |
+
|
| 273 |
+
<table><tr><td>Dataset</td><td>#users</td><td>#items</td><td>#interactions</td><td>Avg. interactions/user</td><td>Density</td></tr><tr><td>MovieLens-100K</td><td>943</td><td>1,682</td><td>100,000</td><td>106</td><td>6.30%</td></tr><tr><td>Steam-200K</td><td>3,753</td><td>5,134</td><td>114,713</td><td>31</td><td>0.59%</td></tr><tr><td>Amazon</td><td>13,174</td><td>5,970</td><td>103,593</td><td>8</td><td>0.13%</td></tr></table>
|
| 274 |
+
|
| 275 |
+
# 4.2 Evaluation Metrics
|
| 276 |
+
|
| 277 |
+
To measure the effectiveness of IMIA attackers, we employ the widely used classification metric F1 score to evaluate inference performance. To evaluate the recommendation performance, we adopt the widely used hit ratio at rank 10 (Hit@10), which measures the ratio of ground truth items that appear in the top-10 recommendation list.
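For reference, both metrics are straightforward to compute; the sketch below assumes the inferred and true positive items are given as sets, and the recommendation list as a ranked list of item ids.

```python
def f1_score_sets(inferred_pos: set, true_pos: set) -> float:
    """F1 of the attacker's inferred positive set against the true positive set."""
    tp = len(inferred_pos & true_pos)
    if tp == 0:
        return 0.0
    precision = tp / len(inferred_pos)
    recall = tp / len(true_pos)
    return 2 * precision * recall / (precision + recall)

def hit_at_10(ranked_items: list, ground_truth) -> float:
    """1.0 if the held-out ground-truth item appears in the top-10 list, else 0.0."""
    return 1.0 if ground_truth in ranked_items[:10] else 0.0
```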
|
| 278 |
+
|
| 279 |
+
# 4.3 Baselines
|
| 280 |
+
|
| 281 |
+
Since no prior work conducts interaction-level membership inference attacks on FedRecs, we design two baselines.
|
| 282 |
+
|
| 283 |
+
Random Attack. For each client $u_{i}$ , the server randomly selects a group of items from $\mathcal{V}_i$ as the positive items based on the negative sampling ratio $\eta$ . Comparing with Random Attack can reveal whether a privacy issue of user interaction data exists.
|
| 284 |
+
|
| 285 |
+
K-means Attack. Since we do not have any labels for user-item interaction samples, IMIA can naturally be treated as a clustering problem. We adopt the K-means algorithm [8] to divide items into two clusters based on the client's uploaded public parameters $\mathbf{V}_i^t$ . Positive items are chosen from the cluster with the lower SSE (sum of squared errors). The intuition behind K-means Attack is that, for a user, the positive items are more similar to each other than the diverse negative items because personal interests are coherent; therefore, their embeddings will also be more coherent.
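A minimal scikit-learn sketch of this baseline, under the assumption that `V_i_t` holds one uploaded embedding per updated item:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_attack(V_i_t: np.ndarray) -> np.ndarray:
    """Cluster uploaded item embeddings into two groups and guess that the
    tighter cluster (lower within-cluster SSE) contains the positive items."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(V_i_t)
    labels, centers = km.labels_, km.cluster_centers_
    sse = [np.sum((V_i_t[labels == c] - centers[c]) ** 2) for c in (0, 1)]
    positive_cluster = int(np.argmin(sse))
    return np.where(labels == positive_cluster)[0]        # indices guessed as positive
```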
|
| 286 |
+
|
| 287 |
+
# 4.4 Parameter Settings
|
| 288 |
+
|
| 289 |
+
For both Fed-NCF and Fed-LightGCN, the dimension of user and item embeddings is 64, and three neural layers with dimensions 128, 64, and 32 are used to process the concatenated user and item embeddings. The negative sampling ratio $\eta$ is set to $1:4$ , as this ratio balances training effectiveness and efficiency well for most pair-wise loss functions and is widely used. The local training batch size and the number of local epochs are 64 and 20, respectively. The Adam optimizer [14] with a 0.001 learning rate is employed to optimize the local models. To ensure model convergence, the maximum global
|
| 290 |
+
|
| 291 |
+
epoch is set to 200. $\gamma$ is set to $20\%$ . We also perform sensitivity analyses of the key hyper-parameters in the experiments.
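For convenience, the settings above can be collected into a single configuration, e.g.:

```python
# Hyper-parameters from Section 4.4, gathered as a plain config dict.
CONFIG = {
    "embedding_dim": 64,
    "mlp_dims": [128, 64, 32],       # layers applied to the concatenated embeddings
    "neg_sampling_ratio": 4,         # 1:4 negative sampling
    "local_batch_size": 64,
    "local_epochs": 20,
    "optimizer": "adam",
    "learning_rate": 1e-3,
    "max_global_epochs": 200,
    "gamma": 0.2,                    # fraction of items fixed as "correct guesses" per attack iteration
}
```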
|
| 292 |
+
|
| 293 |
+
# 4.5 Performance of IMIA Attackers
|
| 294 |
+
|
| 295 |
+
Table 3 presents the three attackers' performances on two FedRecs and three datasets. The results are average F1 scores that reflect the inference effectiveness of the IMIA attacker. The results in Table 3 highlight that vanilla FedRecs have a high risk of user interaction data leakage, since the performance of our IMIA attacker is much better than that of Random Attack. Besides, comparing K-means and our attacker, we can see that the naive clustering method cannot effectively infer user interaction information. Furthermore, by comparing our IMIA attacker's performance across datasets, we find that FedRecs trained on Steam-200K and Amazon Cell Phone are more vulnerable to IMIA than the ones trained on MovieLens-100K. Given the dataset statistics in Table 2, we believe this phenomenon is related to the number of user interactions, because the average number of user interactions on MovieLens-100K is much higher than that on the other two datasets. To further investigate this phenomenon on MovieLens-100K, we cluster users into 20 groups according to their interaction numbers and report their average F1 scores in Fig. 3. The results show that users with fewer interactions have a higher risk of interaction data leakage. Appendix B analyzes this phenomenon on all datasets.
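The per-group analysis behind Fig. 3 amounts to bucketing users by interaction count and averaging the attack F1 within each bucket; a small pandas sketch (column names assumed):

```python
import pandas as pd

def f1_by_interaction_group(per_user: pd.DataFrame, n_groups: int = 20) -> pd.Series:
    """per_user has one row per user with columns 'n_interactions' and 'f1'."""
    per_user = per_user.copy()
    per_user["group"] = pd.qcut(per_user["n_interactions"], q=n_groups,
                                labels=False, duplicates="drop")
    return per_user.groupby("group")["f1"].mean()
```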
|
| 296 |
+
|
| 297 |
+
Table 3: The performance (F1 scores) of attackers on vanilla FedRecs. ML-100K is short for MovieLens-100K, Amazon is short for Amazon Cell Phone.
|
| 298 |
+
|
| 299 |
+
<table><tr><td>Model</td><td>Attack</td><td>ML-100K</td><td>Steam-200K</td><td>Amazon</td></tr><tr><td></td><td>Random</td><td>0.2079</td><td>0.2019</td><td>0.1998</td></tr><tr><td rowspan="2">Fed-NCF</td><td>K-means</td><td>0.3183</td><td>0.2477</td><td>0.2458</td></tr><tr><td>Ours</td><td>0.5928</td><td>0.6707</td><td>0.6516</td></tr><tr><td rowspan="2">Fed-LightGCN</td><td>K-means</td><td>0.1460</td><td>0.2573</td><td>0.2697</td></tr><tr><td>Ours</td><td>0.3900</td><td>0.6007</td><td>0.4328</td></tr></table>
|
| 300 |
+
|
| 301 |
+
Finally, the comparison of the IMIA attacker's performance on Fed-NCF and Fed-LightGCN shows that Fed-LightGCN is more resistant to IMIA than Fed-NCF. This may be because the private parameters (i.e., user embeddings) in Fed-LightGCN learn more useful information than those in Fed-NCF, since user embeddings in Fed-LightGCN aggregate information from item embeddings via the convolution operation. As a result, inferring user interaction records from the public parameters alone becomes harder. In Appendix A, we further show the embeddings' deviation from their initial values; the results support our explanation. The above observation motivates us to design our effective IMIA defender (see Section 3.3), which attempts to limit the learning ability of the public parameters and enforce the private parameters to learn more patterns.
|
| 302 |
+
|
| 303 |
+
# 4.6 Effectiveness of LDP Against IMIA
|
| 304 |
+
|
| 305 |
+
As the most classical and widely used privacy-preserving approach, LDP can effectively prevent attribute inference attacks on FedRecs [45]. Here, we conduct an experiment to study whether LDP can defend against the new IMIA attack. Table 4 presents the results of LDP with different noise scales against IMIA attacks. $\lambda = 0.0$
|
| 306 |
+
|
| 307 |
+
Table 4: The result of Local Differential Privacy (LDP) against our IMIA attacker. F1 is the attacker's performance, and the lower scores $(\downarrow)$ are better. Hit@10 $(\uparrow)$ measures recommendation performance, and the higher scores are better.
|
| 308 |
+
|
| 309 |
+
<table><tr><td rowspan="3">Model</td><td rowspan="3">Dataset</td><td colspan="8">Noise Scale</td></tr><tr><td colspan="2">λ=0.0</td><td colspan="2">λ=0.001</td><td colspan="2">λ=0.01</td><td colspan="2">λ=0.1</td></tr><tr><td>F1↓</td><td>Hit@10↑</td><td>F1↓</td><td>Hit@10↑</td><td>F1↓</td><td>Hit@10↑</td><td>F1↓</td><td>Hit@10↑</td></tr><tr><td rowspan="3">Fed-NCF</td><td>ML-100K</td><td>0.5928</td><td>0.3690</td><td>0.5474</td><td>0.3308</td><td>0.3954</td><td>0.2958</td><td>0.2520</td><td>0.1696</td></tr><tr><td>Steam-200K</td><td>0.6707</td><td>0.6645</td><td>0.6012</td><td>0.5901</td><td>0.3334</td><td>0.4524</td><td>0.2199</td><td>0.2224</td></tr><tr><td>Amazon</td><td>0.6516</td><td>0.2176</td><td>0.6260</td><td>0.1984</td><td>0.2933</td><td>0.1505</td><td>0.2126</td><td>0.1217</td></tr><tr><td rowspan="3">Fed-LightGCN</td><td>ML-100K</td><td>0.3900</td><td>0.4072</td><td>0.3786</td><td>0.3923</td><td>0.2816</td><td>0.3658</td><td>0.2357</td><td>0.3138</td></tr><tr><td>Steam-200K</td><td>0.6007</td><td>0.6943</td><td>0.5690</td><td>0.6957</td><td>0.3392</td><td>0.6890</td><td>0.2188</td><td>0.5123</td></tr><tr><td>Amazon</td><td>0.4328</td><td>0.1796</td><td>0.3483</td><td>0.1717</td><td>0.2642</td><td>0.1720</td><td>0.2209</td><td>0.1562</td></tr></table>
|
| 310 |
+
|
| 311 |
+
Table 5: The result of our defender against IMIA. The best results on each dataset are bold.
|
| 312 |
+
|
| 313 |
+
<table><tr><td rowspan="3">Model</td><td rowspan="3">Dataset</td><td colspan="10">Constraint Scale</td></tr><tr><td colspan="2">μ=0.0</td><td colspan="2">μ=0.1</td><td colspan="2">μ=0.4</td><td colspan="2">μ=0.7</td><td colspan="2">μ=1.0</td></tr><tr><td>F1↓</td><td>Hit@10↑</td><td>F1↓</td><td>Hit@10↑</td><td>F1↓</td><td>Hit@10↑</td><td>F1↓</td><td>Hit@10↑</td><td>F1↓</td><td>Hit@10↑</td></tr><tr><td rowspan="3">Fed-NCF</td><td>ML-100K</td><td>0.5928</td><td>0.3690</td><td>0.2638</td><td>0.3605</td><td>0.2140</td><td>0.3743</td><td>0.2166</td><td>0.3563</td><td>0.2145</td><td>0.3531</td></tr><tr><td>Steam-200K</td><td>0.6707</td><td>0.6645</td><td>0.3888</td><td>0.6005</td><td>0.2667</td><td>0.6011</td><td>0.2213</td><td>0.5960</td><td>0.2058</td><td>0.5960</td></tr><tr><td>Amazon</td><td>0.6516</td><td>0.2176</td><td>0.4761</td><td>0.2142</td><td>0.3368</td><td>0.2129</td><td>0.3079</td><td>0.2126</td><td>0.3240</td><td>0.2121</td></tr><tr><td rowspan="3">Fed-LightGCN</td><td>ML-100K</td><td>0.3900</td><td>0.4072</td><td>0.2130</td><td>0.4082</td><td>0.1892</td><td>0.3891</td><td>0.1811</td><td>0.3796</td><td>0.1741</td><td>0.3870</td></tr><tr><td>Steam-200K</td><td>0.6007</td><td>0.6943</td><td>0.4730</td><td>0.6584</td><td>0.4620</td><td>0.5830</td><td>0.4205</td><td>0.5582</td><td>0.2246</td><td>0.5472</td></tr><tr><td>Amazon</td><td>0.4328</td><td>0.1796</td><td>0.2281</td><td>0.1920</td><td>0.2847</td><td>0.1821</td><td>0.3231</td><td>0.1704</td><td>0.3308</td><td>0.1615</td></tr></table>
|
| 314 |
+
|
| 315 |
+

|
| 316 |
+
Figure 3: IMIA attacker performance for users with different numbers of interactions on MovieLens-100K.
|
| 317 |
+
|
| 318 |
+
means FedRecs without LDP. The results indicate that with subtle noise (e.g., $\lambda = 0.001$ ), LDP cannot adequately protect user interaction data. Adding more noise (e.g., $\lambda = 0.1$ ) can defend against our IMIA attacker; however, stronger noise severely degrades the recommendation performance of FedRecs.
|
| 319 |
+
|
| 320 |
+
To measure how much recommendation performance LDP needs to sacrifice to effectively defend against the attacker, we calculate $\frac{|\Delta F1|}{|\Delta Hit@10|}$ for the LDP setting that degrades the IMIA attacker's performance to the level of Random Attack. Intuitively, $\frac{|\Delta F1|}{|\Delta Hit@10|}$ measures the
|
| 321 |
+
|
| 322 |
+
Table 6: Comparison of $\frac{|\Delta F1|}{|\Delta Hit@10|}$ for LDP and our defender. Higher scores represent the more cost-effective defense. NCF and LightGCN are short for "Fed-NCF" and "Fed-LightGCN".
|
| 323 |
+
|
| 324 |
+
<table><tr><td>Defense</td><td colspan="2">ML-100K</td><td colspan="2">Steam-200K</td><td colspan="2">Amazon</td></tr><tr><td></td><td>NCF</td><td>LightGCN</td><td>NCF</td><td>LightGCN</td><td>NCF</td><td>LightGCN</td></tr><tr><td>LDP</td><td>1.70</td><td>1.65</td><td>1.01</td><td>2.09</td><td>4.57</td><td>9.05</td></tr><tr><td>ours</td><td>71.47</td><td>10.68</td><td>6.78</td><td>2.55</td><td>68.74</td><td>16.50</td></tr></table>
|
| 325 |
+
|
| 326 |
+
ratio between the change in the attacker's performance and the change in recommendation performance. Lower scores mean the defender has to sacrifice more recommendation performance to reduce the attacker's threat. Table 6 shows that LDP sacrifices too much recommendation performance to alleviate IMIA threats. As a result, LDP is not a cost-effective defense against IMIA.
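The cost-effectiveness score is a simple ratio; for example, plugging in the Fed-NCF / ML-100K numbers for LDP with $\lambda = 0.1$ from Tables 3 and 4 recovers the 1.70 entry in Table 6.

```python
def cost_effectiveness(f1_before, f1_after, hit_before, hit_after, eps=1e-12):
    """|dF1| / |dHit@10|: attack-F1 reduction per unit of recommendation loss."""
    return abs(f1_before - f1_after) / max(abs(hit_before - hit_after), eps)

# Fed-NCF on ML-100K, LDP with lambda = 0.1:
# (0.5928 - 0.2520) / (0.3690 - 0.1696) ~= 1.71, matching the ~1.70 reported in Table 6.
print(round(cost_effectiveness(0.5928, 0.2520, 0.3690, 0.1696), 2))
```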
|
| 327 |
+
|
| 328 |
+
# 4.7 Effectiveness of IMIA Defender
|
| 329 |
+
|
| 330 |
+
Since LDP cannot effectively mitigate IMIA threats, we propose a novel defense mechanism against the IMIA attack. The results of our defender against IMIA are shown in Table 5 where we vary the values of the hyper-parameter $\mu$ from 0.0 to 1.0, and $\mu = 0.0$ represents the vanilla FedRecs. With our defense method, the attacker's performance is reduced to the level of random guesses in all cases. Meanwhile, the recommender's performance is even improved in some cases (e.g., Fed-NCF on ML-100K, Fed-LightGCN on ML-100K, and Amazon Cell Phone) due to the regularization effect of the constraint term in the loss function, which indicates that
|
| 331 |
+
|
| 332 |
+
when restricting the updates of public parameters, the recommendation models can still achieve good recommendation performance by enforcing private parameters to learn more patterns.
|
| 333 |
+
|
| 334 |
+
Table 6 shows the comparison between LDP and our defender. Higher scores mean the defender invalidates the IMIA attacker with less recommendation performance loss. As we can see, our defender is more cost-effective than LDP in all cases. Specifically, our defender's $\frac{|\Delta F1|}{|\Delta Hit@10|}$ scores for Fed-NCF on MovieLens-100K and Amazon Cell Phone are nearly 40 times and 15 times higher than LDP's. In conclusion, our defender provides a more cost-effective solution against IMIA than LDP.
|
| 335 |
+
|
| 336 |
+
# 4.8 Attack with More Prior Knowledge
|
| 337 |
+
|
| 338 |
+
As mentioned in Section 3.2, to make the threat more realistic, we strictly restrict the curious server's prior knowledge to only the uploaded parameters and some hyper-parameters such as the learning rate and sampling ratio. In this section, we explore one kind of prior knowledge that the server may have a chance to access: item popularity information. Although popularity information is not always accessible, it is still available in many scenarios. Here, we assume that the server knows the top $10\%$ most popular items. Based on this popularity information, instead of randomly assigning ratings to items in the initial phase, the server assigns positive ratings to popular items with a higher probability. Fig. 4 shows that with item popularity information, the IMIA attacker's performance improves in most cases.
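The popularity-aware initialization can be sketched as a small change to the attacker's first rating assignment; the boost factor below is an illustrative assumption, as the paper does not specify the exact probabilities.

```python
import numpy as np

def init_ratings_with_popularity(n_items, popular_idx, eta=0.2, boost=3.0, seed=0):
    """Assign initial fake ratings, giving known-popular items a higher chance of 1."""
    rng = np.random.default_rng(seed)
    p = np.full(n_items, eta)                              # baseline positive probability
    p[np.asarray(popular_idx, dtype=int)] = min(1.0, boost * eta)
    return (rng.random(n_items) < p).astype(float)
```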
|
| 339 |
+
|
| 340 |
+

|
| 341 |
+
Figure 4: IMIA with popularity information. NCF and LightGCN are short for "IMIA for Fed-NCF" and "IMIA for Fed-LightGCN". "pop" means popularity information.
|
| 342 |
+
|
| 343 |
+
# 5 RELATED WORK
|
| 344 |
+
|
| 345 |
+
In this section, we mainly introduce related work on attacks against federated learning and attacks against federated recommender systems. Recent progress on recommender systems, federated recommender systems, federated learning, and local differential privacy can be found in [21, 31, 34, 39, 44].
|
| 346 |
+
|
| 347 |
+
# 5.1 Attack against Federated Learning
|
| 348 |
+
|
| 349 |
+
Recently, a variety of attacks have been proposed to assess privacy risks in federated learning (FL) [21, 28]. These attacks include threats such as model inversion [48], attribute inference [3], and membership inference. In this paper, we mainly discuss membership inference attacks. Nasr et al. [25] conducted the first comprehensive study of class-level membership inference attacks in FL under both white-box and black-box settings. Many subsequent works studied more fine-grained membership inference attacks, e.g., [12, 26, 30, 36, 49]. However, existing membership inference attacks cannot be used in FedRecs because of the major differences mentioned in Section 1.
|
| 350 |
+
|
| 351 |
+
# 5.2 Attack against Federated Recommendation
|
| 352 |
+
|
| 353 |
+
Zhang et al. [45] conducted the first analysis of FedRecs' privacy protection; however, their work only reveals attribute-level leakage risks. Some research discussed the user rating privacy issue of FedRecs with explicit feedback [16-19], but the interaction privacy issue of FedRecs with implicit feedback is a different matter. Other attack methods [47] aim to promote/demote an item's rank, which cannot reveal the privacy issues of FedRecs. As a result, the privacy issue of FedRecs is still underexplored. Besides, defense methods for improving the privacy protection of federated recommendation are also underexplored [42].
|
| 354 |
+
|
| 355 |
+
# 6 CONCLUSION
|
| 356 |
+
|
| 357 |
+
In this paper, we perform the first study of interaction-level membership inference attacks (IMIA) in federated recommender systems (FedRecs) to reveal the privacy issue of user-item interactions. We first design an attacker on the curious-but-honest server side. The attacker infers the target user's private interactions from the user's uploaded public parameters by iteratively training shadow models on shadow datasets. We implement the IMIA attack with two commonly used FedRecs on three real-world datasets. The experimental results validate the threat of IMIA to FedRecs. Furthermore, we find that the classical privacy-preserving method, LDP, cannot effectively defend against our attack. In light of this, we propose a novel defender that mitigates IMIA threats with imperceptible influence on the recommendation performance.
|
| 358 |
+
|
| 359 |
+
# ACKNOWLEDGMENTS
|
| 360 |
+
|
| 361 |
+
This work is supported by Australian Research Council Future Fellowship (Grant No. FT210100624), Discovery Project (Grant No. DP190101985), and Discovery Early Career Research Award (Grant No. DE200101465).
|
| 362 |
+
|
| 363 |
+
# REFERENCES
|
| 364 |
+
|
| 365 |
+
[1] Muhammad Ammad-Ud-Din, Elena Ivannikova, Suleiman A Khan, Were Oyomno, Qiang Fu, Kuan Eeik Tan, and Adrian Flanagan. 2019. Federated collaborative filtering for privacy-preserving personalized recommendation system. arXiv preprint arXiv:1901.09888 (2019).
|
| 366 |
+
[2] Germán Cheuque, José Guzmán, and Denis Parra. 2019. Recommender systems for Online video game platforms: The case of STEAM. In Companion Proceedings of The 2019 World Wide Web Conference. 763-771.
|
| 367 |
+
[3] Karan Ganju, Qi Wang, Wei Yang, Carl A Gunter, and Nikita Borisov. 2018. Property inference attacks on fully connected neural networks using permutation invariant representations. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security. 619-633.
|
| 368 |
+
|
| 369 |
+
[4] Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, and Michael Moeller. 2020. Inverting gradients-how easy is it to break privacy in federated learning? Advances in Neural Information Processing Systems 33 (2020), 16937-16947.
|
| 370 |
+
[5] Carlos A Gomez-Uribe and Neil Hunt. 2015. The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems (TMIS) 6, 4 (2015), 1-19.
|
| 371 |
+
[6] Elizabeth Liz Harding, Jarno J Vanto, Reece Clark, L Hannah Ji, and Sara C Ainsworth. 2019. Understanding the scope and impact of the California Consumer Privacy Act of 2018. Journal of Data Protection & Privacy 2, 3 (2019), 234-253.
|
| 372 |
+
[7] F Maxwell Harper and Joseph A Konstan. 2015. The movielens datasets: History and context. Acm transactions on interactive intelligent systems (tiis) 5, 4 (2015), 1-19.
|
| 373 |
+
[8] John A Hartigan and Manchek A Wong. 1979. Algorithm AS 136: A k-means clustering algorithm. Journal of the royal statistical society, series c (applied statistics) 28, 1 (1979), 100-108.
|
| 374 |
+
[9] Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In proceedings of the 25th international conference on world wide web. 507-517.
|
| 375 |
+
[10] Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. 2020. Lightgcn: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval. 639-648.
|
| 376 |
+
[11] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In Proceedings of the 26th international conference on world wide web. 173-182.
|
| 377 |
+
[12] Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, and Xuyun Zhang. 2021. Source inference attacks in federated learning. In 2021 IEEE International Conference on Data Mining (ICDM). IEEE, 1102-1107.
|
| 378 |
+
[13] Mubashir Imran, Hongzhi Yin, Tong Chen, Nguyen Quoc Viet Hung, Alexander Zhou, and Kai Zheng. 2022. ReFRS: Resource-efficient Federated Recommender System for Dynamic and Diversified User Preferences. ACM Transactions on Information Systems (TOIS) (2022).
|
| 379 |
+
[14] Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
|
| 380 |
+
[15] Shyong K Lam, Dan Frankowski, John Riedl, et al. 2006. Do you trust your recommendations? An exploration of security and privacy issues in recommender systems. In International conference on emerging trends in information and communication security. Springer, 14-29.
|
| 381 |
+
[16] Feng Liang, Weike Pan, and Zhong Ming. 2021. Fedrec++: Lossless federated recommendation with explicit feedback. In Proceedings of the AAAI conference on artificial intelligence, Vol. 35. 4224-4231.
|
| 382 |
+
[17] Guanyu Lin, Feng Liang, Weike Pan, and Zhong Ming. 2020. Fedrec: Federated recommendation with explicit feedback. IEEE Intelligent Systems 36, 5 (2020), 21-30.
|
| 383 |
+
[18] Zhaohao Lin, Weike Pan, and Zhong Ming. 2021. FR-FMSS: federated recommendation via fake marks and secret sharing. In Fifteenth ACM Conference on Recommender Systems. 668-673.
|
| 384 |
+
[19] Zhaohao Lin, Weike Pan, Qiang Yang, and Zhong Ming. 2022. A Generic Federated Recommendation Framework via Fake Marks and Secret Sharing. ACM Transactions on Information Systems (TOIS) (2022).
|
| 385 |
+
[20] Zhiwei Liu, Liangwei Yang, Ziwei Fan, Hao Peng, and Philip S Yu. 2022. Federated social recommendation with graph neural network. ACM Transactions on Intelligent Systems and Technology (TIST) 13, 4 (2022), 1-24.
|
| 386 |
+
[21] Lingjuan Lyu, Han Yu, and Qiang Yang. 2020. Threats to federated learning: A survey. arXiv preprint arXiv:2003.02133 (2020).
|
| 387 |
+
[22] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics. PMLR, 1273-1282.
|
| 388 |
+
[23] Andriy Mnih and Russ R Salakhutdinov. 2007. Probabilistic matrix factorization. Advances in neural information processing systems 20 (2007).
|
| 389 |
+
[24] Khalil Muhammad, Qinqin Wang, Diarmuid O'Reilly-Morgan, Elias Tragos, Barry Smyth, Neil Hurley, James Geraci, and Aonghus Lawlor. 2020. Fedfast: Going beyond average for faster training of federated recommender systems. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 1234-1242.
|
| 390 |
+
[25] Milad Nasr, Reza Shokri, and Amir Houmansadr. 2019. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In 2019 IEEE symposium on security and privacy (SP). IEEE, 739-753.
|
| 391 |
+
[26] Quoc Viet Hung Nguyen, Chi Thang Duong, Thanh Tam Nguyen, Matthias Weidlich, Karl Aberer, Hongzhi Yin, and Xiaofang Zhou. 2017. Argument discovery via crowdsourcing. The VLDB Journal 26 (2017), 511-535.
|
| 392 |
+
[27] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2012. BPR: Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618 (2012).
|
| 393 |
+
[28] Nuria Rodríguez-Barroso, Daniel Jiménez López, M Victoria Luzón, Francisco Herrera, and Eugenio Martínez-Cámara. 2022. Survey on federated learning
|
| 394 |
+
|
| 395 |
+
threats: concepts, taxonomy on attacks and defences, experimental study and challenges. Information Fusion (2022).
|
| 396 |
+
[29] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2008. The graph neural network model. IEEE transactions on neural networks 20, 1 (2008), 61-80.
|
| 397 |
+
[30] Anshuman Suri, Pallika Kanani, Virendra J Marathe, and Daniel W Peterson. 2022. Subject Membership Inference Attacks in Federated Learning. arXiv preprint arXiv:2206.03317 (2022).
|
| 398 |
+
[31] Huynh Thanh Trung, Tong Van Vinh, Nguyen Thanh Tam, Hongzhi Yin, Matthias Weidlich, and Nguyen Quoc Viet Hung. 2020. Adaptive network alignment with unsupervised and multi-order convolutional networks. In 2020 IEEE 36th International Conference on Data Engineering (ICDE). IEEE, 85-96.
|
| 399 |
+
[32] Paul Voigt and Axel Von dem Bussche. 2017. The eu general data protection regulation (gdpr). A Practical Guide, 1st Ed., Cham: Springer International Publishing 10, 3152676 (2017), 10-5555.
|
| 400 |
+
[33] Ning Wang, Xiaokui Xiao, Yin Yang, Jun Zhao, Siu Cheung Hui, Hyejin Shin, Junbum Shin, and Ge Yu. 2019. Collecting and analyzing multidimensional data with local differential privacy. In 2019 IEEE 35th International Conference on Data Engineering (ICDE), IEEE, 638-649.
|
| 401 |
+
[34] Qinyong Wang, Hongzhi Yin, Tong Chen, Zi Huang, Hao Wang, Yanchang Zhao, and Nguyen Quoc Viet Hung. 2020. Next point-of-interest recommendation on resource-constrained mobile devices. In Proceedings of the Web conference 2020. 906-916.
|
| 402 |
+
[35] Qinyong Wang, Hongzhi Yin, Tong Chen, Junliang Yu, Alexander Zhou, and Xiangliang Zhang. 2022. Fast-adapting and privacy-preserving federated recommender system. The VLDB Journal 31, 5 (2022), 877-896.
|
| 403 |
+
[36] Zhibo Wang, Mengkai Song, Zhifei Zhang, Yang Song, Qian Wang, and Hairong Qi. 2019. Beyond inferring class representatives: User-level privacy leakage from federated learning. In IEEE INFOCOM 2019-IEEE Conference on Computer Communications. IEEE, 2512-2520.
|
| 404 |
+
[37] Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H Yang, Farhad Farokhi, Shi Jin, Tony QS Quek, and H Vincent Poor. 2020. Federated learning with differential privacy: Algorithms and performance analysis. IEEE Transactions on Information Forensics and Security 15 (2020), 3454-3469.
|
| 405 |
+
[38] Chuhan Wu, Fangzhao Wu, Yongfeng Huang, and Xing Xie. 2021. Personalized news recommendation: A survey. arXiv preprint arXiv:2106.08934 (2021).
|
| 406 |
+
[39] Liu Yang, Ben Tan, Vincent W Zheng, Kai Chen, and Qiang Yang. 2020. Federated recommendation systems. In *Federated Learning*. Springer, 225-239.
|
| 407 |
+
[40] Mengmeng Yang, Lingjuan Lyu, Jun Zhao, Tianqing Zhu, and Kwok-Yan Lam. 2020. Local differential privacy and its applications: A comprehensive survey. arXiv preprint arXiv:2008.03686 (2020).
|
| 408 |
+
[41] Hongzhi Yin, Weiqing Wang, Hao Wang, Ling Chen, and Xiaofang Zhou. 2017. Spatial-aware hierarchical collaborative deep learning for POI recommendation. IEEE Transactions on Knowledge and Data Engineering 29, 11 (2017), 2537-2551.
|
| 409 |
+
[42] Wei Yuan, Hongzhi Yin, Fangzhao Wu, Shijie Zhang, Tieke He, and Hao Wang. 2022. Federated Unlearning for On-Device Recommendation. arXiv preprint arXiv:2210.10958 (2022).
|
| 410 |
+
[43] Jingwen Zhang, Jiale Zhang, Junjun Chen, and Shui Yu. 2020. Gan enhanced membership inference: A passive local attack in federated learning. In ICC 2020-2020 IEEE International Conference on Communications (ICC), IEEE, 1-6.
|
| 411 |
+
[44] Shuai Zhang, Lina Yao, Aixin Sun, and Yi Tay. 2019. Deep learning based recommender system: A survey and new perspectives. ACM Computing Surveys (CSUR) 52, 1 (2019), 1-38.
|
| 412 |
+
[45] Shijie Zhang and Hongzhi Yin. 2022. Comprehensive Privacy Analysis on Federated Recommender System against Attribute Inference Attacks. arXiv preprint arXiv:2205.11857 (2022).
|
| 413 |
+
[46] Shijie Zhang, Hongzhi Yin, Tong Chen, Zi Huang, Lizhen Cui, and Xiangliang Zhang. 2021. Graph embedding for recommendation against attribute inference attacks. In Proceedings of the Web Conference 2021. 3002-3014.
|
| 414 |
+
[47] Shijie Zhang, Hongzhi Yin, Tong Chen, Zi Huang, Quoc Viet Hung Nguyen, and Lizhen Cui. 2022. Pipattack: Poisoning federated recommender systems for manipulating item promotion. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining. 1415-1423.
|
| 415 |
+
[48] Yuheng Zhang, Ruoxi Jia, Hengzhi Pei, Wenxiao Wang, Bo Li, and Dawn Song. 2020. The secret revealer: Generative model-inversion attacks against deep neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 253-261.
|
| 416 |
+
[49] Yanchao Zhao, Jiale Chen, Jiale Zhang, Zilu Yang, Huawei Tu, Hao Han, Kun Zhu, and Bing Chen. 2021. User-Level Membership Inference for Federated Learning in Wireless Network Environment. Wireless Communications and Mobile Computing 2021 (2021).
|
| 417 |
+
|
| 418 |
+
# A DETAILS OF EMBEDDINGS DEVIATION
|
| 419 |
+
|
| 420 |
+
In Section 3.3, we present the trend of the embeddings' deviation on MovieLens-100K. Here, we calculate the deviation of all FedRecs' embeddings from the initial point to the converged point using the L2
|
| 421 |
+
|
| 422 |
+
Table 7: The average deviation (L2 loss) of embeddings from initial point to the converged model point.
|
| 423 |
+
|
| 424 |
+
<table><tr><td rowspan="2"></td><td></td><td colspan="2">ML-100K</td><td colspan="2">Steam-200K</td><td colspan="2">Amazon</td></tr><tr><td>μ</td><td>Fed-NCF</td><td>Fed-LightGCN</td><td>Fed-NCF</td><td>Fed-LightGCN</td><td>Fed-NCF</td><td>Fed-LightGCN</td></tr><tr><td rowspan="3">User Embedding</td><td>0.0</td><td>0.0143</td><td>0.4258</td><td>0.0884</td><td>0.2666</td><td>0.0994</td><td>0.2252</td></tr><tr><td>0.1</td><td>0.0816</td><td>0.5493</td><td>0.0932</td><td>0.3408</td><td>0.0996</td><td>0.2260</td></tr><tr><td>1.0</td><td>0.1580</td><td>0.4481</td><td>0.0935</td><td>0.2935</td><td>0.0994</td><td>0.2031</td></tr><tr><td rowspan="3">Item Embedding</td><td>0.0</td><td>0.6088</td><td>0.6396</td><td>0.3144</td><td>0.2231</td><td>0.0667</td><td>0.0717</td></tr><tr><td>0.1</td><td>0.0030</td><td>0.0653</td><td>0.0042</td><td>0.0303</td><td>0.0060</td><td>0.0313</td></tr><tr><td>1.0</td><td>0.0004</td><td>0.0099</td><td>0.0005</td><td>0.0058</td><td>0.0009</td><td>0.0081</td></tr></table>
|
| 425 |
+
|
| 426 |
+

|
| 427 |
+
(a) F1 on MovieLens-100K with FedNCF.
|
| 428 |
+
|
| 429 |
+

|
| 430 |
+
(b) F1 on MovieLens-100K with FedLightGCN.
|
| 431 |
+
|
| 432 |
+

|
| 433 |
+
(c) F1 on Steam-200K with Fed-NCF.
|
| 434 |
+
|
| 435 |
+

|
| 436 |
+
(d) F1 on Steam-200K with FedLightGCN.
|
| 437 |
+
|
| 438 |
+

|
| 439 |
+
Figure 5: IMIA attacker performance for users with different numbers of interactions.
|
| 440 |
+
Figure 6: Our IMIA attacker's performance with different values of $\gamma$ . 0.1 means selecting the top $10\% * |\mathcal{V}_i|$ items as correct guesses according to distance metrics each iteration.
|
| 441 |
+
|
| 442 |
+
loss. As Table 7 shows, after applying our defense method, the deviation of the item embeddings is restrained, while the user embeddings are forced to update more. As a result, more information is encoded in the private parameters rather than in the public parameters. Besides, across FedRecs, we find that the updates of user embeddings
|
| 443 |
+
|
| 444 |
+
are more significant in Fed-LightGCN than in Fed-NCF. This observation is consistent with our argument that private parameters are used more fully in Fed-LightGCN than in Fed-NCF.
|
| 445 |
+
|
| 446 |
+
# B THE IMPACT OF INTERACTION NUMBER
|
| 447 |
+
|
| 448 |
+
Fig. 5 is an extension of Fig. 3. We cluster users into 20 groups based on their interaction numbers and report their average F1 scores. Since users in Amazon Cell Phone all have few interactions, we only visualize the statistics of MovieLens-100K and Steam-200K. As shown in Fig. 5, users with fewer interactions are prone to leak more interaction information. This phenomenon is more obvious in Fed-LightGCN because, with convolution aggregation, users with more interactions have more complicated private embeddings; therefore, they are harder to attack by relying solely on public parameters. This observation further implies that, to prevent IMIA, we should increase the importance of the private parameters.
|
| 449 |
+
|
| 450 |
+
# B.1 The Impact of $\gamma$
|
| 451 |
+
|
| 452 |
+
The hyper-parameter $\gamma$ denotes the percentage of items whose ratings the attacker is assumed to correctly infer at each iteration. Fig. 6 illustrates the trend of the attacker's performance with different $\gamma$ on all datasets. Generally, with a smaller $\gamma$ , the attacker achieves better performance. For example, when $\gamma = 0.1$ , the attacker achieves an F1 score of nearly 0.8 on Fed-NCF and MovieLens-100K; however, when $\gamma = 0.9$ , the performance drops below 0.3. On the other hand, a smaller $\gamma$ requires more iterations to infer all of the target user's interacted items. A desirable $\gamma$ value should strike a good balance between attack effectiveness and attack efficiency.
|
2301.10xxx/2301.10964/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:209956f05b682c6e2cb7518e21aacdc212776f58deaa87716ad2f5ed65e93ac7
|
| 3 |
+
size 446951
|
2301.10xxx/2301.10964/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2301.10xxx/2301.10972/e2f593df-4e15-4fa3-8e75-388ebabfb5aa_content_list.json
ADDED
|
@@ -0,0 +1,1087 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "On the Importance of Noise Scheduling for Diffusion Models",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
204,
|
| 8 |
+
109,
|
| 9 |
+
803,
|
| 10 |
+
161
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Ting Chen \nGoogle Research, Brain Team \niamtingchen@google.com",
|
| 17 |
+
"bbox": [
|
| 18 |
+
370,
|
| 19 |
+
203,
|
| 20 |
+
624,
|
| 21 |
+
256
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Abstract",
|
| 28 |
+
"text_level": 1,
|
| 29 |
+
"bbox": [
|
| 30 |
+
462,
|
| 31 |
+
287,
|
| 32 |
+
532,
|
| 33 |
+
300
|
| 34 |
+
],
|
| 35 |
+
"page_idx": 0
|
| 36 |
+
},
|
| 37 |
+
{
|
| 38 |
+
"type": "text",
|
| 39 |
+
"text": "We empirically study the effect of noise scheduling strategies for denoising diffusion generative models. There are three findings: (1) the noise scheduling is crucial for the performance, and the optimal one depends on the task (e.g., image sizes), (2) when increasing the image size, the optimal noise scheduling shifts towards a noisier one (due to increased redundancy in pixels), and (3) simply scaling the input data [1] by a factor of $b$ while keeping the noise schedule function fixed (equivalent to shifting the logSNR by $\\log b$ ) is a good strategy across image sizes. This simple recipe, when combined with recently proposed Recurrent Interface Network (RIN) [10], yields state-of-the-art pixel-based diffusion models for high-resolution images on ImageNet, enabling single-stage, end-to-end generation of diverse and high-fidelity images at $1024 \\times 1024$ resolution (without upsampling/cascades).",
|
| 40 |
+
"bbox": [
|
| 41 |
+
151,
|
| 42 |
+
306,
|
| 43 |
+
844,
|
| 44 |
+
434
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "image",
|
| 50 |
+
"img_path": "images/4cd152851b4e0c63518730b1c96a0fb6cc3e3576cbada575cd1a47792761cb9d.jpg",
|
| 51 |
+
"image_caption": [
|
| 52 |
+
"Figure 1: Random samples generated by our single-stage end-to-end model (trained on class-conditional ImageNet images) at high resolutions: $512 \\times 512$ (the first row), $768 \\times 768$ (the second row), $1024 \\times 1024$ (the final row). More samples in Figure 6, 7 and 8."
|
| 53 |
+
],
|
| 54 |
+
"image_footnote": [],
|
| 55 |
+
"bbox": [
|
| 56 |
+
171,
|
| 57 |
+
453,
|
| 58 |
+
823,
|
| 59 |
+
806
|
| 60 |
+
],
|
| 61 |
+
"page_idx": 0
|
| 62 |
+
},
|
| 63 |
+
{
|
| 64 |
+
"type": "aside_text",
|
| 65 |
+
"text": "arXiv:2301.10972v4 [cs.CV] 21 May 2023",
|
| 66 |
+
"bbox": [
|
| 67 |
+
22,
|
| 68 |
+
255,
|
| 69 |
+
60,
|
| 70 |
+
708
|
| 71 |
+
],
|
| 72 |
+
"page_idx": 0
|
| 73 |
+
},
|
| 74 |
+
{
|
| 75 |
+
"type": "page_number",
|
| 76 |
+
"text": "1",
|
| 77 |
+
"bbox": [
|
| 78 |
+
493,
|
| 79 |
+
950,
|
| 80 |
+
503,
|
| 81 |
+
960
|
| 82 |
+
],
|
| 83 |
+
"page_idx": 0
|
| 84 |
+
},
|
| 85 |
+
{
|
| 86 |
+
"type": "text",
|
| 87 |
+
"text": "1 Why is noise scheduling important for diffusion models?",
|
| 88 |
+
"text_level": 1,
|
| 89 |
+
"bbox": [
|
| 90 |
+
112,
|
| 91 |
+
87,
|
| 92 |
+
807,
|
| 93 |
+
108
|
| 94 |
+
],
|
| 95 |
+
"page_idx": 1
|
| 96 |
+
},
|
| 97 |
+
{
|
| 98 |
+
"type": "text",
|
| 99 |
+
"text": "Diffusion models [18, 7, 19, 20, 12, 2] define a noisng process of data by $\\pmb{x}_t = \\sqrt{\\gamma(t)}\\pmb{x}_0 + \\sqrt{1 - \\gamma(t)}\\pmb{\\epsilon}$ where $\\pmb{x}_0$ is an input example (e.g., an image), $\\pmb{\\epsilon}$ is a sample from a isotropic Gaussian distribution, and $t$ is a continuous number between 0 and 1. The training of diffusion models is simple: we first sample $t \\in \\mathcal{U}(0,1)$ to diffuse the input example $\\pmb{x}_0$ to $\\pmb{x}_t$ , and then train a denoising network $f(\\pmb{x}_t)$ to predict either noise $\\pmb{\\epsilon}$ or clean data $\\pmb{x}_0$ . As $t$ is uniformly distributed, the noise schedule $\\gamma(t)$ determines the distribution of noise levels that the neural network is trained on.",
|
| 100 |
+
"bbox": [
|
| 101 |
+
109,
|
| 102 |
+
118,
|
| 103 |
+
883,
|
| 104 |
+
209
|
| 105 |
+
],
|
| 106 |
+
"page_idx": 1
|
| 107 |
+
},
|
| 108 |
+
{
|
| 109 |
+
"type": "text",
|
| 110 |
+
"text": "The importance of noise schedule can be demonstrated by the example in Figure 2. As we increase the image size, the denoising task at the same noise level (i.e. the same $\\gamma$ ) becomes simpler. This is due to the redundancy of information in data (e.g., correlation among nearby pixels) typically increases with the image size. Furthermore, the noises are independently added to each pixels, making it easier to recover the original signal when image size increases. Therefore, the optimal schedule at a smaller resolution may not be optimal at a higher resolution. And if we do not adjust the scheduling accordingly, it may lead to under training of certain noise levels. Similar observations are made in concurrent work [9, 4].",
|
| 111 |
+
"bbox": [
|
| 112 |
+
109,
|
| 113 |
+
210,
|
| 114 |
+
883,
|
| 115 |
+
316
|
| 116 |
+
],
|
| 117 |
+
"page_idx": 1
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"type": "image",
|
| 121 |
+
"img_path": "images/da5eabe97f3a0ccc87c53aaed4250fe031d5437216555bd92c13d3927e900626.jpg",
|
| 122 |
+
"image_caption": [
|
| 123 |
+
"(a) $64\\times 64$"
|
| 124 |
+
],
|
| 125 |
+
"image_footnote": [],
|
| 126 |
+
"bbox": [
|
| 127 |
+
138,
|
| 128 |
+
332,
|
| 129 |
+
272,
|
| 130 |
+
435
|
| 131 |
+
],
|
| 132 |
+
"page_idx": 1
|
| 133 |
+
},
|
| 134 |
+
{
|
| 135 |
+
"type": "image",
|
| 136 |
+
"img_path": "images/6311f4a97b7a4b622d0f77ffc6fb61518ac8ed6f90a88f420d3597b5c447b36e.jpg",
|
| 137 |
+
"image_caption": [
|
| 138 |
+
"(b) $128\\times 128$"
|
| 139 |
+
],
|
| 140 |
+
"image_footnote": [],
|
| 141 |
+
"bbox": [
|
| 142 |
+
285,
|
| 143 |
+
332,
|
| 144 |
+
419,
|
| 145 |
+
435
|
| 146 |
+
],
|
| 147 |
+
"page_idx": 1
|
| 148 |
+
},
|
| 149 |
+
{
|
| 150 |
+
"type": "image",
|
| 151 |
+
"img_path": "images/f786ae5d4390e8714b91a08227e1014f79e1897d48dbc7ca89041e76dc3f2620.jpg",
|
| 152 |
+
"image_caption": [
|
| 153 |
+
"(c) $256 \\times 256$",
|
| 154 |
+
"Figure 2: Noised images $(\\pmb{x}_t = \\sqrt{\\gamma}\\pmb{x}_0 + \\sqrt{1 - \\gamma}\\pmb{\\epsilon})$ with the same noise level $(\\gamma = 0.7)$ . We see that higher resolution natural images tend to exhibit higher degree of redundancy in (nearby) pixels, therefore less information is destroyed with the same level of independent noise."
|
| 155 |
+
],
|
| 156 |
+
"image_footnote": [],
|
| 157 |
+
"bbox": [
|
| 158 |
+
434,
|
| 159 |
+
332,
|
| 160 |
+
568,
|
| 161 |
+
435
|
| 162 |
+
],
|
| 163 |
+
"page_idx": 1
|
| 164 |
+
},
|
| 165 |
+
{
|
| 166 |
+
"type": "image",
|
| 167 |
+
"img_path": "images/a9c81ae41e56b49dfa062f9b059ecf713fc12b93539abbcf86644ab91b01af1b.jpg",
|
| 168 |
+
"image_caption": [
|
| 169 |
+
"(d) $512 \\times 512$"
|
| 170 |
+
],
|
| 171 |
+
"image_footnote": [],
|
| 172 |
+
"bbox": [
|
| 173 |
+
584,
|
| 174 |
+
332,
|
| 175 |
+
717,
|
| 176 |
+
435
|
| 177 |
+
],
|
| 178 |
+
"page_idx": 1
|
| 179 |
+
},
|
| 180 |
+
{
|
| 181 |
+
"type": "image",
|
| 182 |
+
"img_path": "images/b7297b826bf1a234fb36706aab1561090ee669064c65a6d7311566323c181d9a.jpg",
|
| 183 |
+
"image_caption": [
|
| 184 |
+
"(e) $1024\\times 1024$"
|
| 185 |
+
],
|
| 186 |
+
"image_footnote": [],
|
| 187 |
+
"bbox": [
|
| 188 |
+
732,
|
| 189 |
+
332,
|
| 190 |
+
864,
|
| 191 |
+
435
|
| 192 |
+
],
|
| 193 |
+
"page_idx": 1
|
| 194 |
+
},
|
| 195 |
+
{
|
| 196 |
+
"type": "text",
|
| 197 |
+
"text": "2 Strategies to adjust noise scheduling",
|
| 198 |
+
"text_level": 1,
|
| 199 |
+
"bbox": [
|
| 200 |
+
112,
|
| 201 |
+
555,
|
| 202 |
+
578,
|
| 203 |
+
575
|
| 204 |
+
],
|
| 205 |
+
"page_idx": 1
|
| 206 |
+
},
|
| 207 |
+
{
|
| 208 |
+
"type": "text",
|
| 209 |
+
"text": "Built on top of existing work related to noise scheduling [7, 13, 12, 1, 10], we systematically study two different noise scheduling strategies for diffusion models.",
|
| 210 |
+
"bbox": [
|
| 211 |
+
109,
|
| 212 |
+
585,
|
| 213 |
+
883,
|
| 214 |
+
617
|
| 215 |
+
],
|
| 216 |
+
"page_idx": 1
|
| 217 |
+
},
|
| 218 |
+
{
|
| 219 |
+
"type": "text",
|
| 220 |
+
"text": "2.1 Strategy 1: changing noise schedule functions",
|
| 221 |
+
"text_level": 1,
|
| 222 |
+
"bbox": [
|
| 223 |
+
112,
|
| 224 |
+
635,
|
| 225 |
+
606,
|
| 226 |
+
652
|
| 227 |
+
],
|
| 228 |
+
"page_idx": 1
|
| 229 |
+
},
|
| 230 |
+
{
|
| 231 |
+
"type": "text",
|
| 232 |
+
"text": "The first strategy is to parameterized noise schedule with a one-dimensional function [13, 10]. Here we present ones based on part of cosine or sigmoid functions, with temperature scaling. Note that the original Cosine schedule is proposed in [13], with a fixed part of cosine curve that cannot be adjusted, and the sigmoid schedule is proposed in [10]. Other than these two types of functions, we further propose a simple linear noise schedule function, which is just $\\gamma (t) = 1 - t$ (note that this is not the linear schedule proposed in [7]). Algorithm 1 presents the code for these instantiations of the continuous time noise schedule function $\\gamma (t)$ .",
|
| 233 |
+
"bbox": [
|
| 234 |
+
109,
|
| 235 |
+
659,
|
| 236 |
+
883,
|
| 237 |
+
750
|
| 238 |
+
],
|
| 239 |
+
"page_idx": 1
|
| 240 |
+
},
|
| 241 |
+
{
|
| 242 |
+
"type": "text",
|
| 243 |
+
"text": "Figure 3 visualizes the noise schedule functions under different choice of hyper-parameters, and their corresponding logSNR (signal-to-noise ratio). We can see that both cosine and sigmoid functions can parameterize a rich set of noise distributions. Please note that here we choose the hyper-parameters so that the noise distribution is skewed towards noisier levels, which we find to be more helpful.",
|
| 244 |
+
"bbox": [
|
| 245 |
+
109,
|
| 246 |
+
751,
|
| 247 |
+
883,
|
| 248 |
+
810
|
| 249 |
+
],
|
| 250 |
+
"page_idx": 1
|
| 251 |
+
},
|
| 252 |
+
{
|
| 253 |
+
"type": "page_number",
|
| 254 |
+
"text": "2",
|
| 255 |
+
"bbox": [
|
| 256 |
+
493,
|
| 257 |
+
950,
|
| 258 |
+
503,
|
| 259 |
+
962
|
| 260 |
+
],
|
| 261 |
+
"page_idx": 1
|
| 262 |
+
},
|
| 263 |
+
{
|
| 264 |
+
"type": "code",
|
| 265 |
+
"sub_type": "code",
|
| 266 |
+
"code_caption": [
|
| 267 |
+
"Algorithm 1 Continuous time noise scheduling function $\\gamma (t)$"
|
| 268 |
+
],
|
| 269 |
+
"code_body": "def simple_linear_schedule(t, clip_min=1e-9): # A gamma function that simply is 1-t. return np.clip(1 - t, clip_min, 1.) \ndef sigmoid_schedule(t, start=-3, end=3, tau=1.0, clip_min=1e-9): # A gamma function based on sigmoid function. v_start = sigmoid(start / tau) v_end = sigmoid(end / tau) output = sigmoid((t * (end - start) + start) / tau) output = (v_end - output) / (v_end - v_start) return np.clip(output, clip_min, 1.) \ndef cosine_schedule(t, start=0, end=1, tau=1, clip_min=1e-9): # A gamma function based on cosine function. v_start = math.cos(start * math.pi / 2) ** (2 * tau) v_end = math.cos(end * math.pi / 2) ** (2 * tau) output = math.cos((t * (end - start) + start) * math.pi / 2) ** (2 * tau) output = (v_end - output) / (v_end - v_start) return np.clip(output, clip_min, 1.)",
|
| 270 |
+
"guess_lang": "python",
|
| 271 |
+
"bbox": [
|
| 272 |
+
187,
|
| 273 |
+
131,
|
| 274 |
+
740,
|
| 275 |
+
339
|
| 276 |
+
],
|
| 277 |
+
"page_idx": 2
|
| 278 |
+
},
|
| 279 |
+
{
|
| 280 |
+
"type": "image",
|
| 281 |
+
"img_path": "images/9456f47596732b9e89ab3b61bf1f663ba72205a77ba8674f6fccc18d6c491cfb.jpg",
|
| 282 |
+
"image_caption": [
|
| 283 |
+
"(a) Cosine"
|
| 284 |
+
],
|
| 285 |
+
"image_footnote": [],
|
| 286 |
+
"bbox": [
|
| 287 |
+
127,
|
| 288 |
+
363,
|
| 289 |
+
308,
|
| 290 |
+
503
|
| 291 |
+
],
|
| 292 |
+
"page_idx": 2
|
| 293 |
+
},
|
| 294 |
+
{
|
| 295 |
+
"type": "image",
|
| 296 |
+
"img_path": "images/4ed03aa789b0c086f49d06e558ac88eb3df6aa89f8690098fb21570665b5c0c2.jpg",
|
| 297 |
+
"image_caption": [
|
| 298 |
+
"(b) Cosine (logSNR)"
|
| 299 |
+
],
|
| 300 |
+
"image_footnote": [],
|
| 301 |
+
"bbox": [
|
| 302 |
+
318,
|
| 303 |
+
366,
|
| 304 |
+
501,
|
| 305 |
+
503
|
| 306 |
+
],
|
| 307 |
+
"page_idx": 2
|
| 308 |
+
},
|
| 309 |
+
{
|
| 310 |
+
"type": "image",
|
| 311 |
+
"img_path": "images/f8725e0850af699901f42bacc798c8dff2abb4b2c4e1ffbe77b6e95c5b55c42b.jpg",
|
| 312 |
+
"image_caption": [
|
| 313 |
+
"(c) Sigmoid",
|
| 314 |
+
"Figure 3: Instantiations of noise schedule function $\\gamma(t)$ and the corresponding logSNR. Adjusting hyperparameters of cosine and sigmoid functions leads to different noise schedules."
|
| 315 |
+
],
|
| 316 |
+
"image_footnote": [],
|
| 317 |
+
"bbox": [
|
| 318 |
+
516,
|
| 319 |
+
363,
|
| 320 |
+
696,
|
| 321 |
+
503
|
| 322 |
+
],
|
| 323 |
+
"page_idx": 2
|
| 324 |
+
},
|
| 325 |
+
{
|
| 326 |
+
"type": "image",
|
| 327 |
+
"img_path": "images/ff8103718fd6245689a50ce1e4a267b9e7e07086323ba6a2922a1eb52420520d.jpg",
|
| 328 |
+
"image_caption": [
|
| 329 |
+
"(d) Sigmoid (logSNR)"
|
| 330 |
+
],
|
| 331 |
+
"image_footnote": [],
|
| 332 |
+
"bbox": [
|
| 333 |
+
707,
|
| 334 |
+
367,
|
| 335 |
+
890,
|
| 336 |
+
503
|
| 337 |
+
],
|
| 338 |
+
"page_idx": 2
|
| 339 |
+
},
|
| 340 |
+
{
|
| 341 |
+
"type": "text",
|
| 342 |
+
"text": "2.2 Strategy 2: adjusting input scaling factor",
|
| 343 |
+
"text_level": 1,
|
| 344 |
+
"bbox": [
|
| 345 |
+
112,
|
| 346 |
+
594,
|
| 347 |
+
566,
|
| 348 |
+
613
|
| 349 |
+
],
|
| 350 |
+
"page_idx": 2
|
| 351 |
+
},
|
| 352 |
+
{
|
| 353 |
+
"type": "text",
|
| 354 |
+
"text": "Another way to indirectly adjust noise scheduling, proposed in [1], is to scale the input $\\mathbf{x}_0$ by a constant factor $b$ , which results in the following noisig processing.",
|
| 355 |
+
"bbox": [
|
| 356 |
+
111,
|
| 357 |
+
619,
|
| 358 |
+
883,
|
| 359 |
+
651
|
| 360 |
+
],
|
| 361 |
+
"page_idx": 2
|
| 362 |
+
},
|
| 363 |
+
{
|
| 364 |
+
"type": "equation",
|
| 365 |
+
"text": "\n$$\n\\pmb {x} _ {t} = \\sqrt {\\gamma (t)} b \\pmb {x} _ {0} + \\sqrt {1 - \\gamma (t)} \\pmb {\\epsilon}\n$$\n",
|
| 366 |
+
"text_format": "latex",
|
| 367 |
+
"bbox": [
|
| 368 |
+
390,
|
| 369 |
+
660,
|
| 370 |
+
606,
|
| 371 |
+
679
|
| 372 |
+
],
|
| 373 |
+
"page_idx": 2
|
| 374 |
+
},
|
| 375 |
+
{
|
| 376 |
+
"type": "text",
|
| 377 |
+
"text": "As we reduce the scaling factor $b$ , it increases the noise levels, as demonstrated in Figure 4.",
|
| 378 |
+
"bbox": [
|
| 379 |
+
117,
|
| 380 |
+
690,
|
| 381 |
+
772,
|
| 382 |
+
705
|
| 383 |
+
],
|
| 384 |
+
"page_idx": 2
|
| 385 |
+
},
|
| 386 |
+
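
As a quick check of why a smaller $b$ destroys more information at the same $\gamma$ (Figure 4): with unit-variance $x_0$ and $\epsilon$, the fraction of the variance of $x_t$ contributed by the signal is $\gamma b^2 / (\gamma b^2 + 1 - \gamma)$. A small numeric sketch at the $\gamma = 0.7$ used in the figure; the values are illustrative only:

gamma = 0.7
for b in [1.0, 0.7, 0.5, 0.3, 0.1]:
    signal_var = gamma * b**2          # variance contributed by the scaled signal
    noise_var = 1.0 - gamma            # variance contributed by the noise
    frac = signal_var / (signal_var + noise_var)
    print(b, round(frac, 3))           # 0.7, 0.533, 0.368, 0.174, 0.023
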
{
|
| 387 |
+
"type": "image",
|
| 388 |
+
"img_path": "images/da674244f16b1378b2377ecb038cfaf273750a75bec9bc3bddffae1d48a1ffb1.jpg",
|
| 389 |
+
"image_caption": [
|
| 390 |
+
"(a) $b = 1$"
|
| 391 |
+
],
|
| 392 |
+
"image_footnote": [],
|
| 393 |
+
"bbox": [
|
| 394 |
+
138,
|
| 395 |
+
720,
|
| 396 |
+
272,
|
| 397 |
+
823
|
| 398 |
+
],
|
| 399 |
+
"page_idx": 2
|
| 400 |
+
},
|
| 401 |
+
{
|
| 402 |
+
"type": "image",
|
| 403 |
+
"img_path": "images/c0b618fe321a711e655b5b3bd0f8735512f9eb90cc8dca9689534a9b99e3693d.jpg",
|
| 404 |
+
"image_caption": [
|
| 405 |
+
"(b) $b = 0.7$"
|
| 406 |
+
],
|
| 407 |
+
"image_footnote": [],
|
| 408 |
+
"bbox": [
|
| 409 |
+
285,
|
| 410 |
+
720,
|
| 411 |
+
419,
|
| 412 |
+
823
|
| 413 |
+
],
|
| 414 |
+
"page_idx": 2
|
| 415 |
+
},
|
| 416 |
+
{
|
| 417 |
+
"type": "image",
|
| 418 |
+
"img_path": "images/424ebff217505ef3458d42fca6ede8373d39732b1a1a005dc0ac685ec57bf00e.jpg",
|
| 419 |
+
"image_caption": [
|
| 420 |
+
"(c) $b = 0.5$"
|
| 421 |
+
],
|
| 422 |
+
"image_footnote": [],
|
| 423 |
+
"bbox": [
|
| 424 |
+
434,
|
| 425 |
+
720,
|
| 426 |
+
568,
|
| 427 |
+
823
|
| 428 |
+
],
|
| 429 |
+
"page_idx": 2
|
| 430 |
+
},
|
| 431 |
+
{
|
| 432 |
+
"type": "image",
|
| 433 |
+
"img_path": "images/c57100c3d20f79fd36e13f007380c2fea394232b821899d4a2bdd84cde4a3609.jpg",
|
| 434 |
+
"image_caption": [
|
| 435 |
+
"(d) $b = 0.3$",
|
| 436 |
+
"Figure 4: Noised images $(\\pmb{x}_t = \\sqrt{\\gamma} b\\pmb{x}_0 + \\sqrt{1 - \\gamma}\\epsilon)$ with the same noise level ( $\\gamma = 0.7$ ), but $\\pmb{x}_0$ is scaled by $b$ . Using a smaller scaling factor, more information is destroyed with the same noise level. The noised image also becomes darker as the variance decreases."
|
| 437 |
+
],
|
| 438 |
+
"image_footnote": [],
|
| 439 |
+
"bbox": [
|
| 440 |
+
583,
|
| 441 |
+
720,
|
| 442 |
+
715,
|
| 443 |
+
823
|
| 444 |
+
],
|
| 445 |
+
"page_idx": 2
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"type": "image",
|
| 449 |
+
"img_path": "images/218e2540ede3beb75e87798c525fc37cd9eab3f1715bcbe368cd6521b0acd757.jpg",
|
| 450 |
+
"image_caption": [
|
| 451 |
+
"(e) $b = 0.1$"
|
| 452 |
+
],
|
| 453 |
+
"image_footnote": [],
|
| 454 |
+
"bbox": [
|
| 455 |
+
730,
|
| 456 |
+
720,
|
| 457 |
+
864,
|
| 458 |
+
823
|
| 459 |
+
],
|
| 460 |
+
"page_idx": 2
|
| 461 |
+
},
|
| 462 |
+
{
|
| 463 |
+
"type": "page_number",
|
| 464 |
+
"text": "3",
|
| 465 |
+
"bbox": [
|
| 466 |
+
493,
|
| 467 |
+
950,
|
| 468 |
+
503,
|
| 469 |
+
962
|
| 470 |
+
],
|
| 471 |
+
"page_idx": 2
|
| 472 |
+
},
|
| 473 |
+
{
|
| 474 |
+
"type": "text",
|
| 475 |
+
"text": "When $b \\neq 1$ , the variance of $\\pmb{x}_t$ can change even $\\pmb{x}_0$ has the same mean and variance as $\\pmb{\\epsilon}$ , which could lead to decreased performance [11]. In this case, to ensure the variance keeps fixed, one can scale $\\pmb{x}_t$ by a factor of $\\frac{1}{(b^2 - 1)\\gamma(t) + 1}$ . However, in practice, we find that it works well by simply normalizing the $\\pmb{x}_t$ by its variance to make sure it has unit variance before feeding it to the denoising network $f(\\cdot)$ . This variance normalization operation can also be seen as the first layer of the denoising network.",
|
| 476 |
+
"bbox": [
|
| 477 |
+
109,
|
| 478 |
+
90,
|
| 479 |
+
887,
|
| 480 |
+
167
|
| 481 |
+
],
|
| 482 |
+
"page_idx": 3
|
| 483 |
+
},
|
| 484 |
+
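
A minimal numeric sketch of the normalization discussed above: with unit-variance $x_0$ and $\epsilon$, the variance of $x_t = \sqrt{\gamma}\,b\,x_0 + \sqrt{1-\gamma}\,\epsilon$ is $(b^2-1)\gamma + 1$, so dividing by its square root (or, as done in the algorithms below, by the empirical standard deviation of $x_t$) restores unit variance. The constants here are illustrative only:

import numpy as np

b, gamma = 0.5, 0.7
x0 = np.random.normal(size=100_000)    # stand-in for unit-variance data
eps = np.random.normal(size=100_000)

x_t = np.sqrt(gamma) * b * x0 + np.sqrt(1.0 - gamma) * eps
analytic_std = np.sqrt((b**2 - 1.0) * gamma + 1.0)
print(x_t.std(), analytic_std)         # both ~0.69

x_t = x_t / x_t.std()                  # empirical normalization, as in Algorithms 2 and 3
print(x_t.std())                       # ~1.0
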
{
|
| 485 |
+
"type": "text",
|
| 486 |
+
"text": "While this input scaling strategy is similar to changing the noise scheduling function $\\gamma(t)$ above, it achieves slightly different effect in the logSNR when compared to cosine and sigmoid schedules, particularly when $t$ is closer to 0, as shown in Figure 5. In fact, the input scaling shifts the logSNR along y-axis while keeping its shape unchanged, which is different from all the noise schedule functions considered above. Although, one may also equivalently parameterize $\\gamma(t)$ function in other ways to avoid scaling the inputs, as nicely demonstrated by the concurrent work [9].",
|
| 487 |
+
"bbox": [
|
| 488 |
+
109,
|
| 489 |
+
169,
|
| 490 |
+
887,
|
| 491 |
+
260
|
| 492 |
+
],
|
| 493 |
+
"page_idx": 3
|
| 494 |
+
},
|
| 495 |
+
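
The y-axis shift described above can be made precise: with input scaling, $\mathrm{SNR}(t) = \gamma(t)\,b^2 / (1-\gamma(t))$, so the logSNR curve is shifted by the constant $2\log b$ while its shape in $t$ is unchanged. A small numeric check with illustrative values:

import numpy as np

def log_snr(gamma, b=1.0):
    gamma = np.clip(gamma, 1e-9, 1.0 - 1e-9)
    return np.log(b**2 * gamma / (1.0 - gamma))

t = np.linspace(0.05, 0.95, 5)
gamma = 1.0 - t                                   # simple linear schedule
shift = log_snr(gamma, b=0.5) - log_snr(gamma, b=1.0)
print(shift)                                      # constant array, each entry = 2*log(0.5) ~ -1.386
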
{
|
| 496 |
+
"type": "image",
|
| 497 |
+
"img_path": "images/73d5d123b2c2b0ae672784f0a46a49658b32a9495361e72baaa8938a25866c49.jpg",
|
| 498 |
+
"image_caption": [
|
| 499 |
+
"Figure 5: Comparison of input scaling (on simple linear schedule) and other cosine-based or sigmoid based noise schedule functions. We can see the input scaling only shifts the logSNR along y-axis without changing its shape, while cosine and sigmoid functions put most emphasis on where $t$ is closer to 1, having much less influence when $t$ is smaller."
|
| 500 |
+
],
|
| 501 |
+
"image_footnote": [],
|
| 502 |
+
"bbox": [
|
| 503 |
+
338,
|
| 504 |
+
276,
|
| 505 |
+
653,
|
| 506 |
+
500
|
| 507 |
+
],
|
| 508 |
+
"page_idx": 3
|
| 509 |
+
},
|
| 510 |
+
{
|
| 511 |
+
"type": "text",
|
| 512 |
+
"text": "2.3 Putting it together: a simple compound noise scheduling strategy",
|
| 513 |
+
"text_level": 1,
|
| 514 |
+
"bbox": [
|
| 515 |
+
109,
|
| 516 |
+
607,
|
| 517 |
+
802,
|
| 518 |
+
625
|
| 519 |
+
],
|
| 520 |
+
"page_idx": 3
|
| 521 |
+
},
|
| 522 |
+
{
|
| 523 |
+
"type": "text",
|
| 524 |
+
"text": "Here we propose to combine these two strategies by having a single noise schedule function, such as $\\gamma(t) = 1 - t$ , and scale the input by a factor of $b$ . The training and inference strategies are given in the following.",
|
| 525 |
+
"bbox": [
|
| 526 |
+
109,
|
| 527 |
+
632,
|
| 528 |
+
887,
|
| 529 |
+
662
|
| 530 |
+
],
|
| 531 |
+
"page_idx": 3
|
| 532 |
+
},
|
| 533 |
+
{
|
| 534 |
+
"type": "text",
|
| 535 |
+
"text": "Training strategy Algorithm 2 shows how to incorporate the combined noising scheduling strategy into the training of diffusion models, with main changes highlighted in blue.",
|
| 536 |
+
"bbox": [
|
| 537 |
+
109,
|
| 538 |
+
679,
|
| 539 |
+
883,
|
| 540 |
+
710
|
| 541 |
+
],
|
| 542 |
+
"page_idx": 3
|
| 543 |
+
},
|
| 544 |
+
{
|
| 545 |
+
"type": "code",
|
| 546 |
+
"sub_type": "code",
|
| 547 |
+
"code_caption": [
|
| 548 |
+
"Algorithm 2 Training a diffusion model with the combined noise scheduling strategy."
|
| 549 |
+
],
|
| 550 |
+
"code_body": "def train_loss(x, gamma= lambda t: 1-t, scale=1, normalize=True):\n '''Returns the diffusion loss on a training example x.''' \nbsz, h, w, c = x.shape\n# Add noise to data.\nt = np.random.uniform(0, 1, size=[bsz, 1, 1, 1])\neps = np.random.normal(0, 1, size=[bsz, h, w, c])\nx_t = np.sqrt(gamma(t)) * scale * x + sqrt(1 - gamma(t)) * eps\n# Denoise and compute loss.\nx_t = x_t / x_t.std(axis=(1,2,3), keepdims=True) if normalize else x_t\neps_pred = neural_net(x_t, t)\nloss = (eps_pred - eps)**2\nreturn loss.mean()",
|
| 551 |
+
"guess_lang": "python",
|
| 552 |
+
"bbox": [
|
| 553 |
+
114,
|
| 554 |
+
746,
|
| 555 |
+
640,
|
| 556 |
+
901
|
| 557 |
+
],
|
| 558 |
+
"page_idx": 3
|
| 559 |
+
},
|
| 560 |
+
{
|
| 561 |
+
"type": "page_number",
|
| 562 |
+
"text": "4",
|
| 563 |
+
"bbox": [
|
| 564 |
+
493,
|
| 565 |
+
950,
|
| 566 |
+
504,
|
| 567 |
+
962
|
| 568 |
+
],
|
| 569 |
+
"page_idx": 3
|
| 570 |
+
},
|
| 571 |
+
{
|
| 572 |
+
"type": "text",
|
| 573 |
+
"text": "Inference/sampling strategy If the variance normalization is used during the training, it should also be used during the sampling (i.e., the normalization can be seen as the first layer of the denoising network). Note that since we use a continuous time steps $t \\in [0,1]$ , so the inference schedule does not need to be the same as training schedule. During the inference we use a uniform discretization of the time between 0 and 1 into a given number of steps, and then we can choose a desired $\\gamma(t)$ function to determine the level of noises at inference time. In practice, we find that standard cosine schedule works well for sampling.",
|
| 574 |
+
"bbox": [
|
| 575 |
+
109,
|
| 576 |
+
90,
|
| 577 |
+
887,
|
| 578 |
+
183
|
| 579 |
+
],
|
| 580 |
+
"page_idx": 4
|
| 581 |
+
},
|
| 582 |
+
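
A minimal sketch of the uniform time discretization described above, paired with a cosine schedule for the inference-time noise levels. The step count and the inline cosine schedule (the standard $s{=}0$, $e{=}1$, $\tau{=}1$ form from Algorithm 1) are illustrative choices, not the paper's exact sampling configuration:

import math

def cosine_gamma(t):
    # Standard cosine schedule (s=0, e=1, tau=1), clipped like the schedules above.
    return min(max(math.cos(t * math.pi / 2) ** 2, 1e-9), 1.0)

steps = 10   # illustrative; the paper evaluates with 1000 DDPM steps
for step in range(steps):
    t_now = 1 - step / steps
    t_next = max(1 - (step + 1) / steps, 0)
    print(step, round(cosine_gamma(t_now), 4), round(cosine_gamma(t_next), 4))
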
{
|
| 583 |
+
"type": "code",
|
| 584 |
+
"sub_type": "code",
|
| 585 |
+
"code_caption": [
|
| 586 |
+
"Algorithm 3 Diffusion sampling algorithm."
|
| 587 |
+
],
|
| 588 |
+
"code_body": "def generate(steps, gamma= lambda t: 1-t, scale=1, normalize=True):\n x_t = normal(mean=0, std=1)\n for step in range(steps):\n # Get time for current and next states.\n t_now = 1 - step / steps\n t_next = max(1 - (step+1) / steps, 0)\n # Predict eps & jump to x at t_next.\n x_t = x_t / x_t.std(axis=(1,2,3), keepdims=True) if normalize else x_t\n eps_pred = neural_net(x_t, t_now)\n x_t = ddim_or_ddpm_step(x_t, eps_pred, t_now, t_next)\n return x_t",
|
| 589 |
+
"guess_lang": "python",
|
| 590 |
+
"bbox": [
|
| 591 |
+
114,
|
| 592 |
+
215,
|
| 593 |
+
653,
|
| 594 |
+
349
|
| 595 |
+
],
|
| 596 |
+
"page_idx": 4
|
| 597 |
+
},
|
| 598 |
+
{
|
| 599 |
+
"type": "text",
|
| 600 |
+
"text": "3 Experiments",
|
| 601 |
+
"text_level": 1,
|
| 602 |
+
"bbox": [
|
| 603 |
+
112,
|
| 604 |
+
377,
|
| 605 |
+
303,
|
| 606 |
+
397
|
| 607 |
+
],
|
| 608 |
+
"page_idx": 4
|
| 609 |
+
},
|
| 610 |
+
{
|
| 611 |
+
"type": "text",
|
| 612 |
+
"text": "3.1 Setup",
|
| 613 |
+
"text_level": 1,
|
| 614 |
+
"bbox": [
|
| 615 |
+
112,
|
| 616 |
+
409,
|
| 617 |
+
225,
|
| 618 |
+
428
|
| 619 |
+
],
|
| 620 |
+
"page_idx": 4
|
| 621 |
+
},
|
| 622 |
+
{
|
| 623 |
+
"type": "text",
|
| 624 |
+
"text": "We mainly conduct experiments on class-conditional ImageNet [15] image generation, and we follow common practice of evaluation, using FID [5] and Inception Score [16] as metrics computed on 50K samples, generated by 1000 steps of DDPM.",
|
| 625 |
+
"bbox": [
|
| 626 |
+
109,
|
| 627 |
+
434,
|
| 628 |
+
883,
|
| 629 |
+
478
|
| 630 |
+
],
|
| 631 |
+
"page_idx": 4
|
| 632 |
+
},
|
| 633 |
+
{
|
| 634 |
+
"type": "text",
|
| 635 |
+
"text": "We follow [10] for model specification but use smaller models as well as shorter overall training steps (except for $>256$ resolutions) to conserve compute. This results in worse performance in general but due to the improvement of noise scheduling, we can still achieve similar performance at lower resolutions ( $64 \\times 64$ and $128 \\times 128$ ), but significantly better results at higher resolutions ( $256 \\times 256$ or higher).",
|
| 636 |
+
"bbox": [
|
| 637 |
+
109,
|
| 638 |
+
479,
|
| 639 |
+
883,
|
| 640 |
+
539
|
| 641 |
+
],
|
| 642 |
+
"page_idx": 4
|
| 643 |
+
},
|
| 644 |
+
{
|
| 645 |
+
"type": "text",
|
| 646 |
+
"text": "For hyper-parameters, we use LAMB [21] optimizer with $\\beta_{1} = 0.9$ , $\\beta_{2} = 0.999$ and weight decay of 0.01, self-conditioning rate of 0.9, and EMA decay of 0.9999. Table 1 and 2 summarize major hyper-parameters.",
|
| 647 |
+
"bbox": [
|
| 648 |
+
109,
|
| 649 |
+
539,
|
| 650 |
+
883,
|
| 651 |
+
571
|
| 652 |
+
],
|
| 653 |
+
"page_idx": 4
|
| 654 |
+
},
|
| 655 |
+
{
|
| 656 |
+
"type": "table",
|
| 657 |
+
"img_path": "images/ba3ff8f2f430f31fbeba22f60cda80513ecbd6e29f139da63ffe4fdd0eb6f979.jpg",
|
| 658 |
+
"table_caption": [
|
| 659 |
+
"Table 1: Model Hyper-parameters."
|
| 660 |
+
],
|
| 661 |
+
"table_footnote": [],
|
| 662 |
+
"table_body": "<table><tr><td>Image Size</td><td>Patch Size</td><td>Tokens</td><td>Latents</td><td>Layers</td><td>Heads</td><td>Params</td><td>Input Scale</td><td>γ(t)</td></tr><tr><td>64×64×3</td><td>8×8</td><td>64×512</td><td>128×768</td><td>6,6,6,6</td><td>16</td><td>214M</td><td>1.0</td><td>1-t</td></tr><tr><td>128×128×3</td><td>8×8</td><td>256×512</td><td>128×768</td><td>6,6,6,6</td><td>16</td><td>215M</td><td>0.6</td><td>1-t</td></tr><tr><td>256×256×3</td><td>8×8</td><td>1024×512</td><td>256×768</td><td>6,6,6,6,6,6</td><td>16</td><td>319M</td><td>0.5</td><td>1-t</td></tr><tr><td>512×512×3</td><td>8×8</td><td>4096×512</td><td>256×768</td><td>6,6,6,6,6,6</td><td>16</td><td>320M</td><td>0.2</td><td>cosine@0.2,1,1 1</td></tr><tr><td>768×768×3</td><td>8×8</td><td>9216×512</td><td>256×768</td><td>8,8,8,8,8,8</td><td>16</td><td>408M</td><td>0.1</td><td>1-t</td></tr><tr><td>1024×1024×3</td><td>8×8</td><td>16384×512</td><td>256×768</td><td>8,8,8,8,8,8</td><td>16</td><td>412M</td><td>0.1</td><td>1-t</td></tr></table>",
|
| 663 |
+
"bbox": [
|
| 664 |
+
114,
|
| 665 |
+
602,
|
| 666 |
+
906,
|
| 667 |
+
718
|
| 668 |
+
],
|
| 669 |
+
"page_idx": 4
|
| 670 |
+
},
|
| 671 |
+
{
|
| 672 |
+
"type": "table",
|
| 673 |
+
"img_path": "images/d3ec074c09d4473182c7c18874daf9067e41bc61362844aadbee0c396113d520.jpg",
|
| 674 |
+
"table_caption": [
|
| 675 |
+
"Table 2: Training Hyper-parameters."
|
| 676 |
+
],
|
| 677 |
+
"table_footnote": [],
|
| 678 |
+
"table_body": "<table><tr><td>Image Size</td><td>Train Steps</td><td>Batch Size</td><td>LR</td><td>LR Decay</td><td>Label Dropout</td></tr><tr><td>64×64×3</td><td>150K</td><td>1024</td><td>2e-3</td><td>Cosine (first 70%)</td><td>0.0</td></tr><tr><td>128×128×3</td><td>250K</td><td>1024</td><td>2e-3</td><td>Cosine (first 70%)</td><td>0.0</td></tr><tr><td>256×256×3</td><td>250K</td><td>1024</td><td>2e-3</td><td>Cosine (first 70%)</td><td>0.0</td></tr><tr><td>512×512×3</td><td>1M</td><td>1024</td><td>1e-3</td><td>Constant</td><td>0.0</td></tr><tr><td>768×768×3</td><td>1M</td><td>1024</td><td>1e-3</td><td>Constant</td><td>0.1</td></tr><tr><td>1024×1024×3</td><td>910K</td><td>1024</td><td>1e-3</td><td>Constant</td><td>0.1</td></tr></table>",
|
| 679 |
+
"bbox": [
|
| 680 |
+
200,
|
| 681 |
+
762,
|
| 682 |
+
797,
|
| 683 |
+
878
|
| 684 |
+
],
|
| 685 |
+
"page_idx": 4
|
| 686 |
+
},
|
| 687 |
+
{
|
| 688 |
+
"type": "page_footnote",
|
| 689 |
+
"text": "Here $\\gamma(t) = 1 - t$ should work as well but it is not compared in our limited experiments.",
|
| 690 |
+
"bbox": [
|
| 691 |
+
130,
|
| 692 |
+
897,
|
| 693 |
+
678,
|
| 694 |
+
912
|
| 695 |
+
],
|
| 696 |
+
"page_idx": 4
|
| 697 |
+
},
|
| 698 |
+
{
|
| 699 |
+
"type": "page_number",
|
| 700 |
+
"text": "5",
|
| 701 |
+
"bbox": [
|
| 702 |
+
493,
|
| 703 |
+
950,
|
| 704 |
+
503,
|
| 705 |
+
962
|
| 706 |
+
],
|
| 707 |
+
"page_idx": 4
|
| 708 |
+
},
|
| 709 |
+
{
|
| 710 |
+
"type": "text",
|
| 711 |
+
"text": "3.2 The effect of strategy 1 (noise schedule functions)",
|
| 712 |
+
"text_level": 1,
|
| 713 |
+
"bbox": [
|
| 714 |
+
109,
|
| 715 |
+
89,
|
| 716 |
+
648,
|
| 717 |
+
108
|
| 718 |
+
],
|
| 719 |
+
"page_idx": 5
|
| 720 |
+
},
|
| 721 |
+
{
|
| 722 |
+
"type": "text",
|
| 723 |
+
"text": "We first keep the input scaling fixed to 1, and evaluate the effect of noise schedules based on cosine, sigmoid and linear functions. As shown in Table 3, different image resolutions require different noise schedule functions to obtain the best performance, and it is difficult to find the optimal schedule due to several hyper-parameters involved.",
|
| 724 |
+
"bbox": [
|
| 725 |
+
109,
|
| 726 |
+
114,
|
| 727 |
+
883,
|
| 728 |
+
176
|
| 729 |
+
],
|
| 730 |
+
"page_idx": 5
|
| 731 |
+
},
|
| 732 |
+
{
|
| 733 |
+
"type": "table",
|
| 734 |
+
"img_path": "images/994d59a35f0fa680fa4d0e539e2c357b01adb33a079bd4e70b7e4f66783ee9e9.jpg",
|
| 735 |
+
"table_caption": [
|
| 736 |
+
"Table 3: FIDs for different noise schedule functions (see Figure 3 for visualization) while keeping the input scaling fixed to 1. For FID, the lower the better. For different image resolutions, optimal schedule function is quite different, making it difficult to find/tune."
|
| 737 |
+
],
|
| 738 |
+
"table_footnote": [],
|
| 739 |
+
"table_body": "<table><tr><td>Noise schedule function γ(t)</td><td>64×64</td><td>128×128</td><td>256×256</td></tr><tr><td>1-t</td><td>2.04</td><td>4.51</td><td>7.21</td></tr><tr><td>cosine (s=0,e=1,τ=1; i.e., cosine)</td><td>2.71</td><td>7.28</td><td>21.6</td></tr><tr><td>cosine (s=0.2,e=1,τ=1)</td><td>2.15</td><td>4.9</td><td>12.3</td></tr><tr><td>cosine (s=0.2,e=1,τ=2)</td><td>2.84</td><td>5.64</td><td>5.61</td></tr><tr><td>cosine (s=0.2,e=1,τ=3)</td><td>3.3</td><td>4.64</td><td>6.24</td></tr><tr><td>sigmoid (s=-3,e=3,τ=0.9)</td><td>2.09</td><td>5.83</td><td>7.19</td></tr><tr><td>sigmoid (s=-3,e=3,τ=1.1)</td><td>2.03</td><td>4.89</td><td>7.23</td></tr><tr><td>sigmoid (s=0,e=3,τ=0.3)</td><td>4.93</td><td>6.07</td><td>5.74</td></tr><tr><td>sigmoid (s=0,e=3,τ=0.5)</td><td>3.12</td><td>5.71</td><td>4.28</td></tr><tr><td>sigmoid (s=0,e=3,τ=0.7)</td><td>3.34</td><td>3.91</td><td>5.49</td></tr><tr><td>sigmoid (s=0,e=3,τ=0.9)</td><td>2.29</td><td>4.42</td><td>5.48</td></tr><tr><td>sigmoid (s=0,e=3,τ=1.1)</td><td>2.36</td><td>4.39</td><td>7.15</td></tr></table>",
|
| 740 |
+
"bbox": [
|
| 741 |
+
271,
|
| 742 |
+
243,
|
| 743 |
+
727,
|
| 744 |
+
452
|
| 745 |
+
],
|
| 746 |
+
"page_idx": 5
|
| 747 |
+
},
|
| 748 |
+
{
|
| 749 |
+
"type": "text",
|
| 750 |
+
"text": "3.3 The effect of strategy 2 (input scaling)",
|
| 751 |
+
"text_level": 1,
|
| 752 |
+
"bbox": [
|
| 753 |
+
109,
|
| 754 |
+
483,
|
| 755 |
+
540,
|
| 756 |
+
502
|
| 757 |
+
],
|
| 758 |
+
"page_idx": 5
|
| 759 |
+
},
|
| 760 |
+
{
|
| 761 |
+
"type": "table",
|
| 762 |
+
"img_path": "images/fd7be79776bd4c0c4953efaba38deae20275287b4c3c9e320085dde80b99f661.jpg",
|
| 763 |
+
"table_caption": [
|
| 764 |
+
"Table 4: FIDs for different input scaling factors while keeping the noise schedule function fixed to either cosine $(s = 0.2, e = 1, \\tau = 1)$ or $1 - t$ . For FID, the lower the better."
|
| 765 |
+
],
|
| 766 |
+
"table_footnote": [],
|
| 767 |
+
"table_body": "<table><tr><td rowspan=\"2\">Input scale factor</td><td colspan=\"2\">64×64</td><td colspan=\"2\">128×128</td><td colspan=\"2\">256×256</td></tr><tr><td>cosine@0.2,1,1</td><td>1 - t</td><td>cosine@0.2,1,1</td><td>1 - t</td><td>cosine@0.2,1,1</td><td>1 - t</td></tr><tr><td>0.3</td><td>5.1</td><td>6.77</td><td>5.63</td><td>5.25</td><td>3.7</td><td>3.58</td></tr><tr><td>0.4</td><td>4</td><td>3.79</td><td>4.65</td><td>6.89</td><td>4.01</td><td>3.52</td></tr><tr><td>0.5</td><td>3.76</td><td>3.79</td><td>4.14</td><td>3.9</td><td>5.12</td><td>5.07</td></tr><tr><td>0.6</td><td>3.42</td><td>2.8</td><td>3.97</td><td>3.5</td><td>5.54</td><td>5.54</td></tr><tr><td>0.7</td><td>2.4</td><td>2.49</td><td>4.78</td><td>5.34</td><td>7.93</td><td>5.72</td></tr><tr><td>0.8</td><td>2.36</td><td>2.43</td><td>6.28</td><td>5.35</td><td>4.52</td><td>7.52</td></tr><tr><td>0.9</td><td>2.31</td><td>2.23</td><td>4.89</td><td>3.86</td><td>5.51</td><td>6.69</td></tr><tr><td>1</td><td>2.15</td><td>2.04</td><td>4.9</td><td>4.51</td><td>12.3</td><td>7.21</td></tr></table>",
|
| 768 |
+
"bbox": [
|
| 769 |
+
181,
|
| 770 |
+
556,
|
| 771 |
+
816,
|
| 772 |
+
719
|
| 773 |
+
],
|
| 774 |
+
"page_idx": 5
|
| 775 |
+
},
|
| 776 |
+
{
|
| 777 |
+
"type": "text",
|
| 778 |
+
"text": "Here we keep the noise schedule functions fixed, and adjust the input scaling factor. The results are shown in Table 4. We find that 1) as image resolution increases, the optimal input scaling factor becomes smaller, 2) compared to the best result from Table 3 where we only change the noise schedule function while keeping input scaling fixed, adjusting input scaling is better (drop FID from 4.28 to 3.52 for $256 \\times 256$ ), and it is also easier to find as we can just tune a single scaling factor. Finally, $1 - t$ seems to be a slightly better noise schedule than cosine ( $s = 0.2, e = 1, \\tau = 1$ ).",
|
| 779 |
+
"bbox": [
|
| 780 |
+
109,
|
| 781 |
+
731,
|
| 782 |
+
883,
|
| 783 |
+
823
|
| 784 |
+
],
|
| 785 |
+
"page_idx": 5
|
| 786 |
+
},
|
| 787 |
+
{
|
| 788 |
+
"type": "text",
|
| 789 |
+
"text": "3.4 The simple compound strategy, combined with RIN [10], enables state-of-the-art single-stage high-resolution image generation based on pixels",
|
| 790 |
+
"text_level": 1,
|
| 791 |
+
"bbox": [
|
| 792 |
+
109,
|
| 793 |
+
839,
|
| 794 |
+
883,
|
| 795 |
+
875
|
| 796 |
+
],
|
| 797 |
+
"page_idx": 5
|
| 798 |
+
},
|
| 799 |
+
{
|
| 800 |
+
"type": "text",
|
| 801 |
+
"text": "Table 5 demonstrates that the simple compound noise scheduling strategy, combined with RIN [10], enables state-of-the-art generation of high resolution images based on pure pixels. We forgo latent diffusion models [14]",
|
| 802 |
+
"bbox": [
|
| 803 |
+
109,
|
| 804 |
+
881,
|
| 805 |
+
883,
|
| 806 |
+
914
|
| 807 |
+
],
|
| 808 |
+
"page_idx": 5
|
| 809 |
+
},
|
| 810 |
+
{
|
| 811 |
+
"type": "page_number",
|
| 812 |
+
"text": "6",
|
| 813 |
+
"bbox": [
|
| 814 |
+
493,
|
| 815 |
+
950,
|
| 816 |
+
504,
|
| 817 |
+
962
|
| 818 |
+
],
|
| 819 |
+
"page_idx": 5
|
| 820 |
+
},
|
| 821 |
+
{
|
| 822 |
+
"type": "text",
|
| 823 |
+
"text": "where \"pixels\" are replaced with learned latent codes, since our scheduling technique is only tested on pixel-based diffusion models, but note these are orthogonal techniques and can potentially be combined. We note that state-of-the-art GANs [17] can achieve similar or better performance but with multi-stage generation, as well as classifier-guidance [3], which we do not use for quantitative evaluation.",
|
| 824 |
+
"bbox": [
|
| 825 |
+
109,
|
| 826 |
+
90,
|
| 827 |
+
887,
|
| 828 |
+
154
|
| 829 |
+
],
|
| 830 |
+
"page_idx": 6
|
| 831 |
+
},
|
| 832 |
+
{
|
| 833 |
+
"type": "table",
|
| 834 |
+
"img_path": "images/bc0ba3c6e312f51eafa31af4f67e6f392245284ff1b3414422be0c78f691a433.jpg",
|
| 835 |
+
"table_caption": [
|
| 836 |
+
"Table 5: Comparison of state-of-the-art class-conditional pixel-based image generation models on ImageNet. For FID, the lower the better; for IS, the higher the better. Our results (based on RIN) reported use neither cascades/up-sampling nor guidance."
|
| 837 |
+
],
|
| 838 |
+
"table_footnote": [],
|
| 839 |
+
"table_body": "<table><tr><td>Resolution</td><td>Method</td><td>FID</td><td>IS</td><td>Params (M)</td></tr><tr><td rowspan=\"5\">64×64</td><td>ADM [3]</td><td>-</td><td>2.07</td><td>297</td></tr><tr><td>CF-guidance [6]</td><td>1.55</td><td>66.0</td><td>-</td></tr><tr><td>CDM [8]</td><td>1.48</td><td>66.0</td><td>-</td></tr><tr><td>RIN [10] (patch size of 4, 300K updates)</td><td>1.23</td><td>66.5</td><td>281</td></tr><tr><td>RIN+our strategy (patch size of 8, 150K updates)</td><td>2.04</td><td>55.8</td><td>214</td></tr><tr><td rowspan=\"6\">128×128</td><td>ADM [3]</td><td>5.91</td><td>-</td><td>386</td></tr><tr><td>ADM+guidance [3]</td><td>2.97</td><td>-</td><td>>386</td></tr><tr><td>CF-guidance[6]</td><td>2.43</td><td>156.0</td><td>-</td></tr><tr><td>CDM[8]</td><td>3.51</td><td>128.0</td><td>1058</td></tr><tr><td>RIN [10] (patch size of 4, 700K updates)</td><td>2.75</td><td>144.1</td><td>410</td></tr><tr><td>RIN+our strategy (patch size of 8, 250K updates)</td><td>3.50</td><td>120.4</td><td>215</td></tr><tr><td rowspan=\"5\">256×256</td><td>ADM [3]</td><td>10.94</td><td>100.9</td><td>553</td></tr><tr><td>ADM+guidance [3]</td><td>4.59</td><td>-</td><td>>553</td></tr><tr><td>CDM [8]</td><td>4.88</td><td>158.7</td><td>1953</td></tr><tr><td>RIN [10] (patch size of 8, 700K updates)</td><td>4.51</td><td>161.0</td><td>410</td></tr><tr><td>RIN+our strategy (patch size of 8, 250K updates)</td><td>3.52</td><td>186.2</td><td>319</td></tr><tr><td rowspan=\"3\">512×512</td><td>ADM [3]</td><td>23.2</td><td>58.1</td><td>559</td></tr><tr><td>ADM+guidance [3]</td><td>7.72</td><td>172.7</td><td>>559</td></tr><tr><td>RIN+our strategy (patch size of 8, 1M updates)</td><td>3.95</td><td>216</td><td>320</td></tr><tr><td>768×768</td><td>RIN+our strategy (patch size of 8, 1M updates)</td><td>5.60</td><td>196.2</td><td>408</td></tr><tr><td>1024×1024</td><td>RIN+our strategy (patch size of 8, 910K updates)</td><td>8.72</td><td>163.9</td><td>412</td></tr></table>",
|
| 840 |
+
"bbox": [
|
| 841 |
+
169,
|
| 842 |
+
215,
|
| 843 |
+
831,
|
| 844 |
+
570
|
| 845 |
+
],
|
| 846 |
+
"page_idx": 6
|
| 847 |
+
},
|
| 848 |
+
{
|
| 849 |
+
"type": "text",
|
| 850 |
+
"text": "3.5 Visualization of generated samples",
|
| 851 |
+
"text_level": 1,
|
| 852 |
+
"bbox": [
|
| 853 |
+
112,
|
| 854 |
+
602,
|
| 855 |
+
503,
|
| 856 |
+
619
|
| 857 |
+
],
|
| 858 |
+
"page_idx": 6
|
| 859 |
+
},
|
| 860 |
+
{
|
| 861 |
+
"type": "text",
|
| 862 |
+
"text": "Even though we do not use label dropout for images at resolution of $512 \\times 512$ , we still find the classifier-free guidance [6] during sampling improves the fidelity of generated samples. Therefore, we generate all the visualization samples with a guidance weight of 3. Figure 6, 7 and 8 show image samples generated from our trained model. Note that these are random samples, without cherry picking, generated conditioned on the given classes. Overall, we do see the global structure is well preserved across various resolutions, though object parts at smaller scale may be imperfect. We believe it can be improved with scaling the model and/or dataset (e.g., with more detailed text descriptions instead of just the class labels), and also the hyper-parameters tuning (as we do not thoroughly tune them for high resolutions).",
|
| 863 |
+
"bbox": [
|
| 864 |
+
109,
|
| 865 |
+
627,
|
| 866 |
+
883,
|
| 867 |
+
750
|
| 868 |
+
],
|
| 869 |
+
"page_idx": 6
|
| 870 |
+
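
For the guidance weight mentioned above, a minimal sketch of the classifier-free guidance combination from [6] applied to the noise prediction at each sampling step. Whether the "guidance weight of 3" maps to $w=3$ in exactly this parameterization is an assumption here, and the conditional/unconditional predictions are placeholders rather than real network outputs:

import numpy as np

def cf_guided_eps(eps_cond, eps_uncond, w=3.0):
    # Classifier-free guidance [6]: eps = (1 + w) * eps_cond - w * eps_uncond.
    return (1.0 + w) * eps_cond - w * eps_uncond

# Placeholder predictions with the right shape only; a real sampler would call the
# denoising network with and without the class label.
eps_cond = np.zeros((1, 512, 512, 3))
eps_uncond = np.zeros((1, 512, 512, 3))
eps = cf_guided_eps(eps_cond, eps_uncond, w=3.0)
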
},
|
| 871 |
+
{
|
| 872 |
+
"type": "text",
|
| 873 |
+
"text": "4 Conclusion",
|
| 874 |
+
"text_level": 1,
|
| 875 |
+
"bbox": [
|
| 876 |
+
112,
|
| 877 |
+
770,
|
| 878 |
+
284,
|
| 879 |
+
789
|
| 880 |
+
],
|
| 881 |
+
"page_idx": 6
|
| 882 |
+
},
|
| 883 |
+
{
|
| 884 |
+
"type": "text",
|
| 885 |
+
"text": "In this work, we empirically study noise scheduling strategies for diffusion models and show their importance. The noise scheduling not only plays an important role in image generation but also for other tasks such as panoptic segmentation [1]. A simple strategy of adjusting input scaling factor [1] works well across different image resolutions. When combined with recently proposed RIN architecture [10], our noise scheduling strategy enables single-stage generation of high resolution images. For practitioners, our work suggests that it is important to select a proper noise scheduling scheme when training diffusion models for a new task or a new dataset.",
|
| 886 |
+
"bbox": [
|
| 887 |
+
109,
|
| 888 |
+
801,
|
| 889 |
+
887,
|
| 890 |
+
907
|
| 891 |
+
],
|
| 892 |
+
"page_idx": 6
|
| 893 |
+
},
|
| 894 |
+
{
|
| 895 |
+
"type": "page_number",
|
| 896 |
+
"text": "7",
|
| 897 |
+
"bbox": [
|
| 898 |
+
493,
|
| 899 |
+
950,
|
| 900 |
+
504,
|
| 901 |
+
962
|
| 902 |
+
],
|
| 903 |
+
"page_idx": 6
|
| 904 |
+
},
|
| 905 |
+
{
|
| 906 |
+
"type": "image",
|
| 907 |
+
"img_path": "images/5b03d751d8583c690165efe33786871c148794081e82f5986d91f81847526473.jpg",
|
| 908 |
+
"image_caption": [
|
| 909 |
+
"Figure 6: Random samples at $512 \\times 512$ resolution generated by our single-stage end-to-end model (trained on class-conditional ImageNet images). The classes are strawberry (949), orange (950), macaw (88), tiger (292), panda (388), tree frog (31), go-kart (573), goldfish (1), pekinese (154), otter (360), teddy bear (850), arctic wolf (270), coral reef (973), box tortoise (37), space shuttle (812), loggerhead sea turtle (33), tow truck (864), tractor (866), trailer truck (867), Pembroke Welsh corgi (263), espresso maker (550), school bus (779), coffee mug (504), dog sled (537), flamingo (130)."
|
| 910 |
+
],
|
| 911 |
+
"image_footnote": [],
|
| 912 |
+
"bbox": [
|
| 913 |
+
114,
|
| 914 |
+
142,
|
| 915 |
+
883,
|
| 916 |
+
738
|
| 917 |
+
],
|
| 918 |
+
"page_idx": 7
|
| 919 |
+
},
|
| 920 |
+
{
|
| 921 |
+
"type": "page_number",
|
| 922 |
+
"text": "8",
|
| 923 |
+
"bbox": [
|
| 924 |
+
493,
|
| 925 |
+
950,
|
| 926 |
+
504,
|
| 927 |
+
962
|
| 928 |
+
],
|
| 929 |
+
"page_idx": 7
|
| 930 |
+
},
|
| 931 |
+
{
|
| 932 |
+
"type": "image",
|
| 933 |
+
"img_path": "images/352add23587461ef5fe21a996c0d363369fe0e4cf64513b6e4d616fe964abfbb.jpg",
|
| 934 |
+
"image_caption": [
|
| 935 |
+
"Figure 7: Random samples at $768 \\times 768$ resolution generated by our single-stage end-to-end model (trained on class-conditional ImageNet images). The classes are strawberry (949), orange (950), macaw (88), tiger (292), panda (388), cheeseburger (933), husky (250), sulphur-crested cockatoo (89), volcano (980), lion (291), golden retriever (207), lake shore (975), red panda (387), ice cream (928), lorikeet (90), arctic fox (279), bullet train (466), dungeness crab (118), balloon (417), cliff drop-of (972)."
|
| 936 |
+
],
|
| 937 |
+
"image_footnote": [],
|
| 938 |
+
"bbox": [
|
| 939 |
+
114,
|
| 940 |
+
87,
|
| 941 |
+
883,
|
| 942 |
+
829
|
| 943 |
+
],
|
| 944 |
+
"page_idx": 8
|
| 945 |
+
},
|
| 946 |
+
{
|
| 947 |
+
"type": "page_number",
|
| 948 |
+
"text": "9",
|
| 949 |
+
"bbox": [
|
| 950 |
+
491,
|
| 951 |
+
950,
|
| 952 |
+
504,
|
| 953 |
+
962
|
| 954 |
+
],
|
| 955 |
+
"page_idx": 8
|
| 956 |
+
},
|
| 957 |
+
{
|
| 958 |
+
"type": "image",
|
| 959 |
+
"img_path": "images/d6a2b0e8c7630f8e2579ca05594ca80b3fbd101b3d1b43a63202ab04f79a0a10.jpg",
|
| 960 |
+
"image_caption": [
|
| 961 |
+
"Figure 8: Random samples at $1024 \\times 1024$ resolution generated by our single-stage end-to-end model (trained on class-conditional ImageNet images). The classes are strawberry (949), orange (950), macaw (88), tiger (292), panda (388), cheeseburger (933), tree frog (31), space shuttle (812), loggerhead sea turtle (33), tow truck (864), tractor (866), trailer truck (867), lion (291), golden retriever (207), espresso maker (550), school bus (779), ice cream (928), lorikeet (90), bullet train (466), balloon (417)."
|
| 962 |
+
],
|
| 963 |
+
"image_footnote": [],
|
| 964 |
+
"bbox": [
|
| 965 |
+
114,
|
| 966 |
+
87,
|
| 967 |
+
883,
|
| 968 |
+
829
|
| 969 |
+
],
|
| 970 |
+
"page_idx": 9
|
| 971 |
+
},
|
| 972 |
+
{
|
| 973 |
+
"type": "page_number",
|
| 974 |
+
"text": "10",
|
| 975 |
+
"bbox": [
|
| 976 |
+
490,
|
| 977 |
+
950,
|
| 978 |
+
508,
|
| 979 |
+
962
|
| 980 |
+
],
|
| 981 |
+
"page_idx": 9
|
| 982 |
+
},
|
| 983 |
+
{
|
| 984 |
+
"type": "text",
|
| 985 |
+
"text": "Acknowledgements",
|
| 986 |
+
"text_level": 1,
|
| 987 |
+
"bbox": [
|
| 988 |
+
114,
|
| 989 |
+
87,
|
| 990 |
+
338,
|
| 991 |
+
108
|
| 992 |
+
],
|
| 993 |
+
"page_idx": 10
|
| 994 |
+
},
|
| 995 |
+
{
|
| 996 |
+
"type": "text",
|
| 997 |
+
"text": "We thank David Fleet and Allan Jabri for helpful discussions.",
|
| 998 |
+
"bbox": [
|
| 999 |
+
112,
|
| 1000 |
+
118,
|
| 1001 |
+
558,
|
| 1002 |
+
133
|
| 1003 |
+
],
|
| 1004 |
+
"page_idx": 10
|
| 1005 |
+
},
|
| 1006 |
+
{
|
| 1007 |
+
"type": "text",
|
| 1008 |
+
"text": "References",
|
| 1009 |
+
"text_level": 1,
|
| 1010 |
+
"bbox": [
|
| 1011 |
+
114,
|
| 1012 |
+
154,
|
| 1013 |
+
243,
|
| 1014 |
+
172
|
| 1015 |
+
],
|
| 1016 |
+
"page_idx": 10
|
| 1017 |
+
},
|
| 1018 |
+
{
|
| 1019 |
+
"type": "list",
|
| 1020 |
+
"sub_type": "ref_text",
|
| 1021 |
+
"list_items": [
|
| 1022 |
+
"[1] Ting Chen, Lala Li, Saurabh Saxena, Geoffrey Hinton, and David J Fleet. A generalist framework for panoptic segmentation of images and videos. arXiv preprint arXiv:2210.06366, 2022.",
|
| 1023 |
+
"[2] Ting Chen, Ruixiang Zhang, and Geoffrey Hinton. Analog bits: Generating discrete data using diffusion models with self-conditioning. arXiv preprint arXiv:2208.04202, 2022.",
|
| 1024 |
+
"[3] Prafulla Dhariwal and Alex Nichol. Diffusion models beat GANs on image synthesis. In NeurIPS, 2022.",
|
| 1025 |
+
"[4] Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Miguel Angel Bautista, and Josh Susskind. f-dm: A multi-stage diffusion model via progressive signal transformation. arXiv preprint arXiv:2210.04955, 2022.",
|
| 1026 |
+
"[5] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.",
|
| 1027 |
+
"[6] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021.",
|
| 1028 |
+
"[7] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. NeurIPS, 2020.",
|
| 1029 |
+
"[8] Jonathan Ho, Chitwan Sahara, William Chan, David J Fleet, Mohammad Norouzi, and Tim Salimans. Cascaded diffusion models for high fidelity image generation. JMLR, 2022.",
|
| 1030 |
+
"[9] Emiel Hoogeboom, Jonathan Heek, and Tim Salimans. simple diffusion: End-to-end diffusion for high resolution images. arXiv preprint arXiv:2301.11093, 2023.",
|
| 1031 |
+
"[10] Allan Jabri, David Fleet, and Ting Chen. Scalable adaptive computation for iterative generation. arXiv preprint arXiv:2212.11972, 2022.",
|
| 1032 |
+
"[11] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. arXiv preprint arXiv:2206.00364, 2022.",
|
| 1033 |
+
"[12] Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. Advances in neural information processing systems, 34:21696-21707, 2021.",
|
| 1034 |
+
"[13] Alex Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. arXiv preprint arXiv:2102.09672, 2021.",
|
| 1035 |
+
"[14] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695, 2022.",
|
| 1036 |
+
"[15] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252, 2015.",
|
| 1037 |
+
"[16] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. Advances in neural information processing systems, 29, 2016.",
|
| 1038 |
+
"[17] Axel Sauer, Katja Schwarz, and Andreas Geiger. Stylegan-xl: Scaling stylegan to large diverse datasets. In ACM SIGGRAPH 2022 conference proceedings, pages 1-10, 2022.",
|
| 1039 |
+
"[18] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256-2265. PMLR, 2015.",
|
| 1040 |
+
"[19] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020."
|
| 1041 |
+
],
|
| 1042 |
+
"bbox": [
|
| 1043 |
+
112,
|
| 1044 |
+
183,
|
| 1045 |
+
883,
|
| 1046 |
+
912
|
| 1047 |
+
],
|
| 1048 |
+
"page_idx": 10
|
| 1049 |
+
},
|
| 1050 |
+
{
|
| 1051 |
+
"type": "page_number",
|
| 1052 |
+
"text": "11",
|
| 1053 |
+
"bbox": [
|
| 1054 |
+
488,
|
| 1055 |
+
950,
|
| 1056 |
+
506,
|
| 1057 |
+
962
|
| 1058 |
+
],
|
| 1059 |
+
"page_idx": 10
|
| 1060 |
+
},
|
| 1061 |
+
{
|
| 1062 |
+
"type": "list",
|
| 1063 |
+
"sub_type": "ref_text",
|
| 1064 |
+
"list_items": [
|
| 1065 |
+
"[20] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021.",
|
| 1066 |
+
"[21] Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training bert in 76 minutes. arXiv preprint arXiv:1904.00962, 2019."
|
| 1067 |
+
],
|
| 1068 |
+
"bbox": [
|
| 1069 |
+
114,
|
| 1070 |
+
90,
|
| 1071 |
+
883,
|
| 1072 |
+
186
|
| 1073 |
+
],
|
| 1074 |
+
"page_idx": 11
|
| 1075 |
+
},
|
| 1076 |
+
{
|
| 1077 |
+
"type": "page_number",
|
| 1078 |
+
"text": "12",
|
| 1079 |
+
"bbox": [
|
| 1080 |
+
490,
|
| 1081 |
+
950,
|
| 1082 |
+
508,
|
| 1083 |
+
962
|
| 1084 |
+
],
|
| 1085 |
+
"page_idx": 11
|
| 1086 |
+
}
|
| 1087 |
+
]
|